| text | meta | __index_level_0__ |
|---|---|---|
The Sultan Ahmed Mosque, also known as the Careva (Emperor's), Hungarija and Old Mosque (Turkish: Atik Mosque), was built in the Old Town (Kastel), near the right bank of the Trebišnjica river. Next to the mosque stand small mezar graves and a fountain. It was erected in 1719 thanks to Osman Pasha Resulbegović and is dedicated to Sultan Ahmed.
Protection
The Sultan Ahmed Mosque has been entered on the provisional list of national monuments of Bosnia and Herzegovina. It is registered under building plot, street no. 310-1, cadastral sheet 2053, and comprises a forecourt and, next to it, a small cemetery with an area of 188 m².
History
The mosque was built by order of Ahmed Pasha Resulbegović in 1132 AH, that is, in 1719, in honour of Sultan Ahmed III (1703–1730), as attested by the year of construction carved on the šerefe (balcony) of the minaret. By building this mosque the Ottomans wanted to make up for the demolition of the mosque in Police and the lack of a mosque in which the Muslims of Trebinje could perform their religious rites. In 1847 Hadži Ibrahim Kurtović built a fountain in front of the mosque and brought water to it from the Trebišnjica.
During the restoration of the mosque in the 21st century, the threshold of the original mosque was found; it is now kept under glass inside the mosque.
Legend
In the small cemetery in front of the mihrab wall there is a grave enclosed by stone slabs and marked with two uninscribed nišans (tombstones). It is said that a muezzin is buried there, who fell from the minaret while reciting the ezan and died on the spot. Another legend says that a šehid (martyr) is buried in this grave, the only one to escape the massacre at the mosque in Police; he swam across the Trebišnjica, and when he reached the place where his grave lies today he fell dead, so he was buried on that spot. Two large old linden trees stand in this cemetery, probably planted when the mosque was built.
Appearance
The mosque possessed no particular architectural value. It belongs to the type of single-space mosque with a four-pitched, tent-like roof, open covered sofas (porches) and a stone minaret 12 metres high. The building is square in plan. Next to the mosque there is a small graveyard with three nišans.
The mosque retained its original form until the 1992 war in Bosnia and Herzegovina, when it was demolished, together with nine other mosques destroyed in the municipality of Trebinje.
Reconstruction
The reconstruction was carried out in the first decades of the 21st century according to the last known form and dimensions of the mosque, using the same material and the same building technique, and the roof was again covered with stone slabs. After its destruction and rebuilding, the mosque was officially reopened for religious services on 3 August 2014.
Notes
See also
National monuments of Bosnia and Herzegovina
Mosques in Trebinje
References
Literature
Hasandedić Hivzija, Muslimanska baština u Istočnoj Hercegovini, El-Kalem, Sarajevo, 1990.
External links
Mosques in Republika Srpska
City of Trebinje | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 795 |
Victoria de Stefano (21 June 1940 – 6 January 2023) was an Italo-Venezuelan novelist, essayist, philosopher and educator.
Early life and education
Victoria de Stefano was born in Rimini, Italy in 1940, and moved to Venezuela with her family in 1946. She recounts this experience in Su vida, a collection of autobiographical texts published in 2019.
De Stefano studied at the Instituto Politécnico Educacional. She graduated with a degree in Philosophy from Universidad Central de Venezuela (UCV) in 1962.
Exile
De Stefano, her husband Pedro Duno and their two sons went into exile at the end of 1962. They lived in Havana, Cuba; Algeria; Switzerland; Paris, France; and Sitges, Spain.
Return to Venezuela
De Stefano and her family returned to Caracas in 1966. There she worked as a researcher at the Institute of Philosophy at the Universidad Central de Venezuela, and taught Aesthetics, Contemporary Philosophy, and Art Theory at the School of Philosophy and School of Art of the Universidad Central de Venezuela.
Personal life and death
De Stefano was married to the philosopher Pedro Duno, with whom she had two sons: Rodrigo Duno and Martín Duno. De Stefano and Duno later separated.
De Stefano died in Caracas on 6 January 2023, at the age of 82.
Publications
De Stefano's works include:
El desolvido (1971)
Sartre y el marxismo (1975)
La noche llama la noche (1985)
Poesía y Modernidad, Baudelaire (1984)
El lugar del escritor (1990)
Cabo de vida (1993)
Historias de la marcha a pie (1997)
Lluvia (Barcelona: Candaya, 2002)
Paleografías (2010)
Historias de la marcha a pie (Reed. 2013)
Su vida (El Taller Blanco Ediciones, Bogotá, 2019)
Venimos, vamos (Planeta, 2019)
Prizes
De Stefano won the following prizes:
Premio Municipal de Ensayo (1984)
Finalist in the Premio Internacional de Novela Rómulo Gallegos (1999)
Premio Municipal de Novela (2006)
References
1940 births
2023 deaths
Venezuelan women writers
Venezuelan novelists
Venezuelan essayists
Italian emigrants to Venezuela
People from Rimini | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 958 |
Q: Is this random bit generator broken? Consider a following problem:
Suppose, a random bit generator of type $p$ was brought to a random bit generator repair station. Before starting repairing it, the workers from the repair station were completely unsure, whether it is really broken, decided to check that it is really broken by launching it several times. They know, that random bit generators of type $p$ output $1$ with probability $p$ and $0$ otherwise, independently every time they are launched. However, when such random bit generators are broken, they always output $1$. After launching the generator $t$ times and receiving only $1$ the workers concluded that it is most likely broken (with probability of it functioning correctly being less than $\epsilon$). What is the least possible value of $t$?
Suppose the prior probability of the generator being broken is $q$. Then it was launched and returned $1$. The conditional probability of that outcome was $1$ if the generator was indeed broken, and the total probability of this output was $q + (1-q)p = p + (1-p)q$. Then the posterior probability of it being broken is $\frac{q}{p + (1-p)q} = \frac{1}{1-p}\left(1 - \frac{p}{p + (1-p)q}\right)$.
Now, as initially the workers were completely unsure, with the prior probability being uniform, the posterior probability of it being broken $q_n$ after $n$ trials will satisfy the recurrence relation:
$$q_0 = \frac{1}{2}$$
$$q_{n+1} = \frac{1}{1-p}(1 - \frac{p}{p + (1-p)q_n})$$
Now all that remains is to find the minimal $n$ for which $q_n > 1 - \epsilon$.
Alternatively, the task can be further "prettified" by rewriting it in terms of $y_n = (1-p)q_n$:
Suppose the sequence $y_n$ is defined by a recurrence:
$$y_0 = \frac{1-p}{2}$$
$$y_{n+1} = 1 - \frac{p}{p + y_n}$$
Find $\min\{t|y_t \geq (1-p)(1-\epsilon)\}$.
All that remains now is to find the closed form of $y_n$ and derive $t$ from it. But, unfortunately, I do not know how.
A: Sketch for the solution: you can linearize the recursion noticing that
$$
q_{n+1}=\frac{q_n}{p+(1-p)q_n}=\frac1{\frac{p}{q_n}+1-p}\\
\therefore\quad \frac1{q_{n+1}}=\frac{p}{q_n}+1-p\implies r_{n+1}=r_np+(1-p)p
$$
for $r_n:=p/q_n$.
P.S.: sorry I'm too lazy to write a full answer :)
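Following the sketch above: with $r_n = p/q_n$, the linear recursion gives $r_n - p = p^n(r_0 - p)$, and $r_0 = p/q_0 = 2p$, so $r_n = p(1+p^n)$ and hence $q_n = \frac{1}{1+p^n}$. The condition $q_t \geq 1-\epsilon$ is then equivalent to $p^t \leq \frac{\epsilon}{1-\epsilon}$, i.e. $t_{\min} = \left\lceil \log_p \frac{\epsilon}{1-\epsilon} \right\rceil$. A quick numerical check of this closed form against the raw recurrence, for a few arbitrary test values:
import math

def t_closed_form(p, eps):
    # q_n = 1/(1 + p**n), so q_t >= 1 - eps  <=>  p**t <= eps/(1 - eps)
    return math.ceil(math.log(eps / (1 - eps)) / math.log(p))

def t_brute_force(p, eps):
    # iterate q_{n+1} = q_n / (p + (1 - p)*q_n) starting from q_0 = 1/2
    q, n = 0.5, 0
    while q < 1 - eps:
        q = q / (p + (1 - p) * q)
        n += 1
    return n

for p, eps in [(0.5, 0.01), (0.9, 0.001), (0.3, 1e-6)]:
    print(p, eps, t_closed_form(p, eps), t_brute_force(p, eps))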
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,044 |
It's Hannah from Homemade Banana again, and this time I'll be sharing a DIY project that is guaranteed to bring a jolt of color to your accessories closet (and inspire a smile each time you glance down at your feet). After seeing some crazy-amazing platform sneakers, like these ones on the Gucci runway last summer, I've been wanting an excuse to make my own version of them, and Pride month seemed like the perfect time to make my dream a reality!
I tweaked the design a bit to be more real-world friendly—without losing the whimsy—and adapted some simple white faux-leather sneakers with some stripes of colorful paint. This is the perfect project to tackle while you catch up on an episode or two of World of Dance, especially because the end result will certainly inspire a living room twirl or two!
1. First, remove the laces. This will keep them from brushing up against the wet paint as it dries.
2. Measure the sneaker platform, then divide that by four. In my case, each stripe measured one centimeter.
3. Using tape, mark off where the first stripe will be (start at the bottom of the sole and work your way towards the top of the shoe).
4. Paint the first stripe and let it dry fully. Measure, tape and paint the next stripe up and repeat until you've painted all four colors on one shoe.
5. Do the same with the second shoe, and after all the paint is dry, touch up any spotty areas with a second coat of paint.
6. Using some metallic paint for a bit of playful shine, paint the side stripes and heel counter. Let dry.
7. Lace up those bad boys and hit the town running.
Where do you plan on wearing these new rainbow sneaks?! | {
"redpajama_set_name": "RedPajamaC4"
} | 1,068 |
Bobby McFerrin (born 11 March 1950 in New York City, USA) is a jazz-inspired a cappella artist. He is particularly known for his vocal improvisation. In recent years he has also made a name for himself as a conductor. His song "Don't Worry, Be Happy" reached number one on the charts in the USA in 1988. Although Bobby McFerrin wrote the song, Bob Marley is often credited with it. McFerrin has sung with instrumentalists such as the pianists Chick Corea and Herbie Hancock and the cellist Yo-Yo Ma. McFerrin is also known for having a voice with a range of four octaves, and for using his voice to create sounds and effects – so-called beatboxing. He can, for example, imitate both drums and bass at the same time.
In addition to his career as a vocalist, McFerrin was engaged by the Saint Paul Chamber Orchestra in 1993, with responsibility for creativity.
Bobby McFerrin is regarded as a man who cannot put a foot wrong. Each of his records has been different from the previous one. He is a great fan of improvisation and, in this context, is seen as musically one of a kind.
Discography
Solo
Bobby McFerrin, 1982
The Voice, 1984
Spontaneous Inventions, 1985
Elephant's Child, 1987
Simple Pleasures, 1988 (the #1 hit single Don't Worry, Be Happy appeared on this album)
How the Rhino Got His Skin/How the Camel Got His Hump, 1990
Medicine Music, 1990
Many Faces of Bird, 1991
Sorrow Is Not Forever, 1994
Paper Music, 1995
Bang! Zoom, 1996
Circlesongs, 1997
Mouth Music, 2001
Beyond Words, 2002 – with Chick Corea, Cyro Baptista and Richard Bona
Collaborations
Bobby McFerrin & Jack Nicholson, The Just So Stories, 1987
Bobby McFerrin & Chick Corea, Play, 1990
Bobby McFerrin & Yo-Yo Ma, Hush, 1991
Bobby McFerrin & Chick Corea, The Mozart Sessions, 1996
Guest appearances
Pharaoh Sanders, Journey to the One, 1980
Grover Washington, The best is yet to come, 1982
Various artists, The Young Lions, 1983
Charles Lloyd Quartet, A Night in Copenhagen, 1984
Various artists, A Tribute to Thelonious Monk, 1984
Chico Freeman, Tangents, 1984
Michael Hedges, Watching My Life Go By, 1985
The Manhattan Transfer, Vocalese, 1985
Joe Zawinul, Dialects, 1986
Weather Report, Sportin' Life, 1986
Al Jarreau, Heart's Horizon, 1988
Quincy Jones, Back on the Block, 1989
Laurie Anderson, Strange Angels, 1989
Gal Costa, The Laziest Gal in Town, 1991
Jack DeJohnette, Extra Special Edition, 1994
The Yellow Jackets, Dreamland, 1995
George Martin, In My Life, 1998 – on Come Together with Robin Williams
Bela Fleck and the Flecktones, Little Worlds, 2003
Chick Corea, Rendez-Vous in New York, 2003
Wynton Marsalis, Magic Hour, 2004
Grammy Awards
1985, Best Jazz Vocal Performance, male, "Another Night In Tunisia" with Jon Hendricks
1985, Best Vocal Arrangement for two or more voices, "Another Night In Tunisia" with Cheryl Bentyne
1986, Best Jazz Vocal Performance, male, "Round Midnight"
1987, Best Jazz Vocal Performance, male, "What Is This Thing Called Love"
1987, Best Recording for Children, "The Elephants' Child" with Jack Nicholson
1988, Song of the year, Best Pop Vocal Performance, male, Record of the year, "Don't Worry, Be Happy"
1988, Best Jazz Vocal Performance, male, "Brothers"
1992, Best Jazz Vocal Performance, "Round Midnight"
External links
Official website
Jazz vocalists from the United States
African Americans
People from New York
A cappella | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,029 |
cove school tuition
Sign up for an Open House. Tuition is determined annually. *Tuition for the 2020/21 school year will be paid via a process called Smart Tuition. Payment Plans . Here's how you can apply to The Cove School. The Cove School believes the best way to ensure success in your child's education is to have a strong home school connection. The School is open to any eligible students in the county, including exceptional education students. In addition to tuition fees, there are school based charges for specific subjects, excursions, specialist sports and building works, repairs and maintenance. Attend an Open House or schedule a tour November-January We'd love to give you a tour of our space sometime between October and January; you can reserve a time for your tour by calling the admission office (206-923-COVE). The Cove School. Webb Institute, an undergraduate school, only has 98 students, and it covers tuition for all of them. A non-refundable initial payment of 5% of the tuition ($1,250) is due by January 31 with the re-enrollment agreement. 1. Each child needs their own application, but the application fee is per family. The 2012-2013 school year ranges from $3000-$7250. Schools in England can now book tuition for the children worst affected by months of closures during the lockdown through a subsidised scheme. Total enrollment: 104. What are your views on integrating home and school? sPineapple Cove Classical Academy is a tuition-free public school. An additional 15% of the tuition ($3,720) is due with each student's contract on April 1. We do not discriminate on the basis of race, religion, national or ethnic origin, or exceptionality in the admission of students, in accordance with federal and state anti-discrimination law. For Fall 2019-2020 School Year Application Deadline: February 1. A reduction applies for each additional child enrolled at the school. The school employed 34 teachers, yielding a student–teacher ratio of 15:1. Spring Cove Elementary School is located 137 Spring Cove Drive, Roaring Spring. 350 Lee Road Northbrook, IL 60062 Phone: 847-562-2100 Fax: 847-562-2112 Tuition fees are charged for all Sydney Catholic Schools' primary schools and secondary colleges, but vary according to the area in which the school is located. Summer camps are enrolled and billed separately. According to the National Center for Education Statistics, in 2010, the school reported an enrollment of 515 pupils in grades kindergarten through 5th, with 224 pupils receiving a federal free or reduced-price lunch due to family poverty. Webb Institute — Glen Cove, New York. Fill out the following application and submit $75.00 non-refundable application fee which can be paid through application form. Acceptance rate: 33%.
cove school tuition 2020 | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,833 |
"""
byceps.services.newsletter.models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:Copyright: 2006-2020 Jochen Kupperschmidt
:License: Modified BSD, see LICENSE for details.
"""
from dataclasses import dataclass
from datetime import datetime
from sqlalchemy.ext.hybrid import hybrid_property
from ...database import db
from ...typing import UserID
from ...util.instances import ReprBuilder
from .transfer.models import ListID
from .types import SubscriptionState
class List(db.Model):
"""A newsletter list users can subscribe to."""
__tablename__ = 'newsletter_lists'
id = db.Column(db.UnicodeText, primary_key=True)
title = db.Column(db.UnicodeText, nullable=False)
def __init__(self, list_id: ListID, title: str) -> None:
self.id = list_id
self.title = title
def __repr__(self) -> str:
return ReprBuilder(self) \
.add_with_lookup('id') \
.build()
@dataclass(frozen=True)
class Subscriber:
screen_name: str
email_address: str
class SubscriptionUpdate(db.Model):
"""A user's declaration on wanting/not wanting to receive
newsletters from this list.
"""
__tablename__ = 'newsletter_subscription_updates'
user_id = db.Column(db.Uuid, db.ForeignKey('users.id'), primary_key=True)
list_id = db.Column(db.UnicodeText, db.ForeignKey('newsletter_lists.id'), primary_key=True)
expressed_at = db.Column(db.DateTime, primary_key=True)
_state = db.Column('state', db.UnicodeText, nullable=False)
def __init__(
self,
user_id: UserID,
list_id: ListID,
expressed_at: datetime,
state: SubscriptionState,
) -> None:
self.user_id = user_id
self.list_id = list_id
self.expressed_at = expressed_at
self.state = state
@hybrid_property
def state(self) -> SubscriptionState:
return SubscriptionState[self._state]
@state.setter
def state(self, state: SubscriptionState) -> None:
assert state is not None
self._state = state.name
def __repr__(self) -> str:
return ReprBuilder(self) \
.add_with_lookup('user_id') \
.add_with_lookup('list_id') \
.add_with_lookup('expressed_at') \
.add('state', self.state.name) \
.build()
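# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the original module). It assumes an
# application context with a configured SQLAlchemy session; the state passed
# to SubscriptionUpdate must be a member of SubscriptionState (the member
# name `requested` below is hypothetical -- see .types for the actual values).
#
#   from datetime import datetime
#
#   newsletter_list = List(ListID('example-list'), 'Example newsletter')
#   db.session.add(newsletter_list)
#
#   update = SubscriptionUpdate(
#       some_user_id, newsletter_list.id, datetime.utcnow(),
#       SubscriptionState.requested)
#   db.session.add(update)
#   db.session.commit()
# ---------------------------------------------------------------------------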
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,898 |
Fasciolopsiasis is a parasitic disease that can affect both humans and animals:
Animal fasciolopsiasis
Human fasciolopsiasis – intestinal distomatosis | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 291 |
Nyctophilus nebulosus, known in French as the Nyctophile nébuleux, is a species of bat of the family Vespertilionidae, endemic to New Caledonia.
The species was discovered in 2002 by the Australian mammalogist Tim Flannery and described by Dr Harry Parnaby, a specialist in the long-eared bats (Nyctophilus) of Australasia. Two specimens were collected by Flannery at the edge of Mount Koghi, near Nouméa.
Since 2008, Nyctophilus nebulosus has been classified as Critically Endangered by the IUCN, because its habitat is confined to a single site whose area is less than in a highly fragmented landscape subject to the threats of urban encroachment and of wildfires spreading from human settlements.
Notes and references
External links
Vespertilionidae
Chiroptera (vernacular name) | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,947 |
\section{Introduction}
The canonical values of the solar photospheric elemental abundances have recently become far less canonical. Starting from the work of \citet{Anders_Grevesse}, a standard reference for many years, the absolute abundance values of the more abundant trace elements like carbon, nitrogen, oxygen, neon, or iron have been significantly reduced over the past decade, initially by \citet{Grevesse_Sauval}, and again more recently \citep[and references therein]{Asplund}.
These lighter elements considerably influence the physics of the solar interior since they provide a substantial contribution to its radiative opacity. A change in the elemental abundances usually changes the depth of the solar convection zone,
which can be inferred from the measured helioseismological oscillation frequencies. With the "old" abundances by \citeauthor{Grevesse_Sauval}, good agreement could be found between the standard solar model of the appropriate age and the observed oscillation spectrum, while the "new" but rather controversial abundances proposed by \citeauthor{Asplund} and collaborators turned out to be inconsistent with helioseismology \citep{Bahcall_helioseismology}.
In order to rescue the agreement between the standard solar model and helioseismology
the opacity reduction by the downward revision of the CNO abundances must
be sufficiently compensated by increased abundances of other elements. The only suitable element is neon, since its (photospheric) abundance is not well determined due to the absence of strong photospheric lines; rather, the solar neon abundance is obtained either from solar energetic particles or from coronal measurements in the X-ray or EUV bands. The solar neon abundance is usually quoted relative to oxygen, and the solar Ne/O abundance ratio has remained more or less constant during the course of the revisions of the solar photospheric abundances, with values ranging from
0.14 from the compilation of \citeauthor{Anders_Grevesse}, 0.18 from \citeauthor{Grevesse_Sauval}, and 0.15 for the new set from \citeauthor{Asplund}. An increase of the solar Ne/O abundance by a factor of 2.5--3.5 \citep{Antia_Basu,Bahcall} would provide sufficient opacity to reconcile the low oxygen abundance with helioseismology.
Evidence for an increased neon abundance has been proposed by \citet{Drake_Testa} in their survey of the coronal Ne/O abundance in a sample of nearby stars, finding an average value of $A_{Ne}/A_O = 0.41$.
On the other hand, a
re-analysis of \emph{SOHO} CDS spectra and re-investigation of archival solar coronal X-ray spectra confirm the long-established, "canonical" lower $A_{Ne}/A_O$ values \citep{Young_neon_to_oxygen,Schmelz_neon_to_oxygen}. Also, a closer look at the sample of stars used by \citeauthor{Drake_Testa} reveals that most of these stars are RS~CVn systems or well-known young and active stars, known to show the inverse FIP effect \citep{Brinkman_HR_1099}, i.\,e., an enhancement of elements with high first ionization potential. Since neon is (apart from helium) the element with the highest FIP and the occurrence of the inverse FIP effect is related to activity \citep[and references therein]{Guedel_review}, the sample may be biased to higher neon abundances and not
be representative of the "true" cosmic neon abundance.
In order to settle the issue of a possible bias, a comparison to exclusively low-activity solar-like stars is needed. Due to their low X-ray luminosity, high-resolution X-ray spectra with reasonable signal-to-noise ratio of such inactive stars can be obtained only for very few objects, like $\epsilon$~Eri, Procyon, or $\alpha$~Cen. Additionally, their low coronal temperatures complicate the measurement of the otherwise prominent \ion{Ne}{ix} and \ion{Ne}{x} He-like and H-like lines that have peak formation temperatures of $\log T = 6.6$ and $\log T = 6.75$ respectively; for example, the
\emph{Chandra} LETGS spectra of $\alpha$ Cen A and B presented by \citet{Raassen} do not show the
\ion{Ne}{x} Ly~$\alpha$ line. Our new \emph{XMM} RGS spectra of $\alpha$~Cen provide good sensitivity and signal-to-noise to detect and measure the relevant H- and He-like lines of O and Ne to allow an accurate determination of its neon-to-oxygen abundance.
\section{Observations and data analysis}
$\alpha$ Centauri is the target of an \emph{XMM-Newton}
monitoring campaign of its long-term X-ray behavior. Since March 2003 X-ray
observations have been performed regularly at intervals of approximately six months, lasting between 5 and 9~ks each. Results from the first five observations of the program focusing on variability and possible activity cycles of both components have been presented by \citet{Robrade_alpha_Cen}. Two additional datasets (ObsIDs 0143630201 and 0202611201) are now available, resulting in 52~ks of accumulated observing time. All seven datasets were reduced with the \emph{XMM-Newton} Science Analysis System (SAS) software, version 7.0, making use of standard selection and filtering criteria.
\citeauthor{Robrade_alpha_Cen} showed that the K0\,V star $\alpha$~Cen~B is X-ray brighter than the solar twin $\alpha$~Cen~A (spectral type G2\,V) by factors ranging from 3.6 to 75; this applies also to the two latest observations. A spatially resolved spectral analysis of the $\alpha$~Cen~system is not possible with \emph{XMM-Newton}; the measured total X-ray flux refers essentially to $\alpha$~Cen~B ($\approx90$--95\%). The following analysis is based solely on the RGS data.
We use the SAS task {\tt rgscombine} to merge the individual RGS exposures to co-added spectra with corresponding response matrices. The resulting combined spectra thus constitute a mixture of $\alpha$~Cen~A and B (with B by far dominating) and an average over the quiescent and flaring states of $\alpha$~Cen~B and the long-term variability of $\alpha$~Cen~A \citep{Robrade_alpha_Cen}. This is not critical for our purposes since we focus on the coronal abundances, which we do not expect to change during $\alpha$~Cen's different states of activity; also, \citet{Raassen} showed $\alpha$~Cen~A and B to have similar abundances, thus we do not anticipate effects from the changing contributions of the two components. In Fig.~\ref{Ne_O_spec} we plot the relevant portions of the RGS spectrum of $\alpha$~Cen that cover the He-like and H-like lines of neon and oxygen. While the \ion{O}{vii} and \ion{O}{viii} lines are recorded with a very good signal-to-noise ratio, the signal is much lower for \ion{Ne}{ix} and \ion{Ne}{x}, but the lines are still easily detectable and the \ion{Ne}{ix} resonance, intercombination and forbidden lines can be resolved.
Using the CORA program \citep{CORA} we measured individual line fluxes in the
RGS~1 and 2 spectra in 1st and 2nd order assuming Lorentzian line profiles. Error-weighted means were calculated for further analysis.
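Here the error-weighted mean of measurements $f_i \pm \sigma_i$ is the usual inverse-variance weighted average,
\begin{displaymath}
  \bar{f} = \frac{\sum_i f_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}, \qquad
  \sigma_{\bar{f}} = \Big(\sum_i 1/\sigma_i^2\Big)^{-1/2}.
\end{displaymath}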
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6220fig1.ps}}
\resizebox{\hsize}{!}{\includegraphics{6220fig2.ps}}
\caption[]{\label{Ne_O_spec} Segments of the merged RGS spectrum of $\alpha$~Cen, showing the spectral regions covering the neon (top) and oxygen (bottom) Ly~$\alpha$ lines and He-like triplets. The spectra were created with the SAS task {\tt rgsfluxer} from RGS~1 and 2 in 1st and 2nd order.}
\end{figure}
\section{Abundance determination}
\subsection{Differential emission measure modeling}
From our line flux measurements we proceed to determine the coronal Ne/O ratio of $\alpha$~Cen
using three different methods.
We reconstructed the differential emission measure ($DEM$) from abundance-independent ratios of the H-like Ly~$\alpha$ and the He-like resonance lines from N, O, and Ne, analogous ratios of H-like Ly~$\alpha$ and the lines originating from the $1s3p - 1s^2$ transition (``He-like Ly~$\beta$'') of C and O and the weakly temperature-dependent ratio of the \ion{Fe}{xvii} lines at 15.01~\AA\ and 16.78~\AA.
In addition we used continuum flux measurements at wavelengths around 20~\AA\ where the spectrum is essentially line-free for normalization. Our $DEM$ reconstruction method is similar to the one applied by \citet{Algol_EMD} and makes use of CHIANTI~5.2 line and continuum emissivities \citep{Chianti7}.
In a first approach, $\log DEM$ was modelled as a function of $\log T$ using polynomials of different orders without further constraints; the best-fit is obtained with a
3$^{rd}$ order polynomial ($\chi^2_{red} = 0.6$). However, the available line ratios cover only temperatures $\log T > 6.0$ and abundance-independent line ratios with suitable signal-to-noise for lower temperatures are not available in the RGS spectral range.
In a second approach, we model the linear $DEM$ again with polynomials as a function of $\log T$. Additionally the $DEM$ was forced to have two zeros defining the boundaries of the coronal $DEM$ distribution. Here, the best fit is obtained with a 4$^{th}$ order polynomial ($\chi^2_{red} = 2.2$). The resulting $DEM$ distributions are shown in
Fig.~\ref{alpha_Cen_DEM}; they agree at higher temperatures, but differ significantly for $\log T < 6.2$, indicating the uncertainties due to the poor coverage of lower temperatures. The formation temperatures of the neon and oxygen He-like and H-like lines are however well-determined and there the $DEM$ distributions look quite similar. In Table~\ref{results} we compare the observed line ratios with
the line ratios "predicted" by the two methods; the two methods agree quite well except for the \ion{Ne}{x}~Ly~$\alpha$~/~\ion{Ne}{x}~r ratio, which is reproduced much better
by method~1.
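For illustration only, the following schematic Python snippet (not part of our analysis chain, which uses the CHIANTI emissivities) shows the type of least-squares problem solved in method~1: the free parameters are the coefficients of the polynomial describing $\log DEM(\log T)$, the model line ratios are ratios of $\int G_i(T)\,DEM(T)\,dT$, and the fit targets are the measured, abundance-independent ratios from Table~\ref{results}. Gaussian toy contribution functions $G_i$ are used in place of the real emissivities, and the overall $DEM$ normalization, which cancels in the ratios, is omitted.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

logT = np.linspace(5.8, 7.2, 300)
def G(logT0, w=0.15):                       # toy contribution function
    return np.exp(-0.5 * ((logT - logT0) / w)**2)

# toy line pairs and measured ratios/errors (O VIII/O VII r, Ne X/Ne IX r)
pairs = [(G(6.50), G(6.30)), (G(6.75), G(6.60))]
obs, err = np.array([0.79, 0.48]), np.array([0.04, 0.10])

def ratios(c):                              # c: DEM shape parameters
    x = logT - 6.5
    dem = 10.0**(c[0]*x + c[1]*x**2)        # log DEM polynomial (shape only)
    # uniform grid, so plain sums suffice for the flux ratios
    return np.array([(gn*dem).sum() / (gd*dem).sum() for gn, gd in pairs])

fit = least_squares(lambda c: (ratios(c) - obs) / err, x0=[0.0, 0.0])
print(fit.x, ratios(fit.x))
\end{verbatim}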
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6220fig3.ps}}
\caption[]{\label{alpha_Cen_DEM} $DEM$ distributions obtained by modeling a 3$^{rd}$ order polynomial to $\log DEM(\log T)$ (method~1) and a 4$^{th}$ order polynomial with two zeros to $DEM(\log T)$ (method~2). Note that the shape of the $DEM$ is well-determined only for $\log T > 6.0$.}
\end{figure}
By forcing the $DEM$ distributions obtained with both methods and the corresponding line contribution functions to reproduce the measured line fluxes, we determine the absolute
(and relative) abundances of neon and oxygen. The results are listed in Table~\ref{abundances}; the relative neon-to-oxygen abundance with method 1 is $A_{Ne}$/$A_O = 0.27 \pm 0.03$, while method 2 yields $A_{Ne}$/$A_O = 0.31 \pm 0.08$. Note that errors are based on count statistics alone, i.\,e. the smaller error for the first approach results from the better quality of the fit, giving consistent individual abundances for the two neon lines, while they clearly deviate in the second approach.
\subsection{Emission measure-independent linear combinations of line fluxes}
\citet{Acton_neon_to_oxygen} proposed to determine the solar coronal Ne/O abundance ratio from the ratio of the measured line fluxes of the \ion{Ne}{ix} resonance and \ion{O}{viii} Ly $\alpha$ lines since their contribution functions have similar peak formation temperatures and a similar temperature dependence. This approach thus avoids uncertainties in the abundance determination introduced by the initially unknown underlying temperature structure of the emitting plasma.
Acton's method has been refined by \citet{Drake_Testa} by adding a portion of the \ion{Ne}{x} Ly~$\alpha$ flux
to reduce the temperature residuals. We further refined this approach by also taking the \ion{O}{vii} resonance line into account and calculating optimal linear combinations of the measured fluxes. The coefficients for the linear combinations are obtained from a minimization procedure incorporating the corresponding ratios of the theoretical emissivities of the involved lines from the CHIANTI database. Relative to the \ion{Ne}{ix}~r line, we obtain scaling factors of 0.02, $-$0.17, and 0.69 for \ion{Ne}{x} Ly~$\alpha$, \ion{O}{vii}~r, and \ion{O}{viii} Ly~$\alpha$ respectively, for the line fluxes and line contribution functions in photon (not energy) units, see also Fig~\ref{Ne_to_O}. These coefficients give $A_{Ne}$/$A_O = 0.28 \pm 0.05$, in very good agreement with the value obtained with the $DEM$ reconstruction methods.
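Schematically, and again with toy Gaussian contribution functions in place of the CHIANTI data actually used, such coefficients can be obtained from a linear least-squares fit that makes the Ne combination as close as possible to an O combination at all relevant temperatures; the Ne/O abundance ratio then follows from the same combination of measured photon fluxes:
\begin{verbatim}
import numpy as np

logT = np.linspace(6.2, 7.0, 200)
def G(logT0, w=0.15):                       # toy contribution functions
    return np.exp(-0.5 * ((logT - logT0) / w)**2)

g_ne9, g_ne10 = G(6.60), G(6.75)            # Ne IX r, Ne X Ly alpha
g_o7,  g_o8   = G(6.30), G(6.50)            # O VII r, O VIII Ly alpha

# find a, b, c such that g_ne9 + a*g_ne10 ~ b*g_o7 + c*g_o8 for all T
A = np.column_stack([-g_ne10, g_o7, g_o8])
a, b, c = np.linalg.lstsq(A, g_ne9, rcond=None)[0]
print(a, b, c)

# with measured photon fluxes F, the abundance ratio then follows as
# A_Ne/A_O ~ (F_ne9 + a*F_ne10) / (b*F_o7 + c*F_o8)
\end{verbatim}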
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6220fig4.ps}}
\caption[]{\label{Ne_to_O} Linear combinations of contribution functions of H-like Ly~$\alpha$ and He-like resonance lines of neon and oxygen.}
\end{figure}
\setlength{\tabcolsep}{5pt}
\begin{table}
\begin{center}
\begin{tabular}{lccc}
\hline\hline
line ratio &measured&method 1 &method 2\\
\hline
\ion{N}{vii} Ly $\alpha$ / \ion{N}{vi} r & 1.40 $\pm$ 0.16 & 1.36 & 1.30\\
\ion{O}{viii} Ly $\alpha$ / \ion{O}{vii} r & 0.79 $\pm$ 0.04 & 0.79 & 0.83\\
\ion{Ne}{x} Ly $\alpha$ / \ion{Ne}{ix} r & 0.48 $\pm$ 0.10 & 0.47 & 0.28\\
\ion{C}{vi} Ly $\alpha$ / \ion{C}{v} $\beta$& 24.17 $\pm$ 10.19& 25.08 & 19.97\\
\ion{O}{viii} Ly $\alpha$ / \ion{O}{vii} $\beta$ & 7.64 $\pm$ 0.67 & 7.85 & 8.22\\
\ion{Fe}{xvii} 15.01\,\AA\ / 16.78\,\AA\ & 1.70 $\pm$ 0.20 & 1.45 & 1.42\\
\hline
\end{tabular}
\caption[]{\label{results} Line ratios (photon fluxes) used in the fitting procedure.}
\end{center}
\end{table}
\setlength{\tabcolsep}{6pt}
\begin{table}
\begin{center}
\begin{tabular}{lcccc}
\hline\hline
&\multicolumn{2}{c}{$DEM$ modeling}& linear&Asplund\\
&method 1 &method 2 & combinations &et al.\\
\hline
Ne & $7.95\pm0.04$ & $8.01\pm0.13$ &---&7.84\\
O & $8.52\pm0.01$ & $8.51\pm0.01$ &---&8.66\\
$A_{Ne}$/$A_O$ & $0.27\pm0.03$ & $0.31\pm0.08$ & $0.28\pm0.05$& 0.15\\
\hline
\end{tabular}
\caption[]{\label{abundances} Absolute abundances of neon and oxygen and the Ne/O abundance ratio of $\alpha$~Cen obtained with different methods. }
\end{center}
\end{table}
\section{Results and discussion}
The results of our abundance modeling (cf., Table~\ref{abundances}) are very
robust and yield values of $A_{Ne}$/$A_O \approx 0.28$, independent of the
applied method. This value is twice as large as the "canonical" $A_{Ne}$/$A_O$
for the solar corona. $\alpha$~Cen is probably the most suitable star
for a comparison with the Sun avoiding a possible FIP/I-FIP bias. Our $DEM$ of $\alpha$~Cen resembles that of the quiet Sun \citep[e.\,g.][]{Brosius,Landi_Landini_DEM}, which typically peaks around $\log T \approx 6.0$--6.2.
However our values refer to $\alpha$~Cen~B,
which is by far the more active component known to show flaring activity \citep{NEXXUS,Robrade_alpha_Cen}.
Separate X-ray spectra of $\alpha$~Cen~A and B are available from
a 79~ks \emph{Chandra} LETGS exposure. However, both spectra have extremely
low signal in the wavelength range covering the \ion{Ne}{ix} and \ion{Ne}{x} lines. Values of $A_{Ne}$/$A_O = 0.18 \pm 0.07$ and $0.24 \pm 0.09$ for $\alpha$~Cen~A and B respectively were derived from global fitting by \citet{Raassen} and are thus based primarily on \ion{Ne}{vii} and \ion{Ne}{viii} lines located at longer wavelengths. Many of these lines suffer from significant blending as well as
low signal and their atomic physics parameters should be considered as more uncertain than those of the H-like and He-like lines. Formally, the Ne/O
abundances of $\alpha$~Cen~A and B are consistent with each other, and the
value for the B~component is consistent with our XMM-Newton result.
Measurements of the solar Ne/O abundance ratio tend to show a broad scatter (cf. the compilation provided by \citet{Drake_Testa} in the supplementary information, with values of $A_{Ne}$/$A_O$ ranging from 0.08 to 0.47), but the majority of them, based on miscellaneous data like solar energetic particles, X-ray or EUV spectra, are in good agreement with the "low" Ne/O abundances.
Additionally, the most recent analyses of \citet{Young_neon_to_oxygen} and \citet{Schmelz_neon_to_oxygen}, based on the most recent atomic data, clearly support values as low as 0.15.
All stars in the survey of \citeauthor{Drake_Testa} show higher values of $A_{Ne}$/$A_O$, incompatible with the "canonical" low solar value. The inference of the solar Ne/O abundance from other stars is, however, problematic.
Apart from the fundamental question of why the Sun should implicitly show the same abundance pattern as other stars do (and many stellar photospheric measurements show that it does not), the most severe problem is to find truly solar-like stars,
i.\,e. relatively old and inactive single stars of similar spectral type. These conditions and the basic requirement that the stars are observable in X-rays (or in the EUV) with a sufficient signal to obtain abundance measurements of neon and oxygen, are almost mutually exclusive since stars with an X-ray luminosity as low as that of the Sun can only be observed in the very solar vicinity with today's X-ray telescopes. Instead, the typical well-studied stellar coronal X-ray source is much brighter, usually consisting of an active young late-type star or even an RS~CVn system. Such objects are not appropriate for a direct comparison with the Sun, especially if one accepts the reality of the inverse FIP effect, i.\,e. an enhancement of elements with high first ionization potential; while the physics of abundance anomalies like the inverse FIP effect and its counterpart, the FIP effect as observed on the Sun, are not fully understood, a framework to explain both effects has been provided by
\citet{Laming}.
A correlation seems to exist in the sense that the FIP effect turns into the inverse FIP effect, with the inverse FIP effect
becoming stronger with increasing activity. Clearly, the sample used by \citeauthor{Drake_Testa} is then
strongly biased. The only star in their sample of truly solar-like activity is Procyon, where a
value of 0.42 from \citet{coronal_photospheric} was used, which was obtained from three combined \emph{Chandra} LETGS spectra
but still with low signal at the wavelengths of the \ion{Ne}{ix} and \ion{Ne}{x} lines. For part of these data
\citet{Raassen_Procyon} obtained $A_{Ne}$/$A_O = 0.22$ with a global fitting approach, a value also found by the
same authors from 91~ks of \emph{XMM} RGS and MOS data; thus the correct
Ne/O abundance ratio of Procyon remains an open issue.
Moderately active K dwarfs seem to have values of $A_{Ne}$/$A_O \sim 0.37$, i.\,e. slightly lower than the median
obtained by \citeauthor{Drake_Testa}, but still at approximately twice the level of the Sun as shown by \citet{Wood_Linsky},
who investigated Ne/O abundance ratios of $\epsilon$~Eri, 36~Oph and 70~Oph. As pointed out above, our Ne/O value
also refers to a K star, $\alpha$~Cen~B, which is less active than any of the stars studied by \citeauthor{Wood_Linsky}.
Therefore, none of the stars studied so far has a Ne/O abundance as low as that observed for the Sun, and
the question remains why the Sun is so special. Or do we first have to find a real solar-like star?
\begin{acknowledgements}
CHIANTI is a collaborative project involving the NRL (USA), RAL (UK), MSSL (UK), the Universities of Florence (Italy) and Cambridge (UK), and George Mason University (USA).\\
CL acknowledges support from DLR under 50OR0105.
\end{acknowledgements}
\bibliographystyle{aa}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 439 |
Cardi B, Bernie Sanders Scorch Trump's COVID-19 Response On Instagram Live
It's safe to say that Sen. Bernie Sanders and his superfan Cardi B do not like President Donald Trump.
Or at least that was the main takeaway from the very fiery conversation the musician and her "Uncle Bernie" had on Instagram Live Thursday night, in which both were extremely critical of the commander-in-chief's response to the coronavirus pandemic.
The "Bodak Yellow" rapper wouldn't even call Trump by his name and instead referred to him as "45" (he's the 45th president). She began by slamming Trump and many of his high-profile supporters for initially trying to downplay the threat of the looming pandemic by claiming that Democrats were just using health crisis to hurt his reelection chances.
Donald Trump Jr., the president's eldest son, in late February said on "Fox & Friends" that Democrats hoped millions of Americans would die so "they can end Donald Trump's streak of winning." In early March, Fox News' Sean Hannity echoed that theory on his radio show, saying that the media and Democrats were "politicizing" and "weaponizing" the spread of the virus "to bludgeon" Trump.
The president himself went so far as to blame his predecessor, former President Barack Obama, in late March for his administration's slow response to the pandemic by saying it inherited a "broken, obsolete system" for testing for a disease that did not exist until last year.
Cardi B said these tactics used by Trump and his allies "baffled my mental."
"The thing is, honey, you don't need the Democrats to make you look bad," she said of Trump and his supporters. "You make your own self look bad."
Cardi B also slammed Trump's treatment of the media during his press briefings, saying the president has had plenty of opportunities to make himself look like "he actually cares" when answering journalists' questions. But instead he "shushed them off or degrades" reporters.
Sanders detailed other shortcomings in Trump's early responses to the outbreak, which included ignoring scientists and opting not to use the Defense Production Act "to tell private companies that instead of producing underwear right now or socks they got to start producing masks and ventilators."
One day I'll tell my grandchildren about the time I gaffe taped Bernie Sanders' iPad to a cushion so he could talk with his hands while live streaming with Cardi B. pic.twitter.com/ggbqNBW7BM
— Chris Witschy (@filmthebern) April 15, 2020
"He didn't do any of that," said Sanders, who ended his bid for the Democratic presidential nomination earlier this month and has endorsed the presumptive nominee, Joe Biden.
The Vermont independent also broke down why he thinks people should vote for Joe Biden this year as he offered a scathing analysis of Trump's character.
"Donald Trump is to my mind the most dangerous president in the modern history of America," Sanders said. "This is a guy who lies all the time, he doesn't believe in science, he downplayed this whole coronavirus, which has led to the deaths of many thousands of people dying unnecessarily."
"He doesn't believe in the Constitution, he thinks he's above the law. So this is a bad-news guy that has got to be defeated and I will do everything that I can to defeat him."
To hear more about Sanders' plan to help Biden win the presidency and steps Congress is taking to support people in need during the pandemic, watch the video above.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 8,892 |
\section{Introduction} \label{sec:intro}
\Acp{agn} are powered by mass accretion onto \acp{smbh}. They emit intense electromagnetic radiation over a broad range of frequencies. Measurements of X-ray spectra of \acp{agn} allow us to study various aspects of \acp{smbh} such as black hole spins \citep[e.g.,][]{Reynolds2014}, geometrical structures \citep[e.g.,][]{Ramos_Almeida2017}, and cosmological evolution \citep[e.g.,][]{Ueda2014}.
A key to understanding these phenomena is the primary X-ray radiation, which arises from Comptonization of disk photons in a moderately optically thick thermal plasma, the so-called corona, above the accretion disk \citep[see, e.g.,][]{Katz1976,1977A&A....59..111B,Pozdniakov1977,Galeev1979,Takahara1979,Sunyaev1980}. X-ray observations have indicated coronal temperatures of $\sim10^9$~K and Thomson scattering optical depths of $\gtrsim1$ \citep[e.g.][]{Zdziarski1994,Fabian2015}. However, the nature of \ac{agn} coronae is still veiled in mystery.
Very recently, \citet{Inoue2018} has reported the detection of coronal radio synchrotron emission from two nearby Seyferts \citep[e.g.,][]{DiMatteo1997,Inoue2014,Raginski2006} utilizing the \ac{alma}. The inferred coronal magnetic field strength was $\sim10$~G on a scale of $40R_s$, where $R_s$ is the Schwarzschild radius, for both active \acp{smbh} with masses of $\sim10^8M_\odot$. It was also found that the coronae of Seyferts contain both thermal and non-thermal electrons. This implies that acceleration of high energy particles takes place in AGN coronae.
High energy particles in the nuclei of Seyferts have been discussed for a long time\footnote{High energy particles in the coronae of X-ray binaries have been also discussed in literature \citep[e.g.,][]{Bhattacharyya2003,Bhattacharyya2006}.}. In the past, it was argued that primary X-ray emission comes from pair cascades induced by high energy particles accelerated in and/or around accretion flows \citep[e.g.,][]{Zdziarski1986,Kazanas1986, Ghisellini2004}. In the pair cascade model, particles are accelerated by shock dissipation in accretion flows \citep[e.g.,][]{Cowsik1982,Protheroe1983,Zdziarski1986,Kazanas1986,Sikora1987,Begelman1990}. However, the detection of the \ac{agn} spectral cutoffs \citep[e.g.,][]{Madejski1995,Zdziarski2000} and non-detection of Seyfert \acp{agn} in the gamma-ray band \citep[e.g.,][]{Lin1993} ruled out the pair cascade scenario as a dominant source for the primary X-ray emission\footnote{TeV gamma rays are measured from the Galactic center \citep{HESS2016}. This detection indicated possible particle acceleration in accretion flow, even though accretion rate in the Galactic center is several orders of magnitude lower than that in standard disks.}.
In this paper, we investigate the production mechanism of the observed high energy particles in \ac{agn} coronae. As an example, we consider the case in which these high energy particles are supplied by \ac{dsa} processes \citep[e.g.,][]{Drury1983,Blandford1987} in the coronae. Contrary to the previously discussed AGN accretion shock models, the required shock power is much lower in order to explain the observed non-thermal species and to be in concordance with the current picture of coronal X-ray emission. Moreover, previous studies of high energy particles in \ac{agn} accretion disks have treated the corona size and magnetic field, which are important parameters for understanding particle acceleration, as free parameters. The \ac{alma} observations allowed us to determine both of them \citep{Inoue2018}. Most critically, the observationally determined strength of the magnetic field appeared to be significantly smaller than the one previously considered in the literature. We take into account these newly determined coronal parameters.
Thermal coronal emission from Seyferts is known to explain the entire cosmic X-ray background radiation \citep[e.g.,][]{Ueda2014}. In contrast, the origin of the cosmic MeV background radiation from 0.1~MeV to several tens of MeV is still unknown \citep[see e.g.,][]{Inoue2014_Fermi}. Here, the non-thermal electrons in coronae seen by ALMA will give rise to power-law MeV gamma-ray emission via Comptonization of disk photons. Such non-thermal emission has been suggested as a possible explanation for the cosmic MeV gamma-ray background radiation \citep{Inoue2008}. However, non-thermal electron species in the previous work were included in an ad hoc way. In this work, we revisit the contribution of Seyferts to the MeV gamma-ray background radiation by considering the particle acceleration of non-thermal populations in coronae together with the latest X-ray luminosity function of Seyferts \citep{Ueda2014}.
High energy particles around accretion disks of \acp{agn} also generate intense neutrino emission through \ac{pp} and \ac{pg} interaction processes with the accreting gas and photon fields \citep[e.g.,][]{Eichler1979,Begelman1990,Stecker1992,Alvarez-Muniz2004}. Although these originally predicted fluxes have been significantly constrained by high energy neutrino observations \citep{IceCube2005}, recent studies have revisited the estimated fluxes and found that \ac{agn} core models are still viable \citep{Stecker2005,Stecker2013,Kalashev2015}. However, in those models the normalization of the neutrino fluxes from \acp{agn} and the acceleration properties of the high energy particles were assumed so as to match the observations. In this work, we also discuss the possible contribution from \ac{agn} cores given our \ac{alma} observations and investigate the required parameter spaces for the explanation of the IceCube diffuse neutrino fluxes.
We describe general particle acceleration processes in \ac{agn} coronae in \S~\ref{sec:acceleration}. The broadband emission spectrum of the central region of \acp{agn} and physical properties of \ac{agn} coronae are presented in \S~\ref{sec:property}. Relevant timescales and steady-state particle spectra are discussed in \S~\ref{sec:timescales} and \S~\ref{sec:particle_spectrum}, respectively. \S~\ref{sec:g_nu_AGN} and \S~\ref{sec:background} present the results of the expected gamma-ray and neutrino fluxes from individual AGN cores and the cosmic gamma-ray and neutrino background fluxes from \ac{agn} cores, respectively. Discussion including other possible particle acceleration mechanism is given in \S~\ref{sec:discussion}, and conclusions are in \S~\ref{sec:conclusion}. Throughout this paper, we adopt the standard cosmological parameters of $(h, \Omega_M, \Omega_\Lambda) = (0.7, 0.3, 0.7)$.
\section{Particle Acceleration in Nuclei of Seyferts}
\label{sec:acceleration}
As non-thermal coronal synchrotron emission is seen in nearby Seyferts \citep{Inoue2018}, particle acceleration should occur in AGN coronae, even though thermal populations are energetically dominant. Particle acceleration mechanism in the coronae is highly uncertain. Various acceleration mechanisms can take place in the coronae such as \ac{dsa} mechanism \cite[e.g.,][]{Drury1983,Blandford1987}, turbulent acceleration \citep[e.g.,][]{Zhdankin2018}, magnetosphere acceleration \citep[e.g.,][]{Beskin1992,Levinson2000}, and magnetic reconnection \citep[e.g.,][]{Hoshino2012}. In this work, for simplicity, we consider the \ac{dsa} as the fiducial particle acceleration process. We discuss the other possible acceleration processes in \S~\ref{sec:other_acc}.
In order to investigate the particle acceleration mechanism of the observed non-thermal electrons, we consider the interaction of locally injected relativistic particles with the matter, photons, and magnetic field in the infalling coronae. Although the location of shock sites is uncertain, for simplicity, we assume that shocks occur inside the coronae. The shock accelerates a part of the inflowing plasma to high energies. As the energy loss timescale of high energy protons is in general longer than the free-fall timescale, a sufficiently high energy density of relativistic particles is maintained to provide pressure to support a standing shock around a \ac{smbh} \citep{Protheroe1983}.
Coronae are assumed to be spherical with a radius of $R_c\equiv r_cR_s$. $r_c$ is the dimensionless parameter of the corona size and $R_s=2G\mbh/c^2$, where $G$ is the gravitational constant, $\mbh$ is the mass of the central \ac{smbh}, $c$ is the speed of light. Coronae are also set to be in a steady state. We also do not consider positrons in coronae. Thus, the proton number density $n_p$ is equal to the electron density $n_e$ in this work, which gives the maximum number of protons in coronae. $n_e$ is defined through the Thomson scattering opacity in coronae, \(\taut\) as
\begin{eqnarray}
n_e &=& \frac{\taut}{\sigmat R_c}\\ \nonumber
&\simeq& 1.4\times10^9\left(\frac{\taut}{1.1}\right)\left(\frac{r_c}{40}\right)^{-1}\left(\frac{\mbh}{10^8M_\odot}\right)^{-1}\ {\rm cm}^{-3},
\end{eqnarray}
where $\sigmat$ is the Thomson scattering cross section.
\subsection{Dynamical Timescale}
The gas is assumed to be spherically accreted onto the \ac{smbh} with free-fall velocity $\vff = \sqrt{2G\mbh/R_c}$. The free-fall timescale from the coronal region is estimated to be
\begin{equation}
\label{eq:t_fall}
t_{\rm fall}= R_c / \vff\simeq2.5\times10^5\left(\frac{r_c}{40}\right)^{1/2}\left(\frac{\mbh}{10^8M_\odot}\right)\ [{\rm s}].
\end{equation}
\subsection{Radiative Cooling}
High energy particles loose their energies through radiative cooling processes. In \ac{agn} coronae, high-energy electrons mainly lose their energies via synchrotron and \ac{ic} radiation. The synchrotron cooling rate for an electron with a Lorentz factor of $\gamma_e$ is
\begin{eqnarray}
t_{{\rm syn}, e}(\gamma_e) &=& \frac{3}{4} \frac{m_e c}{\sigmat U_{\rm B}} \gamma_e^{-1}, \\ \nonumber
&\simeq& 7.7\times10^4\left(\frac{B}{10~{\rm G}}\right)^{-2}\left(\frac{\gamma_e}{100}\right)^{-1}\ [{\rm s}],
\end{eqnarray}
where $m_e$ is the electron rest mass and $U_{\rm B} =B^2/8\pi$ is the magnetic field energy density of magnetic field strength $B$.
The inverse Compton cooling rate including the \ac{kn} cross section \citep{Jones1968,Moderski2005,Khangulyan2014} is
\begin{equation}
\label{eq:time_ic}
t\ic(\gamma_e) = \frac{3 m_e c}{4\sigmat }\left[\int\limits_0^{\infty}d\epsilon f\KN(\tildeb)\frac{U_{\rm ph}(\epsilon)}{\epsilon} \right]^{-1}\gamma_e^{-1},
\end{equation}
where $\tildeb\equiv 4\gamma_e\epsilon/m_ec^2$ and $f\KN \simeq 1/(1.0+\tildeb)$ \citep{Moderski2005}. $\epsilon$ is the target photon energy and $U_{\rm ph}$ is the photon energy density given as $U_{\rm ph}(\epsilon)=L_{\rm ph}(\epsilon)/(4\pi R_c^2c)$. The total \ac{agn} disk luminosity, $L_{\rm ph}$, which includes contributions from the accretion disk and corona, is defined in \S~\ref{sec:AGN_SED}. For simplicity, we consider a uniform photon density in the coronae. If the corona has a spatially homogeneous emissivity rather than uniform emission, the mean photon density inside the source is enhanced by a factor of $\sim2.24$ on average \citep{Atoyan1996}.
For the typical characteristics of the coronae, the energy density of the photon field is
\begin{eqnarray}
&&U_{\rm ph,tot}=\int d \epsilon\, U_{\rm ph}(\epsilon)\\
&& \sim5\times10^3 \frac{L_{\rm ph,bol}}{2\times10^{45}\rm\,erg\,s^{-1}} \left(\frac{r_c}{40}\right)^{-2}\left(\frac{\mbh}{10^8M_\odot}\right)^{-2}[{\rm erg\,cm^{-3}}]\,.\nonumber
\end{eqnarray}
For the magnetic field strength inferred with \ac{alma}, \(B\simeq10\rm\,G\) for $\mbh=10^8M_\odot$ SMBHs, the energy density of the photon field exceeds the magnetic field energy density if \mbox{\(L_{\rm ph,bol}\geq2\times10^{42}\rm\,erg\,s^{-1}\)}. We note that the dominance of the photon field over the magnetic field does not necessarily prevent particle acceleration, as such conditions are met in some efficient non-thermal sources, e.g., in gamma-ray binary systems \citep{2006JPhCS..39..408A,2008MNRAS.383..467K}. Moreover, a high density of target photons can enable the converter acceleration mechanism if a relativistic velocity jump is present in the system \citep{2003PhRvD..68d3003D}.
Relativistic protons are predominantly cooled through inelastic \ac{pp} interactions, \ac{pg} reactions, and proton \ac{ic}/synchrotron channels. Since only the Thomson regime might be relevant for the proton \ac{ic} cooling, the proton synchrotron and \ac{ic} cooling time-scales are
\begin{equation}
t\ic{}_{{\rm /syn}, p} = \frac{3}{4} \left(\frac{m_p}{m_e}\right)^3\frac{m_e c^2}{c\sigmat U_{\rm ph/B}} \gamma_p^{-1}\,,
\end{equation}
where $m_p$ is the proton rest mass and $\gamma_p$ is the proton Lorentz factor. In the case of the synchrotron losses, this yields
\begin{equation}
t_{{\rm syn}, p} \simeq 4.8\times10^{14}\left(\frac{B}{10~{\rm G}}\right)^{-2}\left(\frac{\gamma_p}{100}\right)^{-1}\ [{\rm s}]\,.
\end{equation}
Given the higher energy density of the photon field, the \ac{ic} cooling time can be up to \(\sim10^4\) times faster. These electrodynamic cooling channels are inefficient as compared to the hardronic mechanisms below. Hereinafter, we do not consider proton \ac{ic}/synchrotron coolings.
The \ac{pp} cooling time can be expressed as
\begin{eqnarray}
\label{eq:t_pp}
t_{pp} &=& \frac{1}{n_p\sigma_{pp} c \kappa_{pp}},\\ \nonumber
&\simeq& 1.6\times10^6\left(\frac{\taut}{1.1}\right)^{-1}\left(\frac{r_c}{40}\right)\left(\frac{\mbh}{10^8M_\odot}\right)\ [{\rm s}].
\end{eqnarray}
where $\kappa_{pp}\sim 0.5$ is the proton inelasticity of the process and we adopt $\sigma_{pp}=3\times10^{-26}\ {\rm cm}^2$. Below we adopt the formalism developed by \citet{Kelner2006}. The total cross section of the inelastic \ac{pp} process $\sigma_{pp}$ is represented as a function of the proton energy $E_p=\gamma_p m_pc^2$,
\begin{eqnarray}
\sigma_{pp} &\simeq& \Big(34.3+1.88L + 0.25L^2\Big) \\
&&\times\left[1-\left(\frac{E_{pp,\rm thr}}{E_p}\right)^4\right]^2~ {\rm mb} \nonumber
\end{eqnarray}
for $E_p\ge E_{pp,\rm thr}$, where $1\ {\rm mb}=10^{-27}\ {\rm cm}^2$, $L=\log(E_p/1\,\rm TeV)$, and $E_{pp,\rm thr}=1.22$~GeV \citep{Kelner2006}.
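For illustration, a minimal Python sketch of the adopted $\sigma_{pp}$ parametrization and the resulting $pp$ cooling time is given below; the coronal proton density is the value implied by $\taut=1.1$ and $r_c=40$ for a $10^8M_\odot$ black hole, and the snippet is not part of the analysis code.
\begin{verbatim}
# Minimal sketch: sigma_pp parametrization quoted above and the pp cooling
# time for an assumed coronal proton density.
import numpy as np
def sigma_pp(E_p_TeV):
    """Inelastic pp cross section [cm^2] for proton energy in TeV."""
    E_thr = 1.22e-3               # threshold energy (1.22 GeV) in TeV
    L = np.log(E_p_TeV)
    sig_mb = (34.3 + 1.88*L + 0.25*L**2) * (1 - (E_thr/E_p_TeV)**4)**2
    return sig_mb * 1e-27
R_c  = 1.2e15                     # coronal radius [cm] (1e8 Msun, r_c = 40)
n_p  = 1.1 / (6.652e-25 * R_c)    # tau_T / (sigma_T R_c) [cm^-3]
t_pp = 1.0 / (n_p * sigma_pp(1.0) * 3e10 * 0.5)
print(sigma_pp(1.0), t_pp)        # ~3.4e-26 cm^2, ~1e6 s
\end{verbatim}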
The \ac{pg} cooling time via photomeson interactions is
\begin{equation}
\label{eq:t_pg}
t_{p\gamma}^{-1} = \frac{c}{2\gamma_p^2}
\int\limits_{\bar{\varepsilon}_{\rm thr}}^{\infty}d\bar{\varepsilon}\sigma_{p\gamma}(\bar{\varepsilon})K_{p\gamma}(\bar{\varepsilon})\bar{\varepsilon} \int\limits_{\bar{\varepsilon}/(2\gamma_p)}^{\infty}d\epsilon\, \frac{U_{\rm ph}(\epsilon)}{\epsilon^4},
\end{equation}
where $\bar{\varepsilon}$ and $\epsilon$ are the photon energy in the proton rest frame and the black hole frame, respectively,
$U_{\rm ph}$ is the energy density of the photon target, and $\bar{\varepsilon}_{\rm thr} = 145$~MeV. For numerical calculation we follow the formalism suggested by \citet{Kelner2008}.
The \ac{pg} interaction also generates pairs through the so-called Bethe-Heitler pair production process, whose cooling timescale is approximated as \citep{Gao2012}
\begin{eqnarray}
t_{\rm BH}^{-1} &\approx&\frac{7(m_{e}c^{2})^{3}\alpha_{f}\sigmat c}{9\sqrt{2}{\pi}m_{p}c^2\gamma_{p}^{2}}\int_{m_ec^2/\gamma_p}^{\infty}d\epsilon\frac{U_{\rm ph}(\epsilon)}{\epsilon^4} \\ \nonumber
&\times& \left\{\left(\frac{2\gamma_{p}\epsilon}{m_ec^2}\right)^{3/2}\left[\log\left(\frac{2\gamma_{p}\epsilon}{m_ec^2}\right) -2/3\right]+2/3\right\},
\end{eqnarray}
where $\alpha_f$ is the fine-structure constant.
\subsection{Acceleration}
In the framework of \ac{dsa} \citep[e.g.,][]{Drury1983,Blandford1987}, the acceleration time scale can be approximated as
\begin{equation}
t\DSA\simeq\frac{\eta_{\rm acc}D(E\CR)}{\vsh^2},
\end{equation}
where $D$ is the diffusion coefficient, $E\CR$ is the particle energy, and $\vsh$ is the shock speed. $\eta_{\rm acc}$ is a numerical factor that depends on the shock compression ratio and the spatial dependence of $D$ \citep{Drury1983}. We set $\eta_{\rm acc}=10$. Assuming a Bohm-like diffusion,
\begin{equation}
D(E\CR )\simeq\frac{\eta_gcE\CR }{3eB},
\end{equation}
where $e$ is the electric charge and $\eta_g$ is the gyrofactor which is the mean free path of a particle in units of the gyroradius. $\eta_g$ characterizes the efficiency of the acceleration. $\eta_g=1$ corresponds to the Bohm limit case. The \ac{dsa} time can be written as
\begin{eqnarray}
&&t\DSA\simeq\frac{10}{3}\frac{\eta_g c R_g}{\vsh^2}, \label{eq:t_acc} \\ \nonumber
&&\simeq 7.6\times10^{-3} \left(\frac{\eta_g}{100}\right)\left(\frac{m_{p/e}}{m_e}\right)\left(\frac{r_c}{40}\right)\left(\frac{B}{10\ {\rm G}}\right)^{-1}\left(\frac{\gamma_{p/e}}{100}\right)\ [{\rm s}].
\end{eqnarray}
where $R_g$ is the gyroradius and $\vsh$ is set as $\vff(R_c)$. $\eta_g$ varies in different astrophysical environments: $\eta_g\sim1$ has been inferred for a Galactic supernova remnant \citep{Uchiyama2007}, while $\eta_g\sim10^4$ is inferred for blazars in the framework of one-zone leptonic models \citep[e.g.,][]{Inoue1996,Finke2008,Inoue2016}.
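As a numerical sanity check, the sketch below evaluates $t\DSA$ for protons and compares it with the free-fall time; it is illustrative only, with $\vsh$ set to the free-fall velocity at $R_c$ as in the text.
\begin{verbatim}
# Minimal sketch: DSA acceleration time vs free-fall time for eta_g = 30,
# B = 10 G, r_c = 40, and M_BH = 1e8 Msun (not the analysis code).
import numpy as np
c, e, m_p = 3e10, 4.803e-10, 1.673e-24
R_s  = 2 * 6.674e-8 * (1e8 * 2e33) / c**2
r_c  = 40.0
R_c  = r_c * R_s
B, eta_g = 10.0, 30.0
v_sh = c / np.sqrt(r_c)                      # free-fall velocity at R_c
def t_dsa(gamma_p):
    R_g = gamma_p * m_p * c**2 / (e * B)     # proton gyroradius
    return (10.0/3.0) * eta_g * c * R_g / v_sh**2
print(t_dsa(100.0), t_dsa(1e6), R_c / v_sh)  # ~4 s, ~4e4 s, ~2.5e5 s
\end{verbatim}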
\section{Properties of Active Supermassive Black Holes}
\label{sec:property}
In this section, we summarize the general observational properties of the central region of \acp{agn} related to high-energy particles in coronae.
\subsection{Broadband Emission from the Core Region}
\label{sec:AGN_SED}
Emission from the \ac{agn} core region mainly arises from two components \citep{Elvis1994}. The first is the geometrically thin and optically thick standard accretion disk \citep{Shakura1973}. This standard accretion disk generates a big blue bump from optical to UV attributed to multi-color blackbody radiation. The second is Comptonization of accretion-disk photons in the coronal regions above the accretion disk \citep{Katz1976,1977A&A....59..111B,Pozdniakov1977,Sunyaev1980}. This Comptonized emission appears in the X-ray band together with emission reprocessed by the surrounding cold material, a so-called Compton reflection component \citep[e.g.,][]{Lightman1988,Magdziarz1995,Ricci2011}.
In this work, for the primary X-ray emission from coronae, we assume a cut-off power-law model in the form of $E^{-\Gamma}\exp(-E/E_c)$, where we set $\Gamma=1.9$ and $E_c=300$~keV \citep{Ueda2003,Ueda2014}. For the Compton reflection component, we use the {\tt pexrav} model \citep{Magdziarz1995} assuming a solid angle of $2\pi$, an inclination angle of $\cos i = 0.5$, and the solar abundance for all elements. Since we consider the photons only around the core regions, we ignore the absorption by the torus.
The optical-UV accretion-disk \acp{sed} are taken from \citet{Elvis1994}. Here, the primary X-ray luminosity at 2~keV is connected to the accretion-disk luminosity at 2500~\AA \ as
\begin{equation}
\log L_{2\ {\rm keV}} = 0.760 \log L_{2500\ \mathrm{\AA}} + 3.508
\end{equation}
based on the study of 545 X-ray selected type~1 \acp{agn} from the XMM-COSMOS survey \citep{Lusso2010}. Between the UV and X-ray bands, following \citet{Lusso2010}, we linearly connect the UV luminosity at 500~\AA \ to the luminosity at 1~keV. Figure~\ref{fig:AGN_SED} shows the broadband \ac{agn} SED arising from the core region for various X-ray luminosities. \ac{agn} core \acp{sed} typically have a spectral peak at $\sim30$~eV corresponding to $\sim10^5$~K (Fig.~\ref{fig:AGN_SED}), which corresponds to an emission radius of around $\sim10R_s$.
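The adopted X-ray--UV connection can be evaluated in a few lines; the sketch below is purely illustrative, and the input value of $L_{2500\,{\rm \AA}}$ is arbitrary.
\begin{verbatim}
# Minimal sketch: the adopted relation between the monochromatic 2 keV and
# 2500 A luminosities (Lusso et al. 2010); the input value is arbitrary.
def logL_2keV(logL_2500A):
    return 0.760 * logL_2500A + 3.508
print(logL_2keV(30.0))   # -> 26.3 for an assumed log L_2500A = 30.0
\end{verbatim}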
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{./AGN_SED.pdf}
\caption{The typical broadband spectral energy distribution arising from the core region of \acp{agn}. From top to bottom, each curve corresponds to 2-10~keV luminosity of $10^{46}$, $10^{44}$, $10^{42}~{\rm erg\ s^{-1}}$, respectively. }\label{fig:AGN_SED}
\end{center}
\end{figure}
\subsection{Physical Properties of Coronae}
X-ray spectral studies allow us to determine some of the coronal parameters such as the coronal electron temperature $kT_e$ and the Thomson scattering optical depth $\taut$ \citep[e.g.,][]{Brenneman2014}. $k$ is the Boltzmann constant and $T_e$ is the electron temperature in Kelvin. The spectral cutoff at $\sim300$~keV of \ac{agn} core spectra corresponds to the electron temperature of $kT_e\sim100$~keV. The process of Comptonization by thermal plasma is described by the Kompaneets equation \citep{Kompaneets1957}. Here, the photon index of the primary emission is assumed to be 1.9 in this work. This corresponds to $\taut\sim1.1$ based on the solution to the Kompaneets equation \citep{Zdziarski1996} as
\begin{equation}
\Gamma = \sqrt{\frac{9}{4}+\frac{1}{\theta_e[\taut(\taut+1/3)]}}-\frac{1}{2},
\end{equation}
where the dimensionless electron temperature $\theta_e\equiv kT_e/m_ec^2$. Therefore, in this work, we adopt $kT_e=100$~keV and $\taut=1.1$. These values are consistent with the results from detailed X-ray spectral analysis \cite[e.g.,][]{Fabian2015}.
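As a simple numerical illustration (not part of the spectral analysis), the relation above can be inverted by bisection, recovering an optical depth close to the adopted value for $\Gamma=1.9$ and $kT_e=100$~keV.
\begin{verbatim}
# Minimal sketch: invert Gamma(tau_T, theta_e) by bisection for Gamma = 1.9
# and kT_e = 100 keV; Gamma decreases monotonically with tau_T.
import numpy as np
theta_e = 100.0 / 511.0
def Gamma(tau):
    return np.sqrt(9.0/4.0 + 1.0/(theta_e*tau*(tau + 1.0/3.0))) - 0.5
lo, hi = 0.1, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Gamma(mid) > 1.9 else (lo, mid)
print(0.5 * (lo + hi))   # ~1, close to the adopted tau_T ~ 1.1
\end{verbatim}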
Recently, utilizing X-ray and radio data, \citet{Inoue2018} found that the coronal magnetic field strength $B$ is approximately $10$~Gauss on scales of $\sim40R_s$ from the \acp{smbh} for two nearby Seyferts whose BH masses are $\sim10^8~M_\odot$\footnote{Contrary to this observational result, recent numerical simulations of hot accretion flows \citep[e.g.,][]{Kimura2019} show that the magnetic field is further amplified by the magnetorotational instability \citep[MRI;][]{Balbus1991,Balbus1998}.}. This coronal size is consistent with optical--X-ray spectral fitting studies \citep{Jin2012} and microlensing observations \citep{Morgan2012}. Thus, in this paper, we set the coronal size as $40R_s$ for all \acp{smbh} and $B=10$~G for $10^8~M_\odot$ \acp{smbh}.
\citet{Inoue2018} also suggested that the coronae are likely to be advection heated hot accretion flows \citep{Kato2008,Yuan2014} rather than magnetically heated corona \citep{Haardt1991,Liu2002} because the measured magnetic field strength is too weak to keep the coronae hot and is rather consistent with the value based on the self-similar solutions of hot accretion flows \citep{Kato2008,Yuan2014}. Thus, we assume that coronal magnetic field strength scales as
\begin{equation}
B\propto \mbh^{-1/2},
\end{equation}
following the self-similar solution for the hot accretion flow \citep{Yuan2014} where we ignore dependence on accretion rate and other parameters for simplicity.
\citet{Mayers2018} have recently investigated a relation between the intrinsic 2--10~keV X-ray luminosity and the mass of central \acp{smbh} using \acp{agn} from the XMM-Newton Cluster Survey. The empirical relation found in \citet{Mayers2018} is given as
\begin{equation}
\mbh=2\times10^7 M_\odot \left[\frac{L_{2-10\ {\rm keV}}}{1.155\times10^{43}~{\rm erg\ s^{-1}}}\right]^{0.746}.
\end{equation}
Using this relation, we can convert X-ray luminosities to masses of central \acp{smbh}.
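For reference, the sketch below evaluates this conversion for a few representative luminosities (illustrative only).
\begin{verbatim}
# Minimal sketch: Mayers et al. (2018) relation converting the intrinsic
# 2-10 keV luminosity into a black hole mass (in solar masses).
def mbh_from_Lx(L_x):
    return 2e7 * (L_x / 1.155e43) ** 0.746
for L_x in (1e42, 1e44, 1e46):
    print(L_x, mbh_from_Lx(L_x))   # ~3e6, ~1e8, ~3e9 Msun
\end{verbatim}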
\subsection{Internal Gamma-ray Attenuation in Coronae}
\label{subsec:gg_int}
Accelerated electrons and protons in coronae would emit gamma rays (see \S \ref{sec:AGN_SED}). However, high energy gamma-ray photons are attenuated by photon-photon pair production interactions ($\gamma\gamma \rightarrow e^+e^-$) with low-energy photons. For isotropic target photons, the pair production cross section achieves its maximum of \(\approx0.2\sigmat\) when a gamma ray of energy $E_\gamma$ interacts with a low-energy photon with energy \citep[see, e.g.,][]{Aharonian_book}
\begin{equation}
\label{eq:ene_ebl}
\epsilon_{\rm peak}\simeq \frac{3.5m_e^2c^4}{E_\gamma}\simeq1\left(\frac{1{\rm \ TeV}}{E_\gamma}\right)\ {\rm eV}.
\end{equation}
In terms of wavelength, $\lambda_{\rm peak}\simeq1.4(E_\gamma[{\rm TeV}])\ \mu{\rm m}$.
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{./tau_gg_int_AGN.pdf}
\caption{Internal gamma-ray optical depth in the core region of \acp{agn}. From top to bottom, each curve corresponds to 2-10~keV luminosity of $10^{46}$, $10^{44}$, $10^{42}~{\rm erg\ s^{-1}}$, respectively. The horizontal dot-dashed line represents $\tau_{\gamma\gamma}=1$.}\label{fig:tau_gg}
\end{center}
\end{figure}
\begin{figure*}[tb!]
\begin{center}
\includegraphics[width=17cm]{./timescale_electron_ff.pdf}
\caption{Electron energy losses in \ac{agn} coronae together with acceleration and dynamical timescales. Each panel corresponds to a different 2--10~keV X-ray luminosity as indicated in the panels. The thin solid line shows the acceleration timescale assuming DSA. The dashed, dotted, and thick solid curves correspond to the synchrotron cooling, \ac{ic} cooling, and total cooling timescales, respectively. The dot-dashed curve shows the free-fall timescale. In these plots, we set $\taut=1.1$, $R_c=40R_s$, $kT_e=100$~keV, and $\eta_g=30$. We note that the vertical axis ranges are different in each panel.}\label{fig:time_electron}
\end{center}
\end{figure*}
Abundant photons are emitted from the \ac{agn} core region (Fig.~\ref{fig:AGN_SED}). From the SED of \ac{agn} core regions as given in \S~\ref{sec:AGN_SED},
we can compute the optical depth for high-energy gamma rays to $\gamma\gamma$ pair production interactions. The cross section for this process is \citep{1934PhRv...46.1087B,Heitler1954}
\begin{eqnarray}
&&\sigma_{\gamma\gamma}(E_\gamma,\epsilon,\theta)=\frac{3\sigmat }{16}(1-\beta^2) \nonumber \\
&&\times\left[2\beta(\beta^2-2)+(3-\beta^4)\ln\left(\frac{1+\beta}{1-\beta}\right)\right],
\end{eqnarray}
where $\beta$ is
\begin{equation}
\beta\equiv\sqrt{1-\frac{2m_e^2c^4}{\epsilon E_\gamma(1-\mu)}};\ \ \mu\equiv\cos\theta.
\end{equation}
where $\theta$ is the angle between the colliding photons' momenta.
For a photon with an energy of $E_\gamma$, the $\gamma\gamma$ optical depth is
\begin{equation}
\label{eq:tau_gg}
\tau_{\gamma\gamma}(E_\gamma)=\int\limits_{-1}^{1}d\mu\int\limits_{\epsilon_{\rm th}}^{\infty}d\epsilon\frac{1-\mu}{2} \frac{U_{\rm ph}(\epsilon)}{\epsilon^2}\sigma_{\gamma\gamma}(E_\gamma,\epsilon,\theta)R_c
\end{equation}
where $\epsilon_{\rm th}$ is the pair production threshold energy,
\begin{equation}
\epsilon_{\rm th}=\frac{2m_e^2c^4}{E_\gamma(1-\mu)}.
\end{equation}
Integration over the interaction angle in Eq.~\eqref{eq:tau_gg} can be performed analytically resulting in the angle averaged \(\gamma\gamma\) cross section \citep{Aharonian_book}:
\begin{eqnarray}
\sigma_{\gamma \gamma} & =&
\frac{3 \sigmat}{2 s^2}
\left[ \left(s+ \frac{1}{2} \ln s- \frac{1}{6} +\frac{1}{2s} \right)
\ln(\sqrt{s}+ \sqrt{s-1}) - \right.
\nonumber \\
& & \left.
\left(s+ \frac{4}{9} - \frac{1}{9s}\right) \sqrt{1- \frac{1}{s}}\right] \,,
\end{eqnarray}
where \(s=E_\gamma \epsilon/m_e^2 c^4\).
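To illustrate the magnitude of this attenuation, the sketch below evaluates the angle-averaged cross section and a crude single-zone estimate $\tau_{\gamma\gamma}\sim n_{\rm ph}\sigma_{\gamma\gamma}R_c$ for a 100~MeV photon; the monochromatic target energy and its energy density are assumed values chosen only for illustration.
\begin{verbatim}
# Minimal sketch: angle-averaged gamma-gamma cross section (expression above)
# and a crude optical-depth estimate for monochromatic target photons.
import numpy as np
sigma_T, mec2 = 6.652e-25, 8.187e-7               # cgs units
def sigma_gg(s):                                  # valid for s > 1
    return (3*sigma_T/(2*s**2)) * (
        (s + 0.5*np.log(s) - 1.0/6 + 1.0/(2*s))*np.log(np.sqrt(s)+np.sqrt(s-1))
        - (s + 4.0/9 - 1.0/(9*s))*np.sqrt(1.0 - 1.0/s))
E_gam = 100e6 * 1.602e-12                         # 100 MeV photon [erg]
eps   = 10e3 * 1.602e-12                          # assumed 10 keV target [erg]
s     = E_gam * eps / mec2**2
n_ph  = 1e3 / eps                                 # assumed U_ph ~ 1e3 erg/cm^3
R_c   = 1.2e15                                    # coronal radius [cm]
print(sigma_gg(s)/sigma_T, n_ph*sigma_gg(s)*R_c)  # ~0.2, tau_gg >> 1
\end{verbatim}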
Figure~\ref{fig:tau_gg} shows the internal gamma-ray optical depth in the core region for various X-ray luminosities. The core region is expected to be optically thick against gamma-ray photons above 10--100~MeV depending on disk luminosities. Such high optical thicknesses against pair production in \ac{agn} coronae are well known
\citep[e.g.,][]{1971MNRAS.152...21B,Done1989,Fabian2015} based on the compactness parameter argument \citep{Guilbert1983}.
\section{Timescales}
\label{sec:timescales}
Given the observed properties of AGN core regions, we can estimate the various timescales of high energy particles in the coronae. Figure~\ref{fig:time_electron} shows the cooling rates of electrons in the coronae for different energy-loss processes, together with the acceleration rate and the free-fall timescale following \S~\ref{sec:acceleration} and parameters presented in \S~\ref{sec:property}. We set $\eta_g=30$ in the figure, which reproduces the IceCube neutrino background fluxes as discussed later in \S~\ref{sec:background}. Each panel corresponds to 2-10~keV X-ray luminosity of $10^{42}$, $10^{44}$, $10^{46}~{\rm erg\ s^{-1}}$.
Due to the intense broadband radiation field, the cooling is dominated by Compton cooling. However, at higher energies, the main cooling channel is replaced by synchrotron cooling because of the \ac{kn} effect. More luminous AGNs tend to have more efficient \ac{ic} cooling, as the target photon density increases. When we assume $\eta_g=30$, electron acceleration up to $\gamma_e\sim10^5$ ($\sim50$~GeV) is feasible in \ac{agn} coronae at various luminosities. Therefore, synchrotron radiation through coronal magnetic fields and gamma-ray emission by Comptonization of disk photons are naturally expected in AGN coronae.
\ac{alma} spectra of two nearby Seyferts, whose X-ray luminosities are about $10^{44}\ {\rm erg\ s^{-1}}$, extend their radio synchrotron power-law spectra at least up to 230~GHz, which corresponds to $\gamma_e\sim80$ given the magnetic field strength of $10$~G \citep{Inoue2018}\footnote{This frequency limit is due to the instrumental coverage of the \ac{alma} band-6 receiver. Therefore, the emission itself is likely to extend to higher frequencies, even though those emission signals would be buried in thermal dust emission.}. As shown in the top right panel (the case of $\log L_X=44$) in Fig.~\ref{fig:time_electron}, relativistic electrons with $\gamma_e\sim80$ seen by \ac{alma} can be easily accelerated in AGN coronae. Notably, such electrons can be accelerated even by a low efficiency acceleration process, e.g., with $\eta_g\sim10^6$. For this energy, Compton cooling is the dominant energy loss process. As the cooling timescale for $\gamma_e\sim80$ is about 100~s, flux variability in the radio synchrotron emission is expected; indeed, some Seyferts are already known to show flux variations on at least day timescales \citep{Baldi2015}. Further dense light curve observations may reveal shorter timescale variabilities.
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{./timescale_proton_ff.pdf}
\caption{Same as in Fig.~\ref{fig:time_electron}, but for protons. The dashed, dotted, double-dot-dashed, and thick solid curves correspond to the \ac{pp} cooling, \ac{pg} cooling, BH cooling, and total cooling timescales, respectively.}\label{fig:time_proton}
\end{center}
\end{figure*}
Similar to Fig.~\ref{fig:time_electron} for electrons, Fig.~\ref{fig:time_proton} shows the timescales for high energy protons for various luminosities. As in Fig.~\ref{fig:time_electron}, we set $\eta_g=30$. Since synchrotron and Compton cooling are not effective for protons in our case, we do not show these timescales in the figure.
It is evident that protons can be accelerated up to $\gamma_p\sim10^6$ ($\sim1$~PeV) in \ac{agn} coronae for various luminosities. The maximum attainable energy is controlled by different processes for AGNs of different luminosity due to the SED and size dependence. For low-luminosity Seyferts ($L_X<10^{44}\ {\rm erg\ s^{-1}}$), acceleration is limited by the dynamical timescale rather than radiative cooling, while it becomes limited by the Bethe-Heitler cooling for higher luminosity objects. As the luminosity increases, \ac{pg} and Bethe-Heitler cooling effects become more prominent. At higher luminosities, the Bethe-Heitler processes dominate the energy loss of high energy particles. Therefore, in cases of high luminosity objects, the resulting hadronic gamma-ray and neutrino spectra in the TeV band will show spectral suppression due to the Bethe-Heitler processes \citep[see e.g.,][for the case of gamma-ray bursts]{Murase2008}.
\section{Particle Spectrum}
\label{sec:particle_spectrum}
The steady state particle distributions $n=dN/d\gamma$ can be derived from the solution of the transport equation \citep{Ginzburg1964}
\begin{equation}
\frac{\partial}{\partial \gamma}\left(\dot{\gamma}_{\rm cool}n\right)+\frac{n}{t_{\rm fall}} = Q(\gamma),
\end{equation}
where $\dot{\gamma}_{\rm cool}$ is the total cooling rate and $Q(\gamma)$ is the injection function, which phenomenologically describes some acceleration process, e.g., \ac{dsa}. The injection function for non-thermal protons and electrons is set as $Q(\gamma) = Q_0\gamma^{-p_{\rm inj}}\exp(-\gamma/\gamma_{\rm max})$. Here, $\gamma_{\rm max}$ is the maximum Lorentz factor determined by balancing the acceleration and cooling time scales (Figs.~\ref{fig:time_electron} and \ref{fig:time_proton}). The corresponding solution is
\begin{equation}
\label{eq:electron_spectrum}
n=\frac{1}{\dot{\gamma}_{\rm cool}}\int\limits_\gamma^{\infty}Q(\gamma')e^{-T(\gamma,\gamma')} d\gamma',
\end{equation}
where
\begin{equation}
T(\gamma_1, \gamma_2) = \frac{1}{t_{\rm fall}}\int\limits_{\gamma_1}^{\gamma_2}\frac{d\gamma}{\dot{\gamma}_{\rm cool}}
\end{equation}
By solving Eq.~\ref{eq:electron_spectrum}, we obtain the steady-state spectrum of the non-thermal particles.
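A minimal numerical sketch of this solution, in schematic units with a toy cooling rate and an assumed free-fall time (not the calculation used for the figures), reads:
\begin{verbatim}
# Minimal sketch: steady-state spectrum n(gamma) from the solution above for
# a power-law injection with exponential cutoff and a toy cooling rate.
import numpy as np
g      = np.logspace(0, 6, 400)            # Lorentz-factor grid
Q      = g**-2.0 * np.exp(-g/1e5)          # injection, p_inj = 2, gamma_max = 1e5
gdot   = 1e-8 * g**2                       # toy Compton/synchrotron cooling rate
t_fall = 2.5e5                             # assumed free-fall time [s]
inv    = 1.0 / gdot
T      = np.concatenate(([0.0], np.cumsum(0.5*(inv[1:]+inv[:-1])*np.diff(g))))
T     /= t_fall                            # T(gamma_min, gamma) on the grid
n = np.zeros_like(g)
for i in range(len(g)):
    f = Q[i:] * np.exp(-(T[i:] - T[i]))    # integrand of the outer integral
    n[i] = np.sum(0.5*(f[1:]+f[:-1])*np.diff(g[i:])) / gdot[i]
print(n[100], n[300])
\end{verbatim}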
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{./particle_spectrum_electron.pdf}
\caption{The steady-state electron spectral distribution in \ac{agn} coronae. Solid curve corresponds to the model with $p_{\rm inj}=2.0$. We set $\mbh=10^8M_\odot$, $r_c = 40$, $B=10$~G, $kT_e=100$~keV, $\taut=1.1$, and $\eta_g=30$. Dashed curve corresponds to the observationally determined electron distribution for IC~4329A \citep{Inoue2018}. The shaded region shows the Lorentz factors responsible for the observed radio spectrum.}\label{fig:electron_spectrum}
\end{center}
\end{figure}
Fig.~\ref{fig:electron_spectrum} shows the steady-state non-thermal electron spectrum obtained for the injection spectral index of $p_{\rm inj}=2.0$ together with the observationally determined electron spectral distribution for IC~4329A \citep{Inoue2018}. \ac{alma} observed non-thermal synchrotron radiation between 90.5~GHz and 231~GHz which corresponds to the electron Lorentz factors between 50 and 80, respectively. The corresponding region is shown as the shaded region in the Fig.~\ref{fig:electron_spectrum}.
For the calculation of the steady-state spectrum, we set $\mbh=10^8M_\odot$, $r_c = 40$, $B=10$~G, $kT_e=100$~keV, $\taut=1.1$, and $\eta_g=30$. The synthetic electron distribution obtained for $p_{\rm inj}=2.0$ nicely reproduces the observationally determined electron spectrum in the energy range constrained by the observations. This injection index is naturally expected in a simple \ac{dsa} scenario for a strong shock.
The resulting particle spectrum at $\gamma_e>10^4$ becomes softer than the observationally determined index at $50\lesssim\gamma_e\lesssim80$. This is because of the influence of the cutoff imposed by the particle cooling. Therefore, if we consider the high energy synchrotron or \ac{ic} spectral shapes, the cooling effects should be taken into account accurately. Even though the electron spectrum extends down to lower energies, it is hard to see the corresponding synchrotron emission due to the synchrotron self-absorption effect \citep{Inoue2014}.
The calculated electron spectrum is renormalized to agree with the observationally determined spectrum, which is achieved if the non-thermal electrons contain a fraction $f_{\rm nth}=0.03$ of the energy in thermal leptons. We note that, in order to define the energy content in the non-thermal particles, we formally integrate above $\gamma_e=1$ in this study. We keep this fraction of non-thermal electron energy fixed in the calculations below for all Seyferts.
The energy fraction of non-thermal electrons was fixed to $\xi_{\rm nth}=0.04$ in \citet{Inoue2018}. $\xi_{\rm nth}$ is defined beyond the break electron Lorentz factor, while $f_{\rm nth}$ is defined above $\gamma_e=1$. That amount of non-thermal electrons overproduces the MeV background flux given the measured electron spectral index (see \S~\ref{sec:background}). To be consistent with the observed cosmic MeV gamma-ray background flux, we set $\xi_{\rm nth}=0.015$ in this work, which corresponds to $f_{\rm nth}=0.03$. The best-fit parameters obtained with this fraction for the radio spectrum of IC~4329A are $p=2.9\pm0.9$, $B=11.4\pm5.6$~G, and $r_c = 42.7\pm7.8$, which are very similar to those obtained for the case of $\xi_{\rm nth}=0.04$. We adopt these parameters for the observationally determined electron distribution in Fig.~\ref{fig:electron_spectrum}. Fitting results for the other parameters were also the same as those with $\xi_{\rm nth}=0.04$.
Here, the total shock power $\Psh$ can be estimated as
\begin{eqnarray}
\Psh&=&4\pi R_c^2 n_pm_p v^3_{\rm sh}/2\\ \nonumber
&\simeq&2.2\times10^{45}\left(\frac{\taut}{1.1}\right)\left(\frac{r_c}{40}\right)^{-1/2}\left(\frac{\mbh}{10^8M_\odot}\right)\ {\rm erg\ s^{-1}}.
\end{eqnarray}
For objects with $L_{\rm X}=10^{44}~{\rm erg\ s^{-1}}$, $f_{\rm nth}=0.03$ corresponds to $\sim5$\% of the shock power being injected into the acceleration of electrons. This high value implies that if \ac{dsa} is responsible for particle acceleration in \ac{agn} coronae, then the processes regulating the injection of electrons into \ac{dsa} are very efficient. For example, in the case of \ac{dsa} in supernova remnants, non-thermal electrons obtain only $\sim1$\% of the energy transferred to non-thermal protons \citep{Ackermann2013}. Detailed consideration of the reasons for this unusually high efficiency of electron acceleration is beyond the scope of this paper; however, we note that a significant presence of positrons may affect the ratio \citep[see, e.g.,][]{2015PhRvL.114h5003P}. Given these uncertainties, we assume the same energy injection rate for protons as for electrons. This power appears to be sufficient to explain the observed IceCube neutrino fluxes.
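For reference, the shock power quoted above follows directly from the fiducial coronal parameters; a minimal sketch (illustrative only) is:
\begin{verbatim}
# Minimal sketch: shock power for tau_T = 1.1, r_c = 40, M_BH = 1e8 Msun,
# with v_sh set to the free-fall velocity at R_c.
import numpy as np
c, m_p = 3e10, 1.673e-24
R_s  = 2 * 6.674e-8 * (1e8 * 2e33) / c**2
R_c  = 40 * R_s
n_p  = 1.1 / (6.652e-25 * R_c)
v_sh = c / np.sqrt(40.0)
P_sh = 4 * np.pi * R_c**2 * n_p * m_p * v_sh**3 / 2
print(P_sh)                        # ~2e45 erg/s
\end{verbatim}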
For the other object, NGC~985, the observed electron spectral index is $2.11\pm0.28$ \citep{Inoue2018}, which is hard considering the radiative cooling effect.
Cascade components would have such a hard spectrum below the threshold energy \citep[see, e.g.,][]{2003APh....19..525A}. In addition, due to the quality of the data at low frequencies, we could not precisely determine the other components, such as free-free emission and synchrotron emission from star formation activity, and synchrotron emission from the jet. Those uncertainties may have resulted in a less reliable measurement of the coronal emission spectral slope. Further observations are required to determine the radio spectral properties of NGC~985 precisely.
\begin{figure*}
\begin{center}
\includegraphics[width=8.9cm]{./NGC4151_like_eta_g_30.pdf}
\includegraphics[width=8.9cm]{./IC4329A_like_eta_g_30.pdf}
\caption{{\it Left}: Gamma-ray and neutrino spectrum per flavour from an \ac{agn} corona with $p_{\rm inj}=2.0$ and $\eta_g=30$. We set a 2-10~keV luminosity of $10^{43}~{\rm erg\ s^{-1}}$ at a distance of 14~Mpc, which roughly corresponds to NGC~4151. We renormalize the overall fluxes to match the {\it Swift}/BAT flux of NGC~4151 at 14-195~keV \citep{Oh2018}. The thick black solid and thick dotted curves show gamma rays from \ac{ic} interactions and \ac{pp}+\ac{pg} interactions including internal and EBL attenuation effects. Each thin curve shows the spectrum before the attenuation. The black dashed curve shows the \ac{ic} spectrum considering only thermal electrons, in which the effect of reflection is taken into account. The blue dot-dashed, double-dot-dashed, and solid curves show the neutrino contribution per flavour from \ac{pp} interactions, \ac{pg} interactions, and the sum of the two, respectively. The non-thermal electrons in coronae are assumed to carry 3\% of the total lepton energies. We assume the injection powers in electrons and protons are the same. For comparison, we overplot the sensitivity curves of {\it COSI-X} (300~days), {\it e-ASTROGAM} \citep[3~yrs;][]{DeAngelis2017}, {\it GRAMS} \citep[35~days;][]{Aramaki2019}, {\it GRAMS} \citep[3~yrs;][]{Aramaki2019}, and {\it Fermi}/LAT (10~yrs). We also plot the sensitivities of IceCube and IceCube-Gen2 at $\delta=30^\circ$ \citep{vanSanten2017}. {\it Right:} The same as the {\it Left} panel, but with a 2-10~keV luminosity of $10^{44}~{\rm erg\ s^{-1}}$ at a distance of 69~Mpc, which roughly corresponds to IC~4329A. We renormalize the overall fluxes to match the {\it Swift}/BAT flux of IC~4329A at 14-195~keV \citep{Oh2018}. For the IceCube sensitivity, we show that at $\delta=-30^\circ$.}\label{fig:SED_gamma_nu}
\end{center}
\end{figure*}
\section{Gamma Rays and Neutrinos from AGN Coronae}
\label{sec:g_nu_AGN}
Accelerated electrons and protons in AGN coronae generate gamma-ray and neutrino emission through \ac{ic} scattering, $pp$ interactions, and $p\gamma$ interactions. Adopting a steady-state particle spectrum, we calculate the resulting gamma-ray and neutrino spectra from AGN coronae. We follow \citet{Blumenthal1970} for the gamma-ray emission due to the \ac{ic} scattering by non-thermal electrons. We calculate the gamma-ray and neutrino emission induced by hadronic interactions following \citet{Kelner2006} for \ac{pp} interactions and \citet{Kelner2008} for \ac{pg} interactions. For simplicity, we do not take into account \ac{ic} scattered emission by secondary electrons and positrons. For the thermal Comptonization spectra, we adopt the AGN SED shown in Fig.~\ref{fig:AGN_SED}, which takes into account reflection components but does not account for attenuation by the torus. The torus attenuation is mainly relevant at $\lesssim30$~keV, which is below the range of our interest.
Figure~\ref{fig:SED_gamma_nu} shows the resulting gamma-ray and neutrino spectra for two cases. The neutrino flux is shown per flavour. The left panel of the figure shows the case with a 2-10~keV luminosity of $10^{43}~{\rm erg\ s^{-1}}$ at a distance of 14~Mpc, while the right panel shows the case with a luminosity of $10^{44}~{\rm erg\ s^{-1}}$ at a distance of 69~Mpc. The former and the latter roughly correspond to NGC~4151 and IC~4329A, respectively. NGC~4151 is the brightest Seyfert in the X-ray sky \citep{Oh2018}. For comparison, the overall fluxes in both panels are renormalized to match the {\it Swift}/BAT fluxes of NGC~4151 and IC~4329A, respectively, at 14-195~keV \citep{Oh2018}. We note that we do not calculate the detailed X-ray spectra of each object, which is beyond the scope of this paper.
We set the injection spectral index of $p_{\rm inj}=2.0$ and the gyrofactor of $\eta_g=30$ for both electrons and protons (see \S~\ref{sec:particle_spectrum}). We also set the same injection power into protons and electrons as described in \S~\ref{sec:particle_spectrum}. The target photon density for \ac{ic} scatterings and \ac{pg} interactions is defined as $U_{\rm ph}(\epsilon)$ (see \S~\ref{sec:AGN_SED}). Since we assume a uniform spherical source, gamma-ray photons are attenuated by the internal photon field by a factor of $3u_{\rm int}(\tau_{\rm int})/\tau_{\rm int}$, where $u_{\rm int}(\tau)=1/2 + \exp(-\tau)/\tau - [1 - \exp(-\tau)]/\tau^2$ \citep[see Sec. 7.8 in][]{Dermer2009} and $\tau_{\rm int}$ is the internal gamma-ray optical depth (see \S~\ref{subsec:gg_int}). Gamma rays are also attenuated by the \ac{ebl} during the propagation in intergalactic space. We adopt \citet{Inoue2013} for the \ac{ebl} attenuation.
For comparison, we also show the expected sensitivity curves of planned MeV missions: {\it COSI-X} (300~days)\footnote{{\it COSI} collaboration website (The Compton Spectrometer and Imager): \url{http://cosi.ssl.berkeley.edu/}}, {\it e-ASTROGAM} \citep[3~yrs,][]{DeAngelis2017}\footnote{{\it e-ASTROGAM} collaboration website (enhanced ASTROGAM): \url{http://eastrogam.iaps.inaf.it/}}, {\it GRAMS} \citep[35~days,][]{Aramaki2019}, and {\it GRAMS} \citep[3~yrs,][]{Aramaki2019}. The 10-yr sensitivity of {\it Fermi}/LAT\footnote{{\it Fermi}/LAT collaboration website (The Large Area Telescope): \url{http://www.slac.stanford.edu/exp/glast/groups/canda/lat_Performance.htm}} is also shown. We also plot the sensitivities of neutrino detectors: IceCube\footnote{IceCube collaboration website: \url{https://icecube.wisc.edu/}} and IceCube-Gen2 \citep{vanSanten2017}. For the left panel, we assume a declination of $\delta=30^\circ$, while $\delta=-30^\circ$ for the right panel.
Since the spectral index of electrons is $\sim3$ after radiative cooling, the resulting non-thermal gamma-ray spectrum is flat in $\nu F_\nu$ in the MeV band, which appears after the thermal cutoff. Given the cooling-limited maximum energy $\gamma_e\sim10^5$, the intrinsic \ac{ic} spectrum can extend up to $\sim100$~GeV. However, due to the strong internal gamma-ray attenuation effect, the spectra will have a cutoff around 100~MeV in both cases. In the sub-MeV band, the spectra show super-thermal tails due to the combination of thermal and non-thermal components and a spectral hardening at $\sim1$~MeV. These superthermal and flat spectral tails should be tested by future MeV gamma-ray missions. Balloon flights such as {\it GRAMS} \citep{Aramaki2019} and {\it SMILE} \citep{Takada2011,Komura2017}\footnote{{\it SMILE} collaboration website (The Sub-MeV gamma-ray Imaging Loaded-on-balloon Experiment): \url{http://www-cr.scphys.kyoto-u.ac.jp/research/MeV-gamma/wiki/wiki.cgi?page=Top_en}} may be able to catch this superthermal tail. Satellite-class MeV missions such as {\it e-ASTROGAM} \citep{DeAngelis2017}, {\it AMEGO}\footnote{{\it AMEGO} collaboration website (The All-sky Medium Energy Gamma-ray Observatory): \url{https://asd.gsfc.nasa.gov/amego/}}, and {\it GRAMS} \citep{Aramaki2019} will also be able to detect the non-thermal power-law tail. For the case of NGC~4151, {\it Fermi}/LAT may be able to see the signature with its 10-yr survey. However, the expected flux is almost at the sensitivity limit. Thus, further exposure may be needed for {\it Fermi}/LAT to detect the coronal emission.
The \ac{pp} and \ac{pg} production efficiency is given by the ratio between the dynamical timescale (Eq. \ref{eq:t_fall}) and the interaction timescales (Eqs. \ref{eq:t_pp} and \ref{eq:t_pg}). The \ac{pp} production efficiency is analytically given as
\begin{equation}
f_{pp}=\frac{t_{\rm fall}}{t_{pp}} \simeq 0.16 \left(\frac{\taut}{1.1}\right)\left(\frac{r_c}{40}\right)^{-1/2}.
\end{equation}
Gamma rays and neutrinos induced by hadronic interactions carry $1/3$ and $1/6$ of the power of the interacting hadrons, respectively. Therefore, the hadronic gamma-ray and neutrino luminosities are expected to be $\sim5$\% and $\sim3$\% of the intrinsic proton luminosity, respectively. Since we assume the same energy injection into electrons and protons and the coronal Thomson scattering optical depth is 1.1, the hadronic gamma-ray and neutrino fluxes before attenuation are $\sim5$\% and $\sim3$\% of the \ac{ic} gamma-ray fluxes.
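The percentages quoted here follow from simple bookkeeping with the analytic $f_{pp}$ above; a minimal sketch of the arithmetic is:
\begin{verbatim}
# Minimal sketch: pp production efficiency and the resulting hadronic
# gamma-ray and neutrino fractions of the injected proton power.
tau_T, r_c = 1.1, 40.0
f_pp = 0.16 * (tau_T/1.1) * (r_c/40.0)**-0.5
print(f_pp, f_pp/3.0, f_pp/6.0)  # ~0.16, ~0.05 (gamma rays), ~0.03 (neutrinos)
\end{verbatim}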
The \ac{pp} and \ac{pg} induced gamma rays are also mostly attenuated by the internal photon fields. Thus, we do not expect any $\gtrsim$GeV gamma-ray emission from Seyferts. Moreover, the intrinsic gamma-ray energy fluxes due to hadronic interactions are about a factor of 10 lower than those from primary electrons because of the radiative efficiency differences between protons and electrons. This implies that gamma rays produced by secondary pairs should not significantly alter the resulting spectra. Therefore, we can safely ignore the cascade contribution.
In contrast to gamma rays, neutrinos induced by hadronic interactions can escape from the system without any attenuation. Since we adopt the same $p_{\rm inj}=2$ for protons as for electrons, we expect a flat $\nu F_\nu$ spectrum for neutrinos, to which \ac{pp} makes the dominant contribution. At higher energies, especially in the case of IC~4329A, \ac{pp} and \ac{pg} spectra are suppressed due to the Bethe-Heitler cooling process. The exact position of the cutoff energy depends on the assumed $\eta_g$. Here, as described later, we set $\eta_g=30$ in order to be consistent with the IceCube background flux measurements. This gyrofactor results in a neutrino spectral cutoff around 100~TeV. Although it is difficult to see neutrino signals from individual Seyferts with the current generation of IceCube, it would be possible to see bright Seyferts in the northern hemisphere in the era of IceCube-Gen2 \citep[see also][for more general arguments]{Murase2016}. Therefore, even though Seyferts are faint in the GeV gamma-ray band, future MeV gamma-ray and TeV neutrino observations can test our scenario.
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{./CGNB_eta_g_30.pdf}
\caption{The cosmic gamma-ray and neutrino background spectrum from \ac{agn} coronae with $p_{\rm inj}=2.0$ and $\eta_g=30$ assuming that the injection powers in electrons and protons are the same. The thick black solid and thick dotted curves show the gamma-ray contributions of \ac{ic} interactions and \ac{pp}+\ac{pg} interactions, respectively, in which internal and EBL attenuation effects are taken into account. Corresponding thin curves show the spectra before the attenuation. The black dashed curve shows the \ac{ic} spectrum considering only thermal electrons. The blue dot-dashed, double-dot-dashed, and solid curves show the neutrino contributions per flavour produced via \ac{pp} interactions, \ac{pg} interactions, and the sum of the two, respectively. The circle and square data points correspond to the total cosmic gamma-ray background spectrum measured by {\it Fermi}/LAT \citep{Ackermann2015} and the cosmic neutrino background spectrum measured by IceCube \citep{Aartsen2015}, respectively. The cosmic X-ray and MeV gamma-ray background spectrum data of {\it HEAO}-1 A2 \citep{gru99}, {\it INTEGRAL} \citep{chu07}, {\it HEAO}-1 A4 \citep{kin97}, \textit{Swift}-BAT \citep{aje08}, {\it SMM} \citep{wat97}, Nagoya--Balloon \citep{fuk75}, and COMPTEL \citep{wei00} are also shown in the figure. }\label{fig:CGNB_etag_100}
\end{center}
\end{figure*}
\section{Cosmic Gamma-ray and Neutrino Background Fluxes From High Energy Particles in AGN Coronae}
\label{sec:background}
In this section, we calculate the cosmic gamma-ray and neutrino background spectra from AGN coronae. For the cosmological evolution of \acp{agn}, we follow \citet{Ueda2014}, in which the evolutionary functions are defined in terms of the intrinsic 2--10~keV X-ray luminosity. We briefly review their formalism here.
Based on the luminosity-dependent density evolution model, the \ac{agn} X-ray luminosity function at a given luminosity $L_X$ and a given redshift $z$ is defined as
\begin{equation}
\frac{ d \Phi_{\rm X} (L_{\rm X}, z)}{ d{\rm log} L_{\rm X}}
= \frac{ d \Phi_{\rm X} (L_{\rm X}, 0)}{ d{\rm log} L_{\rm X}} e(z, L_{\rm X}),
\end{equation}
where ${d \Phi_{\rm X} (L_{\rm X}, 0)}/{ d{\rm log} L_{\rm X}}$ is the luminosity function in the local universe defined as
\begin{equation}
\frac{d \Phi_{\rm X} (L_{\rm X}, z=0)}{d{\rm log} L_{\rm X}}
= A [(L_{\rm X}/L_{*})^{\gamma_1} + (L_{\rm X}/L_{*})^{\gamma_2}]^{-1},
\end{equation}
where $A$ is the normalization and $L_{*}$ is the break luminosity. $e(z, L_{\rm X})$ is the evolution factor represented as
\begin{eqnarray}
&&e(z, L_{\rm X} ) = \\ \nonumber
&& \left\{ \begin{array}{ll}
(1 + z)^{p1} & [z \le z_{c1}(L_{\rm X})], \\
(1 + z_{c1})^{p1}
\left(\frac{ 1 + z}{ 1 + z_{c1}}\right)^{p2} & [z_{c1}(L_{\rm X}) < z \le z_{c2}], \\
(1 + z_{c1})^{p1} \left(\frac{ 1 + z_{c2}}{ 1 + z_{c1}}\right)^{p2} \left(\frac{ 1 + z}{ 1 + z_{c2}}\right)^{p3} & [z > z_{c2}]. \\
\end{array} \right.
\end{eqnarray}
Here the luminosity dependence for the $p1$ parameter is considered as
\begin{equation}
p1(L_{\rm X})=p1^* + \beta_1 ({\rm log} L_{\rm X} - {\rm log} L_{\rm p}),
\end{equation}
where we set ${\rm log} L_{\rm p} = 44$. Both cutoff redshifts are given by power law functions of
$L_{\rm X}$ as
\begin{equation}
z_{\rm c1}(L_{\rm X}) = \left\{ \begin{array}{ll}
z_{\rm c1}^* (L_{\rm X}/L_{\rm a1})^{\alpha1} & [L_{\rm X} \le L_{\rm a1}], \\
z_{\rm c1}^* & [L_{\rm X} > L_{\rm a1}], \\
\end{array} \right.
\end{equation}
and
\begin{equation}
z_{\rm c2}(L_{\rm X}) = \left\{ \begin{array}{ll}
z_{\rm c2}^* (L_{\rm X}/L_{\rm a2})^{\alpha2} & [L_{\rm X} \le L_{\rm a2}], \\
z_{\rm c2}^* & [L_{\rm X} > L_{\rm a2}]. \\
\end{array} \right.
\end{equation}
The parameters are summarized in Table~4 of \citet{Ueda2014}. There is also a substantial fraction of Compton-thick AGNs in the universe \citep[e.g.,][]{Ueda2003,Ricci2015}. In order to take this population into account, we multiply the normalization factor by a factor of 1.5 \citep[see][for details]{Ueda2014}.
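For concreteness, the evolution factor can be written as a short function; the numerical parameters in the sketch below are placeholders rather than the \citet{Ueda2014} best-fit values (see their Table~4).
\begin{verbatim}
# Minimal sketch of the luminosity-dependent density evolution factor e(z, L_X);
# parameter values are placeholders, not the Ueda et al. (2014) best fit.
def zcut(zstar, logLa, alpha, logLx):
    return zstar * 10**(alpha*(logLx - logLa)) if logLx <= logLa else zstar
def evo(z, logLx, p1s=4.0, beta1=0.5, p2=-1.5, p3=-6.0, zc1s=1.9, logLa1=44.6,
        a1=0.3, zc2s=3.0, logLa2=44.0, a2=-0.1, logLp=44.0):
    p1  = p1s + beta1*(logLx - logLp)
    zc1 = zcut(zc1s, logLa1, a1, logLx)
    zc2 = zcut(zc2s, logLa2, a2, logLx)
    if z <= zc1:
        return (1+z)**p1
    if z <= zc2:
        return (1+zc1)**p1 * ((1+z)/(1+zc1))**p2
    return (1+zc1)**p1 * ((1+zc2)/(1+zc1))**p2 * ((1+z)/(1+zc2))**p3
print(evo(0.5, 44.0), evo(2.0, 44.0), evo(4.0, 44.0))
\end{verbatim}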
The cosmic gamma-ray background fluxes are calculated as
\begin{eqnarray}
\nonumber
E^2\frac{dN}{dE} &=& \frac{c}{4\pi}\int\limits_{0.002}^5dz\int\limits_{41}^{47}d\log L_{\rm X} \left|\frac{dt}{dz}\right| \frac{ d \Phi_{\rm X} (L_{\rm X}, z)}{ d{\rm log} L_{\rm X}}\\ \nonumber
&\times& \frac{L_\gamma(E', L_{\rm X})}{1+z}\frac{3u_{\rm int}(\tau_{\rm int}[E',\log L_{\rm X}])}{\tau_{\rm int}(E',\log L_{\rm X})}\\
&\times&\exp(-\tau\EBL [E, z]),
\end{eqnarray}
where $E'=(1+z)E$ and $L_\gamma(E, L_{\rm X})$ is the gamma-ray luminosity at energy $E$ for a given X-ray luminosity of $L_{\rm X}$. The redshift and luminosity ranges are selected to be the same as in \citet{Ueda2014}. $\tau_{\rm int}$ and $\tau\EBL$ are the gamma-ray optical depths due to the internal photon field and the \ac{ebl}, respectively. We do not consider the cascade gamma-ray photons \citep[e.g.,][]{Inoue2012} because the gamma-ray energy fluxes due to hadronic interactions are already subdominant compared to those from primary electrons.
The neutrino background fluxes can also be calculated in the same manner, ignoring the gamma-ray attenuation terms and replacing $L_\gamma(E, L_{\rm X})$ with $L_\nu(E, L_{\rm X})$, the neutrino luminosity at an energy of $E$ for a given X-ray luminosity of $L_{\rm X}$.
Figure~\ref{fig:CGNB_etag_100} shows the cosmic X-ray/gamma-ray and neutrino background spectra from \ac{agn} coronae assuming the case of $p_{\rm inj}=2.0$ and $\eta_g=30$. We also plot the observed background spectrum data from {\it HEAO}-1 A2 \citep{gru99}, {\it INTEGRAL} \citep{chu07}, {\it HEAO}-1 A4 \citep{kin97}, \textit{Swift}-BAT \citep{aje08}, {\it SMM} \citep{wat97}, Nagoya--Balloon \citep{fuk75}, COMPTEL \citep{wei00}, {\it Fermi}-LAT \citep{Ackermann2015}, and IceCube \citep{Aartsen2015}.
Figure~\ref{fig:CGB_MeV} shows a zoom-in on the cosmic MeV gamma-ray background spectrum of Figure~\ref{fig:CGNB_etag_100}. By setting $f_{\rm nth}=0.03$, the gamma-ray fluxes from \ac{agn} coronae due to \ac{ic} scattering by thermal and non-thermal electrons can nicely explain the observed cosmic MeV gamma-ray background radiation as an extension of the cosmic X-ray background radiation, which is known to be explained by Seyferts \citep{Ueda2014}. Since the spectral index of non-thermal electrons in the coronae is $\sim3$, the resulting MeV gamma-ray background spectrum becomes flat in $E^2dN/dE$ (see Fig.~\ref{fig:CGB_MeV}). The cosmic X-ray background spectrum from Seyferts has a spectral cutoff above $\sim300$~keV because of the thermal electron temperature of $\sim100$~keV \citep{Ueda2014}. By summing up the thermal and non-thermal components, a superthermal tail appears in the sub-MeV band, as observed \citep[e.g.,][]{fuk75,kin97,wat97}. Since the dominant \ac{ic} contributor switches from thermal to non-thermal electrons at around $1$~MeV, the MeV background spectrum may have a spectral hardening feature at $\sim1$~MeV. In the figure, we set $\eta_g=30$. The result does not change significantly as long as $\eta_g<1000$. If $\eta_g>1000$, we may require a lower $f_{\rm nth}$.
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{./CGB_MeV_eta_g_30.pdf}
\caption{Same as Figure~\ref{fig:CGNB_etag_100}, but enlarging the cosmic MeV gamma-ray background spectrum from 0.03~MeV to 100~MeV. The thick black solid curve shows the total (thermal + non-thermal) contribution of \ac{ic} interactions, where internal and EBL attenuation effects are taken into account. The thin curve shows the spectrum before the attenuation. The dashed and dotted curves show the contributions from thermal electrons and non-thermal electrons, respectively. The contribution of reflection is included in the thermal contribution. The cosmic X-ray and MeV gamma-ray background spectrum data of {\it HEAO}-1 A2 \citep{gru99}, {\it INTEGRAL} \citep{chu07}, {\it HEAO}-1 A4 \citep{kin97}, \textit{Swift}-BAT \citep{aje08}, {\it SMM} \citep{wat97}, Nagoya--Balloon \citep{fuk75}, and COMPTEL \citep{wei00} are also shown in the figure.}\label{fig:CGB_MeV}
\end{center}
\end{figure}
Due to the internal gamma-ray attenuation effect, these non-thermal gamma rays cannot contribute to the emission above GeV. For the same reason, most hadronic gamma-ray photons are attenuated by internal photon fields, resulting in the generation of multiple secondary particles.
Since the calculation of those populations is beyond the scope of this paper, we ignore them in our estimate. Moreover, as we describe above, the intrinsic hadronic fluxes are already an order of magnitude below the leptonic fluxes. Thus, pairs induced by hadronic cascades will not significantly change our results.
Here, \ac{ic} emission due to non-thermal electrons also contributes in the X-ray band. Its contribution is about 5\% of the observed cosmic X-ray background flux at 30~keV, which may reduce the required number of the Compton-thick population of AGNs.
The model curve at $\sim10$~keV slightly overproduces the measured background spectrum. This is because we do not take into account X-ray attenuation by torus. However, the treatment of those soft X-ray photons does not affect our results at all.
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{./CNB_eta_g_all.pdf}
\caption{The cosmic neutrino background spectrum per flavour from \ac{agn} coronae. The dashed, dotted, solid, dot-dashed, and double-dot-dashed curves show the \ac{pp} + \ac{pg} contribution with $\eta_g=$1 (Bohm limit), 10, 30, $10^2$, and $10^3$, respectively. The square data points correspond to the cosmic neutrino background spectrum measured by IceCube \citep{Aartsen2015}. }\label{fig:CNB_All}
\end{center}
\end{figure}
For neutrinos, the combination of \ac{pp} and \ac{pg} interactions can nicely reproduce the IceCube fluxes below 100--300~TeV by assuming $\eta_g=30$ and about 5\% of the shock power going into proton acceleration, the same as for electrons. \ac{pp} interactions dominate the flux at $\lesssim10$~TeV, while \ac{pg} interactions prevail above this energy. Because of the target photon field SED, \ac{pg} is subdominant in the GeV-TeV band. If we inject more power into protons, we inevitably overproduce the IceCube background fluxes. As $\gtrsim$~GeV gamma rays are internally attenuated, the emission from AGN coronae will not be seen in GeV gamma rays, even though they can produce the IceCube neutrino fluxes. Such hidden cosmic-ray accelerators are suggested as a possible origin of the IceCube neutrinos \citep[see][for a general argument]{Murase2015}.
Figure~\ref{fig:CNB_All} shows the cosmic neutrino background spectra from \ac{agn} cores with various gyrofactors ranging from 1 (Bohm limit) to $10^3$. It is clear that if $\eta_g\ll30$, the resulting neutrino fluxes overproduce the measured fluxes. On the contrary, if $\eta_g\gg30$, AGN coronae cannot significantly contribute to the observed neutrino background fluxes. Thus, in order to explain the IceCube neutrino background fluxes by AGN cores, $\eta_g\sim30$ is required. However, we note that these estimates are based on the assumed energy injection fraction into protons. Recent particle-in-cell simulations of proton-electron plasma considering radiatively inefficient accretion flows (RIAFs) showed that protons will carry several times more energy than electrons \citep{Zhdankin2018}. If this is the case, a larger $\eta_g$ is favored.
\section{Discussion}
\label{sec:discussion}
\subsection{Comparison with Previous works on High Energy Neutrinos}
In the literature, it has been argued that high energy particles in the cores of \acp{agn} generate intense neutrino emission \citep[e.g.,][]{Eichler1979,Begelman1990,Stecker1992,Alvarez-Muniz2004}. These originally predicted fluxes have been ruled out by high energy neutrino observations \citep{IceCube2005}. However, recent studies have revisited the estimated fluxes and found that \ac{agn} core models can account for the whole measured fluxes \citep{Stecker2013,Kalashev2015}. In this section, we compare our results with those recent studies \citep{Stecker2013,Kalashev2015}.
The model suggested by \citet{Stecker2013} is very similar to the originally proposed one \citep{Stecker1992}, but the background flux is assumed to be lower by a factor of 20. The original model is motivated by the models explaining \ac{agn} X-ray spectra by the electromagnetic cascade emission of secondary particles \citep{Zdziarski1986,Kazanas1986}, which is not the case based on current X-ray and gamma-ray observational results. The shock radius and the magnetic field strength were assumed to be $10R_s$ and $10^3$~G in the model by \citet{Stecker1992}.
The model in \citet{Kalashev2015} is an extension of \citet{Stecker1992}, taking into account the radial emission profile of the standard accretion disk for the consideration of the \ac{pg} cooling processes. In our modeling, we do not take into account such an anisotropic radiation field. However, given the observationally determined corona size, the dominant photon targets are likely to be generated in the inner region of the coronae. The particle spectra in \citet{Kalashev2015} are fixed to match the IceCube data.
Neutrino fluxes or cosmic-ray spectra are fixed to match the latest IceCube data in \citet{Stecker2013,Kalashev2015}. In this work, we take a more physical approach. The corona plasma density, corona size, and magnetic field strength are determined from observations \citep{Inoue2018} in our work. For example, we set $R_c=40R_s$ and $B=10$~G based on ALMA observations \citep{Inoue2018}. With those parameters, we can follow the acceleration processes in coronae in the framework of \ac{dsa}. We find that AGN coronae can explain the IceCube neutrino background in the TeV band if the gyrofactor is $\eta_g=30$ and about 5\% of the shock energy goes into proton acceleration. We also predict that next generation MeV gamma-ray and neutrino experiments can test our model by observing nearby bright Seyferts such as NGC~4151 and IC~4329A.
\subsection{Plasma Condition in Coronae}
Considering the plasma density in the accreting coronae, high energy particles may have sufficient time to redistribute their kinetic energy through thermalization by elastic Coulomb (EC) collisions before the gas reaches the event horizon \citep{Takahara1985,Mahadevan1997}. In this section, we discuss thermalization timescales of electrons and protons in the AGN coronae.
First, the electron thermalization timescale in the non-relativistic regime is estimated to be
\citep{Spitzer1962,Stepney1983}
\begin{eqnarray}
&&t_{\rm \ec, ee}\simeq \frac{4\sqrt{\pi}}{n_e\sigmat c\ln \Lambda} \theta_e^{3/2}\\ \nonumber
&&\simeq 1.1\times10^3\left(\frac{\taut}{1.1}\right)^{-1}\left(\frac{r_c}{40}\right)\left(\frac{\mbh}{10^8M_\odot}\right)\left(\frac{kT_e}{100\ {\rm keV}}\right)^{3/2}\ [{\rm s}],
\end{eqnarray}
where $\ln \Lambda\approx20$ is the Coulomb logarithm. For relativistic electrons with Lorentz factors $\gamma_e \gg 1 + \theta_e$ the thermalization timescale due to interactions with the background plasma becomes \citep{Dermer1989}
\begin{eqnarray}
\nonumber
t_{\rm \ec, ee}(\gamma_e)&=&\frac{4}{3} \frac{K_2(\theta_e^{-1})\gamma_e^3}{n_e\sigmat c(\ln \Lambda+9/16-\ln \sqrt{2})} \\
&\times&\left|\int\limits_1^\infty d\gamma_e'\exp(-u_{\rm ee})[\theta_e(1+2u)-\gamma_e]\right|^{-1},
\end{eqnarray}
where $K_n$ is the modified Bessel function of order $n$, and parameter \mbox{$u_{\rm ee}=(\gamma_e/\gamma'_e+\gamma'_e/\gamma_e)/2\theta_e$}. This equation can be approximated as
\begin{eqnarray}
&& t_{\rm \ec, ee}(\gamma_e)\approx \\ \nonumber
&& \frac{2}{3} \frac{\gamma_e}{n_e\sigmat c(\ln \Lambda+9/16-\ln \sqrt{2})} \left|\frac{K_1(\theta_e^{-1})}{K_2(\theta_e^{-1})}-\frac{1}{\gamma_e} \right|^{-1}.
\end{eqnarray}
This is a good analytic approximation at $\theta_e\gtrsim0.3$ and $\gamma_e\gtrsim 2$ \citep{Dermer1989}.
Second, the proton-proton relaxation timescale in the non-relativistic regime is estimated to be \citep{Spitzer1962,Stepney1983}
\begin{eqnarray}
\label{eq:t_ecpp}
&&t_{\rm \ec, pp}\simeq \frac{4\sqrt{\pi}}{n_p\sigmat c\ln \Lambda} \left(\frac{m_p}{m_e}\right)^{2}\theta_p^{3/2}\\ \nonumber
&&\simeq 4.7\times10^4\left(\frac{\taut}{1.1}\right)^{-1}\left(\frac{r_c}{40}\right)\left(\frac{\mbh}{10^8M_\odot}\right)\left(\frac{kT_p}{100\ {\rm keV}}\right)^{3/2}\ [{\rm s}],
\end{eqnarray}
where $\theta_p\equiv kT_p/m_pc^2$ is the dimensionless proton temperature. At high kinetic energies, nuclear interaction becomes important \citep[see][for details]{Gould1982}. In the mildly relativistic case, the elastic proton-proton relaxation timescale approximately becomes \citep{Gould1982}
\begin{equation}
t_{\rm \ec, pp}\simeq \frac{4}{n_p\sigma_hc}\frac{\beta_p\gamma_p^2}{\gamma_p^2-1},
\end{equation}
where $\sigma_h\sim2.3\times10^{-26}\ {\rm cm}^2$. This approximation is valid at $70\ {\rm MeV}\lesssim(\gamma-1)m_pc^2\lesssim500$~MeV. Above 500~MeV, inelastic processes start to dominate.
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{./timescale_EC.pdf}
\caption{Electron and proton thermalization timescales in AGN coronae together with radiative cooling and dynamical timescales. The thick solid curve shows the free-fall timescale. The dashed, dotted, and dot-dashed curves correspond to the synchrotron cooling, \ac{ic} cooling, and $ee$ \ac{ec} thermalization timescales for electrons, respectively. The double-dot-dashed, triple-dot-dashed, and thin solid curves correspond to the $pp$ \ac{ec} thermalization, $pe$ \ac{ec} thermalization, and $pp$ inelastic interaction timescales for protons, respectively. We set $\log L_X=44$, $\taut=1.1$, $R_c=40R_s$, and $kT_e=kT_p=100$~keV.}\label{fig:time_ec}
\end{center}
\end{figure}
Lastly, the proton-electron thermalization timescale due to \ac{ec} collisions in the non-relativistic regime is estimated to be \citep{Spitzer1962,Stepney1983}
\begin{eqnarray}
&&t_{\rm \ec, ep}\simeq \frac{\sqrt{\pi/2}}{n_e\sigmat c\ln \Lambda} \left(\frac{m_p}{m_e}\right)(\theta_e+\theta_p)^{3/2}\\ \nonumber
&&\gtrsim 3.6\times10^5\left(\frac{\taut}{1.1}\right)^{-1}\left(\frac{r_c}{40}\right)\left(\frac{\mbh}{10^8M_\odot}\right)\left(\frac{kT_e}{100\ {\rm keV}}\right)^{3/2}\ [{\rm s}],
\end{eqnarray}
where we assume $\theta_p=\theta_e$. The temperature of a hot accretion flow can roughly reach the virial temperature $T_p\simeq G\mbh m_p/3kR\sim3\times10^{12}(R/R_s)^{-1}$~K. At such high temperatures, $t_{\rm \ec, ep}$ becomes longer. In the case of relativistic protons, the energy loss timescale through \ac{ec} interactions is given as \citep{Mannheim1994,Dermer1996}
\begin{equation}
t_{\rm \ec, ep}\simeq 1.2\times10^3 \frac{(3.8\theta_e^{3/2}+\beta_p^3)(\gamma_p-1)}{n_p\sigmat c\beta_p^2\ln\Lambda},
\end{equation}
where $\beta_p = \sqrt{1-1/\gamma_p^2}$. At $\gamma_p\gg1$ and $\theta_e\ll1$, the relativistic \ac{ec} scattering relaxation time can be approximated as
\begin{equation}
t_{\rm \ec, ep}\simeq 2.9\times10^8 \left(\frac{\taut}{1.1}\right)^{-1}\left(\frac{r_c}{40}\right)\left(\frac{\mbh}{10^8M_\odot}\right) \left(\frac{\gamma_p}{100}\right)\ [{\rm s}].
\end{equation}
Fig.~\ref{fig:time_ec} shows EC thermalization timescales for electrons and protons for the luminosity of $L_X=10^{44}~{\rm erg\ s^{-1}}$. Since EC thermalization is effective for low energy particles, the horizontal axis is shown in $\gamma\beta$.
Around $\gamma_e\beta_e\sim2$, $t_{\ec,ee}$ shows a sharp feature, which is related to the temperature of the background plasma, $kT_e=100$~keV. At this temperature, the electron distribution has a peak around $\sim3kT_e$, corresponding to $\gamma_e\beta_e\sim1.2$. Thus, around this energy, the mean energy transfer is small. We note that below this energy, electrons gain energy from the background plasma through elastic $ee$ scatterings rather than losing it \citep{Dermer1989}; however, this energy gain process is not considered in our work, since it is not relevant for our energy range of interest. As seen in Fig.~\ref{fig:time_ec}, the energy loss process of electrons is dominated by the Compton cooling at $\gamma_e\beta_e\gtrsim1$.
Following \citet{Gould1982}, we calculate the elastic $pp$ timescale in the mildly relativistic regime. Since it assumes that an incident proton has much higher kinetic energy than the background plasma, we combine the non-relativistic $t_{\ec,pp}$ (Eq.~\ref{eq:t_ecpp}) and that from \citet{Gould1982}. As discussed above, inelastic processes start to dominate at kinetic energies of $\gtrsim500$~MeV ($\gamma_p\beta_p\gtrsim1.2$). For comparison, we also show the inelastic $pp$ interaction timescale $t_{pp}$.
As the proton-electron Coulomb timescale ($t_{\ec,pe}$) is longer than $t_{\rm fall}$, protons and electrons may not be in thermal equilibrium in AGN coronae. The proton temperature of a hot accretion flow can roughly reach the virial temperature $T_p\simeq G\mbh m_p/3kR\sim3\times10^{12}(R/R_s)^{-1}$~K, which is $\gg T_e$. In addition, the existence of pairs in coronae can reduce $n_p$. Moreover, the shock-heated proton temperature becomes $kT_{p}\sim 3m_p v_{\rm sh}^2/16\sim 4(r_c/40)^{-1}~{\rm MeV}$. Those shock-heated protons and electrons also gain and lose their energies through these processes and would contribute as a thermal population in the coronae. These electrons are heated and cooled through EC proton-electron thermalization and Comptonization, respectively \citep[e.g.,][]{Katz2011,Murase2011}. The heating rate can be written as
\begin{equation}
-\frac{dT_p}{dt}=\frac{dT_e}{dt}=\frac{T_p}{t_{\ec,pe}}\simeq \frac{n_e\sigmat c\ln \Lambda}{\sqrt{\pi/2}} \left(\frac{m_e}{m_p}\right)T_p\theta_e^{-3/2},
\end{equation}
assuming $\theta_e \gg \theta_p$. The cooling rate through Comptonization is
\begin{equation}
\frac{dT_e}{dt}\approx-\frac{4}{3} \frac{\sigma_T U_{\rm ph,tot}T_e}{m_ec}
\end{equation}
By equating these two heating and cooling rates of thermal electrons, the shock-heated electron temperature is estimated to be
\begin{eqnarray}
kT_e&\simeq& k\left(\frac{3\ln \Lambda}{4\sqrt{\pi/2}}\frac{m_e}{m_p}\frac{n_e}{U_{\rm ph,tot}}T_p\right)^{2/5}\\ \nonumber
&\simeq&86\left(\frac{\taut}{1.1}\right)^{2/5}~[\rm keV],
\end{eqnarray}
where we assume $L_{\rm ph, bol}\propto M_{\rm BH}$. This temperature is close to the measured coronal temperature. Therefore, such a shock heating mechanism may be able to explain the currently observed coronal temperature. For understanding the detailed nature of thermal coronae, further studies including thermodynamical processes are required.
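A minimal numerical sketch of this balance, using the assumed fiducial density, photon energy density, and shock-heated proton temperature (not the calculation in the text), recovers a similar electron temperature:
\begin{verbatim}
# Minimal sketch: balance Coulomb heating of electrons by hot protons against
# Compton cooling (the two rates above) to estimate the equilibrium kT_e.
import numpy as np
sigma_T, c, mec2 = 6.652e-25, 3e10, 8.187e-7
lnL, mp_over_me  = 20.0, 1836.0
n_e  = 1.4e9                       # cm^-3 (tau_T ~ 1.1 at R_c)
U_ph = 4e3                         # erg/cm^3 (assumed photon energy density)
kT_p = 4.0 * 1.602e-6              # erg (~4 MeV shock-heated protons)
def net_rate(kT_e):                # heating minus cooling; decreases with kT_e
    theta_e = kT_e / mec2
    heat = n_e*sigma_T*c*lnL/np.sqrt(np.pi/2)/mp_over_me * kT_p * theta_e**-1.5
    cool = (4.0/3.0) * sigma_T * U_ph * c * kT_e / mec2
    return heat - cool
lo, hi = 1e-9, 1e-5                # bracket for kT_e [erg]
for _ in range(80):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if net_rate(mid) > 0 else (lo, mid)
print(np.sqrt(lo*hi) / 1.602e-9, "keV")   # ~90-100 keV
\end{verbatim}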
\subsection{Other Particle Acceleration Mechanisms}
\label{sec:other_acc}
In this paper, we consider \ac{dsa} as the fiducial acceleration mechanism. However, other acceleration mechanisms such as turbulent acceleration, magnetosphere acceleration, and magnetic reconnection can also operate in AGN coronae. We briefly discuss these processes here.
First, turbulent acceleration is considered for low-accretion rate objects such as low-luminosity \acp{agn} \citep[e.g.,][]{Kimura2015,Zhdankin2017,Zhdankin2018,Wong2019}. In this scenario, particles are accelerated stochastically by turbulence and magnetic reconnection in the accretion disk or corona. Recently, \citet{Zhdankin2018} investigated electron-ion plasma energization via turbulent dissipation in RIAFs using particle-in-cell simulations for ion temperatures $T_i$ in the range $m_ec^2\lesssim k_BT_i\lesssim m_pc^2$. Turbulence driven by MRIs generates power-law spectra for both species, and the indices depend on the initial ion temperature. The fractions of the kinetic energy in non-thermal ions and electrons are $\sim60$\% and $\sim6$\%, respectively, at $k_BT_i\sim m_ec^2$. The fraction in non-thermal electrons is close to the value required for the MeV background (see \S~\ref{sec:background}).
We briefly estimate stochastic acceleration in the AGN corona case. According to quasi-linear theory, the diffusion coefficient in momentum space is \citep[e.g.,][]{Dermer1996}
\begin{equation}
D_p\simeq (m_pc)^2 (ck_{\rm min})\left(\frac{\vA }{c}\right)^2\zeta(r_\lar k_{\rm min})^{q-2}\gamma^q,
\end{equation}
where $k_{\rm min}\sim R_c^{-1}$ is the minimum wave number of the turbulence spectrum (corresponding to the size of the corona), $\vA =B/\sqrt{4\pi m_p n_p}$ is the Alfv\'en speed, $r_\lar=m_pc^2/eB$ is the Larmor radius, and $\zeta=\delta B^2/B^2$ is the ratio of the turbulent to background magnetic field energy densities. Then, the acceleration timescale is estimated to be
\begin{equation}
t_{\STO}\simeq\frac{p^2}{D_p}\simeq\frac{1}{\zeta}\left(\frac{\vA }{c}\right)^{-2}\frac{R_c}{c}\left(\frac{r_\lar}{R_c}\right)^{2-q}\gamma^{2-q}
\end{equation}
Assuming a Kolmogorov spectrum for the turbulence ($q=5/3$) and $\zeta=1$, the timescale becomes
\begin{eqnarray}
\nonumber
t_{\STO}&\simeq& 3.1\times10^7\left(\frac{\taut}{1.1}\right)\left(\frac{r_c}{40}\right)^{-1/3}\left(\frac{\mbh}{10^8M_\odot}\right)^{-1/3}\\
&\times&\left(\frac{B}{10\ {\rm G}}\right)^{-7/3}\left(\frac{\gamma_p}{100}\right)^{1/3}\ [{\rm s}].
\end{eqnarray}
Thus, stochastic acceleration appears to be inefficient compared to the typical cooling rates. This is caused by the measured weak magnetic fields, which result in a small Alfv\'en speed. If the magnetic fields are amplified by MRIs, more efficient acceleration can be realized \citep[e.g.,][]{Zhdankin2018}\footnote{After we submitted our paper to the journal and arXiv, a similar study on AGN coronae by \citet{Murase2019} appeared on arXiv. Both studies are independent, and the main difference is the assumed particle acceleration process. In our paper, we consider \ac{dsa}, while \citet{Murase2019} consider stochastic acceleration motivated by recent numerical simulations \citep{Kimura2019}. However, as discussed in this section, stochastic acceleration may not work given the ALMA results indicating weak coronal magnetic fields.}.
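For reference, the numerical estimate quoted above can be reproduced with the following short Python sketch, using the fiducial values in the text; taking $n_p\simeq n_e$ (no pair loading) is an additional assumption.
\begin{verbatim}
import numpy as np

# Constants (cgs)
m_p, c, e_ch, sigma_T = 1.673e-24, 2.998e10, 4.803e-10, 6.652e-25

# Fiducial parameters from the text
tau_T, r_c, M8, B, gamma_p = 1.1, 40.0, 1.0, 10.0, 100.0
q, zeta = 5.0 / 3.0, 1.0            # Kolmogorov index, turbulence strength

R_c = r_c * 2.95e13 * M8            # coronal size [cm]
n_p = tau_T / (sigma_T * R_c)       # proton density, assuming n_p ~ n_e
v_A = B / np.sqrt(4.0 * np.pi * m_p * n_p)   # Alfven speed [cm s^-1]
r_L = m_p * c**2 / (e_ch * B)                # Larmor-radius normalization [cm]

t_sto = (1.0 / zeta) * (c / v_A)**2 * (R_c / c) \
        * (r_L / R_c)**(2.0 - q) * gamma_p**(2.0 - q)
print(f"v_A/c ~ {v_A / c:.1e},  t_STO ~ {t_sto:.1e} s")   # ~3e7 s
\end{verbatim}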
Second, magnetospheric acceleration can also operate in the vicinity of \acp{smbh} \citep[e.g.,][]{Beskin1992,Levinson2000,Neronov2007,Levinson2011,Rieger2011}. At low accretion rates, the injection of charges into the BH magnetosphere is not sufficient for a full screening of the electric field induced by the rotation of the compact object. The regions with unscreened electric field, so-called gaps, are able to accelerate charged particles effectively.
In order to have gaps, the maximum allowed accretion rate is \citep{Levinson2011,Aleksic2014,Aharonian2017}
\begin{equation}
\dot{m}<3\times10^{-4}\left(\frac{\mbh}{10^8M_\odot}\right)^{-1/7},
\end{equation}
where $\dot{m}$ is the accretion rate in Eddington units. Since we are considering the standard accretion disk regime $\dot{m}\gtrsim0.01$, particle acceleration in gaps will not operate in our case.
Lastly, magnetic reconnection can also accelerate particles \citep[see e.g.,][for reviews]{Hoshino2012}. Reconnection would naturally happen in coronae as they are magnetized, and radiative magnetic reconnection has been suggested as an origin of the X-ray emission seen in accreting black hole systems \citep{Beloborodov2017}. However, even in the case of solar flares, the particle acceleration mechanisms in magnetic reconnection are still uncertain \citep[e.g.,][]{Liu2008,Nishizuka2013}. Although a quantitative discussion is not easy here, the available energy injection power can be estimated as
\begin{eqnarray}
P_B &=& \frac{B^2R_c^2v_A}{2}\\ \nonumber
&\simeq &5.4\times10^{39}\left(\frac{\taut}{1.1}\right)^{-1/2}\left(\frac{r_c}{40}\right)^{5/2}\left(\frac{\mbh}{10^8M_\odot}\right)^{5/2}\\ \nonumber
&\times&\left(\frac{B}{10~{\rm G}}\right)^3\ [{\rm erg\ s^{-1}}].
\end{eqnarray}
This power is not sufficient to provide the non-thermal particle energies. For a more detailed estimate, we would need to consider the spatial distribution of the magnetic field; however, such information is not currently available.
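As a minimal sketch of this order-of-magnitude estimate (same fiducial parameters as above, again with $n_p\simeq n_e$ assumed):
\begin{verbatim}
import numpy as np

m_p, c, sigma_T = 1.673e-24, 2.998e10, 6.652e-25
tau_T, r_c, M8, B = 1.1, 40.0, 1.0, 10.0

R_c = r_c * 2.95e13 * M8                      # coronal size [cm]
n_p = tau_T / (sigma_T * R_c)                 # n_p ~ n_e assumed (no pair loading)
v_A = B / np.sqrt(4.0 * np.pi * m_p * n_p)    # Alfven speed [cm s^-1]

P_B = B**2 * R_c**2 * v_A / 2.0               # magnetic power, as in the expression above
print(f"P_B ~ {P_B:.1e} erg/s")               # ~ few x 1e39 erg/s
\end{verbatim}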
\subsection{Cosmic MeV Gamma-ray Background Radiation}
It is known that Seyferts generate the cosmic X-ray background radiation \citep{Ueda2014}. The cosmic gamma-ray background at 0.1--820~GeV is believed to be explained by three components: blazars \citep[e.g.,][]{Inoue2009,Ajello2015}, radio galaxies \citep{Inoue2011}, and star-forming galaxies \citep{Ackermann2012_SB}, even though the contributions of radio galaxies and star-forming galaxies are still uncertain due to the small number of gamma-ray detected samples. In contrast to the cosmic X-ray and GeV background radiation, the origin of the cosmic MeV gamma-ray background radiation remains unknown.
As a possible scenario, non-thermal \ac{ic} emission from coronae in Seyferts has been suggested \citep{Inoue2008}. The MeV tail extending from the X-ray background spectrum is generated by non-thermal electrons with a very soft spectral index \citep{Inoue2008}. However, those non-thermal electrons were included in an ad hoc way. In our work, we consider the particle acceleration and cooling processes given the latest observations; the tail is then due to the superposition of the thermal Comptonization cut-off spectrum and the $\gamma\gamma$-attenuated flat non-thermal \ac{ic} component. These two scenarios can be distinguished by observing individual objects in the radio and X-ray bands.
Not only Seyferts but also blazars are considered as a candidate for the origin of the MeV background \citep{Ajello2009}. In order to distinguish Seyferts from blazars, we need to resolve the MeV sky; however, this is not easy even with future MeV instruments \citep{Inoue2015}. It has been suggested that anisotropy measurements may distinguish these two scenarios \citep{Inoue2013_CXB}, because a blazar-dominated background should feature stronger Poisson fluctuations. Future MeV gamma-ray anisotropy observations will be important to understand the particle acceleration in coronae and the origin of the MeV gamma-ray background radiation.
\subsection{Gamma-ray Observations toward Seyferts}
Gamma rays from Seyfert galaxies have not been robustly detected yet \citep{Lin1993,Teng2011,Ackermann2012}. Possible signatures of gamma-ray emission above 0.1~GeV have been reported for ESO~323-G077 and NGC~6814 \citep{Ackermann2012}, whose X-ray luminosities are about $10^{43}\ {\rm erg\ s^{-1}}$. The implied gamma-ray to X-ray luminosity ratio $L_{\rm 0.1-10~GeV}/L_{\rm 14-195~keV}$ for these sources is about 0.1 \citep{Ackermann2012}, whereas our model estimates this ratio as $\sim0.01$. Therefore, coronal gamma-ray emission is most likely not able to account for the observed gamma-ray fluxes from those Seyfert galaxies.
Although gamma rays from other Seyferts have not been detected yet, {\it Fermi}/LAT has set upper limits on their gamma-ray fluxes \citep{Teng2011,Ackermann2012}. Based on the analysis of the first 2--3 years of data, $L_{\rm 0.1-10~GeV}/L_{\rm 14-195~keV}<0.1$ at the 95\% confidence level is obtained in most cases, which is consistent with our model estimate. The most stringent observational constraint is derived for NGC~4151, for which $L_{\rm 0.1-10~GeV}/L_{\rm 14-195~keV}<0.0025$, even though the limit can vary with the assumed spectral shape. According to our model, the current 10-yr survey data of {\it Fermi}/LAT may be able to detect NGC~4151 (Fig.~\ref{fig:AGN_SED}), even though the expected flux is almost at the sensitivity limit.
\subsection{Fraction of Non-thermal Electrons}
We set the energy fraction of non-thermal electrons in AGN coronae as $f_{\rm nth}=0.03$ because it nicely reproduces the observed MeV gamma-ray background radiation. As discussed in \citet{Inoue2018}, $f_{\rm nth}$, $B$, and $R_c$ are closely tied; current radio and X-ray data do not allow us to determine these three parameters simultaneously without decoupling the thermal and non-thermal components.
Observationally, $f_{\rm nth}$ is constrained to be $<0.3$ in order not to violate the X-ray data based on {\it NuSTAR} observations \citep{Fabian2017}. If $f_{\rm nth}$ is significantly lower, it becomes difficult for Seyferts to explain the MeV gamma-ray background radiation. Moreover, a much lower $f_{\rm nth}$ contradicts other observations, since it requires a larger $R_c$ from the radio spectral fitting. If we set $f_{\rm nth}=10^{-3}$ and $10^{-4}$, $R_c$ becomes $\sim70R_s$ and $\sim100R_s$, respectively. The size of coronae is also constrained to be of the order of $\sim10R_s$ by optical--X-ray spectral fitting studies \citep{Jin2012} and microlensing observations \citep{Morgan2012}. Therefore, $f_{\rm nth}$ cannot be much smaller than the adopted value.
\subsection{Nuclear Spallation in AGNs}
Given the \ac{alma} results, particle acceleration occurs in AGN coronae. As we demonstrated, high energy protons are easily accelerated in coronae. These high energy protons can also be traced by future high-resolution calorimeter spectroscopy in the X-ray band, such as with {\it XRISM} \citep{Tashiro2018} and {\it Athena} \citep{Nandra2013}\footnote{The Athena X-ray observatory website (\url{https://www.the-athena-x-ray-observatory.eu/}).}. As narrow line features are seen in AGN X-ray disk spectra, there are abundant metal elements in AGN cores. Accelerated protons interact with those nuclei and induce nuclear spallation. Nuclear spallation in AGN disks will result in the enhancement of emission lines from Mn, Cr, V, and Ti \citep{Gallo2019}. Such signatures will provide another test of our model.
\section{Conclusion}
\label{sec:conclusion}
Recently, \citet{Inoue2018} reported that the coronae of Seyferts are composed of both thermal and non-thermal electrons based on \ac{alma} observations, which implies that particle acceleration occurs in AGN coronae. In order to investigate the production mechanism of those high energy particles, we studied the particle acceleration process in AGN coronae, considering the \ac{dsa} process as an example. By taking into account the observationally determined coronal properties, such as temperature, density, size, and magnetic field strength, we found that standard \ac{dsa} processes can easily reproduce the observed non-thermal electrons in the coronae with an injection electron spectral index of $p_{\rm inj}=2$. Even for low acceleration efficiencies ($\eta_g\sim10^6$), such populations can be realized in coronae. Given the observed magnetic field strength of 10~G and the accretion rates, we also found that other possible acceleration mechanisms, such as turbulent acceleration, magnetospheric acceleration, and magnetic reconnection, have difficulty in reproducing the observed non-thermal electrons.
The accelerated non-thermal electron populations will generate a MeV gamma-ray power-law spectrum in the AGN SEDs up to $\sim0.1$~GeV, which is limited by internal gamma-ray attenuation. In the sub-MeV band, the spectrum shows a superthermal tail due to the combination of the thermal and non-thermal components, and spectral flattening occurs at $\sim1$~MeV. These superthermal and flat spectral tails can be tested by future MeV gamma-ray missions.
We also study the contribution of \ac{agn} coronae to the cosmic gamma-ray background radiation. By setting the energy fraction of non-thermal electrons to $f_{\rm nth}\sim3$\%, corresponding to $\sim5$\% of the shock energy going into electron acceleration, \ac{agn} coronae can explain the MeV background as an extension of the X-ray background contribution of Seyferts. Due to the strong internal gamma-ray attenuation, the contribution of \ac{agn} coronae to the GeV background is negligible.
Accelerated particles also result in neutrino production through hadronic processes. Intense neutrino emission has long been expected from \ac{agn} coronae once hadrons are accelerated together \citep[e.g.,][]{Begelman1990,Stecker1992,Alvarez-Muniz2004}. Recent studies have proposed that these \ac{agn} core models could reproduce the high energy neutrino fluxes measured by IceCube \citep{Stecker2005,Stecker2013,Kalashev2015}. However, in those models the normalization of the neutrino fluxes from \acp{agn} and the acceleration properties of the high energy particles are assumed so as to match the observations.
We found that \ac{agn} coronae can explain the diffuse neutrino fluxes below 100--300~TeV for specific values of the proton energy injection rate and the gyro factor. The allowed parameter region is quite narrow: protons and electrons should have the same energy injection rate, and the gyro factor should be $\eta_g\sim30$. IceCube Gen-2 will be able to test this scenario by searching for the neutrino signal from nearby Seyfert galaxies such as NGC~4151 and IC~4329A.
In summary, Seyfert coronae are feasible sites for particle acceleration. If the energy injection rate is 5\% for both protons and electrons and the gyro factor is $\eta_g=30$, they may simultaneously explain the cosmic X-ray, MeV gamma-ray, and TeV neutrino background radiation. Future MeV gamma-ray and TeV neutrino observations of nearby bright Seyferts will be able to test this scenario.
\acknowledgments
We thank the anonymous referee for his/her helpful comments which improved the manuscript. We also would like to thank Tsuguo Aramaki, Mitch Begelman, Norita Kawanaka, Shigeo Kimura, Ari Laor, Kohta Murase, Satomi Nakahara, and Marek Sikora for useful discussions and comments. YI is supported by JSPS KAKENHI Grant Number JP16K13813, JP19K14772, program of Leading Initiative for Excellent Young Researchers, MEXT, Japan, and RIKEN iTHEMS Program. DK is supported by JSPS KAKENHI Grant Numbers JP18H03722, JP24105007, and JP16H02170.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 2,510 |
\section{Introduction}
Model building in cosmology requires two main ingredients: a
theory of gravity and a description of the matter content of the
universe. In general relativity (GR) the gravity sector of the
theory is completely fixed, there are no free parameters. The
matter sector is represented in the field equations by the
energy-momentum tensor, and for a fluid the further specification of
an equation of state (EoS) is required. Apart from scalar fields,
typical cosmological fluids such as radiation or cold dark matter (CDM) are
represented by a {\it linear} EoS, $P={\it w}\rho $.
The combination of cosmic microwave background radiation
(CMBR)~\cite{CMBR,spergel}, large scale structure (LSS)~\cite{LSS}
and supernova type Ia (SNIa)~\cite{SNI} observations provides
support for a flat universe presently dominated by a component,
dubbed in general ``dark energy'', causing an accelerated
expansion. The simplest form of dark energy is an {\it ad hoc}
cosmological constant $\Lambda$ term in the field equations, what
Einstein called his ``biggest blunder''. However, although the
standard $\Lambda$CDM ``concordance'' model provides a rather
robust framework for the interpretation of present observations
(see e.g.~\cite{spergel,turner}), it requires a $\Lambda$ term
that is at odds by many order of magnitudes with theoretical
predictions~\cite{weinberg}. This has prompted theorists to
explore possible dark energy sources for the acceleration that go
beyond the standard but unsatisfactory $\Lambda$. With various
motivations, many authors have attempted to describe dark energy
as quintessence, {\it k}-essence or a ghost field, i.e. with
scalar fields with various properties. There have also been
attempts to describe dark energy by a fluid with a specific
non-linear EoS like the Chaplygin gas~\cite{KMP}, generalized
Chaplygin gas~\cite{GCG}, van der Waals fluid~\cite{GK}, wet dark
fluid~\cite{HN} and other specific gas EoS's~\cite{CTTC}.
Recently, various ``phantom models'' (${\it
w}=P/\rho<-1 $) have also been considered~\cite{RRC, sahni}. More
simply, but also with a higher degree of generality, many authors
have focused on phenomenological models where dark energy is
parameterized by assuming a $w=P/\rho=w(a)$, where $a=a(t)$ is the
expansion scale factor (see e.g.~\cite{bruce,will}).
Another possibility is to advocate a modified theory of gravity.
At high energies, modification of gravity beyond general
relativity could come from extra dimensions, as required in string
theory. In the brane world~\cite{SMS,BMW,DL,RM} scenario the extra
dimensions produce a term quadratic in the energy density in the
effective 4-dimensional energy-momentum tensor. Under the
reasonable assumption of neglecting 5-dimensional Weyl tensor
contributions on the brane, this quadratic term has the very
interesting effect of suppressing anisotropy at early enough
times. In the case of a Bianchi I brane-world cosmology containing
a scalar field with a large kinetic term the initial expansion is
quasi-isotropic~\cite{MSS}. Under the same assumptions, Bianchi I
and Bianchi V brane-world cosmological models containing standard
cosmological fluids with linear EoS also behave in a similar
fashion\footnote{This only requires $P/ \rho={\it w}>0 $, as
opposed to ${\it w}>1 $ in the GR case. In the case of
ekpyrotic/cyclic and pre-big bang models the initial expansion is
only isotropic if ${\it w}>1 $ as in the case of GR
~\cite{EWST}.}~\cite{CS}, and the same remains true for more
general homogeneous models~\cite{coley1,coley2} and even some
inhomogeneous exact solutions~\cite{coley3}. Finally, within the
limitations of a perturbative treatment, the
quadratic-term-dominated isotropic brane-world models have been
shown to be local past attractors in the larger phase space of
inhomogeneous and anisotropic models~\cite{DGBC, GDCB}. More
precisely, again assuming that the 5-d Weyl tensor contribution to
the brane can be neglected, perturbations of the isotropic models
decay in the past. Thus in the brane scenario the observed high
isotropy of the universe is the natural outcome of {\it generic
initial conditions}, unlike in GR where in general cosmological
models with a standard energy momentum tensor are highly
anisotropic in the past (see e.g. \cite{LL}).
Recently it has been shown that loop quantum gravity
corrections result in a modified Friedmann equation~\cite{KV},
with the modification appearing as a negative term which is
quadratic in the energy density. Further motivation for
considering a quadratic equation of state comes from recent
studies of {\it k}-essence fields as unified dark matter (UDM)
models\footnote{These attempt to provide a unified model for both
the dark matter and the dark energy components necessary to make
sense of observations.}~\cite{GH,RS}. The general {\it k}-essence
field can be described by a fluid with a closed-form barotropic
equation of state. The UDM fluid discussed in~\cite{GH} has a
non-linear EoS of the form $P\propto \rho^2$ at late times. More
recently, it has been shown that any purely kinetic {\it
k}-essence field can be interpreted as an isentropic perfect
fluid with an EoS of the form $P=P(\rho)$~\cite{DTF}. Also, low
energy dynamics of the Higgs phase for gravity have been shown to
be equivalent to the irrotational flow of a perfect fluid with
equation of state $P=\rho^2$~\cite{ACLMW}.
Given the isotropizing effect that the quadratic energy density
term has at early times in the brane scenario, this prompts
the question: can a term quadratic in the energy density have the
same effect in general relativity? This question is non-trivial as
the form of the equations in the two cases is quite different. On
the brane, for a given EoS the effective 4-dimensional Friedmann
and Raychaudhuri equations are modified, while the continuity
equation is identical to that of GR. With the introduction of a
quadratic EoS in GR, the Friedmann equation remains the same, while
the continuity and Raychaudhuri equations are
modified\footnote{With respect to the case of the same EoS with
vanishing quadratic term.}.
Taking into account this question (to be explored in detail in
Paper II~\cite{AB}), the diverse motivations for a quadratic
energy density term mentioned above and with the dark energy
problem in mind, in this paper we explore the GR dynamics of
homogeneous isotropic Robertson-Walker models with a quadratic
EoS, $P=P_0+\alpha \rho +\beta \rho^2$. This is the simplest model
we can consider without making any more specific assumptions on
the EoS~\cite{MV}. It represents the first terms of the Taylor
expansion of {\it any} EoS function $P=P(\rho)$ about $\rho=0$.
It can also be taken to represent (after re-grouping of terms) the
Taylor expansion about the present energy density $\rho_0$, see
\cite{MV}. In this sense therefore the out-coming dynamics is very
general. Indeed it turns out that this simple model can produce a
large variety of qualitatively different dynamical behaviors that
we classify using dynamical systems theory~\cite{WE, AP}. An
outcome of our analysis is that accelerated expansion phases
arise naturally for non-linear EoS's. These are {\it in general}
asymptotically de Sitter thanks to the appearance of an {\it
effective cosmological constant}. This suggests that an EoS with
the right combination of $P_0$, $\alpha$ and $\beta$ may provide a
good and simple phenomenological model for UDM, or at least for a
dark energy component. Other interesting possibilities that arise
from the quadratic EoS are closed models that can oscillate with
no singularity, models that bounce between infinite
contraction/expansion and models which evolve from a phantom
phase, asymptotically approaching a de Sitter phase instead of
evolving to a ``big rip'' or other pathological future states
\cite{RRC,BLJM,NOT}.
As mentioned before, the question of the dynamical effects the
quadratic energy density term has on the anisotropy in GR is
explored in Paper II~\cite{AB}. There we analyze Bianchi I and V
models with the EoS $P=\alpha\rho +\beta \rho^2$, as well as
perturbations of the isotropic past attractor of those models
that are singular in the past. We anticipate that Bianchi I and V
non-phantom models with $\beta>0$ have an isotropic singularity,
i.e. they are asymptotic in the past to a certain isotropic model,
and that perturbations of this model decay in the past. Phantom
anisotropic models with $\beta>0$ are necessarily asymptotically
de~Sitter in the future, but the shear anisotropy dominates in
the past. For $\beta<0$ all models are anisotropic in the past,
while their specific future evolution depends on the value of
$\alpha$.
The paper is organized as follows. In section~\ref{sec2} we
outline the setup and the three main cases we will investigate. In
section~\ref{sec3}, we study the dynamics of isotropic
cosmological models in the high energy limit (neglecting the $P_0$
term). We find the critical points, their stability nature and the
occurrence of bifurcations of the dynamical system. In
section~\ref{sec4}, we consider the low energy limit (neglecting
the $\rho^2$ term). The full system is then analyzed in
section~\ref{sec5}, showing the qualitatively different behavior
with respect to the previous cases. We then finish with some
concluding remarks and an outline of work in progress in
section~\ref{sec6}. Units are such that $8\pi G/c^4=1$.
\section{Cosmology with a quadratic EoS}\label{sec2}
\subsection{Dynamics with non-linear EoS}
The evolution of Robertson-Walker isotropic models with no
cosmological constant $\Lambda$ term is given in GR by the
following non-linear planar autonomous dynamical system:
\begin{eqnarray}
\dot{\rho}&=& -3 H \left( \rho + P \right), \label{energycons}\\
\dot{H} &=& -H^2 - \frac{1}{6}\left( \rho + 3P \right),\label{Ray}
\end{eqnarray}
where $H$ is the Hubble expansion function, related to the scale
factor $a$ by $H=\dot{a}/a$. In order to close this system of
equations, an EoS must be specified, relating
the isotropic pressure $P$ and the energy density $\rho$. When an
EoS $P=P(\rho)$ is given, the above system admits a first
integral, the Friedman equation
\begin{equation}\label{Friedman}
H^{2} = \frac{1}{3}\rho - \frac{K}{a^2},
\end{equation}
\noindent where $K$ is the curvature, $K=0,\pm 1$ as usual for
flat, closed and open models.
Here we are interested in exploring the general dynamical features
of a non-linear EoS $P=P(\rho)$. Before considering the specific
case of a quadratic EoS, we note some important general points.
First, it is immediately clear from Eq.~(\ref{energycons}) that an
effective cosmological constant is achieved whenever there is an
energy density value $\rho_{\Lambda}$ such that $P(\rho_{\Lambda})
= -\rho_{\Lambda}$.
More specifically:\\
\noindent {\bf Remark 1.} If for a given EoS function $P=P(\rho)$
there exists a $\rho_\Lambda$ such that $P(\rho_\Lambda)=
-\rho_\Lambda$, then $\rho_\Lambda$ has the dynamical role of an
effective cosmological constant.\\
\noindent {\bf Remark 2.} A given EoS $P(\rho)$ may admit more
than one point $\rho_\Lambda$. If these points exist, they are
fixed points of Eq. (\ref{energycons}).\\
\noindent {\bf Remark 3.} From Eq.~(\ref{Ray}), since
$\dot{H}+H^2 = \ddot{a}/a$, an accelerated phase is achieved
whenever $P(\rho) < -\rho/3$.\\
\noindent {\bf Remark 4.} Remark 3 is only valid in GR, and a
different condition will be valid in other theories of gravity.
Remarks 1 and 2, however, are only based on conservation of
energy, Eq.~(\ref{energycons}). The latter is also valid (locally)
in inhomogeneous models, provided that the time derivative is
taken to represent the derivative along the fluid flow lines (e.g.
see~\cite{ellis_varenna71}), and is a direct consequence of
$T^{ab}{}_{;b}=0$. Thus Remarks 1 and 2 are valid in any gravity
theory where $T^{ab}{}_{;b}=0$, as well as (locally) in inhomogeneous models.\\
Second, assuming expansion, $H>0$, we may rewrite
Eq.~(\ref{energycons}) as:
\begin{equation}
\frac{d\rho}{d\tau}=-3\left[\rho+P(\rho)\right], \label{encon1}
\end{equation}
where $\tau=\ln a$. Eq.~(\ref{encon1}) is a 1-dimensional
dynamical system with fixed point(s) $\rho_\Lambda$(s), if they
exist. If $\rho +P(\rho)<0$ the fluid violates the null energy
condition~\cite{carroll,visser} and Eq. (\ref{energycons}) implies what
has been dubbed phantom behavior~\cite{RRC} (cf.~\cite{LM}),
i.e.\ the fluid behaves counter-intuitively in that the energy
density increases (decreases) in the future for an expanding
(contracting) universe.
Then:\\
\noindent {\bf Remark 5.} Any point $\rho_\Lambda$ is an attractor
(repeller) of the evolution during expansion (the autonomous system (\ref{encon1})) if
$\rho+P(\rho)<0$ ($>0$) for $\rho<\rho_\Lambda$ and
$\rho+P(\rho)>0$ ($<0$) for $\rho>\rho_\Lambda$.\\
\noindent {\bf Remark 6.} Any point $\rho_\Lambda$ is a
shunt\footnote{This is a fixed point which is an attractor for one
side of the phase line and a repeller for the other~\cite{AP}.}
of the autonomous system Eq.~(\ref{encon1}) if either
$\rho+P(\rho)<0$ on both sides of $\rho_\Lambda$, or
$\rho+P(\rho)>0$ on both sides of $\rho_\Lambda$. In this
case the fluid is respectively phantom or standard on both
sides.\\
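These remarks can be illustrated with a short numerical sketch that scans the phase line of Eq.~(\ref{encon1}) for a given barotropic EoS and classifies each positive root of $\rho+P(\rho)$; the quadratic EoS used as an example below has purely illustrative parameter values.
\begin{verbatim}
import numpy as np

def classify_fixed_points(P, rho_max=10.0, n=100001):
    """Classify roots of rho + P(rho) = 0 for d(rho)/d(tau) = -3[rho + P(rho)]
    during expansion (Remarks 5 and 6); only rho > 0 is scanned."""
    rho = np.linspace(1e-6, rho_max, n)
    f = rho + P(rho)
    points = []
    for i in range(n - 1):
        if f[i] * f[i + 1] < 0.0:                 # simple sign change -> root
            kind = "attractor" if f[i] < 0.0 else "repeller"
            points.append((round(0.5 * (rho[i] + rho[i + 1]), 4), kind))
    # Note: shunts (Remark 6) correspond to roots where f touches zero without
    # changing sign; they are not picked up by this simple sign-change scan.
    return points

# Example: P = P0 + alpha*rho + beta*rho^2 with illustrative values
P0, alpha, beta = 0.0, -2.0, 1.0
print(classify_fixed_points(lambda r: P0 + alpha * r + beta * r**2))
# ~ [(1.0, 'attractor')]: rho_Lambda = 1 acts as an effective cosmological constant
\end{verbatim}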
Let's now consider the specific case of a general quadratic EoS of
the form:
\begin{equation}\label{QuadEoS}
P = {P}_{o} + \alpha\rho + \beta{{\rho}^2}.
\end{equation}
\noindent The parameter $\beta$ sets the characteristic energy
scale $\rho_{c}$ of the quadratic term as well as its sign
$\epsilon$
\begin{equation}
\beta=\frac{\epsilon}{{\rho}_{c}}.
\end{equation}
\noindent {\bf Remark 7.} Eq. (\ref{QuadEoS}) represents the
Taylor expansion, up to ${\cal O}(3)$, of {\it any} barotropic EoS
function $P=P(\rho)$ about $\rho=0$. It also represents, after
re-grouping of terms, the Taylor expansion about the present
energy density value $\rho_0$~\cite{MV}. In this sense, the
dynamical system (\ref{energycons},\ref{Ray}) with (\ref{QuadEoS})
is {\it general}, i.e. it represents the late evolution, in GR,
of {\it any} cosmological model with non-linear barotropic EoS
approximated by Eq. (\ref{QuadEoS}).\\
The usual scenario for a cosmological fluid is a
standard linear EoS ($P_0=\beta=0$), in which case $\alpha=w$ is
usually restricted to the range $-1<\alpha<1$. For the sake of
generality, we will consider values of $\alpha$ outside this
range, with the dynamics only restricted by the requirement
that $\rho\geq 0$. The first term in Eq.~(\ref{QuadEoS}) is a
constant pressure term which in general becomes important in what we call
the low energy regime. The second term is the standard linear term
usually considered, with
\begin{equation}
\alpha=\frac{dP}{d\rho}\Bigg|_{\rho=0}.
\end{equation}
If it is positive, $\alpha$ has an interpretation in terms of the
speed of sound of the fluid in the limit $\rho\rightarrow 0$,
$\alpha=c_s^2$. The third term is quadratic in the energy density
and will be important in what we call the high energy regime.
In the following, we first split the analysis of the dynamical
system Eqs. (\ref{energycons},~\ref{Ray},~\ref{QuadEoS}) into two
parts: the high energy regime, where we neglect $P_0$, and the low
energy regime, where we set $\beta=0$; we then consider the full
system with EoS (\ref{QuadEoS}). Using only the energy
conservation Eq.~(\ref{energycons}) we list the various sub-cases,
also briefly anticipating the main dynamical features coming out
of the analysis in Sections \ref{sec3}, \ref{sec4} and \ref{sec5}.
\subsection{Quadratic EoS for the high energy regime}
In the high energy regime we consider the restricted equation of
state:
\begin{equation}
P_{HE}= \alpha\rho + \frac{\epsilon {\rho}^2}{\rho_{c}}.
\end{equation}
\noindent The energy conservation Eq.~(\ref{energycons}) can be
integrated in general to give:
\begin{eqnarray} \label{rhohe}
\rho_{HE}(a) &=& \frac{A(\alpha+1)\rho_{c}}{a^{3(\alpha+1)} - \epsilon A},\\
A &=& \frac{\rho_{o} a_{o}^{3(\alpha+1)}}{(\alpha+1)\rho_{c}
+\epsilon \rho_{o}},
\end{eqnarray}
where $\rho_o$, $a_o$ represent the energy density and scale
factor at an arbitrary time $t_o$. This is valid for all values of
$\epsilon$, $\rho_{c}$ and $\alpha$, except $\alpha=-1$. In
the case $\alpha=-1$ the evolution of the energy density is:
\begin{eqnarray} \label{rhoheb}
\rho_{HE}(a) &=& \left[ \frac{1}{\rho_{o}}+
\frac{3\epsilon}{\rho_{c}} \ln \left( \frac{a}{a_{o}} \right)
\right]^{-1}.
\end{eqnarray}
\noindent The EoS with this particular choice of parameters has
already been considered as a possible dark energy model
\cite{NOT, HS}. We will concentrate on the broader class of models
where $\alpha\neq-1$.
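As a quick consistency check of Eq.~(\ref{rhohe}) (with purely illustrative parameter values, in units with $\rho_c=1$), one can integrate the conservation equation directly and compare with the closed form; a minimal Python sketch:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters in units with rho_c = 1 (alpha != -1)
alpha, eps, rho_c = -2.0, +1.0, 1.0
rho0, a0 = 3.0, 1.0

def drho_dlna(lna, rho):
    P = alpha * rho + eps * rho**2 / rho_c
    return -3.0 * (rho + P)

lna = np.linspace(0.0, 5.0, 200)
sol = solve_ivp(drho_dlna, (lna[0], lna[-1]), [rho0],
                t_eval=lna, rtol=1e-10, atol=1e-12)

# Closed-form solution of the conservation equation
A = rho0 * a0**(3 * (alpha + 1)) / ((alpha + 1) * rho_c + eps * rho0)
a = a0 * np.exp(lna)
rho_exact = A * (alpha + 1) * rho_c / (a**(3 * (alpha + 1)) - eps * A)

print(np.max(np.abs(sol.y[0] - rho_exact)))  # agreement at integrator tolerance
# For these values rho relaxes to rho_Lambda = -eps*(1+alpha)*rho_c = 1 at late times.
\end{verbatim}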
In Section \ref{sec3} we will give a dynamical system analysis of
the high energy regime, but it is first useful to gain some
insight directly from Eq.~(\ref{rhohe}).
We start by defining
\begin{equation}
\label{lambda}
\rho_\Lambda :=-\epsilon
(1+\alpha)\rho_c,
\end{equation}
noticing that this is an effective positive cosmological
constant point only if $\epsilon
(1+\alpha) <0$. It is then convenient to rewrite
Eq.~(\ref{rhohe}) in three different ways, defining $a_{\star}=
|A|^{1/3(\alpha+1)}$, each representing two different
subcases.\\
\noindent {\bf A:} $ \epsilon (1+\alpha)>0$, $\rho_\Lambda<0$,
\begin{equation}
\rho=\frac{|1+\alpha|\rho_c}{\left(\frac{a}{a_\star}\right)^{3(1+\alpha)} -1}.
\label{rho1}
\end{equation}
\noindent {\bf A1:} $\epsilon>0$, $(1+\alpha)>0$. In this case
$a_\star<a<\infty$, with $\infty>\rho >0$. Further restrictions on
the actual range of values that $a$ and $\rho$ can take may come
from the geometry. For a subset of appropriate initial conditions
closed (positively curved) models may expand to a maximum $a$
(minimum $\rho$) and re-collapse, and for $\alpha<-1/3$ not all
closed models have a past singularity at $a=a_\star$, having
instead a bounce at a minimum $a$ (maximum $\rho$).\\
\noindent {\bf A2:} $\epsilon<0$, $(1+\alpha)<0$. In this case
$0<a<a_\star$, with $0 <\rho <\infty$, and the fluid
exhibits phantom behavior. All models have a future singularity at
$a=a_\star$, but in general closed models contract from a past
singularity, bounce at a minimum $a$ and $\rho$, then
re-expand to the future singularity (we will refer to this as a phantom bounce).\\
\noindent {\bf B:} $ \rho_\Lambda>0$,
$\rho>\rho_\Lambda$,
\begin{equation}
\rho=\frac{\rho_\Lambda}{1-\left(\frac{a}{a_\star}\right)^{3(1+\alpha)}}.
\label{rho2}
\end{equation}
\noindent {\bf B1:} $\epsilon>0$, $(1+\alpha)<0$, $A>0$. In this
case $a_\star<a<\infty$, with $\infty>\rho >\rho_\Lambda$. As in
case {\bf A1}, further restrictions on the actual range of values
that $a$ and $\rho$ can take may come from the geometry. For a
subset of initial conditions closed models may expand to a maximum
$a$ (minimum $\rho$) and re-collapse, while for another subset
closed models don't have a past singularity at $a=a_\star$,
having instead a bounce at a minimum $a$ (maximum $\rho$).\\
\noindent {\bf B2:} $\epsilon<0$, $(1+\alpha)>0$, $A<0$. In this
case $0<a<a_\star$, with $\rho_\Lambda<\rho <\infty$. As in the
case {\bf A2}, the fluid has a phantom behavior. All models
have a future singularity at $a=a_\star$, with closed models
contracting from a past singularity to a minimum $a$ and $\rho$
before re-expanding.\\
\noindent {\bf C:} $ \rho_\Lambda>0$,
$\rho<\rho_\Lambda$,
\begin{equation}
\rho=\frac{\rho_\Lambda}{1+\left(\frac{a}{a_\star}\right)^{3(1+\alpha)}}.
\label{rho3}
\end{equation}
\noindent {\bf C1:} $\epsilon>0$, $(1+\alpha)<0$, $A<0$. In this
case $0<a<\infty$, with $0<\rho <\rho_\Lambda$. The fluid behaves
in a phantom manner but avoids the future singularity and instead
evolves to a constant energy density $\rho_\Lambda$. Closed models,
however, typically bounce with a minimum $\rho$ at a finite $a$.\\
\noindent {\bf C2:} $\epsilon<0$, $(1+\alpha)>0$, $A>0$. In this
case $0<a<\infty$, with $\rho_\Lambda>\rho >0$. Again, closed
models may evolve within restricted ranges of $a$ and $\rho$, even
oscillating, for $\alpha\geq -1/3$, between maxima and minima of
$a$ and $\rho$.
\subsection{Low energy regime: affine EoS}
In the low energy regime we consider the affine equation of state:
\begin{equation}
P_{LE}= P_{o} + \alpha\rho.
\end{equation}
\noindent This particular EoS has been investigated as a possible
dark energy model~\cite{HN, Babi}; however, only spatially flat
Friedmann models were considered. The scale factor dependence of
the energy density is:
\begin{eqnarray}\label{rhole}
\rho_{LE}(a)= -\frac{P_{o}}{(\alpha+1)} + B a^{-3(\alpha+1)},\\
B={\left[ \frac{P_{o}}{(1+\alpha)} + \rho_{o}
\right]}{a_{o}}^{3(1+\alpha)}.
\end{eqnarray}
\noindent This is valid for all values of $P_{o}$ and $\alpha$
except $\alpha=-1$. In the case $\alpha=-1$, the evolution of
the energy density is:
\begin{eqnarray} \label{rholeb}
\rho_{LE}(a) &=& {\rho_{o}} - 3{P_{o}} \ln \left( \frac{a}{a_{o}}
\right) .
\end{eqnarray}
\noindent As in the high energy case, we will concentrate on the
broader class of models where $\alpha\neq-1$.
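As a brief numerical illustration (parameter values purely illustrative), Eq.~(\ref{rhole}) shows the energy density relaxing to the constant value $-P_{o}/(\alpha+1)$ at late times whenever $P_{o}/(1+\alpha)<0$:
\begin{verbatim}
import numpy as np

# Affine EoS P = P0 + alpha*rho; illustrative values with P0/(1+alpha) < 0
P0, alpha = -0.3, 0.0
rho0, a0 = 1.0, 1.0

B = (P0 / (1.0 + alpha) + rho0) * a0**(3 * (1 + alpha))
a = np.logspace(0, 2, 5)                      # a/a0 from 1 to 100
rho = -P0 / (alpha + 1.0) + B * a**(-3 * (alpha + 1))

print(-P0 / (alpha + 1.0))   # late-time constant value: 0.3
print(np.round(rho, 4))      # decreases from rho0 = 1 toward 0.3
\end{verbatim}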
In Section \ref{sec4} we present the dynamical system analysis of
the low energy regime, but first let us gain some insight from Eq.
(\ref{rhole}). As with the high energy case, in many cases the fluid
violates the null energy condition ($\rho+P<0$) and exhibits
phantom behavior. Defining
\begin{equation}
\label{lambdatilde}
\tilde{\rho}_\Lambda :=-P_{o}/(1+\alpha),
\end{equation}
we see that a positive effective cosmological constant point
exists, $\tilde{\rho}_\Lambda > 0$, only if $P_{o}/(1+\alpha) <0$.
Eq.~(\ref{rhole}) can be rewritten in three different ways,
defining $\tilde{a}_{\star}= |B|^{1/3(\alpha+1)}$, each
representing two different subcases.\\
\noindent {\bf D:} $P_{o}/(1+\alpha)>0$, $\tilde{\rho}_\Lambda<0$,
\begin{equation}\label{rho4}
\rho=-\frac{P_{o}}{(\alpha+1)}+\left(\frac{a}{\tilde{a}_\star}
\right)^{-3(1+\alpha)}.
\end{equation}
\noindent {\bf D1:} $P_{o}>0$, $(1+\alpha)>0$. In this case
$0<a<\infty$, with $\infty>\rho >-|\tilde{\rho}_\Lambda|$. The
geometry places further restrictions on the values that $a$ and
$\rho$ can take. The subset of open models (negative curvature)
are all non-physical as they evolve to the $\rho<0$ region of the
phase space. The spatially flat models expand to a maximum $a$
(when $\rho=0$) and recollapse. The closed (positively curved)
models expand to a maximum $a$ (minimum $\rho$) and recollapse,
and for $-1\leq\alpha<-1/3$ a subset of closed models oscillate
between a maximum and minimum $a$ (minimum and maximum $\rho$).\\
\noindent {\bf D2:} $P_{o}<0$, $(1+\alpha)<0$. In this case
$0<a<\infty$, with $-|\tilde{\rho}_\Lambda| <\rho <\infty$. In
this case the fluid exhibits phantom behavior. The subset of open
models are all non-physical as they evolve from the $\rho<0$
region of the phase space. The spatially flat models contract,
bounce at a minimum $a$ when $\rho=0$ and re-expand in the future.
The closed models contract, bounce at a minimum $a$ and $\rho$,
then re-expand in the future.\\
\noindent {\bf E:} $\tilde{\rho}_\Lambda>0$,
$\rho>\tilde{\rho}_\Lambda$,
\begin{equation}\label{rho5}
\rho=\tilde{\rho}_\Lambda+\left(\frac{a}{\tilde{a}_\star}
\right)^{-3(1+\alpha)}.
\end{equation}
\noindent {\bf E1:} $P_{o}>0$, $(1+\alpha)<0$, $B>0$. In this case
$0<a<\infty$, with $\tilde{\rho}_\Lambda<\rho<\infty$. As in the
case {\bf D2}, the fluid behaves in a phantom manner. The flat and
open models are asymptotically de Sitter in the past, when their
energy density approaches a finite value ($\rho \rightarrow
\tilde{\rho}_\Lambda$ as $a\rightarrow0$), and when
$\tilde{\rho}_\Lambda $ becomes negligible in Eq.~(\ref{rho5})
they evolve as standard linear phantom models, reaching a future
singularity in a finite time ($\rho \rightarrow \infty$ as
$a\rightarrow\infty$). The closed models contract to a
minimum $a$ (minimum $\rho$), bounce and re-expand.\\
\noindent {\bf E2:} $P_{o}<0$, $(1+\alpha)>0$, $B>0$. In this case
$0<a<\infty$, with $\infty>\rho>\tilde{\rho}_\Lambda$. All flat
and open models expand from a singularity and asymptotically
evolve to a de Sitter model, with $\rho=\tilde{\rho}_\Lambda$. The
closed models evolve from a contracting de Sitter model to minimum
$a$ (maximum $\rho$), bounce and then evolve to an expanding de Sitter model.\\
\noindent {\bf F:} $\tilde{\rho}_\Lambda>0$,
$\rho<\tilde{\rho}_\Lambda$,
\begin{equation}\label{rho6}
\rho=\tilde{\rho}_\Lambda-\left(\frac{a}{\tilde{a}_\star}
\right)^{-3(1+\alpha)}.
\end{equation}
\noindent {\bf F1:} $P_{o}>0$, $(1+\alpha)<0$, $B<0$. In this
case $0<a<\infty$, with $\tilde{\rho}_\Lambda>\rho>-\infty$. The
subset of open models are all non-physical as they evolve to the
$\rho<0$ region of the phase space. The flat models evolve from an
expanding de Sitter phase to a contracting de Sitter phase. The
closed models oscillate between a maximum and minimum $a$ (minimum
and maximum $\rho$).\\
\noindent {\bf F2:} $P_{o}<0$, $(1+\alpha)>0$, $B<0$. In this
case $0<a<\infty$, with $-\infty<\rho<\tilde{\rho}_\Lambda$. The
fluid exhibits phantom behavior. The open models are all
non-physical as they evolve from the $\rho<0$ region of the phase
space. The flat and closed models evolve from a contracting de
Sitter phase, bounce at minimum $a$ and $\rho$, then re-expand,
asymptotically approaching a expanding de Sitter phase.
\subsection{The full quadratic EoS}
In Section \ref{sec5} we present the dynamical system analysis of
the full quadratic EoS models given by Eq.~(\ref{QuadEoS}), but
again we first study the form of $\rho(a)$ implied by conservation
of energy, Eq. (\ref{energycons}). As with the previous cases the
fluid can violate the null energy condition ($\rho+P<0$) and
therefore may exhibit phantom behavior. The system may admit two
(possibly negative) effective cosmological constant points:
\begin{eqnarray}
\rho_{\Lambda,1} &:=& \frac{1}{2\beta}\left[-(\alpha+1) + \sqrt{\Delta} \right],\\
\rho_{\Lambda,2} &:=& \frac{1}{2\beta}\left[-(\alpha+1) - \sqrt{\Delta} \right],
\end{eqnarray}
if
\begin{equation}
\Delta := (\alpha+1)^2 - 4\beta P_{o}
\end{equation}
is non-negative. Clearly, the existence of the effective
cosmological constant points depends on the values of the parameters in the
EoS. This in turn affects the functional form of $\rho(a)$. In
order to find $\rho(a)$ the following integral must be evaluated:
\begin{eqnarray}
-3\ln \left( \frac{a}{a_{o}} \right)= \int^{\rho}_{\rho_{o}}
\frac{d\rho}{{P}_{o} + (\alpha+1)\rho + \beta{{\rho}^2}}.
\end{eqnarray}
\noindent This is done separately for the cases when
no effective cosmological constant points exist ($\Delta<0$), when one
point exists, $\rho_{\Lambda,1} = \rho_{\Lambda,2} =
\bar{\rho}_{\Lambda} \neq 0$ ($\Delta=0$), and when two distinct
points exist, $\rho_{\Lambda,1} \neq \rho_{\Lambda,2} \neq 0$
($\Delta>0$). We now consider these three sub-cases.
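Before going through them, a minimal sketch (with purely illustrative parameter values) shows how the sign of $\Delta$ selects among these three cases and where the effective cosmological constant points lie:
\begin{verbatim}
import numpy as np

def classify_quadratic_eos(P0, alpha, beta):
    """Discriminant and effective cosmological constant points of
    P = P0 + alpha*rho + beta*rho^2 (cases G, H, I in the text)."""
    Delta = (alpha + 1.0)**2 - 4.0 * beta * P0
    if Delta < 0.0:
        return "G: no effective cosmological constant point", Delta, ()
    if Delta == 0.0:
        return "H: one (double) point", Delta, (-(alpha + 1.0) / (2.0 * beta),)
    roots = ((-(alpha + 1.0) + np.sqrt(Delta)) / (2.0 * beta),
             (-(alpha + 1.0) - np.sqrt(Delta)) / (2.0 * beta))
    return "I: two distinct points", Delta, roots

# Illustrative example (beta > 0, P0 > 0, alpha < -1): two positive points
print(classify_quadratic_eos(P0=0.1, alpha=-2.0, beta=1.0))
\end{verbatim}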
\noindent {\bf G:} $(1+\alpha)^{2}<4\beta P_{o}$, $\Delta<0$,
\begin{eqnarray}
\rho &=& \frac{ \Gamma - \sqrt{|\Delta|}\tan \left( \frac{3}{2}
\sqrt{|\Delta|} \ln \left( \frac{a}{a_{o}} \right)\right) }{
2\beta + \frac{2\beta}{\sqrt{|\Delta|}}\Gamma \tan \left(
\frac{3}{2} \sqrt{|\Delta|} \ln \left( \frac{a}{a_{o}}
\right)\right) } - \frac{(\alpha+1)}{2\beta},\nonumber\\
\nonumber\\
\Gamma &=& 2 \beta \rho_{o} + (\alpha+1).\label{rho7}
\end{eqnarray}
\noindent {\bf G1:} $\beta>0$, $P_{o}>0$. In this case $a_{1}< a <
a_{2}$ (where $a_{1}<a_{2}$), with $\infty>\rho>-\infty$. The
fluid behaves in a standard manner and all models have a past
singularity at $a=a_{1}$. All open models are non-physical as they
evolve to the $\rho<0$ region of the phase space. The flat models
expand to a maximum $a$ ($\rho = 0$) and then re-collapse. The
closed models can behave in a similar manner to flat models except
they reach a minimum $\rho$ before re-collapsing. Some closed
models oscillate between maxima and minima $a$ and $\rho$.\\
\noindent {\bf G2:} $\beta<0$, $P_{o}<0$. In this case $a_{1}< a <
a_{2}$ (where $a_{1}<a_{2}$), with $-\infty<\rho<\infty$. The
fluid behaves in a phantom manner. All open models are
non-physical as they evolve from the $\rho<0$ region of the phase
space. The flat and closed models represent phantom bounce models,
that is they evolve from a singularity at $a=a_{1}$
($\rho=\infty$), contract to a minimum $a$ (minimum $\rho$) and
then re-expand to the future singularity at $a=a_{2}$.\\
\noindent {\bf H:} $(1+\alpha)^{2}=4\beta P_{o}$, $\Delta=0$,
\begin{eqnarray}
\rho &=& \bar{\rho}_{\Lambda} + \frac{1} { 3 \beta \ln \left(
\frac{a}{a_{o}} \right) + \frac{2\beta}{\Gamma} } .\label{rho8}
\end{eqnarray}
\noindent {\bf H1:} $\beta>0$, $P_{o}>0$, $\rho<\bar{\rho}_{\Lambda}$.
In this case $0 < a < a_{1}$ with $\bar{\rho}_{\Lambda}>\rho>-\infty$.
The fluid behaves in a standard manner. The subset of open models
are all non-physical as they evolve to the $\rho<0$ region of the
phase space. The flat models evolve from an expanding de Sitter
phase to a contracting de Sitter phase. The closed models
oscillate between maxima and minima $a$ and $\rho$.\\
\noindent {\bf H2:} $\beta>0$, $P_{o}>0$,
$\rho>\bar{\rho}_{\Lambda}$. In this case $a_{1} < a < \infty $
with $\infty> \rho> \bar{\rho}_{\Lambda}$ and the fluid behaves in
a standard manner. If $\bar{\rho}_{\Lambda}>0$, the open and flat
models evolve from a past singularity ($a=a_{1}$) to an
expanding de Sitter phase. For a subset of initial conditions
closed models may expand to a maximum $a$ (minimum $\rho$) and
re-collapse, while for another subset closed models avoid a past
singularity, instead having a bounce at a minimum $a$ (maximum
$\rho$). If $\bar{\rho}_{\Lambda}<0$, the open models are
non-physical, while flat and closed models represent recollapse models.\\
\noindent {\bf H3:} $\beta<0$, $P_{o}<0$, $\rho<\bar{\rho}_{\Lambda}$.
In this case $a_{1}<a<\infty $ with $-\infty< \rho<
\bar{\rho}_{\Lambda}$. The fluid behaves in a phantom manner. The open
models are all non-physical as they evolve from the $\rho<0$
region of the phase space. The flat and closed models evolve from
a contracting de Sitter phase, bounce at minimum $a$ and $\rho$,
then re-expand, asymptotically approaching an expanding de Sitter
phase.\\
\noindent {\bf H4:} $\beta<0$, $P_{o}<0$,
$\rho>\bar{\rho}_{\Lambda}$. In this case $0 < a < a_{1} $ with
$\bar{\rho}_{\Lambda}<\rho<\infty$ and the fluid behaves in a
phantom manner. All models have a future singularity at $a=a_{1}$.
If $\bar{\rho}_{\Lambda}>0$, closed models contract from a past
singularity to a minimum $a$ and $\rho$ before re-expanding
(phantom bounce), while flat and open models are asymptotic to
generalized de Sitter models in the past. If
$\bar{\rho}_{\Lambda}<0$, open models are non-physical, while
flat and closed models contract from a past singularity to
a minimum $a$ and $\rho$ before re-expanding.\\
\noindent {\bf I:} $(1+\alpha)^{2}>4\beta P_{o}$, $\Delta>0$,
\begin{eqnarray}
\rho &=&\frac{\rho_{\Lambda,2} \left(
\frac{a}{a_{o}} \right)^{-3\sqrt{\Delta}} -\rho_{\Lambda,1} C}{ \left(
\frac{a}{a_{o}} \right)^{-3\sqrt{\Delta}}-C} , \\
C&=&\frac{ \rho_o-\rho_{\Lambda,2}}{\rho_o-\rho_{\Lambda,1}}.
\end{eqnarray}
Note that $\beta>0$ ($<0$) implies $\rho_{\Lambda,2}<\rho_{\Lambda,1}$
($\rho_{\Lambda,1}<\rho_{\Lambda,2}$), and $C<0$ implies
$\rho_{\Lambda,2}<\rho_o<\rho_{\Lambda,1}$ for $\beta>0$
($\rho_{\Lambda,1}<\rho_o<\rho_{\Lambda,2}$ for $\beta<0$).\\
\noindent {\bf I1:} $\beta>0$, $P_{o}>0$, $\rho<\rho_{\Lambda,2}$,
hence we consider $\rho_{\Lambda,2}>0$. In this case $0 < a <
a_{1} $ with $\rho_{\Lambda,2}>\rho>-\infty$ and the fluid behaves
in a standard manner. The open models are all non-physical as they
evolve to the $\rho<0$ region of the phase space. The flat models
evolve from an expanding de Sitter phase to a contracting de
Sitter phase. The closed model region contains a generalized
Einstein static fixed point and models which
oscillate indefinitely (between minima and maxima $a$ and $\rho$).\\
\noindent {\bf I2:} $\beta>0$, $P_{o}>0$, $\rho_{\Lambda,2}<\rho
<\rho_{\Lambda,1}$. In
this case $0<a<\infty$ with $\rho_{\Lambda,2}<\rho
<\rho_{\Lambda,1}$ and the fluid behaves in a phantom manner. The
open models evolve from one expanding de Sitter phase
($\rho=\rho_{\Lambda,2}$) to a more rapid (greater $\rho$ and $H$)
de Sitter phase ($\rho=\rho_{\Lambda,1}$); however, the spatial
curvature is negative in the past and asymptotically approaches
zero in the future. The flat models behave in a similar manner
except that the curvature remains zero. The closed models undergo
a phantom bounce with asymptotic de Sitter behavior, that is they
evolve from a contracting de Sitter phase, reach a minimum $a$,
minimum $\rho$ and then evolve to a expanding de Sitter phase.\\
\noindent {\bf I3:} $\beta>0$, $P_{o}>0$, $\rho>\rho_{\Lambda,1}$.
In this case $a_{1} < a < \infty $ with $\infty>\rho
>\rho_{\Lambda,1}$ and the fluid behaves in a standard manner.
All flat and open models expand from a singularity at $a=a_{1}$
and asymptotically evolve to an expanding de Sitter phase
($\rho=\rho_{\Lambda,1}$). A subset of closed models evolve from a
contracting de Sitter phase to minimum $a$ (maximum $\rho$),
bounce and then evolve to an expanding de Sitter phase. Another
subset of closed models expand from a singularity at $a=a_{1}$,
reach a maximum $a$ and minimum
$\rho$, only to re-collapse.\\
\noindent {\bf I4:} $\beta<0$, $P_{o}<0$, $\rho<\rho_{\Lambda,1}$.
In this case $a_{1} < a< \infty $, with
$-\infty<\rho<\rho_{\Lambda,1}$ and the fluid behaves in a phantom
manner. The open models are all non-physical as they evolve from
the $\rho<0$ region of the phase space. The flat and closed models
evolve from a contracting de Sitter phase, bounce at minimum $a$
and $\rho$, then re-expand, asymptotically approaching an expanding
de Sitter phase.\\
\noindent {\bf I5:} $\beta<0$, $P_{o}<0$, $\rho_{\Lambda,1}<\rho
<\rho_{\Lambda,2}$ (where $\rho_{\Lambda,1}<\rho_{\Lambda,2}$). In
this case $0<a<\infty$ with $\rho_{\Lambda,2}>\rho
>\rho_{\Lambda,1}$ and the fluid behaves in a standard manner.
The open models evolve from an expanding de Sitter phase
($\rho=\rho_{\Lambda,2}$) to a less rapid (lower $\rho$ and $H$) de
Sitter phase ($\rho=\rho_{\Lambda,1}$), with the spatial curvature
being negative in the past and zero asymptotically in the future.
The flat models behave in a similar manner, except that the
curvature remains zero throughout the evolution. The closed models
can undergo a phantom bounce with asymptotic de Sitter behavior in
the future and past; a subset of these models enters a
loitering phase both before and after the bounce. There is also a
subset of closed models which oscillate indefinitely.\\
\noindent {\bf I6:} $\beta<0$, $P_{o}<0$, $\rho>\rho_{\Lambda,2}$.
In this case $0 < a < a_{1} $ with $\rho_{\Lambda,2}<\rho <\infty$
and the fluid behaves in a phantom manner. All models have a
future singularity at $a=a_{1}$, with closed models contracting
from a past singularity to a minimum $a$ and $\rho$ before
re-expanding (phantom bounce).
\subsection{The Singularities}
In general, singularities may behave in qualitatively different
ways. The singularities present for the non-linear EoS
are quite different from the standard ``Big Bang''/``Big Crunch''
singularity. The standard singularities are such that:
\begin{itemize}
\item ``Big Bang''/``Big Crunch'': For $a \to 0$,
$\rho \to \infty$.
\end{itemize}
If the singularity occurs in the past (future) we refer to it as a
``Big Bang" (``Big Crunch"). In order to differentiate between
various types of singularities, we will use the following
classification system for future singularities~\cite{NOT} (cf.\ also~\cite{barrow}):
\begin{itemize}
\item Type I (``Big Rip'') : For $t \to t_{\star}$, $a \to \infty$,
$\rho \to \infty$ and $|P| \to \infty$.
\item Type II (``sudden'') : For $t \to t_{\star}$, $a \to a_{\star}$,
$\rho \to \rho_{\star}$ or $0$ and $|P| \to \infty$.
\item Type III : For $t \to t_{\star}$, $a \to a_{\star}$,
$\rho \to \infty$ and $|P| \to \infty$.
\item Type IV : For $t \to t_{\star}$, $a \to a_{\star}$,
$\rho \to \rho_{\star}$ or $0$, $|P| \to |P_{\star}|$ or $0$ and
derivatives of $H$ diverge.
\end{itemize}
\noindent Here $t_{\star}$, $a_{\star}$, $\rho_{\star}$ and
$|P_{\star}|$ are constants with $a_{\star}\neq 0$. The main
difference in our case is that the various types of singularities
may occur in the past or the future. The future singularity
described in case {\bf A2} falls into the category of Type III,
however, the past singularity mentioned in case {\bf A1} is also a
Type III singularity. In the case of the full quadratic EoS, all
singularities which occur for a finite scale factor ($a=a_{1}$)
are of Type III.
\section{ High energy regime Dynamics}\label{sec3}
\subsection{The dimensionless dynamical system}\label{sec3_a}
It is convenient to describe the dynamics in terms of
dimensionless variables. In the high energy regime these are:
\begin{equation}
x=\frac{\rho}{|\rho_{c}|}\;,~~ y=\frac{H}{\sqrt{|\rho_{c}|}}\;,~~
\eta=\sqrt{|\rho_{c}|}t\;.
\end{equation}
\noindent The system of equations (\ref{energycons})-(\ref{Ray}) then changes into:
\begin{eqnarray}
x'&=& -3 y \left( (\alpha +1)x + \epsilon x^2 \right),\nonumber\\
y' &=& -y^{2} - \frac{1}{6} \left( (3\alpha +1)x + 3\epsilon x^{2}
\right) ,\label{HED}
\end{eqnarray}
\noindent and the Friedmann equation (\ref{Friedman}) gives
\begin{eqnarray}\label{fried_dim}
y^{2} &=& \frac{x}{3} - \frac{K}{|\rho_{c}|a^2}.
\end{eqnarray}
\noindent The discrete parameter $\epsilon$ denotes the sign of
the quadratic term, $\epsilon\in\{-1,1\}$. The primes denote
differentiation with respect to $\eta$, the normalized time
variable. The variable $x$ is the normalized energy density and
$y$ the normalized Hubble function. We will only consider the
region of the phase space for which the energy density remains
positive ($x\geq0$). The system of equations above is of the form
$u'_{i}=f_{i}(u_{j})$. Since this system is autonomous,
trajectories in phase space connect the fixed/equilibrium points
of the system ($u_{j,o}$), which satisfy the system of equations
$f_{i}(u_{j,0})=0$. The fixed points of the high energy system and
their existence conditions (the conditions for which $x\geq0$ and
$x,y\in\mathbb{R}$) are given in Table~\ref{Tab1}.
\begin{center}
\begin{table}[h!]\caption{\label{Tab1}Location and existence conditions
($x\geq0$ and $x,y\in\mathbb{R}$) of the fixed points of the high
energy regime system.}
\begin{tabular*}{0.47\textwidth}{@{\extracolsep{\fill}}cccc}
\hline \hline
Name & $x$&$y$& Existence \\
\hline
\\
$M$ & $0$ & $0$ & $-\infty<\alpha<\infty$ \\
$E$ & $-\frac{\epsilon(3\alpha+1)}{3}$ & $0$ &
$\epsilon(3\alpha+1)<0$ \\
$dS_{+}$ & $-\epsilon(\alpha+1)$ &
$+\sqrt{\frac{-\epsilon(\alpha+1)}{3}}$ &
$\epsilon(\alpha+1)<0$ \\
$dS_{-}$ & $-\epsilon(\alpha+1)$ &
$-\sqrt{\frac{-\epsilon(\alpha+1)}{3}}$&
$\epsilon(\alpha+1)<0$\\
\\
\hline \hline
\end{tabular*}
\end{table}
\end{center}
The first fixed point (M) represents an empty flat (Minkowski)
model. The parabola $y^2=x/3$ is the union of trajectories
representing flat models, $K=0$ in Eq.~(\ref{fried_dim}) (see
Figs.\ 1 and 3-7). The trajectories below the parabola represent
open models ($K=-1$), while trajectories above the parabola
represent closed models ($K=+1$). The second fixed point (E)
represents a generalized static Einstein universe. This requires
some form of inflationary matter and therefore may only exists
when $\alpha<-1/3$ if $\epsilon=+1$ and when $\alpha>-1/3$ if
$\epsilon=-1$. The last two points represent expanding and
contracting spatially flat de Sitter models ($dS_{\pm}$). These
points exist when the fluid permits an effective cosmological
constant point, $x_{\Lambda} :=\rho_\Lambda/\rho_c=
-\epsilon(\alpha+1)$; in addition $x_{\Lambda}>0$ must be true for
the fixed points to be in the physical region of the phase space.
There are further fixed points at infinity; these can be found by
studying the corresponding compactified phase space. The first
additional fixed point is at $x=y=\infty$ and represents a
singularity with infinite expansion and infinite energy density.
The second point is at $x=\infty$, $y=-\infty$ and represents a
singularity with infinite contraction and infinite energy density.
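The finite fixed points of Table~\ref{Tab1} can also be recovered symbolically from Eqs.~(\ref{HED}); a minimal sympy sketch (keeping $\epsilon$ as a symbol, with the understanding that $\epsilon^2=1$ so $1/\epsilon=\epsilon$) is:
\begin{verbatim}
import sympy as sp

x, y, alpha, eps = sp.symbols('x y alpha epsilon', real=True)

xdot = -3 * y * ((alpha + 1) * x + eps * x**2)
ydot = -y**2 - sp.Rational(1, 6) * ((3 * alpha + 1) * x + 3 * eps * x**2)

# Fixed points: solutions of xdot = ydot = 0
for sol in sp.solve([xdot, ydot], [x, y], dict=True):
    print(sol)
# Expected: (0, 0), (-(3*alpha+1)/(3*eps), 0) and
# (-(alpha+1)/eps, +/- sqrt(-(alpha+1)/(3*eps))); since eps**2 = 1,
# 1/eps = eps and these match the entries of Table 1.
\end{verbatim}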
\subsection{Generalities of stability analysis}\label{sec3_b}
\indent The stability nature of the fixed points can be found by
carrying out a linear stability analysis. In brief (see
e.g.~\cite{AP} for details), this involves analyzing the behavior
of linear perturbations $u_{j}=u_{j,o}+v_{j}$ around the fixed
points, which obey the equations $v'_{i}={\bf M} v_{j}$. The
matrix ${\bf M}$ is the Jacobian matrix of the dynamical system
and is of the form:
\begin{equation}
{\bf M}_{ij}=\frac{\partial f_{i}}{\partial
u_{j}}\Bigg|_{u_{k}=u_{k,o}}.
\end{equation}
The eigenvalues $\lambda_{i}$ of the Jacobian matrix evaluated at
the fixed points tell us the linear stability character of the
fixed points. The fixed point is said to be hyperbolic if the real
part of the eigenvalues is non-zero ($\mathbb{R}(\lambda_{i})\neq0
$). If all the real parts of the eigenvalues are positive
($\mathbb{R}(\lambda_{i})>0 $) the point is said to be a repeller.
Any small deviations from this point will cause the system to move
away from this state. If all the real parts are negative
($\mathbb{R}(\lambda_{i})<0 $), the point is said to be an
attractor. This is because if the system is perturbed away from
this state, it will rapidly return to the equilibrium state. If
some of the values are positive, while others are negative then
the point is said to be a saddle point. If the eigenvalues of the
fixed point are purely imaginary then the point is a center. If
the center nature of the fixed point is confirmed by some
non-linear analysis, then the trajectories will form a set of
concentric closed loops around the point. If the eigenvalues do
not fall into these categories, we will resort to numerical
methods to determine their stability.
The eigenvalues for the fixed points of the system
(Eq.'s~(\ref{HED})) are given in Table~\ref{Tab2} and the linear
stability character is given in Table~\ref{Tab3}.
\begin{center}
\begin{table}[h!]\caption{\label{Tab2}Eigenvalues for
the fixed points of the high energy regime system.}
\begin{tabular*}{0.47\textwidth}{@{\extracolsep{\fill}}ccc}
\hline \hline
Name & $\lambda_{1}$ & $\lambda_{2}$ \\
\hline
\\
$M$ & $0$ & $0$ \\
$E$ & $\sqrt{\epsilon}\frac{(3\alpha+1)}{3}$ & $-\sqrt{\epsilon}
\frac{(3\alpha+1)}{3}$ \\
$dS_{+}$ & $(\alpha+1)\sqrt{-3\epsilon(\alpha+1)}$ &
$-\frac{2}{3}\sqrt{-3\epsilon(\alpha+1)}$ \\
$dS_{-}$ & $-(\alpha+1)\sqrt{-3\epsilon(\alpha+1)}$ &
$\frac{2}{3}\sqrt{-3\epsilon(\alpha+1)}$ \\
\\
\hline \hline
\end{tabular*}
\end{table}
\end{center}
\begin{center}
\begin{table}[h!]\caption{\label{Tab3} The linear
stability of the fixed points for the high energy regime system.}
\begin{tabular*}{0.47\textwidth}{@{\extracolsep{\fill}}ccc}
\hline \hline
Name & $\epsilon=+1$ & $\epsilon=-1$ \\
\hline
\\
$M$ & undefined & undefined \\
$E$ & Saddle ($\alpha \neq -1/3$) &
Center ($\alpha \neq -1/3$) \\
$dS_{+}$ & Attractor ($\alpha < -1$) &
Saddle ($\alpha > -1$) \\
$dS_{-}$ & Repeller ($\alpha < -1$) &
Saddle ($\alpha > -1$) \\
\\
\hline \hline
\end{tabular*}
\end{table}
\end{center}
\subsection{The $\epsilon=+1$ case}\label{sec3a}
We first consider the system when we have a positive quadratic
energy density term ($\epsilon=+1$) in the high energy regime EoS.
We will concentrate on the region around the origin as this is
where the finite energy density fixed points are all located. The
plots have been created using the symbolic mathematics application
Maple 9.5. The individual plots are made up by three layers, the
first is a directional (represented by grey arrows) field plot of
the state space. The second layer represents the separatrices and
fixed points of the state space . A separatrix (black lines) is a
union of trajectories that marks a boundary between subsets of
trajectories with different properties and can not be crossed. The
fixed points are represented by black dots. The final layer
represents some example trajectories (grey lines) which have been
calculated by numerically integrating the system of equations for
a set of initial conditions. The character of the fixed point M is
undefined and so is determined numerically. The fixed point
representing the generalized Einstein static solution is a saddle
point. The fixed points representing the generalized expanding
(contracting) de Sitter points are attractor (repeller) points.
The trajectories or fixed points in the $y>0$ ($y<0$) region
represent expanding (contracting) models. We will mainly discuss
the right hand side of the state space (expanding models) as in
general the corresponding trajectory on the left hand side is
identical under time reversal.
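A plot of this kind is straightforward to reproduce with standard
numerical tools. The following minimal Python sketch assembles the
first and third layers (direction field and example trajectories) for
the quadratic system with $\nu=0$, using a representative parameter
value for the $\epsilon=+1$, $\alpha<-1$ sub-case of Fig.~1
($\alpha=-2$ is assumed here); the separatrices and fixed points of
the second layer would be overlaid from their analytic expressions.
\begin{verbatim}
# Minimal sketch (not the Maple code used for the figures):
# direction field plus numerically integrated example
# trajectories for the nu = 0 quadratic-EoS system, plotted
# with y = H/sqrt(rho_c) horizontal and x = rho/rho_c vertical.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

alpha, eps = -2.0, 1.0   # the eps = +1, alpha < -1 sub-case

def rhs(eta, u):
    x, y = u
    return [-3.0*y*((alpha + 1.0)*x + eps*x**2),
            -y**2 - ((3.0*alpha + 1.0)*x + 3.0*eps*x**2)/6.0]

# Layer 1: normalized direction field (grey arrows).
ys = np.linspace(-1.5, 1.5, 25)
xs = np.linspace(0.0, 2.0, 25)
Y, X = np.meshgrid(ys, xs)
U, V = np.empty_like(Y), np.empty_like(X)
for i in np.ndindex(X.shape):
    dx, dy = rhs(0.0, (X[i], Y[i]))
    norm = np.hypot(dx, dy) + 1e-12
    U[i], V[i] = dy/norm, dx/norm
plt.quiver(Y, X, U, V, color="0.7")

# Layer 3: example trajectories (grey lines) from chosen
# initial conditions, integrated forward in eta.
for x0, y0 in [(0.2, 0.5), (0.5, 0.45), (1.2, 0.7)]:
    sol = solve_ivp(rhs, (0.0, 20.0), [x0, y0],
                    rtol=1e-8, atol=1e-10)
    plt.plot(sol.y[1], sol.y[0], color="0.4")

plt.xlabel(r"$y$")
plt.ylabel(r"$x$")
plt.show()
\end{verbatim}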
\subsubsection{The $\alpha<-1$ sub-case}\label{sec3a1}
\begin{figure}[t!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig1.eps}
\caption{The phase space for the high energy regime system with
$\epsilon=+1$ and $\alpha<-1$. The upper (lower) region
corresponds to the case {\bf B1} ({\bf C1}).}
\end{center}
\label{fig1}
\end{figure}
The phase space of the system is considered when $\alpha<-1$ and
is shown in Fig. 1. The lowest horizontal line ($x=0$) is the
separatrix for open models ($K=-1$) and will be referred to as the
open Friedmann separatrix (OFS). The trajectories on the
separatrix represent Milne models ($x=0$, $K=-1$ and
$a(\eta)\propto\eta$) which are equivalent to a Minkowski
space-time in a hyperbolic co-ordinate system. The second higher
horizontal line ($x_{\Lambda}=-(\alpha+1)$) is the separatrix
dividing the regions of phantom
($x<x_{\Lambda}$) and non-phantom/standard
behavior ($x>x_{\Lambda}$); we will call this the phantom
separatrix (PS). The standard region corresponds to the case {\bf
B1}, while the phantom region corresponds to the case {\bf C1}. In
the phantom region the fluid violates the Null Energy Condition
($\rho+P<0$). This means the energy density is increasing
(decreasing) in the future for an expanding (contracting)
universe. In the standard case of the linear EoS in GR, this
occurs when ${\it w}<-1 $ and ultimately leads to a Type I
singularity~\cite{BLJM, CKW}. The parabola ($y^2=x/3$) represents
the separatrix for flat Friedmann models ($K=0$); we will call
this the flat Friedmann separatrix (FFS). The innermost thick
curve is the separatrix for closed Friedmann models ($K=+1$) and
will be called the closed Friedmann separatrix (CFS). The
separatrix has the form:
\begin{equation}
y^2 = \frac{x}{3} - \left[ \frac{A(\alpha+1)x}{(\alpha+1)+\epsilon
x} \right]^{\frac{2}{3(\alpha+1)}}.
\end{equation}
\noindent The constant $A$ is fixed by requiring that the separatrix
passes through the fixed point representing the
generalized Einstein static model ($E$), which is a saddle. The constant is given in
terms of the EoS parameters and has the form:
\begin{equation}
A=-\frac{2}{\epsilon(3\alpha+1)(\alpha+1)}\left(
-\frac{\epsilon(3\alpha+1)}{9}\right)^{\frac{3(\alpha+1)}{2}}.
\end{equation}
The Minkowski fixed point is located at the intersection of the
OFS and FFS. The generalized flat de Sitter fixed points are
located at the intersection of the PS and FFS. The generalized
Einstein static fixed point is located on the CFS. The
trajectories between the OFS and the PS ($0 < x < x_{\Lambda}$)
represent models which exhibit phantom behavior (the case {\bf
C1}). The open models in the phantom region are asymptotic to a
Milne model in the past and to a generalized flat de Sitter model
($dS_{+}$) in the future. The closed models in the phantom region
evolve from a contracting de Sitter phase, through a phantom phase
to an expanding de Sitter phase (phantom bounce). It is
interesting to note that unlike the standard GR case the phantom
behavior does not result in a Type I singularity but
asymptotically evolves to an expanding de Sitter phase. This is
similar to the behavior seen in the phantom generalized Chaplygin
gas case~\cite{BLJM}. The trajectories on the PS all represent
generalized de Sitter models ($x'=0$). The fixed points represent
generalized flat de Sitter models ($K=0$). The open models on the
PS represent generalized open de Sitter models ($K=-1$) in
hyperbolic co-ordinates. The closed models on the PS evolve from a
contracting phase to an expanding phase and represent generalized
closed de Sitter models ($K=+1$). The Friedmann equation can be
solved for such models to give:
\begin{equation}
\begin{array}{ll}
a(\eta) = \sqrt{\frac{3}{x_{o}}} \cosh\left[
\sqrt{\frac{x_{o}}{3}}(\eta-\eta_o)\right]~~ & \mbox{for $K=+1$}\,, \\
\\
a(\eta) = \mbox{e}^{\sqrt{\frac{x_{o}}{3}}(\eta-\eta_o)}
& \mbox{for $K=0$}\,, \\
\\
a(\eta) = \sqrt{\frac{3}{x_{o}}} \sinh\left[
\sqrt{\frac{x_{o}}{3}}(\eta-\eta_o)\right] & \mbox{for $K=-1$}\,.
\end{array}
\end{equation}
The region above the PS represents models which evolve in a
non-phantom/standard manner (the case {\bf B1}). The trajectories
in the expanding region ($y>0$) of the phase space are asymptotic
to a Type III singularity in the past\footnote{The Type III
singularity appears to be a generic feature of the high energy
regime EoS and can occur both in the future and the past.}. The
trajectories outside the FFS represent open models which evolve
from a Type III singularity to a flat de Sitter phase, as do the
trajectories on the FFS. The trajectories in between the CFS and
the FFS evolve from a Type III singularity to a flat expanding de
Sitter phase but may enter a phase of loitering. Loitering is
characterized by the Hubble parameter dipping to a low value over
a narrow red-shift range, followed by a rise again. In order to
see this more clearly, we have plotted the normalized Hubble
parameter ($y$) as a function of scale factor for three different
trajectories in Fig. 2.
\begin{figure}[t!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig2.eps}
\caption{The normalized Hubble parameter, $y$ for models with
differing curvature. Starting from the top we have the open, flat and
closed models.}
\end{center}
\label{fig2}
\end{figure}
The top two curves represent the open and flat models, with the
Hubble parameter dropping off more quickly for the flat Friedmann
model. The lowest curve is the Hubble parameter for the closed
model. The plot shows that the closed model evolves to a loitering
phase. Loitering cosmological models in standard cosmology were
first found for closed FLRW models with a cosmological constant.
More recently, brane-world models which loiter have been
found~\cite{SS}; these models are spatially flat but can behave
dynamically like a standard FLRW closed model. The interesting
point is that the models considered here loiter without the
need of a cosmological constant (due to the appearance of an
effective cosmological constant); the model behaves as a closed
model in the past and is asymptotically flat in the future. The trajectories inside
the CFS can have two distinct types of behavior corresponding to
the central regions above and below the generalized Einstein
static fixed point. Trajectories in the lower region represent
closed models which evolve from a contracting de Sitter phase,
bounce and then evolve to an expanding de Sitter phase. The
trajectories in the upper region evolve from a Type III
Singularity, expand to a maximum $a$ (minimum $x$) and then
re-collapse to a Type III singularity (we will refer to such
re-collapsing models as turn-around models).
\subsubsection{The $-1<\alpha<-1/3$ sub-case}\label{sec3a2}
\begin{figure}[t!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig3.eps}
\caption{The phase space for the high energy regime system with
$\epsilon=+1 $ and $-1<\alpha<-1/3$. The entire region corresponds
to the case {\bf A1}.}
\end{center}
\label{fig3}
\end{figure}
The phase space for the system when $-1<\alpha<-1/3$ is shown in
Fig. 3. The fixed points representing the flat generalized de
Sitter models are no longer in the physical region ($x>0$) of the
phase space. The open, flat and closed Friedmann separatrices
(OFS, FFS and CFS) remain the same. The phantom separatrix (PS)
is no longer present and all trajectories represent models with
non-phantom/standard fluids (this corresponds to the case {\bf
A1}). The main difference is that the generic future attractor is
now the Minkowski model. The trajectories between the OFS and FFS
now evolve from a Type III singularity to a Minkowski model, as do
the flat Friedmann models. The models between the FFS and CFS now
evolve from a Type III singularity to a Minkowski model with the
possibility of entering a loitering phase (as before the model is
asymptotically flat in the future). The trajectories inside the
CFS and above the Einstein static fixed point still represent
turn-around models. The trajectories inside the CFS and below the
Einstein static model now represent standard bounce models, that
is they evolve from a Minkowski model, contract to a finite size,
bounce and then expand to a Minkowski model.
\subsubsection{The $\alpha\geq-1/3$ sub-case}\label{sec3a3}
\begin{figure}[t!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig4.eps}
\caption{The phase space for the high energy regime system with
$\epsilon=+1$ and $\alpha\geq-1/3$. The entire region corresponds
to the case {\bf A1}.}
\end{center}
\label{fig4}
\end{figure}
Next we consider the system when $\alpha\geq-1/3$; the phase space
is shown in Fig. 4. The fixed point representing the Einstein
static models is now located in the $x<0$ region of the phase
space. The fluid in the entire physical region behaves in a
non-phantom manner and corresponds to the case {\bf A1}. The OFS
and FFS remain the same and the CFS is no longer present. The
trajectories between the OFS and FFS evolve from a Type III
singularity to a Minkowski model. All trajectories above the FFS
now represent turn-around models which start and terminate at a
Type III singularity. The behavior of the models is qualitatively
the same as that of the standard FLRW model with a linear EoS
where ${\it w}=\alpha$; in the linear EoS case the Type III
singularity is replaced by a standard ``Big Bang".
\subsection{The $\epsilon=-1$ case}\label{sec3b}
We now consider the system when we have a negative quadratic
energy density term ($\epsilon=-1$) in the high energy regime EoS.
The character of the fixed point M is still undefined. The fixed
point representing the generalized Einstein static model is now a
center. The fixed points representing the expanding/contracting
flat de Sitter points now have saddle stability. As before, the
trajectories or fixed points in the $y>0$ ($y<0$) region represent
expanding models (contracting models), the black lines represent
separatrices, grey lines represent example trajectories and fixed
points are represented by black dots.
\subsubsection{The $\alpha<-1$ sub-case}\label{sec3b1}
\begin{figure}[t!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig5.eps}
\caption{The phase space for the high energy regime system with
$\epsilon=-1$ and $\alpha<-1$. The entire region corresponds to
the case {\bf A2}. }
\end{center}
\label{fig5}
\end{figure}
The phase space of the system when $\alpha<-1$ is shown in Fig. 5.
The horizontal line ($x=0$) is still the open Friedmann separatrix
(OFS). The parabola is the flat Friedmann separatrix (FFS). The
intersection of the OFS and FFS coincides with the Minkowski fixed
point. All the trajectories in the physical region of the phase
space exhibit phantom behavior (corresponding to the case {\bf
A2}), the energy density increases in an expanding model. The
trajectories in the expanding (contracting) region in general
evolve to a Type III singularity in the future (past). The
trajectories between the OFS and the FFS are asymptotic to a Milne
model in the past and are asymptotic to a Type III singularity in
the future. The trajectories on the FFS start from a Minkowski
model and enter a phase of super-inflationary expansion and evolve
to a Type III singularity. Trajectories that start in a
contracting phase (during which the energy density decreases), reach
a minimum $a$ (minimum $x$) and then expand with increasing energy
density represent phantom bounce models. The
trajectories above the FFS represent closed models which evolve
through a phantom bounce, but start and terminate in a Type III
singularity. The behavior of the models is qualitatively the same
as that of the FLRW models with a phantom linear EoS (${\it
w}<-1$), except that in the linear EoS case the Type III
singularity is replaced by a Type I (``Big Rip") singularity.
\subsubsection{The $-1<\alpha<-1/3$ sub-case}\label{sec3b2}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig6.eps}
\caption{The phase space for the high energy regime system with
$\epsilon=-1 $ and $-1<\alpha<-1/3$. The upper (lower) region
corresponds to the case {\bf B2} ({\bf C2}). }
\end{center}
\label{fig6}
\end{figure}
The phase space for the system when $-1<\alpha < -1/3$ is shown in
Fig. 6. The lowest horizontal line ($x=0$) is the OFS. The second
higher horizontal line, $x_{\Lambda} = (\alpha+1) $ is the phantom
separatrix (PS), this divides the state space into regions of
phantom ($x>x_{\Lambda}$) and non-phantom/standard behavior
($x<x_{\Lambda}$). The phantom region corresponds to the case {\bf
B2} and the standard region corresponds to the case {\bf C2}. The
flat de Sitter ($dS_{\pm}$) points are located at the intersection
of the FFS and the PS. The open models in the standard matter
region ($0<x<x_{\Lambda}$) are asymptotic to open expanding
de Sitter models in the past and evolve to Minkowski models in the
future. The closed models in the region represent the standard
bounce models; that is, they evolve from a Minkowski model,
contract to a minimum $a$ (maximum $x$) and then expand to a
Minkowski model. The trajectories above the PS ($x>(\alpha+1)$)
all exhibit phantom behavior. The open models in this region are
asymptotic to open de Sitter models in the past and evolve to
a Type III singularity in the future. The closed models in the
region all represent models which undergo a phantom bounce but
start and terminate in a Type III singularity.
\subsubsection{The $\alpha\geq-1/3$ sub-case}\label{sec3b3}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig7.eps}
\caption{The phase space for the high energy regime system with
$\epsilon=-1$ and $\alpha\geq-1/3$. The upper (lower) region
corresponds to the case {\bf B2} ({\bf C2}).}
\end{center}
\label{fig7}
\end{figure}
We now consider the system when $\alpha\geq-1/3$; the phase space
is shown in Fig. 7. The OFS, FFS and the PS are all still present,
the phantom region still corresponds to the case {\bf B2} and the
standard region to the case {\bf C2}. The trajectories in the
phantom region ($x>x_{\Lambda}$) behave in a similar manner to the
previous case, as do the open models in the standard matter region
($0<x<x_{\Lambda}$). The main difference is in the region
representing closed models ($K=1$) with non-phantom behavior.
There is now a new fixed point which represents a generalized
Einstein static model ($E$). The closed models in the region now
represent oscillating models. This is represented by closed
concentric loops centered on the Einstein static fixed point.
These oscillating models also appear in the low energy system and
will be discussed in more detail later.
\section{Low energy regime Dynamics }\label{sec4}
\subsection{The dimensionless dynamical system}
\noindent We now consider the system of equations for the low
energy regime EoS, which can be simplified and expressed in terms
of the following dimensionless variables:
\begin{equation}
x=\frac{\rho}{|P_{o}|}\;,~~ y=\frac{H}{\sqrt{|P_{o}|}}\;,~~
\eta=\sqrt{|P_{o}|}t\;.
\end{equation}
\noindent The system of equations is then:
\begin{eqnarray}
y^{2} &=& \frac{x}{3} - \frac{K}{|P_{o}|a^2}, \\
y' &=& -y^{2} - \frac{1}{6} \left( 3\epsilon_p + (3\alpha +1)x \right) ,\\
x'&=& -3 y \left( \epsilon_p + (\alpha +1)x \right).
\end{eqnarray}
\noindent The discrete parameter $\epsilon_p$ denotes the sign of
the pressure term, $\epsilon_p\in\{-1,1\}$. The primes denote
differentiation with respect to the new $\eta$. The variables $x$
and $y$ are the new normalized energy density and Hubble
parameter. As before only the positive energy density region of
the phase space will be considered. The fixed points of the system
and the existence conditions are given in Table~\ref{Tab4}. As
before, by existence we mean the conditions on the parameters to
ensure $x\geq0$ and $x,y\in\mathbb{R}$.
\begin{center}
\begin{table}[h!]\caption{\label{Tab4}Location and existence conditions of the
fixed points of the low energy regime system.}
\begin{tabular*}{0.47\textwidth}{@{\extracolsep{\fill}}cccc}
\hline \hline
Name & $x$ & $y$ & Existence \\
\hline
\\
$E$ & $-\frac{3\epsilon_p }{(3\alpha +1)}$ & $0$ &
$\frac{\epsilon_p}{(3\alpha+1)}<0$ \\
$dS_{+}$ & $-\frac{\epsilon_p}{(\alpha+1)}$ &
$+\sqrt{\frac{-\epsilon_p}{3(\alpha+1)}}$ &
$\frac{\epsilon_p}{(\alpha+1)}<0$ \\
$dS_{-}$ & $-\frac{\epsilon_p}{(\alpha+1)}$ &
$-\sqrt{\frac{-\epsilon_p}{3(\alpha+1)}}$ &
$\frac{\epsilon_p}{(\alpha+1)}<0$ \\
\\
\hline \hline
\end{tabular*}
\end{table}
\end{center}
The Minkowski model ($x=y=0$) is no longer a fixed point of the
system. The first fixed point (E) represents a generalized static
Einstein model. This requires the overall effective equation of
state to be that of inflationary matter and therefore only exists
when $\epsilon_p/(3\alpha+1)<0$. The last two points represent
generalized expanding and contracting flat de Sitter models. These
points only exist if the fluid permits an effective cosmological
constant point $\tilde{x}_{\Lambda}=\tilde{\rho}_{\Lambda}/
{|P_{o}|} = -\epsilon_p /(\alpha+1)$; in addition,
$\tilde{x}_{\Lambda}\geq0$ is required for the points to be in the physical
region of the phase space. The eigenvalues of the equilibrium
points are given in Table~\ref{Tab5}, while the linear stability
character is given in Table~\ref{Tab6}.
\begin{center}
\begin{table}[h!]\caption{\label{Tab5}Eigenvalues of the
fixed points of the low energy regime system.}
\begin{tabular*}{0.47\textwidth}{@{\extracolsep{\fill}}ccc}
\hline \hline
Name & $\lambda_{1}$ & $\lambda_{2}$ \\
\hline
\\
$E$ & $\sqrt{-\epsilon_p}$ & $-\sqrt{-\epsilon_p}$ \\
$dS_{+}$ & $ -(\alpha+1)\sqrt{\frac{-3\epsilon_p}{(\alpha+1)}} $ & $
-\frac{2}{\sqrt{-3\epsilon_p(\alpha+1)}} $ \\
$dS_{-}$ & $ (\alpha+1)\sqrt{\frac{-3\epsilon_p}{(\alpha+1)}} $ & $
\frac{2}{\sqrt{-3\epsilon_p(\alpha+1)}}$ \\
\\
\hline \hline
\end{tabular*}
\end{table}
\end{center}
\begin{center}
\begin{table}[h!]\caption{\label{Tab6} The linear
stability of the fixed points for the low energy regime system.}
\begin{tabular*}{0.47\textwidth}{@{\extracolsep{\fill}}ccc}
\hline \hline
Name & $\epsilon_p=+1$ & $\epsilon_p=-1$ \\
\hline
\\
$E$ & Center ($\alpha \neq -1/3$) & Saddle ($\alpha \neq -1/3$) \\
$dS_{+}$ & Saddle ($\alpha < -1$) &
Attractor ($\alpha > -1$) \\
$dS_{-}$ & Saddle ($\alpha < -1$) &
Repeller ($\alpha > -1$) \\
\\
\hline \hline
\end{tabular*}
\end{table}
\end{center}
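As an explicit example of the analysis of Sec.~\ref{sec3_b}, taking
$u=(x,y)$ the Jacobian of the low energy regime system is
\begin{equation}
{\bf M}=\left(
\begin{array}{cc}
-3y(\alpha+1) & -3\left(\epsilon_p+(\alpha+1)x\right) \\
-\frac{(3\alpha+1)}{6} & -2y
\end{array}
\right),
\end{equation}
so that at the Einstein point, where $y=0$ and
$x=-3\epsilon_p/(3\alpha+1)$,
\begin{equation}
{\bf M}\Big|_{E}=\left(
\begin{array}{cc}
0 & \frac{6\epsilon_p}{(3\alpha+1)} \\
-\frac{(3\alpha+1)}{6} & 0
\end{array}
\right)
\qquad\Rightarrow\qquad \lambda^{2}=-\epsilon_p\,,
\end{equation}
in agreement with Table~\ref{Tab5}: the eigenvalues are real for
$\epsilon_p=-1$ (a saddle) and purely imaginary for $\epsilon_p=+1$
(a center), as listed in Table~\ref{Tab6}.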
\subsection{The $\epsilon_p=+1$ case}\label{sec4a}
We start by considering the system when we have a positive
constant pressure term ($\epsilon_p=+1$) in the low energy regime
EoS. The Minkowski ($x=y=0$) point is no longer present and the
Einstein static solution has the stability character of a center.
The fixed points representing the generalized
expanding/contracting de Sitter points ($dS_\pm$) now have saddle
stability. As before black lines represent separatrix, grey lines
represent example trajectories and black dots represent fixed
points of the system.
\subsubsection{The $\alpha<-1$ sub-case}\label{sec4a1}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig8.eps}
\caption{The phase space for the low energy regime system with
$\epsilon_p=+1$ and $ \alpha < -1$. The upper (lower) region
corresponds to the case {\bf E1} ({\bf F1}).}
\end{center}
\label{fig8}
\end{figure}
The phase space for the system when $ \alpha < -1$ is shown in
Fig. 8. The open Friedmann separatrix ($x=0$) is no longer
present, and the $x=y=0$ point is no longer a fixed point of the
system. The horizontal line ($\tilde{x}_\Lambda=-(\alpha+1)^{-1}$)
is the phantom separatrix (PS), dividing the state space into
regions with phantom ($x>\tilde{x}_\Lambda$) and standard
($x<\tilde{x}_\Lambda$) behavior. The phantom region corresponds
to the case {\bf E1} and the standard region corresponds to the
case {\bf F1}. The parabola $y^2 = x/3 $ is the separatrix
representing the flat Friedmann models (FFS); this divides the
remaining trajectories into open and closed models. The
intersection of the PS and FFS coincides with the fixed points of
the generalized flat de Sitter models.
The trajectories in the upper region that start in a contracting
phase (during which the energy density decreases) reach a minimum
$a$ (minimum $x$) and then expand; they represent phantom bounce
models which terminate in a Type I singularity. The closed models
in the phantom region ($x>\tilde{x}_\Lambda$) represent phantom
bounce models which start and terminate in a Type I singularity
\footnote{The Type I singularity is a generic feature of the low
energy regime EoS and can appear both in the future and the
past.}. The open models in the phantom region are asymptotic to
open de Sitter models in the past and evolve to a Type I
singularity in the future. The trajectories below the PS
($x<\tilde{x}_\Lambda$) represent models which all behave in a
standard manner (the {\bf F1} case). The open models in this
region are all non-physical as they all evolve to the $x<0$ region
of the phase space. The region corresponding to closed models
(above the FFS) contains a fixed point which represents the
generalized Einstein static (E) model. The region is filled by an
infinite set of concentric closed loops centered on the Einstein
static fixed point; the closed loops represent oscillating models.
The Friedmann equation for such models is given by:
\begin{equation}
y^2 = \frac{x}{3} - K \left[ \frac{\epsilon_p
+(\alpha+1)x}{B(\alpha+1)} \right]^{\frac{2}{3(\alpha+1)}}.
\end{equation}
\noindent The constant $B$ is fixed by the location of the
Einstein fixed point ($E$). The constant is given in terms of
$\alpha$ and $\epsilon_p$:
\begin{equation}
B=\frac{-2\epsilon_p}{(\alpha+1)(3\alpha+1)}\left(
\frac{3\alpha+1}{-\epsilon_p}\right)^{\frac{3(\alpha+1)}{2}}.
\end{equation}
These oscillating models appear for $-\infty<\alpha<-1/3$ when
$\epsilon_p=+1$ and are qualitatively similar to the oscillating
models seen in the high energy case. The exact behavior of the
variables for these models can be calculated by fixing the EoS
parameter $\alpha$. The qualitative behavior remains the same for
the models, however the maximum and minimum values of the
variables change. In the case of $\alpha=-2/3$ the equations are
greatly simplified and the scale factor oscillates such that:
\begin{equation}
a(\eta)= a_{o}\left( 1 + \sqrt{{1-K}} \sin(\eta_{o}-\eta) \right).
\end{equation}
\noindent The maximum and minimum scale factors are then:
\begin{equation}
a_{max}=a_{o}( 1 + \sqrt{{1-K}})\;,~~ a_{min}=a_{o}( 1 -
\sqrt{{1-K}})\;.
\end{equation}
\noindent The normalized Hubble parameter ($y$) is:
\begin{equation}
y=y_{o}\frac{ \sqrt{{1-K}} \cos(\eta_{o}-\eta)}{ 1 + \sqrt{{1-K}}
\sin(\eta_{o}-\eta) }.
\end{equation}
\noindent The maximum and minimum $y$ are given by:
\begin{equation}
y_{max}=y_{o}\sqrt{\frac{1-K}{K}}\;,~~
y_{min}=-y_{o}\sqrt{\frac{1-K}{K}}\; .
\end{equation}
\noindent The normalized energy density ($x$) is given by:
\begin{equation}
x=x_{o} \left( \frac{ 1 -\sqrt{{1-K}} \sin(\eta_{o}-\eta)}{ 1 +
\sqrt{{1-K}} \sin(\eta_{o}-\eta) } \right).
\end{equation}
\noindent The maximum and minimum $x$ are:
\begin{equation}
x_{max}=x_{o}\left( \frac{ 1 +\sqrt{{1-K}}}{ 1 - \sqrt{{1-K}} }
\right)\;,~~ x_{min}=x_{o}\left( \frac{ 1 -\sqrt{{1-K}}}{ 1 +
\sqrt{{1-K}} } \right)\;.
\end{equation}
\subsubsection{The $-1 < \alpha< -1/3$ sub-case}\label{sec4a2}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig9.eps}
\caption{The phase space for the low energy regime system with
$\epsilon_p=+1$ and $-1 \leq \alpha < -1/3$. The entire region
corresponds to the case {\bf D1}.}
\end{center}
\label{fig9}
\end{figure}
We now consider the case when $-1<\alpha< -1/3$; the phase space
is shown in Fig. 9. All trajectories in the physical region of the
phase space exhibit standard behavior and correspond to the case
{\bf D1}. There is only one fixed point in the $x\geq0$ region of
the phase space, this point represents the generalized Einstein
static model ($E$). The trajectories on the FFS represent models which evolve from a
standard ``Big Bang", approach a Minkowski model and then re-collapse to a
standard ``Big Crunch" (turn-around models). The open models (below
the FFS) are non-physical as they evolve into the $x<0$ region. The
trajectories above the separatrix represent closed models ($K>0$)
which oscillate indefinitely between a maximum and minimum $a$
(minimum and maximum $x$), as seen in the previous case.
\subsubsection{The $-1/3 \leq \alpha $ sub-case}\label{sec4a3}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig10.eps}
\caption{The phase space for the low energy regime system with
$\epsilon_p=+1$ and $-1/3 \leq \alpha $. The entire region
corresponds to the case {\bf D1}. }
\end{center}
\label{fig10}
\end{figure}
The phase space for the system when $-1/3 \leq \alpha $ is shown
in Fig. 10. As in the previous sub-case the fluid behaves in a
standard manner and corresponds to the case {\bf D1}. There are no
fixed points in the physical region of the phase space. The
parabola is the FFS and represents flat models which evolve from a
``Big Bang", approach a Minkowski model and then re-collapse
(turn-around models) to a ``Big Crunch". The open models (below
the FFS) are all non-physical as they evolve to the negative
energy density region ($x<0$) of the phase space. The closed
models evolve from a ``Big Bang", reach a maximum $a$ (minimum
$x$) and re-collapse to a ``Big Crunch".
\subsection{The $\epsilon_p=-1$ case}\label{sec4b}
We now consider the system when we have a negative constant
pressure term ($\epsilon_p=-1$) in the low energy regime EoS. As
before, the Minkowski ($x=y=0$) point is no longer a fixed point of
the system and the OFS is not present. The fixed point
representing the generalized Einstein static model ($E$) has the
stability character of a saddle. The fixed points representing the
generalized expanding (contracting) flat de Sitter points now have
attractor (repeller) stability.
\subsubsection{The $\alpha < -1$ sub-case}\label{sec4b1}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig11.eps}
\caption{The phase space for the low energy regime system with
$\epsilon_p=-1$ and $\alpha < -1$. The entire region corresponds
to the case {\bf D2}.}
\end{center}
\label{fig11}
\end{figure}
The phase space for the low energy system when $\alpha < -1$ is
shown in Fig. 11. All the trajectories in the $x>0$ region of the
phase space now exhibit phantom behavior and correspond to the
case {\bf D2}. The open models are all non-physical as they all
evolve from the negative energy density region of the phase space.
The flat and closed models represent phantom bounce models which
start and end in a Type I singularity. They evolve from a Type I
singularity, contract, bounce at a minimum $a$ (minimum $x$) and
expand to a Type I singularity.
\subsubsection{The $-1< \alpha < -1/3$ sub-case}\label{sec4b2}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig12.eps}
\caption{The phase space for the low energy regime system with
$\epsilon_p=-1$ and $-1 \leq \alpha < -1/3$. The upper (lower)
region corresponds to the case {\bf E2} ({\bf F2}).}
\end{center}
\label{fig12}
\end{figure}
The phase space for the system when $-1<\alpha < -1/3$ is shown in
Fig. 12. The horizontal line, $\tilde{x}_\Lambda =
(\alpha+1)^{-1}$ is the phantom separatrix (PS), dividing the
state space into regions with phantom ($x<\tilde{x}_\Lambda$) and
standard behavior ($x>\tilde{x}_\Lambda$). The standard region
corresponds to the case {\bf E2} and the phantom region
corresponds to the case {\bf F2}. The intersection of the PS and
FFS coincides with the fixed points of the generalized flat de
Sitter models ($dS_{\pm}$). The flat expanding (contracting) de
Sitter model is the generic future attractor (repeller). The open
models in the standard matter region ($x>\tilde{x}_\Lambda$)
evolve from a standard ``Big Bang" to a flat expanding de Sitter
phase. The closed models in this region evolve from a contracting
flat de Sitter phase, reach a minimum $a$ (maximum $x$), bounce
and then evolve to an expanding flat de Sitter phase. These models
represent standard bounce models with asymptotic de Sitter
behavior. The open models in the phantom region ($
x<\tilde{x}_\Lambda$) are all non-physical. The flat and closed
models in this region represent models exhibiting phantom bounce
behavior which avoid the ``Big Rip" and instead evolve to an
expanding flat de Sitter phase.
\subsubsection{The $-1/3\leq\alpha$ sub-case}\label{sec4b3}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig13.eps}
\caption{The phase space for the low energy regime system with
$\epsilon_p=-1$ and $-1/3\leq\alpha$. The upper (lower) region
corresponds to the case {\bf E2} ({\bf F2}).}
\end{center}
\label{fig13}
\end{figure}
We now consider the system in the parameter range
$-1/3\leq\alpha$; the phase space is shown in Fig. 13. The PS
($\tilde{x}_\Lambda=(\alpha+1)^{-1}$), FFS ($y^2=x/3$) and
generalized flat de Sitter points ($dS_{\pm}$) still remain. The
flat expanding (contracting) de Sitter model is the generic future
attractor (repeller). The innermost black curve is the closed
Friedmann separatrix (CFS) and coincides with the generalized
Einstein static fixed point ($E$), which has saddle stability. The
CFS is given by:
\begin{equation}
y^2 = \frac{x}{3} - D \left[ \frac{(\alpha+1)x-1}{2}
\right]^{\frac{2}{3(\alpha+1)}}.
\end{equation}
\noindent The integration constant $D$ can be
fixed by the location of the fixed point $E$. The constant is
given in terms of $\alpha$ and has the form:
\begin{equation}
D=\left({3\alpha+1}\right)^{-\frac{(3\alpha+1)}{3(\alpha+1)}}.
\end{equation}
The region below the PS ($x<\tilde{x}_\Lambda$) remains
qualitatively the same. The open models in the standard matter
region ($x>\tilde{x}_\Lambda$) all evolve from a ``Big Bang" to an
expanding flat de Sitter phase. The trajectories between the FFS
and the CFS also evolve from a ``Big Bang" to a generalized
expanding flat de Sitter model with the possibility of entering a
loitering phase. The models inside the CFS can behave in one of
two ways. The trajectories above the generalized Einstein static
point represent turn-around models which evolve from a ``Big
Bang", reach a maximum $a$ (minimum $x$) and then re-collapse to a
``Big Crunch". The trajectories below evolve from a contracting de
Sitter phase to an expanding de Sitter phase and represent bounce
models.
\section{The Full system}\label{sec5}
\begin{table*}[t!]
\caption{\label{Tab7}The locations of the fixed points of the full
system. The existence conditions are also given, that is
$x,y,\in\mathbb{R}$ and $x\geq0$. To simplify the expressions
special values of $\nu$ are used which can be expressed in terms
of $\alpha$, these values are $\nu_{1}=\frac{(3\alpha+1)^2}{36} $
and $ \nu_{2}=\frac{(\alpha+1)^2}{4}$. }
\begin{ruledtabular}
\begin{tabular}{ccccc}
Name & $x$ & $y$ & Existence ($\epsilon=+1$) & Existence ($\epsilon=-1$) \\
\hline
& & & & \\
$M$ & $0$ & $0$ & $\nu=0$ & $\nu=0$ \\
& & & & \\
$E_{1}$ & $ -\frac{(3\alpha+1)}{6\epsilon} + \frac{
\sqrt{(3\alpha+1)^2 - 36\epsilon\nu} }{6\epsilon} $ & $0$ & $\nu
\leq \nu_{1}$, $\alpha < -1/3$ & $ -\nu_{1} < \nu < 0$, $\alpha > -1/3$ \\
& & & $\nu<0$, $\alpha > -1/3$ & \\
& & & & \\
$E_{2}$ & $-\frac{(3\alpha+1)}{6\epsilon} - \frac{
\sqrt{(3\alpha+1)^2 - 36\epsilon\nu} }{6\epsilon} $ & $0$ & $0 <
\nu < \nu_{1}$, $\alpha < -1/3$ & $\nu >0$ , $\alpha < -1/3$ \\
& & & & $\nu \geq -\nu_{1}$, $\alpha > -1/3$\\
& & & & \\
$dS_{1,\pm}$ & $-\frac{(\alpha+1)}{2\epsilon} + \frac{
\sqrt{(\alpha+1)^2 - 4\epsilon\nu} }{2\epsilon}$ & $\pm
\left(-\frac{(\alpha+1)}{6\epsilon} + \frac{ \sqrt{(\alpha+1)^2 -
4\epsilon\nu} }{6\epsilon} \right)^{\frac{1}{2}}$ & $\nu \leq
\nu_{2}$, $\alpha < -1$ & $ -\nu_{2}<\nu<0$, $\alpha > -1$ \\
& & & $\nu<0$, $\alpha > -1$ & \\
& & & & \\
$dS_{2,\pm}$ & $-\frac{(\alpha+1)}{2\epsilon} - \frac{
\sqrt{(\alpha+1)^2 - 4\epsilon\nu} }{2\epsilon}$ & $\pm \left(
-\frac{(\alpha+1)}{6\epsilon} - \frac{ \sqrt{(\alpha+1)^2 -
4\epsilon\nu} }{6\epsilon} \right)^{\frac{1}{2}}$ & $0< \nu
< \nu_{2}$, $\alpha < -1$ & $\nu>0$, $\alpha < -1$ \\
& & & & $-\nu_{2} \leq \nu$, $\alpha > -1$ \\
& & & & \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsection{The dimensionless dynamical system}
We now consider the system of equations with the full quadratic
EoS; this can be simplified in a similar fashion to the previous
case by introducing a new set of variables:
\begin{equation}
x=\frac{\rho}{|\rho_{c}|}\;,~~ y=\frac{H}{\sqrt{|\rho_{c}|}}\;,~~
\eta=\sqrt{|\rho_{c}|}t\;,~~
\nu=\frac{P_{o}}{\sqrt{|\rho_{c}|}}\;.
\end{equation}
\noindent The system of equations then becomes:
\begin{eqnarray}
y^{2} &=& \frac{x}{3} - \frac{K}{|\rho_{c}|a^2}, \\
y' &=& -y^{2} - \frac{1}{6} \left( 3\nu + (3\alpha +1)x + 3\epsilon x^{2} \right) ,\\
x'&=& -3 y \left( \nu + (\alpha +1)x + \epsilon x^2 \right).
\end{eqnarray}
\noindent The parameter $\epsilon$ denotes the sign of the
quadratic term, $\epsilon\in\{-1,1\}$. The parameter $\nu$ is the
normalized constant pressure term. The primes denote
differentiation with respect to the new normalized time variable
$\eta$ and only the physical region of the phase space is
considered ($x\geq0$). The fixed points and their existence
conditions are given in Table~\ref{Tab7}. The phase space
undergoes a topological change for special values of the $\nu$
parameter; these values can be expressed in terms of $\alpha$ and
are:
\begin{eqnarray}
\nu_{1} = \frac{(3\alpha+1)^2}{36},\qquad
\nu_{2}=\frac{(\alpha+1)^2}{4}.
\end{eqnarray}
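These threshold values arise as discriminant conditions on the fixed
point equations. Setting $x'=0$ with $y\neq0$, and $y'=0$ on the
$y=0$ axis, gives the two quadratics
\begin{equation}
\epsilon x^{2}+(\alpha+1)x+\nu=0\;, \qquad
3\epsilon x^{2}+(3\alpha+1)x+3\nu=0\;,
\end{equation}
whose real roots are respectively the de Sitter points
($dS_{i,\pm}$) and the Einstein points ($E_{i}$) of
Table~\ref{Tab7}. Real roots require
$(\alpha+1)^{2}-4\epsilon\nu\geq0$ and
$(3\alpha+1)^{2}-36\epsilon\nu\geq0$, i.e.\ $\nu\leq\nu_{2}$ and
$\nu\leq\nu_{1}$ for $\epsilon=+1$, or $\nu\geq-\nu_{2}$ and
$\nu\geq-\nu_{1}$ for $\epsilon=-1$; as $\nu$ crosses these values
fixed points are created or destroyed and the topology of the phase
space changes.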
\begin{center}
\begin{table}[h!]\caption{\label{Tab9} The linear stability
character of the fixed points for the full system. The stability
character is only valid for choices of parameters which are
consistent with the existence conditions and constraints given
below.}
\begin{tabular*}{0.47\textwidth}{@{\extracolsep{\fill}}ccc}
\hline \hline
Name &$\epsilon=\pm1$ & Exceptions \\
\hline
\\
$M$ & Undefined & - \\
$E_{1}$ & Saddle & $36\epsilon\nu\neq(3\alpha+1)^2$\\
$E_{2}$ & Center & $36\epsilon\nu\neq(3\alpha+1)^2$ \\
$\,dS_{1,+}\,$ & Attractor & $4\epsilon\nu\neq(\alpha+1)^2$ \\
$\,dS_{1,-}\,$ & Repeller & $4\epsilon\nu\neq(\alpha+1)^2$ \\
$\,dS_{2,+}\,$ & Saddle & $4\epsilon\nu\neq(\alpha+1)^2$ \\
$\,dS_{2,-}\,$ & Saddle & $4\epsilon\nu\neq(\alpha+1)^2$ \\
\\
\hline \hline
\end{tabular*}
\end{table}
\end{center}
\begin{table*}[t!]
\caption{\label{Tab8}The eigenvalues derived from the linear
stability analysis of the fixed points for the full system. In
order to simplify the form of the eigenvalues we introduce the
following variables, $\gamma_{1}=(3\alpha+1) $, $
\gamma_{2}=(\alpha+1)$ and $\delta =
\sqrt{(\alpha+1)^2-4\epsilon\nu}$. These eigenvalues are only
valid for the choice of parameters consistent with the existence
conditions. }
\begin{ruledtabular}
\begin{tabular}{ccc}
Name & $\lambda_{1}$ & $\lambda_{2}$ \\
\hline
& & \\
$M$ & $0$ & $0$ \\
& & \\
$E_{1}$ & $ + \left( \frac{\gamma_{1}^{2}-\gamma_{1}
\sqrt{\gamma_{1}^{2}-36\epsilon\nu}-36\epsilon\nu}{18 \epsilon}
\right)^{\frac{1}{2}}$ & $ - \left(
\frac{\gamma_{1}^{2}-\gamma_{1}
\sqrt{\gamma_{1}^{2}-36\epsilon\nu}-36\epsilon\nu}{18 \epsilon}
\right)^{\frac{1}{2}}$ \\
& & \\
$E_{2}$ & $ + \left( \frac{\gamma_{1}^{2}+\gamma_{1}
\sqrt{\gamma_{1}^{2}-36\epsilon\nu}-36\epsilon\nu}{18 \epsilon}
\right)^{\frac{1}{2}}$ & $ - \left(
\frac{\gamma_{1}^{2}+\gamma_{1}
\sqrt{\gamma_{1}^{2}-36\epsilon\nu}-36\epsilon\nu}{18 \epsilon}
\right)^{\frac{1}{2}}$ \\
& & \\
$dS_{1,\pm}$ & $ \mp \sqrt{\frac{\delta-\gamma_{2}}{6\epsilon}}
\left( 1+\frac{3\delta}{2} \right) + \left( \frac{{6\delta^2}
(3\delta-3\gamma_{2}-4)+ 8(\gamma_{2}(3\delta-1)+
\delta)}{48\epsilon}\right)^{\frac{1}{2}} $ & $ \mp
\sqrt{\frac{\delta-\gamma_{2}}{6\epsilon}} \left(
1+\frac{3\delta}{2} \right) - \left( \frac{{6\delta^2}
(3\delta-3\gamma_{2}-4)+ 8(\gamma_{2}(3\delta-1)+
\delta)}{48\epsilon}\right)^{\frac{1}{2}} $ \\
& & \\
$dS_{2,\pm}$ & $ \pm \sqrt{\frac{-(\delta+\gamma_{2})}{6\epsilon}}
\left( \frac{3\delta}{2} -1 \right) + \left( -\frac{{6\delta^2}
(3\delta+3\gamma_{2}+4)+ 8(\gamma_{2}(3\delta+1)+
\delta)}{48\epsilon}\right)^{\frac{1}{2}} $ & $ \pm
\sqrt{\frac{-(\delta+\gamma_{2})}{6\epsilon}} \left(
\frac{3\delta}{2} -1 \right) - \left( -\frac{{6\delta^2}
(3\delta+3\gamma_{2}+4)+ 8(\gamma_{2}(3\delta+1)+
\delta)}{48\epsilon}\right)^{\frac{1}{2}} $ \\
& & \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\noindent As in the previous case, by existence we mean $x\geq0$
and $x,y\in\mathbb{R}$. The general eigenvalues derived from the
linear stability analysis are given in Table~\ref{Tab8}. The
linear stability character of the fixed points is given in
Table~\ref{Tab9}.
The system has six fixed points and the sign of $\epsilon$ no
longer affects the linear stability character of the fixed point
(but changes its position in the $x-y$ plane). The first fixed
point $M$ represents a Minkowski model and is only present if
$\nu=0$, the linear stability character is undefined. The second
fixed point $E_{1}$ represents an Einstein static model and has
the linear stability character of a saddle. The third fixed point
$E_{2}$ represents an Einstein static model with the linear
stability character of a center. In general, this fixed point is
surrounded by a set of closed concentric loops representing
oscillating models. The next pair of fixed points $dS_{1,\pm}$
represents a set of generalized flat de Sitter models; the
expanding (contracting) model has attractor (repeller) stability.
The next pair of fixed points $dS_{2,\pm}$ also represents a set
of generalized flat de Sitter models, but now they have saddle
stability. The separatrix for open Friedmann models (OFS) is only
present if $\nu=0$. The parabola $y^2=x/3$ (FFS) still separates
the open and closed models. The separatrix for the closed
Friedmann models (CFS) is present for a narrow range of the
parameters and always coincides with the fixed point representing
the generalized Einstein static model, $E_{1}$. The fluid permits
two possible effective cosmological constant points; they are
given by:
\begin{eqnarray}
x_{\Lambda,1} &=&
\frac{\rho_{\Lambda,1}}{|\rho_{c}|}=-\frac{(\alpha+1)}{2\epsilon}
+ \frac{ \delta }{2\epsilon} ,\\
x_{\Lambda,2} &=&
\frac{\rho_{\Lambda,2}}{|\rho_{c}|}=-\frac{(\alpha+1)}{2\epsilon}
- \frac{ \delta }{2\epsilon},
\end{eqnarray}
\noindent where $\delta=\sqrt{(\alpha+1)^2-4\epsilon\nu}$ as in
Table~\ref{Tab8}. There is also
a separatrix associated with each of the effective cosmological
constant points, which divides the regions of phantom and non-phantom
behavior. These separatrices will be referred to as the phantom
separatrices ($PS_{i}$, corresponding to the lines
$x=x_{\Lambda,i}$). For special
choices of parameters the separatrices coincide. The discussion of
the system will be split into the two categories, $\epsilon=+1$
and $\epsilon=-1$.
\subsection{The $\epsilon=+1$ case}\label{sec5a}
We first consider the system when we have a positive quadratic
energy density term ($\epsilon=+1$). The dynamical system can be
further sub-divided into sub-cases with different values of the
parameters $\alpha$ and $\nu$. The various subcases have been
highlighted in Table~\ref{Tab10}.
\begin{center}
\begin{table}[h!]\caption{\label{Tab10} The various sub-cases of
the full system when $\epsilon=+1$. The figure numbers given in
bold indicate the choices of parameters for which the state space
is qualitatively different to previously discussed cases.}
\begin{tabular*}{0.47\textwidth}{@{\extracolsep{\fill}}cccc}
\hline \hline
& $\alpha<-1$ & $-1\leq\alpha<-1/3$ & $-1/3\leq\alpha$ \\
\hline
& & & \\
$\nu>\nu_{1}$ & FIG.10 & FIG.10 & FIG.10 \\
$\nu=\nu_{1}$ & {\bf FIG.14} & {\bf FIG.14} & FIG.10 \\
$\nu_{2}<\nu<\nu_{1}$ & {\bf FIG.15} & {\bf FIG.15} & FIG.10 \\
$\nu=\nu_{2}$ & {\bf FIG.16} & {\bf FIG.15} & FIG.10 \\
$0<\nu<\nu_{2}$ & {\bf FIG.17} & {\bf FIG.15} & FIG.10 \\
$\nu=0$ & FIG.1 & FIG.3 & FIG.4 \\
$\nu<0$ & FIG.13 & FIG.13 & FIG.13 \\
& & & \\
\hline \hline
\end{tabular*}
\end{table}
\end{center}
The majority of sub-cases result in a phase space diagram which is
qualitatively similar to cases discussed in previous sections.
That is, the qualitative behavior of trajectories is the same even
though the functional form of $\rho(a)$ is different. The figure
numbers not in bold (standard text) indicate choices of parameters
for which the phase space is qualitatively similar to a previous
case, with the following differences:
\begin{itemize}
\item The regions which corresponded to different types of
behavior of the fluid now change (replaced by new $\rho(a)$
behavior):
\begin{itemize}
\item The case {\bf D1} $\to$ {\bf G1},
\item The case {\bf E2} $\to$ {\bf I3},
\item The case {\bf F2} $\to$ {\bf I2},
\end{itemize}
\item The Type I singularities are now replaced by Type III
singularities.
\end{itemize}
There is a narrow range of the parameters for which the state
space is qualitatively different. The figure numbers given in bold
in Table~\ref{Tab10} indicate the choices of parameters for which
the state space is qualitatively different to previously discussed
cases. We will now discuss the four sub-cases which are different
to those discussed in previous sections.
\subsubsection{The $\alpha<-1$, $\nu=\nu_{1}$ sub-case}\label{sec5a1}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig14.eps}
\caption{The phase space for the full system with $\epsilon=+1$,
$\alpha < -1$ and $\nu=\nu_{1}$ (additionally when $\alpha < -1/3$
and $\nu=\nu_{1}$). The entire region corresponds to the case
{\bf G1}.}
\end{center}
\label{fig14}
\end{figure}
The phase space of the system when $\alpha<-1$ and $\nu=\nu_{1}$
is shown in Fig. 14. As before the black lines represent
separatrix, grey lines represent example trajectories and fixed
points are represented by black dots. The fluid behaves in a
standard manner and corresponds to the case {\bf G1}. This choice
of parameters results in the two Einstein points ($E_{i}$)
coinciding. The resulting fixed point is highly non-linear and
cannot be classified into the standard linear stability categories
as in previous cases. The fixed point coincides with the CFS and
the parabola is the FFS. The open models are all non-physical as
they evolve to the $x<0$ region of the phase space. The models
between the FFS and the CFS represent turn-around models which
evolve from a Type III singularity\footnote{As with the case of
the high energy EoS, the Type III singularity is a generic feature
of the fully quadratic EoS.}, expand to a maximum $a$ (minimum
$x$) and then re-collapse. The trajectories above the CFS also
represent similar turn-around models.
\subsubsection{The $\alpha<-1$, $\nu_{2}<\nu<\nu_{1}$ sub-case}\label{sec5a2}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig15.eps}
\caption{The phase space for the full system with $\epsilon=+1$,
$\alpha < -1$ and $\nu_{2} < \nu < \nu_{1}$ (additionally when $-1
< \alpha < -1/3$ and $0 < \nu < \nu_{1}$). The entire region
corresponds to the case {\bf G1}.}
\end{center}
\label{fig15}
\end{figure}
The phase space of the system when $\alpha<-1$ and $\nu_{2} < \nu
< \nu_{1}$ is shown in Fig. 15. The fluid behaves in a standard
manner and corresponds to the case {\bf G1}. The Einstein fixed
point of the previous case splits into two individual Einstein
fixed points ($E_{i}$) via bifurcation. The first Einstein fixed
point ($E_{1}$) coincides with the CFS, while the second Einstein
fixed point ($E_{2}$) is located inside the lower region enclosed
by the CFS. Only the trajectories above the FFS differ from the
previous case. The trajectories between the CFS and FFS still
represent turn-around models which evolve from a Type III
singularity but may now enter a loitering phase. The trajectories
inside the CFS and above the Einstein fixed point ($E_{1}$)
evolve from a Type III singularity, reach a maximum $a$ and then
re-collapse to a Type III singularity. The trajectories below
$E_{1}$ represent closed oscillating models; the closed loops are
centered on the second Einstein fixed point ($E_{2}$).
\subsubsection{The $\alpha<-1$, $\nu=\nu_{2}$ sub-case}\label{sec5a3}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig16.eps}
\caption{The phase space for the full system with $\epsilon=+1$,
$\alpha < -1$ and $\nu=\nu_{2}$. The upper (lower) region
corresponds to the case {\bf H2} ({\bf H1}).}
\end{center}
\label{fig16}
\end{figure}
The phase space of the system when $\alpha<-1$ and $ \nu =
\nu_{2}$ is shown in Fig. 16. The fluid behaves in a standard
manner in both regions and the upper (lower) region corresponds to
the case {\bf H2} ({\bf H1}). This choice of parameters results in
the two sets of generalized de Sitter points ($dS_{i,\pm}$)
coinciding. The fixed points coincide with $PS_{i}$
($x=x_{\Lambda,i}$) and the FFS. The resulting fixed points are
highly non-linear; the points have shunt stability along the FFS
direction and the generalized expanding (contracting) de Sitter
point has attractor (repeller) stability along the $PS_{i}$
direction. The two Einstein points ($E_{i}$) and the CFS are still
present. In the $x<x_{\Lambda,i}$ region, the open models are all
non-physical as they evolve to the $x<0$ region and the closed
models represent oscillating models which are centered on the
Einstein point ($E_{2}$) with center linear stability. In the
$x>x_{\Lambda,i}$ region, the open models are asymptotic to a Type
III singularity in the past and a expanding flat de Sitter phase
($dS_{i,+}$) in the future. The trajectories between the FFS and
the CFS evolve from a Type III singularity to $dS_{i,+}$ with the
possibility of entering a loitering phase. The models inside the
CFS and above the $E_{1}$ point represent turn-around models which
asymptotically approach a Type III singularity. The closed models
inside the CFS and below the $E_{2}$ point are asymptotic to a
contracting de Sitter phase ($dS_{i,-}$) in the past and an
expanding de Sitter phase ($dS_{i,+}$) in the future. The generic
attractor in the $x>x_{\Lambda,i}$ region is the $dS_{i,+}$ fixed
point.
\subsubsection{The $\alpha<-1$, $0<\nu<\nu_{2}$ sub-case}\label{sec5a4}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig17.eps}
\caption{The phase space for the full system with $\epsilon=+1$,
$\alpha < -1$ and $0<\nu<\nu_{2}$. The upper, middle and lower
regions correspond to the cases {\bf I3}, {\bf I2} and {\bf I1}
respectively.}
\end{center}
\label{fig17}
\end{figure}
The phase space of the system when $\alpha<-1$ and $0<\nu<\nu_{2}$
is shown in Fig. 17. The upper (lower) horizontal line is the
$PS_{1}$ ($PS_{2}$). The region above $PS_{1}$ corresponds to the
case {\bf I3} and is qualitatively similar to the {\bf H2} region
in the previous sub-case. The region below $PS_{2}$ corresponds to
the case {\bf I1} and is qualitatively similar to the {\bf H1}
region in the previous sub-case. The set of generalized flat de
Sitter fixed points ($dS_{i,\pm}$) of the previous case splits into
two sets of generalized flat de Sitter fixed points via
bifurcation. The upper (lower) set of generalized de Sitter
points, $dS_{1,\pm}$ ($dS_{2,\pm}$) have attractor/repeller
(saddle) stability. The region between $PS_{1}$ and $PS_{2}$
corresponds to the case {\bf I2} and the fluid behaves in a
phantom manner. The open models in this region are asymptotic to
open de Sitter models in the past and flat de Sitter models in the
future. The closed models in the phantom region represent phantom
bounce models which asymptotically approach an expanding
(contracting) de Sitter phase in the future (past).
\subsection{The $\epsilon=-1$ case}\label{sec5b}
We now consider the system when we have a negative quadratic
energy density term ($\epsilon=-1$). As before the system can be
sub-divided into various sub-cases with different values of
the parameters $\alpha$ and $\nu$. The various sub-cases have been
highlighted in Table~\ref{Tab11}.
\begin{center}
\begin{table}[h!]\caption{\label{Tab11} The various sub-cases of
the $\epsilon=-1$ full system. The figure numbers given in bold
indicate the choices of parameters for which the phase space is
qualitatively different to previous cases.}
\begin{tabular*}{0.47\textwidth}{@{\extracolsep{\fill}}cccc}
\hline \hline
& $\alpha<-1$ & $-1\leq\alpha<-1/3$ & $-1/3\leq\alpha$ \\
\hline
& & & \\
$\nu>0$ & FIG.8 & FIG.8 & FIG.8 \\
$\nu=0$ & FIG.5 & FIG.6 & FIG.7 \\
$-\nu_{1}<\nu<0$ & FIG.11 & {\bf FIG.20} & {\bf FIG.18} \\
$\nu=-\nu_{1}$ & FIG.11 & {\bf FIG.20} & {\bf FIG.19} \\
$-\nu_{2}<\nu<-\nu_{1}$ & FIG.11 & {\bf FIG.20} & {\bf FIG.20} \\
$\nu=-\nu_{2}$ & FIG.11 & {\bf FIG.21} & {\bf FIG.21} \\
$\nu<-\nu_{2}$ & FIG.11 & FIG.11 & FIG.11 \\
& & & \\
\hline \hline
\end{tabular*}
\end{table}
\end{center}
\begin{figure}[h!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig18.eps}
\caption{The phase space for the full system with $\epsilon=-1$,
$\alpha > -1/3$ and $-\nu_{1} < \nu < 0$. The upper, middle and
lower regions correspond to the cases {\bf I6}, {\bf I5} and {\bf
I4} respectively.}
\end{center}
\label{fig18}
\end{figure}
\noindent As before, the figure numbers not in bold (standard
text) indicate choices of parameters for which the phase space is
qualitatively similar to a previous case, with the following
differences:
\begin{itemize}
\item The regions which corresponded to different types of
behavior of the fluid now change (replaced by new form of
$\rho(a)$):
\begin{itemize}
\item The case {\bf D2} $\to$ {\bf G2},
\item The case {\bf E1} $\to$ {\bf I6},
\item The case {\bf F1} $\to$ {\bf I5},
\end{itemize}
\item The Type I singularities are now replaced by Type III
singularities.
\end{itemize}
\noindent There are choices of parameters for which the phase
space is different (figure numbers in bold in Table~\ref{Tab11})
and these four sub-cases will be discussed in the following
sections.
\begin{figure}[t!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig19.eps}
\caption{The phase space for the full system with $\epsilon=-1$,
$\alpha > -1/3$ and $\nu = -\nu_{1}$. The upper, middle and lower
regions correspond to the cases {\bf I6}, {\bf I5} and {\bf I4}
respectively.}
\end{center}
\label{fig19}
\end{figure}
\subsubsection{The $\alpha>-1/3$, $-\nu_{1}<\nu<0$ sub-case}\label{sec5b1}
The phase space of the system when $\alpha>-1/3$ and
$-\nu_{1}<\nu<0$ is shown in Fig. 18. The upper (lower) horizontal
line at $x=x_{\Lambda,2}$ ($x=x_{\Lambda,1}$) is the $PS_{2}$
($PS_{1}$) (they have swapped position with respect to the
$\epsilon=+1$ case). The region above $PS_{2}$ corresponds to the
case {\bf I6}, the region below $PS_{1}$ corresponds to the case
{\bf I4} and the fluid behaves in a phantom manner in both
regions. The region between $PS_{1}$ and $PS_{2}$ corresponds to
the case {\bf I5} and the fluid behaves in a standard manner. The
lower set of generalized de Sitter points ($dS_{1,\pm}$ - at the
intersection of $PS_{1}$ and FFS) have attractor/repeller
stability, while the upper set ($dS_{2,\pm}$ - at the intersection
of $PS_{2}$ and FFS) have saddle stability. The CFS is located in
between $PS_{1}$ and $PS_{2}$ and coincides with the Einstein
point ($E_{1}$). The open models in the $x< x_{\Lambda,1}$ region
(the case {\bf I4}) are all non-physical as they evolve from the
$x<0$ region of the phase space. The closed models in this region
represent phantom bounce models which evolve from a contracting de
Sitter phase ($dS_{1,-}$) to an expanding de Sitter phase
($dS_{1,+}$). The open models in the standard region
($x_{\Lambda,1}< x< x_{\Lambda,2}$ corresponding to the case {\bf
I5}) are asymptotic to a generalized open de Sitter model in the
past and a generalized flat de Sitter model in the future (the
future attractor has lower $x$ and $y$). The models between the
CFS and the FFS in this region represent bounce models which
evolve from a contracting de Sitter phase to an expanding de Sitter
phase with the possibility of entering a loitering phase. The
models enclosed by the CFS can be split into two groups. The
models above the fixed point $E_{1}$ represent oscillating
models; the closed loops are centered on the fixed point $E_{2}$.
The models below the fixed point $E_{1}$ represent bounce models
which evolve from $dS_{1,-}$ to $dS_{1,+}$. In the
$x>x_{\Lambda,2}$ region (the case {\bf I6}) the open models are
asymptotic to generalized open de Sitter models in the past and a
Type III singularity in the future. The closed models in this
region represent phantom bounce models which evolve from a Type
III singularity, reach a minimum $a$ (minimum $x$) and then evolve
to a Type III singularity. The generalized expanding flat de
Sitter model, $dS_{1,+}$ (Type III singularity) is the generic
future attractor in the region $x<x_{\Lambda,2}$
($x>x_{\Lambda,2}$). The trajectories in the regions
$x<x_{\Lambda,1}$ and $x>x_{\Lambda,2}$ remain qualitatively
similar in the following two cases (Figs.\ 19 and 20).
\subsubsection{The $\alpha>-1/3$, $\nu=-\nu_{1}$ sub-case}\label{sec5b2}
The phase space of the system when $\alpha>-1/3$ and
$\nu=-\nu_{1}$ is shown in Fig. 19. The phase space is equivalent
to the previous sub-case, except for the region $x_{\Lambda,1}< x<
x_{\Lambda,2}$ (the case {\bf I5}). The open models in this region
are still asymptotic to generalized open (flat) de Sitter models
in the past (future). The behavior of the closed models has now
changed; there are no longer trajectories representing oscillating
models. The two generalized Einstein fixed points ($E_{i}$) have
now coalesced to form one fixed point via bifurcation. The closed
models above $E_{i}$ represent bounce models which evolve from
$dS_{1,-}$ to $dS_{1,+}$, with the possibility of entering a
loitering phase. The closed models below $E_{i}$ represent bounce
models which evolve from $dS_{1,-}$ to $dS_{1,+}$ without entering
a loitering phase.
\subsubsection{The $\alpha>-1/3$, $-\nu_{2}<\nu<-\nu_{1}$ sub-case}\label{sec5b3}
\begin{figure}[t!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig20.eps}
\caption{The phase space for the full system with $\epsilon=-1$,
$\alpha > -1/3$ and $-\nu_{2} < \nu < -\nu_{1}$ (additionally when
$-1 < \alpha < -1/3$ and $-\nu_{2} < \nu < 0$). The upper, middle
and lower regions correspond to the cases {\bf I6}, {\bf I5} and
{\bf I4} respectively.}
\end{center}
\label{fig20}
\end{figure}
The phase space of the system when $\alpha>-1/3$ and
$-\nu_{2}<\nu<-\nu_{1}$ is shown in Fig. 20. The phase space is
qualitatively similar to the previous sub-cases except for the
$x_{\Lambda,1}< x< x_{\Lambda,2}$ region. There are no longer any
fixed points representing generalized Einstein static models and
the CFS is no longer present. The open models in the region behave
as in previous sub-cases. The closed models in the region
represent bounce models, which evolve to an expanding (contracting)
de Sitter phase in the future (past) without the possibility of
entering a loitering phase.
\subsubsection{The $\alpha>-1/3$, $\nu=-\nu_{2}$ sub-case}\label{sec5b4}
\begin{figure}[t!]
\begin{center}
\hspace{0.4cm}\includegraphics[width=8.5cm,height=8.5cm,angle=270]{fig21.eps}
\caption{The phase space for the full system with $\epsilon=-1$,
$\alpha > -1/3$ and $\nu = -\nu_{2}$ (additionally when $-1 <
\alpha < -1/3$ and $\nu = -\nu_{2}$). The upper (lower) region
corresponds to the case {\bf H4} ({\bf H3}).}
\end{center}
\label{fig21}
\end{figure}
The next case is the phase space of the system when $\alpha>-1/3$
and $\nu=-\nu_{2}$ and is shown in Fig. 21. The fluid behaves in a
phantom manner in both regions and the upper (lower) region
corresponds to the case {\bf H4} ({\bf H3}). The two sets of
generalized de Sitter points ($dS_{i,\pm}$) have now coalesced
into a single set of generalized de Sitter points ($dS_{\pm}$)
which are located at the intersection of the FFS and the $PS_{i}$
which have also coalesced to form a single separatrix
($x_{\Lambda,1} = x_{\Lambda,2}$). The resulting fixed points are
highly non-linear: they have shunt stability along the FFS
direction and the generalized expanding (contracting) de Sitter
point has attractor (repeller) stability along the $PS_{i}$
direction. The Type III singularity is the generic attractor in
the upper region ($x>x_{\Lambda,i}$) and $dS_{+}$ is the
generic attractor in the lower region ($x<x_{\Lambda,i}$).
\section{Discussion and Conclusions}\label{sec6}
In this paper we have systematically studied the dynamics of
homogeneous and isotropic cosmological models containing a fluid
with a quadratic EoS. This has its own specific interest (see
Section I for a variety of motivations) and serves as a simple
example of more general EoS's. It can also be taken to represent
the truncated Taylor expansion of any barotropic EoS, and as such
it serves (with the right choice of parameters) as a useful
phenomenological model for dark energy, or even UDM. Indeed, we
have shown the dynamics to be very different and much richer than
the standard linear EoS case, finding that an almost generic
feature of the evolution is the existence of an accelerated phase,
most often asymptotically de Sitter, thanks to the appearance of
an {\it effective cosmological constant}. Of course, properly
building physical cosmological models would require considering the
quadratic EoS for dark energy or UDM together with standard matter
and radiation. Our analysis was instead aimed at deriving and
classifying the large variety of different dynamical effects that the
quadratic EoS fluid has when it is the dominant component. In this
respect, it should be noticed that a positive quadratic term in
the EoS allows, in the presence of another fluid such as radiation,
equi-density between the two fluids to occur twice, i.e. the
quadratic EoS fluid can be dominant at early and late times, and
subdominant in an intermediate era.
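For reference, the mechanism behind this effective cosmological constant
can be stated in one line (a schematic reminder rather than a new result):
in a FRW background the conservation equation
$\dot{\rho}=-3H\left[\rho+P(\rho)\right]$ admits the fixed point
\[
P(\rho_{\Lambda})=-\rho_{\Lambda},
\]
so that any root $\rho_{\Lambda}$ of $\rho+P(\rho)=0$ is dynamically
equivalent to a cosmological constant. For a quadratic EoS this condition
is a quadratic equation in $\rho$ and can admit up to two such roots,
which is why two separatrices ($x_{\Lambda,1}$ and $x_{\Lambda,2}$) and
the corresponding sets of generalized de Sitter points appear in the
phase spaces above.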
In Section II we have made some general remarks, mostly based on
conservation of energy only and as such valid independently of any
specific theory of gravity. We have also given the various
possible functional forms of the energy density as a function of
the scale factor, $\rho(a)$, and listed the many subcases, grouped
into three main cases, which we call: {\it i)} the high energy
models (no constant $P_o$ term); {\it ii)} the low energy affine
EoS with no quadratic term; {\it iii)} the complete quadratic EoS.
The quadratic term in the EoS affects the high energy behavior as
expected but can additionally affect the dynamics at relatively
low energies. First, in Section III, we have concentrated on the
high energy models. The specific choice of parameters fixes the
behavior of the fluid: it can behave in a phantom or standard
manner. In the case of phantom behavior, $\rho$ can tend to zero
at early times and either tend to an effective cosmological
constant ({\bf C1}) or a Type III singularity ({\bf A2}) at late
times. Alternatively $\rho$ can also tend to an effective
cosmological constant in the past ({\bf B2}) and a Type III
singularity at late times. When the fluid behaves in a standard
manner, it can tend to a Type III singularity at early times, with
$\rho$ either tending to zero ({\bf A1}) or to an effective
cosmological constant ({\bf B1}) at late times. The fluid can also
behave as an effective cosmological constant at early times with
$\rho$ decaying away at late times ({\bf C2}). The effective
cosmological constant allows for the existence of generalized
Einstein static ($E$) and flat de Sitter ($dS_{\pm}$) fixed points
which modify the late time behavior. The main new feature is the
existence of models which evolve from a Type III singularity and
asymptotically approach a flat de Sitter model ($dS_{+}$). Of
specific interest are the closed models of this type, which can
also evolve through an intermediate loitering phase.
Neglecting the quadratic term, in Section IV we have considered
the low energy models with affine EoS. As expected, the constant
term in the quadratic EoS affects the relatively low energy
behavior. It can result in a variety of qualitatively different
dynamics with respect to those of the linear EoS case. Again, the
fluid can have a phantom or standard behavior. When the fluid
behaves in a phantom manner, $\rho$ can tend to an effective
cosmological constant ({\bf F2}), or can tend to a Type I (``Big
Rip") singularity ({\bf D2}) at late times. Alternatively, $\rho$
can also tend to an effective cosmological constant in the past
and a Big Rip in the future ({\bf E1}). When the fluid behaves in a
standard manner, we recover the linear EoS at early times and
$\rho$ can either tend to zero ({\bf D1}) or to an effective
cosmological constant ({\bf E2}) at late times. The fluid can also
behave as an effective cosmological constant at early times, with
$\rho$ decaying away at late times ({\bf F1}). The effective
cosmological constant allows for the existence of new fixed
points ($E$ and $dS_{\pm}$). Comparing with standard linear EoS
cosmology, the most interesting differences are new closed models
which oscillate indefinitely and new closed models which exhibit
phantom behavior which do not terminate in a ``Big Rip", but
asymptotically approach an expanding flat de Sitter model (flat
and closed models where the fluid behaves as case {\bf F2}).
When we study the dynamics of the system with the complete
quadratic EoS, Section V, we see the appearance of new fixed
points representing generalized Einstein and de Sitter models
which are not present in the high/low energy systems. The various
models of the simplified systems are present in the full system
(but with differing $\rho(a)$), but there are also models with
qualitatively new behavior. As with the previous cases, in the
case of phantom behavior, $\rho$ can tend to zero at early times
and either tend to an effective cosmological constant ({\bf H3}
and {\bf I4}) or a Type III singularity ({\bf G2}) at late times.
Alternatively $\rho$ can also tend to an effective cosmological
constant in the past ({\bf H4} and {\bf I6}) and a Type III
singularity at late times. Finally, in the phantom case $\rho$ can
also tend to an effective cosmological constant both in the past
and future ({\bf I2}). In the case of standard behavior the fluid
can tend to a Type III singularity at early times, with $\rho$
either tending to zero ({\bf G1}) or to an effective cosmological
constant ({\bf H2} and {\bf I3}) at late times. The fluid can also
behave as an effective cosmological constant at early times with
$\rho$ decaying away at late times ({\bf H1} and {\bf I1}).
Finally, in the standard fluid case $\rho$ can also tend to an
effective cosmological constant both in the past and future ({\bf
I5}). There are models which evolve from a Type III singularity,
reach a maximum $a$ (minimum $x$) and then evolve to a Type III
singularity. These also enter a loitering phase before and after
the turn around point. We also see bounce models which enter a
loitering phase and asymptotically tend to generalized expanding
(contracting) de Sitter models at late (early) times.
Of specific interest are models which evolve from a Type III
singularity as opposed to the standard ``Big Bang" ({\bf A1, B1}).
The simplest models of this type correspond to the high energy EoS
with a positive quadratic term (it is possible to recover standard
behavior at late times). For these models the positive quadratic
energy density term has the potential to force the initial
singularity to be isotropic. The effects of such a fluid on
anisotropic Bianchi I and V models is investigated in Paper
II~\cite{AB}. This is achieved by carrying out a dynamical
systems analysis of these models. Additionally, using a linearized
perturbative treatment we study the behavior of inhomogeneous and
anisotropic perturbations at the singularity. The singularity is
itself represented by an isotropic model and, if the perturbations
of the latter decay in the past, this model represents the local
past attractor in the larger phase space of inhomogeneous and
anisotropic models (within the validity of the perturbative
treatment). This would mean that in inhomogeneous anisotropic
models with a positive non-linear term (at least quadratic) in
the EoS, isotropy is a natural outcome of {\it generic initial
conditions}, unlike in the standard linear EoS case where generic
cosmological models are, in GR, highly anisotropic in the past.
\acknowledgments KNA is supported by PPARC (UK). MB is partly
supported by a visiting grant by MIUR (Italy). The authors would
like to thank Chris Clarkson, Mariam Bouhmadi-L\'{o}pez and Roy
Maartens for useful comments and discussions.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,258 |
Q: Debugging discrepancy between authentication in firebase app and rules I'm developing an app using Firebase Auth and Firestore.
I have the following code to create a game document:
// Assumes `db` and `auth` are the initialized Firestore and Auth instances
// exported from the app's own config module, and that `makeid` is an
// app-specific helper that generates a random id (the paths are illustrative).
import { doc, setDoc } from "firebase/firestore";
import { db, auth } from "./firebase";
import { makeid } from "./utils";

export const createGame = async () => {
  const title = makeid(6); // random 6-character document id
  const myDoc = doc(db, "rides", title);
  const payload = {
    createdBy: auth.currentUser.uid // uid of the (anonymous) signed-in user
  };
  console.log(title, payload, auth.currentUser);
  await setDoc(myDoc, payload);
  return title;
};
This results in the following being printed to the console:
5YTm0R {createdBy: 'pLCzrgwSQSa9KxaW5OlU2l18CGY2'} UserImpl {providerId: 'firebase', proactiveRefresh: ProactiveRefresh, reloadUserInfo: {…}, reloadListener: null, uid: 'pLCzrgwSQSa9KxaW5OlU2l18CGY2', …}
As you can see from the log, the current user exists. It is an anonymous user, so isAnonymous is true when you expand the object.
However, the request fails, and when I look at the emulator's console, the current user is shown as null in the Firebase Emulator, whereas it is non-null in the application.
I'm wondering if there's a particular set of steps I need to take for Firestore to use the current user's authentication when making a request? Thanks!
A: Posting as community wiki:
As per @mikesol, the issue was resolved by upgrading to the newest Firebase version.
npm i firebase@latest
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,447 |
\section{Introduction}
As Internet services, such as search and social networking, become
more widespread in recent years, the energy consumption of data centers
has been skyrocketing. In 2005, data centers worldwide consumed an
estimated 152 billion kilowatt-hours (kWh) of energy, roughly 1\%
of the world's total energy consumption \cite{Koomey2008}. Power consumption
at such level was enough to power half of Italy \cite{worldenergy07}.
Energy cost is approaching overall hardware cost in data centers \cite{barroso2005price},
and is growing 12\% annually \cite{ESPreport2007}.
Recent works have explored electricity price fluctuation in time and
geographically load balancing across data centers to cut short the
electricity bill; see e.g., \cite{liu2011greening,wendell2010donar,qureshi2009cutting,urgaonkar2011optimal}
and the references therein. Meanwhile, it is nevertheless critical
to minimize the actual energy footprint in individual data centers.
Energy consumption in a data center is a product of the PUE\footnote{Power
usage effectiveness (PUE) is defined as the ratio between the
amount of power entering a data center and the power used to run its
computer infrastructure. The closer PUE is to one, the better the energy
utilization.} and the energy consumed by the servers. There have been substantial
efforts in improving PUE, e.g., by optimizing cooling \cite{rasmussen113electrical,sharma2005balance}
and power management \cite{raghavendra2008no}. We focus on reducing
the energy consumed by the servers in this paper.
Real-world statistics reveal three observations that suggest ample
saving is possible in server energy consumption \cite{chase2001managing,pinheiro2001load,chen2008energy,krioukov2011napsac,fan2007power,barroso2007case}.
First, workload in a data center often fluctuates significantly on
the timescale of hours or days, exhibiting a large {}``peak-to-mean''
ratio. Second, data centers today often provision for far more than
the observed peak to accommodate both the predictable workload and
the unpredictable flash crowd
\footnote{In May 2011, Amazon\textquoteright{}s data center was down for hours
due to a surge of downloads of Lady Gaga's song {}``Born This Way''.
}. Such static over-provisioning results in low average utilization
for most servers in data centers. Third, a low-utilized or idle server
consumes more than 60\% of its peak power. These observations imply
that a large portion of the energy consumed by servers goes into powering
nearly-idle servers, and it can be best saved by turning off servers
during the off-peak periods.
One promising technique exploiting the above insights is \emph{dynamic
provisioning}, which turns on a minimum number of servers to meet
the current demand and dispatches the load among the running servers
to meet Service Level Agreements (SLA), making the data center {}``power-proportional''.
There have been a significant amount of efforts in developing such
technique, initiated by the pioneering works \cite{chase2001managing}\cite{pinheiro2001load}
a decade ago. Among them, one line of works \cite{meisner2009powernap,krioukov2011napsac,chen2008energy}
exam the practical feasibility and advantage of dynamic provisioning
using real-world traces, suggesting substantial gain is indeed possible
in practice. Another line of works \cite{chase2001managing,qian2011server,lin2011dynamic,chen2008energy}
focus on developing algorithms by utilizing various tools from queuing
theory, control theory, and machine learning, providing algorithmic
insights in synthesizing effective solutions. These existing works
provide a number of schemes that deliver favorable performance justified
by theoretic analysis and/or practical evaluations. See \cite{DataCenterEnergySurvey10}
for a recent survey.
The effectiveness of these exciting schemes, however, usually relies
on being able to predict future workload to a certain extent, e.g.,
using model fitting to forecast future workload from historical data
\cite{chen2008energy}. This naturally leads to the following questions:
\begin{itemize}
\item Can we design \emph{online} solutions that require zero future workload
information, yet still achieve \emph{close-to-optimal} performance?
\item Can we characterize the benefit of knowing future workload in dynamic
provisioning?
\end{itemize}
Answers to these questions provide fundamental understanding on how
much performance gain one can have by exploiting future workload information
in dynamic provisioning.
Recently, Lin \emph{et al.} \cite{lin2011dynamic} propose an algorithm
that requires almost-zero future workload information\footnote{The
LCP algorithm proposed in \cite{lin2011dynamic} only relies on
an estimate of the job arrival rate of the upcoming slot.}
and achieves a competitive ratio of 3, i.e., the energy consumption
is at most 3 times the minimum (computed with perfect future knowledge).
In simulations, they further show the algorithm can exploit available
future workload information to improve the performance. These results
are very encouraging, indicating that a complete answer to the questions
is possible.
In this paper, we further explore answers to the questions, and make
the following contributions:
\begin{itemize}
\item We consider a scenario where a running server consumes a fixed amount
energy per unit time. We reveal that the dynamic provisioning problem
has an elegant structure that allows us to solve it in a {}``divide-and-conquer''
manner. This insight leads to a full characterization of the optimal
solution, achieved by using a centralized procedure.
\item We show that, interestingly, the optimal solution can also be attained
by the data center adopting a simple \emph{last-empty-server-first}
job-dispatching strategy\footnote{Readers might notice that this job-dispatching strategy shares some
similarity with the most-recently-busy strategy used in the DELAYEDOFF
algorithm \cite{gandhi2010optimality}. Actually there are subtle
yet important differences, which will be discussed in detail in Section
\ref{ssec:comparison.with.DELAYEDOFF}.
} and each server \emph{independently} solving a classic ski-rental
problem. We build upon this architectural insight to design three
\emph{decentralized} online algorithms, all have improved competitive
ratios than state-of-the-art solutions. One is a deterministic algorithm
with competitive ratio $2-\alpha$, where $0\leq\alpha\leq1$ is the
fraction of a critical window in which future workload information
is available. The other two are randomized algorithms with competitive
ratios $\left(e-\alpha\right)/\left(e-1\right)\approx1.58-\alpha/\left(e-1\right)$ and
$e/\left(e-1+\alpha\right)$, respectively. We prove that $2-\alpha$
and $e/\left(e-1+\alpha\right)$ are the best competitive ratios for
deterministic and randomized online algorithms under our last-empty-server-first
job-dispatching strategy.
\item Our results lead to a fundamental observation: under the cost model
that a running server consumes a fixed amount energy per unit time,
\emph{future workload information beyond the critical window will
not} \emph{improve the dynamic provisioning performance. }The size
of the critical window is determined by the wear-and-tear cost and
the unit-time energy cost of running one server.
\item Our algorithms are simple and easy to implement. We demonstrate the
effectiveness of our algorithms in simulations using real-world traces.
We also compare their performance with state-of-the-art solutions.
\end{itemize}
The rest of the paper is organized as follows. We formulate the problem
in Section \ref{sec:ps}. Section \ref{sec:offline} reveals the important
structure of the formulated problem, characterizes the optimal solution,
and designs a simple decentralized offline algorithm achieving the
optimal. In Section \ref{sec:online}, we propose the online algorithms
and provide performance guarantees. Section \ref{sec:expr} presents
the numerical experiments and Section \ref{sec:conclusion} concludes
the paper.
\section{Problem Formulation\label{sec:ps}}
\subsection{Settings and Models \label{ssec:settings}}
We consider a data center consisting of a set of homogeneous servers.
Without loss of generality, we assume each server has a unit service
capacity\footnote{In practice, a server's service capacity can be determined from the
knee of its throughput and response-time curve \cite{krioukov2011napsac}.
}, i.e., it can only serve one unit workload per unit time. Each server
consumes $P$ energy per unit time if it is on and zero otherwise.
We define $\beta_{on}$ and $\beta_{off}$ as the cost of turning
a server on and off, respectively. Such wear-and-tear cost, including
the amortized service interruption and hard-disk failure cost \cite{qian2011server},
is comparable to the energy cost of running a server for several hours
\cite{lin2011dynamic}.
The results we develop in this paper apply to both of the following
two types of workload\footnote{There are also other types of workload, such as the bin-packing model
considered in \cite{krioukov2011napsac}. Extending the results in
this paper to those workload models is of great interest and is left
for future work.
}:
\begin{itemize}
\item {}``mice'' type of workload, such as {}``request-response'' web
serving. Each job of this type has a small transaction size and short
duration. A number of existing works \cite{chase2001managing,pinheiro2001load,lin2011dynamic,doyle2003model}
model such workload by a discrete-time fluid model. In the model,
time is chopped into equal-length slots. Jobs arriving in one slot
get served in the same slot. Workload can be split among running servers
at arbitrary granularity like fluid.
\item {}``elephant'' type of workload, such as virtual machine hosting
in cloud computing. Each job of this type has a large transaction
size, and can last for a long time. We model such workload by a continuous-time
brick model. In this model, time is continuous, and we assume one
server can only serve one job\footnote{Other than the obvious reason that the service capacity can only fit
one job, there could also be SLAs in cloud computing that require
that the job not share the physical server with other jobs due to
security concerns.
}. Jobs arrive and depart at arbitrary times, and no two job arrival/departure
events happen simultaneously.
\end{itemize}
For the discrete-time fluid model, servers toggled at the discrete
time epoch will not interrupt job execution and thus no job migration
is incurred. This neat abstraction allows research to focus on server
on-off scheduling to minimize the cost. For the continuous-time brick
model, when a server is turned off, the long-lasting job running on
it needs to be migrated to another server. In general, such non-trivial
migration cost needs to be taken into account when toggling servers.
In the following, we present our results based on the continuous-time
brick model. We add discussions to show the algorithms and results
are also applicable to the discrete-time fluid model.
Let $x\left(t\right)$ and $a\left(t\right)$ be the number of {}``on''
servers (serving or idle) and jobs at time $t$ in the data center,
respectively. To keep the problem interesting, we assume that $a\left(t\right)$
is not always zero. Under our workload model, $a(t)$ at most increases
or decreases by one at any time $t$.
To focus on the cost within $[0,T]$, we set $x(0)=a\left(0\right)$
and $x\left(T\right)=a\left(T\right)$. Note such boundary conditions
include the one considered in the literature, e.g., \cite{lin2011dynamic},
as a special case, where $x(0)=a(0)=x(T)=a(T)=0$.
Let $P_{on}(t_{1},t_{2})$ and $P_{off}(t_{1},t_{2})$ denote the
total wear-and-tear cost incurred by turning on and off servers in
$[t_{1},t_{2}]$, respectively:
\begin{equation}
P_{on}(t_{1},t_{2})\triangleq\underset{\delta\rightarrow0^{+}}{\lim}\left\{ \beta_{on}\underset{i=1}{\overset{\left\lceil \left(t_{2}-t_{1}\right)/\delta\right\rceil }{\sum}}\left[x\left(t_{1}+i\delta\right)-x\left(t_{1}+\left(i-1\right)\delta\right)\right]^{+}\right\} \label{eq:on-cost}
\end{equation}
and
\begin{equation}
P_{off}(t_{1},t_{2})\triangleq\underset{\delta\rightarrow0^{+}}{\lim}\left\{ \beta_{off}\underset{i=1}{\overset{\left\lceil \left(t_{2}-t_{1}\right)/\delta\right\rceil }{\sum}}\left[x\left(t_{1}+\left(i-1\right)\delta\right)-x\left(t_{1}+i\delta\right)\right]^{+}\right\} .\label{eq:off-cost}
\end{equation}
\subsection{Problem Formulation}
We formulate the problem of minimizing server operation cost in a
data center in $[0,T]$ as follows:
\begin{eqnarray}
\mathbf{SCP}: & \textrm{min} & P\varint_{0}^{T}x\left(t\right)dt+P_{on}(0,T)+P_{off}(0,T)\label{eq: obj}\\
& \textrm{s.t}. & x(t)\geq a(t),\forall t\in[0,T],\label{eq:const1}\\
& & x(0)=a(0),x(T)=a(T),\label{eq:asym.constraint}\\
& \mbox{var} & x(t)\in\mathbb{Z}^{+},t\in[0,T],
\end{eqnarray}
where $\mathbb{Z}^{+}$ denotes the set of non-negative integers.
The objective is to minimize the sum of server energy consumption
and the wear-and-tear cost. Constraints in \eqref{eq:const1} say
the service capacity must satisfy the demand. Constraints in \eqref{eq:asym.constraint}
are the boundary conditions.
\textbf{Remarks}: (i) The problem \textbf{SCP} does not consider the
possible migration cost associated with the continuous-time discrete-load
model. Fortunately, our results later show that we can schedule servers
according to the optimal solution, and at the same time dispatch jobs
to servers in a way that aligns with their on-off schedules, thus
incurring no migration cost. Hence, the minimum server operation cost
remains unaltered even we consider migration cost in the problem \textbf{SCP}
(which can be rather complicated to model). (ii) The formulation remains
the same with discrete-time fluid workload model where there is no
job migration cost to consider. (iii) The problem\textbf{ SCP} is
similar to a common one considered in the literature, e.g., in \cite{lin2011dynamic},
with a specific cost function. The difference is that we allow more
flexible boundary conditions and on/off wear-and-tear cost modeling,
and are more precise in the decision variables being integers instead
of real numbers. (iv) In the problem setting, we assume that the power
consumption of a server is a constant $P$. Actually, the results of
this paper also apply to the following unit-time power consumption
model: the power consumption of $x$ busy servers is $F\left(x\right)$
and the unit-time power consumption of an idle server is $P$. This
is because the total power consumption under this model is $\varint_{0}^{T}\left\{ F\left[a\left(t\right)\right]+P\left[x\left(t\right)-a\left(t\right)\right]\right\} dt+P_{on}(0,T)+P_{off}(0,T)$.
Since $\varint_{0}^{T}\left\{ F\left[a\left(t\right)\right]-Pa\left(t\right)\right\} dt$
is constant for a given $a\left(t\right)$, minimizing the total power
consumption is equivalent to minimizing the objective of the above \textbf{SCP} problem.
There are infinitely many integer variables $x\left(t\right)$,
$t\in[0,T]$, in the problem \textbf{SCP}, which makes it challenging
to solve. Moreover, in practice the data center has to solve the problem
without knowing the workload $a(t)$, $t\in[0,T]$ ahead of time.
Next, we first focus on designing an off-line solution, including (i)
a job-dispatching algorithm and (ii) a server on-off scheduling algorithm,
to solve the problem \textbf{SCP} optimally. We then extend the solution
to its on-line versions and analyze their performance guarantees with
or without (partial) future workload information.
\section{Optimal Solution and Offline Algorithm \label{sec:offline}}
We study the off-line version of the server cost minimization problem
\textbf{SCP}, where the workload $a(t)$ in $[0,T]$ is given.
We first identify an elegant structure of its optimal solution, which
allows us to solve the problem in a {}``divide-and-conquer'' manner.
That is, to solve the problem \textbf{SCP} in $[0,T]$, it suffices
to split it into smaller problems over certain \emph{critical segments}
and solve them independently. We then derive a simple and decentralized
algorithm, upon which we build our online algorithms.
\subsection{Critical Times and Critical Segments}
Given $a(t)$ in $[0,T]$, we identify a set of critical times $\left\{ T_{i}^{c}\right\} _{i}$
and construct the \emph{critical segments} as follows. \\
\rule{1\columnwidth}{1pt}
\noindent \textbf{Critical Segment Construction Procedure: }
First, traversing $a(t)$, we identify all the jobs arrival/departure
epochs in $[0,T]$. The first critical time is $T_{1}^{c}=0$. $T_{1}^{c}$
can be a job-arrival epoch or a job-departure epoch, or no job departs/arrives
the system at $T_{1}^{c}$. If no job departs or arrives at $T_{1}^{c}$,
$T_{1}^{c}$ is considered as a job-arrival epoch. Next we find $T_{i+1}^{c}$
inductively, given that $T_{i}^{c}$ is known.
\begin{itemize}
\item If $T_{i}^{c}$ is a job-arrival epoch, e.g., the first critical time,
then $T_{i+1}^{c}$ is the first job-departure epoch after $T_{i}^{c}$.
One example is the epoch $T_{2}^{c}$ in Fig. \ref{fig:ct.cs.example}.
\item If $T_{i}^{c}$ is a job-departure epoch, we first try to find the
first arrival epoch $\tau$ after $T_{i}^{c}$ so that $a\left(\tau\right)=a\left(T_{i}^{c}\right)$.
If such $\tau$ exists, then we set $T_{i+1}^{c}=\tau$. One example
is the epoch $T_{4}^{c}$ in Fig. \ref{fig:ct.cs.example}. If no
such $\tau$ exists, we set $T_{i+1}^{c}$ to be the next job
departure epoch. One example is the $T_{3}^{c}$ in Fig. \ref{fig:ct.cs.example}.
\end{itemize}
Upon reaching time epoch $T$, we find all, say $M$, critical times.
We define the critical segments as the period between two consecutive
critical times, i.e., $\left[T_{i}^{c},T_{i+1}^{c}\right]$, $1\leq i\leq M-1$.
\\
\rule[0.5ex]{1\columnwidth}{1pt}
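For concreteness, the construction can be sketched in a few lines of
Python (purely illustrative; the event representation and the convention
for the value of $a(t)$ at event epochs are choices made for this sketch,
not part of the formal procedure above):
\begin{verbatim}
def critical_times(events, a0, horizon):
    # events: chronologically sorted (time, kind) pairs, kind '+' = job
    # arrival, '-' = job departure; a0 = a(0); horizon = T.
    # Convention (an assumption of this sketch): a(t) at an event epoch
    # includes both an arriving and a departing job, i.e. it is the
    # value *before* a departure decrements the workload.
    ann, a = [], a0
    for t, kind in events:
        if kind == '+':
            a += 1
        ann.append((t, kind, a))
        if kind == '-':
            a -= 1

    crit = [0.0]
    kind_c, a_c = '+', a0          # T_1^c is treated as an arrival epoch
    if ann and ann[0][0] == 0:     # unless an event actually occurs at t = 0
        kind_c, a_c = ann[0][1], ann[0][2]
    while crit[-1] < horizon:
        t_c = crit[-1]
        if kind_c == '+':          # next critical time: first departure after T_i^c
            nxt = next(((t, k, v) for t, k, v in ann
                        if t > t_c and k == '-'), None)
        else:                      # first arrival returning to the same level ...
            nxt = next(((t, k, v) for t, k, v in ann
                        if t > t_c and k == '+' and v == a_c), None)
            if nxt is None:        # ... otherwise the next departure epoch
                nxt = next(((t, k, v) for t, k, v in ann
                            if t > t_c and k == '-'), None)
        if nxt is None:
            break
        crit.append(nxt[0])
        kind_c, a_c = nxt[1], nxt[2]
    if crit[-1] != horizon:        # the horizon closes the last segment
        crit.append(horizon)
    return crit
\end{verbatim}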
The critical segments have interesting properties. For example, they
are disjoint except at the boundary points, and they together fully
cover the time interval $[0,T]$. Moreover, we observe that workload
expresses interesting properties in these critical segments.
\begin{figure}
\centering\includegraphics[width=0.9\columnwidth]{cri_seg_ex}
\caption{Illustration of critical times and critical segments. $T_{1}^{c}$
to $T_{7}^{c}$ are critical times, and they form six critical segments.
$a(t)$ is of Type-I in $\left[T_{1}^{c},T_{2}^{c}\right]$, Type-II
in $\left[T_{2}^{c},T_{3}^{c}\right]$, Type-III in $\left[T_{5}^{c},T_{6}^{c}\right]$,
and Type-IV in $\left[T_{3}^{c},T_{4}^{c}\right]$.}
\label{fig:ct.cs.example}
\end{figure}
\begin{prop}
The workload $a(t)$ in any critical segment \textup{$\left[T_{i}^{c},T_{i+1}^{c}\right]$}
must be one of the following four types\textup{:\label{prop:property}}
\begin{itemize}
\item Type-I: \textup{workload is non-decreasing in $\left[T_{i}^{c},T_{i+1}^{c}\right]$.
\label{enu:property 1}}
\item Type-II: \textup{workload is step-decreasing in $\left[T_{i}^{c},T_{i+1}^{c}\right]$.
That is, $a\left(t\right)=a\left(T_{i}^{c}\right)-1,\forall t\in\left(T_{i}^{c},T_{i+1}^{c}\right]$
and $a\left(t\right)\leq a\left(T_{i}^{c}\right)-1,\forall t\in\left(T_{i+1}^{c},T\right]$.
\label{enu:property 2}}
\item Type-III: \textup{workload is of {}``U-shape'' in $\left[T_{i}^{c},T_{i+1}^{c}\right]$.
That is, $a\left(T_{i+1}^{c}\right)=a\left(T_{i}^{c}\right)$ and
$a\left(t\right)=a\left(T_{i}^{c}\right)-1,\forall t\in\left(T_{i}^{c},T_{i+1}^{c}\right)$.
\label{enu:property 3}}
\item Type-IV: \textup{workload is of {}``canyon-shape'' in $\left[T_{i}^{c},T_{i+1}^{c}\right]$.
That is, $a\left(T_{i+1}^{c}\right)=a\left(T_{i}^{c}\right)$, $a\left(t\right)\leq a\left(T_{i}^{c}\right)-1$
and not always identical, $\forall t\in\left(T_{i}^{c},T_{i+1}^{c}\right)$.
\label{enu:property 4}}
\end{itemize}
\end{prop}
\begin{IEEEproof}
Refer to Appendix \ref{apx:proof_1}.
\end{IEEEproof}
Examples of these four types of $a(t)$ are shown in Fig. \ref{fig:ct.cs.example}.
\subsection{Structure of Optimal Solution }
Let $x^{*}(t)$, $t\in[0,T]$, be an optimal solution to the problem
\textbf{SCP}, and the corresponding minimum server operation cost
be $P^{*}$. We have the following observation.
\begin{lem}
$x^{*}\left(t\right)$ must meet $a\left(t\right)$ at every critical
time, i.e., $x^{*}\left(T_{i}^{c}\right)=a\left(T_{i}^{c}\right)$,
$1\leq i\leq M$.\label{lem:lemma 2}\end{lem}
\begin{IEEEproof}
Refer to Appendix \ref{apx:proof_2}.
\end{IEEEproof}
Lemma \ref{lem:lemma 2} not only presents a necessary condition for
a solution $x(t)$ to be optimal, but also suggests a {}``divide-and-conquer''
way to solve the problem \textbf{SCP} optimally.
Consider the following sub-problem of minimizing server operation
cost in a critical segment $\left[T_{i}^{c},T_{i+1}^{c}\right]$,
$1\leq i\leq M-1$:
\begin{eqnarray}
& \textrm{min} & P\varint_{T_{i}^{c}}^{T_{i+1}^{c}}x\left(t\right)dt+P_{on}\left(T_{i}^{c},T_{i+1}^{c}\right)+P_{off}\left(T_{i}^{c},T_{i+1}^{c}\right)\label{eq:sub}\\
& \textrm{s.t}. & x(t)\geq a(t),\forall t\in\left[T_{i}^{c},T_{i+1}^{c}\right],\\
& & x(T_{i}^{c})=a(T_{i}^{c}),x(T_{i+1}^{c})=a(T_{i+1}^{c}),\label{eq:sub-const}\\
& \mbox{var} & x(t)\in\mathbb{Z}^{+},t\in\left[T_{i}^{c},T_{i+1}^{c}\right].
\end{eqnarray}
Let its optimal value be $P_{i}^{*}$, $1\leq i\leq M-1$. We have
the following observation.
\begin{lem}
\label{prop: lower bound}$\underset{i=1}{\overset{M}{\sum}}P_{i}^{*}$
is a lower bound of the optimal server operation cost of the problem
\textbf{SCP}, i.e.,
\begin{equation}
P^{*}\geq\underset{i=1}{\overset{M}{\sum}}P_{i}^{*}.\label{eq:P_opt_lower_bound}
\end{equation}
\end{lem}
\begin{IEEEproof}
Refer to Appendix \ref{apx:proof_3}.
\end{IEEEproof}
\textbf{Remark}: Over arbitrarily chopped segments, the sum of their minimum
server operation costs may not be a lower bound for $P^{*}$. However, as
we will see later, computed based on critical segments, Eqn. \eqref{eq:P_opt_lower_bound}
establishes a lower bound of $P^{*}$ and is achievable, thanks to
the structure of $x^{*}\left(t\right)$ outlined in Lemma \ref{lem:lemma 2}.
Suggested by Lemma \ref{prop: lower bound}, it suffices to solve
individual sub-problems for all critical segments in $[0,T]$, and
combine the corresponding solutions to form an optimal solution to
the overall problem \textbf{SCP} (note the optimal solutions of sub-problems
connect seamlessly). The special structures of $a(t)$ in individual
critical segment, summarized in Proposition \ref{prop:property},
are the key to tackle each sub-problem. \rule{1\columnwidth}{1pt}\\
\textbf{Optimal Solution} \textbf{Construction Procedure}:
We visit all the critical segments in $[0,T]$ sequentially, and construct
an $x(t)$, $t\in[0,T]$. For a critical segment $\left[T_{i}^{c},T_{i+1}^{c}\right]$,
$1\leq i\leq M-1$, we check the $a(t)$ in it:
\begin{enumerate}
\item the $a(t)$ is of Type-I or Type-II: we simply set $x(t)=a(t)$, for
all $t\in$$\left[T_{i}^{c},T_{i+1}^{c}\right]$.
\item the $a(t)$ is of Type-III:
\begin{itemize}
\item if $\beta_{on}+\beta_{off}\geq P\cdot\left(T_{i+1}^{c}-T_{i}^{c}\right)$,
then we set $x\left(t\right)=a\left(T_{i}^{c}\right),\forall t\in\left[T_{i}^{c},T_{i+1}^{c}\right]$;
\item otherwise, we set $x(T_{i}^{c})=a(T_{i}^{c})$, $x(T_{i+1}^{c})=a(T_{i+1}^{c})$,
and $x\left(t\right)=a\left(T_{i}^{c}\right)-1,\forall t\in\left(T_{i}^{c},T_{i+1}^{c}\right)$.
\end{itemize}
\item the $a(t)$ is of Type-IV:
\begin{itemize}
\item if $\beta_{on}+\beta_{off}\geq P\cdot\left(T_{i+1}^{c}-T_{i}^{c}\right)$,
then we set $x\left(t\right)=a\left(T_{i}^{c}\right),\forall t\in\left[T_{i}^{c},T_{i+1}^{c}\right]$;
\item Otherwise, we construct $x\left(t\right)$ as follows. In Type-IV
critical segment, each job-departure epoch $\tau$ in $\left[T_{i}^{c},T_{i+1}^{c}\right]$
has a corresponding job-arrival epoch $\tau^{'}$ in $\left[T_{i}^{c},T_{i+1}^{c}\right]$
such that $a\left(\tau\right)=a\left(\tau^{'}\right)$ and $a\left(t\right)<a\left(\tau\right),\forall t\in\left(\tau,\tau^{'}\right)$.
Find the first job-departure epoch $\tau_{1}$ after $T_{i}^{c}$
in $\left[T_{i}^{c},T_{i+1}^{c}\right]$ that has a corresponding job-arrival
epoch $\tau_{1}^{'}$ such that $\beta_{on}+\beta_{off}\geq P\cdot\left(\tau_{1}^{'}-\tau_{1}\right)$.
Then find the first job-departure epoch $\tau_{2}$ after $\tau_{1}^{'}$
that has a corresponding job-arrival epoch $\tau_{2}^{'}$ such that
$\beta_{on}+\beta_{off}\geq P\cdot\left(\tau_{2}^{'}-\tau_{2}\right)$.
Continue in this way until we reach $T_{i+1}^{c}$. Upon reaching time epoch
$T_{i+1}^{c}$, we find all, say $L$, such job-departure and arrival
epoch pairs $\left(\tau_{1},\tau_{1}^{'}\right)$,$\left(\tau_{2},\tau_{2}^{'}\right)$...$\left(\tau_{L},\tau_{L}^{'}\right)$.
If $L=0$, which means there does not exist such job-departure and
arrival epoch pair, we set $x\left(t\right)=a\left(t\right),\forall t\in\left[T_{i}^{c},T_{i+1}^{c}\right]$,
otherwise, we set $x\left(t\right)=a\left(t\right),\forall t\in\left[T_{i}^{c},\tau_{1}\right)\cup\left(\tau_{1}^{'},\tau_{2}\right)\cup...\cup\left(\tau_{L}^{'},T_{i+1}^{c}\right]$
and $x\left(t\right)=a\left(\tau_{l}\right),\forall t\in\left[\tau_{l},\tau_{l}^{'}\right]$
for $l=1,2,....L$.
\end{itemize}
\end{enumerate}
\rule{1\columnwidth}{1pt}
The following theorem shows that the lower bound of $P^{*}$ in \eqref{eq:P_opt_lower_bound}
is achieved by using the above procedure.
\begin{thm}
The \textbf{Optimal Solution} \textbf{Construction Procedure} terminates
in finite time, and the resulting $x\left(t\right)$, $t\in[0,T]$,
is an optimal solution to the problem \textbf{SCP}.\label{Thm:opt_sol_pro_is_opt}\end{thm}
\begin{IEEEproof}
Refer to Appendix \ref{apx:proof_4}.
\end{IEEEproof}
The proof utilizes proof-by-contradiction and counting arguments.
\subsection{Intuitions and Observations}
Constructing optimal $x(t)$ for critical segments with Type-I/II/III
workload is rather straightforward. In the following, we go through
the construction of $x(t)$ for the critical segment with Type-IV
workload shown in Fig. \ref{fig:optimal_type_IV}, to bring out the
intuition. We define
\begin{equation}
\Delta\triangleq\frac{\beta_{on}+\beta_{off}}{P}\label{eq:critical_interval}
\end{equation}
as the \emph{critical interval} over which the energy cost of maintaining
an idle server matches the cost of turning it off at the beginning
of the interval and turning it on at the end of the interval.
\begin{figure}
\centering\includegraphics[width=0.65\columnwidth]{optimal_type_IV}
\caption{An example of a critical segment $[0,T]$ (after offsetting the time
origin to the beginning of the segment) with Type-IV $a(t)$. This
critical segment is further decomposed into smaller critical segments
$[T_{1}^{c},T_{2}^{c}]$, $[T_{2}^{c},T_{3}^{c}]$, and $[T_{3}^{c},T_{4}^{c}]$.
Interval $\delta_{1}=T_{3}^{c}-T_{2}^{c}$, $\delta_{2}=T_{3}^{c}-T_{1}^{c}$,
and $\delta_{3}=T_{4}^{c}-T_{2}^{c}$.}
\label{fig:optimal_type_IV}
\end{figure}
During the critical segment $[0,T]$ with Type-IV workload shown in
Fig. \ref{fig:optimal_type_IV}, the system starts and ends with 2
jobs and 2 running servers. Let the servers with their jobs leaving
at time $0$ and $T_{2}^{c}$ be S1 and S2, respectively.
At time $0$, a job leaves. The procedure compares $\Delta$ and $T$.
If $\Delta>T$, then it sets $x(t)=2$ and keeps both servers running
for all $t\in[0,T]$; otherwise, it further applies the \textbf{Critical
Segment Construction Procedure} and decomposes the critical segment
into three small ones $[T_{1}^{c},T_{2}^{c}]$, $[T_{2}^{c},T_{3}^{c}]$,
and $[T_{3}^{c},T_{4}^{c}]$, as shown in Fig. \ref{fig:optimal_type_IV}.
The first small critical segment $[T_{1}^{c},T_{2}^{c}]$ has a Type-II
workload, thus the procedure sets $x(t)=1$ for $t\in[T_{1}^{c},T_{2}^{c}]$.
The second small segment $[T_{2}^{c},T_{3}^{c}]$ has a Type-III workload;
thus for all $t\in[T_{2}^{c},T_{3}^{c}]$, the procedure maintains
$x(t)=1$ if $\Delta>\delta_{1}$ and sets $x(t)=0$ otherwise. The
last small segment $[T_{3}^{c},T_{4}^{c}]$ has a Type-I workload,
thus the procedure sets $x(t)=1$ for $t\in[T_{3}^{c},T_{4}^{c})$
and $x(T_{4}^{c})=2$.
These actions reveal two important observations, upon which we build
a decentralized off-line algorithm to solve the problem \textbf{SCP}
optimally.
\begin{itemize}
\item Newly arrived jobs should be assigned to servers in the reverse order
of their last-empty-epochs.
\end{itemize}
In the example, when a new job arrives at time $T_{3}^{c}$, the procedure
implicitly assigns it to server S2 instead of S1. As a result, S1
and S2 have empty periods of $T$ and $\delta_{1}$, respectively.
This may sound counter-intuitive as compared to an alternative {}``fair''
strategy that assigns the job to the early-emptied server S1, which
gives S1 and S2 empty periods of $\delta_{2}$ and $\delta_{3}$,
respectively. Different job-dispatching gives different empty-period
distributions. It turns out that a more skewed empty-period distribution leads
to more energy saving.
The intuition is that job-dispatching should try to make every server
empty as long as possible so that the on-off option, if explored,
can save abundant energy.
\begin{itemize}
\item Upon being assigned an empty period, a server only needs to \emph{independently}
make locally energy-optimal decision.
\end{itemize}
It is straightforward to verify that in the example, upon a job leaving
server S1 at time $0$, the procedure implicitly assigns an empty-period
of $T$ to S1, and turns S1 off if $\Delta<T$ and keeps it running
at idle state otherwise. Similarly, upon a job leaving S2 at time
$T_{2}^{c}$, S2 is turned off if $\Delta<\delta_{1}$ and stays idle
otherwise. Such comparisons and decisions can be done by individual
servers themselves.
\subsection{Offline Algorithm Achieving the Optimal Solution}
The \textbf{Optimal Solution} \textbf{Construction Procedure} determines
how many running servers to maintain at time $t$, i.e., $x^{*}(t)$,
to achieve the optimal server operation cost $P^{*}$. However, as
discussed in Section \ref{ssec:settings}, under the continuous-time
brick model, scheduling servers on/off according to $x^{*}(t)$ might
incur non-trivial job migration cost.
Exploiting the two observations made in the case-study at the end
of last subsection, we design a simple and decentralized off-line
algorithm that gives an optimal $x^{*}(t)$ and \emph{incurs no job
migration cost}.\\
\rule{1\columnwidth}{1pt}\\
\textbf{Decentralized Off-line Algorithm} \textbf{A0}:
\noindent \textbf{By a central job-dispatching entity}: it implements
a last-empty-server-first strategy. In particular, it maintains a
stack (i.e., a Last-In/First-Out queue) storing the IDs for all idle
or off servers. Before time $0$, the stack contains IDs for all the
servers that are not serving.
\begin{itemize}
\item Upon a job arrival: the entity pops a server ID from the top of the
stack, and assigns the job to the corresponding server (if the server
is off, the entity turns it on).
\item Upon a job departure: a server just turned idle; the entity pushes
that server's ID into the stack.
\end{itemize}
\noindent\textbf{By each server:}
\begin{itemize}
\item Upon receiving a job: the server starts serving the job immediately.
\item Upon a job leaving this server and it becomes empty: let the current
time be $t_{1}$. The server searches for the earliest time $t_{2}\in(t_{1},t_{1}+\Delta]$
so that $a(t_{2})=a(t_{1})$. If no such $t_{2}$ exists, then the
server turns itself off. Otherwise, it stays idle.
\end{itemize}
\rule{1\columnwidth}{1pt}
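As a minimal illustration of how \textbf{A0} operates (and of how its cost
can be computed from a job trace), the following Python sketch simulates
the last-empty-server-first stack and charges each empty period of a
server the cost of its locally optimal off-or-idle decision. It assumes,
for simplicity, the special boundary condition $a(0)=a(T)=0$; the function
and variable names are illustrative only.
\begin{verbatim}
def a0_cost(jobs, P, beta_on, beta_off):
    # jobs: iterable of (arrival, departure) pairs; no two events coincide.
    serve_cost = P * sum(d - a for a, d in jobs)     # energy while serving

    events = []
    for j, (a, d) in enumerate(jobs):
        events.append((a, 1, j))   # arrival of job j
        events.append((d, 0, j))   # departure of job j
    events.sort()

    idle_stack = []                # LIFO stack of (time it became empty, server id)
    assigned = {}                  # job id -> server id
    toggle_cost, n_servers = 0.0, 0

    for t, kind, j in events:
        if kind == 1:              # arrival: last-empty-server-first
            if idle_stack:
                t_empty, s = idle_stack.pop()
                # locally optimal off-or-idle decision over this empty period
                toggle_cost += min(P * (t - t_empty), beta_on + beta_off)
            else:                  # no idle/off server yet: bring up a new one
                s, n_servers = n_servers, n_servers + 1
                toggle_cost += beta_on
            assigned[j] = s
        else:                      # departure: the server becomes empty
            idle_stack.append((t, assigned[j]))

    # servers still empty at the end are switched off (since a(T) = 0)
    toggle_cost += beta_off * len(idle_stack)
    return serve_cost + toggle_cost
\end{verbatim}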
We remark that in the algorithm, we use the same server to serve a
job during its entire sojourn time. Thus there is no job migration
cost. The following theorem justifies the optimality of the off-line
algorithm.
\begin{thm}
The proposed off-line algorithm \textbf{A0 }achieves the optimal server
operation cost of the problem \textbf{SCP}.\label{thm: offline}\end{thm}
\begin{IEEEproof}
Refer to Appendix \ref{apx:proof_5}.
\end{IEEEproof}
There are two important observations. First, the job-dispatching strategy
only depends on the past job arrivals and departures. Consequently,
the strategy assigns a job to the same server whether or not it knows future
job arrivals/departures; it also acts independently of servers'
off-or-idle decisions. Second, each individual server is actually
solving a classic ski-rental problem \cite{karlin1988competitive}
-- whether to {}``rent'', i.e., keep idle, or to {}``buy'', i.e.,
turn off now and on later, but with\emph{ }their {}``days-of-skiing''
(corresponding to servers' empty periods)\emph{ jointly determined
by the job-dispatching strategy}.
Next, we exploit these two observations to extend the off-line algorithm
\textbf{A0} to its online versions with performance guarantee.
\section{Online Dynamic Provisioning with or without Future Workload Information\label{sec:online}}
Inspired by our off-line algorithm, we construct online algorithms
by combining (i) the same last-empty-server-first job-dispatching
strategy as the one in algorithm \textbf{A0}, and (ii) an off-or-idle
decision module running on each server to \emph{solve an online ski-rental
problem}.
As discussed at the end of last section, the last-empty-server-first
job-dispatching strategy utilizes only past job arrival/departure
information. Consequently, as compared to the offline case, in the
online case it assigns the same set of jobs to the same server at
the same sequence of epochs. The following lemma rigorously confirms
this observation.
\begin{lem}
For the same $a\left(t\right),t\in\left[0,T\right]$, under the last-empty-server-first
job-dispatching strategy, each server will get the same job at the
same time and the job will leave the server at the same time for both
off-line and online situations. \label{lem:For-the-same}\end{lem}
\begin{IEEEproof}
Refer to Appendix \ref{apx:proof_6}.
\end{IEEEproof}
As a result, \emph{in the online case, each server still faces the
same set of off-or-idle problems} as compared to the off-line case.
This is the key to derive the competitive ratios of our to-be-presented
online algorithms.
Each server, not knowing the empty periods ahead of time, however,
needs to decide whether to stay idle or be off (and if so when) in
an online fashion. One natural approach is to adopt classic algorithms
for the online ski-rental problem.
\subsection{Dynamic Provisioning without Future Workload Information}
For the online ski-rental problem, the break-even algorithm in \cite{karlin1988competitive}
and the randomized algorithm in \cite{karlin1994competitive} have
competitive ratios $2$ and $e/\left(e-1\right)$, respectively. The
ratios have been proved to be optimal for deterministic and randomized
algorithms, respectively. Directly adopting these algorithms in the
off-or-idle decision module leads to two online solutions for the
problem \textbf{SCP} with competitive ratios $2$ and $e/\left(e-1\right)\approx1.58$.
These ratios improve the best known ratio $3$ achieved by the algorithm
in \cite{lin2011dynamic}.
The resulting solutions are decentralized and easy to implement: a
central entity runs the last-empty-server-first job-dispatching strategy,
and each server independently runs an online ski-rental algorithms.
For example, if the break-even algorithm is used, a server that just
becomes empty at time $t$ will stay idle for $\Delta$ amount of
time. If it receives no job during this period, it turns itself off.
Otherwise, it starts to serve the job immediately. As a special case
covered by Theorem \ref{thm:online}, it turns out this directly gives
a $2$-competitive dynamic provisioning solution.
\subsection{Dynamic Provisioning with Future Workload Information}
Classic online problem studies usually assume zero future information.
However, in our data center dynamic provisioning problem, one key
observation many existing solutions exploited is that the workload
exhibits highly regular patterns. Thus the workload information in
a near-term prediction window may be accurately estimated by machine learning
or model fitting based on historical data \cite{chen2008energy,bod2009statistical}.
Can we exploit such future knowledge, if available, in designing online
algorithms? If so, how much gain can we get?
Let's elaborate through an example to explain why and how much future
knowledge can help. Suppose at any time $t$, the workload information
$a(t)$ in a prediction window $[t,t+\alpha\Delta]$ is available,
where $\alpha\in[0,1]$ is a constant. Consider a server running the
break-even algorithm that just becomes empty at time $t_{1}$, and its
empty period happens to be just a bit longer than $\Delta$.
Following the standard break-even algorithm, the server waits for
$\Delta$ amount of time before turning itself off. According to the
setting, it receives a job right after $t_{1}+\Delta$ epoch, and
it has to power up to serve the job. This incurs a total cost of $2P\Delta$
as compared to the optimal one $P\Delta$, which is achieved by the
server staying idle all the way.
An alternative strategy that costs less is as follows. The server
stays idle for $\left(1-\alpha\right)\Delta$ amount of time, and
peeks into the prediction window $[t_{1}+\left(1-\alpha\right)\Delta,t_{1}+\Delta]$.
Due to the last-empty-server-first job-dispatching strategy, the server
can easily tell that it will receive a job if any $a(t)$ in the window
exceeds $a(t_{1})$, and no job otherwise. According to the setting,
the server sees itself receiving no job during $[t_{1}+\left(1-\alpha\right)\Delta,t_{1}+\Delta]$
and it turns itself off at time $t_{1}+\left(1-\alpha\right)\Delta$.
Later it turns itself on to serve the job right after $t_{1}+\Delta$.
Under this strategy, the overall cost is $\left(2-\alpha\right)P\Delta$
and is better than that of the break-even algorithm.
This simple example shows it is possible to modify classic online
algorithms to exploit future workload information to obtain better
performance. To this end, we propose new future-aware online ski-rental
algorithms and build new online solutions.
We model the availability of future workload information as follows.
For any $t$, the workload $a(t)$ in the window $[t,t+\alpha\Delta]$
is known, where $\alpha\in[0,1]$ is a constant and $\alpha\Delta$
represents the size of the window.
We present both the modified break-even algorithm and the resulting
decentralized\emph{ }and\emph{ deterministic} online solution as follows.
The modified future-aware break-even algorithm is very simple and
is summarized as the part in the server's actions upon job departure.
\noindent\rule{1\columnwidth}{1pt}\\
\textbf{Future-Aware Online Algorithm A1:}
\noindent \textbf{By a central job-dispatching entity}: it implements
the last-empty-server-first job-dispatching strategy, i.e., the one
described in the off-line algorithm.
\noindent\textbf{By each server:}
\begin{itemize}
\item Upon receiving a job: the server starts serving the job immediately.
\item Upon a job leaving this server and it becomes empty: the server waits
for $\left(1-\alpha\right)\Delta$ amount of time,
\begin{itemize}
\item if it receives a job during the period, it starts serving the job
immediately;
\item otherwise, it looks into the prediction window of size $\alpha\Delta$.
It turns itself off, if it will receive no job during the window.
Otherwise, it stays idle.
\end{itemize}
\end{itemize}
\rule{1\columnwidth}{1pt}
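A compact way to see the $2-\alpha$ bound is to write down the cost that
\textbf{A1} pays on a single empty period. The sketch below (Python,
illustrative only; the parameter values are arbitrary) does exactly that
and compares it numerically against the offline cost
$\min\left(P\cdot\mbox{gap},\beta_{on}+\beta_{off}\right)$ for one gap:
\begin{verbatim}
def a1_gap_cost(gap, P, beta_on, beta_off, alpha):
    # Cost A1 incurs on one empty period of length `gap`: idle for
    # (1-alpha)*Delta, then peek alpha*Delta ahead and turn off only if
    # no job will arrive within the prediction window.
    delta = (beta_on + beta_off) / P
    wait = (1 - alpha) * delta
    if gap <= wait + alpha * delta:       # job arrives before the decision
        return P * gap                     # point or inside the window: stay idle
    return P * wait + beta_on + beta_off   # otherwise switch off after waiting

def opt_gap_cost(gap, P, beta_on, beta_off):
    return min(P * gap, beta_on + beta_off)   # offline optimum for one gap

# empirical check of the 2 - alpha worst-case ratio
P, bon, boff, alpha = 1.0, 3.0, 3.0, 0.25     # arbitrary illustrative values
gaps = [0.01 * k for k in range(1, 2001)]
worst = max(a1_gap_cost(g, P, bon, boff, alpha) /
            opt_gap_cost(g, P, bon, boff) for g in gaps)
print(worst)    # ~1.75 = 2 - alpha, attained for gaps just above Delta
\end{verbatim}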
In fact, as shown in Theorem \ref{thm:online} later in this section,
the algorithm \textbf{A1} has the best possible competitive ratio
for any deterministic algorithms under the last-empty-server-first
job-dispatching strategy. Thus, unless we change the job-dispatching
strategy, no deterministic algorithms can achieve better competitive
ratio than the algorithm \textbf{A1}.
Similarly, we present both the modified randomized algorithms for
solving online ski-rental problem and the resulting decentralized\emph{
}and\emph{ randomized} online solutions as follows. The modified future-aware
randomized algorithms are also summarized as the part in the server's
actions upon job departure. The first randomized algorithm \textbf{A2}
is a direct extension of the one in \cite{karlin1994competitive}
to make it future-aware. The algorithm \textbf{A3} is new and it has
the best possible competitive ratio for any randomized algorithms
under the last-empty-server-first job-dispatching strategy.
\noindent\rule{1\columnwidth}{1pt}\\
\textbf{Future-Aware Online Algorithm A2:}
\noindent \textbf{By a central job-dispatching entity}: it implements
the last-empty-server-first job-dispatching strategy, i.e., the one
described in the off-line algorithm.
\noindent\textbf{By each server:}
\begin{itemize}
\item Upon receiving a job: the server starts serving the job immediately.
\item Upon a job leaving this server and it becomes empty: the server waits
for $Z$ amount of time, where $Z$ is generated according to the
following probability density function
\[
f_{Z}(z)=\begin{cases}
\frac{e^{z/\left(1-\alpha\right)\Delta}}{\left(e-1\right)\left(1-\alpha\right)\Delta}, & \mbox{if }0\leq z\leq\left(1-\alpha\right)\Delta;\\
0, & \mbox{otherwise.}
\end{cases}
\]
\begin{itemize}
\item if it receives a job during the period, it starts serving the job
immediately;
\item otherwise, it looks into the prediction window of size $\alpha\Delta$.
It turns itself off, if it will receive no job during the window.
Otherwise, it stays idle.
\end{itemize}
\end{itemize}
\rule{1\columnwidth}{1pt}
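For completeness, the random waiting time $Z$ used by \textbf{A2} can be
drawn by inverting its CDF; a small Python sketch (illustrative only) is:
\begin{verbatim}
import math, random

def a2_wait_time(P, beta_on, beta_off, alpha, rng=random):
    # Z has density proportional to exp(z / ((1-alpha)*Delta)) on
    # [0, (1-alpha)*Delta]; inverting its CDF gives the closed form below.
    delta = (beta_on + beta_off) / P
    s = (1 - alpha) * delta
    u = rng.random()
    return s * math.log(1 + (math.e - 1) * u)
\end{verbatim}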
\noindent\rule{1\columnwidth}{1pt}\\
\textbf{Future-Aware Online Algorithm A3:}
\noindent \textbf{By a central job-dispatching entity}: it implements
the last-empty-server-first job-dispatching strategy, i.e., the one
described in the off-line algorithm.
\noindent\textbf{By each server:}
\begin{itemize}
\item Upon receiving a job: the server starts serving the job immediately.
\item Upon a job leaving this server and it becomes empty: the server waits
for $Z$ amount of time, where $Z$ is generated according to the
following probability distribution
\[
\begin{cases}
f_{Z}(z)=\begin{cases}
\frac{1-\frac{\alpha}{e-1+\alpha}}{\left(e-1\right)\left(1-\alpha\right)\Delta}e^{z/\left(1-\alpha\right)\Delta}, & \mbox{if }0<z\leq\left(1-\alpha\right)\Delta;\\
0, & \textrm{otherwise.}
\end{cases}\\
P\left(Z=0\right)=\frac{\alpha}{e-1+\alpha}
\end{cases}
\]
\begin{itemize}
\item if it receives a job during the period, it starts serving the job
immediately;
\item otherwise, it looks into the prediction window of size $\alpha\Delta$.
It turns itself off, if it will receive no job during the window.
Otherwise, it stays idle.
\end{itemize}
\end{itemize}
\rule{1\columnwidth}{1pt}
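The waiting time used by \textbf{A3} can be sampled in the same way; the
sketch below (Python, illustrative only) first realizes the point mass at
$Z=0$ and otherwise inverts the CDF of the density given above:
\begin{verbatim}
import math, random

def a3_wait_time(P, beta_on, beta_off, alpha, rng=random):
    delta = (beta_on + beta_off) / P
    s = (1 - alpha) * delta
    c = math.e / (math.e - 1 + alpha)       # the competitive ratio e/(e-1+alpha)
    u = rng.random()
    if u <= alpha / (math.e - 1 + alpha):   # point mass: peek immediately
        return 0.0
    # invert F(z) = (c/e) * exp(z / s) - (c - 1) over the remaining mass
    return s * math.log(math.e * (u + c - 1) / c)
\end{verbatim}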
The three future-aware online algorithms inherit the nice properties
of the proposed off-line algorithm in the previous section. The same
server is used to serve a job during its entire sojourn time. Thus
there is no job migration cost. The algorithms are decentralized,
making them easy to implement and scale.
Observing that no such future-aware online algorithms are available in the
literature, we analyze their competitive ratios and present the results
as follows.
\begin{thm}
The deterministic online algorithm \textbf{A1} has a competitive ratio
of $2-\alpha$. The randomized online algorithm \textbf{A2} achieves
a competitive ratio of $\left(e-\alpha\right)/\left(e-1\right)$.
The randomized online algorithm \textbf{A3} achieves a competitive
ratio of $e/\left(e-1+\alpha\right)$. The competitive ratios of the
algorithms \textbf{A1} and are \textbf{A3} the best possible for deterministic
and randomized algorithms, respectively, under the last-empty-server-first
job-dispatching strategy. \label{thm:online}\end{thm}
\begin{IEEEproof}
Refer to Appendix \ref{apx:proof_6}.
\end{IEEEproof}
\textbf{Remarks}: (i) When $\alpha=1$, all three algorithms achieve
the optimal server operation cost. This matches the intuition that
servers only need to look $\Delta$ amount of time ahead to make optimal
off-or-idle decision upon job departures. This immediately gives a
fundamental insight that future workload information beyond the critical
interval $\Delta$ (corresponding to $\alpha=1$) will not improve
dynamic provisioning performance. (ii) The competitive ratios presented
in the above theorem are for the worst case. We have carried out simulations
using real-world traces and found the empirical ratios are much better,
as shown in Fig. \ref{fig:competitive_ratios}. (iii) To achieve better
competitive ratios, the theorem says that it is necessary to change
the job-dispatching strategy, since otherwise no deterministic or
randomized algorithms do better than the algorithms \textbf{A1} and
\textbf{A3}. (iv) Our analysis assumes the workload information in
the prediction window is accurate. We evaluate the two online algorithms
in simulations using real-world traces with prediction errors, and
observe they are fairly robust to the errors. More details are provided
in Section~\ref{sec:expr}.
\begin{figure}
\centering\includegraphics[scale=0.2]{CR.eps}
\caption{Comparison of the worst-case competitive ratios (according to Theorem
\ref{thm:online}) and the empirical competitive ratios observed in
simulations using real-world traces. The critical window size $\Delta=6$
units of time. More simulation details are in Section \ref{sec:expr}.}
\label{fig:competitive_ratios}
\end{figure}
\subsection{Adapting the Algorithms to Work with Discrete-Time Fluid Workload
Model \label{ssec:dis}}
Adapting our off-line and online algorithms to work with the discrete-time
fluid workload model involves two simple modifications. Recall in
the discrete-time fluid model, time is chopped into equal-length slots.
Jobs arriving in one slot get served in the same slot. Workload can
be split among running servers at arbitrary granularity like fluid.
For the job-dispatching entity in all the algorithms, at the end of
each slot when all servers are considered to be empty, it pushes all
the server IDs back into the stack (order doesn't matter). Then at
the beginning of each slot, it pops just-enough server IDs from the
stack in a Last-In/First-Out manner to satisfy the current workload.
In this way, the job-dispatching entity essentially packs the workload
to as few servers as possible, following the last-empty-server-first
strategy.
For individual servers, they start to serve upon receiving jobs, and
start to solve the off-line or online ski-rental problems once all
their jobs have left and they become empty.
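To make the slot-by-slot bookkeeping concrete, the following Python sketch (class and method names are ours) keeps the server IDs on a stack, pops just enough of them at the start of a slot, and pushes them all back at the end of the slot; each popped server then runs its own off-line or online ski-rental logic as described above.
\begin{verbatim}
import math

class LastEmptyFirstDispatcher:
    def __init__(self, num_servers):
        self.stack = list(range(num_servers))  # top of stack = end of list

    def begin_slot(self, load):
        # Pop just-enough server IDs (LIFO) to cover the fluid load.
        needed = min(int(math.ceil(load)), len(self.stack))
        return [self.stack.pop() for _ in range(needed)]

    def end_slot(self, active_ids):
        # All servers are considered empty at the end of the slot; the order
        # of the push-back does not matter.
        self.stack.extend(active_ids)
\end{verbatim}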
It is not difficult to verify the modified algorithms still retain
their corresponding performance guarantees. Actually, we have the following
corollary.
\begin{cor}
The modified deterministic and randomized online algorithms for discrete-time
fluid workload have competitive ratios of $2-\alpha$, $(e-\alpha)/(e-1)$,
and $e/\left(e-1+\alpha\right)$, respectively. \label{cormodified}\end{cor}
\begin{IEEEproof}
Refer to Appendix \ref{apx:proof_9}.
\end{IEEEproof}
\subsection{Comparison with the DELAYEDOFF Algorithm\label{ssec:comparison.with.DELAYEDOFF}}
It is somewhat surprising to find that our algorithms share similar
ingredients with the DELAYEDOFF algorithm in \cite{gandhi2010optimality},
since these are two independent efforts that set off to optimize different
objective functions (total energy consumption in our study vs. Energy-Response
time Product (ERP) in \cite{gandhi2010optimality}).
The DELAYEDOFF algorithm contains two modules. The first one is a
job-dispatching module that assigns a newly arrived job to the most-recently-busy
idle server (i.e., the idle server who was most recently busy); servers
in off-state are not included. The second one is a delay-off module
running on each server that keeps the server idle for some pre-determined
amount of time, defined as $t_{wait}$, before turning it off. If
the server gets a job to service in this period, its idle time is
reset to $0$. The authors of \cite{gandhi2010optimality} show that
for any $t_{wait}$, if the job arrival process is Poisson, the DELAYEDOFF
algorithm minimizes the average ERP of a data center as the load (i.e.,
the ratio between the arrival rate and the average sojourn time) approaches
infinity.
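Restated procedurally, the two DELAYEDOFF modules can be sketched as follows; this is our own paraphrase of the description above, not code from \cite{gandhi2010optimality}, and the Server bookkeeping class is a hypothetical stand-in introduced only for the sketch.
\begin{verbatim}
import random
from dataclasses import dataclass

@dataclass
class Server:
    # Minimal stand-in for per-server bookkeeping (our own construction).
    server_id: int
    is_idle: bool = False
    is_off: bool = True
    idle_since: float = 0.0
    last_busy_time: float = 0.0

def dispatch_delayedoff(servers, rng=random):
    # Assign a new job to the most-recently-busy idle server; if no server
    # is idle, wake up an off server (chosen arbitrarily here).
    idle = [s for s in servers if s.is_idle]
    if idle:
        return max(idle, key=lambda s: s.last_busy_time)
    return rng.choice([s for s in servers if s.is_off])

def delayed_off_tick(server, now, t_wait):
    # Delay-off module: turn the server off once it has been idle for t_wait.
    if server.is_idle and now - server.idle_since >= t_wait:
        server.is_idle, server.is_off = False, True
\end{verbatim}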
Interestingly, if there are idle servers in the system, DELAYEDOFF and
the algorithm \textbf{A1} will choose the same server to serve the
new job because the most-recently-busy server is indeed the last-empty
server in this case. If there are no idle servers, the algorithm \textbf{A1
}will still choose the last-empty server but DELAYEDOFF will randomly
select an off server to serve the job. With this observation, the
DELAYEDOFF algorithm, under the setting $t_{wait}=\Delta$, can be
viewed as a variant of a special case of the algorithm \textbf{A1}
with zero future workload information available (i.e., $\alpha=0$).
It would be interesting to see whether the analytical insights used
in analyzing the DELAYEDOFF algorithm can be used to understand the
performance of the algorithm \textbf{A1} when the job arrival process
is Poisson.
Despite the similarity between the algorithm \textbf{A1} and the DELAYEDOFF
algorithm, it is not clear what is the competitive ratio of DELAYEDOFF.
Unlike our last-empty-server-first job-dispatching strategy, the most-recently-busy
idle server first strategy does not guarantee a server faces the same
set of ski-rental problems in the online case as compared to the off-line
case. Consequently, it is not clear how to relate the online cost
of the DELAYEDOFF algorithm to the offline optimal cost.
The two job-dispatching strategies differ more when the server waiting
time is random, e.g., in our algorithms \textbf{A2} and \textbf{A3},
where a later-empty server may turn itself off before an early-empty
server does; hence, the most-recently-busy (idle) server is usually
not the last-empty server. We compare the performance of algorithms
\textbf{A1}, \textbf{A2}, \textbf{A3}, and DELAYEDOFF in simulations
in Section \ref{sec:expr}.
\begin{center}
\begin{figure*}
\begin{centering}
\subfloat[MSR data trace for one week\label{fig:MSR}]{\includegraphics[width=0.25\textwidth]{trace.eps}}\subfloat[Impact of future information\label{fig:Impact-of-future}]{\includegraphics[width=0.25\textwidth]{costreduction.eps}}\subfloat[Impact of prediction error\label{fig:Impact-of-prediction}]{\includegraphics[width=0.25\textwidth]{pred_err.eps}}\subfloat[Impact of PMR\label{fig:Impact-of-PMR}]{\includegraphics[width=0.25\textwidth]{p2m.eps}
}
\par\end{centering}
\centering{}\caption{Real-world workload trace and performance of the algorithms under
different situations.}
\end{figure*}
\par\end{center}
\section{Experiments \label{sec:expr}}
We implement the proposed off-line and online algorithms and carry
out simulations using real-world traces to evaluate their performance.
Our purposes are threefold. First, to evaluate the performance of
the algorithms using real-world traces. Second, to study the impacts
of workload prediction error and workload characteristic on the algorithms'
performance. Third, to compare our algorithms to two recently proposed
solutions LCP$(w$) in \cite{lin2011dynamic} and DELAYEDOFF in \cite{gandhi2010optimality}.
\subsection{Settings}
\textit{Workload trace}: The real-world traces we use in experiments
are a set of I/O traces taken from 6 RAID volumes at MSR Cambridge
\cite{narayanan2008write}. The traced period was one week between
February 22 to 29, 2007. We estimate the average number of jobs over
disjoint 10 minute intervals. The data trace has a peak-to-mean ratio
(PMR) of 4.63. The jobs are {}``request-response'' type and thus
the workload is better described by a discrete-time fluid model, with
the slot length being 10 minutes and the load in each slot being the
average number of jobs.
As discussed in Section \ref{ssec:dis}, the proposed off-line and
online algorithms also work with the discrete-time fluid workload
model after simple modification. In the experiments, we run the modified
algorithms using the above real-world traces.
\textit{Cost benchmark}: Current data centers usually do not use dynamic
provisioning. The cost incurred by static provisioning is usually
considered as a benchmark to evaluate new algorithms \cite{lin2011dynamic,krioukov2011napsac}.
Static provisioning runs a constant number of servers to serve the
workload. In order to satisfy the time-varying demand during a period,
data centers usually over-provision and keep more running servers
than what is needed to satisfy the peak load. In our experiment, we
assume that the data center has the complete workload information
ahead of time and provisions exactly to satisfy the peak load. Using
such benchmark gives us a conservative estimate of the cost saving
from our algorithms.
\textit{Sever operation cost:} The server operation cost is determined
by unit-time energy cost $P$ and on-off costs $\beta_{on}$ and $\beta_{off}$.
In the experiment, we assume that a server consumes one unit of energy
per unit time, i.e., $P=1$. We set $\beta_{off}+\beta_{on}=6$,
i.e., the cost of turning a server off and on once is equal to that
of running it for six units of time \cite{lin2011dynamic}. Under
this setting, the critical interval is $\Delta=\left(\beta_{off}+\beta_{on}\right)/P=6$
units of time.
\subsection{\textit{\emph{Performance of the Proposed Online Algorithms}}}
We have characterized in Theorem \ref{thm:online} the competitive
ratios of our proposed online algorithms as the prediction window
size, i.e., $\alpha\Delta$, increases. The resulting competitive
ratios, i.e., $2-\alpha$, $\left(e-\alpha\right)/\left(e-1\right)$
and $e/\left(e-1+\alpha\right)$, already appealing, are for the worst-case
scenarios. In practice, the actual performance can be even better.
In our first experiment, we study the performance of our online algorithms
using real-world traces. The results are shown in Fig. \ref{fig:Impact-of-future}.
The cost reduction curves are obtained by comparing the power cost
incurred by the off-line algorithm, the three online algorithms, the
LCP$\left(w\right)$ algorithm \cite{lin2011dynamic} and the DELAYEDOFF
algorithm \cite{gandhi2010optimality} to the cost benchmark. The
vertical axis indicates the cost reduction and the horizontal axis
indicates the size of prediction window varying from 0 to 10 units
of time.
As seen, for this workload, our three online algorithms,
LCP$\left(w\right)$, and DELAYEDOFF all achieve substantial cost reduction
as compared to the benchmark. In particular, the cost reductions of
our three online algorithms are beyond $66\%$ even when no future
workload information is available; while LCP$\left(w\right)$ has
to have (or estimate) one unit time of future workload to execute,
and thus it starts to perform when the prediction window size is one.
The cost reductions of our three online algorithms grow linearly as
the prediction window increases, and reach the optimum when the prediction
window size reaches $\Delta$. These observations match what Theorem
\ref{thm:online} predicts. Meanwhile, LCP$\left(w\right)$ has not
yet reached the optimal performance when the prediction window size
reaches the critical value $\Delta$. DELAYEDOFF has the same performance
for all prediction window sizes since it does not exploit future workload
information.
As seen in Fig. \ref{fig:Impact-of-future}, in the simulation, our
three algorithms can achieve the optimal power consumption when the
size of prediction window is $5$, one unit smaller than the theoretically-computed
one $\Delta=6$. At first glance, the results seem not aligned with
what the analysis suggests. But a careful investigation reveals that
there is no mis-alignment between analysis and simulation. Because
jobs are assigned to servers at the beginning of each slot in the discrete-time
fluid model, knowing the workload from current time to the beginning
of the 5th look-ahead future slot is equivalent to knowing the workload
of a duration of 6 slots. Hence, the analysis indeed suggests Algorithms
\textbf{A1-A3} can achieve optimal power consumption when the size
of prediction window is $5$, as observed in Fig. \ref{fig:Impact-of-future}.
\subsection{\textit{\emph{Impact of Prediction Error}}\emph{ }}
Previous experiments show that both our algorithms and LCP$\left(w\right)$
have better performance if accurate future workload is available.
However, there are always prediction errors in practice. Therefore,
it is important to evaluate the performance of the algorithms in the
presence of prediction errors.
To achieve this goal, we evaluate our online algorithms with prediction
window size of 2 and 4 units of time. Zero-mean Gaussian prediction
error is added to each unit-time workload in the prediction window,
with its standard deviation growing from $0$ to $50\%$ of the corresponding
actual workload. In practice, prediction error tends to be small \cite{kusic2009power};
thus we are essentially stress-testing the algorithms.
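Concretely, the perturbation applied to each unit-time workload in the prediction window can be written as below; the function name is ours, and clipping negative predictions at zero is our own choice for this sketch.
\begin{verbatim}
import random

def perturb_prediction(window_workload, rel_std, rng=random):
    # Add zero-mean Gaussian error whose standard deviation is rel_std
    # (between 0 and 0.5 here) times the corresponding actual workload.
    return [max(0.0, w + rng.gauss(0.0, rel_std * w))
            for w in window_workload]
\end{verbatim}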
We average 100 runs for each algorithm and show the results in Fig.
\ref{fig:Impact-of-prediction}, where the vertical axis represents
the cost reduction as compared to the benchmark.
On one hand, we observe all algorithms are fairly robust to prediction
errors. On the other hand, all algorithms achieve better performance
with prediction window size 4 than size 2. This indicates more future
workload information, even inaccurate, is still useful in boosting
the performance.
\subsection{\textit{\emph{Impact of Peak-to-Mean Ratio (PMR)}}}
Intuitively, comparing to static provisioning, dynamic provisioning
can save more power when the data center trace has large PMR. Our
experiments confirm this intuition which is also observed in other
works \cite{lin2011dynamic,krioukov2011napsac}. Similar to \cite{lin2011dynamic},
we generate the workload from the MSR traces by scaling $a\left(t\right)$
as $\overline{a\left(t\right)}=Ka^{\gamma}\left(t\right)$, and adjusting
$\gamma$ and $K$ to keep the mean constant. We run the off-line
algorithm, the three online algorithms, LCP$\left(w\right)$ and DELAYEDOFF
using workloads with different PMRs ranging from 2 to 10, with prediction
window size of one unit time. The results are shown in Fig. \ref{fig:Impact-of-PMR}.
As seen, energy saving increases from about $40\%$ at PMR=2, which
is common in large data centers, to large values for the higher PMRs
that are common in small- to medium-sized data centers. Similar results
are observed for different prediction window sizes.
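For reference, the scaling used to generate these workloads can be implemented by fixing $\gamma$ and solving for the $K$ that preserves the original mean; the Python sketch below (our own, mirroring the description above) also reports the resulting PMR so that $\gamma$ can be swept until a target PMR is reached.
\begin{verbatim}
def rescale_workload(a, gamma):
    # Scale a(t) -> K * a(t)**gamma with K chosen so the mean is unchanged;
    # larger gamma stretches the peaks and therefore raises the PMR.
    mean_orig = sum(a) / len(a)
    powered = [x ** gamma for x in a]
    K = mean_orig / (sum(powered) / len(powered))
    scaled = [K * x for x in powered]
    pmr = max(scaled) / (sum(scaled) / len(scaled))
    return scaled, pmr
\end{verbatim}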
\section{Concluding Remarks}
\label{sec:conclusion}
Dynamic provisioning is an effective technique in reducing server
energy consumption in data centers, by turning off unnecessary servers
to save energy. In this paper, we design online dynamic provisioning
algorithms with zero or partial future workload information available.
We reveal an elegant {}``divide-and-conquer'' structure of the off-line
dynamic provisioning problem, under the cost model that a running
server consumes a fixed amount of energy per unit time. Exploiting such
structure, we show its optimal solution can be achieved by the data
center adopting a simple last-empty-server-first job-dispatching strategy
and each server independently solving a classic ski-rental problem.
We build upon this architectural insight to design three new decentralized
online algorithms. One is a deterministic algorithm with competitive
ratio $2-\alpha$, where $0\leq\alpha\leq1$ is the fraction of a
critical window in which future workload information is available.
The size of the critical window is determined by the wear-and-tear
cost and the unit-time energy cost of running a single server. The
other two are randomized algorithms with competitive ratios $\left(e-\alpha\right)/\left(e-1\right)\approx1.58-\alpha/\left(e-1\right)$
and $e/\left(e-1+\alpha\right)$, respectively. $2-\alpha$ and $e/\left(e-1+\alpha\right)$
are the best competitive ratios for deterministic and randomized online
algorithms under our last-empty-server-first job-dispatching strategy.
Our results also lead to a fundamental observation that under the
cost model that a running server consumes a fixed amount of energy per
unit time, future workload information beyond the critical window
will not improve the dynamic provisioning performance.
Our algorithms are simple and easy to implement. Simulations using
real-world traces show that our algorithms can achieve close-to-optimal
energy-saving performance, and are robust to future-workload prediction
errors.
Our results, together with the $3$-competitive algorithm recently
proposed by Lin \emph{et al.} \cite{lin2011dynamic}, suggest that
it is possible to reduce server energy consumption significantly with
zero or only partial future workload information.
An interesting and important future direction is to explore what is
the best possible competitive ratio any algorithm can achieve with
zero or partial future workload information. Insights along this line
provide useful understanding of the benefit of knowing future workload
in dynamic provisioning.
\section*{Acknowledgements}
We thank Minghong Lin and Lachlan Andrew for sharing the code of their
LCP algorithm, and Eno Thereska for sharing the MSR Cambridge data
center traces.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 5,331 |
// Header guard restored to match the #endif below; the file's #include
// directives are not shown in this excerpt.
#ifndef WEBDRIVER_IE_GETWINDOWSIZECOMMANDHANDLER_H_
#define WEBDRIVER_IE_GETWINDOWSIZECOMMANDHANDLER_H_

namespace webdriver {
class GetWindowSizeCommandHandler : public IECommandHandler {
public:
GetWindowSizeCommandHandler(void) {
}
virtual ~GetWindowSizeCommandHandler(void) {
}
protected:
void ExecuteInternal(const IECommandExecutor& executor,
const ParametersMap& command_parameters,
Response* response) {
ParametersMap::const_iterator id_parameter_iterator = command_parameters.find("windowHandle");
if (id_parameter_iterator == command_parameters.end()) {
response->SetErrorResponse(400, "Missing parameter in URL: windowHandle");
return;
} else {
int status_code = WD_SUCCESS;
std::string window_id = id_parameter_iterator->second.asString();
BrowserHandle browser_wrapper;
if (window_id == "current") {
status_code = executor.GetCurrentBrowser(&browser_wrapper);
} else {
status_code = executor.GetManagedBrowser(window_id, &browser_wrapper);
}
if (status_code != WD_SUCCESS) {
response->SetErrorResponse(status_code, "Error retrieving window with handle " + window_id);
return;
}
// Though there is an atom for getting window size, we cannot use it
// as IE doesn't allow JavaScript to get the outer window dimensions
// (including chrome).
HWND browser_window_handle = browser_wrapper->GetTopLevelWindowHandle();
RECT window_rect;
::GetWindowRect(browser_window_handle, &window_rect);
Json::Value response_value;
response_value["width"] = window_rect.right - window_rect.left;
response_value["height"] = window_rect.bottom - window_rect.top;
response->SetSuccessResponse(response_value);
}
}
};
} // namespace webdriver
#endif // WEBDRIVER_IE_GETWINDOWSIZECOMMANDHANDLER_H_
| {
"redpajama_set_name": "RedPajamaGithub"
} | 1,877 |
\section{Getting to Know the Milky Way from Studies of M\,33}
Given the difficulty of observing the central regions of our own Milky Way
galaxy \citep{vanloon_2003} we might find out more about the structure and
evolution of spiral galaxies by studying other nearby examples. Among the
Local Group spiral galaxies, M\,33 is smaller than the Milky Way and M\,31
but it is viewed by us more favourably; the distance modulus to M\,33 is
$\mu=24.9$ mag \citep[955 kpc;][]{bonanos_2006}.
\section{Pulsating Giant Stars as Tracers of Star Formation and Dust
Production}
Pulsating giant stars have reached the final stages of their evolution
\citep[their lives cut short by the severe mass loss initiated by these
pulsations;][]{vanloon_2008}. As their luminosity depends on their core
mass, which depends on the birth mass, pulsating giant stars are good
tracers of the population of stars formed when they themselves formed.
Their cool, extended atmospheres are also fertile grounds for the
formation of dust grains. As the grains intercept visual radiation from
the star and emit it at infrared (IR) wavelengths we can measure the dust
production rate by modelling the spectral energy distribution.
\section{The United Kingdom Infrared Telescope Monitoring Survey of M\,33}
\begin{figure}[!ht]
\hspace{10mm}\includegraphics[scale=0.56]{vanloon_fig1.ps}
\caption{Areas covered by our UKIRT monitoring campaigns on the central
square kiloparsec with UIST, and the M\,33 disc with WFCAM.}
\label{cover}
\end{figure}
We used the United Kingdom Infrared Telescope (UKIRT) to monitor M\,33
in the $K$ band (at a wavelength of 2.2 $\mu$m). This was done first for
the central $4^\prime\times4^\prime$ (a square kpc at the distance of
M\,33), mainly with the UIST instrument over the period from 2003 to 2007.
Later in the campaign, images were taken with the WFCAM instrument,
covering essentially the entire extent of the visible disc of M\,33
(Fig.~\ref{cover}). Occasionally, images were taken also in the $J$
(1.2 $\mu$m) and $H$ (1.6 $\mu$m) bands in order to obtain colour
information. The survey and identification of variable stars are
described in detail in \citet*[Paper I]{javadi_2010}; 812 variable stars
were found and shown to be predominantly pulsating giant stars -- the full
photometric catalogue comprises 18\,398 stars and is available from CDS.
\section{The Star Formation in the Central Square Kiloparsec of M\,33}
The star formation history is described by the star formation rate, $\xi$,
as a function of look-back time (``age''), $t$:
\begin{equation}
\xi(t) = \frac{f(K(M(t)))}{\Delta(M(t))f_{\rm IMF}(M(t))},
\end{equation}
where $f(K)$ is the observed $K$-band distribution of pulsating giant stars,
$\Delta$ is the duration of the evolutionary phase during which these stars
display strong radial pulsation, and $f_{\rm IMF}$ is the Initial Mass
Function describing the relative contribution to star formation by stars of
different mass. Each of these functions depends on the stellar mass, $M$,
and the mass of a pulsating star at the end of its evolution is directly
related to its age ($t$).
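Schematically, the evaluation of this expression proceeds star by star: each $K$ magnitude is converted to a birth mass, the mass to a look-back time, and the counts in each age bin are weighted by the pulsation duration and the IMF. The Python fragment below only makes this bookkeeping explicit; the functions mass_from_K, age_from_mass, pulse_duration and imf_weight are placeholders that must be supplied by stellar evolution models (e.g.\ the Padova isochrones used in this work).
\begin{verbatim}
from collections import defaultdict

def star_formation_history(k_mags, mass_from_K, age_from_mass,
                           pulse_duration, imf_weight, age_bin_of):
    # xi(t) per age bin, following xi = f(K) / (Delta(M) * f_IMF(M)).
    sfr = defaultdict(float)
    for K in k_mags:
        M = mass_from_K(K)          # birth mass from K-band magnitude
        t = age_from_mass(M)        # look-back time from birth mass
        sfr[age_bin_of(t)] += 1.0 / (pulse_duration(M) * imf_weight(M))
    return sfr
\end{verbatim}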
\begin{figure}[!ht]
\hspace{13mm}\includegraphics[scale=0.53]{vanloon_fig2.ps}
\caption{The star formation history in the central square
kiloparsec of M\,33 derived from pulsating giant stars found in
our infrared monitoring survey.}
\label{sfh}
\end{figure}
The resulting star formation history is shown in Fig.~\ref{sfh} and is
described in detail in Paper II (Javadi et al., submitted to MNRAS).
The main features are that the large majority of the stars were formed
more than 4 Gyr ago, but that the subsequently quieter star formation
has been punctuated with epochs of enhanced rates of star formation of
which a recent one is detected to have occurred around 200--300 Myr ago,
forming at most 4\% of all stars that have been formed over M\,33's
lifetime (within the central square kiloparsec).
The spatial distributions of the massive stars, intermediate-age Asymptotic
Giant Branch (AGB) stars and generally old Red Giant Branch (RGB) stars
suggest that young and intermediate-age stars were formed within the disc,
while the oldest stars may inhabit a more dynamically-relaxed configuration.
Interestingly, the massive stars concentrate in an area South of the
nucleus, and the intermediate-age population shows signs of a
``pseudo-bulge'' that however may well be a bar-like feature.
\section{Dust Production in the Central Square Kiloparsec of M\,33}
\begin{figure}[!ht]
\hspace{5mm}\includegraphics[scale=0.62]{vanloon_fig3.ps}
\caption{The central field overlain on a {\it Spitzer} Space Telescope
mid-IR composite.}
\label{spitz}
\end{figure}
Despite the complex diffuse emission and crowdedness in the central
regions of M\,33, a significant fraction of the pulsating giant stars
have been detected at mid-IR wavelengths (3--8 $\mu$m) with the
{\it Spitzer} Space Telescope \citep[Fig.~\ref{spitz}, cf.][]{mcquinn_2007}.
It has been possible to estimate the mass-loss rates (and dust production
rates) across a range of intensity (Fig.~\ref{dust}, Paper III in
preparation).
\begin{figure}[!ht]
\hspace{8mm}\includegraphics[scale=0.61]{vanloon_fig4.ps}
\caption{UKIRT + {\it Spitzer} (where available) photometry of four
examples of red giant stars in the centre of M\,33, that are affected
by various levels of mass loss. The photometry is compared with
{\sc dusty} radiative transfer models \citep{nenkova_1999} for
oxygen-rich dust (yielding higher rates and higher luminosities beyond
10~$\mu$m) and carbon-rich dust. The light curves of these stars are
presented in Paper I.}
\label{dust}
\vspace*{-3mm}
\end{figure}
\section{On-going Work and Concluding Remarks}
We are currently extending our study to the WFCAM data that cover the
disc of M\,33, to derive a global star formation history and dust
production rate. We aim to establish a link between the dust return
and the formation of stars within the prominent spiral arm pattern,
and to map their subsequent dynamical relaxation into the inter-arm
regions.
In conclusion, our method to derive the star formation history from
pulsating giant stars has been validated for the central region of
M\,33. While model-dependent, our analysis is internally consistent
and supports the Padova models we employed \citep{marigo_2008} except
that the super-AGB stars do seem to reach high luminosities and develop
cool atmospheres and strong pulsation.
\acknowledgements We are grateful for support from the conference
organisers, and from the excellent and dedicated staff at the United
Kingdom Infrared Telescope. This project was funded by The Leverhulme
Trust (grant No.\ RF/4/RFG/2007/0297).
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,623 |
777 Lawrence Ave. E. @ Leslie St.
Most Tuesdays 6:45 - 8:30 p.m.
What does it take to deliver a winning Toastmaster speech?
Those that overcome these doubts by partnering with a great mentor within their club or the organization, dedicating themselves to some hard work and having a positive mindset usually succeed. Here are two stories I'd thought I'd share with you of Toastmasters that did exactly this. Hopefully they'll inspire you to realize what's possible within Toastmasters.
Ande is a 26 year old financial analyst who joined Toastmasters just over a year ago when he decided to enter into the International Speech Contest. He not only beat out several speakers that had years of more speaking experience within Toastmasters but went all the way to the semi-finals competing against some of the best in the world.
Yan was a 2 year member of Podium Toastmasters when she entered the International Speech Competition. She was encouraged by a member of her club to enter the competition. She had many doubts when she competed such as English not being her first language and she had no idea if she could even write a good enough speech. She also made it to the semi-finals of Toastmasters and beat out 1,000 other Toastmasters in the GTA to get there. I had the chance to interview Yan a few years back. You can read that interview here.
"redpajama_set_name": "RedPajamaC4"
} | 2,447 |
Who was the second biggest-selling music star to come out of Liverpool after the Beatles? It wasn't Gerry & the Pacemakers or Billy J. Kramer & the Dakotas, nor was it the Searchers. It was Cilla Black, a one-time coat-check girl from the Cavern Club who was still learning to sing...
Genres: Rock
Styles: British Invasion, Girl Group, Pop
See Also: George Martin [3]
Single [1]
Compilation [4]
Single tracks [43]
Title / Artist Year Tracks Bitrate Size (Mb) Price Download
Heart And Soul (Single)
Cilla Black / Dusty Springfield
1993 3 320 27,39 €0.24
Her All - Time Greatest Hits 2017 14 320 90,27 €1.12
Best Of 1963-1978 [3 CD] 2003 80 128 212,84 €6.40
The Best Of The EMI Years 2000 26 171 89,18 €2.08
The Best Of Cilla Black (2002, Remaster) 1968 25 320 169,44 €2.00
Single tracks
Track / Artist from album Bitrate Size (Mb) Price Download
It's For You 100 Greatest Love Songs
321 5,61 €0.10
Shy Of Love Ready Steady Go Vol. 26 "Mean Woman Blues" [CD 2]
You're My World (Il Mio Mondo) Ready Steady Go Vol. 27 (Around And Around) [CD 2]
A Fool Am I (2003 Digital Remaster) Original Hits - 60s Pop
Anyone Who Had A Heart Coronation Street - Magical Memories
Anyone Who Had A Heart Legends: Sixties
It's For You (2003 Digital Remaster) Original Hits - 60s Pop
Step Inside Love (2003 Digital Remaster) Original Hits - 60s Pop
Step Inside Love (Mono) 101 60s Party Hits
You're My World (Il Mio Mondo) Original Hits - Top Of The Pops
You've Lost That Lovin' Feelin (2003 Digital Remaster) Original Hits - 60s Pop
Alfie Absolute Unforgettable (3CD)
Alfie 60s (6CD)
Anyone Who Had A Heart 60s (6CD)
Anyone Who Had A Heart (Almigh Almighty The Definitive Collection 7 (2CD)
Anyone Who Had A Heart (Almighty 12'' Definitive Mix) Almighty The Definitive Collection 7 (2CD)
Anyone Who Had A Heart (Almighty Definitive Mix) Almighty Essentials Volume One (Limited Edition) (3CD)
Something Tells Me (Almighty Mix) CD Pool Remix September
Something Tells Me (Almighty Mix) Almighty Essentials Volume One (Limited Edition) (3CD)
Something Tells Me (Something's Gonna Happen Tonight) 70s (6CD)
Something Tells Me 2009 (Dan Thomas 12") DMC Essential Hits 54
Step Inside Love Almighty Essentials Volume 2 (3CD)
Step Inside Love (Almighty 12'' Mix) Almighty The Definitive Collection 7 (2CD)
Step Inside Love (Almighty 12" Almighty The Definitive Collection 7 (2CD)
Step Inside Love (Almighty Mix) Almighty Essentials Volume 2 (3CD)
You're My World 60s (6CD)
Alfie Magic Moments The Definitive Burt Bacharach Collection (3CD)
Alfie This Is 1966
Something Tells Me (Something's Gonna Happen Tonight) This Is...1971
Surround Yourself With Sorrow This Is 1969
You're My World (Il Mio Mondo) This Is.. 1964
You've Lost That Lovin' Feelin' This Is 1965
You've Lost That Lovin' Feelin' Pure 60's (3 CD)
Anyone Who Had A Heart Greatest Hits Of The 60's 5/8
Alfie The Look Of Love - The Burt Bacharach Collection
You're My World Hard To Find 45s On CD - Volume 7: More Sixties Classics
Anyone Who Had A Heart Wow That Was The 60's (8CD, Box-Set)
You're My World (il Mio Mondo) Wow That Was The 60's (8CD, Box-Set)
Anyone Who Had A Heart The Beat Goes On: The Greatest Hits Of The 60's And 70's
You're My World (Il Mio Mondo) The Beat Goes On: The Greatest Hits Of The 60's And 70's
It's For You The Songs Lennon & McCartney Gave Away
Love Of The Loved The Songs Lennon & McCartney Gave Away
Step Inside Love The Songs Lennon & McCartney Gave Away
Dusty Springfield [56] Kiki Dee [14] | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,119 |
author: roundcrisis
comments: true
date: 2008-05-22 10:48:00+00:00
layout: post
slug: nhibernate-from-the-begining
title: NHibernate from the beginning
wordpress_id: 8
categories:
- NHibernate
---
I've been using NHibernate for a while now, but I never started a project from the beginning or used NHibernate on its own (I mean without Castle Active Record). I started to read NHibernate in Action (the EAP edition), so let's see how it goes[^1].
[^1]: *NHibernate in Action*, Pierre Henri Kuaté, Tobin Harris, Christian Bauer, and Gavin King, Manning, January 2009, https://www.manning.com/books/nhibernate-in-action
| {
"redpajama_set_name": "RedPajamaGithub"
} | 9,889 |
var ol = require('openlayers');
//var underscore = require('underscore');
var $ = require('jquery')(window);
var Backbone = require('backbone');
Backbone.$ = $;
var IconObj = Backbone.Model.extend({
    initialize: function(attrs, options) {
        // Log the attributes this model was constructed with.
        console.log(attrs);
    },
    getLatLon: function() {
        // Placeholder implementation; should eventually return the icon's coordinates.
        return "HELLO!!!!";
    }
});
module.exports = IconObj;
| {
"redpajama_set_name": "RedPajamaGithub"
} | 6,213 |
Women were invited to wear national attire to the dinner and evening entertainment celebrating the role of women in families and society. Japanese and Korean women gave performances celebrating their culture, and YBhg Tan Sri Devaki Krishnan, President of UPF-Malaysia, gave the closing remarks.
The program was organized by the Malaysian chapters of UPF, the Women's Federation for World Peace, and the Collegiate Association for the Research of Principles.
"redpajama_set_name": "RedPajamaC4"
} | 8,955 |
MIL-OSI China: Manzhouli coronavirus cases likely imported
The coronavirus clusters that have sprung up in communities in Manzhouli, in the Inner Mongolia autonomous region, have been identified as having been triggered by imported infection, based on the latest research on the virus' origins, the local government said.
The genetic sequencing on the first two confirmed cases reported in the city on Saturday, a middle-aged couple, showed that their type of the novel coronavirus belonged to Branch I of the European L genotype, and the following comparison showed it was highly similar to the strain that spread in Russia, said Wang Wenrui, director of the Inner Mongolia autonomous region disease control and prevention center, on Thursday.
"It indicated that the latest outbreak in the city was triggered by imported coronavirus infections," Wang said, adding that investigations on other confirmed cases will also be conducted.
The city reported nine new confirmed cases of COVID-19 on Thursday, increasing the tally to 11. It also reported another asymptomatic case and a suspected case, according to the city's health authority.
"Among the universal screening of the 203,326 residents in the city starting from Sunday to Wednesday, we found 10 positive cases," Guo Xiaofang, deputy mayor of Manzhouli, said at a news conference on Thursday.
"Among them, there are family members, classmates, teachers and neighbors in the same residential communities, from ages 10 to 62," said Wang Hongquan, director of the Manzhouli disease control and prevention center. "The cases have obvious characteristics of cluster infections."
After the outbreak, Manzhouli has adopted strict measures to control the spread, including locking down six residential communities in two medium-risk areas for COVID-19 since Saturday evening, and conducting strict inspections on vehicles and travelers in and out of the city, according to the government.
Several flights between Manzhouli and other cities, including Beijing, Tianjin, Harbin and Hohhot, have been canceled, Manzhouli Xijiao Airport announced on Saturday. Bus and railway services for passengers were suspended to keep people from gathering and spreading the coronavirus.
In addition to the strict measures adopted in Manzhouli due to the recent outbreak, other cities have tightened regular management especially regarding cold-chain food, which has been considered the virus source of the recent outbreak in Tianjin.
In Beijing, Xinfadi wholesale market, which supplies more than 80 percent of the capital's agricultural produce, has disposed of all of its cold-chain food including seafood and frozen meat, local media reported on Wednesday.
According to staff at the market, it will strengthen the management of its cold storage to ensure safety under the current COVID-19 epidemic condition.
Large-scale nucleic acid testing will be carried out for all workers in the cold-chain food industry as well as those who live with them, said officials at a conference on Wednesday.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,533 |
Minimally invasive interventional radiology techniques are used to treat a wide variety of medical conditions. Radiologists use x-ray and other imaging technologies (MRI, CT and ultrasound) to guide small wires or catheters (thin, flexible tubes) or needles with specialized instruments to treat affected areas of the body. Some procedures only require a tiny incision where the catheter or wire is inserted into an artery, which often results in less blood loss, less pain and a quicker recovery for patients. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,568 |
2. Michael Schwarzmann (ger) s.t.
3. Magnus Cort (den) s.t.
5. Jonas Van Genechten (bel) s.t.
6. Kristian Sbaragli (ita) s.t.
8. Jhonathan Restrepo (col) s.t.
9. Jean-Pierre Drucker (lux) s.t.
10. Lorenzo Manzin (fra) s.t.
2. José Joaquin Rojas (spa) s.t.
5. Salvatore Puccio (ita) s.t.
6. Peter Kennaugh (gbr) s.t.
8. Leopold König (cze) s.t.
9. Ruben Fernandez (spa) s.t.
Laurent Pichon and Cesare Benedetti jojn forces in the first successful breakaway in this year's Vuelta. 20 minutes into the race Brian Naulleau manages to chase them down and the trio forges on to a maximum lead at almost 5 minutes. Yet, the gap goes fast with 50 kms done.
With 38 kms to go Gilbert attacks and joins the leaders, yet, the four are brought back with 16 kms remaining. Machado gives it a go, but to no avail.
The race boils down to a bunch sprint with Gianni Meersman fastest. As Michal Kwiatkowski powers to fouth place he is the new man in red. | {
"redpajama_set_name": "RedPajamaC4"
} | 4,087 |
Alzheimer's Association Training and Education Center
Submitted by Minnesota North... on Aug 21, 2020 - 11:03pm CDT
The Training and Education Center from the Alzheimer's Association offers a number of online dementia courses that are free to access after creating a free account. Course titles are listed, and you can use search filters to find courses you are interested in. There are three courses available in Spanish.
Link to Resource:
https://training.alz.org/home
Additional Tags (Optional):
Public Health Curriculum on Alzheimer's
Communication and Alzheimer's
A Public Health Approach to Alzheimer's and Other Dementias
Managing Behaviors Associated with Alzheimer's Dementia
ACT on Alzheimer's Electronic Medical Record (EMR) Decision Support Tools for Alzheimer's and Related Dementias
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 8,455 |
#DVD: Rising Stars Halep & Bouchard Board the Coaching Carousel
Posted on November 25, 2014 by The Tennis Island Staff in Roundtables & Debates // 2 Comments
By: David Kane & Victoria Chiesa
Tennis is notorious for its heinously short off-season; with the almost non-stop influx of #breaking news, the players might be the only ones guaranteed a proper vacation. On the heels of Simona Halep's split with Wim Fissette, who coached the Romanian to a debut Grand Slam final and a career-high ranking of No. 2, it was announced that Canada's Eugenie Bouchard would part with longtime coach Nick Saviano. Saviano made the decision public on Monday afternoon, and had been seen as a tremendous force behind Bouchard's sudden rise to the top of women's tennis. Victoria Chiesa joins David Kane to discuss the most recent coaching upheavals, and what importance a player's team has, especially when they plan to follow up a breakthrough season.
David Kane: I'll be honest: I was still wrapping my head around the idea of Halep's dismissal of Fissette when I heard about Bouchard and Saviano. When one thinks about a coaching change, it's typically a situation where the relationship isn't working (Sloane Stephens and Paul Annacone), or where one party no longer wants to travel (Maria Sharapova and Thomas Hogstedt), or where the pair never made sense from the beginning (Maria Sharapova and Jimmy Connors). Here are two players who had incredible but, most importantly consistent, seasons. Both reached Grand Slam finals and played convincing tennis against the best in the game. What about that kind of data would lead a player to anything other than "more of the same" when looking ahead to next season?
For Halep, this is the second straight season that she will punctuate with a coaching change. Announcing that she'd split with Adrian Marcu within a few weeks of winning the Tournament of Champions in Sofia – her sixth title of 2013 – she looked to be making an upgrade to one of the more "brand name" coaches in Fissette, who once presided over four-time major champion Kim Clijsters. Some of Halep's reasoning in ousting a coach seems vague this time around, surprising given how clearly she ended the season. What do you make first of Halep and this perceived coaching carousel after impressive results?
Victoria Chiesa: Halep's decision to hire Fissette for the 2014 season should've been the final step in her climb towards the top of the WTA. She announced herself with her play on the court in 2013, and an elite coach was supposed to lead her to bigger things. While Fissette did that, helping Halep to the finals of a Grand Slam and the WTA Finals, and a career-high ranking, Halep decided to go in another direction once again. While her previous coaching change seemed logical, this one baffles. I was suspicious of Halep's decision a year ago, but it worked out well for her. This time, however, she appears to be going a different route. There have been reports that she's hired Victor Ionita, the former coach of Sorana Cirstea, and that Hogstedt will work as a consultant in Australia. It seems as though she's a perfectionist with high expectations for herself, and that might be what drives these changes; it's good to see that she recognizes that there are areas where she can improve. However, I'm eager to see how she'll react to two different voices essentially telling her what to do.
DK: Indeed; where last year felt like a natural progression, this feels like a bout of strange decision-making. Halep was quoted in Romanian press that she was certain her next coach had to be Romanian, as if to assure her public that she wouldn't continue selling out by working with international coaches. We've heard things from players like Ivanovic in the past, who said that she preferred working with Serbian coaches from a linguistic and cultural standpoint. But Halep's words gave the impression that she felt she had something to prove. Technically speaking, Fissette seemed like an ideal match for the World No. 3; having worked with Clijsters, he was someone with experience in molding a more defensive-minded player into an offensive counter-puncher. Taking on Hogstedt is another wrinkle. Halep doesn't seem to be a player that enjoys clutter, and I somehow don't see her reacting well to multiple voices. Throw manager and former Grand Slam champion Virginia Ruzici into the mix and that's just a lot of opinions getting bandied about. At a time when the goal is clear – win a Grand Slam title – this might be a big case of over-thinking on the part of Team Halep.
Where one Rising Star might be used to this sort of change, it's all brand new for Bouchard. The Canadian has worked with Nick Saviano since the age of 12. This is a coaching relationship that took her to a junior Grand Slam title, and encouraged the tactical adjustments that led to an explosive 2014 season. This is also a coaching relationship that played a big part in her falling out with former friend, Laura Robson. As Bouchard continued to improve, Saviano was seen as a svengali figure. Broadcasters became enamored with his aggressive style and sometimes split-screened his reactions with those of his protegée. The Big Bang Theory's Jim Parsons accidentally talked about how he would loudly coach Bouchard from the stands at Wimbledon, and though the two had a few intense moments in Paris, Montreal, and Singapore, they seemed like a force of Dinara Safina/Zeljko Krajan proportions. Between Halep and Bouchard, which coaching switch was more surprising, and how big was Saviano's role in Bouchard's ascent?
VC: While I heard whispers of tension between Bouchard and Saviano – beginning with a report that the two had a very public dispute a day before the Roland Garros semifinals – I was still shocked to learn the two had parted ways. I had a chance to watch the two interact in a practice session prior to the start of the US Open, but couldn't tell if the strange vibe I got was merely par for the course. Saviano seemed unsettled, and didn't stay in one place on the court for more than a few minutes as Bouchard hit with Daniela Hantuchova. She didn't seem all that receptive to what he was saying to her, either. As SI Tennis's Courtney Nguyen pointed out, it's interesting that the announcement of the split came via a press release from Saviano's academy, and not from Bouchard herself.
I'm not sure where Bouchard goes from here, coaching wise. Saviano was the man who molded her into the player she is today – wonky strokes and all – and I'm not sure what's going to happen when someone else takes over. Off the court, Saviano seemed to know how to handle Bouchard's strong personality. While we don't really know what went on behind the scenes, it has been said that familiarity breeds contempt. The pair worked together for eight years and I feel as though something probably triggered the split. Do you think that there's more to this than the average player/coach disagreement?
DK: When I heard about the spat at the French, I thought a split was imminent. It was Bouchard's second straight Grand Slam semifinal and she was due to play Maria Sharapova, the odds-on favorite to win the title. The outburst seemed to indicate that Saviano wasn't equipped to handle a player who was rising so high, so fast. Yet Bouchard came to play against the eventual champion, getting to within one game of the French Open final.
I thought a split was possible after Montreal. It was her first tournament after finishing runner-up at Wimbledon, and she was clearly struggling to cope with the pressure of playing at home. She snapped at Saviano, audibly wishing that she could leave the court.
By Singapore, I was used to what seemed to simply be their schtick. Bouchard failed to win a set, and muttered how she shouldn't have even bothered playing the WTA Finals, having had her training schedule stunted by a left leg injury. There were certainly dysfunctional elements, yet Bouchard ended Singapore sounding positive about next season despite some dire results. For all of Bouchard's bravado, I've noticed how she's tended to lack belief against the top players in the world. Besides a badly misfiring Sharapova and an extremely injured Halep, the Canadian has come off lacking a clear game plan when the biggest names are at their very best. At Wimbledon, she seemed obstinate in her desire to go toe-to-toe at the baseline against an on-song Petra Kvitova, and the results were disastrous. We might never know if this is what Saviano advised her to do, but a more experienced coach might have gotten something else out of Bouchard in what was a critical moment in her young career. It is exceedingly rare that a player retain their childhood coach…
VC: The Justine Henin approach doesn't work for everyone.
DK: So it will be interesting to see who Bouchard will go with, and what they will look to improve. It might be a big ask to work with stroke production, but that might not be necessary anyway. When Bouchard is feeling confident, those shots sear through the court with almost improbable accuracy. When doubt creeps in, it all becomes an unadulterated mess. In an ideal world, who would you match Bouchard with, and what do you think she needs to do heading into next season?
VC: First, I think Bouchard needs a coach that's going to help her look at the big picture a bit more. Many times during their coaching timeouts, Saviano would tell her just to focus on her own game, to play her brand of tennis. As you pointed out, that didn't always work, and Bouchard was stubborn to a fault in many of her losses in 2014. Her style of play requires impeccable timing, and she does need to develop a backup plan when that timing isn't there. In addition, she needs someone who can help her navigate the heightened expectations that she'll be facing next year. She'll start the year as a Top 10 player, and with that status comes a whole different set of pressures. She's no longer the challenger, and won't be the underdog in the majority of the matches she plays. She needs a coach that can help her manage a few early losses when they happen.
DK: In a way, Bouchard finds herself in a situation not wholly dissimilar to the one in which Halep found herself one season ago. With a very successful year under her belt, it will be important for the Canadian to assemble a team that can take her through the off-season and get her raring to go come Australia. With a ton of points to defend from the get-go, the coach will need to ready her for a very new part of her career. I would hope she takes Halep's approach and links up with someone who has been through it with an elite player already. Bouchard and Saviano were somewhat equally green when it came to the upper echelons of the WTA ranks, and that may have caused a lot of their apparent tension. It may not feel comforting to have to start from scratch, but complacency won't get her through a season where she's unlikely to have the aura of invincibility of a more established Top 5-8 player. A new season tends to bring change, but whether things stay the same for Bouchard or Halep will depend on how they apply whatever advice they get – from whomever they get it – to the tennis court, where it counts.
1 Comment on #DVD: Rising Stars Halep & Bouchard Board the Coaching Carousel
Mark // November 25, 2014 at 5:45 pm // Reply
"This is also a coaching relationship played a big part in her falling out with former friend, Laura Robson."
This is only speculation and rumor. #journalism
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,562 |
Hall of Merit
— A Look at Baseball's All-Time Best
Hurley McNair and Bill Pettus
McNair eligible 1942
Pettus eligible 1928
DL from MN Posted: February 03, 2022 at 08:10 AM | 4 comment(s)
1. DL from MN Posted: February 04, 2022 at 06:30 PM (#6063882)
A thread for some players who have been receiving consideration but didn't have a previous thread
2. Chris Cobb Posted: February 05, 2022 at 12:13 PM (#6063927)
Thanks! I hope to have some things ready to post in this space soon.
3. progrockfan Posted: February 05, 2022 at 02:05 PM (#6063932)
Hurley McNair:
Led 1922 NNL with 50 BB and .466 OBP
Led 1923 NNL with 98 G (tied with Heavy Johnson), 439 PA, and 49 BB
Bill Pettus:
Led 1911 WES with 202 OPS+
Led 1912 EAS with 14 R and 4 SH
Led 1913 EAS with 2 3B (tied with 4 others), 2 HR (tied with Cannonball Redding), and 32 TB
Led 1915 EAS with 38 G (tied with Sam Mongin), 14 2B, and 3 HR (tied with 2 others)
Led 1916 EAS with 12 2B (tied with Louis Santop)
Led 1917 EAS with 3 3B (tied with 3 others) and 4 HR (tied with Jules Thomas)
Led 1918 EAS with 39 H (tied with Jules Thomas), 4 3B (tied with Julio Rojo), 2 HR (tied with 3 others), 57 TB, .424 BA, .480 OBP, .620 SLG, 1.100 OPS, and 208 OPS+
Led 1921 EAS with 68 G, 294 PA, 6 HR, 49 RBI, 112 TB, and 5 HBP
Led 1922 IND with 23 BB
4. Chris Cobb Posted: February 11, 2022 at 11:44 AM (#6064911)
In evaluating Bill Pettus, there are five major obstacles to establishing the extent of his accomplishments:
(1) Small number of recorded games. On average, we have records of about 27 games per season in his 15 year career.
(2) Uncertainty about quality of competition. Dr. Chaleeko has argued (and I believe has implemented for his MLEs) a steeper competition adjustment for 1910-19 Black Baseball than the standard competition adjustment for the organized Negro-League period of 1920-48. I think he is engaged in a larger study on this issue. I don't know what the outcome will be, but it's likely that some greater discount would be requisite.
(3) The use of DRA as the system for calculating fielding value in the Seamheads data creates a discrepancy between the relative value of hitting and field in comparison to the white majors' WAR numbers from the period. Pettus' career fielding rate was 14 runs/162 games, which is about twice the rate achieved by the two best-fielding ML first basemen of the 1900-1925 period, Fred Tenney and Wally Pipp, whose career rate was 7 runs/162 games.
(4) During several of what appear to be Pettus' top seasons by rate, his playing time is divided up among many teams (4 teams in 1917 and 5 teams in 1918), and those teams played a highly variable number of games. This situation makes it difficult to project Pettus's playing time for these key seasons. Dr. Chaleeko's MLEs project him as having quite limited playing time for these two seasons, and that contributes to Pettus's MLEs making him look like a much less viable candidate than his rate stats suggest. I suspect that Pettus's highly fragmented play during these seasons has been impossible for a quantitative system of projecting playing time to handle accurately.
(5) Because Pettus died in 1924, just a year after his playing career ended, his legacy in the lore of the Negro Leagues is scanty. If he had lived another 20-40 years and become more a part of the NeL's oral history, we'd probably have more to go on with respect to reputation than we do.
Of these five problems, nothing can be done about (5), except to acknowledge that Pettus's lack of reputation is affected by his early death. I am working on ways to deal with (3) and (4), which I'll be writing about soon. We may have additional insight on (2) at some point soon, and in the meantime it is possible to make multiple estimates using different competition levels. (1) can also be addressed to some extent through the appropriate application of regression analysis. I lack the expertise to devise and implement a plan in this area, but I hope from discussion to establish a suitable way of incorporating regression into constructions of Pettus's value.
I'll be writing up and posting some initial views of Pettus vs. Taylor over the weekend, but I thought I'd lay out these issues of interpretation first to set the context for the numbers to come.
As Progrockfan's post shows, Pettus was definitely a leading offensive player in the East for most of his documented career, so it's worthwhile to develop an in-depth analysis of his play.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 5,991 |
Sandstone is a type of rock composed of sand grains that are mostly between 0.06 and 2 mm in size, larger than silt particles and smaller than the rubble that makes up breccias and conglomerates. By origin, sandstones are sedimentary rocks, usually with a connected porous structure that allows fluids to flow through them. This porosity and permeability make them one of the most important reservoirs of water, gas and oil.
Sandstones have formed in almost all geological periods as a result of the weathering, transport and sedimentation of particles. Sandstones are deposited primarily in river delta systems, and they are also one of the predominant sediments in alluvial fan systems.
In certain cases sandstone can form from wind-deposited (aeolian) sand, that is, through its subsequent compaction. The sands of the Vinkovci plateau are one example. Rocks can also be eroded by the action of ice, in which case moraine sediments are formed.
Sandstone as a sediment also exists beyond Earth. Many celestial bodies have part of their surface covered with dust particles of various sizes, such as the regolith on the Moon. Some of these particles can be of sand size. Furthermore, considerable areas of the surface of Mars are covered with sand.
External links
Sedimentary rocks | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,914 |
Rita Kleinstein (born Rita Yahan-Farouz on 24 March 1962 in Tehran, Iran) is an Israeli singer and actress.
Discography
Rita (1986)
Yemei Ha'Tom (1988)
Ahava Gedola (1994)
Tahanot Bazman (1996)
Tiftah Halon (1999)
Hamtzan (2003)
Remazim (2008)
In 1990 she represented Israel at the Eurovision Song Contest, held in Zagreb, with the song
«Shara Barkhovot» (שרה ברחובות), with which she finished in 18th place (16 points).
References
Israeli representatives at the Eurovision Song Contest
Kleinstein, Rita
People born in Tehran | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,835 |
INDONESIA: Impunity for soldiers accused of torturing an indigenous Papuan
Urgent Appeal Case |Indonesia | December 9, 2010
ASIAN HUMAN RIGHTS COMMISSION - URGENT APPEALS PROGRAMME
Urgent Appeal Case: AHRC-UAC-178-2010
ISSUES: Indigenous people, Inhuman & degrading treatment, Military, Right to fair trial, Torture,
An Indigenous Papuan man, Tuanliwor Kiwo, was arbitrarily detained and tortured in May 2010 by the Indonesian military. Mr. Kiwo survived by escaping two days after being detained. Despite the significant international attention that the case received after a video of the torture was published by the Asian Human Rights Commission (AHRC) in October 2010, there are serious concerns that the perpetrators will not be held accountable and adequately punished. The victim was arrested at the Kwanggok Nalime TNI post near Yogorini village on his way from Tingginambut towards Mulia, Papua, Indonesia. During two days of detention, Mr. Kiwo was subjected to several serious forms of torture, before he was able to escape in the morning of the third day. Mr. Kiwo is currently in hiding for security reasons. A summary of his testimony is available here. (Photo: Tuanliwor Kiwo)
CASE NARRATIVE:
While some sources report the arrest to have taken place on May 30, 2010, a translation of the victim's testimony alleges the arrest to have taken place on May 9, 2010. Mr. Anggen Pugu Kiwo, who is also known as Tuanliwor Kiwo, reached the Kwanggok Nalime military post at around 9.00 a.m. while traveling by motorbike taxi towards Mulia in Papua. Mr. Kiwo was asked to enter the military post, where he was handcuffed and beaten. Mr. Kiwo has provided testimony about his treatment:
According to Mr. Kiwo's testimony, he was subjected to torture for some 32 hours, which included him being subjected to:
• Having his toes pulled with pliers;
• Having his penis pulled with pliers until it was almost severed;
• Having his chest, stomach and thighs burnt with a hot iron rod;
• Having his hands tied with rope that was then used to smash him against stones and other hard objects resulting in injuries to his knees and other parts of his body;
• Being tied up and placed under a large pile of wood making it difficult to breathe. The pile was then lit with petrol, causing the victim to think he was going to burn to death, but he was removed before he received major burns.
• Having a plastic bag tied around his head until near suffocation;
• Having his limbs tied down and then being stepped upon with boots for prolonged periods, resulting in a broken nose and severe bleeding from nose, mouth and other parts of the head;
• Rough shaving of his hair and beard, resulting in cuts to the mouth, ears and nose;
• Burning of the skin with a mixture of chili, washing powder and salt;
• Having a lit cigarette pushed into his nose, causing burns;
• Repeatedly being covered in cold water at night resulting in shivering and cramps;
• Sleep deprivation;
• Repeated beatings;
• Threats of having his throat cut with a bayonet blade;
• Being hung upside down and threatened with having his body split in two with an axe;
• Having his legs tied with barbed wire;
• Beatings on the back with a wooden rod resulting in the breaking of bones in the victim's back;
• His whole body being put into a plastic sack;
• Prolonged handcuffing with a rope resulting in swollen legs and hands;
• Being left for a prolonged time naked in the sunlight.
During the torture, Mr. Kiwo repeatedly pleaded for the perpetrators to stop and to release him, without success. Mr. Kiwo reported having endured severe panic attacks, cramps and extreme pain and to have lost consciousness during the torture. He explained that he was only able to walk with great difficulty and pain during his escape, due to the swelling of his legs.
Mr. Kiwo was interrogated regarding separatist activities in the area and about possible weapons held by community members.
In the late afternoon of the second day of detention by the military Mr. Kiwo received basic treatment for his injuries. His wounds were cleaned with alcohol and antiseptic fluid, he received injections in his swollen feet and hands and thighs and received stitches to his broken nose. He was then given some clothes. Mr. Kiwo reported that the military were no longer able to handcuff him as he was not able to bend his limbs sufficiently due to cramp and swelling.
During the second night of detention, Mr. Kiwo heard the soldiers planning his execution. He also witnessed phone calls between the post and other units of the military about his case, showing that others in the military were aware of the treatment he was being subjected to. Following this he successfully untied his body and escaped. Mr. Kiwo managed to avoid being hit by bullets being shot at him and ran away. He reported this escape to have taken place on May 11, 2010.
He is currently in hiding and has reportedly not been able to see a doctor to receive treatment for his wounds. Please see the summary of his testimony (6:25 min) or watch the full testimony (30:14 min) as published by the Papuan Customary Council (DAP).
Indonesia ratified the Convention against Torture in 1998 and is therefore required under international law to criminalise torture and to halt its use. However, torture continues to be used routinely by the police and the military in Indonesia. Furthermore, Indonesia agreed in 2008 to review its Penal Code to criminalise torture during the Universal Periodic Review of Indonesia's human rights situation at the UN Human Rights Council.
The Indonesian military enjoys immunity from prosecution by the civilian justice system. Violations of human rights by members of the military rarely result in anything but very lenient punishment as the result of trials by military tribunals that do not meet internationally accepted standards of transparency, independence or fair trial.
The AHRC released a video in October showing two incidents: one concerning the ill-treatment of a group of arrested indigenous persons, while the second showed the torture of Mr. Kiwo. A military tribunal has reportedly sentenced the perpetrators in the first case of ill-treatment to a few months of imprisonment. However, no progress at all has been made concerning the torture of Mr. Kiwo. Lt. Col. Susilo, a spokesman for the military command in Papua, claimed that the military was not able to find the perpetrators in this case.
The Indonesian military is frequently reported to have conducted so-called sweeping operations in the Papuan highlands, which result in abuses and the intimidation of the indigenous community. Indonesia justifies the heavy military presence in Papua with the alleged threat of armed independence movements. However members of the indigenous community are often falsely stigmatized as supporters of such organizations. They then become targets of police and military violence including arbitrary arrest, torture, and other human rights violations.
A second victim was arrested together with Mr. Kiwo. He has reportedly been released after his family had pleaded for him to be set free.
Please write to the authorities below calling for an impartial investigation into the torture case and to ensure that the perpetrators are held adequately accountable given the grave nature of the human rights violation.
The AHRC is also writing separately concerning this case to the United Nations' Working Group on Arbitrary Detention; the Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment; and the Special Rapporteur on the situation of human rights and fundamental freedoms of indigenous people .
To support this case, please click here: SEND APPEAL LETTER
Dear __________,
Name of victim: Tuanliwor Kiwo
Names of alleged perpetrators: TNI members of the Kwanggok Nalime TNI post near Yogorini village, near Puncak Jaya, Papua, Indonesia
Time of incident: May 2010
Place of incident: Kwanggok Nalime TNI post and vincinity
I am writing to voice my deep concern regarding the arbitrary arrest and torture of Mr. Tuanliwor Kiwo in May 2010. I am shocked to hear that despite the release of a detailed testimony of his torture by the Indonesian military, no action was taken to arrest the perpetrators.
While some sources report the arrest to have taken place on May 30, 2010, a translation of the victim's testimony alleges the arrest to have taken place on May 9, 2010. Mr. Anggen Pugu Kiwo, who is also known as Tuanliwor Kiwo, reached the Kwanggok Nalime military post at around 9.00 am while traveling by motorbike taxi towards Mulia in Papua. Mr. Kiwo was asked to enter the military post, where he was handcuffed and beaten.
• Having his toes pulled with pliers;
• Having his limbs tied down and then being stepped upon with boots for prolonged periods, resulting in a broken nose and severe bleeding from nose, mouth and other parts of the head;
• Having a lit cigarette pushed into his nose;
During the torture, Mr. Kiwo repeatedly pleaded for the perpetrators to stop and to release him, without success. Mr. Kiwo reported having endured severe panic attacks, cramps and extreme pain and to have lost consciousness during the torture. He explained that he was only able to walk with great difficulty and pain during his later escape due to the swelling of his legs.
After two days in illegal military detention, Mr. Kiwo was able to escape from the military post to seek medical help and shelter. He is currently in hiding.
I am deeply concerned about the security of Mr. Kiwo as a result of the inaction by the military and police to investigate this case and bring those responsible to justice. The AHRC believes that Mr. Kiwo continues to be under serious threat since his escape.
I urge you to take all necessary measures to ensure that the case is investigated by an impartial body and that the perpetrators of this horrendous crime are being identified and prosecuted according to the law.
PLEASE SEND YOUR LETTERS TO:
1. Mr. Susilo Bambang Yudoyono
Jl. Medan Merdeka Utara
Fax: + 62 21 231 41 38, 345 2685, 345 7782
2. Adm. Agus Suhartono
Tentara Nasional Indonesia (TNI)
3. R. Widyopramono SH,M.Hum
Kejaksaan Tinggi Papua
Jl. Anggrek No.6 Tj. Ria Jayapura
4. General of Police Timur Pradopo
Chief of Indonesian National Police
Jl. Trunojoyo No.3
5. Drs. Bekto. Suprapto. M.Si
Head of Police Area Headquarters Jayapura, Papua province
Jl. Samratulangi No. 8 Jayapura
Tel: + 62 0967 531014
6. Drs. Petrus Waine
Director of the Criminal Unit
Papua Regional Police
Jl. Samratulangi
No. 8 Jayapura
Urgent Appeals Programme
Asian Human Rights Commission (ua@ahrc.asia)
Indonesia Desk (indonesia@ahrc.asia)
Document Type : Urgent Appeal Case
Document ID : AHRC-UAC-178-2010
Countries : Indonesia,
Campaigns : End Violence in West Papua
Issues : Indigenous people, Inhuman & degrading treatment, Military, Right to fair trial, Torture,
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,914 |
Families of Emmett Till, Trayvon Martin Bond in DC for Film Screening
D.L. Chandler
On the eve of the "Realize The Dream" march in Washington, the families of Emmett Till and Trayvon Martin joined together at Shiloh Baptist Church in Washington for a panel discussion and also the screening of filmmaker Keith Beauchamp's film, "The Untold Story of Emmett Louis Till."
The event, titled "The Mamie Till Mobley Memorial & Trayvon Martin Foundations Present: Civil Rights, Human Wrongs, and the Charge for Youth Leadership," took place Friday evening and was attended by a host of activists, organizers and others in the emotionally charged evening.
MSNBC's The Cycle host Toure' served as the moderator for the event. The panel's special guests were Sybrina Fulton and Tracy Martin, Fulton's son Jahvaris Fulton, Simeon Wright, cousin of Emmett Till, Kevin Powell, Beauchamp and Victoria Pannell, the Emmett Till Ambassador For Peace.
Toure' opened the evening, which was a largely informal but connected town hall-styled chat. As attendees filed into the church, the weight of the moment was not lost on some. "I had to come just because of what we saw happen with little Trayvon," said one woman, who chose not to be identified. "I have grandchildren that will grow up knowing about this case. I'm here for them."
Indeed many families were together at the event, including Global Grind's Michael Skolnik with his family. As various speakers spoke on the injustices of both Martin and Till, a choir singer's stirring rendition of "Wade In The Water" welcomed Ms. Fulton, Mr. Martin, their son and also attorney Benjamin Crump. The floor was then given to Beauchamp, who explained the making of the film.
Although the immediate parallels were not evident early on, towards the end Beauchamp injected bits of the Zimmerman trial, and the angry and still continuing protests surrounding the night watchman's not guilty verdict. Both Toure' and Powell acknowledged they were moved by the film, and, judging by the sniffles heard in the silent moments, they were not alone.
The event switched to the panel, with Toure' given equal microphone time to all guests. Ms. Fulton admitted that she and her family were tired from doing a series of interviews and appearances in and around Washington but said of the event that her family was determined to connect with the Till family. Mr. Powell mentioned that while the union was important, he was saddened by the reasons for the event – mentioning the names of Sean Bell, Oscar Grant and Michael Stewart as moments where his activism came into play.
Powell also mentioned Julian Bond, who was also in attendance, saying the gathering and the energy around shifting the treatment of African-Americans was "bigger than Trayvon." Powell also mentioned that the 58th anniversary of the death of Till and the 50th anniversary of the "March On Washington" served as the spark that should have motivated change, but admitted "forces" were at play that "robbed us of every little victory."
Early in the talk, the typically outspoken Powell said, "A march or rally is absolutely useless," and while some may have bristled at this statement, he was careful to tie in the idea that organization must go beyond the rallies and take things of this nature on the ways that Dr. King, Fannie Lou Hamer, Bond and others have done in times past. Saying that people shouldn't be satisfied with "being pissed off for a couple of weeks," he urged attendees to support the efforts of the Dream Defenders and the foundations of the Till and Martin family as well.
The evening wasn't all serious, as Toure' asked Fulton and Martin what kind of child was Trayvon. "He was an average kid," said Fulton with a smile. "He liked to talk on the phone, he was a teenager. He liked to eat. Of course, I'm Mommy, so I got after my boys but he was just Trayvon."
Martin asked Jahvaris — imposing in a simple dark suit, yet still significantly smaller than the hulking Martin – to stand next to him. "You know what, to know who Trayvon Martin was, his brother is right here. They're just alike," he said, beaming at the 22-year-old young man he raised as his own son. Martin also mentioned to the audience that Jahvaris was entering his last year of college; Jahvaris later told the crowd he wants to become an intellectual property attorney.
Mr. Wright, who witnessed the incidents that led up to the death of his cousin, left perhaps the most poignant message of the night, this while also railing against conservative pundits and their portrayal of Trayvon. "Never let the world forget what happened to your son," said Wright. "Don't let the Pat Buchanans of the world poison your mind."
See exclusive photos from the 2013 March on Washington below:
Families of Emmett Till, Trayvon Martin Bond in DC for Film Screening was originally published on newsone.com
Emmett Till kevin powell Mamie Till Mobley Memorial Foundation Toure trayvon martin | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,620 |
Tag: Quran burning
What happens when America tells a lie to win a war?
by Jack Smith | Feb 19, 2015 | Bible prophecy, Islamic Prophecy | 4 |
Time.com Washington Bureau Chief, Michael Scherer, recently reported (2/19/2015) on the public debate between conservatives and President Obama regarding the best terminology to use for the Islamic State. According to the...
The U.S. Military Should Hand Out Qurans in Afghanistan as a Good-Will Gesture – The Daily Beast
by Jack Smith | Mar 2, 2012 | Arab Spring, Islamic teachings | 1 |
The following article was forwarded to me by a close friend. The suggestion in the article was so "noble," that I responded on the site of the author, Richard Miniter, and gave it a post here. The U.S....
by Jack Smith | Feb 29, 2012 | 2 Thessalonians, Antichrist, Bible prophecy, Islamic Prophecy, Islamic teachings, Theology - Christian vs Muslim | 5 |
Behind my desk is a newly framed picture that has special meaning to me. It is a picture of the Dome of the Rock, the Muslim religious shrine that memorializes Muhammad's "night journey" and his heavenly...
What does the Qur'an say about its burning?
by Jack Smith | Feb 23, 2012 | Islamic Prophecy, Islamic teachings | 4 |
Reporting in the Tribune, February 22, 2012: "KABUL:At least five Afghans were shot dead and dozens wounded Wednesday in clashes between police and demonstrators protesting over the burning of the Holy Quran at a US-run... | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 8,224 |
\section{Introduction}
The two extensively debated high-mass star formation
scenarios have painted largely different pictures of
the mass accumulation process.
Extensive investigations show that the prevalent filaments
are the most important engines of star formation, especially
for high-mass stars (\cite[Andr\'{e} et al. 2014]{Andre14}).
How the gas flows detected in filaments help
individual cores grow in mass is still a key open question.
In this work, the filamentary cloud G22 is investigated
in detail to reveal a promising mass accumulation
scenario.
\section{Mass accumulation process in G22}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{Poster_Fig1}
\caption{(a) Dust temperature map from SED
fitting with column density overlaid as contours.
(b) Velocity centroids of $^{13}$CO (1-0)
spectra extracted along filaments overlaid
on top of the $N_\mathrm{H_2}$ map. The eight clumps,
designated as C1 to C8, are shown as open
ellipses. (c) LOS velocity of $^{13}$CO as a
function of distance from the potential well
centers, i.e., clump C1 for F1, F2, and F4,
and clump C2 for F3}
\label{fig1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{Poster_Fig2}
\caption{(a) Velocity centroids of $^{13}$CO (1-0)
extracted along filaments overlaid on
the $N_\mathrm{H_2}$ map.
(b) Spectra of JCMT/ $^{13}$CO (3-2)
overlaid on SMA/CO (2-1) outflows. The
large ellipse delineates clump C1.
(c) A close-up view of the SMA 1.3 mm continuum.
The mono-core is designated as SMA1. A filled star
shows the MIR source SSTGLMC G022.0387+00.2222
(MIR1) from the GLIMPSE survey. The inset plot shows
the $^{13}$CO (2-1) spectrum at the SMA1 peak.}
\label{fig2}
\end{center}
\end{figure}
{\underline{\it G22: a collapsing hub-filament system}}.
At a distance of 3.51 kpc, the G22 cloud contains ten
\textit{Spitzer} infrared dark clouds (IRDCs). These IRDCs are mainly
distributed in a hub-filament system.
As shown in Figure \ref{fig1} (b), systematic velocity changes are
detected along filaments F1, F2, and F3 based on $^{13}$CO (1-0)
observations. The differences between the velocities of the filaments
and the junction as a function of distance to the center show
monotonically increasing profiles for F1, F2, and F3
(see Figure \ref{fig1} (c)). This
suggests that gas is transferred to the hub region along these
filaments with an estimated total mass infall rate of 440
\msun~Myr$^{-1}$ (\cite[Yuan et al. 2017]{Yuan17}).
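One commonly used estimator for such filamentary accretion (not
necessarily the exact prescription adopted here) is the
projection-corrected relation of Kirk et al. (2013),
\begin{equation}
  \dot{M}_{\parallel} \simeq \frac{\nabla V_{\parallel}\, M_{\mathrm{fil}}}{\tan\alpha},
\end{equation}
where $\nabla V_{\parallel}$ is the velocity gradient along a filament,
$M_{\mathrm{fil}}$ is the filament mass, and $\alpha$ is the (unknown)
inclination of the filament to the plane of the sky.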
{\underline{\it G22-C1: a collapsing high-mass clump}}.
Located at the hub region, C1 is the most massive
clump with a mass of 590 \msun. Prevalent blue profiles
are detected toward C1 (see Figure \ref{fig2} (b)), suggestive
of clump-scale global collapse. The estimated mass infall rate is
$7.2\times10^{-4}$ \msun~yr$^{-1}$.
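As a rough illustration only (and not necessarily the procedure used
in the original analysis), infall rates of this order follow from
assuming spherically symmetric infall at a characteristic velocity
$V_{\mathrm{in}}$,
\begin{equation}
  \dot{M}_{\mathrm{inf}} \simeq 4\pi R^{2}\, \rho\, V_{\mathrm{in}},
\end{equation}
with $R$ the clump radius, $\rho$ its mean density, and
$V_{\mathrm{in}}$ estimated from the blue-skewed line profiles.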
{\underline{\it G22-C1-SMA1: A collapsing hot molecular core}}.
At the center of C1, a hot molecular core SMA1 with a gas temperature
higher than 220 K is detected. The spectra of $^{13}$CO (2-1) and
C$^{18}$O (2-1)
show blue profiles (see Figure \ref{fig2} (c)),
indicating infall motions in SMA1. The estimated
mass accretion rate is about $7\times10^{-5}$ \msun~yr$^{-1}$.
\section{Conclusions}
Inward motions have been detected along the filaments, in the central
clump, and in the dense core. The continuous mass growth from large
to small scales suggests that high-mass starless cores might not
be a prerequisite for forming high-mass stars. The deeply embedded protostar,
the core, and the clump can simultaneously grow in mass.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,452 |
A Son Unique is the third and final studio album by American rapper Ol' Dirty Bastard. The album was to be released after ODB's death, but it was shelved. It was supposed to be distributed by Dame Dash. However, the label's branding was removed from the release and the album's launch was cancelled. It is now available only as a digital download.
A Son Unique was originally due for release on August 9, 2005, but was subject to numerous delays. On November 7, 2006, the album was to be issued to mark the anniversary of ODB's death, which occurred on November 13, 2004, but it was cancelled. On the planned release date, A Son Unique was made available on iTunes. The release was eventually cancelled by ODB's label, Roc-A-Fella. The album was finally set to be released in November 2009 to mark the fifth anniversary of ODB's death, but it ended up being cancelled once again.
Track listing
Cancelled albums
Ol' Dirty Bastard albums
Albums produced by Pharrell Williams
Albums produced by DJ Premier
Albums produced by Mark Ronson
Albums produced by RZA
Albums produced by Rockwilder
Hip hop albums | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,247 |
I'm just about to upgrade to Tiger. I downloaded the Ai 6 for a Microtek 120 tf. Haven't tried it yet but it's the Tiger download. I don't see a Tiger download for HDR. Is the current version OK or is there a Tiger version in the works?
=> SilverFast 6.4.1r7d or higher is deemed ready for Tiger.
While the 6.4.1r7d HDR Studio is already online, the HDR version will be available quite soon.
"redpajama_set_name": "RedPajamaC4"
} | 5,321 |
Q: Understanding type deduction for universal references Let foo be the function:
template< typename T >
void foo( T&& a ){}
To what type will T be deduced for the following calls of foo:
foo( 0 ); // is T here int or int&& ?
int a = 0;
foo( a ); // is T here int or int& ?
A: The expression T&& in a deduced context like what you provided
template< typename T >
void foo( T&& a ){}
ie T is deduced based on the provided argument, is subject to
reference collapsing rules.
In short:
* if the provided argument is an lvalue of type type, T&&
will expand to type& && that collapses to type&
* if the provided argument is an rvalue of type type, T&&
will expand to type && that collapses to type&&
Note that both are references; if you need to trigger the rvalue
overload of another function, you need to do std::forward<T>(a).
A: The default rule for type deduction is that reference types can never be the result of deduction. Given this code,
template <class T>
void bar(T par);
bar(0);
int a;
bar(a);
int &b = a;
bar(b);
all 3 calls will call bar<int>. That is, T is deduced to int and par is of type int.
Forwarding references work by simple addition of one rule: when the argument used for type deduction of a forwarding reference (i.e. of a parameter T&& for a deduced T) is an lvalue of type X, the type X & is used instead of X for deduction.
Note that this means that given a type X, only X or X & can ever be the result of type deduction; X && never can.
Let's analyse your code (I will rename the function parameter, to make it clear what I'm referring to):
template <class T>
void foo(T &&par);
foo(0);
int a;
foo(a);
In the first case foo(0), the argument is an rvalue of type int. The type int is therefore used for type deduction, meaning that T is deduced to int (the function called is foo<int>) and the type of par is int &&.
In the second case foo(a), the argument is an lvalue of type int. Forwarding reference rule kicks in and the type int & is used for deduction. T is therefore deduced to int & (the function called is foo<int&>), and the type of par is "int & &&", which collapses to int &.
A:
foo( 0 ); // is T here int or int&& ?
An int.
int a = 0;
foo( a ); // is T here int or int& ?
An int&.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 119 |
Q: Popen and python Working on some code and I'm given the error when running it from the command prompt...
NameError: name 'Popen' is not defined
but I've imported both import os and import sys.
Here's part of the code
exepath = os.path.join(EXE File location is here)
exepath = '"' + os.path.normpath(exepath) + '"'
cmd = [exepath, '-el', str(el), '-n', str(z)]
print 'The python program is running this command:'
print cmd
process = Popen(cmd, stderr=STDOUT, stdout=PIPE)
outputstring = process.communicate()[0]
Am I missing something elementary? I wouldn't doubt it. Thanks!
A: Popen is defined in the subprocess module
import subprocess
...
subprocess.Popen(...)
Or:
from subprocess import Popen
Popen(...)
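Applied to the snippet in the question, that might look roughly like this (the exe path is just a placeholder here, and el and z stand in for whatever values your script already has):

import os
from subprocess import Popen, PIPE, STDOUT

exepath = os.path.normpath(r'C:\placeholder\path\to\program.exe')  # placeholder path
el = 1  # placeholder values standing in for your own variables
z = 2
cmd = [exepath, '-el', str(el), '-n', str(z)]
print 'The python program is running this command:'
print cmd

# Note: when cmd is a list, Popen handles quoting for you, so there is no
# need to wrap exepath in extra double quotes.
process = Popen(cmd, stderr=STDOUT, stdout=PIPE)
outputstring = process.communicate()[0]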
A: you should do:
import subprocess
subprocess.Popen(cmd, stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
# etc.
A: When you import a module, the module's members don't become part of the global namespace: you still have to prefix them with modulename.. So, you have to say
import os
process = os.popen(command, mode, bufsize)
Alternatively, you can use the from module import names syntax to import things into the global namespace:
from os import popen # Or, from os import * to import everything
process = popen(command, mode, bufsize)
A: If your import looks like this:
import os
Then you need to reference the things included in os like this:
os.popen()
If you dont want to do that, you can change your import to look like this:
from os import *
Which is not recommended because it can lead to namespace ambiguities (things in your code conflicting with things imported elsewhere.) You could also just do:
from os import popen
Which is more explicit and easier to read than from os import *
A: This looks like Popen from the subprocess module (python >= 2.4)
from subprocess import Popen
A: You should be using os.popen() if you simply import os.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,851 |
{"url":"https:\/\/www.rdocumentation.org\/packages\/afex\/versions\/0.22-1\/topics\/afex_plot","text":"# afex_plot\n\n0th\n\nPercentile\n\n##### m-way Plot with Error Bars and Raw Data\n\nPlots results from factorial experiments. Estimated marginal means and error bars are plotted in the foreground, raw data is plotted in the background. Error bars can be based on different standard errors (e.g., model-based, within-subjects, between-subjects). Functions described here return a ggplot2 plot object, thus allowing further customization of the plot.\n\nafex_plot is the user friendly function that does data preparation and plotting. It also allows to only return the prepared data (return = \"data\").\n\ninteraction_plot does the plotting when a trace factor is present. oneway_plot does the plotting when a trace factor is absent.\n\n##### Usage\nafex_plot(object, ...)# S3 method for afex_aov\nafex_plot(object, x, trace, panel, mapping,\nerror = \"model\", error_ci = TRUE, error_level = 0.95,\nerror_arg = list(width = 0), data_plot = TRUE, data_geom,\ndata_alpha = 0.5, data_arg = list(color = \"darkgrey\"),\npoint_arg = list(), line_arg = list(), emmeans_arg = list(),\ndodge = 0.5, return = \"plot\", factor_levels = list(), legend_title,\n...)# S3 method for mixed\nafex_plot(object, x, trace, panel, mapping, random,\nerror = \"model\", error_ci = TRUE, error_level = 0.95,\nerror_arg = list(width = 0), data_plot = TRUE, data_geom,\ndata_alpha = 0.5, data_arg = list(color = \"darkgrey\"),\npoint_arg = list(), line_arg = list(), emmeans_arg = list(),\ndodge = 0.5, return = \"plot\", factor_levels = list(), legend_title,\n...)# S3 method for merMod\nafex_plot(object, x, trace, panel, mapping, random,\nerror = \"model\", error_ci = TRUE, error_level = 0.95,\nerror_arg = list(width = 0), data_plot = TRUE, data_geom,\ndata_alpha = 0.5, data_arg = list(color = \"darkgrey\"),\npoint_arg = list(), line_arg = list(), emmeans_arg = list(),\ndodge = 0.5, return = \"plot\", factor_levels = list(), legend_title,\n...)interaction_plot(means, data, mapping = c(\"shape\", \"lineytpe\"),\nerror_plot = TRUE, error_arg = list(width = 0), data_plot = TRUE,\ndata_geom = ggplot2::geom_point, data_alpha = 0.5,\ndata_arg = list(color = \"darkgrey\"), point_arg = list(),\nline_arg = list(), dodge = 0.5, legend_title, col_x = \"x\",\ncol_y = \"y\", col_trace = \"trace\", col_panel = \"panel\",\ncol_lower = \"lower\", col_upper = \"upper\")oneway_plot(means, data, mapping = \"\", error_plot = TRUE,\nerror_arg = list(width = 0), data_plot = TRUE,\ndata_geom = ggbeeswarm::geom_beeswarm, data_alpha = 0.5,\ndata_arg = list(color = \"darkgrey\"), point_arg = list(),\nlegend_title, col_x = \"x\", col_y = \"y\", col_panel = \"panel\",\ncol_lower = \"lower\", col_upper = \"upper\")\n##### Arguments\nobject\n\nafex_aov, mixed, or merMod object.\n\n...\n\ncurrently ignored.\n\nx\n\nA character vector or one-sided formula specifying the factor names of the predictors displayed on the x-axis. mapping specifies further mappings for these factors if trace is missing.\n\ntrace\n\nAn optional character vector or one-sided formula specifying the factor names of the predictors connected by the same line. 
mapping specifies further mappings for these factors.\n\npanel\n\nAn optional character vector or one-sided formula specifying the factor names of the predictors shown in different panels.\n\nmapping\n\nA character vector specifying which aesthetic mappings should be applied to either the trace factors (if trace is specified) or the x factors. Useful options are any combination of \"shape\", \"color\", \"linetype\", or also \"fill\" (see examples). The default (i.e., missing) uses c(\"shape\", \"linetype\") if trace is specified and \"\" otherwise (i.e., no additional aesthetic).\n\nerror\n\nA scalar character vector specifying on which standard error the error bars should be based. Default is \"model\", which plots model-based standard errors. Further options are: \"none\" (or NULL), \"mean\", \"within\" (or \"CMO\"), and \"between\". See details.\n\nerror_ci\n\nLogical. Should error bars plot confidence intervals (=TRUE, the default) or standard errors (=FALSE)?\n\nerror_level\n\nNumeric value between 0 and 1 determing the width of the confidence interval. Default is .95 corresponding to a 95% confidence interval.\n\nerror_arg\n\nA list of further arguments passed to geom_errorbar, which draws the errorsbars. Default is list(width = 0) which suppresses the vertical bars at the end of the error bar.\n\ndata_plot\n\nlogical. Should raw data be plotted in the background? Default is TRUE.\n\ndata_geom\n\nGeom function used for plotting data in background. The default (missing) uses geom_point if trace is specified, otherwise geom_beeswarm. See examples fo further options.\n\ndata_alpha\n\nnumeric alpha value between 0 and 1 passed to data_geom. Default is 0.5 which correspond to semitransparent data points in the background such that overlapping data points are plotted darker.\n\ndata_arg\n\nA list of further arguments passed to data_geom. Default is list(color = \"darkgrey\"), which plots points in the background in grey.\n\npoint_arg, line_arg\n\nA list of further arguments passed to geom_point or geom_line which draw the points and lines in the foreground. Default is list(). line_arg is only used if trace is specified.\n\nemmeans_arg\n\nA list of further arguments passed to emmeans. Of particular importance for ANOVAs is model, see afex_aov-methods.\n\ndodge\n\nNumerical amount of dodging of factor-levels on x-axis. Default is 0.5.\n\nreturn\n\nA scalar character specifying what should be returned. The default \"plot\" returns the ggplot2 plot. The other option \"data\" returns a list with two data.frames containing the data used for plotting: means contains the means and standard errors for the foreground, data contains the raw data in the background.\n\nfactor_levels\n\nA list of new factor levels that should be used in the plot. The name of each list entry needs to correspond to one of the factors in the plot.\n\nlegend_title\n\nA scalar character vector with a new title for the legend.\n\nrandom\n\nA character vector specifying over which variables the raw data should be aggregated in case of mixed objects. The default (missing) uses all random effects grouping factors which can lead to many data points. error = \"within\" or error = \"between\" require that random is of length 1. See examples.\n\nmeans, data\n\ndata.frames used for plotting of the plotting functions.\n\nerror_plot\n\nlogical. Should error bars be plotted? Only used in plotting functions. 
To suppress plotting of error bars use error = \"none\" in afex_plot.\n\ncol_y, col_x, col_trace, col_panel\n\nA scalar character string specifying the name of the corresponding column containing the information used for plotting. Each column needs to exist in both the means and the data data.frame.\n\ncol_lower, col_upper\n\nA scalar character string specifying the name of the columns containing lower and upper bounds for the error bars. These columns need to exist in means.\n\n##### Details\n\nafex_plot obtains the estimated marginal means via emmeans and aggregates the raw data to the same level. It then calculates the desired confidence interval or standard error (see below) and passes the prepared data to one of the two plotting functions: interaction_plot when trace is specified and oneway_plot otherwise.\n\n### Error Bars\n\nError bars provide a grahical representation of the variability of the estimated means and should be routinely added to results figures. However, there exist several possibilities which particular measure of variability to use. Because of this, any figure depicting error bars should be accompanied by a note detailing which measure the error bars shows. The present functions allow plotting of different types of confidence intervals (if error_ci = TRUE, the default) or standard errors (if error_ci = FALSE).\n\nA further complication is that readers routinely misinterpret confidence intervals. The most common error is to assume that non-overlapping error bars indicate a significant difference (e.g., Belia et al., 2005). This is rarely the case (see e.g., Cumming & Finch, 2005; Knol et al., 2011; Schenker & Gentleman, 2005). For example, in a fully between-subjects design in which the error bars depict 95% confidence intervals and groups are of approximately equal size and have equal variance, even error bars that overlap by as much as 50% still correspond to p < .05. Error bars that are just touching roughly correspond to p = .01.\n\nIn the case of designs involving repeated-measures factors the usual confidence intervals or standard errors (i.e., model-based confidence intervals or intervals based on the standard error of the mean) cannot be used to gauge significant differences as this requires knowledge about the correlation between measures. One popular alternative in the psychological literature are intervals based on within-subjects standard errors\/confidence intervals (e.g., Cousineau & O'Brien, 2014). These attempt to control for the correlation across individuals and thereby allow judging differences between repeated-measures condition. As a downside, when using within-subjects intervals no comparisons across between-subjects conditions or with respect to a fixed-value are possible anymore.\n\nIn the case of a mixed-design, no single type of error bar is possible that allows comparison across all conditions. Likewise, for mixed models involving multiple crossed random effects, no single set of error bars (or even data aggregation) adequately represent the true varibility in the data and adequately allows for \"inference by eye\". Therefore, special care is necessary in such cases. One possiblity is to avoid error bars altogether and plot only the raw data in the background (with error = \"none\"). The raw data in the background still provides a visual impression of the variability in the data and the precision of the mean estimate, but does not as easily suggest an incorrect inferences. 
Another possibility is to use the model-based standard error and note in the figure caption that it does not permit comparisons across repeated-measures factors.\n\nThe following \"rules of eye\" (Cumming and Finch, 2005) hold, when permitted by design (i.e., within-subjects bars for within-subjects comparisons; other variants for between-subjects comparisons), and groups are approximately equal in size and variance. Note that for more complex designs ususally analyzed with mixed models, such as designs involving complicated dependencies across data points, these rules of thumbs may be highly misleading.\n\n\u2022 p < .05 when the overlap of the 95% confidence intervals (CIs) is no more than about half the average margin of error, that is, when proportion overlap is about .50 or less.\n\n\u2022 p < .01 when the two CIs do not overlap, that is, when proportion overlap is about 0 or there is a positive gap.\n\n\u2022 p < .05 when the gap between standard error (SE) bars is at least about the size of the average SE, that is, when the proportion gap is about 1 or greater.\n\n\u2022 p < .01 when the proportion gap between SE bars is about 2 or more.\n\n### Implemented Standard Errors\n\nThe following lists the implemented approaches to calculate confidence intervals (CIs) and standard errors (SEs). CIs are based on the SEs using the t-distribution with degrees of freedom based on the cell or group size. For ANOVA models, afex_plot attempts to warn in case the chosen approach is misleading given the design (e.g., model-based error bars for purely within-subjects plots). For mixed models, no such warnings are produced, but users should be aware that all options beside \"model\" are not actually appropriate and have only heuristic value. But then again, \"model\" based error bars do not permit comparisons for factors varying within one of the random-effects grouping factors (i.e., factors for which random-slopes should be estimated).\n\"model\"\n\nUses model-based CIs and SEs. For ANOVAs, the variant based on the lm or mlm model (i.e., emmeans_arg = list(model = \"multivariate\")) seems generally preferrable.\n\n\"mean\"\n\nCalculates the standard error of the mean for each cell ignoring any repeated-measures factors.\n\n\"within\" or \"CMO\"\n\nCalculates within-subjects SEs using the Cosineau-Morey-O'Brien (Cousineau & O'Brien, 2014) method. This method is based on a double normalization of the data. SEs and CIs are then calculated independently for each cell (i.e., if the desired output contains between-subjects factors, SEs are calculated for each cell including the between-subjects factors).\n\n\"between\"\n\nFirst aggregates the data per participant and then calculates the SEs for each between-subjects condition. Results in one SE and t-quantile for all conditions in purely within-subjects designs.\n\n\"none\" or NULL\n\nSuppresses calculation of SEs and plots no error bars.\n\nFor mixed models, the within-subjects\/repeated-measures factors are relative to the chosen random effects grouping factor. They are automatically detected based on the random-slopes of the random-effects grouping factor in random. All other factors are treated as independent-samples or between-subjects factors.\n\n##### Value\n\nReturns a ggplot2 plot (i.e., object of class c(\"gg\", \"ggplot\")) unless return = \"data\".\n\n##### References\n\nBelia, S., Fidler, F., Williams, J., & Cumming, G. (2005). Researchers Misunderstand Confidence Intervals and Standard Error Bars. Psychological Methods, 10(4), 389-396. 
https:\/\/doi.org\/10.1037\/1082-989X.10.4.389\n\nCousineau, D., & O'Brien, F. (2014). Error bars in within-subject designs: a comment on Baguley (2012). Behavior Research Methods, 46(4), 1149-1151. https:\/\/doi.org\/10.3758\/s13428-013-0441-z\n\nCumming, G., & Finch, S. (2005). Inference by Eye: Confidence Intervals and How to Read Pictures of Data. American Psychologist, 60(2), 170-180. https:\/\/doi.org\/10.1037\/0003-066X.60.2.170\n\nKnol, M. J., Pestman, W. R., & Grobbee, D. E. (2011). The (mis)use of overlap of confidence intervals to assess effect modification. European Journal of Epidemiology, 26(4), 253-254. https:\/\/doi.org\/10.1007\/s10654-011-9563-8\n\nSchenker, N., & Gentleman, J. F. (2001). On Judging the Significance of Differences by Examining the Overlap Between Confidence Intervals. The American Statistician, 55(3), 182-186. https:\/\/doi.org\/10.1198\/000313001317097960\n\n##### Aliases\n\u2022 afex_plot\n\u2022 afex_plot.afex_aov\n\u2022 afex_plot.mixed\n\u2022 afex_plot.merMod\n\u2022 interaction_plot\n\u2022 oneway_plot\n##### Examples\n# NOT RUN {\n# note: use library(\"ggplot\") to avoid \"ggplot2::\" in the following\n\n##################################################################\n## 2-factor Within-Subject Design ##\n##################################################################\n\ndata(md_12.1)\naw <- aov_ez(\"id\", \"rt\", md_12.1, within = c(\"angle\", \"noise\"))\n\n##---------------------------------------------------------------\n## Basic Interaction Plots -\n##---------------------------------------------------------------\n\nafex_plot(aw, x = \"angle\", trace = \"noise\")\n# or: afex_plot(aw, x = ~angle, trace = ~noise)\n\nafex_plot(aw, x = \"noise\", trace = \"angle\")\n\n### For within-subject designs, using within-subject CIs is better:\nafex_plot(aw, x = \"angle\", trace = \"noise\", error = \"within\")\n(p1 <- afex_plot(aw, x = \"noise\", trace = \"angle\", error = \"within\"))\n\n## use different themes for nicer graphs:\np1 + ggplot2::theme_bw()\n# }\n# NOT RUN {\np1 + ggplot2::theme_light()\np1 + ggplot2::theme_minimal()\np1 + jtools::theme_apa()\np1 + ggpubr::theme_pubr()\n\n### set theme globally for R session:\nggplot2::theme_set(ggplot2::theme_bw())\n\n### There are several ways to deal with overlapping points in the background besides alpha\n# 1. using the default data geom and ggplot2::position_jitterdodge\nafex_plot(aw, x = \"noise\", trace = \"angle\", error = \"within\", dodge = 0.3,\ndata_arg = list(\nposition =\nggplot2::position_jitterdodge(\njitter.width = 0,\njitter.height = 5,\ndodge.width = 0.3 ## needs to be same as dodge\n),\ncolor = \"darkgrey\"))\n\n# 2. using ggbeeswarm::geom_beeswarm\nafex_plot(aw, x = \"noise\", trace = \"angle\", error = \"within\", dodge = 0.5,\ndata_geom = ggbeeswarm::geom_beeswarm,\ndata_arg = list(\ndodge.width = 0.5, ## needs to be same as dodge\ncex = 0.8,\ncolor = \"darkgrey\"))\n\n# 3. do not display points, but use a violinplot: ggplot2::geom_violin\nafex_plot(aw, x = \"noise\", trace = \"angle\", error = \"within\",\ndata_geom = ggplot2::geom_violin,\ndata_arg = list(width = 0.5))\n\n# 4. violinplots with color: ggplot2::geom_violin\nafex_plot(aw, x = \"noise\", trace = \"angle\", error = \"within\",\nmapping = c(\"linetype\", \"shape\", \"fill\"),\ndata_geom = ggplot2::geom_violin,\ndata_arg = list(width = 0.5))\n\n# 5. 
do not display points, but use a boxplot: ggplot2::geom_boxplot\nafex_plot(aw, x = \"noise\", trace = \"angle\", error = \"within\",\ndata_geom = ggplot2::geom_boxplot,\ndata_arg = list(width = 0.3))\n\n# 6. combine points with boxplot: ggpol::geom_boxjitter\nafex_plot(aw, x = \"noise\", trace = \"angle\", error = \"within\",\ndata_geom = ggpol::geom_boxjitter,\ndata_arg = list(width = 0.3))\n## hides error bars!\n\n# 7. nicer variant of ggpol::geom_boxjitter\nafex_plot(aw, x = \"noise\", trace = \"angle\", error = \"within\",\nmapping = c(\"shape\", \"fill\"),\ndata_geom = ggpol::geom_boxjitter,\ndata_arg = list(\nwidth = 0.3,\njitter.width = 0,\njitter.height = 10,\noutlier.intersect = TRUE),\npoint_arg = list(size = 2.5),\nerror_arg = list(size = 1.5, width = 0))\n\n# 8. nicer variant of ggpol::geom_boxjitter without lines\nafex_plot(aw, x = \"noise\", trace = \"angle\", error = \"within\", dodge = 0.7,\nmapping = c(\"shape\", \"fill\"),\ndata_geom = ggpol::geom_boxjitter,\ndata_arg = list(\nwidth = 0.5,\njitter.width = 0,\njitter.height = 10,\noutlier.intersect = TRUE),\npoint_arg = list(size = 2.5),\nline_arg = list(linetype = 0),\nerror_arg = list(size = 1.5, width = 0))\n# }\n# NOT RUN {\n\n##---------------------------------------------------------------\n## Basic One-Way Plots -\n##---------------------------------------------------------------\n\nafex_plot(aw, x = \"angle\", error = \"within\") ## default\n\n# }\n# NOT RUN {\n## with color we need larger points\nafex_plot(aw, x = \"angle\", mapping = \"color\", error = \"within\",\npoint_arg = list(size = 2.5),\nerror_arg = list(size = 1.5, width = 0.05))\n\nlibrary(\"ggpol\") ## currently required for combination of boxplot and points\nafex_plot(aw, x = \"angle\", error = \"within\", data_geom = ggpol::geom_boxjitter)\n\n## nicer\nafex_plot(aw, x = \"angle\", error = \"within\", data_geom = ggpol::geom_boxjitter,\nmapping = \"fill\", data_alpha = 0.7,\ndata_arg = list(\nwidth = 0.6,\njitter.width = 0.07,\njitter.height = 10,\noutlier.intersect = TRUE\n),\npoint_arg = list(size = 2.5),\nerror_arg = list(size = 1.5, width = 0.05))\n\n##---------------------------------------------------------------\n## Other Basic Options -\n##---------------------------------------------------------------\n\n## relabel factor levels via new_levels\nafex_plot(aw, x = \"noise\", trace = \"angle\",\nnew_levels = list(angle = c(\"0\", \"4\", \"8\"),\nnoise = c(\"Absent\", \"Present\")))\n\n## Change title of legend\nafex_plot(aw, x = \"noise\", trace = \"angle\",\nlegend_title = \"Noise Condition\")\n\n## for plots with few factor levels, smaller dodge might be better:\nafex_plot(aw, x = \"angle\", trace = \"noise\", dodge = 0.25)\n\n#################################################################\n## 4-factor Mixed Design ##\n#################################################################\n\ndata(obk.long, package = \"afex\")\na1 <- aov_car(value ~ treatment * gender + Error(id\/(phase*hour)),\ndata = obk.long, observed = \"gender\")\n\n## too difficult to see anything\nafex_plot(a1, ~phase*hour, ~treatment) +\nggplot2::theme_light()\n\n## better\nafex_plot(a1, ~hour, ~treatment, ~phase) +\nggplot2::theme_light()\n\n## even better and different model-based standard errors\nafex_plot(a1, ~hour, ~treatment, ~phase,\ndodge = 0.65,\ndata_arg = list(\nposition =\nggplot2::position_jitterdodge(\njitter.width = 0,\njitter.height = 0.2,\ndodge.width = 0.65 ## needs to be same as dodge\n),\ncolor = \"darkgrey\"),\nemmeans_arg = list(model = 
"multivariate")) +
  ggplot2::theme_classic()

# with color instead of linetype to separate trace factor
afex_plot(a1, ~hour, ~treatment, ~phase,
          mapping = c("shape", "color"),
          dodge = 0.65,
          data_arg = list(
            position =
              ggplot2::position_jitterdodge(
                jitter.width = 0,
                jitter.height = 0.2,
                dodge.width = 0.65 ## needs to be same as dodge
              )),
          emmeans_arg = list(model = "multivariate")) +
  ggplot2::theme_light()

# only color to separate trace factor
afex_plot(a1, ~hour, ~treatment, ~phase,
          mapping = "color",
          dodge = 0.65,
          data_arg = list(
            position =
              ggplot2::position_jitterdodge(
                jitter.width = 0,
                jitter.height = 0.2,
                dodge.width = 0.65 ## needs to be same as dodge
              )),
          emmeans_arg = list(model = "multivariate")) +
  ggplot2::theme_classic()

## plot involving all 4 factors:
afex_plot(a1, ~hour, ~treatment, ~gender+phase,
          dodge = 0.65,
          data_arg = list(
            position =
              ggplot2::position_jitterdodge(
                jitter.width = 0,
                jitter.height = 0.2,
                dodge.width = 0.65 ## needs to be same as dodge
              ),
            color = "darkgrey"),
          emmeans_arg = list(model = "multivariate")) +
  ggplot2::theme_bw()

##---------------------------------------------------------------
## Different Standard Errors Available
##---------------------------------------------------------------

## purely within-design
cbind(
  afex_plot(a1, ~phase, ~hour,
            error = "model", return = "data")$means[,c("phase", "hour", "y", "SE")],
  multivariate = afex_plot(a1, ~phase, ~hour,
                           emmeans_arg = list(model = "multivariate"),
                           error = "model", return = "data")$means$error,
  mean = afex_plot(a1, ~phase, ~hour,
                   error = "mean", return = "data")$means$error,
  within = afex_plot(a1, ~phase, ~hour,
                     error = "within", return = "data")$means$error,
  between = afex_plot(a1, ~phase, ~hour,
                      error = "between", return = "data")$means$error)

## mixed design
cbind(
  afex_plot(a1, ~phase, ~treatment,
            error = "model", return = "data")$means[,c("phase", "treatment", "y", "SE")],
  multivariate = afex_plot(a1, ~phase, ~treatment,
                           emmeans_arg = list(model = "multivariate"),
                           error = "model", return = "data")$means$error,
  mean = afex_plot(a1, ~phase, ~treatment,
                   error = "mean", return = "data")$means$error,
  within = afex_plot(a1, ~phase, ~treatment,
                     error = "within", return = "data")$means$error,
  between = afex_plot(a1, ~phase, ~treatment,
                      error = "between", return = "data")$means$error)
# }
# NOT RUN {
##################################################################
##                         Mixed Models                         ##
##################################################################

data("Machines", package = "MEMSS")
m1 <- mixed(score ~ Machine + (Machine|Worker), data=Machines)

pairs(emmeans::emmeans(m1, "Machine"))
# contrast    estimate       SE df t.ratio p.value
# A - B      -7.966667 2.420850  5  -3.291  0.0481
# A - C     -13.916667 1.540100  5  -9.036  0.0007
# B - C      -5.950000 2.446475  5  -2.432  0.1253

## Default (i.e., model-based) error bars suggest no difference between Machines.
## This contrasts with pairwise comparisons above.
afex_plot(m1, "Machine")

## Impression from within-subject error bars is more in line with pattern of differences.
afex_plot(m1, "Machine", error = "within")

# }
# NOT RUN {
fhch <- droplevels(fhch2010[ fhch2010$correct,]) # remove errors

### following model should take less than a minute to fit:
mrt <- mixed(log_rt ~ task*stimulus*frequency + (stimulus*frequency||id) +
               (task||item), fhch, method = "S", expand_re = TRUE)

## way too many points in background:
afex_plot(mrt, "stimulus", "frequency", "task")

## better to restrict plot of data to one random-effects grouping variable
afex_plot(mrt, "stimulus", "frequency", "task", random = "id")

## when plotting data from a single random effect, different error bars are possible:
afex_plot(mrt, "stimulus", "frequency", "task", random = "id", error = "within")
afex_plot(mrt, "stimulus", "frequency", "task", random = "id", error = "mean")

## compare visual impression with:
pairs(emmeans::emmeans(mrt, c("stimulus", "frequency"), by = "task"))

## same logic also possible for other random-effects grouping factor
afex_plot(mrt, "stimulus", "frequency", "task", random = "item")

## within-item error bars are misleading here. task is sole within-items factor.
afex_plot(mrt, "stimulus", "frequency", "task", random = "item", error = "within")

## CIs based on standard error of mean look small, but not unreasonable given results.
afex_plot(mrt, "stimulus", "frequency", "task", random = "item", error = "mean")

### compare distribution of individual data for different random effects:
## requires package cowplot
p_id <- afex_plot(mrt, "stimulus", "frequency", "task", random = "id",
                  error = "within", dodge = 0.7,
                  data_geom = ggplot2::geom_violin,
                  mapping = c("shape", "fill"),
                  data_arg = list(width = 0.7)) +
  ggplot2::scale_shape_manual(values = c(4, 17)) +
  ggplot2::labs(title = "ID")

p_item <- afex_plot(mrt, "stimulus", "frequency", "task", random = "item",
                    error = "within", dodge = 0.7,
                    data_geom = ggplot2::geom_violin,
                    mapping = c("shape", "fill"),
                    data_arg = list(width = 0.7)) +
  ggplot2::scale_shape_manual(values = c(4, 17)) +
  ggplot2::labs(title = "Item")

### see: https://cran.r-project.org/package=cowplot/vignettes/shared_legends.html
p_comb <- cowplot::plot_grid(
  p_id + ggplot2::theme_light() + ggplot2::theme(legend.position="none"),
  p_item + ggplot2::theme_light() + ggplot2::theme(legend.position="none")
)
legend <- cowplot::get_legend(p_id + ggplot2::theme(legend.position="bottom"))
cowplot::plot_grid(p_comb, legend, ncol = 1, rel_heights = c(1, 0.1))

##----------------------------------------------------------------
## Support for lme4::lmer
##----------------------------------------------------------------

Oats <- nlme::Oats
## afex_plot does currently not support implicit nesting: (1|Block/Variety)
## Instead, we need to create the factor explicitly
Oats$VarBlock <- Oats$Variety:Oats$Block | null | null |
Byline: GU
Confidence – The True Definition of Style
What makes someone stylish? Is it having an eye for finding the best pieces? An innate flair for putting together an outfit? Having a unique, personal aesthetic? Or being able to break the rules and still get it 'right'?
In truth, these are all factors. But whilst some may think these are 'gifts' you're either born with or you're not, style is something that can improve and develop. Because the above are all underpinned by one thing: confidence.
Confidence is that special 'je ne sais quoi' that can take a look from drab to fab. Forget the rules about what's right or wrong: if you believe in the outfit then others will. You will hold yourself differently and ooze poise.
Conversely, no matter how great an outfit is, if you're not carrying it off with conviction, then that will show. It should be you wearing the outfit, not the other way round.
Does that mean you could literally wear a sack of potatoes and still be stylish? As absurd as it sounds, the answer is possibly, yes. Just think of some trends that you thought you'd never deign to wear…. Someone else wore it first and somehow convinced everyone else it's stylish. And all it took was confidence.
There's a reason there's a difference between being fashionable and stylish. The former entails following trends determined by others, whilst the latter means having the courage of your conviction to set your own rules.
A few years ago, I went to a public event hosted by Vogue. The room was packed with young fashionably dressed women, but none more distinctive than the next. However, there was one person who stood out: a woman dressed in clashing bright floral tones, wearing a backpack with 90s cartoon badges on it. I noticed the clones sneering nastily at her, but I thought she was one of the most stylish in the room. Whilst her personal aesthetic wasn't necessarily for me, I admired her sureness to wear what she wanted and, unlike the others, her personality shone through.
So, confidence is that key ingredient to style. But there's only one problem: what do you do if you currently lack it? Whilst many women grow in self-confidence as they get older, their image confidence may decline. Body changes, ageist advertising and fashion campaigns, a different life style…. If this sounds familiar, you shouldn't be ashamed as there are so many contributors. Perhaps you're a mother, and you feel you've lost your sense of 'self', with other priorities taking over? Maybe you've been ignored or dismissed in clothing shops, making you feel invisible? Whatever the reason, it's extremely common, and often results in getting lost in a style rut.
There are two phrases I hear more often than others:
1. "I stick to black and navy because it makes me feel safe."
2. "I love that! But I could never wear it."
Unfortunately, these phrases are rooted in a lack of confidence, and create a vicious cycle of feeling inadequate. Wearing something because it makes you feel safe indicates an unhealthy, anxiety driven relationship with your image. Similarly, by not allowing yourself to wear what you really want, you're reinforcing the self-belief that you don't deserve to look and feel better, crushing your self-esteem further.
So how do you address this?
The first step is to readjust your mindset. Try to stop telling yourself negative phrases or comparing yourself to someone else. Even be careful with saying "that's not me", because you may have unfairly put yourself in a restrictive box. If something left field catches your eye, give it a go. Because you could find yourself pleasantly surprised and, if not, you can just take it off again – no harm done!
Next, invest time in yourself and your wardrobe. Clothes are often dismissed as something frivolous and superficial, but there are proven psychological benefits to wearing pieces that make you feel good. Style is emotive, so what you wear really can lift or sink your mood. You may feel somewhat guilty for dedicating time and energy to your wardrobe, but you shouldn't.
If you don't know where to start, just think about someone whose style you admire. Don't compare yourself to them, just think about what aspects you like – for example, that they wear a certain colour, or that they like to try different silhouettes. Try to refrain from emulating their look exactly; after all style is individual and what works for them may not work for you. But they can provide inspiration. And the next time you go shopping or put together a look, keep that in mind.
Once you've done that, start to build in inspiration from elsewhere. Take a look at your favourite brands' websites and see how they've styled pieces in photographs. Follow some style (note: not trend) related hashtags on Instagram such as #agelessstyle or #styleover40 to discover different people. Then head over to pinterest to explore new ideas and create a board of looks you can refer to both for tips and reassurance.
Some people suggest creating a 'signature look', but I generally discourage this because you can end up right back where you started, putting yourself in a box and sucking all the fun out of it. It's better to be flexible with your style and have an open mind. You'll probably end up intuitively developing a framework, but the lack of defined restrictions will free you.
Finally – and this can be a difficult one – consider leaving your normal shopping partner behind and venturing out on your own. Whilst you may trust their taste, they too will have subconsciously developed an idea of what's 'you', and therefore may unintentionally discourage you from stepping outside your comfort zone. For example, lots of grown up children still struggle to see their mum in another light – certainly a sexier one. Additionally, remember that everyone's taste is different, so your shopping partner may reject something that would actually make you feel amazing and let your style confidence soar.
So never resign yourself as someone who isn't stylish, because that can change. It may take time but gradually your confidence will grow, and you'll start having fun with clothes and experimenting with style. And eventually YOU will become that person others admire for having that certain 'je ne sais quoi'.
Jacynth Bassett is the founder of the-Bias-Cut.com - the first pro-age online independent fashion boutique – and the movement Ageism Is Never In Style. She was inspired to fight ageism after growing frustrated at seeing women being treated as invisible by the Fashion Industry, largely due to their age.
Named as an "Ageism-Fighting Trailblazer" by Global Health Ageing, Jacynth is swiftly becoming recognised as one of the leading pioneers of style at every age.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 245 |
Labord's chameleon (Furcifer labordi) is a lizard in the chameleon family (Chamaeleonidae).
Name and classification
The scientific name of the species was first proposed by Alfred Grandidier in 1872. Originally the scientific name Chamaeleo labordi was used.
The common name and the specific epithet labordi honor Jean Laborde.
Way of life
The males fight fiercely with their rivals and mate with as many females as possible. The eggs are deposited in underground burrows. Labord's chameleon is one of the few animals with a life cycle of less than a year (like a few other species of the genus Furcifer). In their natural range they hatch at the beginning of November, at the start of the rainy season. The animals grow quickly and are sexually mature after about two months, in January.
Egg laying takes place in late February or early March, after which the adult animals soon die. They live at most four to five months after hatching, with males on average surviving slightly longer than females. This gives the species a uniquely short adult life, the shortest of all tetrapods.
Distribution and habitat
The chameleon is endemic to the East African island of Madagascar, and only to its southwestern part. The habitat consists of dry tropical and subtropical shrubland. Labord's chameleon has been recorded at elevations of roughly twenty to one hundred meters above sea level.
Conservation status
The international conservation organization IUCN has assigned the species the conservation status 'Vulnerable' (VU).
External link
The reptile database
References
Chameleons
Endemic animal of Madagascar
IUCN vulnerable species | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,284 |
Gao Jian (; born 29 January 2002) is a Chinese footballer currently playing as a forward for Beijing Guoan.
Career statistics
Club
References
2002 births
Living people
Chinese footballers
Association football forwards
Beijing Guoan F.C. players | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,263 |
https://www.hardikp.com/2018/05/24/stocknet-paper/

# Stock Movement Prediction from Tweets and Historical Prices (Paper Summary)

This paper suggests a way of using both historical prices and text data together for financial time series prediction. They call it Stocknet. There seem to be 2 major contributions here: (a) encoding both market data and text data together, and (b) a VAE (Variational AutoEncoder) inspired generative model.

## TLDR

An RNN-based variational autoencoder, combined with attention, is used to predict whether the stock price will go up or down.

## Dataset

- 88 stocks
- From 2014-01-01 to 2016-01-01. Training data range: 2014-01-01 to 2015-08-01 (20,339 samples). Validation: 2015-08-01 to 2015-10-01 (2555 datapoints). Testing: 2015-10-01 to 2016-01-01 (3720 datapoints).
- price_change <= -0.5% is assigned the 0 label. price_change > 0.55% is assigned the 1 label. The ones in between these 2 thresholds are ignored.

## Model

There are 3 main components here:

1. Market Information Encoder (MIE) - Encodes tweets and prices to X.
2. Variational Movement Decoder (VMD) - Infers Z with X, y and decodes stock movements y from X, Z.
3. Attentive Temporal Auxiliary (ATA) - Integrates temporal loss through an attention mechanism for model training.

### Market Information Encoder (MIE)

This component is relatively straightforward. Tweets for the given day are combined into the vector $c_t$. Historical prices are normalized and stored in the vector $p_t$. The output of this component (MIE) is the vector $x_t = [c_t, p_t]$.

### Variational Movement Decoder (VMD)

VMD uses the market information $X$ received from the previous component and infers a latent factor $Z$. This latent vector $Z$ is then decoded into vector $y_t$ using an RNN decoder with GRU cells.

### Attentive Temporal Auxiliary (ATA)

Attention is applied to the outputs from the previous component. Both the VAE and attention components are combined to construct the final loss function $F$.

Here, $v^{(n)}$ is the attention weight vector and $f^{(n)}$ is the loss function from the variational autoencoder component.

$\log p_{\theta}$ is the log-likelihood term, $D_{KL}[q_{\phi} \Vert p_{\theta}]$ is the KL divergence loss, and $\lambda$ is the KL loss weight. $\lambda$ is increased over time during training; this is known as the KL annealing trick (Bowman et al., 2016).

## Training and Hyperparameters

- A 5-day lag window is used to construct the dataset.
- Batch size is 32. Each batch contains randomly picked data points.
- Initial learning rate of Adam - 0.001
- Dropout rate - 0.3 for the hidden layer

## Metrics and Results

MCC (Matthews Correlation Coefficient) is used as a metric. MCC is defined in terms of tp (true positives), tn (true negatives), fp (false positives) and fn (false negatives):

$$\mathrm{MCC} = \frac{tp \cdot tn - fp \cdot fn}{\sqrt{(tp+fp)(tp+fn)(tn+fp)(tn+fn)}}$$

Baselines:

TECHNICALANALYST, FUNDAMENTALANALYST, INDEPENDENTANALYST, DISCRIMINATIVEANALYST and HEDGEFUNDANALYST are simply different variants of their StockNet model, with HEDGEFUNDANALYST being the model described above. | null | null |
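To make the evaluation metric above concrete, here is a small, self-contained Python sketch; it is not taken from the paper or its code release, and the function name and the toy confusion-matrix counts are assumptions used purely for illustration.

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient computed from confusion-matrix counts."""
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # common convention: report 0 when any marginal count is zero
    return numerator / denominator if denominator else 0.0

# toy example with made-up counts, roughly the size of the paper's test split
print(round(mcc(tp=1100, tn=1050, fp=810, fn=760), 4))
```

MCC ranges from -1 to 1, with 0 corresponding to chance-level prediction, which is why it is often preferred over plain accuracy for a roughly balanced up/down classification task.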
The diawl bach fishing fly has become one of the most popular trout flies in recent years. The name translates from Welsh as "little devil".
The diawl bach trout fly is a fantastic midge imitation when cruising trout are feeding on midge pupa as they ascend to the surface to hatch. This is definitely a must-have fly pattern for any stillwater angler; however, it is also a wonderfully successful fishing fly on rivers, where unfortunately many anglers overlook it!
Midges are active in some form all year round and are present in nearly all rivers and streams in the UK, so both trout and grayling will always be aware of them. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,148 |
In a thriller that literally came down to the final second, Baskonia Vitoria Gasteiz edged Unics Kazan 91-92. The game saw three lead changes in the final 7 seconds; Shane Larkin hit a rainbow three-pointer over Art Parakhouski to give Baskonia an 88-90 lead, Keith Langford answered for Unics from beyond the arc, but Shengelia sank the game-winning free throws with 1.8 seconds left. Larkin led the winners with 22 points and 6 assists. Shengelia added 19 on 10-of-11 free throw shooting; he had hit 15 of 27 (55.6%), entering this game. Rafa Luz added 10 points for Baskonia, which improved to 8-4. Langford led Unics with 28 points, Quino Colom added 23 points and 6 assists and Orlando Johnson scored 11 for the hosts. Baskonia used a balanced offense to get a 40-43 halftime lead - as many as seven players scored 4 points on more. Baskonia made 9 of 18 three-point shots (50%), committed just 11 turnovers and pulled down 11 offensive rebounds, but most importantly kept its cool in crunch time to rally from a 6-point deficit to steal its third road win of the season.
Back-to-back layups by Kim Tillie and Johannes Voigtmann gave Baskonia an early 2-4 lead. Unics found a go-to guy in Colom, who buried consecutive jumpers and struck from downtown to tie it at 11-11. A layup by Langford and a triple by Evgeny Voronov soon gave the hosts a 16-11 lead. Ilimane Diop and Shengelia stepped up for Baskonia to keep the visitors within 20-18. Larkin fed Diop for an alley-oop and added a reverse layup for a 20-22 Baskonia lead after 10 minutes. Unics struggled to score early in the second quarter and a three-pointer by Chase Budinger made it a double-digit game, 20-30, concluding a 0-14 run. Langford hit a jumper, but a layup by Larkin and another triple by Budinger boosted Baskonia's lead to 22-35. Voronov and Paul Stoll rescued Unics with back-to-back triples and an alley-oop dunk by Latavious Williams helped Unics get within 2. A free throw by Adam Hanga fixed the halftime score at 40-43. Johnson buried back-to-back three-pointers after the break and a turnaround jumper by Parakhouski put Unics back in charge, 48-47. Hanga dunked off a steal and Larkin stepped up with free throws, a jumper and a triple to make it 55-56. Shengelia and Luz joined the three-point shootout, but Williams and an outstanding Langford helped Unics tie the game at 67-67 after 30 minutes. Langford struck from downtown and Colom added back-to-back triples to give Unics a 78-75 edge with less than six minutes left. A red-hot Colom followed a driving layup with a big triple off the dribble for an 83-77 advantage. Larkin took over with a three-point play and Shengelia hit 6 free throws to knot the game at 86-86 tie. A rainbow triple by Larkin put Baskonia ahead, 88-90, with 6.8 seconds left. Unics called timeout and Langford immediately answered from downtown, but Shengelia drew a foul with 1.8 seconds and hit both attempts for a 91-92 score. Langford missed an off-balance shot at the buzzer.
"I think we played with character today. We fought hard and this is the Unics team we want to see, a team that can beat anybody. We bounced back and had a good reaction after our loss against Olympiacos, where we played without any character or heart. We played with much more character and stayed together. This is what I wanted to see tonight, us playing as a team. All EuroLeague games are really strong, but we believed in ourselves. I don't want to talk about the referees; whatever happened, happened, but we have to be stronger and keep fighting. It is not time to cry; we showed character, played a good game and did well offensively and defensively. We made a couple of mistakes in the end and that cost us a game that went down to the last shot. All the guys who went on the court fought and played with heart."
"We played very well in the first minutes of the game because we didn't have fouls, took advantage of it and pressured really well full court, stopping their easy transition through Quino Colom, who had less time to organize his team's offense. After two fouls by our perimeter players, we had to start defending closer to our basket, so they played better, finding better options on offense, especially for Langford and Quino Colom. In the last part of the game, Langford found a good shot, so we have to improve this kind of situation, because he found a really good option. We are talking about a great scorer and have to improve in last-second situations. For us, it is very important to keep the intensity throughout the game. When you win a game by 1 point, normally you deserve to win. If you don't deserve to win, you don't win. We have 10 players who put great pressure on defense in the first part of the game, in which we were winning by 13 points. Unics made a great effort to get back to the game. I felt bad when Langford made his last shot because we made mistakes. Sometimes you have to admit your mistakes and read the situations depending on the game."
"You can never expect game-winning shots like that, but we just stayed together, really tried to lock in on the defensive end, and Toko Shengelia made some big free throws at the end. Our last couple of games we've been locking in on the defensive end, but this team has great offensive players. Langford obviously scores the ball amazing, and Colom shot the ball amazing tonight. But we just stuck together as a team. Sometimes people are going to make shots against us. But we stuck together, and got the win. That's the big thing. I felt alright. My team and my doctors helped me get a lot of medicine and get healthy enough to play the game. I'm happy I was able to go out there and help my team win." | {
"redpajama_set_name": "RedPajamaC4"
} | 7,626 |
The following products are included in our Corporate Travel Plan.
A revolutionary type of Group Personal Accident and Travel Insurance policy, designed to offer a more flexible scope of cover by removing some of the traditional exclusions and age limits, and by introducing new benefits that will lead today's insurance market.
It provides simple administration and 24-hour worldwide protection. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,104 |
https://handwiki.org/wiki/Physics:Gluon

# Physics:Gluon

Short description: Elementary particle that mediates the strong force

Diagram 1: In Feynman diagrams, emitted gluons are represented as helices. This diagram depicts the annihilation of an electron and positron.

Composition: Elementary particle
Statistics: Bosonic
Interactions: Strong interaction
Symbol: g
Theorized: Murray Gell-Mann (1962)[1]
Discovered: e+e− → ϒ(9.46) → 3g: 1978 at DORIS (DESY) by PLUTO experiments (see diagram 2 and recollection[2]) and e+e− → qqg: 1979 at PETRA (DESY) by TASSO, MARK-J, JADE and PLUTO experiments (see diagram 1 and review[3])
Types: 8[4]
Mass: 0 (theoretical value)[5]; < 1.3 meV/c² (experimental limit)[6][5]
Electric charge: 0 e[5]
Color charge: octet (8 linearly independent types)
Spin: 1

A gluon (/ˈɡluːɒn/ GLOO-on) is an elementary particle that acts as the exchange particle (or gauge boson) for the strong force between quarks. It is analogous to the exchange of photons in the electromagnetic force between two charged particles.[7] Gluons bind quarks together, forming hadrons such as protons and neutrons.

Gluons are vector gauge bosons that mediate strong interactions of quarks in quantum chromodynamics (QCD). Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than quantum electrodynamics (QED).

## Properties

The gluon is a vector boson, which means that, like the photon, it has a spin of 1. While massive spin-1 particles have three polarization states, massless gauge bosons like the gluon have only two polarization states because gauge invariance requires the polarization to be transverse to the direction that the gluon is traveling. In quantum field theory, unbroken gauge invariance requires that gauge bosons have zero mass. Experiments limit the gluon's rest mass (if any) to less than a few meV/c². The gluon has negative intrinsic parity.

## Counting gluons

Unlike the single photon of QED or the three W and Z bosons of the weak interaction, there are eight independent types of gluon in QCD.

However, gluons are subject to the color charge phenomena (of which they have combinations of color and anticolor). Quarks carry three types of color charge; antiquarks carry three types of anticolor. Gluons may be thought of as carrying both color and anticolor. This gives nine possible combinations of color and anticolor in gluons. The following is a list of those combinations (and their schematic names):

- red-antired ($r\bar{r}$), red-antigreen ($r\bar{g}$), red-antiblue ($r\bar{b}$)
- green-antired ($g\bar{r}$), green-antigreen ($g\bar{g}$), green-antiblue ($g\bar{b}$)
- blue-antired ($b\bar{r}$), blue-antigreen ($b\bar{g}$), blue-antiblue ($b\bar{b}$)

Diagram 2: e+e− → ϒ(9.46) → 3g

These are not the actual color states of observed gluons, but rather effective states. To correctly understand how they are combined, it is necessary to consider the mathematics of color charge in more detail.

### Color singlet states

It is often said that the stable strongly interacting particles (such as the proton and the neutron, i.e. hadrons) observed in nature are "colorless", but more precisely they are in a "color singlet" state, which is mathematically analogous to a spin singlet state.[8] Such states allow interaction with other color singlets, but not with other color states; because long-range gluon interactions do not exist, this illustrates that gluons in the singlet state do not exist either.[8]

The color singlet state is:[8]

$(r\bar{r} + b\bar{b} + g\bar{g})/\sqrt{3}$

In other words, if one could measure the color of the state, there would be equal probabilities of it being red-antired, blue-antiblue, or green-antigreen.

### Eight colors

There are eight remaining independent color states, which correspond to the "eight types" or "eight colors" of gluons. Because states can be mixed together as discussed above, there are many ways of presenting these states, which are known as the "color octet". One commonly used list is:[8]

$(r\bar{b} + b\bar{r})/\sqrt{2}$, $-i(r\bar{b} - b\bar{r})/\sqrt{2}$,
$(r\bar{g} + g\bar{r})/\sqrt{2}$, $-i(r\bar{g} - g\bar{r})/\sqrt{2}$,
$(b\bar{g} + g\bar{b})/\sqrt{2}$, $-i(b\bar{g} - g\bar{b})/\sqrt{2}$,
$(r\bar{r} - b\bar{b})/\sqrt{2}$, $(r\bar{r} + b\bar{b} - 2g\bar{g})/\sqrt{6}$.

These are equivalent to the Gell-Mann matrices. The critical feature of these particular eight states is that they are linearly independent, and also independent of the singlet state, hence 3² − 1 or 2³. There is no way to add any combination of these states to produce any other, and it is also impossible to add them to make $r\bar{r}$, $g\bar{g}$, or $b\bar{b}$,[9] the forbidden singlet state. There are many other possible choices, but all are mathematically equivalent, at least equally complicated, and give the same physical results.

### Group theory details

Technically, QCD is a gauge theory with SU(3) gauge symmetry. Quarks are introduced as spinors in Nf flavors, each in the fundamental representation (triplet, denoted 3) of the color gauge group, SU(3). The gluons are vectors in the adjoint representation (octets, denoted 8) of color SU(3). For a general gauge group, the number of force-carriers (like photons or gluons) is always equal to the dimension of the adjoint representation. For the simple case of SU(N), the dimension of this representation is N² − 1.

In terms of group theory, the assertion that there are no color singlet gluons is simply the statement that quantum chromodynamics has an SU(3) rather than a U(3) symmetry. There is no known a priori reason for one group to be preferred over the other, but as discussed above, the experimental evidence supports SU(3).[8] If the group were U(3), the ninth (colorless singlet) gluon would behave like a "second photon" and not like the other eight gluons.[10]

## Confinement

Main page: Physics:Color confinement

Since gluons themselves carry color charge, they participate in strong interactions. These gluon-gluon interactions constrain color fields to string-like objects called "flux tubes", which exert constant force when stretched. Due to this force, quarks are confined within composite particles called hadrons. This effectively limits the range of the strong interaction to 1×10⁻¹⁵ meters, roughly the size of an atomic nucleus. Beyond a certain distance, the energy of the flux tube binding two quarks increases linearly. At a large enough distance, it becomes energetically more favorable to pull a quark-antiquark pair out of the vacuum rather than increase the length of the flux tube.

Gluons also share this property of being confined within hadrons. One consequence is that gluons are not directly involved in the nuclear forces between hadrons. The force mediators for these are other hadrons called mesons.

Although in the normal phase of QCD single gluons may not travel freely, it is predicted that there exist hadrons that are formed entirely of gluons, called glueballs. There are also conjectures about other exotic hadrons in which real gluons (as opposed to virtual ones found in ordinary hadrons) would be primary constituents. Beyond the normal phase of QCD (at extreme temperatures and pressures), quark–gluon plasma forms. In such a plasma there are no hadrons; quarks and gluons become free particles.

## Experimental observations

Quarks and gluons (colored) manifest themselves by fragmenting into more quarks and gluons, which in turn hadronize into normal (colorless) particles, correlated in jets. As revealed at the 1978 summer conferences,[2] the PLUTO detector at the electron-positron collider DORIS (DESY) produced the first evidence that the hadronic decays of the very narrow resonance ϒ(9.46) could be interpreted as three-jet event topologies produced by three gluons. Later, published analyses by the same experiment confirmed this interpretation and also the spin = 1 nature of the gluon[11][12] (see also the recollection[2] and PLUTO experiments).

In summer 1979, at higher energies at the electron-positron collider PETRA (DESY), three-jet topologies were again observed, now interpreted as qq gluon bremsstrahlung and now clearly visible, by the TASSO,[13] MARK-J[14] and PLUTO experiments[15] (later in 1980 also by JADE[16]). The spin = 1 property of the gluon was confirmed in 1980 by the TASSO[17] and PLUTO experiments[18] (see also the review[3]). In 1991 a subsequent experiment at the LEP storage ring at CERN again confirmed this result.[19]

Gluons play an important role in the elementary strong interactions between quarks and gluons, described by QCD and studied particularly at the electron-proton collider HERA at DESY. The number and momentum distribution of the gluons in the proton (gluon density) have been measured by two experiments, H1 and ZEUS,[20] in the years 1996–2007. The gluon contribution to the proton spin has been studied by the HERMES experiment at HERA.[21] The gluon density in the proton (when behaving hadronically) also has been measured.[22]

Color confinement is verified by the failure of free quark searches (searches for fractional charges). Quarks are normally produced in pairs (quark + antiquark) to compensate the quantum color and flavor numbers; however, at Fermilab single production of top quarks has been shown.[a][23] No glueball has been demonstrated.

Deconfinement was claimed in 2000 at the CERN SPS[24] in heavy-ion collisions, and it implies a new state of matter: quark–gluon plasma, less interactive than in the nucleus, almost as in a liquid. It was found at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven in the years 2004–2010 by four contemporaneous experiments.[25] A quark–gluon plasma state has been confirmed at the CERN Large Hadron Collider (LHC) by the three experiments ALICE, ATLAS and CMS in 2010.[26]

Jefferson Lab's Continuous Electron Beam Accelerator Facility, in Newport News, Virginia,[b] is one of 10 Department of Energy facilities doing research on gluons. The Virginia lab was competing with another facility, Brookhaven National Laboratory on Long Island, New York, for funds to build a new electron-ion collider.[27] In December 2019 the US Department of Energy selected Brookhaven National Laboratory to host the electron-ion collider.[28]

## Footnotes

a. Technically, single top quark production at Fermilab still involves a pair production, but the quark and antiquark are of different flavors.
b. Jefferson Lab is a nickname for the Thomas Jefferson National Accelerator Facility in Newport News, Virginia.

## References

1. M. Gell-Mann (1962). "Symmetries of Baryons and Mesons". Physical Review 125 (3): 1067–1084. doi:10.1103/PhysRev.125.1067. Bibcode: 1962PhRv..125.1067G. This is without reference to color, however. For the modern usage see Fritzsch, H.; Gell-Mann, M.; Leutwyler, H. (Nov 1973). "Advantages of the color octet gluon picture". Physics Letters B 47 (4): 365–368. doi:10.1016/0370-2693(73)90625-4. Bibcode: 1973PhLB...47..365F.
2. B.R. Stella and H.-J. Meyer (2011). "ϒ(9.46 GeV) and the gluon discovery (a critical recollection of PLUTO results)". European Physical Journal H 36 (2): 203–243. doi:10.1140/epjh/e2011-10029-3. Bibcode: 2011EPJH...36..203S.
5. F. Yndurain (1995). "Limits on the mass of the gluon". Physics Letters B 345 (4): 524. doi:10.1016/0370-2693(94)01677-5. Bibcode: 1995PhLB..345..524Y.
6. C.R. Nave. "The Color Force". HyperPhysics. Georgia State University, Department of Physics.
7. David Griffiths (1987). Introduction to Elementary Particles. John Wiley & Sons. pp. 280–281. ISBN 978-0-471-60386-3.
8. J. Baez. "Why are there eight gluons and not nine?". Retrieved 2009-09-13.
9. Berger, Ch. (1979). "Jet analysis of the ϒ(9.46) decay into charged hadrons". Physics Letters B 82 (3–4): 449. doi:10.1016/0370-2693(79)90265-X. Bibcode: 1979PhLB...82..449B.
10. Berger, Ch. (1981). "Topology of the ϒ decay". Zeitschrift für Physik C 8 (2): 101. doi:10.1007/BF01547873. Bibcode: 1981ZPhyC...8..101B.
11. Brandelik, R. (1979). "Evidence for Planar Events in e+e annihilation at High Energies". Physics Letters B 86 (2): 243–249. doi:10.1016/0370-2693(79)90830-X. Bibcode: 1979PhLB...86..243B.
12. Barber, D.P. (1979). "Discovery of Three-Jet Events and a Test of Quantum Chromodynamics at PETRA". Physical Review Letters 43 (12): 830. doi:10.1103/PhysRevLett.43.830. Bibcode: 1979PhRvL..43..830B.
13. Berger, Ch. (1979). "Evidence for Gluon Bremsstrahlung in e+e Annihilations at High Energies". Physics Letters B 86 (3–4): 418. doi:10.1016/0370-2693(79)90869-4. Bibcode: 1979PhLB...86..418B.
14. Brandelik, R. (1980). "Evidence for a spin-1 gluon in three-jet events". Physics Letters B 97 (3–4): 453. doi:10.1016/0370-2693(80)90639-5. Bibcode: 1980PhLB...97..453B.
15. Berger, Ch. (1980). "A study of multi-jet events in ee annihilation". Physics Letters B 97 (3–4): 459. doi:10.1016/0370-2693(80)90640-1. Bibcode: 1980PhLB...97..459B.
16. Alexander, G. (1991). "Measurement of three-jet distributions sensitive to the gluon spin in ee Annihilations at √s = 91 GeV". Zeitschrift für Physik C 52 (4): 543. doi:10.1007/BF01562326. Bibcode: 1991ZPhyC..52..543A.
17. Lindeman, L. (1997). "Proton structure functions and gluon density at HERA". Nuclear Physics B: Proceedings Supplements 64 (1): 179–183. doi:10.1016/S0920-5632(97)01057-8. Bibcode: 1998NuPhS..64..179L.
18. Adloff, C. (1999). "Charged particle cross sections in the photoproduction and extraction of the gluon density in the photon". European Physical Journal C 10 (3): 363–372. doi:10.1007/s100520050761. Bibcode: 1999EPJC...10..363H.
19. Chalmers, M. (6 March 2009). "Top result for Tevatron". Physics World.
20. Overbye, D. (15 February 2010). "In Brookhaven collider, scientists briefly break a law of nature". The New York Times.
21. "LHC experiments bring new insight into primordial universe" (Press release). CERN. 26 November 2010. Retrieved 20 November 2016.
22. Nolan, Jim (October 19, 2015). "State hopes for big economic bang as Jeff Lab bids for ion collider". Richmond Times-Dispatch: pp. A1, A7. "Those clues can give scientists a better understanding of what holds the universe together."
23. "U.S. Department of Energy selects Brookhaven National Laboratory to host major new nuclear physics facility" (Press release). DOE. 9 January 2020. Retrieved 1 June 2020. | null | null |
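As a quick numerical cross-check of the counting argument in the gluon article above, the following illustrative Python/NumPy sketch (an addition for illustration, not part of the original article; the helper name ket_bra and the tolerance are assumptions) encodes the eight listed octet states as 3×3 color-anticolor matrices and verifies that they are traceless, and therefore orthogonal to the singlet state, and mutually orthonormal, leaving exactly 3² − 1 = 8 independent gluon states.

```python
import numpy as np

r, g, b = np.eye(3)  # color basis vectors

def ket_bra(color, anticolor):
    """Matrix for a color-anticolor combination such as r b-bar."""
    return np.outer(color, anticolor)

s2, s6 = np.sqrt(2), np.sqrt(6)
octet = [
    (ket_bra(r, b) + ket_bra(b, r)) / s2,
    -1j * (ket_bra(r, b) - ket_bra(b, r)) / s2,
    (ket_bra(r, g) + ket_bra(g, r)) / s2,
    -1j * (ket_bra(r, g) - ket_bra(g, r)) / s2,
    (ket_bra(b, g) + ket_bra(g, b)) / s2,
    -1j * (ket_bra(b, g) - ket_bra(g, b)) / s2,
    (ket_bra(r, r) - ket_bra(b, b)) / s2,
    (ket_bra(r, r) + ket_bra(b, b) - 2 * ket_bra(g, g)) / s6,
]

# each octet state is traceless, i.e. orthogonal to the singlet (r rbar + b bbar + g gbar)/sqrt(3)
print(all(abs(np.trace(m)) < 1e-12 for m in octet))  # expected: True

# the eight states are orthonormal: the Gram matrix tr(m_i^dagger m_j) is the 8x8 identity
gram = np.array([[np.trace(mi.conj().T @ mj) for mj in octet] for mi in octet])
print(np.allclose(gram, np.eye(8)))  # expected: True -> 3**2 - 1 = 8 independent states
```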
__version__ = '0.1.0'
__author__ = 'thricedotted'
from twitterbot.bot import TwitterBot, ignore
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,168 |
Erich Kuithan (; , Bielefeld, North Rhine-Westphalia – , Jena) was a German Expressionist painter, graphic artist, illustrator, and poster artist.
Biography
From 1892 he attended the private drawing school of Ludwig Schmid-Reutte and Friedrich Fehr. From May 1893 he studied at the Munich Academy of Fine Arts under Professor Karl Raupp.
In the following years Kuithan made several study trips, exhibited with art societies, worked as a magazine illustrator, and illustrated children's books.
He ran a drawing school in Jena. In 1910 he travelled to Italy, and in 1911 he was invited to the Royal School of Art in Berlin. In 1916 he returned to Jena because of an incurable illness, and he died there in December 1917.
Work
An artist of the turn of the century, he produced numerous paintings, drawings, illustrations, furniture and clothing designs, bookplates, porcelain, frescoes, and advertising posters in the Art Nouveau style that was fashionable around the turn of the 19th and 20th centuries. In different periods of his career he was drawn to Art Nouveau, Symbolism, and Expressionism.
Legacy
A street in Jena, Erich-Kuithan-Straße, is named in the artist's honour.
Notes
External links
Erich Kuithan. Retrospektive zum 100. Todestag
19th-century German painters
20th-century German painters
German landscape painters
German poster artists
German Expressionist painters
German Symbolist painters
German graphic artists
Illustrators by alphabet
German illustrators
German designers
Fresco painters | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,438 |
module: i-form
itsaclassname: Event
version: 0.0.1
modulesize: 5.78
dependencies: "polyfill/polyfill-base.js, js-ext/lib/function.js, js-ext/lib/object.js, utils, event"
maintainer: Marco Asbreuk
title: i-form
intro: ""
firstpar: get-started-onlywindow
---
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,227 |
McDonald's employee explains why hourly workers deserve respect
by: Kim Costa
At Snagajob, we know how hard hourly-paid employees work day in and day out to provide for their families. Not only do they work hard at work, but at home, too. For example, after cooking and serving customers all day, they go home and cook for their families, make sure the kids are ready for bed and prepare to do it all again the next day.
Hourly workers are more than what they do for a living. They are musicians, artists, students, parents. Not only are they paying bills and high tuition rates, but they are trying their best to move up within a company to better themselves and create opportunities for their families.
However, there is an unfortunate stereotype about hourly-paid workers that Snagajob is trying hard to disprove. Recently, an employee of McDonald's created a Facebook post, which has since gone viral, about why his co-workers should be respected and not thought of as lazy or unmotivated.
On behalf of everyone at Snagajob and hourly workers across the nation... Thank YOU Mike, for standing up for hourly workers everywhere. You prove that so many, like you, are hard-working people who are working hard to make a living, while knocking down a thousand other tasks along the way.
About Kim Costa
Kim is a job-search coach for Snagajob! She's a Certified Professional Resume Writer and a Certified Employment Interview Professional. When she's not helping with job searches, she can be found hanging with her hubby, Matt, and puppy, Belle.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,299 |
package com.theo.downloader;
import com.theo.downloader.util.HashUtil;
import java.io.Serializable;
public class Task implements Serializable {
/**
* Task status
*/
public enum Status {
NONE, CREATE, DOWNLOADING, PAUSE, ERROR, COMPLETE
}
private Status currentStatus = Status.NONE;//current task status
private Status targetStatus = Status.NONE;//target task status
private String key;//task identifier, derived from the url by default
private String url;//task source url
private String realUrl;//final download url, may differ after a 302 redirect
//the file will be saved as dstDir/fileName
private String dstDir;//destination directory
private String fileName;//destination file name
private String filePath;//file absolute path
private long downSize;//bytes downloaded so far
private long downSpeed;//byte/s
private int index;//task index
public Task(String url, String dstDir) {
this.url = url;
this.key = HashUtil.MD5Encrypt(url);
this.dstDir = dstDir;
}
public Task(String key, String url, String dstDir) {
this.key = key;
this.url = url;
this.dstDir = dstDir;
}
/**
* Total bytes to download, taken from the Content-Length of the 200 response.
* The value reported by the server may be wrong, so it is updated again once the download completes; the actual size takes precedence.
*/
private long totalSize;
public Status getCurrentStatus() {
return currentStatus;
}
public Task setCurrentStatus(Status currentStatus) {
this.currentStatus = currentStatus;
return this;
}
public Status getTargetStatus() {
return targetStatus;
}
public Task setTargetStatus(Status targetStatus) {
this.targetStatus = targetStatus;
return this;
}
/**
* update target status and current status
*
* @param status
* @return
*/
public Task updateStatus(Status status) {
this.targetStatus = status;
this.currentStatus = status;
return this;
}
public String getUrl() {
return url;
}
public Task setUrl(String url) {
this.url = url;
return this;
}
public String getDstDir() {
return dstDir;
}
public Task setDstDir(String dstDir) {
this.dstDir = dstDir;
return this;
}
public String getFileName() {
return fileName;
}
public Task setFileName(String fileName) {
this.fileName = fileName;
return this;
}
public String getRealUrl() {
return realUrl;
}
public Task setRealUrl(String realUrl) {
this.realUrl = realUrl;
return this;
}
public long getDownSize() {
return downSize;
}
public Task setDownSize(long downSize) {
this.downSize = downSize;
return this;
}
public long getTotalSize() {
return totalSize;
}
public Task setTotalSize(long totalSize) {
this.totalSize = totalSize;
return this;
}
public long getDownSpeed() {
return downSpeed;
}
public Task setDownSpeed(long downSpeed) {
this.downSpeed = downSpeed;
return this;
}
public String getFilePath() {
return filePath;
}
public Task setFilePath(String filePath) {
this.filePath = filePath;
return this;
}
public boolean isDownloading() {
return currentStatus == Status.DOWNLOADING;
}
public int getIndex() {
return index;
}
public Task setIndex(int index) {
this.index = index;
return this;
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,147 |
PICKETS AND DEAD MEN
**THE MOUNTAINEERS BOOKS**
_is the nonprofit publishing arm of The Mountaineers Club, an organization founded in 1906 and dedicated to the exploration, preservation, and enjoyment of outdoor and wilderness areas._
1001 SW Klickitat Way, Suite 201, Seattle, WA 98134
© 2009 by Bree Loewen
All rights reserved
First edition, 2009
No part of this book may be reproduced in any form, or by any electronic, mechanical, or other means, without permission in writing from the publisher.
Manufactured in the United States of America
Developmental Editor: Julie Van Pelt
Copy Editor: Carol Poole
Cover Design: Karen Schober
Book Design and Layout: Mayumi Thompson
Cover artwork: Jannelle Loewen
_Library of Congress Cataloging-in-Publication Data_
Loewen, Bree, 1981-
Pickets and dead men / by Bree Loewen. — 1st ed.
p. cm.
ISBN-13: 978-1-59485-101-8
ISBN-10: 1-59485-101-8
1. Mountaineering—Search and rescue operations—Washington (State)—Rainier, Mount. 2. Park rangers—Washington (State)—Mount Rainier National Park. 3. Loewen, Bree, 1981- I. Title.
GV200.183.L63 2009
363.14'09797782—dc22
2008028361
Printed on recycled paper
## CONTENTS
Acknowledgments
Author's Note
1 DON'T GET FROSTBITE
2 THE BOLD AND THE BALLSY
3 DEAD MEN AND DUST MOTES
4 THE COLD HEARTED
5 JUST A PAINFUL WAYPOINT
6 BACKTIED TO A BUSH
7 KAUTZ, SOLO
8 THREE DAYS
9 ROADTRIP RESCUE
10 POINT OF NO RETURN
Afterword
## ACKNOWLEDGMENTS
GEORGE BURNS ONCE SAID, "No snowflake in an avalanche ever feels responsible." Here's a shout out to all the unique and multifaceted individuals who are responsible for the creation of this book. Thanks to Ted, Adrienne, Tracie, Tom, Paul, Charlie, and the rest of the park folk for giving me the awesome experience of having met and worked with you. You guys have changed my life in so many ways. Thanks to my friends and climbing partners Alice, Ryan, and Russell, who taught me how important grace is in climbing and in life. Thanks to the entire staff of the Mountaineers Books for their time and attention. It was a pleasure to work with Dana Youlin, Mary Metz, Julie Van Pelt, and Carol Poole. Special thanks to Russell for your unfailing support on this project, as well as for marrying me and giving me a child and a small house in the country with nice Adirondack chairs, a big kitchen, and a spot to grow organic peas. Happy endings are the best.
## AUTHOR'S NOTE
I HAVE MADE EVERY EFFORT TO PROVIDE an accurate account of the people, places, and events as relating to my experiences on Rainier. I have relied on my own memory and journals, as well as the memories of friends, family, and fellow rangers. But essentially this is my story, my perspective on my seasons on Rainier. Any gaps in my story or misrepresentation of facts are purely accidental and unintentional. In most cases, I have confirmed or attempted to confirm such details, but there will undoubtedly be a few inconsistencies, for which I apologize in advance. Some individuals' names have been changed.
## 1
* * *
## DON'T GET FROSTBITE
A MONTH INTO MY FIRST SEASON I was interviewed by the _Tacoma News Tribune_ about what it was like to be a girl climbing ranger at Mount Rainier National Park. They wanted to know about my typical day, how many people I'd rescued, and what it felt like to kick butt in the mountains wearing a sports bra and floral print shorts. I told them the bra and shorts were buried under too many layers of clothing to be a factor. I said sometimes it was difficult being a woman in the profession, but other things were far more difficult, and then, laughing, I told the reporter I wasn't going to talk about any of it because being a climbing ranger was my dream job and I wanted to keep it.
As I understand it, when I was hired there were several guys who had been volunteering as climbing rangers for years with the expectation that they would get hired as soon as spots opened up. They'd had a falling out with the boss just before the season started, despite being well-liked and respected by everyone else. Consequently, they were slighted when myself and two other girls with no real experience on the mountain whatsoever were hired in their places.
We three girls had a poor understanding of what we were supposed to be doing on a daily basis. Almost no one wanted to invest the time to teach us how to do our jobs right, and no one expected us to stay very long. In the beginning, I persisted partly because of the glamorous idea of doing high-profile rescues. I wanted to fly around in helicopters and watch my work on the evening news. Mostly, though, I thought shared hardship increased camaraderie, and so the more difficult and stressful rescues I took part in, the less the grudges of the past would matter, and the closer a community my coworkers and I would become. I wanted to create friendships that would last, that I could trust my life to, the kind where we would sit together when we were old and reminisce about all the crazy things we'd done.
These friendships never really materialized, but I stayed for years anyway because of my relationship with the public and the beauty of the environment, and because I received a paycheck. Among my coworkers, I was the one who cleaned the bathroom, but when I was the lone ranger on the upper mountain I was the ultimate voice of authority. When people did what I said, they lived, and when they didn't, they died, or at least suffered unreasonably. They didn't always respect my advice at the time, or sometimes ever come to appreciate the unconditional support I gave them, but I knew even the small contributions I was able to provide were making an immediate difference in the lives of inexperienced or unlucky climbers every day. Although these weren't the types of relationships I was looking for when I started, I recognized that they were important nonetheless.
This do-gooder desire was combined with my lack of other job prospects. I'd graduated from college at seventeen with a philosophy degree, spent the next four years as a climbing bum living in my compact car, and had most recently worked as an ambulance jockey, sleeping on the gurney in the back between unlimited overtime shifts. That was a job that I would have done almost anything not to go back to because I felt like the longer I worked there, the more jaded I became towards the people I was sent to help. I saw starving old people who were afraid to tell anybody, and babies covered in rat bites, along with a litany of things that seemed to me to be worse than death. I saw so much that things that should have bothered me started not to. I didn't want to be numb. I wanted to be compassionate and understanding again, and I wanted my days to be filled with beauty and success.
This left me at age twenty-one with climbing as my only job skill, and I'd just had one of the rare, extremely hard-to-get climbing-based jobs dropped in my lap. Here I was, with important work handed to me every day in one of the most beautiful places on earth. I wanted to prove I had what it took to work every day in the mountains, to show that women could excel at this work. I didn't want to blow it.
During my three seasons at Rainier I came to realize that, as much as I wanted to prove myself, in the end I didn't have the skills or stamina to live up to the standards my supervisors, coworkers, and the public set for me, let alone the ones I set for myself.
Even from day one, things were grim. I came into the job with a debilitating reputation to overcome: I'd been rescued off Mount Rainier in a highly televised extravaganza less than a month before I got hired to rescue others.
This is how it happened. I was planning to climb Denali with two friends, Alice and Cicily. A few years earlier, Cicily and I had met while working in an outdoor goods shop, paying our way through college, but we'd never climbed a mountain together. So, only a week after I submitted my application to be a climbing ranger, Cicily and I decided we should climb something smaller and local first to make sure we could get along on a big trip. Since the majority of our climbing experience had been on alpine rock, Mount Rainier seemed like the perfect choice for honing our slogging skills.
February was when Cicily could get away, so on Valentine's Day I picked her up and we drove to Mount Rainier, our practice climb, in a torrential rainstorm. By the time we reached Paradise at the end of the road on the mountain's south side, the rain had turned to snow and sleet, and the wind had picked up. It was horrible weather for climbing, and we sat in the car with the heater running, watching the slush land on the windshield, trying to work up the guts to get out of the car.
We'd planned on doing Gibraltar Ledges, a standard winter route, but the weather shrank our aspirations to just hiking to the public shelter, a heatless rock building at Camp Muir about halfway up the mountain. We figured we'd hang out for a few days, which in my mind meant we'd be cold and wet and miserable together—a good test of whether we'd be able to do it again for a longer period of time.
I was really excited to finally be putting together an international women's trip. My previous trip—leading climbs in Peru without pay for an under-the-table guiding business run by my ex-boyfriend, who'd tried to strangle me with a tent cord one night when I ate more than my meager share of freeze-dried pineapple chicken and rice—had been lousy. The climbing had been fantastic, but the social dynamic had been too scary to make it worthwhile. I wanted to have a climbing experience that involved both amazing climbing and fun partners, and this Rainier trip seemed like a good idea to make sure all would go OK on Denali. I really wanted the Alaska climb to be a bonding experience. I wanted to forge a long-lasting climbing partnership with Cicily, to grow up and grow old with her and Alice. I wanted us to be able to trust each other with everything, and do epic, amazing routes together in the future. I just knew that Rainier and then Denali would be the catalysts that would make it all happen. In retrospect, my expectations might have been too high.
We got our backcountry permit and started hiking up through the snow play area, which was almost deserted in the hideous weather. Cicily started having problems right away. She said her backpack was rubbing on her shoulders, and her mouth turned down like she was about to cry. I was glad for the sideways sleet, because it prevented me from seeing if she was tearing up. I was afraid that a breakdown this early, before we were comfortable with each other, might lead more to awkwardness than camaraderie. I asked if there was anything I could do, since I'd done a lot of pack fittings at the outdoor shop and I knew people often underutilized their load straps. She asked me if I could take some of the weight. I was surprised, but I knew I could carry it so I didn't mind too much.
We stopped and took off our backpacks, and then she handed me the stove and pot, the rest of the group gear, and two quart-size bottles of contact-lens solution. I didn't say anything, but secretly worried that since I hadn't even brought a toothbrush I had misjudged the hygiene standards of this trip. I packed for light and fast style, which usually meant I reeked after a few days. My usual climbing partner, Alice, had never cared, but I wondered now if she had just put up with me all these years, if I was unaware of some female backcountry grooming code.
Cicily and I shouldered our packs again, but in another hundred yards she stopped and said it was the pack itself that was the problem. I grinned, "Do you want to switch packs?" Five minutes later I had all the gear inside her pack. I hoisted it up and we were off again. I was having a really good time despite the weather, and the pack thing didn't seem like a big deal. I knew that girl climbing partners are hard to find. I was willing to carry the contact lens solution. I was willing to be flexible and forgiving no matter what happened because I wanted this trip to work, and I hoped Cicily would, too.
After we passed tree line, the already poor visibility deteriorated rapidly. At first I could see about a hundred yards, but that dropped to a few feet in the encroaching whiteout. I was squinting, unable to keep more than one eye open against the wet ice crystals the wind was blasting in my face. My hood was cinched as tight as it would go, but water was still running down the back of my neck from the snow blowing in. I realized I was going to be cold and wet all the way to Camp Muir, but I wasn't too worried about finding the way there. I had a map and compass, and I was fairly sure we'd find the shelter as long as we followed the bearings.
We had seen a few other hardy souls around Paradise, but as we got higher only two men looked like they were continuing up the mountain. Cicily was lagging a ways behind me, so I would plod along for a while, then stop and wait so we didn't lose each other. One of the two men was also stopping to wait for his companion, and the two of us were leapfrogging. I felt anti-social staying so close to them but not saying anything, so the next time he passed me I yelled, "Hi!" and waved.
He came over and said his name was Steve. He looked to be in his late fifties and fairly weathered, though his hood hid most of his face. He seemed nice enough, and extremely confident in his climbing abilities. Shouting to be heard over the wind, he named a number of Alaskan summits he'd been to, which made sense because he said he lived in Anchorage. He'd come to summit Rainier via Gibraltar Ledges with his long-lost college friend, Kim. They had wanted to climb together, but the trip's real purpose was to be one of Kim's training climbs for Mount Everest.
I was happy to talk to Steve, but I was wearing every bit of clothing I had and I was freezing. I was beginning to get impatient and tired of doing jumping jacks on the long stops. I remember wishing Cicily would hurry so we could get to the shelter before dark, but a few hours later, in the dark, with the wind blowing so hard it was nearly knocking Cicily and me off our snowshoes (the guys didn't have snowshoes), we couldn't find the shelter. Steve had an altimeter and he kept yelling that we were at the right elevation, but that the shelter was missing. I doubted this, but since I couldn't find it, either, it seemed easier to accept that the shelter was missing than that we were lost. We decided to dig in where we were and spend the night, since the weather was too awful either to keep looking for the shelter or to turn back.
Steve and Kim's plan had been to do Gibraltar Ledges in a single push—a burly goal in winter—and they hadn't brought a lot of gear. Cicily and I had planned on spending the night in the shelter. So none of us had a tent. We talked about digging a snow cave, but because January that year had been sunny, there was a bulletproof ice layer about two feet down, covered by loose, unconsolidated snow. I had the only shovel and it was no match for the ice layer, so I dug a trench a few feet deep and wide enough for the four of us and we just lay down next to each other. Cicily and Kim had bivy sacks. I was still saving money to buy one. My down sleeping bag filled with snow the instant I unzipped it to get in, and knowing that the bag would be worthless once the snow melted into the down, I got into it with my full raingear and boots still on. I didn't really mind the unplanned night out in a big storm. I was happy whether we got ourselves in trouble or not, since the point of the trip was to find out how Cicily and I got along in tense situations. I was just excited to see how we did before it really counted.
It was a sleepless night. The storm was warm and wet, and several feet of heavy new snow fell. The winds increased. At first light, we got up because we were freezing and our sleeping bags were useless. None of us had put our water bottles in our sleeping bags, which was stupid, because now they were chock-full of ice. None of us ate anything or even commented about the lack of water because we wanted to begin moving down immediately, thinking we were only a few hours away from the parking lot.
I got out the map and compass. We didn't know exactly where we were, but we thought we were close to Camp Muir. I wanted to follow the cheat-sheet bearings the ranger station had handed out, but after walking a few hundred yards—Cicily and I breaking trail with our snowshoes, the two men behind, post-holing in our footsteps—Cicily started freaking out, saying we were going to die in an avalanche. Some rime ice-covered rocks on our left looked like they paralleled us on a bit of a ridge, and she wanted to follow the rocks down instead, thinking it was a safer route.
It turned into a bit of an argument between us. I was fine with an unplanned night out, but I was ready to go back to the car. I said if we followed the rocks down, we'd be completely lost, since we didn't know what rocks they were, but if we stayed on the bearings we'd be OK. I didn't think the snowfield was steep enough to slide. Cicily said she knew we'd be caught in an avalanche if we stayed on the bearings, and said she'd follow the rocks down by herself if she had to.
I remember yelling, "Cicily, if we go that way we're going to be in for a major epic!" But I also felt that we couldn't split up. I didn't want to lose a friend and potentially good climbing partner because of a stupid fight. I also had all of Cicily's clothes and equipment in my pack. I resolved that a major epic would be OK—at least I'd get to know her. Maybe we could still have a quality bonding experience.
The two men behind us didn't say anything about what direction they wanted to go. It looked like a lot of work for them just to keep up with us, since the snow, even in our snowshoe prints, came up to above their knees. I suppose they could have gone off on their own, but I'd learned from the night before that their stove wasn't working and they didn't have any food beyond a frozen Snickers bar or two, or any water. Without snowshoes to hike fast or a shovel to build a shelter—or a better sense of direction—they were largely dependent on us.
After about fifteen minutes the rocks ran out, but we continued walking downhill anyway. We began thinking that we were too far to the east. The wind was so strong that we were knocked over by the bigger gusts, and we felt it was blowing us off course. We compensated by side-hilling for a ways to the west and then continuing down again, only to think we'd gone too far west, and so we angled back to the east. The problem was, we never seemed to get any lower. After innumerable hours of zigzagging we started finding hills, huge uphill sections, when our route should have taken us straight back down to the Paradise parking lot. We'd crest a hill after what seemed like hours of trudging, only to go down the other side and find another hill after it. We were still far above tree line, and it was snowing sideways so hard we could barely make out each other's shapes as we walked.
We were exhausted and soaked to the skin. My down jacket hung on me like a wet, oversized rag. Water was running down the back of my legs under my rain pants. I didn't have a bit of dry clothing in my pack. My one gear success was the pair of Ed Viesturs-recommended eight-thousand-meter synthetic mittens, so I could still feel my fingers even though they were cold. Everyone else was complaining bitterly that they hadn't been able to feel their fingers or toes for hours.
Finally, it started getting dark and we had to concede we were not going to get out. We decided to dig a snow cave, no matter how long it took, and started taking turns with the shovel. It was impossible going. The snow on the surface was hopelessly soft and loose, and below that was rock-hard ice. We had to switch to chiseling with Steve's ice ax almost immediately.
After five minutes of standing still, Cicily started crying quietly. She was really cold, and said she knew she was going to die there. She refused to dig, and lay down in the snow in a ball. I didn't comfort her. I didn't know how. I'd mostly climbed with guys and Alice, and I'd never had a partner do this before. I reasoned that guys usually wanted space when they had a breakdown so they wouldn't feel embarrassed. I thought that if I left her alone she'd eventually come to her senses and realize that we needed to keep working on this snow cave in order to warm up.
The three of us continued digging in turns. Hours passed. Steve and I were zoning out in our misery, staring into the darkness, and we both started thinking it was strange that Kim had been digging so long, since his turns had been fairly short up to this point. We turned around and found him curled up in a ball inside the tiny shelter, out of the wind. We pulled him out by his ankles and then Steve and I took turns digging. Finally, we couldn't dig any more and we all piled in.
The snow cave was T-shaped, and the guys' side was almost long enough for them to stretch out, but our side had hit impenetrable ice and was only long enough for us to lay on our backs with our knees up to our chests. Both sides were very narrow, and once we'd all gotten in I was basically lying on top of Cicily. Cicily lay there a minute and then screamed at the top of her lungs into the pitch darkness, "I feel very uncomfortable with this situation!" She was crying again and it pissed me off, since there was obviously nothing more I could do. It didn't bode well for our Denali trip. She started sobbing and yelling, "Get off me!"
I had been happy for the warmth, and willing to accept the discomfort in exchange for gaining camaraderie, but Cicily's reaction felt like a denial of all that our friendship could be. It hurt me deeply and made the physical torture all the worse because I knew I was experiencing it for nothing.
"I'll be right back," I said as I wormed my way out of the snow cave, not caring for the moment if I froze to death. No one was going to see how much this final loss had hurt me. I didn't know what else to do for Cicily, but I tried desperately to think of something. I stood there, feeling any remaining heat being blasted away by the mad winds tearing through my wet clothes. I couldn't see anything, and I knew I would die if I stayed out in this for even five more minutes.
I tried feeling my way back into the snow cave, but there was a backpack wedged into my spot. I started to pull it out, but heard Kim's voice, quiet but firm: "This has become a life or death situation and it's everybody for themselves. I need my pack to stay dry." There were no other sounds from within the snow cave. I recoiled, thinking of Scott's race to the Pole, and how Oates had at least been able to decide to die of his own volition.
Back out in the ice shrapnel I reviewed my options. I couldn't leave them and I couldn't stay in the snow cave. I knew digging another cave for myself would take too long. My long underwear was wet, and the next few layers were stiff with ice—my down jacket could have stood up on its own. I'd tried pulling my sleeping bag out of my pack earlier, but it had frozen into the shape, weight, and consistency of a frozen turkey. I'd been unable to massage it into something I could get into, and I knew it would be totally worthless either for warmth or as a wind block. I began feeling that the rest of the evening was going to sort of suck.
Just then there was a light from behind me, and I recognized Steve emerging from the snow cave. He walked over to me and shouted, "I couldn't stay in there anymore." I nodded, immediately and intensely grateful. "We need those bivy sacks," he yelled. After a little investigation I found out that Cicily wasn't even using hers. I'm not sure what Steve said to Kim to get him to give up his.
The wind threatened to pull the bivy sack out of my grasp. It took everything I had to concentrate on pinching it between my two frozen hands. I stuck it over my head and spent a few minutes pulling it down, until my head was at the foot end of the yellow bag and only my feet were sticking out the bottom. It was a relief to be out of the wind, but the ice-encrusted bag was still slapping me in the face, and snow was billowing in from between my feet. I sat down on my backpack with my back to the wind, sitting on my hands because my gloves were so thick I figured they could double as a sit-pad. I leaned forward until my head was on my knees, with one cheek on the crusty fabric, and after a few more minutes of full-body shivering I realized that the muscles in my back and legs had become rigid. I was unable to sit back up.
I spent the endless night shivering violently with my headlamp on. I was afraid to turn it off, even though I knew I needed to save the batteries—but total darkness added a surreal element that I couldn't bring myself to appreciate. I hadn't been able to feel my toes all day, but there was no way I was going to take my boots off to see what they looked like. Thanks to the mitts I could still feel my fingers, but I had to wiggle them constantly, which was OK since it gave me something to do.
In the morning, just before dawn, the weather wasn't any better. It had snowed several feet overnight and the wind was still blowing the snow sideways, creating huge drifts. Cicily and Kim hadn't tried to dig out the entrance to their cave at all, so they were buried in new snow. My first awareness of life outside the bivy sack—beyond my attempts to sing myself a song, and my irritation at being unable to remember any lyrics, or to concentrate on any thought whatsoever—was muffled screaming coming from the snow cave. It was both a relief and a new agony to take off the bivy sack, and be exposed and free in the storm again.
I stumbled over Steve, who was lying down completely covered in snow, and for a moment I thought he had frozen. It was an enormous relief when he moved slightly and I yelled down at him, "Rise and shine."
It took us a few minutes of digging to locate the snow cave entrance and pull Cicily's pack out. Kim and Cicily piled out after it, sputtering and hysterical. Cicily told me later she had woken up disoriented, or had just become increasingly aware that the snow cave was super stuffy and totally dark. She and Kim had panicked, and pandemonium had ensued. They were tangled in the dark, screaming, until we shoveled them out. It sounded horrible and claustrophobic, and actually made me glad I'd spent the night out.
Cicily's fingertips were black and horrifically swollen. So were Kim's. Steve's left thumb had swelled and darkened so badly that the flesh was split down the side of the nail and the bone was exposed. I was shocked. It was the first time I'd seen bad frostbite in real life. I'd experienced waxen fingertips and the screaming barfies, that condition where the pain of blood returning to your frozen fingers makes you want to scream and vomit at the same time. I'd gotten blisters from ice climbing a few times. But I'd never seen anything that I knew, without a doubt, needed to be amputated. I'd been worried about hypothermia, but I hadn't even considered frostbite. I had thought we'd all get out, warm up, and be fine in twenty minutes, without any negative lasting consequences. Suddenly I realized it wasn't going to work out that way.
I duct-taped a gauze pad loosely around Steve's thumb to keep the skin from hanging down off the bone and looking gross. Cicily wanted me to switch gloves with her since hers were inadequate and soaked, and she knew her fingers were only going to get worse. I wanted to extend an offer of friendship, to repair our frayed relationship—I was in a position to help her—but I said no to save my own skin. I felt awful. But I rationalized that my hands were now a valuable resource since the others were unable to do any task that required dexterity. Plus, she had health insurance and I didn't.
We could hear avalanches going off all around us. We didn't know what kind of slope we were on or what was above us, since we couldn't see any farther than each other. We figured that we probably weren't in the best spot.
I started packing up. I wanted to be as helpful as I could, to justify the gloves decision, so I packed everything and re-laced everybody's boots. Cicily had to use the bathroom and I helped her take her pants off and then put them back on again. She had left her snowshoes flat in the snow the night before, and they'd been covered completely. I looked for a long time, but I could only find one of them.
We started down, but the slope was too steep. We were up to our waists in the snow but falling forward. We didn't remember any place so steep on the snowfield. After about ten minutes we unanimously decided to turn around and wait until we had better visibility before continuing. We were afraid we were above a cliff, and if the slope got any steeper we would slide off. Even if we did come to a cliff and managed not to fall off it, we wouldn't have the strength to hike back up the hill. Swimming in the snowdrifts back up the steep terrain we'd just slid down took us several hours, and we were unspeakably relieved and exhausted once we reached the cave again.
I decided it was time to start cooking. I had tons of food since we'd been planning to stay at Camp Muir for several days, and we hadn't eaten or drunk anything since we'd left Paradise two days before. The stove wouldn't light in the storm, so I lay flat on my stomach and lit it in the entrance to the snow cave, and started melting snow that I'd chipped off the ceiling. Cicily refused to get back into the snow cave, so Steve put her into her bivy sack outside the entrance. Steve was trying to talk some sense into her, but I couldn't hear what he said. Steve would come over periodically and I could tell he wanted me to understand that Cicily was having a serious problem, one I needed to deal with since I was her friend, but I didn't know what to do. I was doing the best I could. Kim just lay down on the other side of the snow cave and stared blankly at me, which was a little creepy.
I made tea—I had six kinds to choose from—and then I made soup. Cicily had a thermos which I filled with soup and had Steve take to her. The stove wouldn't stay lit, so I had to dismantle it several times. In the end I used parts from both stoves to keep it going. I noticed that no one was very hungry except me. I set out cups of hot tea for Steve and they got cold before he came and got them. Kim wouldn't eat or drink anything. I thought this was totally bizarre, since for me the hot liquid was incredibly restorative. I was still shivering constantly, but I finally had to pee for the first time in two and a half days.
That night the wind was crazy, and I wondered if we would all still be alive in the morning. I was baffled at how the situation had become so serious so quickly. Even why it was so serious was beyond me. I wanted to yell at Cicily to buck up and at the others to try the potato-leek soup. It wasn't half bad. Cicily had been uncommunicative all of the previous day and had refused to get out of her bivy sack. Kim wasn't making much sense and wouldn't wear his gloves. I wasn't sure where Steve was. I wanted to make hot water bottles but I couldn't get the stove to stay lit long enough to melt the ice in the Nalgenes, and we didn't have any that were empty. I stayed where I was next to the stove in the snow cave all night, and nobody said anything. I lit the stove and made drinks every few hours, and dug out the cave entrance, and the rest of the time I shivered in a wet, miserable, solitary ball.
By morning the wind had died down, the clouds had disappeared, and the sun was shining. It was incredible. It was almost warm. I was still shivering, and my jaw was cramped because my teeth had been chattering for so long, but finally the cold wasn't quite so insidious, and I hoped maybe some of our stuff could dry out a little bit in the sun. Just outside our snow cave was a pile of rocks covered in rime ice, and we all stumbled over to sit in the sun. Even Cicily got out of her bivy sack, although we had to help her walk. Then we just sat there a long time.
I wanted to try hiking out, but Cicily and Kim didn't think they'd be able to make it in a day, and no one wanted to try to find a new shelter on the way down. There was only one pair of snowshoes left. I didn't want to leave by myself, even to get help, because no one else was able to light the stove or take care of themselves because of their damaged fingers. So we just sat there in the sun.
We tried calling 911 on Steve's cell phone, but we couldn't get service. I was embarrassed that I needed to be rescued. I'd thought we could pull it off, and now I realized we couldn't. It was a demoralizing moment. I'd just put my application in to be a climbing ranger, and getting lost was a fantastic reason for them not to hire me. So I would be jobless. It didn't look like I was even going to have my all-girls' trip to South America. Probably no one would ever want to climb with me again once they knew how badly I'd screwed up. And I'd be friendless. I hunched lower on my ice-covered rock, glaring out from under my saturated balaclava, blinking in the crack of natural light that kept growing bigger as the wet foam on the top of my ski goggles sagged lower and lower down my nose. Regardless of the consequences, calling for help was the right thing to do.
Steve figured that even though our call for rescue didn't go through, he and Kim were now a day and a half overdue and so someone ought to be looking for them. I made a giant X in the snow with our wands just in case Steve and Kim's absence was important enough that a helicopter would come. We didn't see any.
A little before noon a woman skied into our camp. We just stared at her. "Are you with the lost guys?" she asked, eyeing Cicily and me.
"Yes," I said. "Probably you mean us, anyway."
"You know the Paradise parking lot is crawling with news crews and people out looking for them," she said, pointing to Steve and Kim.
"No," I replied, "we didn't know that."
"Well," she scoffed, "you ought to tell the Park Service where they are." Her disdain was so tangible I had to wonder what scenario might be going through her head—possibly that this was some sort of mock disappearance we'd staged in order to facilitate an illicit romantic tryst—but I didn't care. We could follow her tracks back to the parking lot. I was excited to get out any way we could, and I hoped the others felt the same way.
"When you ski down, could you mark your route with some of our wands?" I asked. "We'd like to follow your ski tracks out, but I'm not sure we'll get out today, and if it snows again we won't be able to follow you." We gave her the rest of our wands. She said she was too cold to stop so she had to run, but she'd mark the route back to the road and let the Park Service know she'd seen us.
I started packing up. The hope of getting out without spending another night, possibly in another storm, gave me fantastically renewed energy, and I stuffed frozen gear into our backpacks as fast as I could. I didn't want to abandon any camping equipment in case we didn't make it out that day.
While I was packing, another skier came along. This time it was a Park Service employee on his day off, Rich, out for a ski on the Paradise glacier, enjoying a break in the weather. Unlike the first skier, Rich seemed interested in helping us back down the glacier. "Hey," I said, "would you let Cicily snowshoe down with you? She doesn't want to spend another night out and we're not sure we can make it down today without snowshoes." It was a huge relief when he agreed and gave Cicily his spare pair of gloves. I strapped my snowshoes on her feet and gave her my car keys.
"Now," I said, "whatever you do, don't go in an ambulance to the hospital because they'll take you to Tacoma. You want to go to Seattle because they have better facilities to treat frostbite." I said this after remembering that Virginia Mason Hospital in Seattle had a hyperbaric chamber. I told her to wait for me if she could, but if she couldn't wait for me to take my car and I'd find a way to catch up with her later.
The remaining three of us started post-holing along, following the tracks that Rich and Cicily had left in the snow. We never did see any wands. I was paranoid that it would start snowing again. The sun had disappeared as quickly as it had come, and the sky looked ominous. I knew with every fiber of my being that we needed to go as fast as possible.
We were taking turns breaking trail, following the tracks, winding our way between crevasses, as it started snowing again. When Kim dropped to his knees and started crawling forward in the waist-deep snow, I didn't say anything, just pushed him out of the way and continued on.
We didn't say anything for hours and hours until I saw my backpack lying in the snow between the prints ahead of us. I was amazed. I still had Cicily's pack and she had mine, but she hadn't been carrying anything in it, so I couldn't figure out why she'd ditched it. I stopped, picked up my pack—which was expensive—and tried to figure out how to affix it on top of the pack I was already wearing.
The backpack didn't fit well since I was still carrying Cicily's remaining snowshoe. I removed the snowshoe and thought, why am I carrying this? I couldn't think of any great use for it. I couldn't walk with just one snowshoe on, I couldn't eat it, and I couldn't use it for warmth. Briefly, I thought that maybe I didn't want to litter. Then, suddenly, I was angry. Here I was, lost in a midwinter epic and carrying two quarts of frozen-solid contact-lens solution and other people's leftover gear. As a beleaguered traveler, shouldn't I strip down to the bare necessities for survival? (Which for me also meant not losing personal gear I couldn't afford to replace.) I refused to carry the offending snowshoe further. I discus-hucked it into a crevasse—let the world judge as it will—attached the two packs, and continued post-holing toward the parking lot.
It got dark and started snowing heavily again, but we continued pushing through the deep drifts. Once below tree line, the promise of the parking lot was overwhelming. We had to continue. When we were totally exhausted—much more so than we'd been days earlier when we'd said we were totally exhausted—we stumbled into a group of mountain rescue folks. There were tons of them, different groups from different jurisdictions all hiking together. Their headlamps looked like a string of Christmas lights in the dark. They all tried to call in on their radios at the same time to say it was their group that had found us. They had brought snowshoes for us. They threw them in a pile at our feet.
"Our hands are frostbitten," said Steve.
"That totally sucks, dude," said one of the searchers, "but we'll get you fixed up as soon as we get back to the parking lot."
They all nodded and stood, looking at us expectantly. I got down on my knees and started putting the snowshoes on Steve and Kim.
We were back in the parking lot in fifteen minutes. There were news crews everywhere. A couple of green-clad Park Service employees hustled us into a Suburban as people with microphones beat on the windows. They drove us down to Longmire, the park headquarters a half-hour down the road from Paradise. I wanted to take my own car, but I learned that Cicily had taken an ambulance to the hospital in Tacoma with my car keys in her pocket.
We got debriefed by my future boss who wanted to know where we'd been, how the four of us had found each other in the storm, and any other interesting information we had. We didn't know much. They gave me a free phone call and I called my parents, who didn't know I'd been missing, which made sense since I wasn't supposed to have been home until that night, anyway. My mom asked me to say hi to Cicily for her, and hoped we'd had a nice time.
I sat on the front porch of the Longmire Inn, trying to figure out what to do next. Steve and Kim had driven off to Virginia Mason in Seattle. I had to get to Tacoma to pick up Cicily. My friend Alice was driving from Idaho on her way back from a wilderness EMT class, and she said she'd stop by Mount Rainier on her way home to pick me up. Even better, I found out that Cicily had discarded her coat before she'd gotten in the ambulance, and my keys were in the pocket.
A few hours later Alice showed up and gave me a ride back up to Paradise to get my car. Then we caravanned to Tacoma to pick up Cicily. Cicily had been diagnosed with frostbitten fingers and something resembling trench foot, but they wouldn't keep her for the night. They told her to go home to California and deal with the frostbite there. She needed a bath, she said, but she couldn't wash herself since her fingers were all bandaged up. I begged Alice to take Cicily home with her for the rest of the night, until we could get her on a plane. I have never been so grateful to anyone as I was to Alice when she said yes.
To sum up, I lost nearly twenty pounds in four days from shivering and dehydration. Cicily said the experience, along with the months of bandage-changing that followed, inspired her to become a doctor, which she is now. And somehow I got a job working as a climbing and general high-altitude rescue ranger, starting three months later. Cicily and I never did climb together again.
Though I'd been lost on the mountain, I didn't quietly ask for my resume to be returned. I accepted the park's offer of employment even though I didn't understand at the time why on earth I got it. I even showed up that first day, though I had pretty well convinced myself it was a setup. My appearance led to grave concerns from the rest of the staff and outrage from the lead rangers directly responsible for me. They'd thought they would be getting someone they had groomed for the job and were already friends with, but instead they got a girl they hadn't met until she got herself lost in terrain she should have known like the back of her hand. For me, it was a difficult work environment to come into. But I resolved to try as hard as I could to fulfill my dreams. I would learn the skills I needed, and risk as much as I had.
Lately I've been wondering what I'd have told the _News Tribune_ reporter if he'd interviewed me a few years later. During my three seasons at Mount Rainier I learned a lot about mountain climbing and rescues, about politics and camaraderie in the mountains, and about what being a woman climber means. Now I know in all certainty when to bring my toothbrush and when to leave it at home and, all things considered, that kind of confidence is hard to come by. The greatest skill I ever had, though, was the one I started with: being able to suffer for long periods of time and not die. In exchange, I got to see some amazing things.
Four days in a whiteout was the beginning of my relationship with Mount Rainier, and what follows are stories of my subsequent work there in my third year as a ranger, my days and nights, and heart and soul. This is my last year on the mountain.
## 2
* * *
## THE BOLD AND THE BALLSY
MIKE, OR "GATOR" TO HIS FRIENDS, supervised eleven climbing rangers at Mount Rainier. Two of them were middle-management guys, Glenn and Stefan, who got out on the mountain sometimes but were mostly stuck with office work. Four were the team leaders: David and Chris, with grunt rangers Stoney and Jeremy on the north side; and Charlie and Andy, with grunt rangers Matt and me, Bree, on the south. Adrienne—the other girl who had also decided to stay—and I weren't allowed to work on the same team in the best of times, but she'd injured herself just before the season started and spent the summer out of the field, running the Climbing Information Center. That sounds like a lot of rangers, but it wasn't. Of the eight paid climbing rangers who typically worked on the mountain, on any given day half were having their days off. That left just four rangers to cover the whole mountain, north and south. I worked on the south side, and it was just Charlie, me, and our volunteer, Tom.
When Mike hired me, he said he was collecting eclectic personalities more than anything. He wanted the bold, ballsy, and elite climbing attitude to shine through the government red tape. I think he considered this the last hope of being able to do rescues the right way—with a group of people who wouldn't ask permission or give up control over something they knew best about. However, the negative aspects, the intolerance, egomania, and "lone wolf" attitude that came with this personality type, were hard to integrate with the customer-service role of being a climbing ranger, and led to many awkward situations involving offensive language, uniform requirements, substance and beverage regulations, and sometimes a lack of human decency. Mike wanted showy, so while I tried to be a sweetheart on the mountain, I didn't when I was around Mike. When I got hired, we shook hands. "Call me Mike," he said.
The north side rangers were fantastically good rescue rangers, but often so hard to get along with that Mike would sit down and lecture us south side rangers about damage control etiquette. It was up to us, he would say, to be professional and get along with the rest of the park by setting a consummate example of what the park means: uniformed, friendly, and well-mannered. The paradox was that if we succeeded, it meant we weren't good climbers and subsequently weren't his first choice for rescues; and if we didn't, then we weren't doing our part for the team. Sometimes it made me angry, but most jobs have their share of strange politics, and this didn't seem so different from anywhere else.
In October, almost two months after the end of my second season with the park, Mike called me at home. He needed to get a dead guy off the mountain, and no other rangers were around, so he'd had to call me and Adrienne, the girls. "OK," I said flatly, "I'll do it." I'd been waiting two full seasons to play a central part in a rescue, the most important aspect of what a climbing ranger does.
Agonizing flashbacks raced through my mind. All the times I'd been in the right place at the right time to help with a rescue, but Mike had called the north side rangers to drive around the mountain, or he'd recruited someone else during their days off. Worse, he'd call me back to work for rescues during my days off, saying he was desperate for more help. I'd made the three-hour commute from Seattle, but when I got to the park he'd asked me to backfill Camp Muir, or staff the office so we could keep selling permits.
At the start of a rescue, all the climbing rangers would meet in the Park Service's equivalent of a war room in Longmire, but I never got to sit in on those sessions at which everybody else learned where they were going and what the story was. Beforehand, Mike would ask me to make coffee and then he'd ask me to leave when the meeting started, with everybody else staring at me as I left. Then Mike would ask whoever was nearest the door to shut it. I'd stand for a minute alone in the hallway, staring at the wall, before driving up the hill to Paradise.
I never understood why he ousted me from planning meetings. Maybe it fostered some solidarity among those who stayed. I figured it must make the rest of the team stronger, to know they were chosen, even if it was at my expense. I'd spend those days fuming, not understanding why I kept up my emergency medical technician's license and went to rigging and helicopter classes. Some of the other climbing rangers hadn't been current for years, but they had more field experience, and they always would if I never got to do rescues. I shook with fury and found myself overcome with the fear that I was totally worthless, but I smiled when tourists drove up to the ranger station and honked until I came out.
"Is this it?" they'd ask, peering at the fog. Trying to see the mountain behind it.
"Yep," I'd reply, "this is all there is here for us today."
Disappointed, they'd screech out of the parking lot without even getting out of the car.
In retrospect, I wonder that I didn't enjoy those days more. I think I was focused on participating in the human element, striving to have a role in someone else's life and death struggle. And I thought that to go on a rescue would generate the camaraderie and respect I wanted to share with my coworkers—one of the reasons I'd become a climbing ranger in the first place. This obsession with being left out of rescues blinded me to the amazing things I did have: the wonder of being enveloped and held in the swirling fog, seeing the rain pelting on window panes, the peace of being alone.
But that October when Mike called, I felt that this was my chance at last. It was going to be a difficult assignment. It wasn't particularly time-sensitive (since the man was already dead), and I was already familiar with the terrain, but Mike said he was coming with us, and that meant I was going to have to put on a good show as well as do the actual work.
Unwilling to say no, I drove back to the park from Seattle.
"I can't believe I came back again," said Adrienne, staring around the nearly deserted parking lot when we met at Long-mire. In contrast to the dour wintry surroundings, Adrienne looked pissed at herself but vibrant, very blond with her hair in two braids down her back, her nose sunburnt.
"I know," I said numbly, "I shouldn't have come back either."
At that moment Mike came tearing out of the administration building, saw us, and yelled, "Get your packs, we're going to Kautz and we're flying out of here!" He jumped into his van and squealed out of the lot, the engine laboring, towards the Kautz helicopter base fifteen minutes down the road.
We stood where we were and watched the rear of the minivan disappear over the crest of the hill. "I didn't even pack my pack," Adrienne said, "I just threw everything in the back of my truck. I didn't figure I'd actually end up doing anything."
"Neither did I. Let's give it a minute before we follow him."
Half an hour later we were flying over the mountain in a small helicopter, following footprints in the snow. Apparently, as Mike told us the story over the intercom, two men in their early twenties had been caught in an avalanche the day before yesterday. They'd been planning on climbing the Ingraham Direct, the easiest of the standard winter routes on the mountain's south side, and had been holed up at Camp Muir waiting for the weather to clear. Unfortunately for them, the weather never did clear and the snow kept piling up. After a few stiff, cold, and bored days in the public shelter, they'd decided to hike around the area for a bit.
The avalanche buried them as they tried climbing a steep chute above a crevasse. A thick slab of snow broke off from the top of the chute and carried the climbers down into the crevasse with it, crushing them against the frozen crevasse walls. One of the men survived the fall and was able to dig himself out. He attempted to rescue the other man, but it was quickly evident that his friend was already dead.
After climbing out of the crevasse, the surviving man hiked back down to Camp Muir and called the park using the emergency radio in the public shelter. The park's communication center told him to keep hiking down to Paradise, where a law enforcement ranger would meet him, and that they'd send someone up to get his dead friend right away. In reality, though, Mike had had to wait until the weather improved before going up for the body. The dead man's friend was waiting in Mike's office, restless and bleary-eyed after days of inaction and sleepless nights full of regretful reliving, which was why Mike had met us outside.
There was a lot of new snow over the two men's tracks, but the wind had made weird mounds where the steps had been, so from the air we could still tell, sometimes, which direction the climbers had been headed. Then we flew over a place where the two sets of tracks ended and one started wandering separately back down the mountain. We landed as close to it as we could. We pulled our packs out after us and knelt down, huddled over our gear, as the rotors blew the light, unconsolidated snow in our ears, up our noses, and down our shirts.
Suddenly I didn't want to be here, high on the mountain with my boss. I wasn't sure I had accumulated enough inner resources to handle this situation calmly, with the poise, efficiency, and panache Mike would expect. I shut my eyes, clicked my heels together three times, and wished to be back in Kansas—or anywhere else—as Mike threw himself on top of me to protect me from the rotorwash.
When the bird was gone it was suddenly quiet. The snow sparkled in the sun and the cold. We were high up on the mountain, the three of us together, alone with our thoughts. I thought about how we would do this job if it were just Adrienne and me. Probably we would banter about how our winter plans were going, hiking at a reasonable pace to the spot. Then we would remove this man from the mountain with as much respect as we could, and would go for beers and talk about it afterwards. We'd talk about how we were an important part of this man's death, and what it felt like to be part of a death. But we were playing by different rules with Mike here: we weren't good people, we were good climbing rangers.
We sank into the new snow to above our knees. Mike asked me to lead the way uphill. I sort of knew this was coming, since the previous year one of the north side rangers had told Mike I was a great pack animal, not really fast, but able to carry heavy loads over long distances. He said this after I had lugged a pack with close to a hundred pounds of climbing hardware from Ipsut Creek up sixty-five hundred vertical feet to the base of the Willis Wall supporting a rescue on Liberty Ridge, then asked if I could keep going. While I was grateful for the compliment, and its implication that I was in fact useful, I was also worried because I knew Mike wanted to see for himself. In all the time I'd worked in the park we had never before climbed or done a rescue together. In fact, I had never done a climb or rescue with Glenn or Stefan, the next level of management down, either. Mike worked mostly in his office in Longmire, and when he climbed it was with the north side rangers or with special guests like the president of REI, or U.S. senators. It's hard work to make the steps, forcing first one foot and then the other through the snow and ice, crashing down until your feet finally find a layer that supports your weight, and working your way uphill.
Normally post-holing isn't so bad; it's slow, hot work, and team members trade off frequently. But on this trip I knew there would be no switching off. I needed to appear to do this effortlessly, because this would be the only time I would ever hike with Mike. This was how he would always remember me. Until this point, the only comments I'd gotten from Mike during reviews were that he'd like my niche to be dealing with our outdated computer system. The fact that my future doing a real climbing ranger's job depended on how fast I could hike uphill to find a guy who was already dead was nauseating in its pointlessness. I didn't understand why it couldn't be some other way, friendly and companionable.
The temperature was probably less than ten degrees, but I started off in just a T-shirt, knowing I would warm up fast. I wasn't acclimated to the altitude, going from sea level to twelve thousand feet in five minutes. My throat immediately burned and my legs went numb, my vision blurred. The terrain was steeper than I'd remembered. But it didn't matter. I was being good today; I wasn't going to stop no matter what happened. In the frozen silence I could hear Mike and Adrienne breathing ten and twenty meters behind me on the rope we were all evenly tied into. I wondered if they could hear me huffing and puffing, and I tried to breathe quieter. I just had to get through this day, to prove I could not only do my job competently, but in the style Mike wanted to see.
The middle of the glacier was broken, and we spent a lot of time climbing on and off, over and around the giant blue ice cubes that blocked our way. Finally, after an hour or two, when we knew we were close, we got to the edge of a crevasse that was probably fifty feet across, with a thin ceiling of blue ice and a false floor of jumbled snow blocks creating a sort of tunnel that sagged below the level of the glacier. We were at one end of the narrow opening and we left our packs there, bringing just a rope, a body bag, and a shovel with us. Then we crawled across the false floor, trying to step on the tops of the biggest snow blobs. Every once in a while we could hear pieces falling out from underneath us, creaking off and then slamming against other nebulous things in the bowels of the crevasse. I wondered if the whole floor would collapse under our weight.
We could see light on the other end of the crevasse, and we followed the footprints across the tunnel. On the far side a green wall rose up, angled against a sixty-degree snowslope. Where the two came together, there was a purple head sticking out of the snow, mouth open, looking at us.
"There he is," I said as brightly as I could while my stomach recoiled.
"Oh yeah," said Mike, "looks like him."
Mike tilted his head sideways, as if it made the purple head look more normal. Adrienne was standing behind me, just looking at the head. I wasn't sure if she'd ever seen a dead person before. I didn't think so.
"What's his name?" I asked Mike. I was trying to figure out whether, minus the purple, and the rent in the side of his head, he looked like someone I knew.
"Uh," said Mike, trying to remember.
I realized I did know him. He'd been a volunteer at a fire station I'd been at for a few weeks the previous year while doing a rotation for paramedic school. I also realized I couldn't let Mike know I knew him—I was worried that if I did, Mike would think I'd lose it. I knew I'd see myself as a callous person for the rest of my life for what I was about to do. In order to gain my boss's respect, I was going to lose my respect for myself.
I was soaked from the sweat I'd generated on the climb up, and now in the shade of the crevasse the sweat was freezing and I was shivering. I hoped Mike wouldn't notice—I didn't want him to think I was shivering because I was scared.
"Hey, Mike, throw me that shovel, and I'll try to get an idea of what position he's in."
I started in behind the dead man, at the back of his neck, working my way down. I tried to be careful to dig close but not too close to him, but the snow was very compact and difficult to dig, so every once in a while I would nick the back of his collar or tink him on the head, and it made me cringe. He wasn't frozen really, just stiff; I could tell by the sound. I didn't slow down. I was supposed to be good at this. This was my job.
Mike and Adrienne were taking turns, digging with their hands in front of his head. He looked like he was sitting up, which meant we were going to have to dig a long ways to extricate him from his cramped tomb between the ice wall and the steep snow slope. The rope was tangled around the body and anchored deep below him in the snow, and his pack was mostly buried.
Once we finally got down to his torso, Mike sent Adrienne back across the crevasse to get a knife so we could cut the rope and pack off him. I looked at the dead man's face again—there was red foam bubbling out of his mouth, though he'd been dead for a few days. I remembered that he was twenty-one, gregarious, had a blond Mohawk. He did a lot with his church, was outspoken about it. We'd eaten Thanksgiving dinner together.
Adrienne came back and then we cut the pack straps and the rope, finally succeeding in getting his top half free of the ice. His legs were next, and I found myself looking at them. He had expensive climbing pants. I wondered what would happen to them, if his parents would sell them or burn them. Dead man's pants. I had an introspective moment thinking about how meaningless things take on so much significance after their owner has died. Mike saw me staring at his pants, and there was a momentary pause where I felt sure he was sniffing out weakness in me. Probably thinking I wasn't able to stay in the moment and get the job done in a timely manner without getting mushy. Here the roof could collapse on us, the floor could give way, there could be another avalanche and we'd be smothered, but I was too concerned with the dead guy to be able to prioritize properly.
"Anybody want some M&Ms?" I asked, jerking up, smiling as I pulled the brown packet out of my pocket. "I'm starving." Mike held out his hand greedily, and I ate a few to show I could eat over a dead body. Adrienne shook her head. It was a good move on my part. Showed I wasn't getting mushy.
We had to take the man's crampons off so they wouldn't rip holes in the body bag. When he was finally totally free his left arm stuck straight out and wouldn't fit in the bag. There was only one solution. I leaned on it until something snapped under my weight and I fell over, face first, on the body. Still lying on the body, I winked at Mike. I could feel the dead man's bony hip under me and felt the slight give to his stomach when I pressed on it to get back up. Mike gave me a "you're so sexy" look. The dead man fit in the bag just fine after that. I apologized silently for trying to make everything work out for the living, namely me, at his expense and hoped the dead man would understand.
We lowered him down, hand over hand, back into the crevasse. He was heavy and the bag slid everywhere but the path we wanted it to follow. We didn't dare carry him because we weren't sure the false floor would hold all our weight. At the same time, we didn't want to just let go and have him slide to the far corner of the crevasse, where there was a huge hole in the floor. If he slid through that, he'd plummet to the bottom of the glacier and disappear. Eventually, we couldn't hold the body bag anymore and we let it go free. It slid down the icy slope, slamming into the wall at the bottom before, thankfully, lodging itself against a big snow blob a dozen feet away from the hole.
Dragging him up the other side of the crevasse was even more difficult. I clung to the body bag by the front handles, heaving him up in spurts of exertion. Adrienne sat at the far edge of the crevasse and hand-over-handed the slack, while Mike walked down the slope with the rope over his shoulder, trying to keep tension. There were a few news helicopters flying above and we had to stage the body at the edge of the crevasse, just under its roof, out of view of their cameras.
We called our helicopter back, telling the pilot to bring a thick cable that attaches under its belly with a hook on the end. When it came we enmeshed the body in a net and hooked it to the helicopter, and it flew off, swinging, back down to Kautz where the friend could identify it for the coroner. The family could come stand in the middle of the green airfield and cry, or pray, or do whatever they wanted for a few minutes, before they'd be escorted away, maybe even before we got back.
We sat there on our packs for a bit. It was really nice just to sit in the sun, not moving, but the silence was getting awkward. Mike was fidgeting and I worried about our inability to carry on an easy conversation. We weren't good enough friends to weather long silences, and I feared the silence was proof of my failure to entertain. Mike looked bored—bored with us. "So," I said, "I used to know that guy." Mike and Adrienne looked at me. "He was a volunteer fire fighter at a station I was at for a while."
"Really?" Adrienne looked shocked.
"Yeah," I reminisced in a jovial tone of voice. "He was a Republican, right-wing, evangelical Christian. Good fucking riddance he's dead, really." Mike laughed so hard he fell off his backpack into the snow, tears coming out his eyes. I was immediately disgusted with myself. But I started counting on my fingers while Mike brushed the snow off his back. If I got rehired it would only be a few more months before I'd get to do this every day. Well, if I was lucky. If this was lucky.
## 3
* * *
## DEAD MEN AND DUST MOTES
WHEN I STARTED WORK AT THE END OF APRIL, it was shaping up to be the most prolific rescue season anyone could remember. Besides the dead man in the crevasse the previous October, the winter had seen several other incidents. Then two Rainier Mountaineering guides were rescued after becoming disoriented in a horrendous storm. Having climbed Ptarmigan Ridge, they'd ended up descending some combination of Gibraltar Ledges and the Nisqually Ice Cliff in early February.
My first week back was strenuous but uneventful. There was the first summit climb of the season, which always seemed to be the hardest. I did a good deal of skiing during which I managed one really impressive fall. I filled out all my forms for housing, keys, uniforms, radios, batteries, radio chargers, radio holders, badges, and nametags. I cleaned the mouse turds off every surface in my assigned room in the ski lodge at Paradise and spent hours on the phone with the maintenance division, trying to come up with a noninvasive way to deal with the stinky and seemingly nocturnal family of woodpeckers that had nested in the wall next to my bed. On their suggestion I opened the window next to the hole and banged on the wall, and the birds and I would both stick our heads out and try to glare each other down. I blinked, they won. I told myself I could get used to the smell.
Then in the first week of May there was another storm on the mountain. I spent my days off lying inert on a friend's couch in Seattle, watching the rain and reruns of _Murder, She Wrote_. Ted, Rich's employee and the park's maintenance division expert on servicing solar dehydrating toilets, was up at Muir. I think Ted might have been in his sixties. From Thursday to Sunday in the summers, he stayed at Muir as the official "turd herder," cleaning the bathrooms and doing small maintenance jobs. He had a fuzzy gray comb-over and crooked yellow teeth. The eight-by-twelve-foot hut was always strewn with his dozen or so pairs of cheap reading glasses, all of them broken in one way or another, and his books, warped from the humidity: _Egyptian Hieroglyphics Uncovered_ , a biography of Napoléon Bonaparte written in French, maps of the Inside Passage, a coffee-table book of green fields in Ireland. I'm not convinced Ted knows French.
Ted told me later that during that storm there was a mound of snow several feet high inside the door of the hut where the spindrift had blown in around the edges and through the cracks and air holes in the walls. Every time he went out, he had to beat on the door until the ice seal broke, and yank on the tiny loose kitchen cabinet handle until the door swung inward. When he looked out, he realized the snow reached to the top of the door frame in a white wall. When he touched it, it spilled inwards over everything. Spindrift blew in his eyes and ran down his shirt and into his boots and his ears. He had to swim his way out. Inside the hut, everything was damp and foggy and freezing. When he used the stove to make dinner—the same meal he made every night, basmati rice with tuna and Popeye canned spinach on top—the heat of the stove melted all the snow in the tiny plywood hut, which later refroze into a layer of sheet ice covering everything.
Ted had been drinking tea when Mike showed up. I always liked to hear the details of Ted's stories, and I pictured him drinking out of his one huge plastic mug with four Lipton tea bags and four sugar packets while lying on the back bunk in his long underwear and his gray socks with the red toes, the same as always. Ted liked his routines. Mike had snowshoed up to the hut so he could snowboard back down from Camp Muir, which is nothing more than a group of tiny huts perched on a rock outcrop at ten thousand feet. Mike had passed a few people on the way up, but he thought they would turn around because of the weather. Ted offered him some tea before he headed back down.
I like storms. I like the energy, the overpowering and irrational force of them. There are so few opportunities to experience that kind of honest upfront and unfocused passion, elemental suffering. I like it when the wind blows the snow so hard that ice shrapnel tears into my face, strong enough to cut the exposed skin on my cheeks. I like it when I can lean against the wind, and the wind slowly blows me backwards, ripping at my clothes until I'm forced to drop to my stomach and crawl. I like skiing when I can't tell if I'm going right or left, up or down, if I'm about to hit something, or even if I'm moving at all. The danger and frenetic nature of the thing inspires madness, and this was that sort of storm.
I had to start work at six the morning after the storm, so I left Seattle at three am. The rain had stopped, the storm was over, and a crescent moon was showing through a hole in the clouds. I was feeling refreshed after my days of lethargy on the couch, and was ready to face whatever was coming to me.
I stopped for groceries, loading up with almost two hundred dollars' worth. Working a seasonal job with no permanent home elsewhere, I started from scratch every year, re-buying the staples, the peanut butter, flour, pancake mix, syrup, condiments, spices, tea, sugar, brownie mix, rice, cans of things, anything that would be too much of a pain to carry around in the trunk of my car during the off-season. I didn't use a shopping list, relying instead on impulse. This time I decided to go for sixteen boxes of macaroni and cheese, among other things. I paid up and wheeled my cart back to my car, the only one left in the lot.
It was still dark. I put the groceries in the car, got in and turned the heat all the way up, though I knew it would make me drowsy. It's hard not to fall asleep on the way to the park, but I hogged the heat, driving with my wrists on the wheel and one hand in front of each vent, loving every last second of being warm. The last forty minutes inside the park boundary were the hardest. The road is tricky. It lets you drive sixty for a while and then throws a series of ten-mile-per-hour U-turns at you right when you get comfortable. It's also perennially foggy. It was especially foggy that morning, because it was warming up after the storm. The last of the snow on the road was melting. With the slush and the fog I drove slowly, peering out, alternating one eye open, then the other, since they refused to both stay open at the same time.
I got to Paradise at quarter to six, at dawn, but I didn't really notice. My head was in its own fog from the early morning and the recent loss of the car interior's coziness. I carried all my groceries in one trip up the three flights of stairs to my room, the plastic bag straps cutting into my wrists. I had fifteen more minutes to sleep. My bed was Park Service-issue, a twin with a lumpy mattress and stripped bolts holding the frame together. I had to get in carefully because it rocked wildly back and forth, threatening to collapse entirely when I weighted it. I didn't even take my shoes off, just set my alarm, brushed off the fresh mouse droppings, and eased in.
I got up fifteen minutes later and changed into my uniform, the green polyester pants that are too tight around my thighs, two T-shirts for warmth, and my Park Service shirt and fleece jacket, both with matching official badges. I made a cup of tea to complement the pumpkin muffin I'd picked up at the grocery store, and then walked over to the Climbing Information Center.
It was a slow day, sunny but with few people despite the improving weather. Between issuing climbing permits and talking with the tourists who were mildly interested in climbing the mountain—but who never would—I had time to clean the bathroom and drag each one of the giant cheap floor mats down the stairs and out the back door so I could beat them with a broom I'd borrowed from the public restroom utility closet. When I was done with the domestic work, I sat on a stool behind the long front desk and watched the dust motes in the afternoon sunshine until three o'clock, when we closed.
I had just locked the door when Mike called to say that there were two people missing on the mountain, and could I find them in the computer registration system. It wasn't an unusual request. Almost every day an issue comes up with a climbing party. Sometimes there is a legitimate problem and the missing people are late because of bad weather but are managing to work their way down slowly, or else they're lost, exhausted, or dead. But most of the time they've just decided to stay an extra day or two without telling anybody, and all that's required is a quick radio call to the ranger at Muir to go root them out of their tents and send them on their way. Sometimes it's a little more complicated, like when a worried wife calls to confirm the dates of her husband's trip, but we have no record of him.
As a safety net, the day after a party is supposed to be off the mountain, their party name pops up automatically on the computer screen, along with their license plate number. I take this information down and then walk through the parking lots, row by row. If the car isn't there it's assumed the party has just forgotten to check out, and I erase them from the computer system. If the car is still there, I call Camp Muir and see if someone up there can check for them. This time, though, there was no one at Muir to call: Ted had come down on Monday. It was now Wednesday, and no one had replaced him yet. I pulled up the party name Mike gave me, got the license number, and walked through the lot until I found their car. I called Mike back. He asked me to pack and tell Andy to do the same, and come down to Longmire. I said OK, I'd be right there, and then I walked back towards the Paradise dorm. I stopped at the basement window of the Jackson Visitor center, pulled it open a little, and yelled in at Andy, who lived in the rat-infested old law enforcement office. Andy came to the window and said he was making lunch and was going to finish up, do the dishes real quick, and then he'd be ready. We weren't worried about speed because we were rarely asked to actually do anything—the north side rangers did most of the glory jobs.
Twenty minutes later, I drove over to pick Andy up. It's a half-hour ride downhill from Paradise to Longmire. When we got there we waited another couple hours for the north side rangers to drive around from White River. Then I waited outside the administration building, sitting on the edge of its wide concrete porch, for the meeting to finish up, since I'd been asked to leave after making the coffee.
When it was over, Glenn came and told me that Andy and I were going to get on a small helicopter at the Kautz helibase and fly over the mountain to see if we could see anybody. I was a little surprised, but it turned out that David, one of the lead north side rangers, had forgotten his boots and there wasn't enough daylight left for him to drive a couple more hours back to get them. So Mike had gotten pissed and given me his spot. The north side rangers like to work together, because they know they can trust each other, so if I was going, that meant the other spot had just opened up, too.
Flying over the snowfield, on the back bench seat in the helicopter, Andy was looking out one window and I was looking out the other. We had just flown over the hills below the mountain, my favorite part of these helicopter rides—the hills with their meadows on top looked so green and exotic and remote. I loved it when the pilots dipped down into the deep gorges at the bases of melting glaciers. Then, as we got higher, we could see Paradise and Panorama Point with its paved hiking trails and the famous flowers that wouldn't bloom for several more months. And then the snowfield, which was really white because of all the fresh snow from the storm. The mountain was all black rock and snow. I thought it looked more complicated than beautiful.
We flew up the Nisqually Glacier side, climber's left, slowly and low, almost in the shadow of the mountain in the late afternoon sunshine. Looking down, everything we saw was white. I wondered why we were looking for these guys with a helicopter instead of my going on foot, like usual, to check Muir. Why the fuss this time? I wasn't in the meeting, so I didn't know anything about the men we were looking for, but I did know that after a big storm, everybody's late getting out.
I'd heard from Ted that before he'd come down from Muir he'd talked to a big group of skiers at the Muir shelter, and they'd turned away two guys at the height of the storm—they'd looked cold and weren't wearing enough clothing, the skiers said, but there wasn't enough room for them. According to the skiers, the two men had a tent, which had probably eased the group's consciences. The two men had told the skiers that their plan was to head down. Nobody told them there was a ranger station fifty feet away, where Ted would have invited them to stay and fed them spinach and rice, and good conversation. I'm sure, though, that in the blowing snow they wouldn't have been able to see the tiny hut.
When our helicopter flew over Camp Muir there were ski tracks, and a few people were walking back and forth below us. Probably clients of the guide service, or part of the ski group Ted had seen. The top and exposed side of the ranger hut glistened, encased in ice a few inches thick, beautiful and classically arctic.
We turned and started back down the snowfield, weaving slightly, looking down over the edge of the snowfield onto the Paradise Glacier, climber's right. We were about twenty-five feet over the snow, and as we dropped down past Anvil Rock I spotted a red backpack. As the pilot swung around I could see the men, about fifty feet apart, both of them lying face up in the snow. We flew in tight circles above them. The wind from the blades blew snow over them and they didn't move—then we knew they were dead. It was fully dusk, the sky was pink and the sun was gone. We had to go back to Kautz because the pilot didn't have instruments to fly in the dark, and we didn't have the equipment to pick the men up.
When we got back to the helibase, David came up to us as we were taking our packs out of the helicopter. "Are you sure they're dead?"
"Yeah, David," Andy replied, looking down. "They're dead."
"Because," David continued, "if they're not dead and you just said they were, then you just killed them, because they won't survive tonight. Think of their families, think of your consciences. How do you know they're dead?"
"Well," said Andy, "we flew right over them and they didn't move, they were being covered in snow and they'd obviously been there a while. If a helicopter flew right over me and I wasn't dead yet, I'd do my best to try to signal it somehow."
"And they didn't," I added. "They didn't move at all. They looked really dead."
David, his wild hair and bushy black beard framing his squinty eyes, didn't reply. He pivoted on one heel and went to talk to Mike. A moment later Mike came over. "Are you sure they're dead?"
"Mike," I asked, "do you want us to hike up there tonight and check for pulses? It was getting dark and we couldn't land there. We were as sure as we could be without being able to land." I paused and looked at Andy, then back to Mike, and said again, "They're dead," firmly and with conviction.
"OK," said Mike. "I'll tell the families."
When I got back to the Paradise dorm I washed out the macaroni and cheese pot I'd used for lunch and refilled it with water for more macaroni and cheese. I vowed I'd cook it until it was actually done, no crunchy noodles this time. I waited an extra five minutes and then added the powdered cheese pouch and sat down at the big communal kitchen table all alone to eat. I noticed we'd caught a mouse in one of our traps. It was dead and there was blood on the floor. I touched it with my shoe; it was stuck to the linoleum. It could have been dead there for days before I'd noticed it. I got up and moved to a chair on the other side of the table where I couldn't see it while I ate.
In the morning, back at Longmire, I sat next to David at the conference room table before the meeting started.
"Y'know, David," I said, "I know you're more experienced at doing rescues and that you can totally do this recovery better than I can, but they're already dead and I could use the experience, and since I started this, I'd like to finish it." I don't know why I wanted to go back there. Maybe I wanted to seem competent enough to do the job, maybe I wanted to make some kind of gesture, something no one else would do, but something that would be meaningful.
He didn't even look at me. "I need the money."
"OK," I said, "I understand that, you know a pint of Ben and Jerry's has gone up to almost four dollars." David looked at me in disgust. He was a vegan. I'd forgotten.
I sat outside during the meeting. Afterwards, Stefan told me to get my pack and go down to Kautz. He and Andy and I were going to do the recovery. I guess David had changed his mind.
Crouched with Andy and Stefan in the snow after the helicopter dropped us off, I found it too bright to look at anything without sunglasses: another sunny day, quiet and calm, with at least a foot of new snow covering everything. While I pulled off my flight suit, which involved sitting down and taking my boots off, Stefan pulled out a sandwich. I wished I'd brought one, and I looked over at Andy and saw him eyeing it, too. We sat in silence a minute, both of us watching Stefan chew. Looked like ham and cheese with lettuce and an actual tomato on it. There was nothing to say. After he finished, we left our packs and hiked down towards the place where we had seen the men. We sank into the warm soft snow to above our knees, which made for slow going, but the slope was fairly steep and it wasn't far before we could see them lying down below us.
We got to the first man and I could see that he was old. His hands were covered with age spots and he had a few days' growth of white beard on his cheeks. He was sitting propped up against his backpack with his hands on his knees, staring out across the glacier, although I figure he was probably staring into the darkness when he died. He had light blue eyes and the lids were half-closed like he was thinking about something that was not in the darkness or on the glacier. His headlamp was still on and I reached up and turned it off without thinking—it was wasting the battery. He had on cotton khaki pants that were wet and frozen to his legs, and a wool plaid shirt. A tent was half sticking out of his pack, and all the poles were bent and broken. Half of a blue vintage Eddie Bauer down sleeping bag was pulled out of the pack lid and wrapped down over one shoulder like he had tried to use it for warmth, but hadn't tried very hard. He looked like he had sat down to wait.
Stefan took pictures for the coroner and then drew a sketch showing where the man was in relation to the rock and the slope. Then he pulled the man's oatmeal-colored wool hat down over his eyes so the dead man wasn't looking at him while he went through his pockets. None of us said anything.
We moved on to the red pack that we'd seen abandoned in the snow. It was an older pack that also looked like it had hurriedly been put back together. Near it we found what looked like a tent platform that had been dug flat in the snow and then abandoned. Stefan took pictures and drew another sketch, and then we walked down further, with each step sinking deeper as the sun rose higher and the snow became softer and wetter.
The second man was very young. Later I found out he'd just gotten engaged. He was skinny and his face was smooth and expressionless. He was lying flat on his back with his hands in the pockets of a thin blue nylon windbreaker. He was wearing shorts and his knees were black with frostbite, even though there were more clothes in his pack. He didn't have a hat, so we used a glove to cover his face. We took more pictures, and did a few more sketches.
I thought they looked like they'd been dead a while. I didn't say anything.
Andy and I started chopping out small platforms under the bodies so that we could get them into body bags and nets to fly out. We wanted to make sure we had a flat spot under each man so that once we started moving him, he wouldn't slide away from us and accelerate down the glacier. The snow was so soft that it didn't take us long to create spaces that were about six feet wide and ten feet long. We were down to our shirtsleeves by the time we'd finished. Then Andy held one of the men under the armpits and I grabbed the ankles. We dislodged him from the slope and slid him down on top of the body bag and open net as best we could, trying to keep as much snow out of the bag as possible. We did the same with the other man. We intended to hook the nets one at a time to a cable that hangs fifty or a hundred feet under the helicopter. That way we could get them off the mountain quickly and without having to slide them down the main climbers' path in front of whoever might be headed up to Muir that afternoon.
I remembered Glenn telling me about finding two boys who had fallen to their deaths on an icy day a little ways above here, just at the base of the Cleaver. He didn't have any nets or body bags with him, so he had to fly them out with the cable just hooked to their climbing harnesses. He figured that if he flew the dead boys out one at a time, each hanging from his waist and splayed out with his head and arms and legs dangling down, it would look really bad, especially since, because of where the accident had happened, the helicopter would fly right over Camp Muir and all the climbers there. He decided to hook both of the bodies in at the same time and then he duct-taped them together so they'd stay upright, so it looked like they were holding onto each other.
When we finished getting our two men into the bags and the nets, we called the helicopter and then we had a minute to sit down. It was totally silent. Just the intense sunlight and the snow sinking imperceptibly down as it melted. We looked down at Paradise. Andy spoke up.
"I think they decided to set up their tent, but for some reason after they'd set it up they decided to leave again, so they stuffed it all back into their packs. Maybe as they were hiking down they lost each other in the storm, the older guy thought the younger guy was behind him and he sat down to wait. The younger one realized he too was alone and maybe he left his pack in an attempt to go faster but then didn't go more than ten yards past it."
"Hmm," said Stefan. It didn't make much sense to me, them being out here like this, but strange things happen all the time. There was probably something that should be said, I thought, regarding strange things like two men who had died lost alone in the dark and were then found in the brilliant sunshine. But we didn't say anything more. Maybe it was enough to be here and see it.
The helicopter came and we hooked the nets up to it, and it flew off with one man at a time attached to the end of a giant steel hook, a very loud and industrial end considering the men died in a beautiful natural place. After they were both gone we post-holed back up the hill, and I felt the sweat running down my back. Andy and Stefan had been waiting for me for a few minutes by the time I made it back to the packs, trying not to look like I was breathing hard. Then we flew back to Kautz.
Someone had placed the body bags in the shade. We had to walk right by the family, who'd come to identify the bodies, but we didn't talk to them. Didn't look them in the eye. They didn't know who we were and we didn't tell them. Andy and I drove back up to Paradise, and I went back to the dorm. I washed out my cooking pot and put it back on the stove for macaroni and cheese. I ate alone.
Before I was done, Stefan called and asked me to staff the Climbing Information Center for the rest of the day, so I walked over and sat watching filtered sunshine through the windows. It was a slow afternoon.
## 4
* * *
## THE COLD HEARTED
BY EARLY JUNE, my routine was falling into place—the trainings, refreshers, and check-ins finally over—and I was looking forward to just doing my job on my next eight-day shift, Wednesday to Wednesday. The first six days were busy but passed without major incident. I post-holed thirty thousand feet uphill, most of it with my ever-present backpack weighed down with vegetables and jars of Nutella as I made trip after trip up to restock Muir. This made the week's three summit climbs from Muir with a light load feel relaxing. Then there were the afternoon hours spent talking to climbers at Muir before their summit bids, some light construction, and the inevitable midnight medical calls. (Climbing rangers are on call twenty-four hours a day, for whatever emergency comes up, though our paid hours are 6:00 AM to 4:30 PM.)
By day seven I found myself staring off into space, unable to focus my eyes or my mind on the simplest of tasks, thoroughly exhausted and ready for my days off. Sometimes being exhausted is a wonderful feeling. There is nothing like being mentally, physically, and emotionally picked clean, not to mention unreasonably dehydrated, battered, and sunburnt—and then being allowed to sleep. That is a kind of sleep like no other, and I was ready for it. But the week wasn't over yet, and I was wary of what might come next.
From my first two seasons on the mountain, I'd learned that climbing rangers don't have breaking points—they don't get tired, or stressed out. They laugh at adversity and always enjoy a fresh challenge. The year before, Charlie and his old lead Paul had climbed Liberty Ridge in a day, carrying over to Camp Muir. As soon as they'd arrived at Muir, Paul had found a large, ill climber and had ended up carrying him piggyback almost all the way down to Paradise. Then at the parking lot Paul had found out the man's partner was missing, and immediately hiked back out to look for him.
I wanted to demonstrate these same abilities, but for some reason I got tired. I wasn't as fast a hiker as the guys, and on a rescue—the most important thing we did—nothing was valued more than foot speed. I was really good at first aid, rigging, general maintenance, assessing the climbing routes, and teaching people about their equipment, park history, and how to keep the mountain clean, but it still took me three hours to get from Muir to the summit, and that meant I wasn't as valuable to the team. The resulting shame weighed me down further.
Anyway, on day seven as I was fantasizing about sleeping a whole night through, Stefan was going stir crazy after having spent all week in his office, and decided that the team—Stefan, Charlie, and I—needed some exercise. So we hiked right back up to Muir in the blazing midday heat in full uniform, since Stefan liked uniforms.
I started out already dehydrated from the week's earlier exploits, and I drank all my water before I was halfway to Muir. My dark green cotton-and-polyester-blend pants were stiff and clinging to my legs, making every step harder. I had to stop at around eight thousand feet. The mountain slowly revolved around me and I gagged, but there was nothing in my stomach to throw up. While I was bent over, a stinging mixture of sweat and sunscreen trickled into my eyes, blinding me, but I had to keep moving. Stefan and Charlie had gone ahead and were waiting for me at Muir, and they knew what time I'd left the parking lot. I took a few deep breaths and started up again, blinking and weaving. I felt useless, but kept repeating to myself, "You're OK, you're an embarrassment, everything is OK, this is easy, just for god's sake go faster!" Climbing rangers never puke.
Knowing I'd be visible to Stefan and his binoculars five hundred vertical feet before I actually got to Muir, I tried to keep up appearances. That meant no stopping, no faltering, and no heavy breathing. I managed to arrive at Muir with a smile pasted on my face, but I was thankful when the suggested continuation to Ingraham Flats never materialized. Instead we hung around for about an hour, and then I let gravity pull me back down to Paradise.
More than ever I was ready for a full night's sleep, cradling my Camelbak so I wouldn't have to get up when I woke desperate for water every few hours... but I didn't get it. Andy called on the dorm's phone at two in the morning, and I jumped out of bed fully clothed. (Something came up almost every night, and I learned I would rather face whatever it was with my pants on. On holidays, weekends, and when I was too tired to take them off, I also wore my shoes.)
Some of the employees at the Inn had been partying outside, complete with drunken dancing, and a girl had fallen off a picnic table and broken her arm. Andy had been rousted by the partyers to drive the girl to the hospital—another part of our job description, midnight hospital runs, with or without an ambulance. Andy and Tom were handling the situation, but they asked me to fill Mike in and to call a private ambulance company to meet him in Ashford, just outside the park, so Andy didn't have to drive all the way to Tacoma.
The phone directory was in the ranger station a quarter mile up the road, so I stumbled down the three flights of stairs and out into the parking lot. As I walked slowly through the dark I could see the lights of the party and hear the music down at the picnic area. I wondered why anyone would be jumping around on picnic tables if they had the option of sleeping instead.
The next morning, I was still bleary-eyed at our yearly wildland fire refresher and physical agility test. Most park rangers moonlight as wildland fire fighters for a little extra money, and I was no exception. The refresher was a long PowerPoint presentation without pictures, and the test was a timed three-mile hike along a flattish dirt road with a forty-five-pound pack. Coming in under forty-five minutes was supposed to indicate that we could outrun a fire uphill, jumping the crest to safety before we got burned over and cooked.
The weather had crapped out and it was drizzling lightly. Everybody was wet and the road was getting muddy. I didn't have enough soup cans to get my pack to the required weight, so I'd had to add some rocks and now the inside of my pack was muddy, too. As I walked, the straps of the ridiculously loaded pack pushed down painfully on my already bruised hips, and I worried about my bad knee. It was sort of funny in a miserable kind of way, a bunch of people in matching uniforms carting cans of soup up and down an old dirt road in the rain, but the money from fire fighting made any training worth it. My mind drifted to more pleasant circumstances as I was swept up into the group of power-walkers.
My days off started officially at four-thirty that afternoon, as soon as this fire test was over. I pictured myself in my car, speeding down the winding road between Paradise and Longmire, my laundry in the back seat, heading towards a sub sandwich and then a pint of Ben & Jerry's ice cream—and then the South Hill Park and Ride, where I'd lock my car doors and sleep for a long time without anyone knowing where I was.
Stefan jogged up behind me and interrupted my reverie. "Hey, the glaciologists need somebody to help ferry their gear from Camp Muir to Camp Schurman, can you do it?"
"Sure" I said mindlessly, ready to please. "Is the trip next week?"
"Oh," he said, "they're up at Camp Muir right now. They're planning on climbing tonight, but you'll be up there in time to carry their stuff to the summit if you start up right after we're done here."
He slapped the back of my backpack and turned around, jogging the wrong way down the course so he could encourage some of the lady law enforcement rangers to make it across the finish line before the forty-five-minute deadline.
Up until that moment I had been able to taste the Ben & Jerry's—it was so close, a miracle of chocolate and marshmallow crème and little smiling, tasty fish. But in a second it was gone, and there was just me in the rain on a muddy road in the woods with a bunch of wet people wearing green, and none of us were going to get any ice cream. It was disappointing, but I had to keep walking with the rocks in my pack for another two miles—and then I would have to keep walking all night.
Being tapped as a mule for the glaciologists was actually a favor, because it gave me a chance to show I was useful in some way. At my last season's-end review, Glenn told me that I wasn't a team player, that I didn't have anything special to contribute. I needed a niche, a skill, a special project. I was devastated, thinking I was unlikable and incompetent. But in a lot of ways what he said made sense. I _wasn't_ a team player. That year my assigned government housing was in a different zip code from the rest of the climbing rangers, who lived, ate, and commuted together. Then when I wasn't alone in Paradise, I was alone at Muir. I'd spent forty days scheduled alone at Muir that summer because, as the others got to know each other, they wanted to climb together, and it became easier for scheduling if I did the opposite assignments so we had good coverage. Even more than the rest of them, I'd never climbed with, talked to, or even seen Glenn except on review day, so it made sense that he didn't think I did anything.
This year, though, we all lived together in Paradise and things had gotten friendlier. But there were also fewer of us, to the point where we were almost always on our own, working in the ranger station, staffing Muir, or soloing the mountain. This all meant that when a weird job like helping the glaciologists was offered to me, I had damn well better do it to prove I was worth keeping around.
"I've already hiked over thirty thousand feet this week," I told Charlie back at the dorm, to explain why I was propped up at the kitchen table instead of hiking up the mountain.
"Don't forget to fill out your accountability sheet for what you did this week, on your way up," he said, looking up from scrambling a whole carton of eggs. "You can leave it in my box."
I tried to pull myself together but my body wouldn't respond; it just sat slumped over on a chair. I put my head down on the dining table for a second, and it felt so good to be supported.
I woke up and Charlie was sitting across from me, eating a huge pile of breakfast burritos. One whole side of my face was wet from drooling on the table. My eyes wouldn't focus on Charlie so I looked past him and out the window. It was getting dark. The clouds were thicker and the rain was pelting up off the asphalt in the parking lot. I couldn't see the Tatoosh Range across the valley because it was full of clouds. I could barely see the trees on the sides of the parking lot. Groaning, I wiped the drool off my cheek with the back of my hand.
"You'd better go soon," Charlie said, offering me a burrito.
My clothes were still wet, but it didn't matter. Everything was going to get wet in a minute anyway. I barely brought anything with me, just a map, compass, headlamp, jacket, gloves, and what I was wearing—less than five pounds in all. I figured it was all I was going to be able to carry, and still get there. I rolled my things in a garbage bag to keep them drier and stuffed them into my pack. Everything else I needed for climbing I could get from the gear stash at Muir and from scavenging through Tom and Charlie's plastic storage bins. I knew they had eaten most of my chocolate bars already this season, so I figured I'd return the favor.
Food and equipment were always accumulating at Muir. For example, over my most recent days off, two men had tried climbing Gibraltar Ledges, but at the top one of them had tripped and slid down the forty-five-degree slope, eventually tumbling over a small rock outcropping, and then nine hundred feet further down Gibraltar Chute. Andy and Matt had hiked up from Muir and performed CPR for a while, but eventually they'd been forced to leave the body and get the partner out because of bad rock fall. Earlier that morning, Ted had lent the dead man a water bottle. After the partner had retrieved the dead man's belongings, he'd hiked the bottle back up to Ted and thanked him for letting them borrow it. Ted never threw anything away, but he'd said later that he thought it was too weird to drink out of a dead man's bottle, so I knew it was still sitting in the corner of the hut.
The rain ran furrows in my hair and spilled like a waterfall into my eyes. The beginnings of these sorts of trips were always the worst for me, while I still remembered what being warm felt like. It was dusk on a Wednesday in the pouring rain and no one was around. The meadows were a blackish green and bone-achingly cold—the bowed, absorbent valerian glowed white in the flat light. The lupine, green hellebore, and few pussytoes bunched in tight to the trail were beautiful, but even they looked vaguely foreboding in the gloom. There weren't any animals out. The giant silverish marmot that lives just below Pan Point didn't even bother to come out and see me. I felt abandoned, imagining him in his burrow under the big rock, cozy and sleeping.
I stopped for a minute at Pebble Creek to get my headlamp out, and was instantly chilled. Every hair on my arms was individually coated in its own layer of hard frost. I didn't put my jacket on. It would have gotten wet, and I needed it to be dry on the upper mountain. The rain had changed to freezing fog and I could tell I was into the clouds now. The fog from my breath completely obscured the way, blocking the light of my headlamp, but the clouds were so thick around me I couldn't tell which way to go anyway. I didn't want to stop and get my compass out—I was worried I would never warm back up again. Instead I went with my gut feeling on the direction and kept walking slowly, sustainably. My legs moved like a metronome.
It was late when I made it above Anvil Rock, a little over halfway to Camp Muir. I could see the outline of the rock in the dark and I realized I was above the clouds. I looked up to see a million stars and no moon. I looked back behind me and saw the gray roll of the fog curling around and caressing the snow in a line that ran across the slope, around the mountain, and then flat in a dark mass covering the earth, running to the horizon line. I was happy to be out of the clouds, but the starry night also chased away my slim hope that bad weather would keep the glaciologists from climbing over to Camp Schurman. The show would have to go on.
When I got to Camp Muir all the lights were out. I looked at my watch. It was ten-thirty. The ranger hut, where the glaciologists were staying, was dark. I knocked lightly, and a man groggily called, "Come in."
"Hi," I whispered, opening the door. "I'm Bree. I'm here to help carry your things over to Camp Schurman."
"Oh, OK. We left you some dinner." There was a pot on the countertop. I pointed my headlamp and saw about half a cup of phad thai frozen to the bottom of the pot. I ate it in two mouthfuls.
"I'm Jen and this is Rob," said a voice from within the other sleeping bag. Rob was sleeping on the back bunk, and Jen was sleeping on the bunk that folded out from the wall, leaving just enough space for one person to stand sideways between it and the countertop.
"I'm just going to make up my bed," I whispered, scrunching down under Jen's foldout bunk. There was another foldout bunk nobody used because all the rescue gear was hanging off it, and when it augured down the gear jammed against the floor, and you ended up sleeping at a steep angle. I didn't care. I just wanted to get warm.
"We're planning on getting up at twelve-thirty," said Jen. "Are you going to be able to go that soon?"
I laughed in a dry whisper, "I'd better be. I guess if I can't make it then your boss certainly doesn't have to pay me."
"The point is we need to get there," said Rob.
"OK," I said, "I can get there."
Impossibly, I couldn't sleep. It was a lot colder down near the floor than on the upper bunks, and I was freezing. I knew I wasn't going to dry out very much in two hours. I kept shivering and I was hungry, but I didn't want to bang around cooking because my job was to be helpful customer service, not the annoying person who keeps waking everyone up in the middle of the night.
I was still awake when their alarm went off. It went off a long time, and nobody stuck a hand out to turn it off. I wondered if they had earplugs in. It didn't matter to me why they weren't getting up, their deep slumber was my salvation. The alarm finally shut itself off after fifteen minutes of beeping. I could hear the guide service down below, people yelling to one another in the dark, banging the outhouse door, getting ready to start their daily climb to the summit. I prayed the glaciologists wouldn't hear it.
"Shit!" I opened my eyes inside my communal sleeping bag and hit the light on my watch. It was four-thirty. "Wake up everybody, the alarm didn't go off!" Rob was grumbling. He was tinkering with it when I stuck my head out of the bag. "Everything looks like it's set right, but I didn't hear it at all. Did you hear anything?" he asked me, looking down.
"No," I said, getting up dizzily, "not a thing. It looks like it's going to be a beautiful day, though," I added, keeping it positive. I smiled in the pre-dawn light. I opened the door, and the sky to the east was a brilliant lightning pink.
Rob said, "The equipment we need you to carry is sitting on the storage box out front." I went out to look at it. There was a heavy metal pipe with about a five-inch diameter, maybe six feet long; a piece of thick PVC pipe, cut in half lengthwise that fit around the metal pipe; a metal crosspiece handle; and a bunch of one-inch aluminum pipes, also six feet long, secured together with a bungee cord. I strapped them on my pack like skis, with the big pipes on one side and the small ones all bunched together on the other side, and I used the bungee cord to tie both together about four feet above my head. There was also a Ziploc bag of miscellaneous tools, wrenches, screwdrivers, bolts, and screws, which I threw on top of my pack. It was awkward and top-heavy, and I had to sit down on the box and shrug into the shoulder straps while Rob held the contraption upright in order for me to put it on. When we started out I realized that the pipes banged into the back of my head, hard, with every step. It was a welcome distraction from my body's other aches and pains.
I'd filled my pockets with energy gel I'd been storing for an emergency. It was too expensive to use all the time, but now I was desperate for more energy. Trying one and then another, I decided I liked the mixed berry—the chocolate tasted like dirt. I sucked them all down.
We stopped on the Ingraham Direct, about level with the top of the Cleaver, to take a snow core sample. It was going to be a warm day, the sky was azure blue above the clouds, and even the clouds below us had holes in them, revealing the green meadows and forest below.
Rob and Jen's project was to find out how fast the glaciers were melting. They were measuring snow density to determine the snow's water content, and they were also trying to find how deep last year's summer snow layer was. When we got to the first sampling location, on the side of the Cleaver, I noticed that the big metal pipe I had been carrying had teeth on it, and that the crosspiece handle attached to the top. Rob drove it down into the snow like a giant drill bit. When the big pipe had been drilled in until its top was level with the snow, one of the smaller aluminum pipes was attached to it with a couple of bolts and driven in, and so on. The goal was to get the big pipe that would gather the core sample all the way down to the bottom of the glacier. Then we'd pull the whole contraption up.
Once we finished pulling up a sample, the snow we'd gathered from the bottom of the glacier would be slid gently into the half PVC pipe. Then Jen and Rob would look for layers, measure them, and determine their density, noting the depth and location of the core sample. My job was to write all the information down on a little pad of graph paper. On this first sample the pipes got stuck, frozen probably, deep in the hole, and I was glad for the reprieve while Rob swore and banged on the sides of the exposed pipe to try to free it.
As we continued up to the summit, my legs couldn't keep up the pace. They shook incessantly and would not acknowledge my directing them to go faster. I cursed at them. "Hey Rob," I said, "I need to slow down here for a bit. I'm so sorry. It's been kind of a long week." He didn't offer to take the equipment from me, so I figured we must have been going fast enough.
We headed into the summit crater to take another core sample. I had my down jacket on, but when the drill got stuck again and we had to dig it out, I had to take the jacket off because I got so warm shoveling. It felt good to be warm, but I was out of food now, and I was anxious to start heading down. We only had two shovels, and so we took turns, two of us digging and one taking a break and wearing the down jacket.
We got the drill out after an hour or so, but then Rob suggested that we dig down to the dirt so we could get a really good look at the different layers. The spot they picked was close enough to the edge of the crater that it seemed like a plausible thing to do, so we kept digging. Although there was softer snow on top, as we got deeper it got icy, and the layers were compacted together so tightly we had to alternate digging with a shovel and an ice ax. As we got deeper, the hole got smaller and eventually one person had to be down in the hole, handing up blocks of snow to another person kneeling on the edge and leaning over to get them. At some point during my break I fell asleep and Jen and Rob threw shovelfuls of snow on me, but I didn't wake up so they decided to let me stay there. I felt bad afterwards and told them I was on the clock and they should have woken me up. It turned out the snow was too deep for us to hit the ground, and we had to abandon the dig. I can't say I wasn't relieved.
Just as we finished shoveling snow back in the hole so nobody accidentally fell in it, a helicopter suddenly nosed up in front of us from the other side of the crater rim, and flew low over us. It was the rest of the climbing rangers, getting certified to use the jungle penetrator on rescues. The jungle penetrator is a little anchor-shaped thing on a winch attached to the belly of a Chinook helicopter that can lower you down and suck you back up while it hovers above you. This setup, a big helicopter with a winch, is good for inserting people into tight places or for high-altitude rescues where a smaller helicopter wouldn't have the power to land and then take off again.
It had been completely quiet while we were digging the hole in the summit crater, but now the wind from the rotors blew our stuff everywhere, while the rangers waved at us. We were glad when, after their initial buzz-by, they flew off to do their practice elsewhere. It had probably taken them less than ten minutes from the helibase below Longmire to get to the summit. It dawned on me that they'd known about this certification opportunity and, since someone had to go with the glaciologists, that's probably why I'd been asked.
We packed up and headed down to Camp Schurman, stopping for a couple more core samples. It was a beautiful day, but the carry-over and the sampling and the pit digging had taken forever, and it was almost five in the afternoon when we finally got to camp. Glenn had been dropped off at Schurman by the Chinook earlier in the day and he'd brought lots of homegrown vegetables from his garden. He offered to make dinner for us.
The social situation between me and Glenn was delicate, because we hadn't talked since that bad evaluation the previous year. I felt I needed to make good conversation to smooth things over, to let him know I wasn't upset and that I was trying to remedy my shortcomings by doing things like this very trip. But I was too tired to make any sort of decent conversation. I had a bad headache from spending so long in the sun without water and with the pipe banging a dent into the back of my skull. I decided what I really needed was a nap, even though I knew if I wasn't good I needed to at least be entertaining. I should be funny, I told myself, and then we could all have a good talk, Rob, Jen, Glenn, and me. I could pull everything together. Everything would be fine again. I could help make dinner, get things ready for the next day's march down to White River. Except at that moment I couldn't do any of those things. I had to go to bed because I felt like I was dying.
The Schurman hut is a lot nicer than the Muir hut. It's about five times the size. When you come in the door there are benches running along both sides big enough to sleep on, and a bunk hanging from the ceiling on the left. Further back there are cabinets for food, a stereo system, a heater, a ladder, and then further back still there's a big kitchen with tons of storage room. Up the ladder is a windowed loft where extra people can sleep and where all the rescue gear, personal gear, and other, stranger things are stored. I crawled up the ladder and pulled out a communal sleeping bag. It had "ASS BAG" written in Sharpie pen on the top in big letters. I got into it fully clothed.
Glenn handed up my dinner on a plate. They have real plates at Schurman; they don't just eat out of the communal pot like we did at Muir. He'd made pasta and put some steamed broccoli rabe from his garden on the top. It was so sweet of him, and I was so tired I started to tear up. It was a tasty dinner, but the long, long stems on the broccoli rabe left me thinking I was chewing sticks. I would start in on one, but no matter how long I chewed it, it never seemed to get any smaller or mushier, and in the end I carefully ate around them and left them on the plate next to my bed. "Thanks Glenn, that was really tasty," I yelled down the ladder, wishing there was a way I could get more of the pasta without more of the odd vegetable.
A minute or two later Jen came up and I got up to help her make her bed next to mine. She pulled out a sleeping bag that had "HO BAG" written on it, and she looked at it and sighed. I started giggling, wondering exactly what went on in these bags and knowing that I really didn't want to know. She had to cover her mouth with her hand and I had tears running down my face. I had to sit down again, partly because I was laughing so hard and partly because the room was spinning. I offered to trade bags with her, but she said she thought the "ASS" bag might be worse. Down the ladder, Glenn and Rob were talking about the joys of gardening, Glenn's newfound passion, until they finally fell asleep.
We set the alarm for five-thirty, with the plan to leave by six so we could get down before the snow got too soft. In the morning I waited until I thought I heard Glenn step outside. "Is Glenn down there?" I yelled down the ladder. "No, he just went to the bathroom," said Rob from the kitchen. I grabbed my dinner plate with the broccoli rabe sticks and beelined it down the ladder and over to the garbage can, flipped open the lid with the neat foot pedal and shoveled the little sticks into the trash with my back to the front door. I looked up and Rob was staring at me with an amused expression on his face. I shrugged and smiled but he looked past me, and when I turned around there was Glenn in the doorway looking sad. "You didn't like the organic vegetables from my garden?" And there was nothing I could really say after that. As we were leaving, Glenn handed me the trash to take down. It looked like a week's worth or more.
The garbage wouldn't fit under the lid of my pack with the pipes on it, so I duct-taped the huge garbage bag to the gear loops on the back of my pack. I had to lean way forward to balance the pipes already, and the black bag accentuated the hunchback effect. The first five steps or so out of camp on the rocks were OK, but an ice crust had formed over the soft snow. It was thick enough that Rob and Jen could walk on the firm surface easily, but with the glaciologists' equipment and the garbage and the fact that I wasn't real tiny to begin with, the crust wouldn't hold me. With every step it would hold for a second when I first put my foot on it, then I would crash down through knee-high soft, unconsolidated snow and the pipes would come down after me and bean me on the head. It was slow and exhausting and I couldn't keep up. I was glad that we stopped after only a few hundred yards to take another snow sample so I could take a break.
All the time while Rob was dealing with the drill, I was trying to decide what to do. Things could not go on this way. The weight was too much. My head hurt terribly already. My legs were too weak. There comes a point when a person has to sacrifice her dignity, her professional credibility, her gear, and everything else to simply get home. Forget having a job next season; this was about personal survival.
I pulled one of my prusiks off the rope that tied us all together. We used the rope, with one of us tied into each end and one in the middle, in case any of us fell into a crevasse. The other two of us would then catch that person before they fell too far in, and then we'd pull them out. The prusiks, which were tied loops of five-millimeter cord, were dual-purpose: they could be used either to make a pulley system so we had more leverage to pull someone out, or the fallen person could use them to maneuver themselves out of the crevasse. I had a spare prusik, so I took off my pack and looped the prusik around the pack's hip strap, clipping the other end to the back of my climbing harness. My pack was now trailing behind me. I figured the pipes would act like a rudder, and with the garbage on top, the whole thing wasn't likely to flip over. I knew it would slide on top of the icy crust, but I'd never seen anyone drag their pack before. Sleds yes, but not here, and packs, definitely no. It seemed like a horribly juvenile thing to do, very un-ranger-like. The looks that Rob and Jen were giving me confirmed my suspicions. "I've never done this before," I explained. "I just don't know what else to do."
It worked like a dream. Without my pack I was light enough that I stayed up on the crust. The pack tracked straight and true behind me. I could tell the pack was taking some damage around the edges from the ice, but it seemed like a small price to pay. Rob was worried that his expensive equipment would fall out of the pack and slide away down the hill, but I assured him it was strapped in tight, and it was.
Instead of going down the standard way, down the Inter Glacier, we planned to go down the Emmons to its terminus. Rob wanted core samples, and he had some permanent PVC snow stakes there that he wanted to take measurements on. We weren't sure we would be able to make it down since the glacier was fairly broken already, but he really wanted to go that way, and he was running this trip.
Finally, I could see the bottom of the glacier, and we were close to the trail that would take us back to the car. But the section of glacier we were on was very broken. It looked like an apple fritter, with the slots cut in the top, and my stomach rumbled at the thought. Then I squinted out at the snow that was as impossible to look at as the sun, and at the glacier's thousands and thousands of dark blue holes promising a quicker way out than hiking down, and what I saw didn't look anything like food, fritter or otherwise. It looked impassible.
Despite Rob's protests I said we couldn't continue this way. The glacier was too broken and we were going to fall in. I was right, but being right meant we had to hike back uphill to get off the glacier higher up. We turned around and started back. It was nice to know that, despite my struggles with the pack weight and my slow pace, I'd actually helped the expedition out.
On the three-mile trail back to the parking lot I tripped on a root and did a face plant in the dirt, breaking one of the trekking poles I'd borrowed from Adrienne (I'd already broken both of mine earlier in the week). Replacing it would mean I'd actually _lost_ money by doing this trip. I had to rationalize quickly, because I wanted to cry. Most people climb the mountain for fun, not for money, I told myself. Mountains shouldn't be climbed for financial reasons; it's too much work for the money.
I'm not sure climbing mountains is fun, either. In fact, I'm sure it's not. The camaraderie that often comes from climbing, especially with friends, is absolutely worth it. The view may be worth it. The feeling of having your body running smoothly is wonderful—and being done with it, having done it, is nice. I was ready to be done with this climb.
We made it to the car at White River Campground, and two hours later I was dropped off at the Paradise dorm. We said our goodbyes. I said, "Thanks so much," and limped away to my car. It had a flat tire, and my spare was flat. One more impediment. I smiled. This one wasn't going to stop me. I borrowed a bicycle pump and set to work on the spare with a vigor I didn't know I still possessed. I hugged the curves down between Paradise and Longmire, and I flipped the park off on my way out of the entrance station. (Not the attendant, just the job.)
An hour and a half later I was in South Hill in line at Safeway, my hair a bird's nest, my clothes reeking and bloody from the header on the trail, my eyes bloodshot, my hands shaking. But I was holding a beautiful pint of Ben & Jerry's Phish Food. Moments later I was sitting outside on the curb, consuming it. And that part, the being done part, really was great.
## 5
* * *
## JUST A PAINFUL WAYPOINT
THE REST OF MY DAYS OFF PASS IN A BLUR. I come back to a Rainier Mountaineering client hit by rockfall while climbing Fuhrer Finger. He has sustained an open tib/fib fracture. RMI has lowered him on a rope down the Finger to the Wilson Glacier, and the park has flown me up to get him; then we have both been dropped off at Harborview Medical Center in Seattle, the regional trauma hospital.
After he's been admitted I sit in the ambulance bay, my old stomping grounds from my stint as an EMT in Seattle, waiting for a park volunteer to drive the three hours up to get me. Maybe it's that I am wearing an ancient military flight suit and holding a backpack with brutal-looking ice tools bristling out of it between my knees while the hospital security guards stare at me through the bay door, or maybe it's being grubby in the city surrounded by old friends wearing white, but I feel rusty and embarrassed. I've forgotten how to act in a city, I'm sitting on a street corner with the other homeless people, and I've forgotten my wallet so I've got no money to buy food, either.
Two days later I set out from Paradise in the middle of the night. It has already been a long day. I expected to spend the rest of my shift at Muir, but a climber got altitude sickness and I helped him down. Now I need to get back up. A helicopter is coming to Muir first thing in the morning, and before the contraption shows up I need to corral a pile of construction waste and other refuse in boxes, put the boxes into big plastic nets, tie up the nets, and then, when the helicopter arrives, thread the swivels on the nets onto a cable hanging off the bottom of the ship so that the lot can be flown off the mountain. The whole job will take less than an hour, but getting back up to Muir so I can do it is going to take forever.
I wait longer to leave than I should. I stay in Paradise to make a meal, macaroni and cheese again, and a glass of Gatorade made from the powder. Before heading out, I have to dry out my jacket, pants, green shirt, and socks and gloves. I peel them off my body and throw them in the dryer, plugging in quarters I borrowed from Adrienne's room.
I eat my dinner out of the pot, in the basement sitting in front of the dryer in my bra with my backpack across my lap to cover me in case anyone comes downstairs. When the cycle finishes I get dressed. My clothes are stiff and they smell like me, which is to say sweat, chili powder, and ranch salad dressing. It has been five days since I washed them, but there isn't time now.
I carry my heavy, wet boots upstairs with the laces biting into my hand. I duct-tape over the raw spots on my heels and the outside of my little toes, the holes in my skin that seem to get deeper and bloodier every time I pull the old tape off. My feet burn when I pull my boots on. The pain of the first few steps makes my eyes water.
I'm only going back up to Muir for one more night, and I don't want to bring anything up with me, since the weight would only make the trip harder. For one more night, I can live without a lot of things. I put a quart of water and a chocolate bar in my pack and swing it onto my back. It will be cold, I tell myself, but I won't stop. I'll be OK if I keep moving.
I remember that my backpack used to be red. Now it is pink from the sun, with salt lines on the straps, and I have rub marks on my pants and on my hips where the pack rests.
Looking out into the darkness, I see myself reflected in the glass panes in the front door. I notice that it has been a long time since I've washed my hair. It has gotten shorter; barely long enough now to put up, and I wonder if this is because it has become so tangled. It is lighter, too, with white streaks from the sun, where it used to be the color of cherry wood with a dark stain. I feel myself fading, like my pack is fading. I haven't looked at myself in a long time. Nose to nose with my reflection, I see lines in the corners of my eyes from squinting at the snow—that blinding retinal pain in the morning—and I can't read anything from my own expression.
I walk out into the darkness, and the door clicks quietly behind me, locking me out. There are tiny green eyes in the dark, bouncing in the light of my headlamp. I use a small LED light that lasts for two weeks on one set of batteries, because batteries are expensive, but I can't see much farther than my own feet. Although I can't see it, I can hear a grouse in the blueberry bushes, along with the constant reassuring clicks of my ski poles on the concrete of the parking lot as I head out.
I won't start being paid again until six in the morning, and so my thoughts, at least, are my own. I turn off my Park Service radio even though we are supposed to be on call at all times. Later I'd say, if they tried to find me, that the volume was accidentally turned down. It is so much easier to continue this way, where only my body needs to keep moving for the man. My body is its own thing, disconnected from me, from everything. I barely have control over it. I've realized that my body can go on walking forever as long as I don't make it go too fast.
I am a disgrace to my work, to all the climbing rangers, because I am tired. I live in a world where a new speed record was just established at four hours and fifty-nine minutes from the parking lot at Paradise to the summit and then back again. Car to car in less than five hours. I am lucky on these nights just to make it to Muir in that time. My mind is like a brick in my skull, it's so heavy. It keeps trying to pull my head down to the pavement. I don't know if it's heavy with shame or with exhaustion. My breathing is that of a dreamer, deep and consistent. Everything around me is dark. I walk up through the black meadow. On both sides of me is the rich, cold, herbaceous vegetation, elbow-deep damp flowers. I can almost see them exhaling oxygen I need to help me continue on. I see them move in a breeze and it makes me shudder.
This walk makes me think of nothing. There are no sudden insights, no ruminations on the past or expectations of the future, no daydreams to break the monotony. The future is unbearable and the past is gone, and now there is only this moving like a ghost through the night. I cannot feel my feet, I am floating.
It gets colder as I go higher. The dew turns into frost on the few subalpine firs, and they glitter in my blue light. When I turn my head away, the trees disappear. There is frost on the rocks and frost on the trail, and no footprints but my own. I turn around to look at my tracks, to feel I've made solid progress, that I'm doing OK. My head is still numb, but there is a pain in my chest like a longing. I'm not sure for what—a bed somewhere, maybe, or that magical ability to be fast and solid and sure. It's something I remember having, so long ago.
I stare down at the imprint of my boot tread in the frost, and then up at the sky, and wish on all the stars to give me fire, but then they go out. I feel betrayed until I realize a moment later that I've fallen asleep. I open my eyes and the stars are back, and I find myself sitting on a pile of water bars, logs that the trail maintenance crew has left. It's too cold to stay there, and I get up and keep walking.
My nose is cold. It drips but I can't feel it. I only watch as the occasional drop plummets out of sight. I get to Pebble Creek, and the little stream is covered with a layer of swirling ice, full of little holes, the edge of each one coated with a fat layer of frost. It looks just like salt on the edge of a margarita glass. A thousand margarita glasses. _It's a party and I'm the only one here._ I leave it.
Everything from here up is on the snow. I sink down through an ice crust, about an inch with each step. It's the sound of something newly formed, breaking. I can see the shape of the mountain ahead of me in the dark. It glows, the enormous white glaciers glued to the rocks. It lies flat in front of me, from my feet to the stars, all of it glittering, all of it connected to thousands of years of old snow below me in black, dirty, crusty layers. Altogether, the mountain is a big thing to be with, alone in the dark.
When I stop, my head is in a cloud of my own breath. I'm thirsty and I take my pack off to get my water bottle. I'm dizzy when I stand up, and my stomach is a hard knot, like a ball of ice. The key to success on these trips is not stopping too much. The water is as cold as it can be without being ice. It makes my teeth hurt and I start shivering uncontrollably, but I know I will feel better the next day for not getting dehydrated now. Who knows what will happen tomorrow. I shiver harder, my legs rigid. I pull my pack back up to my shoulder in one slow, static movement, and the sweat on my back has turned cold. I keep my eyes focused on my destination, only three thousand feet above me at the base of the Cathedral Rocks, which I can see as clear as anything.
My hair is covered in frost from my breath. Each strand that has fallen down has its own insulating layer of rough crystals. I can see the crystals in my eyelashes and freezing in the air. I keep walking. It's not far, I tell myself. There are people up there asleep in their little yellow and orange tents, which sometimes glow like Japanese lanterns but are now black cold nylon on spindly aluminum legs engulfing the sleepers in the dark. Muir is no destination, just a painful waypoint for people who always want to be somewhere else.
One by one, my fingers start to freeze. First the skin directly under the holes in my gloves, where I've had frostbite before, wearing these same gloves; then in a better semblance of order, from smallest to largest, my little finger, and then the next and then the next become hard and numb, still grasping my ski poles. I don't think about it. There is nothing to do but continue.
The snow gets harder, I no longer sink into it. I can walk along the ice crust, which is slick sometimes and I have to be careful where I put my feet. It takes as much energy as I can muster to concentrate on the ground. I no longer look up or at my watch with its altimeter, because it doesn't matter anymore. There is no destination, there is only this. Only the sound of my breath and the mountain under me, and above me, and surrounding me.
I am too tired. I need to lie down. I take my pack off and pull out my chocolate bar for energy—I curl up on top of the pack in the fetal position and put my arm over my head, hiding from the mountain and the cold. I try to eat the chocolate bar, but it won't melt in my mouth so I chew it like gravel, and can't swallow it.
I know I can't sleep here. I've tried it so many nights before, because of whatever circumstances, wet in the snow in the middle of the night, lying down until the shivering became an exhausting rigidity, and my fingers and toes felt obligated to start moving. If I stayed I would lie here, cramped and afraid to lose the heat I've trapped by moving, and sometimes my mind would drift, but I'd never get to sleep. I get up again and it is terrible, the continuing, but it's also comforting because it is the same. It is a routine that I follow step after step and in this, for once, I always know what will happen next.
After an interminable distance, miles and miles and days and days without a sunrise, I come to my own door. The plywood A-frame in the sky. The whole camp is quiet, but I can see high above me tiny lights on the Cleaver, other people climbing in the dark. I feel no kinship with them, they are distant and involved in a quest I no longer understand. I open the door, smelling mold and wet feathers, and I go inside where nobody and nothing can see me. It is a relief. I take off my boots, which don't hurt anymore, and I get into a doubled sleeping bag on the bunk in the back, one inside another, and lie down.
I am wet, the nylon of the bag sticks to my wet socks as I try to slide in. The bag is cold and I wonder if I have enough energy to produce heat, to warm the bag up and dry out. With wooden fingers I set the alarm for three hours' sleep, and I smile.
## 6
* * *
## BACKTIED TO A BUSH
EARLY JULY WAS CHOCK-FULL OF RESCUES. In a little over a week, a Rainier Mountaineering Incorporated client was seriously injured when he was hit by icefall on the Kautz. Another RMI guide and three of his clients fell two hundred feet down the Ingraham glacier, sustaining femur fractures, serious head injuries, and spinal injuries, resulting in a very involved rescue and media firestorm. Then two north side rangers spent the night at 13,500 feet in subzero temperatures on the Emmons Glacier with two people injured in a fall and three hypothermic would-be rescuers. The winds were so strong that their tent collapsed; that one was later described to me as "a rough night."
When my shift started, I scrambled to keep up with the backlog of maintenance projects that had been put on hold, since rescues justifiably take precedence. By day seven of my shift I was happy to have a day working with Adrienne in the Climbing Information Center, answering the phone and issuing climbing permits. The CIC is in the middle story of the Guide House, a newly remodeled, big, formal, and furnitureless building filled with echoes. We had moved in from a much smaller building, and we didn't have nearly enough stuff to make this new space friendly or inviting.
On the walls, surrounding an immense open area, were brand-new information displays about the dangers of dehydration, cold, storms, lightning, avalanches, and inadequate mental preparedness, illustrated with pictures we'd taken of each other suffering. It was strange to see people I knew on display, since the building seemed so impersonal. There were also a lot of Mike's pictures of sunsets and sunrises, and a memorial display for two climbing rangers who'd died a few years ago, doing a rescue in a storm. Maybe we were insensitive, but Adrienne and I took it down and moved it to the back office, facing the wall. We didn't want to see their faces looking at us while we worked. I know it gave me the creeps.
A new video featured Andy and Glenn demonstrating an easy step-by-step process for using a "blue bag" in the wilderness. First, take the blue plastic bag and use it as a glove to pick up your own shit and toilet paper. Get one finger under the edge with your other hand and invert the blue bag, and use one of the provided twist ties to secure the top. Then place the blue bag into the clear plastic bag, also provided, and use the remaining twist tie to secure the outer bag. Put the packaged waste in an outside pocket of your pack and continue on with your climb, happy with the knowledge that the drinking water will be cleaner and the route more beautiful because you've picked up your own crap. This was important. Every time we found human waste someone left on the mountain, we had to pick it up. We typically picked up several finds a day while we were climbing.
Kids liked the video. They ran around in circles listening to the squeak of the newly refurbished wood floor under their feet, and every time the video ended they ran over and pushed the button again. During the busiest weeks of the summer, enough kids and other tourists came through to keep the video playing in a continual loop. We could all recite it from memory. I didn't mind. It was just nice to be inside today, sitting and resting and watching the video and the kids, and answering phone calls.
Adrienne was in the back doing paperwork. She was in charge of the CIC this year because she'd hurt her knee over the winter working as a ski instructor. She'd been holding the arms of a tiny girl who was skiing between her legs when the girl slipped sideways and their skis tangled. Adrienne had had to ski the rest of the way down to the lodge on one leg while carrying the crying girl. She'd had surgery on the knee in the spring and it was pretty much fine now, but she didn't want to risk hurting it again before it fully healed. I was kind of jealous. It seemed like it would be such a relief not to have to be hard, fast, and ready every day. Adrienne could decide each day whether or not it was a good day for her to go out and climb the mountain.
A couple of climbers were milling around, looking at the warning posters, waiting to register. They looked typical of the average younger-generation climbers who came to the park. Little knit hats on (despite the heat). Expensive sunglasses. Well-tanned and muscled, and wearing trendy approach shoes. They were discussing their plans and their past exploits loud enough that I could hear everything they said; they kept glancing at me to see if I was impressed. They were climbing Disappointment Cleaver, the most popular and perhaps easiest route on the mountain.
I ran their credit cards, handed over their passes and blue bags, and showed them a copy of the weather forecast. "I don't suppose there's anybody here who's climbed the mountain lately?" the one with the blue knit hat asked me, looking past me into the back room.
I noticed his eyes didn't linger on Adrienne either, hunched over at the computer with her blond braids hanging down her back. "Hey, Gator isn't here, is he, or is he out on the mountain right now?" Blue Hat sounded excited.
I continued to look at them, expressionless, but inwardly irritated. "Well," I said, trying to sound genuinely apologetic, mentally donning my customer service hat, "I climbed the DC yesterday, and I know the route pretty well. Our volunteer Tom might have climbed it this morning. I can give him a call on the radio if you'd like to hear the absolute most current conditions from him, or I can just give you yesterday's info." I meant this as a slight jab, since the weather had been consistent and we all knew nothing had changed at all on the mountain since yesterday. But I figured they'd see it as a chance to talk to a guy, without having to ask. Then I added, "And Mike is down in Longmire if you want to stop by and say hi."
"Oh," said Blue Hat, "We don't actually know him personally, just wondered if he was going to be out on the mountain at the same time as us."
"Sorry, no luck today," I said, smiling, and then they asked me to give Tom a call.
It was ten to three. We closed at three. All the climbers planning on climbing today were long gone, and Adrienne and I were only waiting to close, hoping that no more tourists would come in and want to watch the blue bag video. It was hot outside but it was cold in this huge old building, and I couldn't wait to get out in the sun and soak it up, feeling it radiate into me.
The phone rang. Somebody had figured out how to make the ringer play _The Simpsons_' theme song. Most of the climbers who came in liked it, but it got annoying after the millionth time. Donny at the communications center was calling to ask me to check on an injured visitor hiking in the meadows.
"Hey Adrienne," I yelled into the back room, "Donny says there's a kid with a broken arm at Glacier Vista. Do you want to just close down the Center now and go with me up there? I mean, I think we can justify it because it's sort of heavy carrying the first-aid kit and the O2 kit by myself, and it looks way more professional if there are two of us."
"Sounds good to me, anything to get me out of here," she said, with the front door keys already in her hand.
My feet were killing me as we headed out of the CIC. My constantly wet boots and the persistent cold had slowly destroyed my feet. I wasn't sure if I had trench foot or what, but large chunks were falling off the bottoms, and they were really painful. The whole mess had gotten worse earlier that week with huge blisters that had been rubbed off in a series of climbs, and now my heels were large, bleeding sores. I'd covered them with antibiotic ointment and duct-taped gauze over them.
That morning I'd tried to fit my feet into my Park Service-issue boots that go with my green ranger uniform, but they hurt too much. I could stand up after a minute or two, but I couldn't walk without an unprofessional limp. So I wore flip-flops instead, in violation of the strict uniform policy. Now that I was headed outside I had to make sure no other rangers saw my feet. Backcountry, frontcountry, and law enforcement rangers already thought that climbing rangers got too many perks, like good raincoats and a synthetic uniform we could wear above ten thousand feet in bad weather. It was true. The rest of the park had to buy their own uniform raincoats, and they were lousy.
The trail was thick with visitors. They were everywhere, slowing us down, stopping right in front of us, cutting us off. We wove around them, saying, "Excuse us." "May we just squeeze by you." "You want to go left here to go to the overlook." "It's a green false-hellebore." "It doesn't sound like a bear, it was probably just a marmot."
At first the sun felt good. It was the height of summer in Paradise, and we were surrounded by amazing vistas: the Tatoosh across the valley, Goat Rocks, Adams, Hood, and Jefferson in the distance, tiny rivulets of water burbling out of dark grottoes surrounded by wildflowers at our feet. All of it being photographed by thousands of tourists surrounded by thousands of mosquitoes. We started sweating almost immediately, and after a few minutes the itchy green pants became a menace and our grey uniform shirts stuck to our backs. We'd both been spoken to about how unprofessional it was to undo the top button of our shirts just because it was hot, but I really wanted to anyway, and it took a lot of willpower not to mess with it. It's funny how the little things can become the most annoying.
We got to the intersection that leads to the top of Glacier Vista. "Is there a kid with a broken arm up there?" we asked a group of middle-aged women hiking down the trail.
"No, there's nobody up there like that at all. The view isn't even as good as they said it was in the ranger station."
"I'm sorry you didn't like the hike, but are you sure there aren't any kids up there?" Adrienne asked again.
"Yeah, we're sure." They kept walking past us and then looked back. "Hey, do you think the view is better from Pan Point or is it not worth the bother of hiking all the way up there, either?"
"Maybe it was a prank," said Adrienne. "I wonder how Donny found out, anyways." Cell phones don't work in Paradise, so somebody had to have walked out and reported the accident to a ranger in person. Trying to decide what to do, we looked around blankly at the myriad of people on various trails and walking through the flowers, despite the signs everywhere saying to stay on the trails. A woman in her early fifties, with closely cropped white hair, was walking fast and passing people on the stairs, coming down towards us. She started waving. "Are you rangers?" she asked us. I glanced briefly down at my uniform. "Yes."
"This kid," said the woman, waving behind her at a very overweight child who looked like he was about ten, wearing a red basketball jersey and black basketball shoes, "is the brother of the kid who hurt himself. This kid found me on the trail, and it was my friend Nancy who walked out to report the accident. I assume that's why you're here?" She trailed off and looked at us expectantly.
"Where is the injured kid?" I asked.
"Oh, I don't know," she said, agitated. "This little guy didn't know the names of any of the trails, so we couldn't tell where he'd come from. He's got a walkie-talkie, though." The sweaty boy held it up for us to see. "And his mom answers back sometimes. She's with the other one, the hurt kid."
"Thanks for your help," said Adrienne, turning to the boy. "Hey kid, do you think you could walk with us back to where your mom and your brother are?"
"Maybe." Adrienne and I looked at each other and I shrugged.
It was really hot here in the meadows and we hadn't thought to bring any water, since this wasn't supposed to take very long. I was already thirsty. And I was becoming confirmed in my suspicions that walking in flip-flops up steep pavement with sweaty feet and a heavy first-aid kit sucked.
All my complaints started welling together and I wanted to sit down in the shade—just sit for a while with my eyes closed. I wanted all the tourists to disappear with their noise and problems and questions, and I wanted to lie in the meadow, where nobody was allowed to go, and look up at the pale blue sky, with my entire peripheral vision filled with cold purple lupine.
The kid lagged behind us, wheezing and tripping along. He was spent. I let him go on in pain a minute or two longer than I should have, but I was irrationally angry with him for having a brother who had hurt himself and had interrupted my afternoon napping plans. We came to a trail intersection, where three different trails came together, and I asked him if he knew which way he'd come from. He looked around without comprehension. His eyes were glazed over from the heat and the strain and thirst, and he obviously had no idea where he was.
There was still a steady stream of people hiking past us, back down to Paradise. I stopped an older couple wearing matching khaki sun hats. "Hi, I'm Bree, and this is Adrienne. We're the rangers up here today, but we're dealing with an incident in the meadows. This boy really needs to get down to Paradise, and we're concerned he might not make it down there by himself. Would you guys mind if he tagged along with you?" They were nurturers, we'd chosen well. They asked him if he wanted any water. I looked at the water bottle as they handed it over, and I watched him drink, spilling half the contents on his shirt. I couldn't take my eyes off it. "Just drop him off at the Paradise Inn," I said. "We'll make sure there's somebody there to get him. And thanks so much."
I called it in on the Park Service radio. "Comm. Center, 686."
"Comm. Center."
"We still haven't found the injured visitor, but we did find his brother and he's hiking down to the Paradise Inn with some other visitors. Could you alert the Inn to keep him there until we can come get him?"
"We'll call the Inn, let us know when you find the injured party," said Donny.
"Well, Adrienne, shall we try the walkie-talkie?" I asked. Adrienne pushed the button on the front of the little blue unit. "This is Adrienne from the Park Service. Can the person that needed help hear me?" We waited a second and then there was the sound of a woman screaming hysterically on the other end. This was discouraging.
"Ma'am, can you tell us where you are?" Adrienne spoke slowly and clearly into the piece of blue plastic. Some of the people walking by stopped to listen while pretending to take pictures or gaze out into the meadow. There was more screaming and crying. It seemed like this mother was overreacting to a broken arm, but then, I supposed she had been waiting a while for help.
"Ask her if she can hike out with the kid back to Paradise," I said. "Can both of you hike back down to Paradise?" Adrienne said into the walkie-talkie.
"Both of us?" the woman said, sounding confused. "Noooo." And the "no" turned back into a wail, then back into sobbing.
"Do you know where you are?"
There was a pause.
"Mount Rainier?" came the hesitant answer.
"Oh dear," I said. "Maybe we should just keep hiking uphill and see if we can see them, because they obviously aren't here."
We kept going higher. They could have been anywhere on the miles of trails that wind back and forth, crisscrossing each other. Adrienne kept asking questions. "Are you on a trail?" "Are you on snow?" "Are you in the trees?" "Can you see Paradise?" "Are there other people with you?" The woman didn't know very much, and we had a hard time understanding anything she said because she was screaming into the little walkie-talkie speaker.
"I think she's off the trail," said Adrienne finally. "Great," I said. Every couple of minutes we asked the people coming the other way if they'd seen an injured kid, but nobody had. We hiked up to Alta Vista and wondered briefly if the mom and her son could be over the side towards the Nisqually Glacier, but peering down through the trees from the trail, we didn't see anybody. Uphill was Panorama Point.
"Did you go uphill to Panorama Point?" Adrienne asked the walkie-talkie. There was no reply.
"Maybe we're out of range?" I wondered.
Adrienne wiped her face with her sleeve and didn't answer.
Ed Dunlevy, the head law enforcement ranger and EMS coordinator, called us on the radio asking if we needed any help. "Well," I said, "I'm not sure, since we haven't found the injured boy yet. If we don't find him soon, we could use a few more people to help look for him."
"OK" said Ed. "I'll be incident commander on this, so I'll be in Paradise and you can talk to me directly when you call in."
I thought it must be a slow day in Paradise for us to get so much attention. I wondered why sometimes I couldn't get help when my life depended on it, and other times the whole park was willing to come out. To some extent it depended on which budget the money was coming out of. If the total cost for the call was less than five hundred dollars, then it came out of the climbing budget, which meant we needed to keep it as cheap as possible. No extra people, no overtime—when the climbing budget was expended, we got laid off. If the total cost was over five hundred dollars, then the money came out of the park's search and rescue budget, which had much deeper pockets. I figured that Ed was betting it would take more than me and Adrienne to splint a broken arm; it would take enough people to go over the five-hundred-dollar mark. While I was happy for the help, I was also a little pissed that he didn't think we could do this by ourselves.
Tom called me, saying that he was headed down from Camp Muir, could he help? If there was a rescue and the money was flowing, he could get paid for his time, even though he was a volunteer. I thought it would be good for him financially, and good for knocking the total cost up into the search and rescue budget. "Sure, Tom. And could you bring an extra quart or two of water down with you? We'll meet you at Pebble Creek." The water request was unusual, since it takes hours of melting snow to get even a little water at Muir, but I was getting desperately thirsty. I looked over at Adrienne, who seemed pleased that she was out of the CIC for a while, and I envied her enthusiasm and vigor.
Pebble Creek is just uphill from Panorama Point, about two-fifths of the way to Camp Muir. The snow ends there in a series of dirty, icy rolls leading down to the small creek. Nobody up here had seen an injured kid, either. We both doubted that the group we were looking for would be any higher than this. Most people turn around at Pebble Creek, if they get that far, unless they specifically want to climb up to Muir. Tom said he'd keep an eye out on his way down, just in case.
We'd been hiking for an hour and I wanted a break. We had to wait here for Tom, anyway. Adrienne checked out some of the little snow pockets between the rocks that we couldn't see from where we were, in case it was possible for a person to get stuck in one. We were high enough that nothing but heather and a few penstemons and asters grew in little clumps between the rocks. I sat down on a rock next to a patch.
There were a few day trippers coming down from Muir, and I said hi to everybody as they passed me. "Taking a break, huh?" said one guy with a large orange backpack. I smiled. "I wish I had your job, sitting in the sunshine, looking at the mountains all day. I only get to do this on vacation. Hey, what do you have to do to be a ranger anyway? My son is in high school and he's getting a little chunky, if you know what I mean." The man threw me a conspiratorial smile, which I took to mean the kid was huge. "And he needs a summer job, where do I sign him up at?"
"Well," I said, "the median age for a climbing ranger is over thirty, and I think you're supposed to have a degree in something, you know, nature-related. It's also best to be in shape before you start, and it's good to know something about climbing early on. There's always a chance, though, right?" I was trying to keep the conversation light, but my voice started to have an edge. "Hey, what do you do? My knees are starting to go and I'd like to have a job where I can get out of the weather once in a while." Things went on like this as I tried to hold it together.
Finally, Tom showed up, and I waved at Adrienne to come back the next time I saw her head pop up from behind a boulder pile. We'd come up here on one side of the Skyline Trail, and we decided that we should try going down the other side, checking out the various trails as they branched off on our way back down. I felt much more enthusiastic after chugging the quart of water Tom had brought. "You're an angel," I told him, "bless you." Glenn called on the radio and said he was taking over as Incident Commander. Ed and the Cougar Rock Campground Host were going to check out the trails just outside of Paradise to see if they could find the kid down there.
Adrienne had been trying the walkie-talkie periodically, and as we dropped down from Pan Point on the other half of the Skyline Trail loop we picked up the hysterical mother again. "Where are you guys?" the woman wailed.
"Where are you?" Adrienne replied. "Do you know where the nearest trail is? Could you start walking down it to the nearest trail intersection and read the signs so you can tell us where you are? That would help us out a lot."
"OK, I'll send the kids," said the woman.
"What kids?" Adrienne wanted to know, and then as an afterthought she added, "Hey, are there any landmarks around you that we might be able to recognize?"
"No," the mother cried, "there's nothing here but the damn waterfall."
I called Glenn to let him know. "IC, 686."
"Go ahead, Bree." I felt important with my name going out over the radio after having used a number for so long.
"I guess they're near a waterfall. Could you look at the map and give us a list of all the waterfalls next to trails around here?" I knew there were a bunch. Glenn said he'd look when he had a minute.
The three of us were half-jogging down the dusty trail in the late afternoon sun. There weren't as many trees and bushes on this side. It was mostly just shale and a few disconnected snow patches, but that also meant better visibility—we could see a long ways down. There didn't seem to be anyone out on the trails at all. This side of the Skyline isn't nearly as popular. Most people go up and down the same way since doing the whole loop takes longer. It was also getting late, and I was sure that anybody in their right mind would have headed back to Paradise for dinner at this point.
Although the trail is one of the most rigorously maintained in the whole park, a lot of it is gravel and stairs. The section we were on consisted of large uneven stone steps, and it was really hard on the knees. I noticed that all three of us were limping slightly as we descended. Tom was carrying a big black plastic bag of garbage, smashed under the lid on his pack and ripe in the sun. He was ahead of me on the trail—I could see Paradise below us, and then it would be obscured behind the garbage, and then reappear again every time he took a step.
We were getting lower. It was a pity to have gained so much altitude just to lose it again. After a bit, we were totally out of the snow and back into alpine meadow with short little trees that came to our shoulders. I noticed that the flowers on this side were in better shape than on the west half of the trail, and I made a mental note to start recommending this side to hikers again. Glenn called us back on the radio and said there were a lot of waterfalls in the area. He sounded grumpy, and we wondered why he was in the park at all since these were his days off.
We came to the intersection of the Skyline with the Golden Gate Trail. There was a waterfall here, and we left the main trail and headed through the meadow on a social trail, made by visitors tromping on the vegetation until it died, to get a better look at it. There wasn't a lot of water coming down, since most of the snow patches that fed it had already melted. The waterfall was probably a hundred feet from top to bottom, but it wasn't vertical, it just ran down in a series of steeper steps, and in between them the water fanned in a thousand tiny, frothy fingers around the moss on the slimy black rock. About halfway up we could see people, not in the falls but right next to it sitting in the grass and hellebore and blueberry bushes. They waved, and we headed up the slope towards them.
It was steep and the grass was wet with mist from the falls. We had come in from the side, not the bottom, so if we slipped from where we were, we would keep going all the way to where the last cascade ended in a pile of jagged rocks another fifty feet below us. I grabbed handfuls of flowers, trying to hold onto enough of them that the roots would stay in the ground and hold me. My flip-flops didn't exactly have tread on them, and they kept sliding. I would start to slip and would press my whole body against the side of the slope, hoping the friction would be enough to stop me.
My mind wasn't into this. I had liked being on the trail, solidly connected to the earth. I didn't want to have to worry about falling to my death today. I just wanted to splint this arm and then go eat dinner like all the other hikers. I looked up and I could see that Adrienne and Tom had reached the accident site. They were on the other side of a small tree that was sticking out of the slope at an odd angle, like it too was slowly sliding off the cliff. I willed myself to continue crawling up the slope.
I wondered where my guts had gone over my seasons as a climbing ranger. I was supposed to be heroic. I'd been hired to rescue people willy-nilly, and yet here I was, proving I was the kind of person who had to think twice about risking my skin to save an injured child. A few weeks before, one of the north side rangers told me that the major reason I wasn't picked to do big rescues was that he thought I would someday be under a huge ice fall, running with seracs crashing all around me, and I would freeze up and get squished. He said you always had to be ready to die. Willing even. Excited maybe. But no matter what, you just had to go for it. I decided that my problem wasn't that I froze up, scared witless in the heat of the moment, I was OK once I had committed. My problem was far worse: before I would commit, I had to calculate my willingness to rescue an injured child versus the likelihood that I would die.
When I reached the tree I clutched at it, then straddled it, took my flip-flops off, and threaded them through my belt. It was better barefoot. I could dig my toes into the dry flakey dirt and use them to grasp at the tiny green plants.
When I finally made it over to everybody else, I asked Adrienne, "Did you do introductions?" trying not to show how badly I was shaking.
"Not yet," she said.
"Well, I'm Bree, and this is Tom and Adrienne, and we're the rangers up here on duty today."
I paused. "What's going on?" I whispered to Adrienne.
"The boy fell over the top of the waterfall, landed on his face, bounced down lower, then landed on his knees in the water over there. Somehow he managed to stop himself. His mom, his sister, and two brothers crawled up here and then his mom drug him out of the water and put him over here where it was a little flatter."
I looked around. The boy was covered with blood. He looked about twelve. He was half-sitting, half-lying on the steep grass. He had one butt cheek on a flat rock, but he had to hold onto the grass and brace with his foot to keep from sliding down over the next steeper section. The mother was standing, braced against a dead log that looked like it had a dubious connection to the earth. There was nobody else there.
"Where are the two brothers and the sister?" I asked.
"I guess we met one brother at Glacier Vista, and the two others, ages four and six, were sent back to the trail by their mother to alert us to their location here," said Adrienne. "And since we never met them, I'm assuming nobody knows where they are right now."
I got out my radio. "IC, 686."
"Go ahead, 686."
"We've located the injured party above the Golden Gate Trail. Uh. The injured party fell over the waterfall here, and due to the location we're either going to need a helicopter to winch him out, or we're going to need a rigging crew to do a lower down the waterfall, along with all the gear to do the lower. We also need an advance party, maybe Charlie if he's around, to run up the trail with some pickets and a light rope and a harness to secure the patient and rescuers to the slope."
"Great," growled Glenn over the radio. He sounded pissed. "It's too late to get a military helicopter today. I'll see what we can do about getting the people together to do a lower."
"Oh, also, Glenn, the mother here has two young children, ages four and six, who are on their own on the trails, so if everybody could keep an eye out for them it would be great."
I put the radio back in my pocket and crawled slowly over to where the kid was. "Hey, Adrienne, if you take the first-aid kit and stuff, could you brace yourself against that tree and then throw things to me when I ask for them?"
I turned to the boy. "Hi, I'm Bree, I'm one of the rangers up here. How's it going?"
He tried to say something, but his jaw was broken and there was a big chunk of skin that had come off his chin—when he tried to talk it rubbed against his chest and started bleeding again.
"Aw, it's OK, never mind," I said. "Hey, Adrienne," I yelled over, "how 'bout some roller gauze?"
I couldn't really tell what was wrong with him. He was wearing a Metallica sweatshirt, and his mom kept screaming that I wasn't allowed to cut it off. He'd just bought it with his own money and neither one of them could afford to get him a new one.
She got hysterical again when Adrienne threw me the scissors. "Just one sleeve," I said, eyeing the arm he was cradling in his lap. I cut the underside of the fabric up to above his elbow, despite her protests, and his forearm was angulated and fat and blue. His fingers were purple, but he could move them and I figured that was a good thing.
"Hey, Adrienne, throw me a SAM splint and some more roller gauze." I kept working. I wanted to take his blood pressure but I wasn't sure that I'd be able to get on his other side because it got steeper over there. I didn't want to crawl over him in case we bumped each other. The whole situation was so precarious that we'd probably both slide off.
I cut up his pants on the inside seams because his mother thought she'd be able to repair them later. He was still using one leg to brace himself. He had skinny, young, white legs. They were bruised and bleeding but looked mostly intact. He'd sprained or broken an ankle. I did a few more things, and got out the oxygen and squatted there with the bottle between my knees and one hand on a heather bush with thick roots.
Tom grabbed the radio out of my pocket; I told him to start arranging things for the people who were going to show up. Adrienne and Tom said they were going to try to take the mother up to the trail by way of the top of the falls. They said it looked easier to go up than down from where we were, and while they were up there they were going to start scouting for decent anchors. I yelled, "Good luck!" They started inching their way up the slope and some loose grass and dirt came down after they'd gone.
Then it was just me and the kid beside the waterfall. I missed them already. I started in on the inane, cheerful chatter that I use with people who cannot talk back—a skill I just recently discovered I share only with dental hygienists and preachers.
Nothing happened for a long time. I watched the sun swing lower, behind the top of the gully. When we'd first arrived here the cool mist from the falls had been refreshing. But now the temperature was dropping and I embraced every last second in the sun. The shade line rushed at us and I realized it was going to be really cold in a few seconds. Because I couldn't move any further over to follow the sun, I held one arm out to touch it as long as possible, watching the darkness crawl down my arm to my fingertips.
The kid started shivering almost immediately, but I didn't have anything to put over him. I didn't even have a jacket. The oxygen bottle I was holding made my hands numb and I kept wiggling my toes to keep them from freezing in the wet grass. I kept talking for a while, but I eventually ran out of things to say. I could see a little bit of the trail below us, and I kept my eyes focused on it, watching for more rescuers. The kid started weeping, and I patted his shoulder. Later, when he couldn't hold onto the heather bush anymore, I got behind him and put my arms under his armpits and held him across the chest. He would still slide down in little slips, but every once in a while I would count to three and then heave him back uphill. He drooled blood on my hands.
I didn't see Charlie come up the trail, I only noticed him when he started crawling down the slope towards me covered in sweat with his jacket tied around his waist. He brought a harness and a short bit of rope. We put the harness on the kid and tied him to one of the tiny trees. I didn't trust the tree and we couldn't get the rope tight enough with the stretch, so it was more just in case he slipped and I couldn't grab him.
"Hey, Charlie," I said, "do you want your jacket or can I have it?" He handed it to me. "I could also really use a radio, and a sleeping bag for the kid, it's pretty cold down here." Charlie nodded and headed back up. I thought he was going to come back with more stuff, but he didn't.
It was dusk when I finally saw the crew headed up the trail below the falls. All of them in a line, with an old heavy litter they'd put a wheel on and filled with ropes and old rigging equipment. Andy had put together new lighter rigging kits, but they were locked up because they were valuable and only the supervisors had access to them. We'd never used them because it was always one of us who had to get the kits, and we weren't allowed to have the keys. I felt a flash of anger, but at the same time accepted that seasonal workers have always been considered untrustworthy, and climbing rangers doubly so since so much of the gear we needed for work was so easy to borrow for personal climbs and equally easy to forget to give back. There was no good solution. I made a conscious effort to be happy and think about getting rescued fast.
"Hey, look!" I said to the kid, "Here they are! I'm sure it won't be long now."
It took another interminable stretch of time until Charlie and a law enforcement ranger, Tim, carried the litter down the hill. They were awkwardly trying to crawl down, clutching the wire basket between them and hanging on to the verge with their outside hands. The litter was attached to slack ropes that ran back up the slope. I was a little disturbed by this because usually the whole bit, the people and litter, are lowered down on a tight rope from the top. When they got close I asked them what was up with the weird setup and Charlie said he didn't trust the anchors to take the full weight of himself, Tim, and the kid. So they weren't going to tie in, they'd just tie the kid in and then help push him back up the slope, hanging on to the rocks at the same time and hoping the anchors held if they accidentally dropped the litter.
"Oh," I said, nodding, thinking this was insanely risky. "Make sure the rope stays tight so it doesn't shock load your anchors if you do drop him."
I saw Tim eyeing me. He favored a militaristic, rule-based, cookie-cutter approach to all problems, which irritated me every time we met. Occasionally I would refuse to do what Tim wanted, which irritated him every time we met. As far as I was concerned, Charlie had gotten there before Tim, which meant that he was in charge, even though I'm sure Tim had some kind of incident management training that probably made him more qualified.
"Why are you down here without a helmet and a harness?" Tim said, pleased to have found something wrong with me. "That's very unprofessional, and it shows you didn't spend enough time thinking about safety awareness."
These lowland rescues are so different from rescues on the upper mountain. There's a certain flow on the mountain, everything is impromptu and nobody has all the right equipment, and we just do whatever works well. When the rest of the park is involved, the actual rescue often gets overlooked because everybody's worried about whether incident command, radio, and uniform protocols are being followed. I wondered if Tim realized how dangerous what he'd just done was. It's true he had a harness, but he wasn't tied to anything, and his helmet wouldn't be enough to save him if he slipped. "Sorry, Tim, I didn't realize I'd need a harness when I left Paradise."
"You always need to be prepared, Bree." I hate being spoken to in that tone of voice, part condescending, part motherly. I wondered what exactly I always needed to be prepared for. Tim continued to look me over, and when he got to my bare feet, only partially hidden in the grass, his eyes got wide. I knew he was going to have words with my supervisor's supervisor, the infamous Gator, and I was going to get some serious shit. I glared back and he slowly shook his head. Mike would understand my sore foot problem, but I also knew he wouldn't risk upsetting a law enforcement ranger by sticking up for me over a uniform issue. Uniform issues were really important, and my feet really weren't.
They'd brought a sleeping bag and they also had a backboard and a C-collar for the kid. I put the C-collar on him, trying to maneuver it around the broken jaw, but I didn't want to use the backboard. I knew he needed one, but there wasn't a good way to put him onto the board. If we put it down on the slope next to him, and he slid onto it, then he'd take off on it like a toboggan. Then, too, the litter had a weird slippery fabric lining, like the top of a trampoline, and I knew the backboard, with the kid on it, would slide out of the litter if it tipped at all, which of course it would. There was only one set of straps and there wasn't any extra webbing, so we could strap the kid down to the board or to the litter, but not both. I wanted to just tie him into the litter, get him to the top, and then deal with the backboard there.
Tim said no, there was a medical protocol to follow. I had a mental picture of the kid on the backboard, wearing his harness tied in to the rope with six feet or so of slack in it, sliding out of the side of the litter and being wrenched in the middle, probably knocking Tim and Charlie off, and maybe the shock load on the static ropes would be enough to make the whole system fail. Tim was demanding compliance and I kept looking at Charlie, but he didn't want to get in the middle of the argument.
I figured Tim was already pissed, what did it matter what I did at this point? I grabbed the backboard and dangled it behind me, over the water, where if I let go it would sail gently down to land in the pool fifty feet below. "I can drop it, Tim, or I can carry it up with me later." I stood there holding the red plastic backboard in one hand and clutching a Sitka valerian with the other, trying to look unfazed as my legs went numb and all the blood in my body rushed to my cheeks.
"Fine, I'll be in touch with your supervisor," said Tim. They called for tension on the ropes and then tied the kid onto the main line and slid him into the litter. The whole thing sagged down a couple feet and was ungainly enough without getting the backboard involved.
I inched back over to the tree where the first-aid kit was propped between a few branches, and started stuffing things back into it. I had to write up a run sheet for the ambulance people. I didn't want to look up and see how the raise was going. A lot of rocks and dirt came down and hit me. I hoped it was going fine. In the end, it did.
We cleaned up all the webbing from the scrawny trees and the bushes they were backtied to, and picked up the rigging kits. I carried the first-aid kit back down the hill in the dark. A medic had started hiking up after driving to the park from Tacoma, and she met us about halfway down the trail. Glenn had come into the field at some point to take over command of the rescue, and he ran ahead to talk to her about the kid's condition. I hadn't talked to Glenn, so I'm not sure how accurate the information exchange was. I asked him a bit later if I should give her a short report, but he said he'd handled it.
We wheeled the kid down in the litter from switchback to switchback in the dark. The trail was skinnier than two people abreast with the litter between them, and the grass was wet with dew, so the people on the outside slipped sometimes and we had to stop a lot so they could pick themselves up. I liked to look back and see everybody, a long line of headlamps bouncing in the dark—it was pretty, and there was a nice sense of togetherness. All the rangers out for a hike in the meadows on a moonless summer night. I would have liked to carry the litter all the way back to the Paradise parking lot, to make the feeling last longer, but someone had driven a truck on the lower trail so we put the litter in the back. I sat on the tailgate with the medic and gave her a short report anyhow, and then rode the rest of the way back to the parking lot. The kid groaned every time we hit a pothole.
The mom met us in the parking lot. Somebody had found her two youngest children on a trail and had dropped them off at the visitor center, and someone had gotten her other son from the Inn. Everybody was back together again. They were going to drive behind the ambulance to the hospital in Tacoma.
Tom, Adrienne, Charlie, Glenn, and I carried all the gear into the Paradise dorm, spread it out on the floor, and sorted it. We counted everything, daisy-chained all the webbing, put the equipment in duffle bags, and zip-tied them so none of us could steal anything. It was going to be dawn soon. I was excited, it was the last day of my eight-day shift.
When we were done putting away the gear I went upstairs and took a shower, feeling the hot water burn into all the cuts and scrapes I always seem to get on these crawling-through-the-brush days, even though I don't ever remember having hurt myself. Adrienne and Charlie went to bed. We all had to get up soon. I went into the dorm kitchen, turned all the lights on, and put some water on for mac and cheese, and while it was heating I lay down on the big, empty kitchen table. It was quiet and warm, just me and the moths fluttering around the ceiling lights. And I closed my eyes, just for a few minutes.
## 7
* * *
## KAUTZ, SOLO
IT WAS SNOWING AND I WAS CRYING, HARD. It was newly dark and there was still a blue glow down along the edge of the glacier where the ice met the sky. I was on duty, doing a patrol with Charlie, but he had disappeared an hour or so ago. I had gotten too far behind and had lost sight of the tiny indents in the ice where his crampons had left their marks. The plan had been to climb Ptarmigan Ridge in a day, a route I'd never done before. Charlie had the map, but I should have known the way regardless. I yelled for a while to see if he could hear me, watching the sun get lower and lower while dusk fell over the green hills in the valley. I turned my light on, and then it was the only light.
Charlie always listened to music when he climbed, so he might not notice for hours that I was gone. When he did notice, I wasn't sure what he'd do. Maybe he'd come back, maybe he'd keep going. If he did come back, it would be good for me to be here, waiting. If he didn't come back, in the morning—early, before the snow got too soft—I'd hike back to the car. I didn't want to keep going up, not knowing if this was where the route went. I didn't know why I didn't know where the route went. I should have known. I wasn't scared, I could always hike back down, I was just sad that I'd been left.
A week later, after Charlie had come back and gotten angry with me, and then forgiven me for being slow, and crying, and not knowing the way, I realized that I was no longer trying to prove to the world that I could do my job. I was just trying to make it through each day without losing myself, hurting anyone, or going insane. The work was amazing, but there was too much of it. I enjoyed talking to the public about route conditions, staffing the high camps, and patrolling routes up the mountain. If someone got hurt on a route, I went there and made it turn out all right, but I only got a full night's sleep once every two or three nights. I fell asleep climbing, cooking, eating. My hands shook and my eye twitched all the time. I couldn't recover, and consequently I was not a good partner or friend, and as a further consequence I got left.
After this Charlie scheduled me for a backcountry patrol by myself, and in a way it was a relief. I still had to get from point A to point B, but if I took a break it didn't matter. If I couldn't think of anything to say, it was OK. I wanted to be a good partner—a big part of why I wanted to be here was because I wanted kinship—but I'd also become good at self-assessment, and I knew if I tried to climb with Charlie again I would only fail again. We jointly decided that we would both do better if I climbed alone again for a while.
I picked the Kautz route, which is on the southeast side of the mountain. None of us had climbed it for a while, and we needed to update the conditions report. In the pre-dawn I packed a sleeping bag, a little food, a quart of water, and my climbing tools. Nobody except one of the Paradise volunteers knew where I was going. There was no one else around to tell the morning I left, and no one who would care except Mike, since he was the one who checked out my accountability sheet proving I had done work. I could go as slowly as I wanted, because nobody would be expecting me. If there was an accident, I would be too far away to do anyone any good. In many ways it was a working vacation.
When I got to the trailhead it was cold and foggy. It wasn't windy, but it felt too cold to be August, even on the mountain. I put my running shoes on. I always hike in shoes late in the season, when the snow surface is firm enough. The boots stay in my pack until I have to put crampons on.
I hiked slowly up the trail through deep woods, heading for Comet Falls. Early in the season, the first miles of this hike—and several thousand feet of elevation gain—can be bypassed by hiking across the Nisqually Glacier from Paradise, but this late, the crossover involves a lot of rockfall, and I was glad I had the time to hike the long trail from lower down.
I found out later that an aerospace engineering student on summer vacation had tried climbing the Kautz the day before I did. His ex-girlfriend was climbing with the guide service, but he didn't have the money to go with her. He didn't have the money for much, but he decided to climb anyway. He rented an ice ax but didn't have the funds for crampons, so he sharpened tent stakes and found a way to duct-tape them to his boots. He packed up a blanket and a can of beans.
I never did see him on the trail or on the route. With the fog we could have passed right next to each other, and never known it.
It was midmorning before I made it up to Van Trump Park. I came out of the woods and into the meadow, which had no sense of openness that day because I could only see about fifteen feet ahead of me through the thick clouds. There was a tent site under some trees in a bit of a flat spot. I sat down there on my pack, and rested my head on my knees for a while, keeping my nose warm. I didn't think about anything. I've discovered it's best not to think about the future—it'll happen anyway.
When I got up I could feel the cold soaking into my face and my stomach. I had to concentrate to get my hands to work, to do the buckles on my pack. The cold made my hands throb. I put my pack on and stuck my hands in my pockets, where I hoped they'd warm up again once I got moving.
The trail ended at Van Trump Park. There were two climbers' paths heading into the fog, and I wasn't sure which one I should take. I took the right one, for no reason. It petered out almost immediately, but I got out my map and compass and checked the direction and it looked about right, so I kept going. As I headed up, the fog only got thicker.
I don't think the engineering student had a map. He was relying on determination and conditions as he encountered them to tell him the best way to get up the route. Which is not much different from what I did. A map is just a safety in a world that is inherently unsafe.
The meadow was gone almost immediately, replaced by big rocks in loose sand as I went higher. It was difficult walking—with each step my foot sank down and slid back, losing ground. The sand built up in my shoes, feeling like sandpaper when I wiggled my cold toes. I kept going. I tried sticking to the sand and rocks for a while, but it was so tedious that I switched to the snow fingers, the very terminus of the glacier groping down between the rock ridges. The snow was very hard but it had large sun cups, concave bowls like waves in a kiddie pool, and the surface of the snow was gritty. I could walk on it as long as I centered my foot in each depression.
As the angle got steeper, I realized that if I lost my balance I would slide a long way. I started slipping frequently and was forced to keep crisscrossing back and forth, trying to find the deepest, most evenly spaced sun cups. I almost fell once, but caught myself by quick, energy-consuming body contortions—bloodying my fingernails on the slope as I slid. My heart was beating wildly and I was out of breath, but I didn't feel scared, just annoyed that I'd expended so much energy. Carefully, I walked back to the rocks and took my pack off, getting out my boots and crampons.
Fear is an interesting thing. I almost never feel afraid, but I do sometimes get a sense of impending doom, and it's almost always at the start of a trip that I know is going to be a disaster. Over time I've learned that it's best to verbalize this feeling to my climbing partners, if I have them. But I've also learned that the trip's momentum can take over so that gut instinct and logic are rarely heeded, to our later regret. On solo trips, though, when I've had that feeling I've always turned around. After all, what are we without partners in a crisis? I'm still convinced that a disaster among friends is more easily survivable and can knit you closer together. A disaster when you're alone is miserable and taxing at best. At worst, with the added psychological pain of being left, it's a death sentence.
I don't know if the engineering student felt afraid during his climb. He didn't have the equipment or the skills to ascend the route, so logically what he did was stupid, but it wasn't hubris that made him do it. He didn't think he could make it. Later we found the campsite he'd used all summer as a base for his hikes. He'd left a post-dated journal, its entries written days before, describing what he knew would happen to him on this climb. He wrote about how hungry he was. How his clothing and blanket were inadequate. How he was going to die. I think he had the same instincts I had. He knew, and he went anyway.
Gritty frost on top of the ice stuck to the underside of my crampons. It got warmer as I went up until it was like a greenhouse in the fog, with the snow getting gloppy. Eventually, I started to get above the fog. The snow stuck in huge balls to the bottoms of my boots, and I'd slide down every time I took a step, before the steel spikes of my dull crampons eventually caught. I debated whether this was better than wearing my running shoes.
I could see the mountaintop every once in a while through a hole in the fog. It looked the same as it always does. Very there. Meanwhile, the angle of the slope I was on kicked back a little, which was nice. I didn't feel so precarious, and I was relieved to see that I was right where I wanted to be. I could see the ridge ahead of me where the two different starts to the Kautz route came together. It wasn't very far away, but it took me a long time to get there.
I finally made it out of the clouds and into the sun. Every few steps, I stopped to enjoy the heat on my back. It felt so good that all I wanted to do was sleep. I couldn't see anything below me because of the clouds, and there were no people, or evidence that people had ever been there. It was as if the whole human world didn't exist anymore.
I came up to the ridge, looking for a little waterfall I knew was there. I found it, then walked over loose rocks at the base of the fall and around a big rock on the ridge crest, and found some campsites lower down. Stashed in the rocks of the campsite I chose were two gallon containers of white gas, a few old pickets, and a broken camp stove. I took my pack off and angled it so my back sweat would dry off it, then took my boots off, and my socks, and propped them up in the sun. Finally I took off my pants and shirt, took my water bottle, and walked naked over to the waterfall to fill it up. I didn't treat the water, I just drank it. It was freezing, it tasted like clean gravel, and it made my teeth hurt.
I put my clothes back on at sundown. They were still damp and cold, but not soaking. I got into my sleeping bag and watched the sunset. I tried calling Charlie on the radio, but he didn't answer. I supposed he was climbing out of Camp Muir. Looking over I could see Muir, a tiny dot on the horizon across the glaciers. It was getting cold fast, and so I zipped up my sleeping bag, pulled the hood up, and tightened the strings so only my nose and one eye were exposed.
The stars started to come out, and the wind picked up. I wanted to watch the stars, there were so many that the whole sky was white and glowing, but the breeze blew gravel in my eyes, up my nose, and in my ears, and I had to stick my head entirely inside the bag to escape it. I didn't set an alarm. I should have, but I didn't because in a lot of ways I didn't want to climb. I just wanted to be too far away to help anyone for a little while. Just long enough for a little rest.
I'm not sure when the student might have passed by my campsite. Maybe he'd passed it without a second thought. Or maybe he had camped there just the night before, and checked to see if the little stove jammed behind the rocks worked—thinking for a few moments about how nice some hot water would be, but then realizing it was a false hope—just like I did.
It was five AM before I got up. I couldn't stand the thought of eating another Snickers bar, so I ate an Emergen-C, the orange-flavored drink mix, like pop rocks for breakfast, straight from the package with no water. It left me foaming at the mouth. I stuffed my damp sleeping bag back into my backpack, and put my boots and crampons on. Then I took one of my two ice tools off the back of the pack, put the pack on, and started working the stiffness out of my legs.
I was tired, and knew that today I had to be slow and consistent. I was worried that the snowbridges over the crevasses higher on the route would be too soft later in the day. Too soft meant I'd fall through into a crevasse and die and nobody would ever find my body, but I hadn't wanted to get up any earlier than I had because a person who doesn't get enough sleep starts to go peculiar, which also can lead to dire results. I started up the mountain an hour after dawn, at first wearing my jacket, but after a few minutes I took it off. The wind had died down and it was going to be a bluebird day. Even the clouds in the valley were gone.
After about an hour I came around a big rock and saw a tent with three men standing around, drinking coffee. "Hi, I'm Bree, I'll be your park ranger today. Y'all having a good time?" They weren't. They had climbed this route a few days previously, but had been too scared to come back down the ice chute at the top of the route, so they had gone down a different way, and then hiked back up here from the parking lot in order to get the camp they'd left. I thought they were burly to come back and get their camp. Most people would have just left it. I'd found perfectly set up camps long abandoned here before, full of new gear. Fortunes of war. Funny how we compare mountain climbing to war, when mountain climbing is something people do for fun.
I left them drinking coffee next to their tents and continued on. I had to climb down below the Kautz ice cliff, a large hanging ice fall. As the day warmed up, the ice cracked. The fall loomed above me, a huge blue art-deco death-trap. I crossed this section at a steady pace. I absolutely didn't want to linger here, but at the same time I didn't want to move too fast and wear myself out completely. I emerged on the other side and for the first time looked up at the ice chute above the cliff. It didn't look too steep. Early in the season, the chute is just steepish snow, but as the season progresses it gets icier and icier until finally it's a sixty-degree slope of water-ice, a clear blue you can see deep down into. I stood below it for a while, knowing I should move on and get this part over with. Looking down at the foot of the glacier, way, way below me, I was happy for the first time in a long while. There weren't any distractions. There was just me here, and I only had one thing that I could do, go up. Simple as survival.
I don't know if the engineering student ever was here at the ice cliff above Camp Hazard. In his post-dated diary, he wrote that on the second day he would be climbing hard ice, but it would be too much for his skills and his homemade equipment. He said that he would fall and slide, tumbling and bloody, into a crevasse. He wouldn't die at first, he imagined, but too injured to get out, he would lie there at the bottom for a long time.
I took my other ice tool off my pack and started up the ice. It felt steeper than sixty degrees, I think because I was alone. My pack felt heavy, but I knew it wasn't. The only things in it were my lightest sleeping bag; half a quart of water; a few Snickers bars; a jacket; map and compass; and a pair of running shoes.
It had been really cold for a long time prior to my climb, and the ice was brittle. When I hit it with my ax, huge chunks split off that would dinner-plate out and hit me in the face, or the knees, or land on my feet, threatening to jar my crampons out of the ice. Sometimes on this section there were huge penitentes, like large misshapen stalagmite formations, that you could hang onto or lean against for a rest, but this time there were none. I kept visualizing myself ripping back down the ice in a wild fall, somersaulting down and over the cliff at the bottom. I had seen what this spot had done to other people. I did not feel invincible. There are some places where a person cannot afford to sneeze, and on this day, for me, this was one of them.
I couldn't get my tools to stick in the ice. It was too brittle. I was afraid that the chunks splitting off would knock me off completely. My hold was too precarious for me to try putting the tools back on my pack, so I left them hanging from my wrists where they caught on every irregularity in the ice. For stability, I put my hands on the ice in front of me and continued climbing with just the frontpoints of my crampons in the ice, a quarter-inch of two metal spikes barely angled in to keep me from dying.
To complicate things, there were huge holes in the ice, dark chasms with thin little walls that I had to step on, step up on, try not to fall into. I was listening to my MP3 player. To the Black Eyed Peas' "Anxiety." The player was in my pocket, and every time I high-stepped my thigh would press the volume button higher. I couldn't stop to fix it, so the music in my ears got unbearably loud in the silent glacial world. When I stopped for a moment, my calves started to shake. The climbing wasn't that difficult, and I couldn't figure out why my muscles were flaming out except that I was being too careful, and needed to move faster.
Finally, I came to the top of the ice. I wanted to throw up, I was so tired. I turned the music down. There was a flat spot. It was late in the morning and I needed to keep going, but I sat down on my pack and ate another candy bar, drank the rest of my water, and then just sat in the sun. It was a fairly flat glacier, gently angling upward above the ice chute. On its surface were several huge round ice blobs that seemed to have come from out of nowhere—they looked like giant snowballs, left randomly. The nearest one was about fifteen feet high. I walked over to it without my pack, climbed to the top of it, and looked around. The view was exactly the same as it had been on the glacier, except for when I looked straight down.
The student spent his summer break illegally camping and hiking through the park, looking at crazy things. I hope he somehow made it up above the chute to see the snow blobs.
It was now very late in the morning, the snow was getting soft, and my feet were sinking six inches or more with every step. There were huge crevasses everywhere with late-season, sagging snowbridges crossing over them like a giant maze, and with every minute the bridges were getting more unstable.
I started walking again. There was a rock ridge I could stay on for a while, but then I had to cross another glacier which was very broken with crevasses. I could only guess which way to go through.
Some of the crevasses were massive, a hundred or more feet across, with the lip on the far side hung with huge icicles, and each end disappearing into the snow. I never wanted to cross the snow too close to the crevasse edge, since it was thin there, but I didn't want to waste time or energy walking too far around, either. Several times, one of my feet punched through the snow and into the edge of a crevasse, or maybe into the middle of another crevasse I couldn't see. I was getting hazard pay for this, a couple of dollars an hour extra.
You can never tell how thick the snow covering a crevasse will be. I always stopped falling after one foot or both feet went in because the additional surface area provided by my crotch, my ass, or at least my backpack when they hit the surface of the snow spread out my weight enough so I didn't break all the way through into a bottomless abyss. When I felt myself start to fall, I instinctively put my arms straight out—something I'd seen mellow drunks do back in my days driving an ambulance. We used to call it the Jesus Position. Maybe I did it so I wouldn't fall in—not that I actually thought my arms would be long enough to span crevasses that were easily fifty feet across. Maybe I did it in case I did break all the way through, for the reason the drunks did it, surrendering myself to whatever came next.
Higher up, I got to a weird spot. As far as I could see in either direction, an enormous crevasse cut across my path. Its edges were overhung, and I could hear water running down into its dark blue everlasting. I could see a few snowbridges across it, but they were all drooping even without my extra weight on them.
There was only one that I thought I could cross. It rose up like a bridge, twisted in the middle, and came down on the far side—looking like the McDonald's arch with a twist at the top. I didn't want to cross it, but I knew I was going to have to. I wanted someone to know where I was in case I fell in. If I died on the mountain I wanted my body to stay there, but I didn't want my friends to have to look for me forever.
I tried calling Charlie on the radio again, and this time he answered. I told him which glacier I was on, and that I would give him a call back in fifteen minutes, which was about the amount of time I thought it would take me to get across the bridge. He wanted to know why I was calling, and I said I would be down to Camp Muir in a bit and I just wanted to know if he was there. I looked around again. It had been nice to talk to somebody, but now I had a deadline.
The bridge creaked and I was petrified crawling across it, but I spent a moment more than I should have at the apex, looking down into the crevasse. The view was crazy. Like looking into a black hole trying to suck me in. I could feel the entity of the living glacier inhale, the warm air rush past me into the shadow, pulling at me, and then I was on the other side.
I decided not to try to summit the mountain. The top is only the top, and I couldn't care less anymore. I hadn't cared for a long time. I needed to get to Muir, and it was all about the shortest distance between two points. The Ingraham Glacier was very broken, and I had to cross it before I could meet up with the popular Disappointment Cleaver route that would take me down to camp.
By now it was afternoon and the snow was getting even softer. With each step I was sinking almost to my knees, but somehow the snow I'd stuffed into my water bottle hadn't melted at all. Such is life. Every time I sank into the snow, I wondered whether my foot would stop, or break through the ceiling of another crevasse and wave around in nothingness. I was thirsty, but I had to continue going up, to within five hundred feet of the summit, in order to find a way through the crevasses. Mine were the only footprints.
It was early afternoon when I hit the DC. The DC is a trail packed down two feet wide; the guide service maintains it with shovels. It was nice to go downhill. There was nobody else this high on the mountain so late in the day. I kept catching my crampons on my pant cuffs, and every time I would almost fall. I laughed at myself a little bit, and was glad no one was there to see me stumble. I had to jump across a few crevasses on the way down, but I did it without breaking stride—I had to have some small skill.
Coming into Ingraham Flats I passed two climbers who were also headed down. I said, "Hi," and asked them if they were doing OK. They looked tired and grumpy. They said they'd been out climbing since eleven o'clock the night before, but they'd made the summit. I congratulated them, and added that if they wanted to come by the ranger station later there might be margaritas. They said that sounded nice, but they said it wistfully, like the idea was a mirage of a watering hole when they were lost forever in the desert. I kept going.
I paused at the top of Cathedral Gap to fix my hair and make sure I didn't have any more gravel in my teeth, and then I cruised down into camp. Charlie was sleeping, but he woke up when I walked into the hut.
I kept more food at Camp Muir than all the other rangers combined. Besides the margarita mix (excellent with a little tequila and late-summer corn snow) I was planning on cooking up some phad thai with a real lime, peanuts, and tinned chicken, but it wasn't meant to be. Charlie sent me immediately on to Paradise, to get ready for another two-day climbing patrol starting first thing in the morning. I hid the lime away and refilled my water bottle, though water was scarce at Muir and I was headed to a kitchen full of faucets. I didn't care, I was thirsty. I took my boots off and put my running shoes back on, and then shuffled off the deck, trying not to look too stiff.
That next patrol never did happen. I returned to Paradise, into the beginning of what became a heavily funded, manpower-intensive incident covered by a media firestorm. The next five days I spent searching the same route I'd just covered, looking for the missing student. He was never found.
## 8
* * *
## THREE DAYS
IT WAS ALMOST SEPTEMBER. Little ice pellets were bouncing everywhere over Camp Muir: over the gravel helipad in the middle of camp, over the walkways and the stone stairs we'd shored up again and again in the sandy hill, over the deck that smelled like piss because nobody wanted to walk all the way to the outhouse in the middle of the night, over the crusty black snow that had been melting around us all year.
I walked outside into the cold, and felt the hail landing in my hair and running down behind my ears. Everything was gray, everywhere I looked. There were clouds above me, clouds below me, and clouds between the buildings. I ambled down the stairs to the outhouse. It was starting to get windy and it would be dark soon, in a little less than two hours. The lingering day trippers needed to get going to make it back to Paradise before dark.
I sloshed some straight bleach around the toilet seat. Rich, Ted's boss and the head of the backcountry buildings and outhouses department, wanted us to dilute it to save money, but the only bucket we could dilute it in was our dishpan. I didn't want to take the dishpan anywhere near the outhouse.
I noticed that the basket under the seat needed to be changed. Down below the toilet seat, six baskets sit in a box. The baskets hold the shit, and this one was full to within a foot of the seat. I really didn't want to rotate it out today. I had another two days up here, without any way to clean myself up, and the job was impossible to do without getting shit on the sleeves of my jacket. I didn't want to be dirty that long; it was disgusting. I decided the project could wait another day.
That was a good decision, because Rich radioed just then saying that a contract helicopter would be at Muir in thirty minutes, and I needed to haul a bunch of man-sized propane tanks to the helipad so they'd be ready to fly out before it got here. They were too heavy to lift, so I dragged them all up and down the stairs across camp as fast as I could. Most of the tanks had been lined up on top of the slab behind the outhouse where the pans under the baskets drain, so the bottoms were rusted out with rain, piss, and who knows what else.
It was a hot job despite the weather, and I'd taken my jacket off and left it mid-run, between the ranger hut and the helipad, on a big rock. When I finally finished, I came back for my jacket but it was gone. I couldn't believe it. People don't steal things from Muir. None of the guides, climbers, or day hikers claimed to have seen it. I sat down on the rock with my back aching, and stared out at the clouds. I felt the day slipping into the negative. It would be all downhill from here.
It was a devastating loss. I only had two warm jackets, and wore one on top of the other. In bad weather, I wore a waterproof jacket on top of both. I wouldn't be able to replace the lost jacket until my next days off, and it would cost a couple of days' wages. I could feel the insidious cold, my enemy, already pressing against my back.
Tired of feeling sorry for myself, I wrestled all the tanks, along with my cooking garbage that I didn't want to pack out, onto a big net on the gravel helipad. I was happy now for the heavy labor, glad for the heat I was producing and for the physical and painful work I could pour my frustrations into. I shooed all the day trippers off the edge of the pad, saying the helicopter would be there in a few minutes. It wasn't true. I could tell there would be no flights here today—the clouds were too thick, it was too windy. I didn't care. I was angry, my voice had an edge to it, I wanted to exercise the full extent of my small authority over these jacket-stealers—the authority granted by my Park Service hat, the only official thing I had on. I was as cold as the cold. We were one.
Looking down through the clouds, I saw a bunch of kids roughhousing on the glacier below Muir. The average climber at Muir is a middle-aged man, so children romping on the glacier was a bit unusual. And this late in the season, during a low snow year, it was scary. They didn't know what I knew—namely, that below camp were a lot of crevasses overhung with thin snow edges that would collapse under body weight and send an unsuspecting victim plunging down into a hundred feet of slimy black ice. As I thought about it, I wanted to experience it: it fit my mood, suffocating alone in the blackness of this uncaring mountain. Still, I had my job to do.
I found a man who was attached to the kids, sitting on a rock, and leaning against the side of the guide service's bunkhouse, a creosote-covered plywood box that looked like it should have blown away years ago. The man was bent over with his hands in his armpits, trying to stay out of the wind. He was wearing an inadequate, thin nylon sports jacket and didn't have any kind of pack with him. His nose was purple.
"Hi," I said, "I'm Bree, the ranger up here on duty this afternoon. Are those your kids?" I pointed in a sweeping gesture to encompass the six or so kids, barely visible because of the hail, who were wrestling in the snow and sliding down the shallow incline towards the nearest of the gaping crevasses. I paused, squinting into the haze, trying to pick out details. The girls wore long flannel dresses and bonnets. Both girls and boys were soaking wet, covered with hail and the cruddy summer snow.
"Some of them are mine," the man said, unwilling to look up at me because it would mean moving his chin out of the paltry warmth of his jacket.
"Well," I lied, "it's going to be dark soon, and so I'm just letting everybody know that now is probably a good time to start heading back down the hill." As much as I believe in personal responsibility and learning from your mistakes, I was seeing a lot of suffering in this group's near future once they quit horsing around and realized that it was snowing and windy, that they didn't have any dry clothes, and that they were a long way from home. I thought maybe they needed a nudge towards enlightenment.
There was a long pause and the hail fell like BBs over us. I stood silently with my hands in my pants pockets, waiting for the response I wanted. If I didn't get it, I would start with the strong-arm stuff. I didn't know what I could do really, but I wasn't above anything today.
"Yeah," the man said, finally, "we should probably get a move on. Most of our group didn't make it all the way up to Muir, and they're waiting for us a little ways down."
"Ahh," I said, "well, have a good trip." I said it warmly, and it was the only warmth there was.
I got another radio call from Rich. Flights were cancelled because the weather was getting worse. "OK," I said. The propane tanks could sit where they were on the pad until tomorrow. I didn't want to be outside anymore today, I wanted a cup of tea and my sleeping bag.
It had been a hard week already. My climbing boots had blown out, so the only footwear I had with me were my running shoes. I'd duct-taped a pair of flimsy aluminum crampons onto them for the unreasonably large stretches of ice that had suddenly appeared above Pan Point, halfway to Muir from Paradise. I worried that someone would need help on the upper mountain, and I really didn't want to go up there by myself in this storm, small as it was, with just running shoes. Sometimes climbing the mountain in running shoes isn't so bad, but a protracted rescue in cold footwear means frostbite and, personal injury aside, if my boss saw me I'd be fired. Two climbing rangers had fallen and died because of poor-quality footwear, and there was now a no-tolerance policy.
A few weeks before, the park superintendent's personal aide, Randy, had asked me what kind of boots I used on the mountain. I hesitated, and then told him my boots were not actually very good, but that I was saving up to get the pair I needed. I asked him a lot of questions about his feet and gave him some good suggestions for the type of boots he needed, since we all knew he and the superintendent were planning on climbing the mountain sometime during the summer. We had a nice chat, but Randy talked to Mike about my boot crisis, and Mike sat me down and told me that any time I needed gear, I only had to come talk to him, not blab to the whole park. He had actually promised new boots for all the climbing rangers, a nice gesture because the park wasn't obligated to provide us with equipment, but the money ran out once the north side rangers had gotten theirs. Politics. I admonished myself to learn to keep quiet.
I climbed the stairs back to the ranger hut, the Butt Hut—affectionately nicknamed for a ranger named William Butler and unfortunately descriptive of its general appearance and odor. It wasn't any warmer inside the tiny plywood hut than it was outside. When I tried to put water on for tea, I found that the stove wasn't working. One of the two propane tanks we used had been empty and I'd taken it to the helipad to be flown out, but I realized now there was no shut-off valve between the two tanks, so when I turned on the good tank, the gas only hissed out the empty side of the connector. Damn. No stove to melt snow for water or to make tea.
It was going to be harder than I'd thought to warm myself up. I'd boxed up all the extra jackets and three of the four sleeping bags to be flown down for their yearly cleaning. They were communal sleeping bags, slept in every night all summer by a variety of sweaty, hairy, chili-eating rangers. I pulled off my wet running shoes and got into the remaining bag with all my clothes on for a nap and to try to warm up. It was hard to sleep with frozen feet, knowing they were down there at the end of the cot, jammed up against the wall all white and wet and wrinkled. I tried wiggling my toes, and most of them responded. Maybe things weren't so bad, even though I was curled up in a fetal position, almost incapacitated by violent shivering.
About ten minutes later the guide service started calling Camp Muir. They had to call a few times before they got my attention. The sound of the radio was muffled through the sleeping bag and there was also, like every day, a steady stream of radio traffic from the law enforcement rangers reading out license plate numbers in the parking lots and on park roads for background checks: "I'm rolling east on 123 at milepost 16 with yankee, bravo, blah, blah, blah, comes back clear and valid to a Mr. Goodman out of Enumclaw, blah, blah, blah."
I tried to answer the guide service in a lull, but the communications center cut me off for interrupting, reminding me that law enforcement traffic takes precedence on the radio because their jobs are dangerous and their communication link vital.
I waited, feeling my fingers stiffen around the radio mike. Eventually, the conversation ended, the tag was clear and valid, the case number confirmed. RMI said they were coming up the mountain with a bunch of clients. They were currently at nine thousand feet and had just passed a group of sixteen people, mostly children, who were unable to get down the mountain because they were afraid of falling on the ice. All of them were wet and hypothermic.
I burrowed into the sleeping bag, stretching the mike cord. This was just fantastic. I was sure it was my group from earlier, and they'd only made it down a thousand feet, a fifth of the way to Paradise. I thanked RMI for letting me know, and said I'd go get them. I lay there a moment longer with my eyes closed. I still couldn't feel my feet, I was spent, I let out a long high-pitched whine like a dog.
I wished there was another ranger here so we could laugh about the ludicrousness of sixteen people in bonnets loose on the snowfield. We could groan together; it would be a funny story to tell later, with our audience bent over, unable to breathe, with tears in their eyes. It was a tragedy there was no one else here.
Finally, I loosened the drawcord on the sleeping bag and reached up for the cell phone that worked as long as my head was at the right angle. I called the Paradise dorm, hoping some of my coworkers would be there. Adrienne answered.
"Hey," I said. "There's this big group of, uh, you know I think they're Amish people, sixteen of them, mostly kids. RMI just called me and said they're stuck at nine thousand feet, and they don't have crampons or anything to get down the ice, they're hypothermic and not moving. I guess I'm going to go down there and see what I can do for them, but if you could call Stefan and ask him if he'd hire one of the volunteers just to bring up some more dry clothes and help me corral these guys down, it would be real nice."
"OK," said Adrienne. "Call me back in about ten minutes." I unzipped the sleeping bag and felt the whoosh of all the moist, warm air escape in a steam cloud, instantly replaced by bone-chilling dampness. I put my shoes back on. Thankfully, they weren't frozen yet, but there was water bubbling out the toes. I drank the last of my Gatorade, then called Adrienne back.
"Nope, he won't allow it, he says there's not enough money left in the budget to pay anybody overtime, or hire any volunteers even for a few hours, especially if it isn't a real emergency. Have fun, though," she said, giggling. "Glad it's you and not me."
"Thanks for trying, Adrienne." This was my third summer as a climbing ranger on the mountain, but for the life of me I couldn't understand what constituted an emergency and what didn't. I'd seen legions dispatched for a dizzy person in the meadows on a midsummer day, inordinate numbers requisitioned for broken legs, hundreds involved in hopeless body recoveries, but help rarely seemed available when I needed it. When help did come, I was second-guessed later: I was weak for having asked. I didn't understand it, because as a personal philosophy I like containment, a backup plan, a partner for safety's sake. I resolved to keep asking for help no matter how weak it made me look.
I filled my pack with ice axes and a first-aid kit, and grabbed a handful of Snickers bars for my pockets. Almost right out of camp, I had to put my crampons on. I'd never seen the snowfield so icy. It was deep blue ice with runnels of water becoming moulins, huge rivers running swiftly over the glacier in progressively deeper troughs until finally they plunged through black holes deep into the glacier. These things eat people.
I couldn't get my crampons adjusted small enough for my running shoes—I needed to drill a new hole further down the bar, but I hadn't had time. With every step, my foot slid about an inch forward to the end of the crampon, my toes bashing into the front bail. Then, when I lifted up my foot, it made a slapping sound, like walking in flip-flops. It was the only noise besides my breathing and the hiss of the hail that continued to fall.
Visibility was terrible, and I was afraid of missing the group altogether, so I zigzagged back and forth across the slope hoping to see them. Finally, I found them huddled together on a little rock knoll. It was the group from before, but they'd apparently met up with the rest of their party, which was a good thing because now I didn't have to look for the other half, too.
"Hi," I said. "I'm Bree, the ranger up here. You guys want to get out of here?" I stood before them hugging myself, trying to keep the heat trapped in my one remaining warm jacket.
"You're it?" said one woman with a huge bosom. "We asked for help, and you're IT?"
I realized I was going to have to change my posture. I stood up straight, palms out, and started gesturing. "I'm not so bad," I said, smiling, "I know the way down, and I've brought chocolate bars." I pulled the king-size Snickers bars out of my pockets, and started waving them enticingly, though looking at the crowd of them I added, "You might have to share."
The woman glared at me. I wondered if she knew I wasn't getting paid for this. I wondered if she knew that if I wasn't there, she'd probably freeze to death, or fall in a moulin and then freeze to death. The group was much too far west of the main trail to make it down. Without a major course correction, they'd end up cliffed out in the dark in the middle of a hailstorm. I wondered why she still managed to make me feel defensive, like apologizing for showing up.
I looked around. The women and girls wore, in addition to the bonnets and dresses, high-heeled lace-up boots with no tread on the soles. I opened my pack and handed out my hat, my one pair of gloves, and my waterproof jacket, one each, to three of the kids. They were all shivering and glassy-eyed. Some of them were a little bloody on the chins and elbows from previous spills, and they all looked disinclined to continue, some of them lying in an all-too-familiar fetal position on the ground between the rocks. I took off my one remaining jacket and gave it to a little boy who couldn't have been more than four, leaving me with only my still-damp synthetic T-shirt.
"It's not enough," said the woman flatly. There were nine young children in the group, all of whom were still obviously freezing, not to mention all the adults, and now me, too. "It'll be fine," I said. "Once we get moving, everybody'll warm up. Let's go, quick."
"I can't go," the woman replied, almost mocking me, trying to be ornery. "I sprained my ankle in a fall, and now I can't walk on it."
I looked off into the distance, thinking, _We have to go now, we have to go now, we have to go now._ I looked her in the eye and said, "Well, if you stay, you'll freeze to death, and since you're choosing to stay, it's suicide, and can you really get into heaven that way?" I hoped there was some sort of religious background associated with the bonnets that I could guilt her with. "Here, I'll look at your ankle real quick."
I took her boot off and she was only wearing nylons. Her ankle was slightly blue. I wrapped an ace bandage around it and shoved her boot back on as fast as I could with frozen fingers. The men were sitting slightly apart. We didn't speak to each other, although they looked at me every once in a while.
Then I stood up. "OK, let's go."
I handed out the ice axes, saying, "These are short walking sticks, use them to keep your balance, and if you start to slide, let it go." I figured this was a fine tutorial, since it was probably too icy to arrest a fall anyway. A few of them asked if I had any water, and nobody was pleased when I said there was a drinking fountain in Paradise. Two other women helped support the busty one, one on each side, and I was grateful for their help.
I put my empty pack on the four-year-old and let him ride piggyback. We started out with the rest of the children trailing silently behind me. It was slow going, and even more difficult because the men would go too far ahead, always in the wrong direction. I would yell at them to wait for the rest of us, but I think they couldn't understand why the ranger couldn't keep up with them. They kept yelling back that if I thought I needed to lead them, then I needed to stay in the front, otherwise I wasn't helping. Finally, tired of yelling, I just took my own way and hoped they'd look back every once in a while and change direction with me.
It was hard to see anything and I had to keep my compass out around my neck, checking it constantly to make sure we were still somewhat on track.
It got dark. We made an impressive group, weaving down the mountain. Mine was the only light, and then there was a line of shadows trailing me. As we got lower, it got warmer and started sleeting. It was so foggy I couldn't tell if everybody was still together or not, but I thought it would be hopeless to try counting again and again.
When we finally got to Pan Point, the glacier ended and we were back on the trails, two-thirds of the way back to Paradise. The men and some of the faster kids took off then, stumbling in the dark, unwilling to wait for the large woman and her cohort. I gave them directions on what trails to take, and hoped they remembered the order. The rest of us made it back to the parking lot in the middle of the night, soaked and freezing, but alive and together. The kid I'd given my jacket to had had a bowel control problem in it, and he'd also somehow ripped the zipper off.
The rest of the group were waiting at their cars, angry that we'd taken so long. Apparently, there was an issue with who had the car keys. I figured they'd tell me if anyone was missing. I asked for my hat and gloves and raincoat back, and waited a long time in the rain, holding my soiled jacket, standing outside their bank of vehicles while they changed inside them. The kids finally threw my things out the fogged-up windows, where they landed in puddles—which didn't make them any wetter. I thanked everyone and walked off into the dark, back to the dorm. I couldn't handle going back up to Muir just yet.
I wondered what the group would think about the experience, if they ever thought about it. I wondered if they'd complain. I wondered what would happen to me if they complained. Some of them, especially the vocal woman, were pretty angry at the delays, and the lack of support, and that I hadn't had more lights, and chocolate, and clothes. I don't know. I mean, what did they want from me? It's true that they paid to get into the park, and that money is supposed to cover emergency services, and I was a pathetic thing to hold up under the heading "emergency services."
But it was the middle of the night, I wasn't getting paid, I was going to have to buy more chocolate bars, and I was out two jackets in one day. I also needed to start back up to Muir, because I needed to get back to my post in case something else happened.
I didn't go back up. I couldn't leave the dorm. I felt like a failure and an embarrassment, and the self-pity was paralyzing. Everybody was asleep and it was warm inside, and the kitchen light was glowing, inviting. I peeled off all my wet clothes to take a shower and turned the water up as hot as it would go, hot enough so that I had to turn in tight circles to keep from scalding any one part of myself. I stayed in until all my skin turned red. It was almost four AM when I crawled into bed and set the alarm for six.
In the morning I rolled out of bed, feeling old, dizzy, and weak. I put on the same clothes from the previous day, and halfway through I remembered that my lightweight hi-tech alpine jackets were no more. Casting around for replacements, I pulled on my green park-issue uniform jacket and then struggled into my dressy out-on-the-town jacket. The result was bulky and odd looking, but warm enough. I walked to the kitchen, refilled my water bottle, and threw in two scoops of orange-flavored Gatorade powder. I opened the freezer and checked out my breakfast options. There was a huge stack of Eggos and nothing else. I pulled out eight and popped them two at a time into the toaster oven to eat and then repeat.
I started hiking at six-thirty. I didn't want anyone to know I had spent the night in Paradise. The truth is, I could have made it back up to Muir, but I was lulled by the twin sirens of bed and hot water, and I was too weak to refuse. I couldn't figure out what was wrong with me. I was supposed to love climbing and live for the thrill of every second I could be out on the mountain. Charlie worked hard to give me as much time on the mountain as he could, away from the office and the meaningless stacks of government forms. I was grateful, but... there was no but. I needed to be grateful, and beyond grateful, I needed to love it. "I love this," I said to the fog outside the front door.
The truth was, I did love it. In my first season I spent a three-day patrol hiking from Longmire to climb Success Cleaver with another ranger. It was remote enough that no other climbers ever ventured there and the alpine meadows and views from the Cleaver were the best I'd ever seen on the mountain.
I enjoyed helping folks, too. The previous year, one day when a friend was staying with me at Muir, I got a call at about five in the afternoon about a stranded party on the Cleaver. We decided to go up and see what the problem was, and if there wasn't a problem, we'd just keep going and do a sunset climb. We found the stranded party, a group of marines who had spent the last eight hours sitting on the Cleaver staring at their map, but they refused to take route suggestions from a girl in pigtails and Patagonia Hawaiian flower print shorts (I was off duty). I said fine, we were going to summit, and if they changed their minds they could follow us back down when we returned in a few hours—which was what eventually happened. It had been great fun.
Sometimes even just being at Muir was good. We used to melt our water there in a sixty-gallon black plastic garbage can we'd set on the south side of the deck in the sun. The water did get a bit gunky and silty at the bottom, but as long as we used water off the top and added snow every time we took water out, we had a constant water supply that rarely froze up completely. At least, it worked well until one day at the very end of the summer, when the ice block that was always floating on the top finally melted. Charlie and I realized, to our horror and amusement, that a mouse had drowned in it and had probably been slowly decomposing in our water supply for several months. That was the only year we used the garbage can. We took a picture of the corpse and stuck it on our fridge in Paradise.
But I was alone too much. I needed a real partner in crime, someone to play off of, so we could motivate each other, share stories and have each other's backs.
The weather was getting better. I hiked past the few late-season flowers in the dying meadows, past the brown heather between the rocks, up into the snow. It wasn't warm enough to stop moving this early in the morning, and I only paused momentarily, just above Pebble Creek, to put my aluminum crampons back on my running shoes. None of us had needed crampons to get to Muir until this fall, and I think there was a bit of a stigma about using them. I'd seen some of the guides going without them and I'd tried it a few times, but for me it was faster to wear them and deal with the potential humiliation than to slide backward every couple of steps. I wanted the trip to be mindless. I just wanted to get there.
I looked up. It was definitely clearing up—still overcast, but the clouds were high, and I could see all the way to the lenticular cloud swirling on the summit. I always thought these clouds seemed like living things, blasting around like the Fates dancing.
When I finally got back to the Butt Hut, it was after ten. My fingers were cold, and it was hard to open the combination lock on the door. Inside the hut was messy, my food strewn everywhere from yesterday's search for good things to eat that didn't have to be cooked. Still standing in the doorway, I turned around and squinted at the campground. There were a few tents out there. I hadn't had time to do rounds last night, so I didn't know who was out climbing and who wasn't. I hoped anyone who'd gone would come back soon without incident.
I closed the door and stood in the middle of the hut for a minute, shivering and hunched over. My mind completely numb, I stared at the lighter from the Paradise Inn gift shop I'd left on the counter. "I love this," I said aloud. "I love this, I love this, I love this." I went back to staring at the lighter. It had a tiny white outline of the mountain on one side.
I remembered that there was an emergency hiking stove, a beat-on little thing, which I found and quickly assembled on the countertop. It wasn't supposed to be used indoors, because it produced carbon monoxide, which can build up to toxic levels and kill people in confined spaces. The hut had had a carbon monoxide alarm, but it went off every time we cooked on the regular stove, and half the time otherwise, so finally we threw it out and vowed to keep the door open a crack when we cooked.
I threw a huge pot of ice on top of the single-burner stove. It wobbled a bit, but stayed on. It would take a few hours for the water to melt.
Right after I put it on, Rich called and told me that flights were back on for the day. He was also sending up some equipment, and wanted to make sure there was a clear place to drop it off. A construction crew was remodeling the public shelter that summer—they had only a few weeks to do it between peak climbing season and winter—and right where the equipment needed to go were two and a half pallets of concrete.
The helicopter was minutes away, so I grabbed the construction folks. They were an odd couple amid the black Gore-Tex and bright beanie-clad climbers in the campground: he a blond Nordic god with a huge beak nose, always decked out in overalls and a handknit blue sweater; she tromping about in knee-high rubber barn boots and braided pigtails. We moved all the concrete over about six feet as fast as we could, bag by eighty-pound bag, running and stumbling with the bags, tripping over each other, and laughing. It felt fantastic to have the company. By the time we were finished it was getting windier, and I started to doubt if the flight would go.
I walked back to the Butt Hut. The stove had gone out. I pumped the fuel bottle to repressurize the line and grabbed the lighter. Rich called and said that flights had been cancelled again because of the weather. We'd try it again next week. "OK," I said.
I needed to rotate the baskets in the outhouses, but the wind was going to make it difficult. As soon as I would open the back of the box, used toilet paper would fly around in the wind. There were separate receptacles in the outhouses for toilet paper, but a lot of people didn't use them, instead tossing the paper down the hole.
We had white Tyvek moon suits to use when we dumped the filled baskets into sixty-gallon barrels that were flown off the mountain every week or so, but the suits were too expensive to use every time the baskets got rotated. So I just had to be careful when I leaned way in, with my head between the underside of the toilet seat hole and the top of the full basket, in order to heave the heavy basket over. I was tempted to put my elbows on the edge of the waist-high pan or to press up against it to get more leverage against the basket, but the whole edge was covered with dried human shit, and because I was wearing my out-on-the-town coat this time, I really didn't want to touch anything. I pulled on my latex exam gloves, held my breath, and headed in.
At least there weren't any flies at Muir. No animal life made it up this high. Sometimes a bird or mouse spent a day or two, and then moved on. Twenty years ago, I was told, a whole cloud of monarch butterflies had been blown in during a storm, and they covered everything so thickly that they dripped from the eaves of the buildings. That would have been something to see.
I walked around the front of the outhouse and peered down the hole to see if the new basket was centered under the toilet seat. Close enough. It sucked when shit spooged down and then dried onto the basket handles.
The stove had gone out again. I pumped up the fuel bottle and relit the stove. The ice was starting to melt; now it was a floating block with an inch of water around the edge. I decided it was enough to make lunch. I heated up a can of "just add water" soup, and made some tea along with it. The soup was gritty from silt, but it was hot and really, really nice. I fancied that it would be fabulous with some additional carrots and onions, and maybe some corn and mushrooms and some vegetable broth. Maybe a nice rosemary potato bread and butter, warm of course.
I left the lunch pot outside next to the door, put the water pot back on the stove, and gave the fuel bottle a few more pumps. It was afternoon. After the warm meal and my late night, I was really sleepy. All I wanted was to lie down for a minute or two.
I looked at my watch, and it was only three. It was cold and dreary outside, and cold and lonely inside. Somebody had broken the antenna off the little FM radio and it wasn't working. The drawl of the law enforcement rangers on the Park Service radio was hypnotic: "Foxtrot, sierra, charlie, papa, copy?"
I couldn't help it. I had to nap. I pulled the drawcord tight on the greasy fartsack so only my nose was sticking out. My joints ached, and my shirt and socks were still soaked from the trip up that morning. I'd picked up a layer of dust from the concrete and it had become mud on my black climbing pants. The sleeping bag was still damp at the bottom from yesterday's wet socks. I curled up... and there was a knock on the door. "Come in," I yelled from inside the bag.
The door opened. I moved the hole from my nose to one eye. I could see a silhouette of a man with a lot of jackets on. He looked humpy and tired. Behind him I could see the clouds were starting to break up. There were patches of blue, but no sun yet. "Lazy days," the man said, eyeing me back.
"Yeah," I replied, struggling to find the drawcord again and stick my head out. "Late season and all that. I'm Bree, I really am the ranger on duty up here today. What can I do for ya?"
"Nothing," he said. "But my group just got down from the summit." There was a pause while the man let me digest the awesome fact that he had just summitted the mountain. I'm sure my facial features gave away that I remained unfazed.
"Anyway, on our way down we were behind this group of three guys, who hadn't summitted,"—there was another pause, and then he continued—"when these huge ice chunks let loose from the Direct right next to the toe of the Cleaver. I mean right at them. We yelled and they yelled and ran, but it totally smashed into them. They got pretty banged up and I'm not sure they're going to be able to get down. Anyway, they're all still alive and stuff, and we told them we'd let you know on our way down and that was only, like, probably four hours ago or so."
I pushed one arm out of the sleeping bag and picked up my tea. Might as well enjoy the rest of it. "So, were they moving when you left, or were they going to stay where they were?"
"Don't know," he said. "We've got to keep going down, gotta get to the bar down there in Paradise before it closes. Celebrate our summit."
"Just one more thing," I said. "Was anyone going to help them, or were you guys the only ones who were left up there?"
I didn't bother with radio traffic. A visitor had reported an erratic driver near Sunrise Campground, and I could tell that it was going to be a huge incident: several law enforcement rangers were responding and the radio traffic was intense and strained. I got out the cell phone. Charlie was gone for another three weeks climbing in the Bugaboos, and Tom lived in the basement of the visitor center and didn't have a phone. I called the dorm. After I'd let the phone ring a long time, Adrienne answered.
"Hey," I said. "I just heard that some guys got winged by a falling serac right above Ingraham Flats. I'm going to go up and see what I can do for them. If you wouldn't mind, could you call Stefan and ask him if one of the volunteers would be available if this turns into a real emergency and I need help getting these guys down? Just let him know, y'know, I'm going to go up there."
I didn't really know what I wanted to say. Adrienne agreed, though she didn't sound happy, and I gathered that Stefan had been grumpy the last time I'd asked her to talk to him. I told her I'd call her back in ten minutes to see what the news was.
I looked around. It would be dark soon. It was still cold but crisp, and visibility was good. A light cover of hail over the dirty snow made everything look cleaner and more beautiful. The air smelled good. A slight breeze from the southwest kept the smell of the outhouses wafting away from camp, always a sign things were going right. Literally.
I stood on top of the tallest storage box outside the hut and looked up at the mountain through binoculars. I couldn't see anybody on the skyline between Muir and Cathedral Gap. That was most of the way to Ingraham Flats. They really must not be moving. I called Adrienne back.
"Sorry," she said. "Stefan said no, and wanted me to let you know not to call unless you've got a true emergency 'cause we don't have the money for this. Though," she added, "if you really do need help, give me a call and I'll come up and help you."
"Thanks, Adrienne," I said, "I'll let you know." It was sweet of her to offer, but she'd tripped a few days ago and reiunjured her torn anterior cruciate ligament. I knew she could barely walk on it.
I set out alone. A few weak rays of sunshine were barely making it over the edge of the mountain. The sun was disappearing behind this huge shadow, this mountain that blocks out the sky. I was irritated, and feeling angry again. My toes were cold already, or maybe they had never warmed up. My flimsy crampons looked silly on my running shoes, and my heels were starting to wear thin where the straps on the crampons rubbed. I should have put some duct tape on them. I felt embarrassed in my getup. Embarrassed for being alone. Part of my job was to make sure that climbers understood the importance of good gear and safety practices, but I was no example.
I buttoned my town coat and brushed off the sleeves. I reasoned, George Mallory climbed Everest in a tailored cotton jacket... oh, but wait, he did die on the descent, or maybe it was on the way up. It would have been nice to rope up, too. Some of the crevasses were large enough that it took a long jump and light thoughts to cross them. With partners, if you fall in you can get hauled back out, feeling warm and fuzzy because your friends are there to save you. I would just have to be careful.
My pack was heavy. I'd taken the first-aid kit, a short rope, and the damp sleeping bag, just in case someone needed it. I also had my compass, a headlamp, a quart of water, and some almond Hershey's bars. I started up Cathedral Gap. It was all rock and steep sand that was exhausting to climb; I slid down with every step. Sometimes on the Gap whole sections would slide, like a rock version of a slab avalanche, with the mud at the top breaking away and leaving a fracture line. It was a scary thing to be on when that happened. Once I'd been helping a group down through this section when it started moving under us, and together we'd all run as fast as we could, knee deep in sand, before it could slide us over the edge and back down to the glacier.
I picked my steps carefully and made it to the top, turning to look back at camp and the pink sky. At the top of the Gap the route turned a corner and started up a little ridge to get to Ingraham Flats. All along the side of this ridge are loose rocks glued together with ice. My crampons' plastic straps hurt my bare ankles whenever the terrain started sloping to the side. It was painful walking, but not that bad, and I felt a bit wussy.
It's fine, I kept telling myself, it's fine, it's fine, it's fine. But I knew the truth: I wasn't. During the off-season my co-workers went to China or Pakistan or Tibet and climbed much harder routes, loving every minute of it. They thrived on near-death experiences, climbing with people they barely knew. All I wanted to do in the off-season was take three or four hot showers a day, cook up huge meals of fresh vegetables, and spend time with friends.
I was pretty sure I still loved the mountains, but I wanted fellowship in the mountains, not judgment or neglect. I didn't want to keep proving myself over and over, when nothing I did seemed to be enough. Climbing alone wore me out and felt like a pointless risk. I wanted more than the act of climbing. I wanted to forge friendships by climbing. I was mad at myself for risking death every day without even that possible payoff. Every adventure I went on alone seemed like a lost opportunity, and constantly climbing alone made my suffering seem meaningless.
I shook my head. The mountain was watching. I wondered what it thought. I wondered if the mountain felt cold. Wished it could trade places with Ayers Rock in Australia for a few months of glowing red in the sun. Or if it was sad it wasn't a big impressive mountain like K2 with a reputation for killing as many people as it let summit. Hmm. I wondered for an instant what all those people thought about, the second before they died. I'd already planned my last thought. I'd decided there were too many people who died yelling, "Shit!" I wanted to die thinking about white-chocolate-chip macadamia nut cookies. Dying while thinking about dessert seemed unique and tasty and important to me. And I hoped I wouldn't die alone.
The route dropped down a bit going into the flats, and I could see the guys I was looking for, coming towards me. There were three of them, obviously disabled. One was carrying all three packs, and the other two were walking really, really slowly. They were walking, though. I was happy. Mobile was good.
"Hi," I said to the guy in the front of the rope, "I'm Bree, I'm the ranger on duty up here today. Somebody said you got hit by icefall. Are you doing OK?" I squinted at the guy, looking him over for injuries.
"You're the ranger?" He looked me up and down.
"Yes," I said, straightening my nightclub-worthy jacket. "I'm here to help you. Would you like a Hershey's almond chocolate bar?" The man with the three packs was Native American. He looked old, maybe in his sixties, and he wore his waist-long hair in two braids down his back. He was wearing an ancient green plaid wool jacket that buttoned in the front, and had a huge digital camera slung over one shoulder.
"I'd love a chocolate bar," he said, and then added, "I'm OK walking for now, but the guys are pretty tired. We did get hit by the ice. One of the guys broke his collarbone, and maybe some ribs, and he got kind of cut up when he got hit with a chunk of ice. The other guy had a bad knee, so of course that's the one he fell over on when he dove out of the way."
"But you're fine? Are you doing OK carrying all the packs?"
"Yeah, but we've got a lot of stuff down at Camp Muir and I'm not going to be able to carry all of that."
"We'll worry about that when we get down there. You guys seem to be doing just fine. Did you hang out where you got hit for a while, or have you been trying to descend since the accident?" I was going to be very worried if they'd been moving the whole time, because they'd made it less than five hundred feet in several hours.
"We stopped for a long time," he said. "We were tuckered out, anyways. We left camp to start climbing twenty-one hours ago." I was relieved, and walked over to the other two.
The second guy on the rope had a bad case of acne. He looked young, maybe about eighteen, with a bloody bandana wrapped around his head. "Are you the bad knee guy, or the guy who got hit by the icefall?" I asked, to make conversation.
"Icefall," he answered, humorlessly.
"Your friend says you might have broken your collarbone and some ribs. I've got a first-aid kit, want me to see what I can do to help?" I unzipped his jackets and lifted up his shirt. There was a huge black bruise the size of a basketball on the side of his chest.
"You having any trouble breathing?" I asked.
"Hurts, but I can do it."
"Sweet."
"Cold though."
"OK." I rezipped his jackets. It's a fantastic thing about the mountain: By the time I get to people they're often either fine, or already dead. It saves a lot of work trying to figure out what's a critical injury and what isn't.
"I can give you a sling for your arm, and it should make your collarbone feel better," I said.
"Do you have to take my jacket off again?"
"Nope."
"Sweet."
I dug out the first-aid kit and came up with a couple of triangle bandages, which I used to tie a sling and swath. "Nothing else hurt?"
"Nope."
"Sweet."
I handed him a chocolate bar and walked on to the third guy. "Knee guy?" He looked up. He kind of looked like a hippie, I guessed in his thirties, with a big black beard. He had two ice axes, which I noticed he'd been using as really short crutches before his party had sat down for their break.
"Is there anything I can do to help, or can you keep walking out on it as-is?" I wasn't going to give him any options where he didn't have to walk out, because today, he didn't have any.
"It's fine," he replied.
"I'll give you my trekking poles," I said. "That'll be a lot easier than the ice axes."
I walked back to the first guy, and said, "Let's get our head-lamps out now, and then we won't have to stop again later."
We started back to camp. I was afraid they weren't going to make it up the tiny uphill right before the start of the ridge, but we did a rest step: Take a step. Count to four. Take another step.
It was full dark before we even got to the top of Cathedral Gap. There wasn't any moon and the stars were really bright. It had finished clearing, and there wasn't a cloud in sight. The hippie guy knew all the constellations, and when we took breaks he'd show them to the rest of us. I said that sometimes in the early morning you could see a little bit of the northern lights, just a bit of a green wave before dawn. There wasn't anything there yet tonight. We all agreed that was too bad.
My light made a circle on the snow in front of me. I couldn't see anything outside of it, and felt claustrophobic to be stuck inside it, dependent on it to get me down. I resolved to stop thinking about it, and then after a while I stopped thinking about anything. Somebody would stumble or groan, and I'd say, "You all right?" But we all just had to keep walking.
When we got to Muir, I asked, "What do you want to do?"
"We want to keep going, he needs to go to the hospital," said the Native American, pointing at the acne kid.
"That's true, he probably does," I said, and resigned myself to this trip taking longer than I'd planned. They didn't really need me to go with them—if they left all their stuff at Muir, if everything worked out perfectly for them, if they didn't get lost in the dark with the unfamiliar ice, if the guy's crushed chest held up as it had been. And if they stayed here? Maybe the kid would die in his sleep. They looked at me expectantly. "OK" I said. "I'll help you guys down. Where's your tent?"
I packed up their stuff and wondered why people carry so much crap up to Muir. There was no way they'd used this many spare pairs of long underwear, and what was the point of having a second spare fuel canister? I made up one huge pack and combined the other two into a strange-looking contraption with a bunch of gear sticking out the back and sides, then I struggled into one of the beasts. The force of it hitting my back made me stumble forward a few steps and I regretted not having my trekking poles to steady the load and save my dissolving, creaking knees. Somebody from a nearby tent yelled that some people had to get up and climb in the morning, and could I please be quiet. "Sorry, we'll be gone in a minute," I whispered.
I gave the other pack to the Native American, and we headed down from Muir towards the tiny lights of Paradise. There is nothing to say about the trip except that it was long, and that we made it. I left them in the parking lot with some handwritten directions to Good Samaritan Hospital in Tacoma.
"We didn't even summit," one of them said sadly in the dark.
"It really doesn't matter," I said. "It really doesn't."
In the dorm, the kitchen lights were on. Someone, probably Adrienne, had made blueberry muffins before going to sleep. They were hers, but I ate them anyway. I was only going to eat one muffin, but once I'd eaten one I couldn't stop, and I ended up eating most of the dozen. They were fantastic. Then I decided I fancied some Eggos, and I pulled out another eight. "Tonight, I will eat them with jam," I thought, and filled up each one of the little square indents in the top with raspberry jam. Tasty and pretty. I inhaled them.
I took off my wet running shoes. They were showing a lot of wear from the crampons, with holes on both sides of the toes from the front bail. My feet weren't doing much better, and neither was my out-on-the-town coat for that matter. I was too tired to take a shower, so I set my alarm and went to bed still wearing all my clothes. Every joint and muscle in my body ached, and it was only day two of my eight-day shift. Before falling asleep, I wondered briefly if I should add an iron supplement to my diet.
Alarm. Six in the morning. Dizzy, I stood up and clutched at the bedpost. It was going to be a sunny day, maybe even warm. I squatted down to get my shoes off the boot drier, but then my knees refused to let me stand up again and I ended up having to sit down, roll over, and awkwardly push myself back up. Damn people with their damn heavy packs. I spent a little while looking for my pack before I realized I'd left it up at Muir. I didn't really need to bring anything with me anyway, it was all up there already. I put my Park Service radio in one pocket and my keys in the other, slung my crampons over my shoulder, went to the kitchen, and drank a quart of orange Gatorade without coming up for air. Then I grabbed a handful of store-bought gingersnap cookies for breakfast, and headed out.
I was tired and it took a long time to get to Muir. It was almost warm when I got there. I'd taken my on-the-town jacket off and wrapped it around my waist, going for the professional uniform jacket look. Nobody was around to notice.
My feet really needed some duct tape. I rummaged around at Muir for some, and then pulled my shoes off and wrung my socks out. My heels were bleeding, and my toes weren't doing a whole lot better. I covered everything with duct tape—big strips across each heel, and short pieces for each one of my toes, individually wrapped. Much better, if a little weird-looking. I put my socks back on. Yes, better.
Rich called on the radio to say he was headed up to Camp Muir and needed help fixing some stuff with the outhouses. I liked Rich, but I had been hoping for a day focused more on visitor contacts and less on heavy lifting. I sat and pouted for a few minutes, staring out through the door of the hut at Muir Rock across the way, where a few people were practicing ice-ax arrest. I sucked it up. I needed to clean up the hut.
I started putting my food away and turned the tiny stove back on to try to melt the pot of ice again. Last night had pretty much undone any progress I'd made melting it. I needed to clean my soup pot out from yesterday, too, I remembered. It was still sitting outside the front door—with tomato residue dried onto the bottom and a layer of sand that had blown in and fused to the tomato. It was going to be a hard job, and it was going to have to wait for hot water. I pulled my damp sleeping bag out of my pack, turned it inside out, and hung it out to dry over one of the giant cables that kept the hut grounded in high winds.
The guide service was still out climbing, I noticed. They took a new group of clients up almost every night. Usually one or two clients wouldn't leave Camp Muir because they were out of shape, or didn't feel well, and then one or two more would turn around with a guide somewhere on the route because they were out of shape, or didn't feel well, and the rest would summit and return around noon. Today I noticed that one of the clients who had stayed at camp was having a yelling match with one of the guides. I closed the door and continued putting my cans of food back into my bin.
A minute later, there was a knock on the door. It was the guide. He looked pissed. He squeezed in and shut the door. It was a small hut for two people. If we'd been sitting opposite each other our knees would have touched. "Want some tea?" I asked. "I'll have some water up here in a bit," I added, waving at the tiny stove and the big pot on the countertop.
The guide's eyes narrowed as he viewed the setup. "Bigger problems, Bree. Bigger problems. I've got this super irritating client, _Richard_." The name came out like a growl. "He wants to go down. Says he doesn't like us. But we just can't let him go. He's our responsibility. But he was threatening to just take off back down the mountain, so we took away his boots. Now he's super mad and is demanding them back. All the senior guides are still out on the mountain. Will you talk to him?"
"You stole his boots?" I asked, incredulous.
"What were we supposed to do? He has to stay! It's too dangerous for him alone. We can all go down together this afternoon."
"I'd be pissed if you stole my shoes," I said.
"Yeah, but we wouldn't steal your shoes."
"Thanks," I grinned.
"Maybe just give me a moment alone with him," I said. "Have you tried calling your office for direction?"
"We've been trying to get through, I'll go try it again." He left and jogged up the stairs into one of their buildings.
I walked over to the client and held out my hand. "Hi, Richard, I'm Bree, I'm the ranger on duty up here today."
He looked like he was in his early fifties. He had brand-new climbing clothes on, and a little bit of a gut. He was probably a doctor or a lawyer or something, I thought. And he was walking around with just a pair of thick, red-toed, gray wool socks on his feet.
"What's the difference between a ranger and a guide?" he asked. "Besides, you look more like one of the cooks."
"Um," I said, "I work for the government, I'm in charge of the camp up here right now, and although I work with these guides to make sure everything runs smoothly, part of my job is guide service monitoring for quality control. So if you have a concern, you can tell me and I'll make sure it gets addressed."
"I was willing to wait to go down with them this afternoon, until they stole my boots," Richard said grumpily. "I'm an adult, I'm competent, I'm declining their services, they can't keep me prisoner here. Make them give me my boots back."
"OK," I said. "I think the one guy just got a little bit upset. I'll get him to give your boots back. I guess the thing is that it is safer to go down with the guide service if you don't have any experience, and there's some additional gear that it would be nice to have, like a map and compass. I mean, you hired a guide in the first place because you thought you needed one. Maybe it would be better to be patient, work out your personality differences in the parking lot, and then go for a beer?"
He looked at me for a long moment. "Nope."
I went back up to babysit my water pot. It was coming along nicely. Richard and the guide followed me. "The office says we have to give his boots back. He can leave if he really wants to, as long as there's a witness, you, hearing me telling him that it's really dangerous and we're not responsible for him anymore—that if he's seriously injured or dies, it's not our fault and he won't sue us."
"He's right there," I said. "Does that count, or do you have to do it again?"
"Richard," the guide said, turning towards him, "if you refuse our services we can't guarantee your protection and you might, and probably will, fall in a crevasse and die, and it won't be our fault, and you can't sue us. Do you understand?"
"I get it already," Richard said, "you can go now." The guide stalked off back to his own hut, but Richard stayed sitting on the storage box outside. I went back inside to pump the stove up again.
"Um," said Richard, "so the thing is, I don't really feel comfortable going down by myself, but that whole boot-stealing thing was bullshit and I couldn't stand that. So, what are my options to get down from here?"
"Well," I answered, "you could suck it up and go down by yourself since that's the decision you made, or you could go over to the campground where all the independent climbers are, and see if anyone is going down and if they'd be willing to let you tag along."
"When are you headed down?" Richard asked.
"Later today," I said. "I've got a project to work on in Paradise tomorrow, but I've got to wait for the maintenance guy to come up here so I can help him fix some stuff with the outhouses."
"They're disgusting," he said. I let it slide. But I saw my chance of getting out of working on the shitters.
"OK, Richard," I said, "I'll take you down to the top of Panorama Point. But then I have to stop and do some work there, so you'll have to go down the trail yourself, OK?"
"Good," said Richard. "Let's go."
I knew it was going to take the rest of the day to go with Richard, and that was fine with me. I wouldn't have to stay up here late working with Rich. I could stop and clean the bathroom at Pan Point and be back in Paradise around the time I got off the clock. Perfect. I scribbled an apology, tacked it to the door, and closed up the hut.
I could call Richard an "assist" on my accountability form, but this was shaping up to be a grim week for proving my worth to the bureaucracy. Assists didn't matter very much, and there was no space to explain why my assists took such a long time. It was already the end of day three, and I still hadn't checked off anything under the "projects achieved" section. That was the really important section. I hadn't even gotten a summit climb in. My boss would wonder what I was doing with my time.
## 9
* * *
## ROADTRIP RESCUE
I HAD JUST POURED THE LAST OF MY MILK on the last of my Honey Nut Cheerios. The sun was baking in the front windows of the Paradise dorm. I picked up my bowl and moved to an armchair that we'd balanced on top of a couple of sheets of plywood, on top of a couple of folding chairs and cinder blocks. From the chair, I was high enough to see the peaks of the Tatoosh through the kitchen windows. When I climbed up on it, the whole contraption groaned and wobbled, and the cushion support had ripped out of the chair, but it would hold if I eased onto it. I stretched out in this sunny spot, the hot fabric of the dark brocade against my back, and felt my cold stiff body absorb the heat like a dry sponge in water.
I noticed I had terrible tan lines. My leg hair was bleached white—it was too much work to shave anymore—and my legs were tan, but I had a white sock line around my ankles, and then my feet were pure red, white, and blue. I opened the window, stuck my feet out, and let them wave and dry in the sun three stories above the visitor center parking lot.
The whole evening stretched out before me: First I would eat Cheerios and sleep here until the sun crossed over the Tatoosh and no longer came in the kitchen window; then I would make lasagna and chocolate chip cookies, leaving a few for Tom and Charlie, who were bouldering down in Longmire; and then I would go to bed around seven-thirty. I would creak down onto the bed, savoring the pain of the change in position, the backache that made it hard to breathe at first, but then I would conform to the mattress completely, embrace it, pull the covers over my head, and shut everything out. The evening would be fantastic.
I closed my eyes and made an invisible soundproof line above all the traffic and honking in the parking lot, the people crowding the trails, the screaming children in the visitor center, the constant party hosted by the Paradise Inn and restaurant employees going on behind us in the next housing block over. Done with the mountain for the day, I felt a semblance of peace. And then there was the sound of feet pounding up the stairs.
"Are you going?" asked Tom as he stumbled, giggling, into the kitchen, his climbing shoes in one hand. "Do you have any food I can eat?"
As a volunteer, Tom didn't get paid, but otherwise we all had the same job. I tried to feed him when I could. He was tall, with big feet and greasy dark hair. He came from Corvallis, Oregon, and I thought his parents were farmers, but he had occasional moments of confusion when he thought he was a gangster rapper from New York. The previous summer, we'd stayed together in Seattle for a few days for a firefighting class, and we'd gone to Broadway on Capitol Hill. He'd made me drive the street again and again so he could look at all the crazy people with big hair, talking to themselves and waving at things that weren't there. Like he'd never seen crazy people before.
Now I looked over at him and saw his pink fleecy hat perched at an incredible angle on top of his greasy head. Today's signature piece: He was a crazy person. He opened the refrigerator.
"I thought you guys already went climbing?" I murmured with my eyes closed. I didn't want to go climbing, I wanted to stay here and sleep. I didn't want to spend my super-rare free evening in a broken-down barn in Longmire clinging to greasy plastic holds attached to a rotting piece of plywood. I wasn't sure how they could spend so much time there. I started to panic about being forced to go.
"No," he said impatiently, "not that." He closed the refrigerator door and straightened up. "We've been officially requested by the Ashford Fire Department to rescue some hiker in the hills above town." He grinned broadly, and I stared at him.
"I thought they had a volunteer fire department to do that stuff, and what about the guide service who are actually based in the town. Can't they do it?" I asked.
"I don't know," Tom said. "I suppose we have some sort of reciprocity agreement with them. Maybe we used them for something in the past."
He opened the refrigerator again. I knew I had to suggest some food I didn't mind losing, or he'd move on to my cabinet and start eating my chocolate bars. "I think there's some cheese and tortillas in there, you could make some quesadillas. I think Charlie has some salsa, you could use his."
"Sweet, Bree's making quesadillas?" said Charlie, turning the corner from the stairs down the hallway to his room.
"No!" I yelled after him down the hall, "you can use my cheese and tortillas, but you're making them, and make me one, too."
"Who's organizing this rescue?" I asked Tom.
"Uh, I think Tim is, and he requested six people. He might have them by now, and come to think of it, he doesn't like you, you probably shouldn't call him at all. Just come down with us and we'll all go together. You might not get paid, if they already have enough people, but it's better than just sitting around here." Tom looked around at the silent kitchen and the dust floating in the sun. His nose wrinkled. "This is frickin' boring."
I liked just sitting. I could easily do it all day without experiencing even a twinge of boredom. But if everybody else was out doing something together and I wasn't, my napping plans would be ruined. Instead I'd mope about missing the social interaction and, well, maybe they really needed more people. It takes a lot of people to carry a person. People are heavy. I sighed and got up. "What do I need to bring, Tom?"
"Nothing, I don't think. I mean, it's just in the woods, right?"
"Cool," I said. "That sounds perfect."
It was a warm summer day, and all of a sudden I was actually looking forward to a hike in the woods, where there wasn't any snow, and where none of my stuff would get wet, and it wouldn't be freezing cold, and if I accidentally slipped I wouldn't fall to my death in a crevasse. It had been a long time since I'd walked on a sunny trail in the woods, surrounded by new growth and huge old trees, with the springy ground under my feet. It was good on the knees, I remembered. I went to my room and picked up my backpack, putting a quart of water in it along with a jacket, headlamp, and chocolate bar.
"Hey, Charlie," I yelled down the hall. "Uniforms?" It was up to Charlie to make these kinds of complicated calls.
"I'm not wearing mine," he yelled back through his closed door. That was good, I only owned one, and at the moment it reeked.
We piled into the blue government minivan, with quesadillas and the rest of my sodas in hand. We maneuvered out of the parking lot—oblivious to traffic and scenery alike—around the gawking tourists in the road and past the tailgate picnickers, their dinners complete with chips and cheap Rainier beer.
Charlie turned on the Funky Monkey, his favorite radio station, and cranked up the volume. He was always threatening to call the station from Camp Muir, say who we were, and then add that we were their highest listeners. It was that sort of station. Tom had been in Las Vegas the week before, getting paid for working as a wildland firefighter, and he'd picked up dozens of call-girl cards, amazed at the novelty of the idea. Every time we took a corner, the cards slid back and forth on the van's floor. I was going to have to clean those out before anyone borrowed our government van, along with the rearview mirror ornament, a magazine cutout of George Bush sporting a three-millimeter-cord noose. Not that I objected to it on principle.
There was a lot of traffic on the road. We slowed to a crawl behind an RV, the owner with his head actually out the window, both hands too, shading the viewfinder on the obligatory oversized digital camera. We cursed him like we were doing something important and it was vital we got to our destination in a hurry. Sometimes if you generate enough excitement, you don't need anything substantial to back it up, the momentum alone is enough. Windows down and music cranked, we continued following the RV.
We finally got to Longmire after forty minutes on a road that usually takes less than thirty, and met up with Tim. He was in full uniform, including his bulletproof vest and gun belt with thirty pounds of odd gadgets Velcroed to it. I suppose he was obligated to wear the uniform, but I could tell he liked it—it was in his swagger, the way he hooked his thumbs in his belt and stuck out his pelvis, as if to show off all his equipment. He scowled at me when I got out of the minivan, but we didn't say anything to each other and I figured he must still be short of people for the carryout. He knew that I knew I wouldn't get paid for this outing unless I went over to talk to him, but he didn't know that I knew it wasn't worth the money.
We caravanned, fast, bumper-to-bumper, down the road from Longmire and out of the park, passing under the giant log hoisted above the roadway with the wooden park sign dangling below it—all of that awesome, rotting tonnage looming over the cars, trucks, and vans lined up to pay at the toll booth. Tim was in the lead with his marked patrol SUV. We were next in the blue, dirty, and overloaded minivan. Then behind us was a backcountry ranger I hadn't met before in a dusty, rusty Subaru wagon.
I've always loved caravans. They're absurd. Only the first driver is given any information about where everyone is going, forcing the rest of the line to disobey traffic signals, brush off merging traffic, and follow too close in order to keep the leader in sight. One time when I was sixteen, driving (with only a learner's permit) in a bumper-to-bumper caravan to a youth-based rescue group outing, I created a chain-reaction pileup totaling three vehicles when I accidentally hit the clutch instead of the brake. Even so, I still love the common purpose of a caravan, the obvious signals we give other drivers to show we're together, and the feeling that the destination is important because all the other people you're with want to go there, too.
We turned off onto a logging road in Ashford after picking up another ranger who had been waiting to rendezvous with us. It was much warmer in Ashford than it had been thousands of feet higher in Paradise. It felt good to be in the minivan with the sun beating down and the windows rolled up, air vents blowing road dirt on us, coating everything with a layer of dust. It smelled like summer and the earth and the solid, inviting woods.
As we headed into the hills outside of Ashford the road got worse, narrow and tilted, filled with water traps and humps and valleys. We turned from one unmarked junction to another until we were sure we were lost. Charlie was clinging to the steering wheel with both hands and peering out through two layers of dust trying to follow Tim's SUV. Charlie was grinning madly. He turned on the wipers, and that helped the dust a little, but the sun was low enough to shine right in our eyes. Tom and I just held on, happy for every inch of this road that could be driven instead of walked, content with being bounced along towards whatever end.
After nearly an hour on dirt logging roads, we finally hit a dead end, a tiny turnaround filled with dusty, dry potholes, surrounded on one side by woods and on the other by an overgrown clearcut. The turnaround was occupied by the victim's truck and the Ashford Fire Department's dusty, aged aid car. When we arrived, the driver's door of the aid car opened and an ancient man stepped out. He must have been at least ninety, wearing a hand-knit wool sweater despite the heat, Velcro fire department patches on the arms. He pulled his cane from behind the seat, donned his white helmet, and fastened the chin strap with shaking fingers. He was unsteady walking across the lot, and Charlie leapt out of the minivan and offered his elbow. Through the passenger's window I could see an old woman, presumably the Fire Chief's wife, happily knitting in the front seat. I began to understand why they had called the Park Service.
The old Chief was definitely in charge, though. I had nothing but respect for him as he ordered us all around, including Tim, telling us what gear to take and what the plan was (hike in, pick up the injured hiker, and hike out). I could tell Tim was upset that he wasn't even allowed to have a huddle meeting first, before being relegated to working with the lot of us scruffy, backcountry folk.
One of the members of the volunteer fire department was already with the victim, who was a man in his forties who'd turned an ankle on the trail and couldn't walk the last mile to the road. That sounded fine to me. The whole carryout should be fast and stress-free. The five of us from the park threw on our packs, grabbed the equipment we would need, and started up the trail a few minutes later, leaving the Fire Chief and his wife to watch over base camp.
The woods were as fantastic as I'd thought they would be. The late-afternoon sun slid between the trees and onto the trail, onto us. The trail itself was a mass of old, knotted roots from the huge, equally gnarled trees still standing guard on the edge of the clearcut above town. Along the trail were blueberry bushes, right at eye level. After a few minutes of hiking and looking at the fat, untouched blueberries, we couldn't stay away from them any longer. We knew this fellow with a sprained ankle would be fine for a few more minutes, and as a perennially hungry group we put down our packs and wandered into the woods for a "water break," eating and stuffing our pockets and hats with berries. The backcountry rangers were used to foraging all the time, but for Charlie, Tom, and me, who always worked in the snow, it was as sweet as it gets. We left Tim on the trail to start evaluating the terrain for the carryout.
Once we started back up the trail, wiping blue fingers on our shorts, we got to the victim in no time. He was probably only a half-mile from the turnaround, lying on the warm dirt and pine needles with his injured leg stuck out in front of him, already splinted, and attended by the sixth man, a slightly younger volunteer firefighter who used to work for the Park Service. He was probably the one who had suggested calling us.
We got to them just as the sun was going behind the hills and we were down to the flat light of dusk, but it was still a warm, summer dusk, complete with mosquitoes, pests I hadn't seen for a while. I didn't mind them, it was nice to see animal life again, even if they were blood-sucking insects. They made the summer feel real, genuine, complete with all the important details and variety that were lacking in the mountain's austere snow and ice and rock.
"Do you need to pee?" I asked the man. Everybody looked at me. I realized this was a lame way to introduce myself, so I followed with, "It'll take us a little time to get back to the cars, so if you have to pee, pee now." OK, I was done talking.
We got the victim to lie down in the old one-piece metal litter. Then we picked him up about four feet in the air and attached the litter's huge, knobby wheel to the bottom of it. That way, if we all held on to both sides we could wheel him out without actually carrying his entire weight, which just from picking him up I realized was considerable.
Tim immediately took charge. He put two people on each side of the litter, which was tipping wildly on its central dirt-bike wheel, and then he put one person on each end to help brake the contraption as we negotiated the steep downhill sections over knee-high drop-offs (courtesy of the root systems) and big rocks in the trail. I'm not sure how many litter evacuations Tim had done in his long career, or where they might have occurred or in what kind of terrain, but his directions weren't going over well on this one. The trails in Paradise, where most of our carryouts happen, are wide, only slightly sloping, and mostly paved.
As we grunted our way down the trail here, Tim, the man in the front, was walking backwards. He kept tripping over things behind him that he couldn't see. The rest of us, crowded together, kept stepping on each other's feet, and we couldn't see where we were going, either. Every time Tim tripped, he got angry with us for going too fast, or not fast enough, over an obstacle we didn't know was there.
I should have kept my mouth shut, but I just couldn't. I am chronically bad at keeping quiet when there's a simple solution to an obvious problem. For ten years I'd been a member of a youth-based lowland search and rescue group that I was embarrassed to tell anyone about, because now as a professional I wasn't supposed to fraternize with amateurs. But I had done hundreds of carryouts through the woods, and I had a few ideas about how to make this one better. I figured that Tim didn't like me anyway, so what the hell? I opened my mouth.
"Hey, guys," I said, "hold up a minute."
"Are you tired?" Tim mocked, looking at his watch without letting go of the litter, and then he snapped, "You have another seven minutes." One of his other favorite evacuation methods was allowing us only prescheduled breaks. I can't stand regimentation when it's totally unnecessary. Maybe if we were a huge group of people it would make sense, but with just six of us we could afford to be more flexible. I stewed about this for a second before realizing that we were still stopped and everybody was still looking at me.
"Let's switch around," I said, leaning the litter against my thigh. "My left arm is definitely longer now than my right one." I held up both my arms, with one shoulder back so that my fingertips on the hand grasping the litter were a good six inches past my other hand. Since girls are the weaker sex, I decided I might as well use that, and a little humor, to my advantage. Tom chuckled in the dark, and I thanked him silently.
I had pulled a ten-foot piece of one-inch tubular webbing out of the van because I'd thought it might be useful, and now I got it out of my pack. I tied it to the rear of the litter, walked out five feet, and tied a loop in it for a handle. I had Charlie, a Greek statue of a man, hold onto the end as the brake, where he was far enough out of the way that we wouldn't be stepping on each other. Since Tim was doing no good in the front, I asked him to take up my position on the side, and while I'm sure he glared at me, it was nearly dark and I couldn't see it. I also figured that, since this reorganization stemmed from my perceived weakness, it would mitigate his dissent. It worked; he went along with it.
I felt bad for a minute because Charlie was my boss, and I didn't want to step on his toes. I would follow any sort of order he could come up with, but he didn't seem interested in dealing with Tim. I understood that. They worked together more often than Tim and I did, and I supposed that Charlie needed to save what power he had for vetoing Tim when it was a matter of life and death. I figured I wouldn't offend Charlie too badly by taking over this short carryout. It was a me-sized rescue.
I didn't like the big rescues anymore, the ones with helicopters flying overhead, the media storming Longmire, and people screaming and bleeding in the snow after a few unspeakable moments of terror. I'd discovered that these operations were too complicated for anyone to keep them in check; they had a life of their own. They were impersonal and stressful and left me feeling like I had no control over my own destiny—so much energy, money, and power were spent to do the rescue, and getting good media coverage was so important that it seemed to me as if it didn't matter what or who was sacrificed in the attempt as long as the rescue was successful. Like it was more important for everybody to come off looking heroic for the sake of the climbing program and its future budget than for everyone to come back in one piece. I could handle my small part in the big rescue, but the following nights found me looking back and analyzing what I'd done, which so closely mirrored what the victims had done—often we'd both been in the same spot, dodged the same rocks, jumped the same crevasse, only now one of us was severely injured or dead. Even if the press conference went well, and the whole thing was deemed a success, it left me feeling like death was looming over my shoulder.
This basic carryout might just have been my favorite day of the season. A day with a simple story and a happy ending.
We started off again with just four people actually touching the litter. Everyone was able to see what was ahead without crowding each other's feet. Charlie and his enormous muscles were providing more than enough braking power, and I walked out about ten feet in front and called out obstacles on the trail, where they were, how big they were, and which side they were on. The carryout was going more smoothly, but whether that would make up for the widening rift between Tim and me, I had no idea. I figured it didn't matter at that point, so I also changed the rules: any time someone wanted a break, we stopped.
In actuality, it wasn't far back to the road, though it was nearly full dark when we got there. The old Fire Chief had the ambulance's scene lights on, and hundreds of moths were flying around them, offering up flicking noises as their wings brushed against the lights. Tim was back in charge. I held the side of the litter while the wheel was removed, and then we set the whole thing on the ground and let the poor guy back out. He opted to drive home himself with his good foot. Tim and the Fire Chief had a few words—they finally had their huddle—and they shook hands, some obligation fulfilled.
Tim walked over to where we were all sitting in the minivan. "Bree," he addressed my silhouette with his own outline holding a clipboard, "you didn't really help very much."
I blushed in the dark. I was pretty sure I had expedited the actual rescue, but I also had to admit it did seem lazy just walking ahead of the litter. Sometimes a gesture of mindless physical involvement, a willingness to suffer with others, goes a long way towards creating trust and friendship. "But I'll put your name down on the overtime list," he went on.
"Thanks, Tim," I said, surprised by his generosity and, more than anything, that he had actually come to me. Maybe there was hope. I stammered, "I really appreciate it. I'll do better next time, I promise." He swiveled and walked away, hoisted himself back into his SUV, and took off.
Tom drove us back down the road after the ambulance, bumping slowly along in the dark. As the last car down we couldn't see anything through the dust, and Tom was driving amazingly close to the ambulance's back bumper. It was exciting, like we were cars on a roller coaster: we'd see the tail lights in front of us suddenly disappear into a giant hole, and we'd brace ourselves for the inevitable crunch and scrape of our frame as we were drawn inexorably forward. We'd see the lights suddenly rise above us to the level of the top of our windshield, and Tom would gun it for all it was worth, and our bald little wheels would spin mercilessly against the loose dust of the rise. I liked it. Tom smiled, hunched over the wheel. No one said anything, we just watched the lights and the darkness.
When we made it back to Ashford we decided to go out for pie before heading back. Technically, we were on call even though the paid workday had ended, and we should have hightailed it back to the park. But Charlie said if anyone turned out to have been looking for us, we'd just say we'd had a flat tire on the dirt road, which was pretty plausible. Charlie was too honest, his conscience usually sent him straight home, but today if he wanted to stop for pie, we were only going to support him.
Blinking in the sudden light, I teetered in the roadside restaurant outside the park entrance, my joints stiff again. I was pleasantly surprised that the place was still open. I had been up climbing since the previous midnight, and I'd lost track of time, not sure how late it was. The little café seemed so homey and bright inside, there were so many colors—it was dazzlingly, amazingly beautiful. I fingered the plastic red-and-white checkered tablecloth, and ordered two slices of the most delicious blackberry pie.
## 10
* * *
## POINT OF NO RETURN
THE SUMMER WAS OVER. It was eight o'clock in the morning on my last day. My final responsibility was winterizing the Climbing Information Center. I'd just finished cleaning out the recycling bin and mopping the newly refurbished soft pine floor. As if cleaning it would erase the damage done by hundreds of boots owned by hundreds of climbers who had nervously ground loose gravel into the finish while waiting for me to issue them a permit. The floor was hopeless, but doling out permits had been fun for me. "We are already very full," I'd say, shaking my head at my computer, and then barely manage to squeeze the party in. It always made them feel lucky. I think it's important to feel lucky when you climb.
My boss, Stefan, was in the back office cleaning up loose ends of his own. I could see him watching me through the window every time I turned around, making sure I was still working, or maybe he was just reassured to know someone else was there. The CIC was already closed for the season, and we were the only rangers left in Paradise. The cavernous space was so quiet it seemed oppressive. I set the mop abruptly down against the counter, and it slid off and slammed to the floor, the noise like a shot. I jumped, even though I'd watched it fall. I had to get out of the building, so I skipped to the next thing on my to-do list: switching over to the self-registration system. I left the mop, figuring there was no one left to trip over it.
I'd already written up directions for using the climbers' self-registration system, and all I had left to do was to put the directions on the box outside. The box was there because there wasn't a high enough volume of climbers to justify keeping the CIC open over the winter, but there were still people who wanted to climb the mountain in the off-season. So to keep track of them in case they went missing, the park let them self-register.
I walked into the back office to look for a screwdriver so I could put the self-registration directions underneath the Plexiglas plate on top of the box. We had bags of miscellaneous tools, boxes of rusted-out wrenches, and thousands of zip-ties, but although I found six large flathead screwdrivers, there weren't any Phillips. I thought briefly about trying to use a flathead, but I remembered that I had nearly stripped the screws last year by using an oversized flathead for this same project. Stefan was making phone calls, and I could tell that my banging around in the back was starting to bother him, so I decided to walk over to the Paradise Inn to borrow the right tool.
Outside it was a beautiful fall morning. The meadows were dead, but the stalks which had so recently flowered were now covered with frosty cobwebs that sparkled in the sunshine. The air was cold and clean, so healthy it was painful. The sidewalk, still icy, crunched a little bit under my boots where the sun hadn't hit it yet. Even so, I could tell it was going to warm up fast enough that I'd be comfortable without a jacket by mid-afternoon. It was a perfect day, I thought to myself, a lucky day, my last day.
I remembered I had a finishing hammer and a few other tools in the trunk of my car. I wasn't sure where the tools had come from originally, and they'd been banging around in my trunk for so long I was thinking of taking them out, but I thought this might just be the project that would earn them their keep for another few years. My car was in the main parking lot, sort of on the way to the Inn, and I walked slowly out there first, enjoying the sunshine and the fact that Stefan wasn't watching me as I wove through the few frost-covered cars in the lot. Sure enough, I had the right screwdriver. It had a blue handle, and looked like it had never been used before. I stuck it in my pocket and headed back to the self-registration box.
I got two of the screws out by just pulling up on the plastic top. They were little, rusted, stripped screws, and I was deeply satisfied with myself for getting them out so fast. I even thought briefly about trying to find new, bigger screws to replace the old ones with—a job truly beyond the call of duty for my last day—when I overheard the garbageman calling the communications center over my radio. A car with four people in it had driven off the road and gone over Christine Falls. The garbageman was deaf and his speech was a little garbled, so although he had a radio and could transmit, nobody could call him back to ask questions. The communications center immediately started calling Stefan, but he didn't answer. I figured he was probably still on the phone.
I could feel my heart start to beat faster, but I didn't know if I'd heard what I thought I'd heard and, anyway, I didn't know if Stefan would want me to go. Above all, I didn't want anything to come between me and the exit lane at the park entrance. I needed to be cruising under the cedar log arch at four-thirty sharp, because I had a date with peace. Part of me longed for the winter season and its safe, normal routine after difficult months of work, a reprieve from a lifestyle I was beginning to suspect was causing me post-traumatic stress disorder.
And yet I found myself gearing up for one last rescue. Some part of me knew this would be the last time I would put myself out there, and I wanted to do it right. I didn't know then that Charlie would die in an avalanche a few months later—days after announcing his engagement and plans to go to nursing school—or that Mike would opt for a totally fresh team by replacing me as well as my dead friend. He would tell me then that he thought I wasn't mentally prepared to give the job everything, in light of Charlie's accident, and it was true. But before that happened, on the last day of my third season, some part of me already knew that I was about to be done. I wanted to live a long and happy life. I wanted to climb for fun, with friends; get married and buy a house in the country; own lawn chairs, have babies, grow organic peas. I wanted to save myself next. It was one thing to live for climbing, it was another to know it was only a matter of time before you died for it.
I stuffed the self-registration papers and the screwdriver back in my pocket and jammed the screws back in their holes just as the communications center called me, looking for Stefan. I ran faster than was strictly necessary back to the CIC and motioned to Stefan to get off the phone. He waved me off and so I waited, spinning the mop behind the counter with impatience, until he was done with his conversation.
When I told him what I'd heard, he said he didn't think a car could go over Christine Falls because there was a three-foot-high, three-foot-thick concrete barrier, covered with decorative river rock, constructed just to prevent cars from driving over the falls. He told me to take the Paradise ambulance down to the falls and find out what the real problem was. If there actually was a car down the embankment, he said, I should call him on the radio, and he'd bring down the rope-rescue equipment so we could pull everybody back up again. I could tell he didn't want his day complicated, either. It was warm in the back office where the sun came in, and he had the only comfortable chair in the place.
I ran out the main entrance and across the parking lot. I liked being looked at by the few tourists bumming around, the lone uniformed ranger running at top speed with her radio in hand, obviously a vital part of the solution of some horrible catastrophe. I often did this for fun early in the season, even when there was no emergency, just to see the reactions I got.
I started down the road the eighth of a mile, or so, to where the ambulance was parked just outside my dorm. I wanted to stop in at the dorm to grab some breakfast before I started down the hill, so I had to hurry. I was thinking that this could be a long last day.
The Paradise maintenance guy, Guy, drove up alongside me, matching my pace, and rolled down his window to offer me a ride. I jumped in. His little red truck smelled like old cigarette smoke. Guy was excited to be part of the action. Just the previous week he had gotten to drive the ambulance while a law enforcement ranger was doing CPR on some guy in the back, and he told me all about it while he drove. He'd been so excited, he'd hit the gas before everyone was fully loaded in the back, and someone had fallen out, but luckily hadn't been hurt. I asked Guy to let me out at my dorm. He seemed confused that I wasn't going directly to the ambulance, but I didn't want to explain that I was delaying the rescue for a sandwich.
I ran up the three flights of stairs, tripping over my boots, justifying the detour by telling myself that boosting my blood sugar would improve my job performance in the long run. I grabbed a bagel and the last hunk of a baby loaf of mild cheddar I found in the communal fridge, along with a handful of Pepto-Bismol chewable tablets. These emergency response jobs always gave me heartburn. I got my climbing harness and helmet, too, because Stefan would be pissed if he ended up having to loan me gear when he was part of the contingent that was always telling me I should be prepared for anything. I skidded back down the stairs, feeling great with my little blue backpack full of food and gear.
The garbageman called again with the same information: car over the embankment. I jogged back to the ambulance and headed down the hill. There wasn't any traffic. I tuned the radio to the only station we get in Paradise, the Funky Monkey. It's a hard rock station, but for some reason it was playing the Postal Service: "Don't wake me, I plan on sleeping in." I turned it up, dedicating the song to my upset stomach. Comm. Center called again and said that there was a law enforcement ranger, Chris Trotter, who could respond to the accident as well, but she was an hour away.
For a moment I missed having a partner. It wasn't as nice showing up by myself. I didn't think it inspired as much confidence, seeing my lone silhouette on the horizon instead of a rescue posse, squad, team, crew, or whatever. A partner would also make me braver and stronger. Someone who would say, "Don't worry, Bree, I've got your back," as they did a kung-fu move and grinned. I rubbed the ambulance's dashboard and said, "We've been through a lot together, huh?" But somehow it didn't seem like the same sort of thing. In silence, I ate my bagel and the cheese and two bubble packs of Pepto-Bismol tablets, with one hand on the wheel.
I parked the ambulance behind a garbage truck blocking the road. I could see a man lying right on the double yellow stripe in front of the truck. There wasn't any damage to the barrier, so I figured maybe the garbageman had just run over this guy. That would be easy. Maybe he was already dead, and there was nothing for me to do here; I could go back and finish the floors.
He wasn't dead. I saw him move a little through the windshield as I set the emergency brake. My heart sank, not because I wished he were dead, but because I knew the fun daydream where I saved everybody before four-thirty was over, and the work was beginning. I sat for a second, regaining composure and slowing down my heart rate. I resorted to the disposition I've perfected over years of ambulance driving. Sort of a sour, jaded, heartless routine that is amazingly efficient at getting the job done, but sadly, lacks the heady giddiness of an adrenaline rush. I turned on all the flashing emergency lights so that nobody would run into the ambulance if they came around the corner, then grabbed the barely portable orange first-aid kit out of the back and lugged it towards the garbage truck.
The debilitated man was lying about two feet in front of the truck. I kept walking till I was standing over him. I didn't want to kneel at his level just yet, to get too close and involved without knowing what the problem was. I looked down at him with my hands in my pockets and the ambulance's jump kit on my back. He was young, and he looked Hispanic. Maybe in his early twenties. His clothes were dirty and wet, but I didn't see any blood anywhere.
"Did you get run over by this garbage truck?" I asked bluntly, almost accusingly.
"No," he replied calmly and with an air of resignation.
"Then why are you lying in the middle of the road?" I was going for a scolding tone, but it came out flat, like a good traffic cop. "You've clearly upset the driver, and you're blocking traffic." I gestured towards an imaginary backup behind the ambulance. The road probably wouldn't see another car for over an hour.
"Well," the man said, without attempting to move out of the way, "yesterday afternoon my girlfriend and I were taking pictures of each other out on the point, down that little trail by the edge of the cliff, just over the fence. I couldn't get enough of the scenery in the picture so I asked her to take one more step backwards, and she did, but she fell over the edge." He drew a ragged breath and then continued as I listened, expressionless, my hands still in my pockets.
"I freaked out," he said. "I yelled for her, but she didn't answer. I ran back to the road, but nobody was coming and my cell phone didn't work, so I decided to climb down to her. I got most of the way down the side of the cliff next to the waterfall, but it kept getting steeper, and it was mossy and wet, and then I slipped and fell off the cliff down into the water at the bottom right next to her. I broke both my legs and my left wrist." He showed me his wrist, cradled against his chest. I nodded and smiled politely, encouraging him to continue.
"My girlfriend was pretty fucked up when I crawled over to her. She landed on the rocks, not in the water, but she was conscious and everything. I tried yelling for a while, but we couldn't hear if any cars came by, and I think the waterfall noise drowned us out, anyways. It got dark, and she couldn't move at all because of the pain, so I curled up next to her and our clothes froze on us during the night and, y'know, it sucked." He looked at me and I had perfect empathy for the freezing, fucked-up night thing, but he wouldn't have known it.
"This morning," he went on, "I knew she was going to die, and it was freakin' cold and nobody was going to come looking for us, so I decided I had to crawl out. I floated down the stream below the falls for a while and then used my good arm and inched up the slope on the downhill side. I started just before dawn, and I knew it had taken me hours to get to this road. Nobody was coming, and I didn't want anyone to drive by me, so I crawled out into the middle of the road, and a minute later this garbage truck came barreling around the blind corner and almost hit me."
I turned around and looked at the truck, and then at the deaf garbage truck driver. He waved at me from the truck cab where he was still manning the radio, and I waved back. "All that, and to be killed by a garbage truck," he said, rolling his eyes. "That was a close one."
"So," I asked, "your girlfriend is still at the bottom of the waterfall?"
"Yes," he said, looking focused again.
"OK," I nodded. "I'm just going to call this in on the radio."
It took me a few seconds to figure out how to condense the story, but I decided on: "Comm. Center, 686. I'm at Christine Falls. No car went over the embankment, but there are two patients who fell over the falls, one with leg fractures who is now back on the road and one with unknown injuries who is at the bottom of Christine Falls. Ask Stefan to come down here with the rigging equipment, and anybody else who might be around and willing to help."
I nodded and smiled at the man in front of the garbage truck. "They'll send some folks real soon," I said. He didn't look reassured: even with my tough-guy routine I'm still just a sunburnt girl with a ponytail.
I said, "I'm going to take a quick look at your legs, then." I looked down at them. "Yeah, they look broken. Can you wiggle your toes?"
I took off his shoes, and his feet were black with frostbite. The garbage man scavenged him a blanket from the cab of his truck. Then we all sat around because I didn't want to leave the man in the road until somebody else showed up who could care for him. I felt calm, standing in the sunshine, happy, almost, for the lull, but my stomach was still killing me.
Chris Trotter showed up sooner than I'd thought she would. She'd turned on the emergency lights on her law enforcement SUV to make all the RVs and deer get out of her way. She had come as fast as she could, and she was revved and happy to be there. I, on the other hand, had reached the point where I was nauseated thinking about what was coming next. I liked being needed, in a desperate, time-critical, life or death, adrenaline-pumping sort of way. It was flattering. But another part of my brain stood back and wondered how much chaos one person could take. Maybe I felt this way because in this job I was only needed when something had already gone wrong, and there was nobody else to send. This kind of situation inspired dread, or heartburn. In terms of rescue specialists, when the park sent somebody else they knew they had me as a second-string backup, but when they sent me it was because there was no one else.
I told Chris what had happened and that we needed to get two ambulances started from Tacoma, which at about an hour and a half away was the nearest city. We were also going to need about six people who knew how to rig up a way of getting the girlfriend back up the cliff, as well as another first-aid kit, because the one I had with me sucked. I told her the boyfriend needed to be backboarded and he needed his own EMT, because I needed to go see about the girlfriend.
Chris said she was going to take command of the situation, unless I wanted it. I didn't think she was serious. I think the Park Service has some kind of certification you have to get before you can be in charge. There's probably a form I'd have to fill out, which would have to be signed by my supervisor.
I went back to the ambulance and got my harness and helmet out from between the two front seats. I was digging around in the pockets of my pack to see if I had any more Pepto-Bismol stashed when Stefan drove up in his dusty blue Volvo, crammed full of gear from the SAR cache in Paradise. I grabbed a rope from him, and he went to get the story from Chris so they could discuss big-picture management things.
I shuffled down the trail towards the edge of the cliff, carrying the ridiculously large first-aid kit, the rope, and my personal gear. A couple of middle-aged male tourists were leaning against the hood of their black Cadillac Escalade next to Stefan's car, checking out the flashing lights and the garbage truck and taking in the whole picture. They asked me if I wanted any help. I smiled and yelled, "Sure, come over here and take this first-aid kit, it's really heavy."
The two men followed me down the well-used trail to where the earth dropped off. The edge was sloping and mossy, and on one side there was a dirty gully that looked like it channeled a lot of runoff after rains. I couldn't see anybody down over the edge. All I could see was a bend in the stream below the falls, and some bushes on the opposite bank. I yelled a few times, but I couldn't hear anybody.
I put my harness on while the two men watched me, and then I wrapped one end of the rope around a slender, smooth-skinned tree that was relatively close to the edge. I wrapped it around four times, because I was going for friction, and then I tied a bowline back around the rope. It doesn't take any hardware to secure the rope this way, which was good because I didn't have any.
I clipped the abhorrently large first-aid kit to the back of my harness with a plastic ice-screw organizer I'd been too lazy to take off earlier in the year. The weight was awkward and pulled me backwards. I started to rappel down over the edge, but one of the two tourists started yelling at me and messing with my knot, so I stopped. He said the knot had started to pull, but I looked at it and saw that it was just tightening up against the tree's smooth bark. I told him that the knot was good, but that I needed him to stay there and make sure that nobody messed with it, because this was a life or death matter. I gave him a look like I trusted him with everything so he'd better not screw up. I could see his posture improve. He felt important. My knot was safe. I left.
It was really loose in the gully and a lot of dirt fell down in my shirt and my boots. Maneuvering with the first-aid kit was difficult.
When I got to the bottom there was no body there. I walked out into the stream. The water was freezing as it poured in around my ankles. I shivered, and shook the dirt out of my jacket before zipping it up. I started walking upstream toward the falls, and then I saw her on a small sandbar next to the base of the waterfall, lying on her back, awkwardly, between several large round rocks. She was facing away from me with one arm up in the air, holding her cell phone over her head. There isn't any cell phone service in the park except on top of the mountain, so I figured she probably wasn't having any luck. When I was right next to her I yelled, "Hi!" over the roar of the water. She screamed and dropped her phone.
"Sorry," she said, "you startled me."
"What are you doing?" I asked her.
"I'm trying to figure out how to save a text message of my last words."
"OK," I replied. "I'm just going to call in on the radio to say that I found you, and then we'll see what we can do about getting you back out of here." There wasn't any sun down in the deep gorge below the falls, and my fingers were cold. It was hard to feel the radio in my hand. Andy had been here at this exact spot two weeks earlier because another woman had fallen over these falls, and I wished I'd asked him more about the extraction details. I couldn't hear the radio, so I just pushed the transmit button and said I'd found the woman, that she was alive, and that I needed another EMT as soon as one could be rustled up. And that the woman would need to be raised out in a litter.
"OK," I said again, patting her arm, "they're going to have all the details figured out in a minute. How are you doing?"
She was shivering, and I looked for some chemical hot packs in the kit, but there weren't any. The kit was shitty. When I'd opened it in the parking lot, the zipper had broken off in my hand. I couldn't find anything I needed in it, and it looked like it hadn't been used in years. It probably hadn't. I was pissed I hadn't brought the one from the CIC that I'd put together a few weeks before. I gave her my green polarfleece jacket, tucking it around her shoulders.
There was a lot of blood on the rocks around her, but I couldn't find where the blood had come from. I could see the bone in her elbow sticking out of the skin, but it didn't look like it had bled much. Not even the edges of her torn sweatshirt were saturated. She had on tight hip-hugger jeans and a belt with multicolored rhinestones. "Nice belt," I said.
"Old Navy. Eight dollars," she told me.
Her thigh had a U-bend in the middle of the femur, and her leg was sticking out in a weird direction. Despite this, she looked OK for the most part. I think if a person survives all night in freezing temperatures while wearing wet cotton after a sixty-foot fall and landing on rocks at the base of a waterfall, they are probably going to live.
I told her as much, and said I'd take her phone for safekeeping just in case. She wanted to know if her boyfriend had made it back out. I said it sounded like he'd floated downstream for a while and then he'd crawled up the embankment and out to the road, where he'd flagged down a garbage truck. Now, I said, he was probably sitting in the back of the ambulance with the heat cranked up, and loads of pain medication.
She said she loved him for saving her, but hated him for being warm right now when she wasn't.
"Yeah, well," I said, "true love is always kind of a love-hate thing anyway. That's how it goes sometimes. Being cold just makes you appreciate being warm that much more. Just think about how much you'll appreciate the little things after this, like hot showers and hot chocolate and hot water bottles and—"
"Do you mind?!" She yelled over the noise of the waterfall. She was smiling, though.
"And hot tubs, heaters, electric blankets, hot men." I shut up for a minute. I looked up to see if anybody else was coming down, but there was no sign of anyone at the top of the cliff at all.
I wondered what was going on up there. It takes a couple of people who do this sort of thing often to remember how to set up the ropes and the pulley systems. I remembered Andy telling me that during the rescue here two weeks ago he hadn't been able to find enough people who knew what they were doing. Chris had been in charge, just like today, and she'd asked a couple of volunteers to help out. They'd said they knew what to do, but then they'd done it all wrong, and Andy, worried he was about to die, had scrambled up the side of the cliff, redone everything practically by himself, and then rappelled back down so that he could get hauled back up, carrying the patient in the litter. At least I knew Stefan was up there, somewhere, setting things up. He could do it all by himself if he had to. Then he could teach the garbage truck driver and the people in the Escalade how to operate the system.
I looked behind me, back towards the gully I'd rappelled down. There was Stefan, hauling some gear around the corner. "Stefan! Who's setting up the rigging system?"
"Aw," he yelled back, "they've got it covered."
"Great," I yelled, "why don't you go up with the patient then?" He said he'd do it, and I was relieved.
"Were you just in the park for the day, or was this part of a longer trip?" I asked the woman at the bottom of the waterfall.
"Just for the day, on leave. We're both Army medics. Going to ship out to Iraq in two months." I tried not to look skeptical, what with the leg situation, and I asked her if she wanted to go. She looked too young. Less than eighteen, blond, and wearing too much makeup, which had run all over her face. She looked like she belonged in a mall. I was amazed she'd chosen to come to a national park on her leave at all. Maybe it had been her boyfriend's choice.
"Oh yeah, I'm excited. Going to see the world, pay for college, do something interesting."
"Well," I said, "that sounds like it'll be just great. Hey," I added, "we're going to have to straighten out your leg so we can get you into the litter and back up the cliff." She looked shocked. "Look, if we don't then it'll hang out and bounce around, get caught on bushes, and I'm sure you don't want that, either." While I talked, I cut up her pantleg to the waist. I could see that her hip was blue, and the top of her hipbone was sticking out of the skin. I wondered if she'd broken her pelvis.
Andy had said he'd used a full-body vacuum splint on the woman here two weeks ago. I'd never seen a full-body splint, but if we had one, then I wanted it. "Hey, Stefan! Can you ask Chris to send down the full-body vacuum splint?"
"A what?" he yelled back.
"Full-body vacuum splint, we've got one, I swear."
The splint showed up a few minutes later in the arms of Ed Dunlevey, the head of EMS in the park. Ed was beaming. He looked thrilled to have an opportunity to get out of his park police car for a few hours. He was still wearing his bulletproof vest under his shirt, and his gun belt with a myriad of gun accoutrements all smashed up under a very tight-looking, brand-new climbing harness. He had a white plastic helmet on that had never been used before, and his gray wispy hair was sticking out of the ventilation holes.
The full-body vacuum splint itself was large and red; it looked like an oversized dog bed. It came with a yellow plastic hand pump to suck the air out of it. Stefan looked at it and shrugged. In one of the splint's corners were written directions, and so I sat down with it across my lap to read them. It looked simple enough. Now that there were three of us, we could all pick her up together. We'd drape the splint over the litter, then put her in it, and then suck all the air out of the splint, and it would magically conform to her like a full-body cast.
Meanwhile, the paramedics from Tacoma had showed up, and I asked Stefan if one of them could be sent down to give her something for the pain. Stefan went back around the corner to talk on the radio. When he came back, he said that only one of the paramedics could give pain medications, but that guy was afraid of heights, so the other guy was coming down.
"Why is he coming down if he can't help?" I asked.
Stefan shrugged again. Everybody wants to feel important, and I guess sometimes it's more work trying to talk people out of helping than it is to go with the flow and deal with the consequences later. Sometimes, however, it's not.
The medic was huge, probably over three hundred pounds. He was lowered down the cliff to us with his white uniform already drenched in sweat under his arms and covered in dirt down the front. He looked like he was clutching the rope in front of him with all his might. He didn't put his legs toward the cliff to walk himself down; he just slid down with his face to the dirt. When he reached the bottom, he fell in the water and I had to help him up and hold his elbow while we walked over to the accident site.
When we got there he said he couldn't help because he couldn't give pain medication. "OK," I said. "It was brave of you to come down here, though. Can you help us lift her onto the litter, maybe support her leg? I know if I do it I'll probably bump it or something." He looked pleased to have a job. His whole body was shaking in ripples with excitement. I thought it was an odd effect, and I couldn't help staring at him.
"OK," I said, looking pointedly at Stefan, but talking to the woman, "now we're going to pick you up and move you over into the litter, and it's going to hurt a lot." I narrowed my eyes at Stefan and Ed. "But we're not going to stop no matter what, because we've got to move you over and it'll suck worse if we have to do it twice." Silently to myself, I added, "Because then you'll know what's coming."
She screamed a lot when we picked her up and moved her. It was ungainly and ugly, like it always is. It's disconcerting to me that when I'm right next to someone who is screaming in unmitigated and prolonged agony, it doesn't hurt me a bit. Being that close to suffering, I think my brain does a kind of reboot, bypassing my conscious self, sort of like, "What was that? Wow. She must be in a nasty spot. Wait... I'm here too. Am I OK?" It grates on the nerves. I have to remind myself that I am the calm in the face of the storm. I am the rock. I am the one who did not fall over the waterfall in a "Hey babe, take one more step back so you'll fit in the picture." "Ack. I'm falling!" Splat. "Oh shit!" accident.
Finally, somebody on top of the cliff threw down ropes and lowered the litter, an ancient metal contraption in the shape of a lidless wire sarcophagus. We have a nicer one, an orange fiberglass litter that comes apart in two pieces so it's easy to carry, but this wasn't it. True to his word, Stefan tied himself into the rope next to the litter so that he could be pulled up with it and keep the whole thing from scraping and bouncing up the side of the cliff. "Bye," I told the girl. "I'm putting your phone here in your pocket."
Once they were headed up, I needed to figure out how to get Ed and the medic back up to the top of the cliff. The beginning of the gully was a bit steep to climb out, even with a belay, but there weren't any people left at the top of the cliff to pull us up, so I didn't have a lot of options.
I said, "Ed, could you put your rappel device back on the rope and then sort of pull yourself up, and then lock yourself off with your other hand so you don't slide back down, and get up that way?"
"Sure," said Ed, beaming. "I can do it!"
"OK," I said, "when you get to the top I'll tie the medic into the end of the rope and then you can belay him up while he tries to climb out on his own. That way he'll have two hands," I added, looking at the medic. He nodded appreciatively.
As Ed started up, I noticed that the medic's harness didn't fit him at all. His gut spilled out over the buckle and at his hip level, the narrowest part of him where the harness actually connected, the end of the belt was barely threaded through the buckle, never mind doubled back with two inches of tail like the directions tell you. I was surprised it hadn't come apart on him on the way down. I flashed on an extremely disturbing image of this medic, suddenly freed from all constraints except gravity, plunging backwards towards me, myself looking up and seeing him fall, with his huge white medic shirt with the little gold badges flapping in the wind and blocking out the sun. It would never happen that way, I thought to myself reasonably. There wasn't any sun down here for him to block.
"Hey," I said to the medic, "you don't fit in that harness. We're going to have to find some way of making it bigger or making it tighter, or else you might fall out of it on the way up. I'm amazed you didn't fall out on the way down here." The medic was already highly agitated. Sweat was dripping off the end of his nose, and he was breathing in short, hard gasps. There was no way of telling if the news that he'd narrowly escaped plummeting to his demise affected him at all. Maybe he wouldn't have died, I thought, still staring at his waistline. After all, the woman we'd just sent off in the litter hadn't. Neither had her boyfriend, or the woman who fell over this waterfall two weeks ago.
"Suck it in," I said, and grabbed the end of the harness belt, pulling as hard as I could. He came with me, and I had to push my elbow into his gut to maintain some counter-pressure. My elbow went way in. I was surprised.
"Hey," said the medic, looking down at me, "if we both get out of this alive, would you have dinner with me?"
"No," I said. "Here, suck it in as hard as you can." I wrenched on the end of the harness, and managed to get another inch or two out of it. Now it doubled back, and there was a bit of tail sticking out the other side of the buckle. Good enough for government work.
Ed had made it to the top. "Hey, Ed, you ready to belay up the medic?" Ed gave me an enthusiastic wave and a thumbs-up. The medic started up then, sending a shower of dirt and rocks down on me. To get out of the fall line I walked back around the corner to grab all the stuff that had been left behind. There was a lot. I packed up the first-aid kit, the yellow vacuum pump, and the excess patient-packaging materials, the webbing and all the other crap that had showed up after it was no longer needed. I tied it all to my harness with a piece of webbing. When I came back around the corner I was relieved that there was no sign of Ed or the medic, which I assumed meant that everything had gone OK. I hand-over-handed it back up the rope. The weight of all the equipment hanging and swinging between my legs made it difficult going, and by the time I'd made it to the top everybody else was gone.
I picked up my green Park Service fleece jacket that someone had left next to a pile of ropes at the top of the cliff, and put it back on. There was no one around, so I went back up the little trail to the parking lot and found Chris doing paperwork in her patrol car. "Hey, Chris," I said, "you need me to help with anything else?"
"No, they're going to fly her out, the helicopter will be here in a minute. We're just trying to get the people who are parked here out of the way right now. Stefan says he'll drop the gear off later, so you can put it away." I put the first-aid kit back in the ambulance and headed up the hill, out of the way, back to another task, to the next thing on the list. On the way back to Paradise, the Funky Monkey contributed Dave Matthews's "Eat, drink, and be merry, for tomorrow we die."
The woman had left a little blood on my jacket. Back at the dorm, I pulled the screwdriver out of it, tossed the jacket in my laundry pile, and stuck the screwdriver in my pants pocket. I had guessed right this morning—it was warm enough without a jacket, now that the sun was high overhead. I was hungry again. I grabbed a bagel from the communal fridge and found some cream cheese to smear on it, and then walked back up to the climbers' self-registration box eating my bagel in the sun. When I got there I pulled out the screwdriver and started back in on the Plexiglas display cover.
There were a few cars in the lot. A couple of tourists strolled by and took a picture of the mountain with the Paradise Old Station and the self-registration box and me with the screwdriver in the foreground. I'm sure the whole tableau was beautiful, and apparently the tourists thought so, too, because the man asked me if I would take a picture of him and his wife standing right where I had been.
"Sure is quiet here, you must really enjoy your job," said the woman, handing me her camera.
"Yeah, I'm lucky," I said, and I realized that I was. I put my eye up to the viewfinder and said, "Just take one more step backwards so I can get you and the scenery in the picture," but I was the only one who thought it was funny.
## AFTERWORD
It's autumn again. A new fall. A few years down the road. In the months after Charlie died, almost all the climbing rangers who had worked with me on the south side of the mountain got married. So did I, wearing a green knee-length skirt and with flowers in my hair. Then I bought a pre-Depression-era house with a bright kitchen across the street from a strawberry field. I can see the mountains from the main road, but they are a long ways off and there are a large number of fields and foothills, fences, old barns, and Belted Galloway cows between us.
A lot of things are much the same as they were that last season on Rainier. I still awake, needed in the middle of the night, but now my ear is attuned to the soft chirps my little daughter makes rooting in her sleep rather than the reverberations of human chaos caused by ice fall, rock fall, trip 'n falls, tempests, and avalanches. I still anxiously await springtime, but now it's because I want to know if my collard greens, arugula, and daffodils will grow, and not because I'm afraid I'll get frostbite or die this year.
Asked by my editor to add an afterword to my book, I can only offer that neither of my twin philosophies, that shared hardship increases camaraderie and that "that which doesn't kill you makes you stronger," worked out for me. The mountain irrevocably broke me in many ways, but it also kept me focused on what I wanted in my life: good friends to grow old with. Along with a strong desire to grow old generally. If this experience had been the sum of my life, then this book would have outlined a tragedy. Fortunately, it was only a summer job I had for three years in my early twenties.
And now I have a husband who looks out for me, family and friends who support me. People I found as much through shared joys as shared hardships. Crazily enough, climbing is once again something I do for fun with the people I love.
## ABOUT THE AUTHOR
BREE LOEWEN DISCOVERED CLIMBING when she joined a volunteer search and rescue group at age 15. Through a series of adventures and misadventures she has climbed in Alaska, Canada, Mexico, and various South American countries, as well as all over the western U.S.
After years of living out of her car she now resides with her husband Russell and daughter Vivian in Carnation, Washington. On clear days she can see Mount Rainier from her front porch. On cloudy ones she can see its image on the license plate of her car, as the backdrop of evening newscasts, and reflected in the eyes of every hiker in plastic boots climbing Mt. Si in the early days of spring.
She continues to enjoy all types of climbing along with sea kayaking, volunteering for Seattle Mountain Rescue, restoring her 1920s craftsman-style bungalow, and baking fruit pies.
THE MOUNTAINEERS, founded in 1906, is a nonprofit outdoor activity and conservation club, whose mission is "to explore, study, preserve, and enjoy the natural beauty of the outdoors...." Based in Seattle, Washington, the club is now the third-largest such organization in the United States, with seven branches throughout Washington State.
The Mountaineers sponsors both classes and year-round outdoor activities in the Pacific Northwest, which include hiking, mountain climbing, ski-touring, snowshoeing, bicycling, camping, kayaking and canoeing, nature study, sailing, and adventure travel. The club's conservation division supports environmental causes through educational activities, sponsoring legislation, and presenting informational programs. All club activities are led by skilled, experienced volunteers, who are dedicated to promoting safe and responsible enjoyment and preservation of the outdoors.
If you would like to participate in these organized outdoor activities or the club's programs, consider a membership in The Mountaineers. For information and an application, write or call The Mountaineers, Club Headquarters, 7700 Sand Point Way NE, Seattle, WA 98115; 206-521-6001.
The Mountaineers Books, an active, nonprofit publishing program of the club, produces guidebooks, instructional texts, historical works, natural history guides, and works on environmental conservation. All books produced by The Mountaineers fulfill the club's mission.
**_Send or call for our catalog of more than 450 outdoor titles:_**
The Mountaineers Books
1001 SW Klickitat Way, Suite 201
Seattle, WA 98134
800-553-4453
mbooks@mountaineersbooks.org
www.mountaineersbooks.org
\section{Introduction}
Simple, inexpensive, and modular robots can provide significant benefits for large scale ocean monitoring and marine operations. Simple designs can be made quiet and non-intrusive to allow for observation of ocean fauna and can be deployed by the hundreds to take measurements over a much wider range than a single robot could cover. Modular robots, meanwhile, can build aquatic structures such as bridges, filters, etc., move them, and disassemble them when done. Such robotic systems could vastly improve our ability to detect and clean oil spills; find, track, and remove ecologically harmful trash collections; or even search for survivors from plane crashes or shipwrecks.
Aquatic robotic teams have been explored previously, but they have generally been limited by cost or locomotion. Leonard et al. considered the use of a team of 10 robotic gliders to explore and sample a coastal region \cite{Leonard2007}\cite{Leonard2010}, but the use of gliding locomotion restricts the permitted task space. Mintchev et al. presented a design for a miniature AUV capable of swarming behavior and full 3D motion \cite{Mintchev2014}, but it involves multiple complex, custom, and expensive actuators. Modular aquatic robots were explored in the Tactically Expandable Maritime Platform (TEMP) project, which deployed 33 modules to autonomously construct an aquatic bridge \cite{OHara2014}\cite{Paulos2015}, but required complex holonomic actuation. Furno et al. also presented a design for a modular underwater robot capable of docking and reconfiguration \cite{Furno2017}, as did Mintchev et al. \cite{Mintchev2014Journal}, but similar work is limited.
Actuation thus presents a barrier to the development of significant modular and swarming aquatic systems, and a simple, inexpensive module is needed to propel development. Propulsion via an internal rotor and the conservation of momentum is a novel and inexpensive mechanism that can enable this work. Originally explored in the context of terrestrial locomotion, such as by Kelly et al. with the Chaplygin sleigh \cite{Kelly2012} and by Degani for vertical climbing between two walls \cite{Degani2010}\cite{Degani2016}, it has recently been applied to aquatic locomotion as well. A fishlike robot based on the design was developed in \cite{Kelly2012}, and Tallapragada showed that the core propulsion mechanism was vortex shedding from the tail due to the internal rotor \cite{Tallapragada2015}.
A novel approach to the internal rotor in an aquatic environment was presented by Refael and Degani, who explored using an internal rotor to induce a paddling motion in a set of passive flippers by the conservation of angular momentum \cite{Refael2015}\cite{Refael2018}, but their design is hampered by limited mobility. In this paper we extend the work of Refael and Degani, presenting and characterizing a \textbf{new design} for a momentum-driven swimming robot capable of functioning as a mobile sensor platform individually or \textbf{in a coordinated or modular group} to overcome its limited mobility.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{media/isoBoat3.jpg}
\caption{The Modboat prototype, which is described in Section \ref{sec:mechDes}.}
\label{fig:renderingIso}
\end{figure}
This paper is organized as follows: in Section \ref{sec:basicDes} we present the basic design developed in \cite{Refael2018} and equations of motion for it. In Section \ref{sec:swarmAndModul} we discuss the design principles that we apply to the basic model to allow modularity and swarming behavior, and in Section \ref{sec:mechDes} we introduce our design: the Modboat. In Section \ref{sec:experiments} we present experiments done to evaluate the design; we discuss the results in Section \ref{sec:discussion} and offer an improved design.
\section{Design Principle and Modeling} \label{sec:basicDes}
\subsection{Dynamic Model} \label{sec:model}
We consider a single-actuated swimming robot based on the design presented by Refael and Degani \cite{Refael2015}, \cite{Refael2018}. It consists of a single cylindrical body, while a motor in the center rotates a driving mass --- a symmetric mass with a high moment of inertia. Two flippers are mounted to the underside of the body and allowed to rotate freely, with hard stops defining a closed position and a fully open position. A representative diagram is shown in Fig. \ref{fig:diagramSimple}, where both flippers are shown open for clarity.
\begin{figure}[t]
\centering
\resizebox{!}{0.6\linewidth}{%
\begin{tikzpicture}
\def1.5{3.5}
\coordinate (O) at (10,10);
\path (O) +(145:1.5) coordinate (FL);
\path (O) +(35:1.5) coordinate (FR);
\path (O) +(35:1.5*1.5) coordinate(FRRef);
\draw[very thick] (O) circle[radius=1.5];
\fill (O) circle[radius=2pt] node[below left] {\Large O};
\fill (FL) circle[radius=2pt];
\fill (FR) circle[radius=2pt];
\begin{scope}[rotate around={10:(FL)}]
\path (FL) arc (0:55:1.5) coordinate (FLEnd);
\end{scope}
\draw[ultra thick, blue] (FL) -- (FLEnd);
\begin{scope}[rotate around={170:(FR)}]
\path (FR) arc(0:-55:1.5) coordinate (FREnd);
\end{scope}
\draw[ultra thick, blue, dotted] (FR) -- (FREnd);
\node[above left](FRMid) at ($(FR)!0.5!(FREnd)$) {\Large $l_f$};
\draw[dashed] (O) -- (FR);
\draw[dashed] (FR) -- (FRRef);
\node[above left](RRad) at ($(O)!0.5!(FR)$) {\Large $r_b$};
\coordinate(AX) at (O);
\draw[gray, thick, ->] (AX) --++ (0,1) node[above] (YA) {$b_y$};
\draw[gray, thick, ->] (AX) --++ (1,0) node[right] (XA) {$b_x$};
\begin{scope}[rotate around={-25:(O)}]
\draw[gray, thick, ->] (AX) --++ (0,1) node[above] (YB) {$a_y$};
\draw[gray, thick, ->] (AX) --++ (1,0) node[right] (XB) {$a_x$};
\end{scope}
\draw
pic["\Large $\beta$",draw=black, ->, angle eccentricity=1.2, angle radius=2.0cm]{angle=XA--O--FR};
\draw
pic["\Large $\psi$",draw=black, ->, angle eccentricity=1.2, angle radius=1.15cm]{angle=FRRef--FR--FREnd};
\draw
pic["\Large $\theta$",draw=black, ->, angle eccentricity=1.2, angle radius=1.75cm]{angle=YB--O--YA};
\path (O) +(65:1.5*1.5) coordinate (RTD);
\path (O) +(115:1.5*1.5) coordinate(LTD);
\draw
pic["\Large $\dot{\theta}$",draw=black, ->, angle eccentricity=1.2, angle radius=4cm]{angle=RTD--O--LTD};
\path (O) +(270:1.5/2) node[below](BD) {\Large $F_{b}$};
\draw[very thick, red, ->] (O) -- (BD);
\path (O) +(245:1.25*1.5) coordinate (RFR);
\path (O) +(295:1.25*1.5) coordinate (LFR);
\draw
pic["\Large $\tau_r$",draw=red, very thick, <-, angle eccentricity=.9, angle radius=4.5cm]{angle=RFR--O--LFR};
\node[](FRMid1) at ($(FR)!0.5!(FREnd)$) {};
\begin{scope}[rotate around={-125:(FRMid1)}]
\path (FRMid1) +(90:1.5/2) node (FDEnd) {\Large $F_d(\dot{\theta} < 0)$};
\end{scope}
\draw[ultra thick, red, <-] (FRMid1) -- (FDEnd);
\node[](FLMid1) at ($(FL)!0.5!(FLEnd)$) {};
\begin{scope}[rotate around={125:(FLMid1)}]
\path (FLMid1) +(90:1.5/2) node (FDEnd1) {\Large $F_d(\dot{\theta} > 0)$};
\end{scope}
\draw[ultra thick, red, <-] (FLMid1) -- (FDEnd1);
\draw[ultra thick, dotted] (O) circle[radius=1.5/1.2];
\path (O) +(135:1.5/1.4) coordinate (X1);
\path (O) +(135:1.5/1.2) coordinate (X2);
\draw[ultra thick, dotted] (X1) -- (X2);
\draw
pic["\Large $\phi$",draw=black, ->, angle eccentricity=1.2, angle radius=1.95cm]{angle=YA--O--X2};
\end{tikzpicture}}
\caption{A functional diagram of the robot design. The motor is mounted at $O$, and the orientation of the driving mass is given by $\phi$. $\theta$ defines the orientation of the body-fixed frame $b$ in the world frame $a$. Although both flippers are shown as fully open, only one is open at any given time, as indicated by the solid line. Forces and torques considered are shown in red.}
\label{fig:diagramSimple}
\end{figure}
The robot moves by using the conservation of angular momentum. When the motor spins the driving mass, conservation of momentum requires that the body rotate in the opposite direction, with the amount of rotation proportional to the relative inertias of the body and the mass. The angular acceleration this creates, as well as drag from the fluid, causes the leading flipper to open against its hard stop, as shown in Fig. \ref{fig:diagramSimple}. At this point the drag ($F_d$) acts as thrust, pushing the robot laterally and forward. If the motor rotation is reversed, the open flipper is pulled closed, while the new leading flipper is opened. The lateral thrust elements cancel out, while the forward motion is preserved, resulting in a net forward motion.
We define a model to analyze the motion, in which we let $\theta$ define the orientation of the body in an inertial frame, while $\phi$ defines the angle of the driving mass relative to the body. Finally, let $(x,y)$ be the position of the center of the robot in the inertial frame. Then the configuration of the system is $\begin{bmatrix} x & y & \theta \end{bmatrix}^T$. Assuming that we have sufficient control of the motor, we take $\phi(t)$ as the prescribed input variable.
To simplify the model of the robot, we make the following assumptions:
\begin{enumerate}
\item The flippers open and close instantaneously and provide thrust only when fully open.
\item Only one flipper is open at any time, determined by $\sign{(\dot{\theta})}$.
\item The linear velocity of the robot is negligible compared to the rotational velocity. Thus the force on the flaps depends exclusively on $\dot{\theta}$ and not on $\dot{x}$ or $\dot{y}$.
\item The fluid velocity across each flap is approximately constant and equal to the fluid velocity at the center of the flap ($\norm{v_{f}}$).
\item The velocity of the water everywhere else is $0$; i.e. there are no external flows.
\end{enumerate}
The following forces, shown in Fig. \ref{fig:diagramSimple}, are considered to develop the equations of motion:
\begin{enumerate}
\item Drag on the open flipper $F_d$, which is modeled as a flat plate. The closed flipper is ignored.
\item Linear drag $F_b$ on the robot body as it moves through the fluid.
\item Rotational drag $\tau_r$ on the robot body as it rotates within the fluid (this is not considered in \cite{Refael2018}).
\end{enumerate}
\begin{table}[t]
\centering
\caption{Parameters in \eqref{eq:eom_main} and \eqref{eq:variablesMain}. Values provided are for the design presented in Section \ref{sec:mechDes}.}
\begin{tabular}{llrl} \toprule
Var. & Description & Value & Units \\ \midrule
$m$ & Total mass of the robot & 0.63 & $\si{kg}$ \\
$I_{t}$ & Inertia of driving mass & \num{1.6e-3}& $\si{\kg \square\metre}$ \\
$I$ & Inertia of whole robot & \num{1.7e-3} & $\si{\kg \square\metre}$ \\
$A_{sub}$ & Submerged body area & 0.025 & $\si{\square\metre}$ \\
$r_{t}$ & Driving mass radius & 0.075 & $\si{\metre}$ \\
$r_{b}$ & Bottom body radius & 0.025 & $\si{\metre}$ \\
$l_{f}$ & Flipper length & 0.050 & $\si{\metre}$ \\
$d_{f}$ & Flipper submerged depth & 0.043 & $\si{\metre}$ \\
$\beta$ & Flipper angular location & 45 & $\si{\degree}$\\
$\psi$ & Flipper maximum open angle & -21 & $\si{\degree}$\\
$\rho$ & Density of water & 1000 & $\si{\kg\per\cubic\metre}$ \\
$C_b$ & Body translation drag coeff. & 1.0 & --- \\
$C_r$ & Body rotation drag coeff. & 1.2 & --- \\
$C_f$ & Flipper drag coeff. & --- & --- \\ \bottomrule
\end{tabular}
\label{tab:params}
\end{table}
The full derivation and a partial analysis of the assumptions can be found in \cite{Refael2018}, whose model we have reproduced for clarity. The final equations of motion are given in \eqref{eq:eom_main}, with coefficients given by \eqref{eq:variablesMain}.
\begin{align}
M \begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{\theta} \end{bmatrix} =& \dot{\theta}^2 K_{f} R \begin{bmatrix} \sin(\beta + \psi)\sign(\dot{\theta}) \\ \cos(\beta + \psi) \\ - K_{t} \sign(\dot{\theta}) \end{bmatrix} - K_{b} \norm{v} \begin{bmatrix} \dot{x} \\ \dot{y} \\ 0 \end{bmatrix} \nonumber \\
& - \begin{bmatrix} 0 \\ 0 \\ C_r \dot{\theta} \end{bmatrix} -\begin{bmatrix} 0 \\ 0 \\ I_{t} \ddot{\phi} \end{bmatrix} \label{eq:eom_main}
\end{align}
\begin{subequations} \label{eq:variablesMain}
\begin{align}
M &= \begin{bmatrix} m & 0 & 0 \\ 0 & m & 0 \\ 0 & 0 & I \end{bmatrix} \\
R &= \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \hphantom{-}\cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix} \\
\norm{v} &= \sqrt{\dot{x}^2 + \dot{y}^2} \\
K_{f} &= \frac{1}{2}\rho C_{f} d_f l_{f} \left (r_{b}^2 + \frac{1}{4} l_{f}^2 + r_{b} l_{f} \cos(\psi) \right ) \\
K_{t} &= r_{b}\cos(\psi) + \frac{1}{2} l_{f} \\
K_{b} &= \frac{1}{4} \rho C_b A_{sub}
\end{align}
\end{subequations}
All variable definitions are presented in Table \ref{tab:params}, in which the values provided represent the design presented in Section \ref{sec:mechDes}. The values for drag coefficients $C_b$ and $C_f$ are estimated, while $C_r$ is calculated using the model presented for rotating cylinders in \cite{Childs2010}.
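For reference, \eqref{eq:eom_main} can be evaluated numerically in only a few lines. The following Python sketch (illustrative only, not part of the Modboat software) implements the right-hand side of \eqref{eq:eom_main} with the coefficients of \eqref{eq:variablesMain} and the parameters of Table \ref{tab:params}; since $C_f$ is only estimated, a generic flat-plate value of roughly $1.3$ is assumed here.
\begin{verbatim}
import numpy as np

# Parameters from the table above; C_f is an assumed flat-plate value.
m, I, I_t = 0.63, 1.7e-3, 1.6e-3
r_b, l_f, d_f = 0.025, 0.050, 0.043
beta, psi = np.deg2rad(45.0), np.deg2rad(-21.0)
rho, C_b, C_r, C_f, A_sub = 1000.0, 1.0, 1.2, 1.3, 0.025

K_f = 0.5*rho*C_f*d_f*l_f*(r_b**2 + 0.25*l_f**2 + r_b*l_f*np.cos(psi))
K_t = r_b*np.cos(psi) + 0.5*l_f
K_b = 0.25*rho*C_b*A_sub

def eom(t, q, phi_ddot):
    """Right-hand side of the equations of motion (eq:eom_main);
    state q = [x, y, theta, xdot, ydot, thetadot]."""
    x, y, th, xd, yd, thd = q
    s = np.sign(thd)
    R = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0,         0.0,        1.0]])
    flap  = thd**2 * K_f * (R @ np.array([np.sin(beta + psi)*s,
                                          np.cos(beta + psi),
                                          -K_t*s]))
    drag  = K_b * np.hypot(xd, yd) * np.array([xd, yd, 0.0])
    rot   = np.array([0.0, 0.0, C_r*thd])
    rotor = np.array([0.0, 0.0, I_t*phi_ddot(t)])
    acc   = (flap - drag - rot - rotor) / np.array([m, m, I])
    return [xd, yd, thd, *acc]
\end{verbatim}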
\subsection{Input} \label{sec:input}
As described in Section \ref{sec:model}, when the robot rotates with $\dot{\theta} < 0$ the right flipper is activated, while when $\dot{\theta} > 0$ the left flipper is activated. We can achieve forward motion by alternating left and right flipper paddles, which is accomplished by inputting a periodic function of $\phi$ to force a periodic function of $\theta$.
Following \cite{Refael2018}, the robot we consider is propelled by inputs of the form given in \eqref{eq:input}, which defines a piecewise-continuous sinusoid with varying frequencies $\omega_1$ and $\omega_2$. $T_1$ and $T_2$ are the periods associated with \textit{complete} rotations at frequencies $\omega_1$ and $\omega_2$, respectively, $A$ is the amplitude, and $\phi_0$ is the zero-orientation of the driving mass (the midpoint of the oscillation). The 4-tuple $\left ( T_1 , T_2 , A , \phi_0 \right )$ thus fully defines the input function; the input $(1,1,2,0)$, for example, corresponds to a symmetric cosine wave with period $1\si{s}$ and amplitude $2\si{rad}$, centered around $0 \si{rad}$.
\begin{equation} \label{eq:input}
\phi(t) = \begin{cases}
\phi_0 + A\cos{\left (\omega_1 t \right )} & t \in \left [0, \frac{T_1}{2} \right ) \\
\phi_0 - A\cos{\left (\omega_2 \left (t - \frac{1}{2}T_1 \right ) \right )} & t \in \left [\frac{1}{2}T_1,\frac{T_1 + T_2}{2} \right )
\end{cases}
\end{equation}
We define a \textbf{stroke} as a period during which the motor rotates in a single direction, i.e. either of the two cases in \eqref{eq:input}. A \textbf{cycle} is then defined as a full period, i.e. two strokes. The robot can then be ``steered'' by varying the periods of oscillation that define the two strokes of the input function $\phi(t)$. When $T_1 = T_2$, the oscillation is symmetric and results in oscillations about a straight line. This can be seen in Fig. \ref{fig:trajStraight}, where numerically solving \eqref{eq:eom_main} with input as in \eqref{eq:input} produces an initial deviation as the robot starts from rest, but then settles into oscillation around a straight line. When $T_1 > T_2$, however, the stroke that activates the right flipper is faster than the stroke that activates the left. Since the angular momentum imparted by the flippers is proportional to $\int \dot{\theta}^2 dt$ (i.e. the drag on a flat plate caused by the rotation of the body in the water), the result is more thrust from the right flipper, which results in a counterclockwise trajectory. When $T_1 < T_2$ the opposite occurs, causing a clockwise trajectory.
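As a concrete illustration (again a sketch, not the on-board code), the piecewise input of \eqref{eq:input} and the rotor acceleration $\ddot{\phi}(t)$ required by the model can be generated as follows; the resulting \texttt{phi\_ddot} can be passed to the \texttt{eom} sketch above (e.g. via \texttt{scipy.integrate.solve\_ivp}) to reproduce trajectories such as the one in Fig. \ref{fig:trajStraight}.
\begin{verbatim}
import numpy as np

def make_input(T1, T2, A, phi0):
    """Piecewise-cosine input (eq:input); returns phi(t) and phi_ddot(t)."""
    w1, w2 = 2*np.pi/T1, 2*np.pi/T2
    Tc = 0.5*(T1 + T2)                 # one full cycle = two strokes

    def phi(t):
        t = np.mod(t, Tc)              # the input repeats every cycle
        return np.where(t < 0.5*T1,
                        phi0 + A*np.cos(w1*t),
                        phi0 - A*np.cos(w2*(t - 0.5*T1)))

    def phi_ddot(t):
        t = np.mod(t, Tc)
        return np.where(t < 0.5*T1,
                        -A*w1**2*np.cos(w1*t),
                        +A*w2**2*np.cos(w2*(t - 0.5*T1)))

    return phi, phi_ddot

# Symmetric input (1, 1, 2, 0) used for the straight-line simulation;
# T1 > T2 produces counterclockwise turns, T1 < T2 clockwise turns.
phi, phi_ddot = make_input(1.0, 1.0, 2.0, 0.0)
\end{verbatim}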
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{media/trajStraightWithInset.png}
\caption{A $30\si{s}$ simulation of the equations of motion \eqref{eq:eom_main} with inputs $(1,1,2,0)$. The boat starts at rest with $\theta (t = 0) = 0$ (upward). The initial location is marked as a green circle, while the final is a red X. The blue line plots the COM, while the arrows indicate the orientation $\theta_t$ (see Section \ref{sec:desChoices}). The inset shows a magnified portion of the trajectory (marked in black) to clarify detail.}
\label{fig:trajStraight}
\end{figure}
\section{Design for Swarming/Modularity} \label{sec:swarmAndModul}
\subsection{Requirements} \label{sec:requirements}
Although the design presented in Section \ref{sec:basicDes} is innovative in its simplicity, it lacks the ability to move quickly and precisely. Because forward motion is produced by a reciprocal motion of the motor, the thrust generated by each stroke is limited by the constant need for the motor to switch directions. Moreover, the response time of the system is bounded by the fastest cycle time it can achieve, which sets the minimum time required to complete a full maneuver. Because of the limited thrust, however, most maneuvers take longer than one cycle, so the response time is slowed further.
We consider two methodologies for remedying these challenges. The first is to design the swimmer to be \textbf{modular}. The TEMP project \cite{OHara2014}\cite{Paulos2015} showed that modular robot teams made of capable individuals can be useful as structures, but they can also improve mobility when the individual robots are limited, as ours is. Allowing multiple robots to rigidly connect together can potentially allow the group to be more responsive and more powerful, but at a minimum enables more actuated degrees of freedom of the conglomerate. Multiple robots swimming together may provide additional thrust when paddling in phase or more uniform thrust when paddling out of phase. Multiple robots facing in different directions, meanwhile, can make the system more maneuverable by allowing it to turn either more or less effectively or to brake, as the situation requires.
While the modular approach focuses on overcoming the design's limitations, the second approach --- \textbf{swarming} --- embraces them. In this method, the robots are released en masse. Although no one robot can be relied on to overcome external flows or respond quickly, the group as a whole will be far more resistant to these negative effects. This allows less capable units to perform on par with more functional ones, enabling vast mobile sensor networks or search teams.
\subsection{Design Choices} \label{sec:desChoices}
In order to ensure that our system can function both as a swarming robot or as a modular one, we make design choices that will satisfy the criteria for both methodologies. We therefore make several modifications to the design presented in \cite{Refael2018} when developing a prototype. In particular, we want to ensure that there exists a functional central body with which other robots can interact, whether by docking/undocking or otherwise, and that the motion of the flippers does not interfere with other robots swimming nearby, whether docked or simply in proximity.
\begin{enumerate}
\item \textbf{Primary Top Body:} we replace the driving mass with a separate, larger body and assign primary focus to its orientation. Thus, while the original configuration included the orientation $\theta$ of the body bearing the flippers, we replace it with the orientation of the new top body $\theta_t$. Because the two bodies are mechanically linked by the motor, the modeling change is achieved by augmenting \eqref{eq:eom_main} with a new orientation variable $\theta_t$ as in \eqref{eq:ori_main}.
\begin{equation} \label{eq:ori_main}
\theta_t = \theta + \phi
\end{equation}
\item \textbf{Non-Protruding Flippers:} the flippers are designed such that they do not protrude from the projection of the top body at any time. Assuming the top bodies of neighboring boats are roughly co-planar, top body contact will prevent any collision between flippers. Although it is still possible for flows created by flippers to interfere with the ability of other robots in a group to swim, we do not consider this presently.
\item \textbf{Docking Points:} magnetic docking points are added at four positions on the top body to allow docking between individual robots.
\item \textbf{Tail for Undocking:} a feature protruding from the outline of the top body (the ``tail'') is added to the bottom body. This tail --- when suitably designed --- allows the robots to undock from each other by using the existing single actuator.
\item \textbf{Inexpensive:} the robot must be designed to be inexpensive to allow significant quantities to be produced.
\end{enumerate}
The design created to satisfy these criteria and fit the model presented in Section \ref{sec:basicDes} --- the Modboat --- is presented in Section \ref{sec:mechDes}.
\section{Mechanical Design} \label{sec:mechDes}
The Modboat --- shown in Fig. \ref{fig:renderingIso} --- is comprised of two bodies: (1) a top body that serves as the brain of the robot (shown in Fig. \ref{fig:topBody}), and (2) a bottom body that serves as the propulsion system (shown in Fig. \ref{fig:bottomBody}), mechanically linked by a single DC motor. Both bodies rotate in the water when the robot is operating, so to reduce drag they are shaped as cylinders.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{media/boat2top.png}
\caption{Rendering of the top body of the Modboat. The encoder is mounted on the input shaft of the motor, at the top of the figure, while wiring is not shown for clarity. The coordinate frame has its origin at the center of the body and is fixed to the body.}
\label{fig:topBody}
\end{figure}
The top body, shown in Fig. \ref{fig:topBody}, is waterproofed and houses five rechargeable NiMH AAA batteries that power the robot. It also contains an ESP8266 microcontroller and motor driver, as well as the motor itself. The point at which the motor shaft exits the body is not waterproofed. Instead, the Modboat is designed to float such that this interface is above the waterline. For these initial single-module prototypes the magnetic docking points are omitted.
The top body functions as the driving mass described in Sections \ref{sec:basicDes} and \ref{sec:desChoices}. By including the heaviest components --- the batteries and motor --- in the top body we give it a higher moment of inertia than the bottom body. This allows the Modboat to achieve more rotation from the bottom body using smaller amplitude inputs.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{media/boat2bottom.png}
\caption{Rendering of the bottom body of the Modboat. The coordinate frame has its origin on the axis of rotation at the base of the coupler, and is fixed to the body.}
\label{fig:bottomBody}
\end{figure}
The bottom body, shown in Fig. \ref{fig:bottomBody}, has no actuators or other electronic components. It consists of a coupler, which interfaces with the motor, a central cylindrical body, and a tail. The flippers are held in place by loose pin joints on the top and bottom, allowing them to rotate freely between fully closed (against the central cylinder) and fully open (with the hard stops contacting the central cylinder). Their curved shape is designed to lie flat against the central cylinder to maintain a cylindrical profile for reduced drag.
The central cylinder is hollow. This serves to (1) lower the mass and moment of inertia of the bottom body and (2) provide a potential payload compartment. The current bottom body is not waterproofed, but water-safe sensors and payloads may be equipped.
A single 100:1 geared DC motor capable of $14.7 \si{rad/s}$ is used to create relative rotation between the two bodies, with a magnetic encoder to measure orientation. A PID controller run on the ESP8266 ensures that the motor orientation follows prescribed trajectories $\phi(t)$ as defined in \eqref{eq:input}, but the gearbox allows $\approx 5^\circ$ of backlash when static.
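A minimal discrete position loop of this kind is sketched below in Python for illustration only; the gains, loop rate, and the \texttt{read\_encoder} and \texttt{set\_motor\_pwm} callbacks are placeholders and do not correspond to the actual ESP8266 firmware.
\begin{verbatim}
import time

KP, KI, KD = 8.0, 0.5, 0.1      # illustrative gains (assumed)
DT = 0.005                      # 200 Hz loop rate (assumed)

def follow(phi_ref, read_encoder, set_motor_pwm, duration):
    """Track a prescribed rotor trajectory phi_ref(t) with a PID loop."""
    integral, prev_err = 0.0, 0.0
    t0 = time.monotonic()
    while (t := time.monotonic() - t0) < duration:
        err = phi_ref(t) - read_encoder()      # tracking error [rad]
        integral += err * DT
        deriv = (err - prev_err) / DT
        u = KP*err + KI*integral + KD*deriv
        set_motor_pwm(max(-1.0, min(1.0, u)))  # saturated duty cycle
        prev_err = err
        time.sleep(DT)
\end{verbatim}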
The result is an affordable prototype Modboat that costs about $\$122$, as shown in Table \ref{tab:cost}. The components were chosen with only minimal attention to cost; the affordability is rather a result of the simple propulsion mechanism. Once performance is proven, however, the design can be optimized to reduce the cost even further.
\begin{table}[t]
\centering
\caption{Approximate material cost of the Modboat prototype.}
\begin{tabular}{lccr} \toprule
Part Name & Cost/Unit (USD) & Qty & Cost (USD) \\ \midrule
6'' OD acrylic tube & 3.99/inch & 3 & 15.99 \\
1.25'' acrylic tube & 0.25/inch & 2 & 0.50 \\
12''x12''x1/8'' acrylic & 9.15/sheet & 1 & 9.15 \\
2'' ABS tube & 1.15/inch & 3 & 3.45 \\
7/8'' ABS rod & 0.55/inch & 3 & 1.65 \\
12''x12''x1/8'' ABS & 8.87/sheet & 2 & 17.74 \\
Pololu 20D DC Motor & 22.95 & 1 & 22.95 \\
Pololu Encoder & 4.48 & 1 & 4.48 \\
Pololu Mounting Hub & 3.47 & 1 & 3.47 \\
Flippers & 7.50 & 2 & 15.00 \\
NodeMCU ESP8266 & 8.39 & 1 & 10.99 \\
Motor Driver {\tiny (TB6612FNG)} & 3.33 & 1 & 3.33 \\
Custom PCB & 6.20 & 1 & 6.20 \\
Electronic comps. & --- & --- & 4.00 \\
Screws \& Glue & --- & --- & 3.00 \\ \midrule
& & \textbf{Total:} & 121.90 \\\bottomrule
\end{tabular}
\label{tab:cost}
\end{table}
\section{Experiments} \label{sec:experiments}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{media/trajBad.png}
\caption{A $30\si{s}$ trajectory of motion capture data from the Modboat running with inputs $(1,1,2,0)$. The initial location is marked as a green circle while the final is a red X, and the trajectory has been artificially rotated so that the initial motion is upwards.}
\label{fig:trajBad}
\end{figure}
The Modboat was tested in open-loop in a $4.5\si{m} \times 3.0\si{m} \times 1.2 \si{m}$ tank of water equipped with an OptiTrack motion capture system that provided planar position, orientation, velocity, and angular velocity data at $120 \si{Hz}$. A MATLAB interface was used to send motion commands to the Modboat and record the incoming data for post-processing.
We tested both symmetric and asymmetric input waveforms. The curvature of the resulting trajectories was measured by fitting a circle to the most consistently curved portion of the trajectory and recording its radius, with positive radii assigned to counterclockwise trajectories and negative radii to clockwise trajectories. Ideally, we would see radii approaching infinity for symmetric inputs and finite radii for asymmetric inputs.
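The curvature measurement can be reproduced with a standard algebraic circle fit; the following Python sketch, which operates on NumPy arrays of trajectory samples, shows the idea (it is representative of, not identical to, our post-processing script).
\begin{verbatim}
import numpy as np

def signed_radius(x, y):
    """Least-squares (Kasa) circle fit to trajectory samples (x, y).
    Returns the fitted radius, positive for counterclockwise
    trajectories and negative for clockwise ones."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c, d, e = np.linalg.lstsq(A, b, rcond=None)[0]
    xc, yc = 0.5*c, 0.5*d
    r = np.sqrt(e + xc**2 + yc**2)
    # Turn direction from the net cross product of successive steps.
    dx, dy = np.diff(x), np.diff(y)
    turn = np.sum(dx[:-1]*dy[1:] - dy[:-1]*dx[1:])
    return np.copysign(r, turn)
\end{verbatim}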
\begin{figure}[t]
\centering
\subfloat[Data for Modboat with bottom cylinder.\label{fig:weightShiftBottom}]{%
\includegraphics[width=\linewidth]{media/summaryWSBottom1.png}}
\hfill
\subfloat[Data for Modboat with $5\si{mm}$ spacers.\label{fig:weightShiftSpacers}]{%
\includegraphics[width=\linewidth]{media/summaryWSSpacers.png}}
\caption{The radii of the fitted circles plotted against the orientation of the center of mass. The dashed red lines indicate the diameter of the Modboat, while the red circles indicate data truncated to $4\si{m}$ for clarity (since large radii approach straight lines, where the radius ceases to be a useful measure). All tests were conducted for $30\si{s}$ with inputs $(1,1,2,0)$.}
\label{fig:summary12}
\end{figure}
It is reasonable to expect some deviation from a straight path --- due both to imperfect design of the system and non-static water conditions --- but for symmetric input waveforms we observed strong curves in both clockwise and counterclockwise directions. A sample trajectory is shown in Fig. \ref{fig:trajBad} and demonstrates significant deviation from the straight line trajectory predicted by our model in \eqref{eq:eom_main} for those inputs, such as the one shown in Fig. \ref{fig:trajStraight}.
While there was variation in how much the Modboat turned in a given test, we observed a particularly strong dependence on the location of the center of mass (COM). While the Modboat is designed to align the center of mass with the center of rotation at the motor shaft, some offset exists due to manufacturing imperfections and wiring. If the COM lies along the $y$ axis of the top body (see Fig. \ref{fig:topBody}), then a motion according to \eqref{eq:input} with $\phi_0 = 0$ causes a symmetric oscillation of the COM about the $y$ axis on the bottom body (see Fig. \ref{fig:bottomBody}). If the COM is off-axis, however, it will oscillate asymmetrically about the bottom body axis and may induce curvature in the trajectory. By varying $\phi_0$ in \eqref{eq:input}, we can therefore simulate shifting the COM within the top body and evaluate its influence.
The radii that result from varying $\phi_0$ are presented in Fig. \ref{fig:weightShiftBottom}, with the diameter of the boat marked for reference. The radii cluster around the diameter lines, indicating sharp turns regardless of center of mass position. We do not observe a region of straight-line behavior, which would be indicated by points tending towards $\pm \infty$.
We found that the on-board motion was symmetric by evaluating the data provided by the encoders, so we considered the effect of the closed position of the flippers on the trajectory curvature. $5\si{mm}$ spacers were placed on the bottom body cylinder to prevent the flippers from fully closing, and the same set of tests was performed. The results are presented in Fig. \ref{fig:weightShiftSpacers}, where we observe that the radii of all the trajectories have increased, and the region around $\phi_0 \in [-15^\circ,-10^\circ]$ now displays curvatures that are much closer to straight lines.
\section{Discussion} \label{sec:discussion}
The sharp turning behavior when the desired trajectories are straight, indicated by the small radii in Fig. \ref{fig:weightShiftBottom}, makes control of this design very difficult. The data indicate that the robot must be precisely balanced to achieve straight line motion; Fig.~\ref{fig:weightShiftBottom} has only a $5^\circ$ range ($\phi_0 \in [-10^\circ,-5^\circ]$) in which the radii transition from negative to positive, which is where straight lines would occur, despite the fact that we measured the COM as being within $\pm 1.25\si{mm}$ of the motor axis. While achieving sub-millimeter precision in balancing the robot is technically possible, it would (1) increase the cost of the system and (2) mean that we cannot carry payloads or dock with other robots, as these would shift the COM. We could use closed-loop control to improve performance, but these values indicate that there may be a fundamental issue with the design presented causing a heightened sensitivity to mass balance.
The core of the mechanism for turning and propulsion is the differentiated position of the flippers. Each flipper opens due to two forces: (1) a centrifugal action due to the rotation of the body and (2) drag from the moving water. The drag opening the flipper, shown in Fig. \ref{fig:flapOpenAngle}, acts along the tangent to the circular body, since we assume in Section \ref{sec:model} that the translational velocity of the robot is negligible relative to its rotational velocity. The torque opening the flipper is then given by \eqref{eq:tauOpen}, and is a function of $\sin{(\alpha)}$, where $\alpha$ is the opening angle. This torque produces a positive feedback loop, since (if $\alpha > 0$) as $\alpha$ increases $\sin{(\alpha)}$ increases and so does $\tau_{open}$, further opening the flipper until it is completely open. Under the assumptions made in Section \ref{sec:model}, each flipper produces thrust only when it is fully open, so it is imperative that the flippers open symmetrically when they are driven symmetrically in order to achieve balanced thrust and straight line motion.
\begin{equation} \label{eq:tauOpen}
\tau_{open} = \frac{1}{2} l_f F_{f} \sin{(\alpha)}
\end{equation}
\begin{figure}[t]
\centering
\resizebox{\linewidth}{!}{%
\begin{tikzpicture}
\coordinate(AX) at (6,8);
\coordinate(AY) at (13,8);
\draw[gray, thick, ->] (AX) --++ (0,1) node[above] (Y) {$y$};
\draw[gray, thick, ->] (AX) --++ (1,0) node[right] (X) {$x$};
\path (AY) --++ (0,1) node (Y1) {};
\path (AY) --++ (1,0) node (X1) {};
\def1.5{1.5}
\def135{135}
\coordinate (O) at (10,10);
\path (O) +(135:1.5) coordinate (FL);
\path (O) +(45:1.5) coordinate (FR);
\draw[very thick] (O) circle[radius=1.5];
\fill (O) circle[radius=2pt] node[below right] {O};
\fill (FL) circle[radius=2pt];
\fill (FR) circle[radius=2pt];
\begin{scope}[rotate around={60:(FL)}]
\draw[ultra thick, blue] (FL) arc (0:90:1.5) coordinate (FLEnd);
\end{scope}
\draw[dashed, thick] (FL) -- (FLEnd);
\begin{scope}[rotate around={80:(FR)}]
\draw[ultra thick, blue] (FR) arc (0:-90:1.5) coordinate (FREnd);
\end{scope}
\draw[dotted] (FL) -- (O);
\begin{scope}[rotate around={90:(FL)}]
\path (FL) +(135:1.5) coordinate(FLRef);
\end{scope}
\tkzMarkRightAngle[draw=gray,size=.25](O,FL,FLRef);
\draw[dotted] (FL) -- (FLRef);
\node[](FLMid) at ($(FL)!0.5!(FLEnd)$) {};
\fill (FLMid) circle[radius=2pt] node[below right] {};
\begin{scope}[rotate around={90:(FLMid)}]
\path (FLMid) + (135:1.5) node[](FLMidRef) {$F_{f}$};
\end{scope}
\draw[thick, <-] (FLMid) -- (FLMidRef);
\draw[thick]
pic["$\alpha$",draw=black, <-, angle eccentricity=1.5, angle radius=.5cm]{angle=FLEnd--FL--FLRef};
\path (O) +(65:1.5*1.5) coordinate (RTD);
\path (O) +(115:1.5*1.5) coordinate(LTD);
\draw[dashed, thick]
pic["$\dot{\theta}$",draw=gray, ->, angle eccentricity=1.2, angle radius=2.0cm]{angle=RTD--O--LTD};;
\end{tikzpicture}}
\caption{A simplified model of the Modboat bottom body, showing the flipper opening angle $\alpha$, measured from the perpendicular to the radius. The dashed line represents the flat plate model of the flipper. The drag $F_f$ is shown acting along the tangent line, since we assume in Section \ref{sec:model} that translational velocity is negligible, making the fluid velocity along the tangent.}
\label{fig:flapOpenAngle}
\end{figure}
When the flippers are completely closed against the bottom body cylinder, very little area is exposed between the flipper and the cylinder and very little water is present in the gap, preventing drag from playing a role. Thus, when the body begins to rotate only the centrifugal action works to open the flipper until water can enter and provide additional drag force. As long as the Modboat floats at zero roll angle when stationary (where roll is the rotation about the body-frame $y$ axis in Fig. \ref{fig:topBody}), this effect is symmetric and the system should conform to the model in \eqref{eq:eom_main}. But we observed that the flippers display a high sensitivity to the roll angle, opening slightly under their own weight at even small non-zero roll angles. The lower flipper --- which floats partially open with water in the gap --- has a potentially significant advantage in opening time and therefore thrust produced. Since even slight offsets of the COM can induce non-zero roll angles in the water, this would cause the Modboat to turn depending on which flipper was given the advantage by this effect.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{media/boat3bottom.png}
\caption{Rendering of the updated bottom body of the Modboat without the bottom body cylinder. This change required a redesign of the flippers to allow a new bottom pin connection, but the overall shape was maintained.}
\label{fig:bottomBodyNew}
\end{figure}
This explanation would indicate that the weakness of the design presented in Section \ref{sec:mechDes} and shown in Fig. \ref{fig:bottomBody} is the flush closed position of the flippers against the bottom body cylinder. By enforcing a gap between the flippers and the body even when closed, we would allow the flippers to open more symmetrically even when the Modboat has a non-zero roll angle. This is verified in Fig. \ref{fig:weightShiftSpacers}, where $5\si{mm}$ spacers were used to prevent flush closure of the flippers, resulting in larger radii for all trajectories and significantly larger radii around $\phi_0 \in [-15^\circ,-10^\circ]$, which now are beginning to approach straight lines. This indicates that drag is the dominant effect in determining how symmetrically the flippers open, and that allowing it to act is critical. Nevertheless, while the $5\si{mm}$ spacers are an improvement, the drop in radius for $\phi_0 > -10^\circ$ and $\phi_0 < -15^\circ$ shows that the COM dependence is still present and further improvement is needed.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{media/summaryWSNoBottom.png}
\caption{The radii of the fitted circles plotted against the orientation of the center of mass for the Modboat with \textbf{no bottom cylinder}. The dashed red lines indicate the diameter of the Modboat, and the red circles indicate truncated data as in Fig. \ref{fig:weightShiftSpacers}. All tests were conducted for $30 \si{s}$ with inputs $(1,1,2,0)$.}
\label{fig:weightShiftNoBottom}
\end{figure}
A redesigned bottom body that removes the cylinder entirely is shown in Fig. \ref{fig:bottomBodyNew}; the flippers have been modified slightly to continue to allow a two pin connection, but their overall shape has been maintained. This design allows water to access the flippers at all times so that drag can act immediately when they begin to open, and this reduces the dependence on the roll angle significantly. Fig. \ref{fig:weightShiftNoBottom} shows the results for testing this new design, where we observe that the sharpest turns are in the same region as the widest turns of Fig. \ref{fig:weightShiftBottom} and we can achieve relatively large radii over a much wider range of COM offsets. While these results are still far from ideal, they are effective enough to allow closed-loop control to finish the job.
Although we have removed the payload compartment from this new bottom body, we can still carry water-safe payloads as long as they do not prevent water from reaching the flippers. Additionally, a payload compartment can still be designed as long as it meets this restriction. The ability to dock with other robots for modularity has also not been restricted by the redesign, as the flippers are still contained within the top body profile.
\section{Conclusions}
In this paper we have presented and characterized a new design for an affordable single-motor swimming robot --- the Modboat --- based on \cite{Refael2018}, which is unique in allowing modular and swarming behaviors. These behaviors are intended to overcome the main limitations of the design, which are its low thrust and maneuverability, while building on its strengths: simplicity and inherent low cost.
We have shown experimentally that this robot is subject to significant sensitivity to the position of its center of mass, effectively removing its ability to swim in straight lines. This is most likely caused by the non-zero roll angle that offset mass induces on the robot, which accentuates asymmetries in flipper thrusts.
Any robot implementing this design must be built to reduce this sensitivity in order to provide reasonable open-loop behavior that can be stabilized in closed-loop. We have shown that redesigning the passive bottom body of the Modboat significantly improves the open-loop performance of the system in attempting to swim straight lines. The resulting trajectories are considered reasonable enough to allow closed-loop control to stabilize the system.
In future work we will implement closed-loop orientation feedback and evaluate the ability of the Modboat to follow trajectories and perform tasks using this methodology. This type of position control will be critical to enable the docking of multiple boats together as we explore swarming and larger group behavior. The impact of disturbances, such as flows or vortices, will also be considered, as will different motion primitives that may be more successful at driving the system.
\section*{Acknowledgment}
We thank Dr. M. Ani Hsieh for graciously allowing us the use of her tank and motion capture system for all of the testing and data collection described in this work.
\bibliographystyle{./bibliography/IEEEtran}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,304 |
GServiceBase *GServiceBase::vss_agent_service = NULL;
// Register the executable with the Service Control Manager (SCM).
//
// Arguments:
// service - the reference to a GServiceBase object.
//
// Return Value: If the function succeeds, the return value is TRUE,
// otherwise FALSE.
BOOL GServiceBase::Run(GServiceBase* service) {
vss_agent_service = service;
SERVICE_TABLE_ENTRY serviceTable[] = {
{ service->vss_agent_service_name, ServiceMain },
{ NULL, NULL }
};
// Connects the main thread to the service control manager, which causes
// the thread to become service control dispatcher for the calling process.
return StartServiceCtrlDispatcher(serviceTable);
}
// Entry point for the service registers the handler function and starts
// the service.
//
// Arguments:
// argc - number of command line arguments.
// argv - array of command line arguments.
void WINAPI GServiceBase::ServiceMain(DWORD argc, LPWSTR *argv) {
assert(vss_agent_service != NULL);
// Register the handler function for the service.
vss_agent_service->vss_agent_statusHandle =
RegisterServiceCtrlHandler(
vss_agent_service->vss_agent_service_name, ServiceControlHandler);
if (vss_agent_service->vss_agent_statusHandle == NULL) {
throw GetLastError();
}
// Start the service.
vss_agent_service->Start(argc, argv);
}
// The function is called by the SCM whenever a control code is sent to the
// service.
//
// Arguments:
// serviceControl - the control code. Can be one of the values:
//
// SERVICE_CONTROL_CONTINUE
// SERVICE_CONTROL_INTERROGATE
// SERVICE_CONTROL_NETBINDADD
// SERVICE_CONTROL_NETBINDDISABLE
// SERVICE_CONTROL_NETBINDREMOVE
// SERVICE_CONTROL_PARAMCHANGE
// SERVICE_CONTROL_PAUSE
// SERVICE_CONTROL_SHUTDOWN
// SERVICE_CONTROL_STOP
//
// This parameter can also be a user-defined control code ranges from 128
// to 255.
void WINAPI GServiceBase::ServiceControlHandler(DWORD serviceControl) {
switch (serviceControl) {
case SERVICE_CONTROL_STOP: vss_agent_service->Stop(); break;
case SERVICE_CONTROL_PAUSE: vss_agent_service->Pause(); break;
case SERVICE_CONTROL_CONTINUE: vss_agent_service->Continue(); break;
case SERVICE_CONTROL_SHUTDOWN: vss_agent_service->Shutdown(); break;
case SERVICE_CONTROL_INTERROGATE: break;
default: break;
}
}
#pragma endregion
#pragma region Service Constructor and Destructor
// The constructor of GServiceBase initializes a new instance of the
// GServiceBase class.
//
// Arguments:
// serviceName - the name of the service.
// canStop - the service can be stopped.
// canShutdown - the service is notified when system shutdown occurs.
// canPauseContinue - the service can be paused and continued.
GServiceBase::GServiceBase(LPWSTR serviceName,
BOOL canStop = TRUE,
BOOL canShutdown = TRUE,
BOOL canPauseContinue = FALSE) {
// Service name must be a valid string and cannot be NULL.
vss_agent_service_name = (serviceName == NULL) ? L"" : serviceName;
vss_agent_statusHandle = NULL;
// The service runs in its own process.
vss_agent_status.dwServiceType = SERVICE_WIN32_OWN_PROCESS;
// The service is starting.
vss_agent_status.dwCurrentState = SERVICE_START_PENDING;
// The accepted commands of the service.
DWORD controlsAccepted = 0;
if (canStop) {
controlsAccepted |= SERVICE_ACCEPT_STOP;
}
if (canShutdown) {
controlsAccepted |= SERVICE_ACCEPT_SHUTDOWN;
}
if (canPauseContinue) {
controlsAccepted |= SERVICE_ACCEPT_PAUSE_CONTINUE;
}
vss_agent_status.dwControlsAccepted = controlsAccepted;
vss_agent_status.dwWin32ExitCode = NO_ERROR;
vss_agent_status.dwServiceSpecificExitCode = 0;
vss_agent_status.dwCheckPoint = 0;
vss_agent_status.dwWaitHint = 0;
}
// The virtual destructor of GServiceBase.
GServiceBase::~GServiceBase(void) {
}
#pragma endregion
#pragma region Service Start, Stop, Pause, Continue, and Shutdown
// The function starts the service. It calls the OnStart virtual function.
//
// Arguments:
// argc - number of command line arguments
// argv - array of command line arguments
void GServiceBase::Start(DWORD argc, LPWSTR *argv) {
try {
// Tell SCM that the service is starting.
SetServiceStatus(SERVICE_START_PENDING);
// Perform service-specific initialization.
OnStart(argc, argv);
// Tell SCM that the service is started.
SetServiceStatus(SERVICE_RUNNING);
} catch (DWORD error) {
// Log the error.
WriteErrorLogEntry(L"Service Start", error);
// Set the service status to be stopped.
SetServiceStatus(SERVICE_STOPPED, error);
} catch (...) {
// Log the error.
LogOperationalMessage(L"Service failed to start");
// Set the service status to be stopped.
SetServiceStatus(SERVICE_STOPPED);
}
}
// When implemented in a derived class, executes when a Start command is sent
// to the service by the SCM or when the operating system starts when a service
// is configured to start automatically.
//
// Arguments:
// argc - number of command line arguments
// argv - array of command line arguments
void GServiceBase::OnStart(DWORD argc, LPWSTR *argv) {
UNREFERENCED_PARAMETER(argc);
UNREFERENCED_PARAMETER(argv);
}
// The function stops the service and calls the OnStop virtual function.
void GServiceBase::Stop() {
DWORD originalState = vss_agent_status.dwCurrentState;
try {
// Tell SCM that the service is stopping.
SetServiceStatus(SERVICE_STOP_PENDING);
// Perform service-specific stop operations.
OnStop();
// Tell SCM that the service is stopped.
SetServiceStatus(SERVICE_STOPPED);
} catch (DWORD error) {
WriteErrorLogEntry(L"Service Stop", error);
// Restore the original service status.
SetServiceStatus(originalState);
} catch (...) {
LogOperationalMessage(L"Service failed to stop.");
// Restore the original service status.
SetServiceStatus(originalState);
}
}
// When implemented in a derived class, executes when a Stop command is sent
// to the service by the SCM.
void GServiceBase::OnStop() {
}
// The function pauses the service if the service supports pause and continue.
// It calls the OnPause virtual function.
void GServiceBase::Pause() {
try {
// Tell SCM that the service is pausing.
SetServiceStatus(SERVICE_PAUSE_PENDING);
// Perform service-specific pause operations.
OnPause();
// Tell SCM that the service is paused.
SetServiceStatus(SERVICE_PAUSED);
} catch (DWORD error) {
WriteErrorLogEntry(L"Service Pause", error);
// Tell SCM that the service is still running.
SetServiceStatus(SERVICE_RUNNING);
} catch (...) {
LogOperationalMessage(L"Service failed to pause.");
// Tell SCM that the service is still running.
SetServiceStatus(SERVICE_RUNNING);
}
}
// When implemented in a derived class, executes when a Pause command is sent
// to the service by the SCM.
void GServiceBase::OnPause() {
}
// The function resumes normal functioning after being paused by calling
// OnContinue virtual function.
void GServiceBase::Continue() {
try {
// Tell SCM that the service is resuming.
SetServiceStatus(SERVICE_CONTINUE_PENDING);
// Perform service-specific continue operations.
OnContinue();
// Tell SCM that the service is running.
SetServiceStatus(SERVICE_RUNNING);
} catch (DWORD error) {
WriteErrorLogEntry(L"Service Continue", error);
// Tell SCM that the service is still paused.
SetServiceStatus(SERVICE_PAUSED);
} catch (...) {
LogOperationalMessage(L"Service failed to resume.");
// Tell SCM that the service is still paused.
SetServiceStatus(SERVICE_PAUSED);
}
}
// When implemented in a derived class, OnContinue runs when a Continue command
// is sent to the service by the SCM.
void GServiceBase::OnContinue() {
}
// The function executes when the system is shutting down. It calls OnShutdown
// virtual function.
void GServiceBase::Shutdown() {
try {
// Perform service-specific shutdown operations.
OnShutdown();
// Tell SCM that the service is stopped.
SetServiceStatus(SERVICE_STOPPED);
} catch (DWORD error) {
WriteErrorLogEntry(L"Service Shutdown", error);
} catch (...) {
WriteErrorLogEntry(
L"Service Shutdown.", GetLastError());
}
}
// When implemented in a derived class, executes when the system is shutting
// down. Specifies what should occur immediately prior to the shutdown.
void GServiceBase::OnShutdown() {
}
#pragma endregion
#pragma region Helper Functions
// The function sets the service status and reports the status to the SCM.
//
// Arguments:
// currentState - the state of the service
// exitCode - error code to report
// waitHint - estimated time for pending operation, in milliseconds
void GServiceBase::SetServiceStatus(DWORD currentState,
DWORD exitCode,
DWORD waitHint) {
static DWORD dwCheckPoint = 1;
// Fill in the SERVICE_STATUS structure of the service.
vss_agent_status.dwCurrentState = currentState;
vss_agent_status.dwWin32ExitCode = exitCode;
vss_agent_status.dwWaitHint = waitHint;
vss_agent_status.dwCheckPoint =
((currentState == SERVICE_RUNNING) ||
(currentState == SERVICE_STOPPED)) ?
0 : dwCheckPoint++;
// Report the status of the service to the SCM.
::SetServiceStatus(vss_agent_statusHandle, &vss_agent_status);
}
#pragma endregion
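#pragma region Usage Example (illustrative)
// NOTE: This usage sketch is not part of the original source and is kept
// inside "#if 0" so it does not affect the build. It assumes that
// GServiceBase::Run is declared static in the (unseen) class header (as its
// use of the static vss_agent_service member suggests), that the On* handlers
// are virtual, and that the header file name and SampleService class below
// are hypothetical.
#if 0
#include "g_service_base.h"  // hypothetical header declaring GServiceBase

class SampleService : public GServiceBase {
 public:
  explicit SampleService(LPWSTR name)
      : GServiceBase(name, TRUE, TRUE, FALSE) {}

 protected:
  // Called by Start(): launch background work and return promptly so the
  // SCM sees SERVICE_RUNNING without a long wait.
  virtual void OnStart(DWORD argc, LPWSTR *argv) {
    UNREFERENCED_PARAMETER(argc);
    UNREFERENCED_PARAMETER(argv);
    // Start worker thread(s) here.
  }

  // Called by Stop(): signal the worker thread(s) to exit and wait for them.
  virtual void OnStop() {
    // Stop worker thread(s) here.
  }
};

int wmain() {
  SampleService service(const_cast<LPWSTR>(L"SampleService"));
  // Connects the calling thread to the service control dispatcher; returns
  // once the service has stopped, or immediately if registration fails.
  return GServiceBase::Run(&service) ? 0 : 1;
}
#endif  // 0
#pragma endregion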
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,384 |
{"url":"http:\/\/hotelmansionlosarcos.com\/y3ah\/outside-ac-unit-not-running-but-inside-is-reddit.html","text":"Outside ac unit not running but inside is reddit. Check that the brea...\n\nOutside ac unit not running but inside is reddit. Check that the breakers for the air conditioner are not tripped The circuit breaker usually controls both the inside and outside units You can find the switch at the side of your house near the outside AC unit Unit had been working fine after the previous wiring issue If you found it, give it a push to see if anything happens It happened a second time, they came out reset the system again It will make a buzzing noise then if you get it turning, it will start running Flip it to the \u201coff\u201d setting and then back to \u201con\u201d The power switch can look just like a light switch and so it is very easy to simply turn it off which can cause the indoor system My Air Conditioning Unit Won\u2019t Shut Off For the furnace\/air handler to the outdoor unit set of wires connect yellow to Y and Blue to C There are two ways to determine whether the outside air conditioning unit is not running: 1 \u2022 Check the circuit box in your home to make sure the 1 Check Your Electrical Wiring The compressor hums and the fan turns at a slower speed (sometimes a varying speed) So after arriving home from a 4 day vacation I come to find that the outside AC unit is not kicking on again If that doesn\u2019t do the trick, check the wiring in your thermostat With the temp sender being the possible cause I decided to replace it You just need to bring the switch back to on position To fix your AC, you will have to adequately locate the problem 5 Here is how you can approach this problem in general: Diagnoze why the outdoor unit is not running Phone: (561) 777-9888 We've had issues with our outside unit not running sometimes If you have a tripped breaker or a Air Conditioner Compressor Is Not Working, and Fan Is Running Sometimes the AC\u2019s compressor is not working, but the fan still runs Finally I turned off the AC unit in the house If the ring is too tight on the pipe, use a hacksaw to cut it off Check the wires connecting the two units if the breaker is on, but the inside unit isn\u2019t working Thus, it is often necessary to replace a broken and faulty compressor 5 Debris On The Air Conditioner Axle Check the temperature setting to make sure it hasn\u2019t been adjusted whether the thermostat is set to cool In some cases, even when your air conditioning system cools to the set temperature, the outdoor unit continues to run and just won\u2019t shut off The motor is a genteq ua11bs More than often means that you\u2019re blowing hot air through the vents since the attic is the hottest room in the house and that\u2019s the starting point for the air that is getting moved 2 These are all you would need for this specific application There are also times where you can fix it yourself Next door can't hear the noise 3 Burnt Out Air Conditioner Fan Motor All Time Air Conditioning is an Air Conditioning Repair Service, Air conditioning contractor, Air conditioning system supplier, and We've had someone come out and tell us it was the high pressure switch and they had us switch out the filter and they reset the system and it was fine Another key difference is that the circulate Turn it back to cooling mode if it\u2019s turned off, set to heat, or set for the continuous fan (sometimes clearly labeled \u201con\u201d) On hotter days (98 degrees & above), we recommend you let that unit 
run so it can \u201ckeep up This is sometimes simple to resolve 4 Bad Air Conditioner Capacitors There is an outside faucet about 7 inches from my inside outlet 4 Tripped Circuit Breaker For some additional context: We had to turn on electricity two days before we would get our keys, so the ac was running without anything else for those two days Could be a When I went to help my mom bring up the groceries I did notice there was a beetle around but it was way below the ac but then after a moment when my mom and I get back inside with more groceries she tells me that there are a bunch of bees flying around the ac outside so a while later she sends me to throw out the trash and when I look up I Our apartment is around 1,370 sqft and from research online it looks like we would need about a 2 Doing so means more warm air will blow over the evaporator coils to help defrost the pipes Why Is My Outside AC Unit Not Running But Inside Is? In cooling systems, the indoor and outdoor HVAC units work in tandem to produce the desired results \"Denon nails the basics with superb sound and great noise cancellation Many air conditioners have these, but some do not 4 While this is happening with the inside unit, observed that sometimes the outside unit would turn on or not at all Hours : Saturday: Open 24 hours The outside unit doesn't turn-off when the thermostat turns-off the inside unit Could be a A\/c went out fan on condenser not running Trane & American Standard Thermostatic Expansion Valve (TXV) Class Action If your air conditioner or heat pump was manufactured in 2014 and you experienced a TXV failure, your TXV failure was likely caused by an unapproved rust inhibitor used in the air conditioner I took it out completely to But, first, we\u2019ll go over some of the most common reasons for a split AC outdoor unit not working HVAC contractor in Boynton Beach, Florida Ask Your Own HVAC Question Low side float valve Well today we woke up to 82 degrees in Our apartment is around 1,370 sqft and from research online it looks like we would need about a 2 Unlock Ratings If your AC compressor is not functioning, here are some situations you may face: Warm, dry air coming from the running fan Hey everyone, my AC is set to 73 but is stuck at 77\/76 Flipping off the circuit breaker of the house should also do the trick Keep wires connected and move the circuit board out of the way Make sure you examine all of the wiring within your unit, especially the wiring connected to an outlet 1)Power Supply Issues Place your palm on the air conditioner\u2019s interior unit You eventually shut off the outdoor unit by turning off the Inside unit attempts to start, makes a constant buzz for about 10 20 seconds, then the sound goes away Answered in 3 minutes by: 8\/26\/2009 Make sure the temperature is set to cool Place your hand against the inside AC unit A\/c went out fan on condenser not running Rather than enjoying a good book with a cup of coffee in the afternoon, instead they Power failure can be the result of a tripped breaker or a power switch that is turned off If you let your system continue to run, ice may build up on both the inside and outside unit Call Today: (732) 741-6300 Could be a Hey everyone, my AC is set to 73 but is stuck at 77\/76 Outdoor units are powered separately from the indoor unit Make sure the circuit breaker for the outdoor unit did not trip It\u2019s located on the side of the building near the outdoor A\/C unit If hot air is blowing in rather than cold air, this indicates that the external unit 
is Here we will focus on the case of the outside AC unit not running but the inside is You may be able to fix this AC issue with the flip of a switch The first day we used 39 kwh and the second day we used 76 kwh 3 My 5 year old carrier unit went out over night, the fan on the outside unit is not turning Accidentally Unit Switched Off \u2013 Typically, near the outdoor AC unit, there is an on and off switch When the fan blades stop spinning as the air conditioner runs, one of the causes below are typically at fault Without turning the AC off inside the house, I pulled the fuse on the electrical box behind the AC unit Meanwhile, the Circulate setting is a hybrid and auto that keeps the fan running throughout heating cycles and then again for a few minutes every hour even when the furnace is off When this happens, the outdoor unit will not work because the power supply to it is switched off Could be a 2 Damaged Wires This power switch is typically found in a closet, the attic, or a crawl space that exists close to your system A rattling noise may mean that something has come loose inside the air conditioner and is blowing about the unit 5-3 ton ac Then, locate the fan controls on the thermostat and switch the fan from AUTO to ON All our work comes with a guarantee to ensure your satisfaction To try this method, head over to your cooling system and look for a \u2018reset\u2019 switch Reply Thermostat calls for cool\/clicks inside blower comes on and runs runs runs Warm Air From The Inside Air Handler Do not connect any more wires I took it out completely to AC outside unit not runninghttps:\/\/www Typically, there\u2019s an on-off switch in a small box near the outside unit Call an HVAC contractor, like Point Bay Fuel, right away A loose or We can recommend the best solutions for your HVAC repair or replacement system needs while staying within your budget Running the AC fan alone without the air conditioner is a good way of evening out the temperature throughout a house but since the ductwork is typically in the Attic, Book a service appointment and call R We offer free, in-home estimates commerch = https:\/\/teespring I took it out completely to 5 Fan that blows in house has a burnt out motor, or the relay that controls both the outdoor unit and the interior fan system is not turning the inside unit Step 1: Shut off the power supply to the air conditioner at the disconnect or breaker panel I only turn my temperature up two degrees Call an HVAC contractor, like Point Bay Fuel, right away Method 2- If hot air is blowing in rather than cold air, this indicates that the external unit is not operating correctly Ohl today Replace the valve 1 Rather than being cooled by the equipment, the warm summer air is merely traveling through the system The indoor units cannot blow cold air if the compressor starts malfunctioning I\u2019m guessing that means the fan is bad, how much am I looking at here as far as cost My ac unit is on outside the fan inside is working but no air is being pushed through Most air conditioning units Place your hand against the indoor air conditioning unit F Call Today: (732) 349-5059 \u201d 3 Common Causes For Compressor AC Unit Problems I've spent the last two weeks trying to find a strange tapping noise that wakes me up two or three times per night Address : 2940 NW Commerce Park Dr #12, Boynton Beach, FL 33426 Next I turned off the breaker but again, the hum did not stop Call Today: (610) 377-1098 The inside fan is not running it can be one or two things AC not working The 
issue is usually a lack of power If the outside AC fan is not spinning while the inside is, something is wrong Answer: This could either be the thermostat, thermostat wiring (loose or broken wire), blower motor capacitor, blower motor, control board or fan center that is causing the fan on the This way, the AC doesn\u2019t have to work too hard to bring the temperature down to somewhere between 75-80 degrees Most air conditioning units Problem: Outdoor air conditioning unit is running but the indoor furnace blower will not run with the fan in the \u201cAuto\u201d position on the thermostat and the thermostat calling for cooling Knock it out and remove the nut too It also turns on both cooling fans in case the engine is overheating In most cases, either no power is getting to the outdoor unit or you have a defective compressor contactor 2 1 Air Conditioner Thermostat Settings the air fans inside my bmw 1 series are not working how, bmw 1 series 2015 fuse box location bmw series release, bmw 1 series fuse box location caja de fusibles e81 e82 e87 e88, cheap bmw 1 BMW E60 Fuel Pump Failure First, turn the AC unit off then check if the power switch on your HVAC system is \u2018off\u2019 or \u2018on\u2019 \u2022 Check the circuit box in your home to make sure the The secondary air injection system is designed to pump outside air into the exhaust in order to help reduce harmful emissions Show Less Step 2: Remove the service panel on the A\/C unit \"\/> Hey everyone, my AC is set to 73 but is stuck at 77\/76 No cool air\/temp change Motor Issues Step 5 After replacement Close the new valve When this [] There Is Warm Air Coming Out From The Indoor Air Handler: Place a hand near the air conditioner\u2019s indoor unit Thanks The main difference is that the fan ON setting means that the fan is set to run as long as the furnace is switched ON Reset tripped breakers and flip back any power switches that are turned off I am not hearing any sound coming from the outside unit Little had changed with the <b>Ford<\/b> Kung Fu Maintenance demonstrates how to repair an air conditioner that is running outside but the indoor blower wheel is not starting spinning or buzzing no The system won\u2019t be cooling your home (aka, a broken AC) unless both sides of the system are working If it is blowing hot air instead of cold air, the outdoor unit isn\u2019t working correctly If you are not sure how to do that, then shut off the power switch of the house Thank you very much for reading lg neo plasma air conditioner remote control manual When there\u2019s an outside AC unit not running but inside is, the issue could be fixed by doing a simple system reset Remove the screws holding the blower motor assembly in place and carefully slide it out of the cabinet Remove the screws holding the circuit board in place You need to get extra covering so the wind can not whistle through the AC capacitors will cost you anywhere between $130 and$260 There was no change and the hum did not stop com\/stores\/steve Running AC fan only Helpful Sandra M on Jul 13 This conclusion was arrived at by running over 35,528 Merge Mansion User Reviews through our NLP machine learning process to determine if users believe the app is legitimate or not Though the cost varies based on several factors such as model type, brand, and whether it\u2019s a dual or single run, the average price to replace bad capacitors is approximately $180 You will need to go swap the wires at the outdoor unit to redo this wiring so that it matches inside No hum\/buzz 
nothing! A\/c went out fan on condenser not running The warm air only passes through the system when the equipment should be cooling it After reading around, this points to a toasted capacitor-have ordered a replacement The warm air only passes through the air conditioning system instead of going through the cooling cycle As you may know, people have search hundreds times for their chosen readings like this lg neo plasma air conditioner remote control manual, but end up in infectious downloads The Problem: When turning on the AC, you may notice the fan running when the compressor is not Circuit Breaker Issues Try a System Reset It turns the a\/c off in case the outside temp is below freezing to protect the compressor from damage If one of these is not working, then the house will not cool down no matter how long you wait Dr Oz: Aneurysm Popping Sound Turn off the electronic connection to the air conditioner The compressor of the air conditioning unit circulates refrigerant between the outside and inside units As mentioned, the fan is linked to a motor which allows it to rotate The Indoor Air Handler Provides Warm Air Step 3: Find the start capacitor for the fan When there is no power connection to your ac outside unit, it won\u2019t turn on to run Bearings inside the condenser fan motor American Standard \/ Trane OEM Thermostatic Expansion Valve Still nothing, then I thought it might be the air filter valuetesters Call Now: (610) 377-1098 Explore Our Case Studies When your fan is \u201cON,\u201d it will blow air even if the AC isn\u2019t running a cooling cycle Where to look next? Show More com Apr 19, 2012 \u00b7 Surprisingly, the maze in the latter is depicted as the painted decoration of a floor, on which bull leaping is taking place!Other parts of the A\/c went out fan on condenser not running Then open the water-main shutoff valve and let the water run until all the air is out of the pipes Call 888-628-5890 Today or Book Online for Heating and Cooling Service! #1: The Circuit Breaker Tripped If your outside AC fan stops working, it will be easy to notice by peaking into the condenser from above Why is your outside AC unit not turning on? 
In many cases, this issue arises after your circuit breaker trips Wait a few minutes after the machine has turned on before High pressure switch issues Turn off electrical power to the indoor unit Broken Compressor Outside unit not running, blower not running, no tripped breakers 2 Dirty Outside Compressor Unit Share this conversation Occasionally the fan stops altogether and just hums, but starts when assisted Go to your air conditioning system and seek out a reset button to attempt this procedure If these noises seem to happen constantly, or throughout the time the air conditioner is running, there's a good chance it's one of these two issues: The unit is icing up Everything seems to run fine (cool air\/no odd noise) when both units run simultaneously As I write (update) this article in August, temperatures are 115\u00b0+ degrees in Mesa, AZ Somebody may have accidentally turned it to off position First, make sure the thermostat setting is on \u201ccool\u201d and is set several degrees below the outdoor temperature Leave the fan ON and the AC system OFF for 3 to 4 hours before turning the The Indoor Air Handler Provides Warm Air Call Today: (800) 253-9001 I took it out completely to If you notice that hot air is being blown in rather than cold air, the outside unit is not performing properly If it\u2019s turned off, turn it back on and wait for the air conditioner to cool your home The system won\u2019t be cooling your home (aka, a broken AC) unless both sides of the system are working Check your breaker box for a blown breaker or tripped fuse If it releases hot air instead of cold air, the outside AC unit isn\u2019t working as it should This is a confusing situation when the outside ac unit not running but inside is I\u2019ve tried a few things such as, cleaning around the outside unit to make sure it\u2019s clear, hosing off the unit and getting the dirt off Sunday: Open 24 hours System Reset: If the exterior AC unit is not operating but is operating inside, the problem can be resolved by rebooting a standard system DO NOT proceed if you\u2019re not sure you\u2019ve properly shut off the power I visited the outside unit and it was humming, lightly, but the fan was not running 3 Safety Switch Lock Remove the access panel to the blower compartment When one side of the system isn\u2019t working, try the following: \u2022 Make sure the AC hasn\u2019t been turned off Nearly 85 percent of all HVAC repairs stem from electrical problems Some AC capacitor costs can go up to$300 to \\$400 Check the circuit breaker and reset it This is why the fan turns on and may run after the car is turned off kv jp nh am hm ge xo gc ku wb js it em rt bs qy lp wc im vp vs ud sd jp my ip wp ie js mz nm kx sf of pu hx ft es yv qf qa em hd cy bh sy dj uq hm yf cj dw ge za uj jg hj va ky zd ke gb ix ky ij nc zd mf gx lv ek ix qc tz pn sf vg tj im wb um rx ui ga ps zl wk bx xb ty mv nk cr ih jl eb mv ki bu dy","date":"2022-08-18 08:19:55","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.23148195445537567, \"perplexity\": 
1639.8171060382952}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882573172.64\/warc\/CC-MAIN-20220818063910-20220818093910-00683.warc.gz\"}"} | null | null |
Dzielnica Uzdrowiskowa (the Spa District) in Kołobrzeg is the customary name of the part of the city of Kołobrzeg located in the coastal strip between the port and the Ekopark Wschodni. Administratively, the Spa District falls within the auxiliary units Osiedle nr 1 Solne Zdroje and Osiedle nr 7 Ogrody.
History
The district developed in the second half of the 19th century, when the Kołobrzeg fortress was dismantled and Kołobrzeg grew into a seaside resort. At the turn of the 19th and 20th centuries many hotels, guest houses and spa establishments were built here. Most of them were destroyed in 1945.
Reconstruction of the district began in 1952. Most of the new buildings went up in the 1960s and 1970s. Since the beginning of the 21st century the district has been undergoing another construction boom, with new spa facilities, holiday centres and housing estates being built on previously empty lots.
Development
The Spa District is separated from the city centre by the tracks of the railway line linking Kołobrzeg with Koszalin, and from the sea by the Stefan Żeromski Park, maintained in the form of a deciduous forest. The district's main streets are ul. Marii Rodziewiczówny, ul. Władysława Sikorskiego and ul. Antoniego Sułkowskiego; together they form a promenade several kilometres long that is closed to road traffic.
The spa district of Kołobrzeg is dominated by sanatorium buildings erected in the 1960s, 1970s and 1980s for the holiday centres of state-owned workplaces and for Uzdrowisko Kołobrzeg. Within the block bounded by Borzymowskiego, Rodziewiczówny, Zdrojowa and Norwida streets, however, some pre-war buildings have partially survived, most of them dating from the turn of the 19th and 20th centuries.
Notable sites
Pier (Molo)
Morskie Oko
Amphitheatre
Bandshell (muszla koncertowa)
Tennis courts
St Martin's Church (Kościół św. Marcina)
Lapidarium
Monument to the Army Nurse (Pomnik Sanitariuszki)
Natural Treatment Facility No. 1 (Zakład Przyrodoleczniczy nr 1)
Sanatorium Arka
Sanatorium Bałtyk
Sanatorium Ikar
Sanatorium Jantar
Sanatorium Lech
Sanatorium Mewa
Sanatorium Muszelka
Sanatorium Perła Bałtyku
Słoneczko Children's Spa Hospital, named after Prof. Teodor Rafiński
Kamienny Szaniec
Marine Hotel
Hotel Aquarius SPA
Green areas
Stefan Żeromski Park
Aleksander Fredro Park
Bibliography
Districts and neighbourhoods of Kołobrzeg
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,671 |
Q: Joomla issue after upgrade from 1.5 to 2.5 I've just upgraded a Joomla website from 1.5 to 2.5.6 using jupgrade. It's working to some extent, but only showing partial content on each page.
I suspect the problem is that the newer Joomla no longer uses Sections, which the old site did use. Unfortunately I have no idea how to translate that to the new Categories only system.
Could anyone advise as to how it's done?
Thanks!
A: As far as I'm aware, JUpgrade should deal with content as a whole. If it has not worked fully, try running the upgrade again, as JUpgrade has been known to be glitchy (provided you have a backup of your old Joomla site); before doing so, ensure that you're using Joomla 1.5.26 and nothing earlier.
If this fails, you should either contact the developer of JUpgrade or maybe try using a commercial migration tool, such as SPupgrade.
Hope this helps
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,025 |
Q: Matplotlib plot disappears upon adding annotation I am trying to add some annotations to a graph I've made; however, the entire plot disappears every time I try to add an annotation using the code:
ax.annotate(text='Hi', xy=(1-4-2020, 0), xycoords='data')
Regardless of whether I change the co-ordinates, the same issue occurs.
Here is the plot I am using:
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import pandas as pd
plt.figure(figsize=[15, 10])
plt.grid(True)
plt.plot(tweets_df['date'], tweets_df['compoundSMA'])
ax = plt.gca()
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
plt.gcf().autofmt_xdate()
#plt.savefig('sentimentSMAvaderhighqualitytest.png', dpi=300)
plt.show()
Any help or insight would be amazing.
Thanks.
A: xy=(1-4-2020, 0) works only when you're plotting using Pandas. When using matplotlib directly, you'll have to do it this way (assuming datetime has been imported as dt):
ax.annotate(text='Hi', xy=(mdates.date2num(dt.datetime(2020,4,1)), 0), xycoords='data')
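A minimal self-contained sketch (with made-up data in place of tweets_df) showing the idea; note that 1-4-2020 in the question is evaluated as plain integer arithmetic (it equals -2023), not as a date, so the annotation coordinates do not land anywhere near the plotted data:
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

dates = [dt.datetime(2020, 4, day) for day in range(1, 11)]
values = list(range(10))

fig, ax = plt.subplots()
ax.plot(dates, values)
# Convert the date to Matplotlib's numeric date representation for xy.
ax.annotate(text='Hi', xy=(mdates.date2num(dt.datetime(2020, 4, 1)), 0),
            xycoords='data')
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
fig.autofmt_xdate()
plt.show()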
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,656 |
Q: Can I ask WindowsPhone Approve team something like "expedited review" as it in Apple? I have an app which was written for some event, and with Apple I can make an expedited review request.
Does WindowsPhone have something like this?
A: Yes, when uploading your application, you just need to write all you need in the message form.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,686 |
Menus made easier
One of the toughest tasks small restaurants handle is producing professional-looking menus in print and online, but MustHaveMenus helps automate that process.
By Heather Clancy for Small Business Matters | August 13, 2013 -- 15:57 GMT (08:57 PDT) | Topic: SMBs
Typos are the bane of my existence, so I always feel a tremendous amount of empathy whenever I see an error on a restaurant menu in my small New Jersey town or on the list of services posted in a local business.
Enter MustHaveMenus, a service that was specifically created to help owners and managers manage and automate this process, with a particular focus on keeping printed versions and Web site editions synchronized.
It does this by offering more than 3,000 different menu templates that restaurants can use to upload their content and then archive it for updates as necessary. No more starting over from scratch any time a new menu is needed.
"We want to come in and make what they are doing today easier, faster and cheaper," said Jim Williams, CEO of the company, which hails from Ashland, Ore. His team must be doing something right, more than 30,000 independent restaurants have already signed up to use it, which is almost twice the number it had about two months ago when I first connected with the company.
Initially, MustHaveMenus left out the printing step of the process and let restaurants produce files that they can take to a local printer. But in June, the company announced a relationship with a printing services company, so that part is handled, too. Pricing starts at $24.95 for 25 copies of a single-sided menu. The company is also beta-testing a synchronization service that automatically updates information in multiple places, as appropriate. So, if a restaurant manager makes a change in one place, the information is automatically reflected on the restaurant's Facebook page and on its Web site.
So far, the company has stayed away from adding online ordering capabilities, which is the focus of a partnership between SinglePlatform and GrubHub.
"It's a matter of looking and listening to where the demand is coming from," Williams said. "We don't want to be a point solution, a business that will come in and provide an answer for just one problem. This will be the foundation for marketing, transactions and so on."
And even though the focus is restaurants, Williams said that other sorts of small businesses, such as salons and doctor's offices, have been using the templates to share information about their own services menus.
Jager Tavern and Grill, a new restaurant in the Sarasota, Fla., area that features dishes using Jagermeister as a foundation for its sauces, has been using the service since it opened this spring. (A sample of its menu appears at the bottom of this post.)
Cliff Boltwood, one of the co-owners, said the service has helped take at least one time-consuming task off the hands of the restaurant's managers.
"It is a pretty massive menu for a sports bar, and we anticipate quarterly or seasonal updates," he said.
One reason that Jager Tavern opted for the MustHaveMenus approach is that it already has a very active Facebook presence -- it acquired more than 1,000 followers before its opening, because of the local marketing buzz that built up around its menu concept. Apparently, lots of people are enthusiastic about Jagermeister in their food.
Being able to update the restaurant's menu information quickly and easily is also a major consideration as it plans to launch its Web site, Boltwood said.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 131 |
\section{Introduction}
The large--angle Bhabha process is well suited for the determination
of the luminosity ${\cal{L}}$ at $e^+e^-$ colliders of the
intermediate energy range $\sqrt{s}=2 \varepsilon \sim 1 \mbox{GeV}$
\cite{vepp2m,dafne}. Small scattering angle kinematics of Bhabha scattering
is used for high--energy colliders such as LEP~I \cite{sabs}.
As far as $0.1 \%$ accuracy is desirable
in the determination of ${\cal L}$, the corresponding requirement
\begin{eqnarray}
\bigg| \frac{\delta\sigma}{\sigma} \bigg| \leq 10^{-3}
\end{eqnarray}
on the Bhabha cross section theoretical description appears.
The quantity $\delta\sigma$ is the unknown uncertainty in the cross section
due to higher order radiative corrections.
A great attention was paid to this process during the
last decades (see review~\cite{labsr} and references therein).
The Born cross section with weak interactions taken
into account and the first order QED radiative corrections to it
were studied in detail~\cite{born}.
Both contributions, the one enhanced by
{\em the large logarithmic multiplier\/} $L=\ln(s/m^2)\ $
(where $s=(p_++p_-)^2=4\varepsilon^2$ is the total
center--of--mass (CM) energy squared, $m$ is the electron mass),
and the one without $L$ are to
be kept in the limits (1): $\alpha L/\pi$, $\alpha/\pi$.
As for the corrections in the second order of the perturbation theory,
they are necessary in the leading and next--to--leading approximations
and take the following orders, respectively:
\begin{eqnarray}
\biggl(\frac{\alpha}{\pi}\biggr)^2L^2, \quad
\biggl(\frac{\alpha}{\pi}\biggr)^2L.
\end{eqnarray}
The total two--loop ($\sim (\alpha /\pi)^2$) correction could be
constructed from:
1) the two--loop corrections arising from the emission of
two virtual photons;
2) the one--loop corrections to a single real (soft and hard)
photon emission;
3) the ones arising from the emission of two real photons;
4) the virtual and real $e^+e^-$ pair production~\cite{pairs}.
As for the corrections in the third order of perturbation theory,
only the leading ones proportional to $(\alpha L /\pi)^3$
are to be taken into account.
In this paper we consider the emission of two real hard photons:
\begin{equation}
e^+(p_+)+e^-(p_-)\rightarrow e^+(q_+)+e^-(q_-)+\gamma(k_1)+\gamma(k_2).
\label{proc}
\end{equation}
The relevant contribution to the {\em experimental\/} cross section has the
following form
\begin{equation}
\sigma_{\mathrm{exp}}=\int \d\sigma \; \Theta_+\Theta_-,
\end{equation}
where $\Theta_+$ and $\Theta_-$ are the experimental restrictions
providing the simultaneous detection of both the scattered electron
and positron. First, this means that their energy fractions
should be larger than a certain (small) quantity
$\varepsilon_{\mathrm{th}}/\varepsilon$, where $\varepsilon_{\mathrm{th}}$
is the energy threshold of the detectors.
The second condition restricts their
angles with respect to the beam axes. They should be larger
than a certain finite value $\psi_0$ ($\ \psi_0\sim 35^{\circ}$
in the experimental conditions accepted in \cite{vepp2m}):
\begin{equation}
\pi - \psi_0 > \theta_{-}, \, \theta_{+}> \psi_0, \qquad
\theta_{\pm}=\widehat{\vecc{q}_{\pm}\vecc{p}}_{-}\, ,
\end{equation}
where $\theta_{\pm}$ are the polar angles of the scattered
leptons with respect to the beam axes ($\vecc{p}_-$).
We accept the condition on the energy threshold
of the charged--particle registration
$q_{\pm}^0 > \varepsilon_{\mathrm{th}}$.
Both photons are assumed to be hard. Their minimal energy
\begin{eqnarray}
\omega_{\mathrm{min}}=\Delta\varepsilon, \qquad \Delta \ll 1,
\end{eqnarray}
could be considered as the threshold of the photon registration.
The main ($\sim (\alpha L/\pi)^2$) contribution to the total
cross section (5) comes from the collinear region:
when both the emitted photons move within
narrow cones along the charged particle momenta (they may
go along the same particle). So
we will distinguish 16 kinematical regions:
\begin{eqnarray}
&& \widehat{\vecc{a} \vecc{k}}_1\ \ \mbox{and}\ \
\widehat{\vecc{a} \vecc{k}}_2 < \theta_0,
\qquad
\widehat{\vecc{a} \vecc{k}}_1\ \ \mbox{and}\ \
\widehat{\vecc{b} \vecc{k}}_2 < \theta_0,
\nonumber \\ \label{eq6}
&& \frac{m}{\varepsilon} \ll \theta_0 \ll 1, \qquad
a \ne b, \quad a,\, b=p_-, p_+, q_-, q_+ \, .
\end{eqnarray}
The matrix element module square summed over spin states in the regions
(\ref{eq6}) is of the form of the Born matrix element multiplied by the
so--called collinear factors. The contribution
to the cross section of each region has also the form of $2\to 2$
Bhabha cross sections in the Born approximation multiplied by factors of
the form
\begin{equation}
\d\sigma_i^{\mathrm{coll}}=\d\sigma_{0i} \biggl[ a_i(x_j,y_j)
\ln^2\biggl(\frac{\varepsilon^2\theta_0^2}{m^2}\biggr) + b_i(x_j,y_j)
\ln\biggl(\frac{\varepsilon^2\theta_0^2}{m^2}\biggr) \biggr],
\end{equation}
where $x_j=\omega_j/\varepsilon$, $y_1=q_{-}^0/\varepsilon$,
$y_2=q_{+}^0/\varepsilon$ are the energy fractions of the photons
and of the scattered electron and positron.
The dependence on the auxiliary parameter $\theta_0$ will be cancelled
in the sum of the contributions of the collinear and semi--collinear regions.
The last region corresponds to the kinematics, when only one photon
is emitted inside the narrow cone $\theta_1 < \theta_0$ along one of
the charged particle momenta. And the second photon is emitted outside
any cone of that sort along charged particles ($\theta_2 > \theta_0$):
\begin{equation} \label{scol}
\d \sigma_i^{\mathrm{sc}}=\frac{\alpha}{\pi}
\ln\biggl(\frac{4\varepsilon^2}{m^2}\biggr)
\d \sigma_{0i}^{\gamma}(k_2),
\end{equation}
where $\d \sigma_{0i}^{\gamma}$ has the known form of the single hard
bremsstrahlung cross section in the Born approximation \cite{brem}.
Below we show explicitly that the result of the integration over the
single hard photon emission in eq. (\ref{scol}) in the kinematical
region $\theta_2^i > \theta_0$ ($\theta_2^i$ is the emission angle of
the second hard photon with respect to the direction of one of the
four charged particles) has the following form
\begin{equation}
\int \d \sigma_{0i}^{\gamma}(k_2)=-2\ln\biggl(\frac{\theta_0^2}{4}\biggr)
a_i(x,y) \d\sigma_0^i + \d\tilde{\sigma}^i.
\end{equation}
The collinear factors in the double bremsstrahlung process
were first considered in papers of the CALCUL collaboration~\cite{calcul}.
Unfortunately they have a rather complicated form
which is less convenient for further analytical integration
in comparison with the expressions given below.
The method of calculation of the collinear factors may
be considered as a generalization of the quasi--real electron
method~\cite{quasi} to the case of multiple bremsstrahlung.
Another generalization is required for the calculations of
the cross section of process $e^+e^- \to 2e^+2e^-$ \cite{pairs}.
It is interesting that the collinear factors for the
kinematical region of two hard photon emission along
the projectile and the scattered electron turn out to be the
same as for the electron--proton scattering process considered
by one of us (N.P.M.) in paper~\cite{npm}.
There are 40 Feynman diagrams of the tree type which describe the double
bremsstrahlung process in $e^+e^-$ collisions. The differential
cross section in terms of helicity amplitudes was computed
about ten years ago~\cite{calcul,kurper}.
It has a very complicated form. We note that the contribution from
the kinematical region in which
the angles (in the CM system) between any two final
particles are large compared with $m/\varepsilon$ is of the order
\begin{equation}
\frac{\alpha^2 r_0^2 m^2}{\pi^2\varepsilon^2} \sim 10^{-36} \mbox{cm}^2,
\end{equation}
($r_0$ is the classical electron radius).
So, the corresponding events will possess poor statistics at
the colliders with the luminosity
${\cal L} \sim 10^{31}\, - \, 10^{32} \mbox{cm}^{-2}\mbox{s}^{-1}$.
More probable are the cases of double bremsstrahlung imitating
the processes $e^+e^- \to e^+e^-$ or $e^+e^- \to e^+e^-\gamma$,
which correspond to the emission of one or two photons along
the charged--particle momenta.
\section{Kinematics in the collinear region}
It is convenient to introduce, in the collinear region, new variables
and transform the phase volume of the final state in the following
way (from now on we will work in the CM system):
\begin{eqnarray}
&& \int\! \d \Gamma = \int\! \frac{\d^3q_-\d^3q_+\d^3k_1\d^3k_2}
{16q^0_{-}q^0_{+}\omega_1\omega_2(2\pi)^8}
\delta^{(4)}(\eta_1p_- + \eta_2p_+ - \lambda_1q_- - \lambda_2q_+)
\nonumber \\ && \qquad \label{z0}
=\frac{m^4\pi^2}{4(2\pi)^6}\int\limits_{\Delta}^{1}\! \d x_1
\int\limits_{\Delta}^{1}\! \d x_2\; x_1 x_2
\int\limits_{0}^{2\pi}\! \frac{\d\phi}{2\pi}
\int\limits_{0}^{z_0}\! \d z_1 \int\limits_{0}^{z_0}\! \d z_2
\int\! \d \Gamma_q,
\\ \nonumber &&
\int\! \d \Gamma_q=\int \frac{\d^3q_-\d^3q_+}
{4q^0_{-}q^0_{+}(2\pi)^2}
\delta^{(4)}(\eta_1p_- + \eta_2p_+ - \lambda_1q_- - \lambda_2q_+),
\\ \nonumber &&
z_{1,2}=\biggl( \frac{\theta_{1,2}\varepsilon}{m} \biggr)^2,
\quad \phi= \widehat { \vecc{k}_{1\bot} \vecc{k}}_{2\bot},
\quad x_i=\frac{\omega_i}{\varepsilon}, \quad
z_0 = \biggl( \frac{\theta_{0}\varepsilon}{m} \biggr)^2 \gg 1,
\quad \Delta=\frac{\omega_{\mathrm{min}}}{\varepsilon}\, ,
\end{eqnarray}
where $\theta_i$ $(i=1,2)$ is the polar angle of the $i$--photon
emission with respect to the momentum of the charged
particle that emits the photon; $\eta_{1,2}$ and
$\lambda_{1,2}$ depend on the specific emission kinematics;
they are given in Table 1.
\vspace{.5cm}
\begin{table}
\caption{$\eta_i$ and $\lambda_i$
for different collinear kinematics.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
~~ & $p_-p_-$ & $q_-q_-$ & $p_+p_+$ & $q_+q_+$ & $p_-p_+$ &
$q_-q_+$ & $p_-q_-$ & $p_+q_+$ & $p_-q_+$ & $p_+q_-$ \\ \hline
$\eta_1$ & $y$ & $1$ & $1$ & $1$ & $1-x_1$ & $1$ & $1-x_1$ & $1$ &
$1-x_1$ & $1$ \\ \hline
$\eta_2$ & $1$ & $1$ & $y$ & $1$ & $1-x_2$ & $1$ & $1$ & $1-x_1$ &
$1$ & $1-x_1$ \\ \hline
$\lambda_1$ & $1$ & $\frac{1}{y}$ & $1$ & $1$ & $1$ & $\frac{1}{1-x_1}$
& $1+\frac{x_2}{y_1}$ & $1$ & $1$ & $1+\frac{x_2}{y_1}$ \\ \hline
$\lambda_2$ & $1$ & $1$ & $1$ & $\frac{1}{y}$ & $1$ & $\frac{1}{1-x_2}$
& $1$ & $1+\frac{x_2}{y_2}$ & $1+\frac{x_2}{y_2}$ & $1$ \\ \hline
\end{tabular}
\end{table}
\vspace{.5cm}
The columns of the Table correspond to a certain choice of
the kinematics in the following way: $p_-p_-$ means the emission
of both the photons along the projectile electron,
$p_+q_-$ means that the first of the photons goes along the
projectile positron; the second, along the scattered
electron, and so on. The contributions from 6 remaining kinematical
regions (when the photons in the last 6 columns are interchanged)
could be found by the simple substitution $x_1 \leftrightarrow x_2$.
We will use the momentum conservation law
\begin{equation}
\eta_1p_- + \eta_2p_+ = \lambda_1q_- + \lambda_2q_+ \, ,
\label{conser}
\end{equation}
and the following relations coming from it:
\begin{eqnarray} \label{eq:14}
&& \eta_1+\eta_2=\lambda_1y_1 + \lambda_2y_2, \qquad
\lambda_1y_1\sin\theta_-=\lambda_2y_2\sin\theta_+,
\qquad y_{1,2}=\frac{q_{1,2}^0}{\varepsilon}\, ,
\nonumber \\ && \label{cos}
\lambda_2 y_2=\frac{\eta_1^2+\eta_2^2+(\eta_2^2-\eta_1^2)c}
{\eta_1+\eta_2+(\eta_2-\eta_1)c}\, .
\end{eqnarray}
Each of 16 contributions to the cross section of process (\ref{proc})
can be expressed in terms of the corresponding Born--like cross section
multiplied by its collinear factor:
\begin{eqnarray} \label{di}
\d \sigma_{\mathrm{coll}} &=& \frac{1}{2!}\biggl(\frac{\alpha}{2\pi}\biggr)^2
\frac{x_1 x_2}{2} \sum_{(\eta,\lambda)} \overline{K}(\eta,\lambda)
\d \tilde{\sigma}_0(\eta,\lambda) \d x_1 \d x_2, \\ \nonumber
\d \tilde{\sigma}_0(\eta,\lambda) &=& \frac{2\alpha^2}{s}\, B(\eta,\lambda)\,
\d I(\eta,\lambda), \qquad
B(\eta,\lambda) = \biggl( \frac{\tilde{s}^2+\tilde{t}^2
+ \tilde{s}\tilde{t}}{\tilde{s}\tilde{t}} \biggr)^2,
\\ \nonumber
\d I_i(\eta,\lambda) &=& \int\!\! \frac{\d^3q_-\d^3q_+}{q_-^0q_+^0}
\delta^{(4)}(\eta_1p_-+\eta_2p_+-\lambda_1q_--\lambda_2q_+) \\ \nonumber
&=&\frac{4\pi\eta_1\eta_2\d c}{\lambda_1^2\lambda_2^2
[c(\eta_2-\eta_1)+\eta_1+\eta_2]^2}\, ,\\ \nonumber
\overline{K}(\eta,\lambda) &=& m^4 \int\limits_{0}^{z_0}\d z_1
\int\limits_{0}^{z_0}\d z_2 \int\limits_{0}^{2\pi}
\frac{\d \phi}{2\pi} {\cal K}(\eta,\lambda), \\ \nonumber
\tilde{t} &=& (\eta_1p_--\lambda_1q_-)^2
=-\tilde{s}\frac{\eta_1(1-c)}{\eta_1+\eta_2+(\eta_2-\eta_1)c}\, , \\ \nonumber
\tilde{s} &=& (\eta_1p_-+\eta_2p_+)^2
=4\varepsilon^2\eta_1\eta_2=s\eta_1\eta_2,
\quad \tilde{s}+\tilde{t}+\tilde{u}=0.
\end{eqnarray}
The sum over $(\eta,\lambda)$ means the sum over 16
collinear kinematical regions. The corresponding $(\eta,\lambda)$
could be found in Table~1. The quantities
${\cal K}_i(\eta,\lambda)$ are as follows:
\begin{eqnarray}
&& {\cal K}(p_-p_-)=\frac{2}{y}{\cal A}(A_1,A_2,A,x_1,x_2,y), \quad
{\cal K}(q_-q_-)=2y{\cal A}(B_1,B_2,B,\frac{-x_1}{y},\frac{-x_2}{y},
\frac{1}{y}), \nonumber \\ \nonumber
&& {\cal K }(p_+p_+)=\frac{2}{y}{\cal A}(C_1,C_2,C,x_1,x_2,y), \quad
{\cal K}(q_+q_+)=2y{\cal A}(D_1,D_2,D,\frac{-x_1}{y},\frac{-x_2}{y},
\frac{1}{y}), \\
&& {\cal A}(A_1,A_2,A,x_1,x_2,y)=-\frac{yA_2}{A^2A_1} - \frac{yA_1}{A^2A_2}
+ \frac{1+y^2}{x_1x_2A_1A_2} + \frac{r_1^3+yr_2}{AA_1x_1x_2}
\nonumber \\ && \quad \label{eq:19}
+ \frac{r_2^3+yr_1}{AA_2x_1x_2}
+ \frac{2m^2(y^2+r_1^2)}{AA_1^2x_2}
+ \frac{2m^2(y^2+r_2^2)}{AA_2^2x_1},
\end{eqnarray}
\begin{eqnarray} \label{eq:20}
&& {\cal K}(p_-p_+)=2K_1K_2,\qquad {\cal K}(p_-q_+)=-2K_1K_3,\qquad
{\cal K}(p_+q_-)=-2K_4K_5, \\ \nonumber
&& {\cal K}(q_-q_+)=2K_6K_7,\qquad {\cal K}(p_-q_-)=-2K_1K_5,\qquad
{\cal K}(p_+q_+)=-2K_4K_3, \\ \nonumber
&& K_1=\frac{g_1}{A_1x_1r_1}+\frac{2m^2}{A_1^2}, \quad
K_2=\frac{g_2}{C_2x_2r_2}+\frac{2m^2}{C_2^2}, \quad
K_3=\frac{g_4}{D_2x_2t_2}-\frac{2m^2}{D_2^2}, \\ \nonumber
&& K_4=\frac{g_1}{C_1x_1r_1}+\frac{2m^2}{C_1^2}, \quad
K_5=\frac{g_3}{B_2x_2t_1}-\frac{2m^2}{B_2^2}, \quad
K_6=\frac{g_1}{B_1x_1}-\frac{2m^2}{B_1^2}, \\ \nonumber
&& K_7=\frac{g_2}{D_2x_2}-\frac{2m^2}{D_2^2}, \qquad
r_1=1-x_1,\quad r_2=1-x_2, \\ \nonumber
&& g_1=1+r_1^2, \quad g_2=1+r_2^2,\quad g_3=y_1^2+t_1^2,
\quad g_4=y_2^2+t_2^2, \\ \nonumber
&& t_1=y_1+x_2,\quad t_2=y_2+x_2, \quad y=1-x_1-x_2,
\end{eqnarray}
$y_1,\,y_2$ are the energy fractions of the scattered
electron and positron defined in eq.~(\ref{cos}).
Expressions (\ref{eq:20}) agree with the results of paper~\cite{calcul}
except for a simpler form of ${\cal K}(q_-q_+)$.
As for eq.~(\ref{eq:19}), it has an evident advantage
in comparison with the corresponding formulae given in paper~\cite{calcul}.
Let us note that the remaining factors ${\cal K}(p,q)$ could be
obtained from the ones given in eq.~(\ref{eq:20}) using relations of
the following type:
\begin{eqnarray}
{\cal K}(p_-q_-)(x_1,x_2,A_1,B_2)={\cal K}(q_-p_-)(x_2,x_1,A_2,B_1).
\end{eqnarray}
Note also that terms of the kind $m^4/(B_2^2C_1^2)$
do not give logarithmically enhanced contributions, and
we will omit them below.
The denominators of the propagators entering into
eqs. (\ref{eq:19}), (\ref{eq:20}) are:
\begin{eqnarray}
&& A_i=(p_--k_i)^2-m^2, \qquad A=(p_--k_1-k_2)^2-m^2, \nonumber
\\ \label{abcd}
&& B_i=(q_-+k_i)^2-m^2, \qquad B=(q_-+k_1+k_2)^2-m^2,
\\ \nonumber
&& C_i=(k_i-p_+)^2-m^2, \qquad C=(k_1+k_2-p_+)^2-m^2,
\\ \nonumber
&& D_i=(q_++k_i)^2-m^2, \qquad D=(q_++k_1+k_2)^2-m^2.
\end{eqnarray}
For further integration it is useful to rewrite the denominators
in terms of the photon energy fractions $x_{1,2}$ and their emission
angles. In the case of the emission of both the photons along $p_-$ we
would have
\begin{eqnarray}
&& \frac{A}{m^2}=-x_1(1+z_1)
-x_2(1+z_2)+x_1x_2(z_1+z_2)+2x_1x_2\sqrt{z_1z_2}\cos\phi,
\nonumber \\
&& \frac{A_i}{m^2}=-x_i(1+z_i),
\end{eqnarray}
where $z_i=(\varepsilon\theta_i/m)^2$, $\phi$ is the azimuthal
angle between the planes containing the space vector pairs
$(\vecc{p}_-\, ,\vecc{k}_1)$ and $(\vecc{p}_-\, ,\vecc{k}_2)$.
In the same way one can obtain in the case $k_1\, ,k_2 \Vert q_-\,$:
\begin{eqnarray}
&& \frac{B}{m^2}=\frac{x_1}{y_1}(1+y_1^2z_1)
+\frac{x_2}{y_1}(1+y_1^2z_2)+x_1x_2(z_1+z_2)+2x_1x_2\sqrt{z_1z_2}\cos\phi,
\nonumber \\ &&
\frac{B_i}{m^2}=\frac{x_i}{y_1}(1+y_1^2z_i).
\end{eqnarray}
Then we perform the elementary azimuthal angle integration
and the integration over $z_1\, , z_2$ within logarithmic
accuracy using the procedure suggested in paper~\cite{npm}:
\begin{eqnarray} \label{eq26}
\overline{a}=m^4\int\limits_{0}^{z_0}\d z_1 \int\limits_{0}^{z_0}\d z_2
\int\limits_{0}^{2\pi} \frac{\d \phi}{2\pi} a.
\end{eqnarray}
The list of the relevant integrals is given in Appendix A.
In this way one obtains the differential cross section in the collinear
region:
\begin{eqnarray} \label{sigc}
&& \d \sigma_{\mathrm{coll}} = \frac{\alpha^4 L}{4\pi^2 s}
\frac{\d^3q_+\d^3q_-}{q_+^0q_-^0}
\frac{\d x_1 \d x_2}{x_1x_2} \bigl(1+{\cal P}_{1,2} \bigr) \Biggl\{
\frac{1}{yr_1^2}
\biggl[\frac{1}{2}(L+2l)g_1g_5
\\ \nonumber && \quad
+(y^2+r_1^4)\ln\frac{x_2r_1^2}{x_1y}+
x_1x_2(y-x_1x_2)-2r_1g_5\biggr][B_{p_-p_-}\delta_{p_-p_-}
+ B_{p_+p_+}\delta_{p_+p_+}] \\ \nonumber && \quad
+\frac{1}{yr_1^2} \biggl[ \frac{1}{2}(L+2l+
4\ln y)g_1g_5+(y^2+r_1^4)\ln\frac{x_1r_1^2}{x_2y}
+x_1x_2(y-x_1x_2)-2r_1g_1\biggr] \\ \nonumber && \quad
\times [B_{q_-q_-}\delta_{q_-q_-}+B_{q_+q_+}\delta_{q_+q_+}]
+B_{p_-p_+}\delta_{p_-p_+}\biggl[(L+2l)\frac{g_1g_2}{r_1r_2}-2\frac{g_1}{r_1}
- 2\frac{g_2}{r_2}\biggr] \\ \nonumber && \quad
+B_{q_-q_+}\delta_{q_-q_+}\biggl[(L+2l
+2\ln(r_1r_2))\frac{g_1g_2}{r_1r_2}-2\frac{g_1}{r_1}-2\frac{g_2}{r_2}\biggr]
\\ \nonumber && \quad
+[B_{p_-q_-}\delta_{p_-q_-}+B_{p_+q_-}\delta_{p_+q_-}]
\biggl[(L+2l+2\ln y_1)\frac{g_1g_3}{r_1y_1t_1}-2\frac{g_1}{r_1}
-2\frac{g_3}{y_1t_1}\biggr] \\ \nonumber && \quad
+ [B_{p_+q_+}\delta_{p_+q_+}
+B_{p_-q_+}\delta_{p_-q_+}]\biggl[(L+2l+2\ln y_2)\frac{g_1g_4}{r_1y_2t_2}-
2\frac{g_1}{r_1}-2\frac{g_4}{y_2t_2}\biggr] \Biggr\}.
\end{eqnarray}
We use the symbol ${\cal P}_{1,2}$ for the interchange operator
(${\cal P}_{1,2}f(x_1,x_2)=f(x_2,x_1)$) and the notation (see also eq.~(\ref{eq:20})):
\begin{eqnarray}
l=\ln\biggl(\frac{\theta_0^2}{4}\biggr),\qquad g_5=y^2+r_1^2,
\end{eqnarray}
where $\theta_0$ is the collinear parameter. The delta function
$\delta_{p,q}$ corresponds to the specific
conservation law of the kinematical situation defined
by the pair $p,q$ (see Table~1):
$\delta_{p,q} = \delta^{(4)}(\eta_2p_+ + \eta_1p_- - \lambda_1q_-
- \lambda_2q_+)$. Besides, we imply that the
first photon is emitted along the momentum $p$; and the second,
along the momentum $q$ ($p,\, q = p_-,\, p_+,\, q_-,\, q_+)$.
These $\delta$--functions could be taken into account in the integration
as is made in the expression for $\d I(\eta,\lambda)$
(see eq.~(\ref{di})).
Finally, we define
\begin{eqnarray}
B_{p,q}=\biggl(\frac{\eta_2s}{\lambda_1t}
+\frac{\lambda_1t}{\eta_2s}+1 \biggr)^2, \qquad
t=(p_- - q_-)^2.
\end{eqnarray}
\section{Contribution of the semi--collinear region}
We will assume for definiteness that the photon with momentum $k_2$ moves
inside a narrow cone along the momentum direction of one of
the charged particles, while the other photon moves in any direction
outside all such cones around the charged particles.
This choice allows us to omit the statistical factor
$1/2!$. The quasireal electron method~\cite{quasi} may be used to obtain
the cross section:
\begin{eqnarray}
\d \sigma^{\mathrm{sc}}&=&\frac{\alpha^4}{32s\pi^4}\,
\frac{\d^3q_-\d^3q_+\d^3k_1}
{q_-^0q_+^0k_1^0} V \frac{\d^3k_2}{k_2^0} \biggl\{
\frac{{\cal K}_{p_-}}{p_-k_2} \delta_{p_-}R_{p_-}
\nonumber \\ \label{eq30}
&+& \frac{{\cal K}_{p_+}}{p_+k_2} \delta_{p_+}R_{p_+}
+ \frac{{\cal K}_{q_-}}{q_-k_2} \delta_{q_-}R_{q_-}
+ \frac{{\cal K}_{q_+}}{q_+k_2} \delta_{q_+}R_{q_+} \biggr\}.
\end{eqnarray}
We omitted the terms of the kind $m^2/(p_-k_2)^2$ in eq. (\ref{eq30})
because their contribution does not contain the large logarithm $L$.
The quantities entering into eq.~(\ref{eq30}) are given by:
\begin{eqnarray}
V&=&\frac{s}{k_1p_+ \cdot k_1p_-} + \frac{s'}{k_1q_+ \cdot k_1q_-}
- \frac{t'}{k_1p_+ \cdot k_1q_+} - \frac{t}{k_1p_- \cdot k_1q_-}
\nonumber \\
&+& \frac{u'}{k_1p_+ \cdot k_1q_-} + \frac{u}{k_1q_+ \cdot k_1p_-}\, .
\end{eqnarray}
$V$ is the known accompanying radiation factor;
${\cal K}_i$ are the single photon emission collinear factors:
\begin{eqnarray}
{\cal K}_{p_-}={\cal K}_{p_+}=\frac{g_2}{x_2r_2}, \quad
{\cal K}_{q_-}=\frac{y_1^2+(y_1+x_2)^2}{x_2(y_1+x_2)}, \quad
{\cal K}_{q_+}=\frac{y_2^2+(y_2+x_2)^2}{x_2(y_2+x_2)}\, .
\end{eqnarray}
Quantities $R_i$ read:
\begin{eqnarray}
&& R_{p_-}=R[sr_2,tr_2,ur_2,s',t',u'], \nonumber
\quad R_{p_+}=R[sr_2,t,u,s',t'r_2,u'r_2], \\
&& R_{q_-}=R[s,t\frac{t_1}{y_1},u,s'\frac{t_1}{y_1},t',u'\frac{t_1}{y_1}],
\quad R_{q_+}=R[s,t,u\frac{t_2}{y_2},s'\frac{t_2}{y_2},t'\frac{t_2}{y_2},u'],
\end{eqnarray}
where the function $R$ has the form~\cite{ber81}:
\begin{eqnarray}
&& R[s,t,u,s',t',u']=\frac{1}{ss'tt'}\bigl[ ss'(s^2+{s'}^2)+tt'(t^2+{t'}^2)
+uu'(u^2+{u'}^2) \bigr], \nonumber \\ \nonumber
&& s=(p_++p_-)^2, \quad s'=(q_++q_-)^2, \quad t=(p_--q_-)^2, \\
&& t'=(p_+-q_+)^2, \quad u=(p_--q_+)^2, \quad u'=(p_+-q_-)^2.
\end{eqnarray}
Finally, we define
\begin{eqnarray}
\delta_{p_-}&=&\delta^{(4)}(p_-r_2+p_+-q_+-q_--k_1),\nonumber \\ \nonumber
\delta_{p_+}&=&\delta^{(4)}(p_-+p_+r_2-q_+-q_--k_1), \\ \nonumber
\delta_{q_-}&=&\delta^{(4)}(p_-+p_+-q_+-q_-\frac{y_1+x_2}{y_1}-k_1), \\
\delta_{q_+}&=&\delta^{(4)}(p_-+p_+-q_+\frac{y_2+x_2}{y_2}-q_--k_1).
\end{eqnarray}
Performing the integration over angular variables of the collinear photon
we obtain
\begin{eqnarray}
\d \sigma^{\mathrm{sc}} &=& \frac{\alpha^4 L}{16 s\pi^3}\,
\frac{\d^3q_-\d^3q_+\d^3k_1}
{q_-^0q_+^0k_1^0} \d x_2 V \biggl\{{\cal K}_{p_-} [R_{p_-}\delta_{p_-}
+ R_{p_+}\delta_{p_+}] \nonumber \\ \label{eq35}
&+& \frac{1}{y_2}{\cal K}_{q_+} R_{q_+}\delta_{q_+}
+ \frac{1}{y_1}{\cal K}_{q_-} R_{q_-}\delta_{q_-} \biggr\}.
\end{eqnarray}
Let us now verify that the sum of the cross sections
(\ref{sigc}) and (\ref{eq35}),
\begin{eqnarray} \label{siggg}
\d \sigma^{\gamma\gamma}=\d \sigma^{\mathrm{coll}}
+ \int\d O_1 \bigl(\frac{\d \sigma^{\mathrm{sc}}}{\d O_1} \bigr)
\end{eqnarray}
does not depend on the auxiliary parameter $\theta_0$.
Indeed, the terms
$L \cdot l$ from eq.~(\ref{sigc}) cancel against the terms
\begin{equation}
L \frac{k_1^0q_i^0}{2\pi} \int\frac{\d O_1}{k_1q_i} \ \approx \ - L\cdot l,
\end{equation}
which arise from 16 regions in the semi--collinear kinematics.
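As a simple numerical check (a sketch added here for illustration, not part of the
original calculation), the angular integral above can be evaluated directly for a
representative value of the auxiliary parameter; neglecting the lepton mass for
$\theta_2 > \theta_0 \gg m/\varepsilon$ one indeed recovers $-l=\ln(4/\theta_0^2)$:
\begin{verbatim}
# numerical check of (k^0 q^0 / 2 pi) * Int dO / (k.q) ~= -l for theta > theta_0
import numpy as np
from scipy import integrate

theta0 = 0.05   # representative value, the same as used in the next section
val = integrate.quad(lambda t: np.sin(t) / (1.0 - np.cos(t)), theta0, np.pi)[0]
print(val, np.log(4.0 / theta0**2))   # both ~ 7.38
\end{verbatim}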
\section{Numerical results and discussion}
We separated the contribution of the collinear and semi--collinear
regions using the auxiliary parameter $\theta_0$. By direct numerical
integration according to the presented formulae we have convinced
ourselves that the total result is independent of the choice of $\theta_0$.
It is convenient to compare the cross section of double hard
photon emission with the Born cross section
\begin{eqnarray}
\sigma^{\mathrm{Born}}=\frac{\alpha^2\pi}{2s}
\int\limits_{-\cos\psi_0}^{\cos\psi_0}\left(\frac{3+c^2}{1-c}\right)^2\d c.
\end{eqnarray}
For illustration, we integrated over a typical experimental
angular acceptance and chose the following values of the parameters:
\begin{eqnarray}
&& \psi_0=\pi/4,\quad \sqrt{s}=0.9\; \mbox{GeV}, \quad \Delta_1=0.4, \quad
\Delta=0.05, \quad \theta_0 = 0.05, \nonumber \\
&& L = 15.0, \quad l=-7.38\, ,
\end{eqnarray}
where $\Delta_1$ defines the energy threshold for the registration
of the final electron and positron: $q_{\pm}^0 > \varepsilon_{\mathrm{th}}
=\varepsilon\Delta_1$.
Note that restrictions on $\theta_0$ (\ref{eq6}) and (\ref{z0})
$(z_0=\exp\{L+l\} \gg 1)$ are fulfilled.
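The quoted logarithms and the Born cross section given below can be reproduced
with a short numerical sketch (added for illustration only; the standard conversion
$1\ \mathrm{GeV}^{-2} = 0.3894$~mb is assumed):
\begin{verbatim}
import numpy as np
from scipy import integrate

alpha = 1.0 / 137.036
s = 0.9**2                        # GeV^2
m = 0.511e-3                      # electron mass, GeV
eps = 0.9 / 2.0                   # CM beam energy, GeV
theta0, psi0 = 0.05, np.pi / 4.0

L = np.log(4.0 * eps**2 / m**2)   # ~ 15.0
l = np.log(theta0**2 / 4.0)       # ~ -7.38

c0 = np.cos(psi0)
I = integrate.quad(lambda c: ((3.0 + c**2) / (1.0 - c))**2, -c0, c0)[0]
sigma_born = alpha**2 * np.pi / (2.0 * s) * I * 0.3894e-27   # cm^2
print(L, l, sigma_born)           # ~ 15.0, -7.38, ~1.2e-30 cm^2 (1.2 microbarn)
\end{verbatim}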
For the chosen parameters we get
\begin{eqnarray}
&& \sigma^{\mathrm{Born}} = 1.2\; \mu\mbox{b}, \qquad \nonumber
\frac{\sigma^{\mathrm{coll}}}
{\sigma^{\mathrm{Born}}}\cdot 100\% = - 0.25\,\%, \\
&& \frac{\sigma^{\mathrm{sc}}}
{\sigma^{\mathrm{Born}}}\cdot 100\% = 0.81\,\%, \qquad
\delta\sigma^{\mathrm{tot}}=
\frac{\sigma^{\mathrm{sc}}+\sigma^{\mathrm{coll}}}
{\sigma^{\mathrm{Born}}}\cdot 100\% = 0.56\,\%.
\end{eqnarray}
The {\em phenomenon\/} of negative contribution to the cross section
from the collinear kinematics is an artifact of our approach.
Namely, we systematically omitted positive terms without large logarithms;
in particular, we dropped terms proportional to $l^2$. The cancellation
of $l^2$ terms can be seen only after adding the contribution
of the non--collinear kinematics (when both photons are emitted
outside narrow cones along charged--particle momenta). The non--collinear
kinematics does not provide any large logarithm $L$.
Both quantities $\sigma^{\mathrm{coll}}$ and
$\sigma^{\mathrm{sc}}$ depend on the auxiliary parameter
$\theta_0$. We eliminated by hand from eq.~(\ref{sigc}) the
terms proportional to $l$ and obtained the following quantity:
\begin{eqnarray}
\frac{\sigma_{\mathrm{coll}}^{\mathrm{bare}}}
{\sigma^{\mathrm{Born}}}\cdot 100\% = 1.43\,\%.
\end{eqnarray}
This quantity corresponds to an approximation for the correction
under consideration in which one considers only the collinear
regions and takes into account only terms proportional to $L^2$ and $L$
(all terms dependent on $\theta_0$ are to be omitted).
Having in mind the cancellation of $\theta_0$--dependence in
the sum of the collinear and semi--collinear contributions,
we may subtract from the value of the semi--collinear contribution
the part which is associated with $l$:
\begin{eqnarray}
&& \sigma_{\mathrm{sc}}^{\mathrm{bare}}= \nonumber
\sigma_{\mathrm{sc}} + (\sigma_{\mathrm{coll}}
- \sigma_{\mathrm{coll}}^{\mathrm{bare}} ), \qquad
\frac{\sigma_{\mathrm{sc}}^{\mathrm{bare}}}
{\sigma^{\mathrm{Born}}}\cdot 100\% = -0.87\,\%.
\end{eqnarray}
Looking at the {\em bare\/} quantities one can get an idea of the relative
impact of the two considered regions. We see that at the precision
level of $0.1\%$ the next--to--leading contributions of
semi--collinear regions are important.
In figure~1 we illustrate the dependence of the bare collinear
contribution on the parameter $\Delta$ for different fixed values
of $\Delta_1$. The rapid growth in the region of small $\Delta$
corresponds to the infrared singularity, which is cancelled
after adding the contributions of virtual and soft photon
emission.
\ack
We are grateful to S.~Eidelman, G.~Fedotovich, P.~Franzini and
G.~Pancheri for fruitful discussions. For the help in numerical
calculations we thank I.V.~Amirkhanov, T.A.~Strizh and T.P.~Puzynina.
We are also grateful to INFN, Parma university for hospitality.
The work was partially supported by INTAS grant 93--1867
and RFBR grant 96-02-17512.
One of us (A.B.A.) is thankful to the INTAS foundation
for financial support via an ICFPM grant.
\section*{Appendix A}
We present here the list of integrals (see eqs.~(\ref{abcd} -- \ref{eq26})):
\begin{eqnarray}
&& \overline{\frac{A_2}{A^2A_1}}=\frac{L_0}{x_1x_2r_1^2}
\biggl[\frac{1}{2}L_0 + \ln \frac{x_2r_1^2}{x_1y} - 1
+ \frac{x_1x_2}{y} \biggr], \nonumber \\
&& \overline{\frac{1}{AA_1}}=\frac{L_0}{x_1x_2r_1}
\biggl[\frac{1}{2}L_0 + \ln \frac{x_2r_1^2}{x_1y} \biggr],
\qquad \overline{\frac{m^2}{AA_1^2}}=-\frac{L_0}{x_1^2x_2r_1}\, ,
\\ \nonumber
&& \overline{\frac{1}{A_1A_2}}=\frac{L_0^2}{x_1x_2},
\qquad \overline{\frac{1}{A_1B_2}}=-\frac{L_0}{y_1x_1x_2}
(L_0+2\ln y_1),
\\ \nonumber
&& L_0=\ln z_0 \equiv L + l, \qquad l=\ln(\frac{\theta_0^2}{4}),
\qquad L=\ln(\frac{4\varepsilon^2}{m^2}).
\end{eqnarray}
The remaining integrals could be obtained by simple substitutions
defined in eqs.~(\ref{abcd} -- \ref{eq26}).
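As a simple illustration (a numerical sketch added here, not part of the paper),
the integral $\overline{1/(A_1A_2)}$ can be checked directly: with
$A_i/m^2=-x_i(1+z_i)$ the integrand does not depend on $\phi$, so the double
integral factorizes into $\ln^2(1+z_0)/(x_1x_2)\simeq L_0^2/(x_1x_2)$ for $z_0\gg 1$:
\begin{verbatim}
import numpy as np
from scipy import integrate

x1, x2 = 0.3, 0.2                 # arbitrary test values of the energy fractions
L0 = 15.0 - 7.38                  # L + l for the parameters used in the text
z0 = np.exp(L0)

inner = integrate.quad(lambda z: 1.0 / (1.0 + z), 0.0, z0)[0]   # = ln(1 + z0)
print(inner**2 / (x1 * x2), L0**2 / (x1 * x2))   # agree up to terms ~ 1/z0
\end{verbatim}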
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,716 |
Q: How do I write an application similar to DevicePairingWizard.exe that pairs Bluetooth devices before the Windows 10 login? Hi.
How do I write an application similar to "DevicePairingWizard.exe" that pairs a Bluetooth device before the Windows 10 login?
I know how to do this after login.
But before login I get "Access Denied".
"DevicePairingWizard.exe" is able to do it before login.
Which API does it use (C, C++, C# -- it doesn't matter)? Assume the MAC address is known.
Thanks.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,769 |
{"url":"https:\/\/hopfcombinatorics.wordpress.com\/2012\/03\/01\/dual-coalgebra-hw3-problem-2-2\/","text":"# Dual coalgebra (HW3 problem\u00a02)\n\nI don\u2019t understand how information from the algebra gets into the dual coalgebra.\n\nHere\u2019s what I think I know: You dualize the multiplication to get a comultiplication (a trick I understand diagrammatically but not constructively) , and elements of the coalgebra are linear functionals in bijection with the elements of the algebra.\n\nDo we know based on element a of algebra A how a* maps A* to the field?\n\n1. The notation a* is confusing. There is no way to assign to each a in A an element a* of A*. So usually, even when a already stands for an element of A, the notation a* can be used for any *arbitrary* element of A*, which may be totally unrelated to a.\n\nThere is one exception to this rule, and that appears when you have a basis $\\left(e_i\\right)_{i\\in I}$ of A. Then, there is a \u201cdual basis\u201d of A*, and this basis is usually denoted by $\\left(e_i^{\\ast}\\right)_{i \\in I}$. (Caveat: it is only a basis if A is finite-dimensional.) But even in this case, it can\u2019t be said that each $e_i^{\\ast}$ depends only on the respective $e_i$; it depends on the whole basis.\n\nHow to understand the dual coalgebra? The easiest way is probably this one:\n\nAssume that A is finite-dimensional (otherwise, the notion of the dual coalgebra is harder to define \u2013 it is not the whole A* anymore). Let $\\left(e_1,e_2,...,e_n\\right)$ be a basis of A. Write down the multiplication table of A; this is the $n\\times n$ matrix whose (i, j)-th entry is the product $e_i e_j$, written as a vector with respect to the basis $\\left(e_1,e_2,...,e_n\\right)$. Now, for every k, you can compute the coproduct of $e_k^{\\ast}$ in the dual coalgebra A* by\n\n$\\Delta\\left(e_k^{\\ast}\\right) = \\sum_{i, j} \\left(\\text{the }e_k\\text{-coordinate of }e_ie_j\\right)\\cdot e_i^{\\ast}\\otimes e_j^{\\ast}$.\n\n2. Brian Cruz\n\nThis is also when I ended up getting, using $\\Delta=\\rho^-1m^*$ to show it. Start out with\n\n$\\Delta e_k^*=\\rho^-1m^*e_k^*=\\sum_{i,j}\\lambda_{ijk}e_i^*\\otimes e_j^*$\n\nmove the $\\rho$ to the other side and the magic happens when you start evaluating on basis of the tensor space. The key is that since our dim is finite, we have a dual basis such that $e_k^*(e_j)=\\delta_kj$ so that it\u2019s one when they are the same and zero everywhere else.\n\nI got this idea by looking at the matrices on http:\/\/haskellformaths.blogspot.com\/2011\/04\/what-is-coalgebra.html (which does not quite present things in the exact way Darij does above, but it\u2019s equivalent)\n\n3. Brian Cruz\n\nOh yeah, and the $\\Delta=\\rho^{-1}m^*$ part came from Maria and Brian from a conversation we had earlier today!\n\n4. From the responses, it seems we think that there is a strong association of the commultiplication of an algebra\u2019s dual with its multiplication- roughly, that the coproduct of a dual-basis element $e_k^*$ is the sum of tensors $e_i^*$ and $e_j^*$ over all i\u2019s and j\u2019s such that $e_i \\otimes e_j=\\lambda_{ij} e_k$? Does anybody understand how this happens, mechanically, during the dualization? 
It makes mystical sense to me\u2026","date":"2018-03-24 07:48:25","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 19, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8575502038002014, \"perplexity\": 441.2235395569763}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-13\/segments\/1521257649961.11\/warc\/CC-MAIN-20180324073738-20180324093738-00320.warc.gz\"}"} | null | null |
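The constructive recipe in the record above (read the coproduct of a dual-basis element off the structure constants of the multiplication) is easy to try out in code. The following is a small illustrative sketch, not from the original post; the example algebra (the complex numbers viewed as a 2-dimensional real algebra with basis 1, i) is just a convenient choice.

    import numpy as np

    # Structure constants lam[i, j, k]: e_i * e_j = sum_k lam[i, j, k] e_k.
    # Example: A = C as a real algebra, e_0 = 1, e_1 = i (so i * i = -1).
    lam = np.zeros((2, 2, 2))
    lam[0, 0, 0] = 1.0
    lam[0, 1, 1] = 1.0
    lam[1, 0, 1] = 1.0
    lam[1, 1, 0] = -1.0

    def dual_coproduct(lam, k):
        """Coefficient matrix c[i, j] of Delta(e_k^*) = sum c[i, j] e_i^* (x) e_j^*,
        i.e. the e_k-coordinate of each product e_i e_j."""
        return lam[:, :, k]

    for k in range(2):
        print("Delta(e_%d^*) coefficient matrix:" % k)
        print(dual_coproduct(lam, k))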
Calling all Texans! I loved making this piece; it was a request from a close friend of mine who lives in Dallas! The star is 3D, hand cut, painted, and distressed, and the weathered look really makes it pop. This would look great in your Texas home... or your home away from your Texas home. You know you want it!
It's handmade to order from 100% reclaimed wood, so each piece is totally unique and may differ slightly from the photos. It comes equipped with a hanger on the back, ready to hang on the wall. The sizes given are approximate.
"redpajama_set_name": "RedPajamaC4"
} | 8,518 |
Q: Python, website request login not working I'm trying to log in to a website via a script, but when I print the website's HTML content I can't see any of the data that should be available after login...
Can someone tell what I am missing? Thank you!
import requests

def main():
headers = {
"User-Agent":
"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.101 Safari/537.36",
}
s = requests.session()
s.headers.update(headers)
s.get('https://www.e-ams.at/eams-sfa-account/p/index.jsf')
# Generate the post data
data = {
'url': 'https://www.e-ams.at/eams-sfa-account/p/index.jsf',
'j_username': 'username',
'j_password': 'password'
}
# Perform the post request
r = s.post('https://www.e-ams.at/eams-sfa-account/p/index.jsf', data=data)
# Try to get data only available after login
r = s.get('https://www.e-ams.at/eams-sfa-account/p/EsaSBasisdaten.jsf?eamsTrack=1524234335254')
print(r.url)
print(r.text)
    print(r.status_code)

if __name__ == '__main__':
    main()
A: If it is not one of the HTML form's input fields, then including the url key in the data dict is not correct.
Your request should look as follows:
data = {
'j_username': 'username',
'j_password': 'password'
}
r = s.post('https://www.e-ams.at/eams-sfa-account/p/index.jsf', data=data)
Generally speaking, all the input tags of the form (both visible and hidden) must be included in the data dict.
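When the form also carries hidden fields (for example a CSRF token or, on a JSF page like this one, a view-state field), a common approach is to scrape every input of the login form first and then add the credentials. A rough sketch using BeautifulSoup (not tested against this particular site):
import requests
from bs4 import BeautifulSoup

s = requests.session()
login_url = 'https://www.e-ams.at/eams-sfa-account/p/index.jsf'
soup = BeautifulSoup(s.get(login_url).text, 'html.parser')

form = soup.find('form')
# Collect all named inputs, including hidden ones, then add the credentials
data = {inp.get('name'): inp.get('value', '')
        for inp in form.find_all('input') if inp.get('name')}
data['j_username'] = 'username'
data['j_password'] = 'password'

# Post to the form's action URL (fall back to the login page itself)
action = form.get('action') or login_url
r = s.post(requests.compat.urljoin(login_url, action), data=data)
print(r.status_code)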
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,013 |
\chapter{The OLYMPUS 3D Event Display}
\label{chap:ed}
As a tool for the development of the OLYMPUS analysis, a three-dimensional event display was created for the OLYMPUS analysis framework that permitted the visualization of
the detector geometry, the results of hit and track reconstruction, and the results of simulated events. The program functions as a standalone visualization tool to allow
adaptation for other purposes, but is useful when integrated with an analysis framework (such as the one developed by J.C. Bernauer for OLYMPUS) to permit the simultaneous analysis of the events
being visualized. The event display was extremely useful in diagnosing issues both with data reconstruction and simulation by allowing direct inspection of events, especially those
events for which a particular procedure was not functioning.
The event display utilizes elements from the ROOT Event Visualization Environment originally developed for the ALICE experiment \cite{Brun1997,alice} and OpenGL to produce the detector
visualization \cite{Shreiner:2009:OPG:1696492}. The detector geometry is imported via the GDML format discussed in Section \ref{sec:detmod}\cite{gdml}, and thus the display is capable of
visualizing any geometry in that format.
The display provides functionality to control which elements of the detector are displayed, how and which elements of the data and reconstruction
are displayed, completely control the camera view of the detector and to switch between orthographic and perspective viewing modes, and to save images of the display. The display also provides
information relevant to the events displayed and inherits functionality from the OLYMPUS analysis framework that allows direct access to specific events in a file and viewing of analysis
histograms as they are filled by the events being displayed. Figure \ref{fig:ed} shows an example of the interface, displaying the detector and a reconstructed elastic {$e^- p$} event.
The source code for the OLYMPUS Event Display (written in C++ using ROOT libraries) is available from the author for academic purposes, and is easily adaptable for other
particle physics experiments. Notably, Geant4 includes the functionality to export geometries written in the Geant4 geometry framework to GDML, providing immediate compatibility
with the OLYMPUS event display \cite{Agostinelli:2002hh}.
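For reference, the core of this GDML-to-visualization pipeline can be sketched in a few
lines of PyROOT (an illustrative sketch only; the file name \texttt{olympus.gdml} is a
placeholder, and a ROOT build with Eve/OpenGL and GDML support is assumed):
\begin{verbatim}
import ROOT

ROOT.TEveManager.Create()                    # start the Eve/OpenGL environment
ROOT.TGeoManager.Import("olympus.gdml")      # import a detector geometry from GDML
top = ROOT.gGeoManager.GetTopNode()
eve_top = ROOT.TEveGeoTopNode(ROOT.gGeoManager, top)
ROOT.gEve.AddGlobalElement(eve_top)          # register the geometry with the scene
ROOT.gEve.Redraw3D(ROOT.kTRUE)               # reset the camera and draw
\end{verbatim}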
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/eventdisplay.png}}
\caption[User interface of the OLYMPUS 3D Event Display]{The user interface of the OLYMPUS 3D event display, running in conjunction with the OLYMPUS
analysis framework for reconstruction of events from data. The main display area can be used to visualize the geometry of the detector, hits in the various detector elements,
and the results of track reconstruction. The controls surrounding it provide options for changing visualization options, stepping through events, and the ability to save the display
window as an image.}
\label{fig:ed}
\end{figure}
\chapter{Histograms of Data Events by the Quantities used for the Elastic Event Selection}
\label{chap:kincuts}
This appendix provides 2D histograms of events after initial pair selection for each of the kinematic cut parameters used in the
elastic pair selection described in Section \ref{sec:pairsel} and the reconstructed lepton angle $\theta_{e^\pm}$.
Two histograms are presented for each parameter: one corresponding to elastic {$e^- p$} event selection and the other to {$e^+ p$} selection.
Each histogram has been normalized by the Rosenbluth cross section (using dipole form factors for simplicity) as a function of $\theta_{e^\pm}$ to make the scale of
the counts across the entire histogram more uniform and thus make the resolution of the reconstructed parameters and shape of the background at back angles
visible at all $\theta_{e^\pm}$. In general, the corresponding simulation histograms were effectively identical up to the removal of the background and slightly
better resolutions in some parameters.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut1e.pdf}}
\caption[Histogram of the $\Delta t$ cut parameter for the {$e^- p$} initial pair selection]{Histogram of the $\Delta t$ cut parameter (uncorrected for vertex position) for the {$e^- p$} initial pair selection (before background subtraction)
as a function of the electron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut1e}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut1p.pdf}}
\caption[Histogram of the $\Delta t$ cut parameter for the {$e^+ p$} initial pair selection]{Histogram of the $\Delta t$ cut parameter (uncorrected for vertex position) for the {$e^+ p$} initial pair selection (before background subtraction)
as a function of the positron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut1p}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut2e.pdf}}
\caption[Histogram of the $\Delta z$ cut parameter for the {$e^- p$} initial pair selection]{Histogram of the $\Delta z$ cut parameter for the {$e^- p$} initial pair selection (before background subtraction)
as a function of the electron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut2e}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut2p.pdf}}
\caption[Histogram of the $\Delta z$ cut parameter for the {$e^+ p$} initial pair selection]{Histogram of the $\Delta z$ cut parameter for the {$e^+ p$} initial pair selection (before background subtraction)
as a function of the positron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut2p}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut3e.pdf}}
\caption[Histogram of the elastic angle correlation cut parameter for the {$e^- p$} initial pair selection]{Histogram of the elastic angle correlation cut parameter for the {$e^- p$} initial pair selection
(before background subtraction) as a function of the electron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut3e}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut3p.pdf}}
\caption[Histogram of the elastic angle correlation cut parameter for the {$e^+ p$} initial pair selection]{Histogram of the elastic angle correlation cut parameter for the {$e^+ p$} initial pair selection
(before background subtraction) as a function of the positron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut3p}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut4e.pdf}}
\caption[Histogram of the $E_{\text{beam},p}$ cut parameter for the {$e^- p$} initial pair selection]{Histogram of the $E_{\text{beam},p}$ cut parameter for the {$e^- p$} initial pair selection (before background subtraction) as a function of the electron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut4e}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut4p.pdf}}
\caption[Histogram of the $E_{\text{beam},p}$ cut parameter for the {$e^+ p$} initial pair selection]{Histogram of the $E_{\text{beam},p}$ cut parameter for the {$e^+ p$} initial pair selection (before background subtraction) as a function of the positron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut4p}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut5e.pdf}}
\caption[Histogram of the $E_{\text{beam},\theta}$ cut parameter for the {$e^- p$} initial pair selection]{Histogram of the $E_{\text{beam},\theta}$ cut parameter for the {$e^- p$} initial pair selection (before background subtraction) as a function of the electron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut5e}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut5p.pdf}}
\caption[Histogram of the $E_{\text{beam},\theta}$ cut parameter for the {$e^+ p$} initial pair selection]{Histogram of the $E_{\text{beam},\theta}$ cut parameter for the {$e^+ p$} initial pair selection
(before background subtraction) as a function of the positron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut5p}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut6e.pdf}}
\caption[Histogram of the $\Delta E'_\theta/E'^2$ cut parameter for the {$e^- p$} initial pair selection]{Histogram of the $\Delta E'_\theta/E'^2$ cut parameter for the {$e^- p$} initial pair selection (before background subtraction) as a function of the electron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut6e}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut6p.pdf}}
\caption[Histogram of the $\Delta E'_\theta/E'^2$ cut parameter for the {$e^+ p$} initial pair selection]{Histogram of the $\Delta E'_\theta/E'^2$ cut parameter for the {$e^+ p$} initial pair selection
(before background subtraction) as a function of the positron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut6p}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut7e.pdf}}
\caption[Histogram of the $p_z$ balance cut parameter for the {$e^- p$} initial pair selection]{Histogram of the $p_z$ balance cut parameter for the {$e^- p$} initial pair selection (before background subtraction) as a function of the electron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut7e}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut7p.pdf}}
\caption[Histogram of the $p_z$ balance cut parameter for the {$e^+ p$} initial pair selection]{Histogram of the $p_z$ balance cut parameter for the {$e^+ p$} initial pair selection
(before background subtraction) as a function of the positron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut7p}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut8e.pdf}}
\caption[Histogram of the $\Delta \phi$ cut parameter for the {$e^- p$} initial pair selection]{Histogram of the $\Delta\phi$ cut parameter for the {$e^- p$} initial pair selection (before background subtraction)
as a function of the electron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut8e}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cut8p.pdf}}
\caption[Histogram of the $\Delta \phi$ cut parameter for the {$e^+ p$} initial pair selection]{Histogram of the $\Delta\phi$ cut parameter for the {$e^+ p$} initial pair selection (before background subtraction)
as a function of the positron
scattering angle, normalized to the Rosenbluth cross section.}
\label{fig:cut8p}
\end{figure}
\chapter{Introduction}
\label{Chap1}
While protons and neutrons comprise nearly all of visible matter by mass, their inner workings remain relatively poorly understood.
Since Rutherford reported the ``anomalous'' discovery that collisions between $\alpha$ particles (helium nuclei)
and nitrogen nuclei could produce hydrogen nuclei (i.e., protons) \cite{doi:10.1080/14786431003659230} in 1919, understanding the role of the proton
in nuclei and the nature of the proton itself has been one of the primary goals of particle physics. Shortly after his discovery, Rutherford proposed that
the nuclei of atoms consist of protons and related neutral particles of very similar mass \cite{Rutherford374}. Chadwick discovered Rutherford's
neutral particles in 1932 \cite{Chadwick0,Chadwick1}, and he found that they behaved suspiciously similarly to protons, despite their neutral charge, indicating that
interesting physics hid within the nature of protons and neutrons. The first hint of the proton's internal complexity came with Otto Stern's 1933 discovery
that the magnetic moment of the proton deviated by a factor of 2.79 from the value
expected if it was a point-like particle \cite{stern1,stern2}. Experiments such as those of Hofstadter at Stanford began to probe the structure of
the proton with elastic electron-proton scattering in the 1950s, demonstrating the proton's finite size \cite{PhysRev.98.217,PhysRev.102.851,RevModPhys.28.214}. Concurrent with the experiments
of Hofstadter, other experiments were discovering a previously unknown sector of particles related to the proton: a variety of hadronic states varying widely in mass, quantum numbers,
lifetimes, and decay products. The quark model proposed by Gell-Mann and Zweig in 1964 \cite{GELLMANN1964214,Zweig:570209} gave order to the mess of hadrons, and the MIT-SLAC
experiments in the 1960s and 1970s slowly uncovered the point-like nature of the proton's, and other hadrons', constituents (i.e., quarks and gluons) \cite{PhysRevLett.23.930,PhysRevLett.23.935}.
While quantum chromodynamics (QCD) comprises a complete quantum field theory of the strong interaction of quarks and gluons \cite{doi:10.1142/9789814525220_0008,Agashe:2014kda}, the non-perturbative
nature of this theory at low energies poses significant difficulties in modeling the interactions that give rise to the bound states of QCD. Thus, experimental tests of QCD remain critical to furthering
knowledge of the strong force. In particular, a complete understanding of the proton, the fundamental bound state of QCD, is crucial. Despite over a century of both theoretical and experimental study,
basic questions remain about the proton, including whether it is truly stable \cite{pdperk,pdproc}, how its constituents give rise to its total spin ($S=\frac{1}{2}$) \cite{jps,doi:10.1142}, and more seemingly
simple topics such as the physical extent and distribution of charge within protons \cite{Carlson201559,Perdrisat2007694,PhysRevC.69.022201}. OLYMPUS is chiefly concerned
with a recent puzzle in the latter category.
\section{The Fundamentals of OLYMPUS: Motivation and Goals}
The OLYMPUS experiment concerns fundamental properties that were examined by the earliest studies of the proton's finite size and structure, i.e., measurement of the
form factors $G_E$ and $G_M$ for the elastic scattering of electrons from protons that encode how the structure of the proton affects the electromagnetic interaction between
the two particles. The form factors are typically represented as functions of the square of the four-momentum transfer that occurs in the {$e^- p$} interaction $Q^2$, so as to make
them Lorentz invariant representations of the proton's structure.
Measurements of the form factors began in the 1950s with the work of Hofstadter at Stanford \cite{PhysRev.98.217,PhysRev.102.851,RevModPhys.28.214} and continued using the method
originally formulated by Rosenbluth \cite{PhysRev.79.615} (Section \ref{sec:rossep}) into the 1990s, pushing the measurements to higher values of $Q^2$ \cite{ff3,ff4,ff8,ff9}. In the 1990s,
the development of polarized electron beams and targets provided a new method to study the elastic form factors, offering a means of measuring the ratio $\frac{\mu_pG_E}{G_M}$ as a function of $Q^2$ rather than
a method of measuring them individually \cite{pol11,pol3,pol12,pol7,pol9,pol13}. When comparing the behavior of $\frac{\mu_pG_E}{G_M}$ as found
by the previous data (shown in blue in Figure \ref{fig:disc}) to the new data based on methods employing polarization (the red data in \ref{fig:disc}), a significant disparity between
the results yielded by each method was discovered. All data prior to the polarization-based measurements was consistent with a value of $\frac{\mu_pG_E}{G_M}$ that stays relatively constant and near unity as a function of $Q^2$
while the polarization data favors a clear downward sloping trend that is completely inconsistent with the previous results. Theorists reanalyzed the data of the older experiments
using modern methods \cite{PhysRevC.68.034325} and new experiments using the method of Rosenbluth were conducted \cite{ff10,ff11}, but the discrepancy persisted.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/GE_GM_Ratio2.pdf}}
\caption[The $G_E/G_M$ discrepancy]{Selection of experimental results on the ratio $\frac{\mu_pG_E}{G_M}$ along with phenomenological fits to the data, illustrating the discrepancy
between experiments using Rosenbluth separation and polarization-based methods.
(Rosenbluth separation data: \cite{ff3,ff4,ff8,ff9,ff10,ff11}, polarization data: \cite{pol11,pol3,pol12,pol7,pol9,pol13}, phenomenological fits: \cite{BerFFPhysRevC.90.015206}) (Figure reproduced from Reference \cite{Milner:2014}).}
\label{fig:disc}
\end{figure}
The most widely-accepted hypothesis to explain this discrepancy is that the Rosenbluth method of extracting the form factors from {$e^- p$} scattering data does not properly account for an effect
that was previously assumed to be negligible: the contribution to the scattering cross section from two photons being exchanged between the particles. Typically, calculations
assume a single photon carrying the full momentum transfer. Due to the complexity of the proton's state between the exchange of the two photons, this contribution
is not analytically calculable from theory and predictions from various models vary significantly \cite{Blunden:2003sp,Chen:2004tw,Afanasev:2005mp,Blunden:2005ew, Kondratyuk:2005kk, Borisyuk:2006fh,TomasiGustafsson:2009pw}.
Experiments to test this hypothesis were necessary.
As will be discussed in Section \ref{sec:estpe}, a measurement of the ratio of the cross section for positron-proton scattering to that of electron-proton scattering provides
an experimental signature of two-photon exchange in the lepton-proton interaction. To test the hypothesis, and to provide a means of discriminating between different theoretical and phenomenological
predictions for the significance of the two-photon exchange contributions, measurements must have very small uncertainties ($\lesssim 1\%$). The OLYMPUS experiment was designed to achieve this
goal. The approach taken by OLYMPUS to make this measurement is summarized by the following equation:
\begin{equation}
R_{2\gamma}\left(\epsilon,Q^2\right) = \frac{\sigma_{e^+p}\left(\epsilon,Q^2\right)}{\sigma_{e^-p}\left(\epsilon,Q^2\right)} = \frac{N_{e^+p,\text{data} }\left(\epsilon,Q^2\right)}{N_{e^- p,\text{data}}\left(\epsilon,Q^2\right)} \cdot
\frac{N_{e^-p,\text{MC}}\left(\epsilon,Q^2,\mathcal{L}_{e^-}\right) }{N_{e^+ p,\text{MC}}\left(\epsilon,Q^2,\mathcal{L}_{e^+}\right)}.
\label{eq:rat}
\end{equation}
OLYMPUS compared an experimental measurement of the rates of elastic {$e^\pm p$} scattering ($N_{e^\pm p,\text{data} }\left(\epsilon,Q^2\right)$) with a detailed simulation of the expected
rates in the absence of two-photon exchange ($N_{e^\pm p,\text{MC} }\left(\epsilon,Q^2,\mathcal{L}_{e^\pm}\right)$). In addition to collecting a high-statistics sample of both {$e^- p$} and {$e^+ p$} events to control statistical uncertainties, it was also critical
to precisely measure the relative luminosity collected for each of the lepton species ($\mathcal{L}_{e^+}/\mathcal{L}_{e^-}$), and to create a model of the experiment in simulation that
accurately reflected the reality of the experiment to control the systematic uncertainties of the measurement. This thesis provides a comprehensive discussion of the OLYMPUS
experiment and analysis to produce a measurement of $R_{2\gamma}$ with $\lesssim1\%$ total uncertainty in the kinematic range of $(0.4 \leq \epsilon \leq 0.9)$, $(0.6 \leq Q^2 \leq 2.2)$ GeV$^2/c^2$.
Projections for the precision of the experiment, the world's existing data prior to 2015 on {$\sigma_{e^+p}/\sigma_{e^-p}$} , and various theoretical and phenomenological predictions for the results of OLYMPUS are shown in
Figure \ref{fig:projections}.
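As a rough numerical illustration of how Equation \ref{eq:rat} is applied (an added sketch
with invented yields, not actual OLYMPUS counts), the statistical uncertainty of the
super-ratio in a single $(\epsilon, Q^2)$ bin is driven by the four event counts entering it:
\begin{verbatim}
import math

# hypothetical counts in one (epsilon, Q^2) bin
N_data_pos, N_data_neg = 40800, 40000      # e+p and e-p data yields
N_mc_pos,   N_mc_neg   = 402000, 400000    # luminosity-scaled MC yields

R  = (N_data_pos / N_data_neg) * (N_mc_neg / N_mc_pos)
dR = R * math.sqrt(1/N_data_pos + 1/N_data_neg + 1/N_mc_pos + 1/N_mc_neg)
print(R, dR)   # ~1.015 +/- 0.007, i.e. sub-percent statistical uncertainty
\end{verbatim}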
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/epratiow.pdf}}
\caption[Existing data, model predictions, and projected OLYMPUS uncertainty on {$\sigma_{e^+p}/\sigma_{e^-p}$} ]{The ratio of {$e^+ p$} to {$e^- p$} scattering at a lepton beam energy
of 2.01 GeV as a function of the kinematic variable $\epsilon$ (see Equation \ref{eq:eps}) as predicted by phenomenological models \cite{BerFFPhysRevC.90.015206,Chen:2007ac, Guttmann:2010au},
theoretical calculations of possible two-photon exchange contributions \cite{Blunden:2003sp,Chen:2004tw,Afanasev:2005mp,Blunden:2005ew, Kondratyuk:2005kk, Borisyuk:2006fh,TomasiGustafsson:2009pw},
and projected uncertainties of the measurement OLYMPUS will provide. The existing world data on {$\sigma_{e^+p}/\sigma_{e^-p}$} prior to $\sim$2015 is shown \cite{Yount:1962aa,Browman:1965zz,Bouquet:1968aa,Mar:1968qd}, although these experiments were all at different beam
energies. (Figure reproduced from Reference \cite{Milner:2014}.)}
\label{fig:projections}
\end{figure}
\section{Notes on the Organization and Content of this Work}
The intent of this work is to provide discussion of the essential parts of all elements of the OLYMPUS experiment and complete details on the primary elements which the author
contributed to the experiment. Topics in the latter category include various elements of the detector construction and calibration (in particular, the hydrogen
target system (Section \ref{sec:target})), the survey of the magnetic field and implementation of the results as a field map for tracking and simulation (Section \ref{sec:magsur}),
the slow control luminosity analysis (Section \ref{sec:sclumi}), all aspects of the 12{$^\circ$} luminosity analysis from calibration to the event selection analysis (Section \ref{sec:12lumi}),
and a complete independent analysis of the main OLYMPUS {$\sigma_{e^+p}/\sigma_{e^-p}$} result (Chapter \ref{Chap6}). Appendix \ref{chap:ed} presents a 3D event display framework created by the author
for the visualization of both data and simulated events for OLYMPUS using ROOT libraries \cite{Brun1997} that is easily adaptable for other experiments. The other theses written on
OLYMPUS provide detailed descriptions of a number of topics including radiative corrections (References \cite{schmidt} and \cite{russell}), track reconstruction (References \cite{schmidt}
and \cite{russell}), the symmetric M{\o}ller-Bhabha luminosity analyses (References \cite{oconnor} and \cite{schmidt}), the analysis and simulation of the time-of-flight detector
system (Reference \cite{russell}), implementation of the surveyed detector geometry in the simulation detector model \cite{oconnor}, and independent {$\sigma_{e^+p}/\sigma_{e^-p}$} analyses in each.
Rather than reproduce discussion on basic topics of particle physics that are well covered in other works, this thesis assumes basic knowledge of essential particle and nuclear physics concepts,
both in the description of physics processes and the detection of particles, and thus only includes introductions to essential advanced topics that are critical to the motivation and execution of
the OLYMPUS experiment. Excellent references for any unfamiliar terms and concepts include References \cite{Agashe:2014kda} and \cite{griffiths} for essential
concepts of particle physics, Reference \cite{grupen} for particle detection related topics, and Reference \cite{peskin} for the basics of particle physics theory.
As a guide to the contents of this work, the following summarizes the main topics and goals of each chapter:
\begin{itemize}
\item Chapter \ref{Chap2} provides an introduction to the fundamentals of proton form factors and their experimental determination and a more detailed discussion
regarding the proton form factor puzzle, the two-photon exchange hypothesis, and the goals of OLYMPUS.
\item Chapter \ref{Chap3} describes the essential elements of the design and operation of the OLYMPUS experiment, including conventions for description of the experiment, the beam,
the hydrogen target, and the detector systems.
\item Chapter \ref{Chap4} outlines the OLYMPUS analysis strategy, in particular the work done to ensure that the OLYMPUS simulation robustly represented the experimental conditions
to allow comparison of data and Monte Carlo for the final analyses.
\item Chapter \ref{Chap5} details the measurement of the relative luminosity of electron and positron data collected in the three independent systems designed for the measurement, with
particular attention to the slow control and 12{$^\circ$} luminosities. For the 12{$^\circ$} luminosity, all relevant details of the analysis are presented including calibration, hit reconstruction, track
reconstruction, simulation implementation, event selection, and a detailed systematic uncertainty analysis. For the slow control luminosity, a novel Monte Carlo code is presented that
simulates the density of gas undergoing molecular flow within a geometry.
\item Chapter \ref{Chap6} describes the independent analysis of the main cross section result conducted by the author, including aspects of the detector performance and simulation, the analysis
methods used, and an estimate of systematic uncertainties.
\item Chapter \ref{Chap7} presents the results of the analysis from the previous chapter and discusses it in the context of the other analyses, existing data on {$\sigma_{e^+p}/\sigma_{e^-p}$} , and the implications
for the form factor discrepancy and future work.
\end{itemize}
\chapter{The Proton, Form Factors, and Theoretical Motivation of the OLYMPUS Experiment}
\label{Chap2}
Fundamentally, OLYMPUS seeks to address the question of how charge and magnetization are distributed within
the proton. While a complete and consistent theoretical model of the strong force interactions that give rise
to the proton and other hadronic states exists in the form of quantum chromodynamics (QCD)\footnote{See Chapter 9 of Reference \cite{Agashe:2014kda} for a
review of the fundamentals of QCD or Reference \cite{doi:10.1142/9789814525220_0008} for a more detailed discussion.}, calculations of the complex
bound state of three valence quarks, quark-antiquark pairs, and gluons that comprise the proton have proven extremely difficult due to the non-perturbative
nature of QCD at low energies ($\lesssim \Lambda_\text{QCD}\approx 200$ MeV). While there has been some recent success in calculating properties of the proton and other light bound hadronic
states using lattice QCD \cite{PhysRevD.67.034503,PhysRevLett.92.022001,Durr1224}, full descriptions of the distributions and interactions of particles within bound
states of QCD remain elusive. Since the proton is the only fundamental, stable bound state of QCD, experimental examination of the proton to determine its
nature is critical to furthering knowledge of the strong force.
Since Rose first suggested in 1948 that the charge distributions within protons and nuclei could
be examined by scattering leptons from protons \cite{PhysRev.73.279}, a variety of experiments have studied the proton in this way over a large
range of energies. Before proceeding with a description of the OLYMPUS experiment and the analysis of the experiment's data,
it is useful to briefly review the theoretical formalisms regarding the structure of the proton, methods
of probing this structure, and the discrepancy in proton structure measurements that OLYMPUS probed by searching
for two-photon exchange (TPE) in {$e^\pm p$} scattering. Complete reviews of proton form factors may be found in
References \cite{Perdrisat2007694} and \cite{0954-3899-34-7-S03}, while References \cite{Arrington:2011dn} and \cite{Carlson:2007sp} specifically
review the physics and experimental landscape of TPE.
\section{Fundamentals and Formalism}
Since Otto Stern's 1933 observation \cite{stern1,stern2} that the proton magnetic moment deviated significantly from the value expected for a point-like spin-$\frac{1}{2}$
particle, the nature of the proton's finite structure and the physics that gives rise
to it has formed a deep and rich field of study in nuclear and particle physics. While atomic physics-based experiments measuring the Lamb
shift in hydrogen provide precise measurements of the proton's charge radius \cite{Pohl2010,RevModPhys.80.633,PhysRevLett.84.5496,Sick200362},
lepton-proton scattering has provided the key means of more comprehensively examining the structure of the proton. In particular, elastic
lepton-proton scattering provides insight into the distribution of charge and magnetization within the proton, providing information about
the size of the proton and the interactions that govern its structure.
\subsection{Elastic Lepton-Proton Scattering}
\label{sec:escat}
First, it is useful to establish conventions for the description of elastic scattering events. For a given {$e^\pm p$} event in which a lepton elastically scatters from
a proton via the exchange of one or more photons (i.e., via the electromagnetic interaction), the kinematics are
completely described by the initial and final four-momenta of the lepton (defined to be $k$ and $k'$ respectively), those of the proton
($p$ and $p'$ respectively), and the electron and proton masses ($m$ and $M$ respectively) as labeled in Figure \ref{fig:born}. The four-momentum transfer from the electron to the proton is then:
\begin{equation}
q=p'-p=k-k'.
\end{equation}
For the kinematics relevant to OLYMPUS, where a beam of leptons strikes protons at rest, and using natural units in which $c=1$:
\begin{equation}
k = \left( E_\text{beam},0,0,\sqrt{E_\text{beam}^2-m^2} \right),
\end{equation}
\begin{equation}
p = \left( M,0,0,0 \right),
\end{equation}
where $E_\text{beam}$ is the beam energy (2.01 GeV for OLYMPUS) and the initial direction of the lepton is along $z$. The angle $\theta$ is used to represent
the polar angle relative to the beam axis ($z$ as described) of the post-scatter three-momentum of the lepton. Throughout this work, the \textit{lab reference frame} or \textit{lab frame}
will refer to this convention, in which the proton is initially at rest. For more information on the description of event kinematics as they pertain
to the conventions used for OLYMPUS, see Section \ref{sec:conv}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.7\textwidth]{figures/born_scat.eps}}
\caption[Feynman diagram of the first order {$e^\pm p$} scattering process]{Feynman diagram representing the first order contribution to the process
of elastic {$e^\pm p$} scattering, i.e., the Born or single photon exchange approximation \cite{Born1926}. As in the text, $k$ and $k'$ represent the incoming and
outgoing four-momenta of the lepton and $p$ and $p'$ those of the proton. The four-momentum transferred to the proton is $q$. The circle at the proton
vertex represents the complex interaction of the proton with the exchanged photon, which is a function of its internal structure and interactions.}
\label{fig:born}
\end{figure}
While the variables in the previous paragraph describe the kinematics of the interaction, it is often more convenient to use Lorentz invariant variables
that allow direct comparison between experiments conducted at different beam energies. Of particular utility are the square of the four-momentum transfer, $q^2 = q\cdot q$,
and the variable $\epsilon$, which is related to the scattering angle of the electron $\theta$ in the lab frame. For elastic scattering, note that $q^2<0$ and so it
is convenient to define:
\begin{equation}
Q^2 = -q^2 >0.
\end{equation}
The variable $\epsilon$ may be formally defined most easily by first defining two intermediate variables\footnote{The variable $\epsilon$ has a physical
interpretation as the relative flux of longitudinally polarized virtual photons. The derivation of this interpretation is well beyond the scope of this work, but is detailed
in Section 2.4 of Reference \cite{wtf}.}:
\begin{equation}
\tau = \frac{Q^2}{4M^2},
\end{equation}
\begin{equation}
\nu = \frac{k\cdot p}{M^2}-\tau.
\end{equation}
Then,
\begin{equation}
\epsilon = \frac{\nu^2-\tau\left(1+\tau\right)}{\nu^2+\tau\left(1+\tau\right)}.
\end{equation}
While this appears to be a cumbersome variable, it has a straightforward interpretation in terms of the lab frame scattering angle of the lepton $\theta$:
\begin{equation}
\epsilon = \left[1+2\left(1+\tau\right)\tan^2\left(\frac{\theta}{2}\right) \right]^{-1}.
\label{eq:eps}
\end{equation}
Thus, $\epsilon$ varies from one for very forward scattering in the lab frame ($\theta=0^\circ$) to zero for very backward scattering ($\theta=180^\circ$).
To interpret $Q^2$ in the lab frame, it is useful to neglect the electron mass ($m\ll M,E_\text{beam}$), which results in:
\begin{equation}
Q^2 = 4E_\text{beam}E'\sin^2\left(\frac{\theta}{2} \right),
\end{equation}
where $E'$ is the outgoing lepton energy in the lab frame.
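As a simple numerical illustration of these relations (a hedged sketch, not part of the OLYMPUS analysis software), the following Python snippet computes $E'$, $Q^2$, $\tau$, and $\epsilon$ for a given lab-frame scattering angle at the OLYMPUS beam energy, neglecting the lepton mass as above; the outgoing energy follows from two-body elastic kinematics.
\begin{verbatim}
import math

M = 0.938272       # proton mass in GeV
E_BEAM = 2.01      # OLYMPUS beam energy in GeV

def elastic_kinematics(theta_deg, e_beam=E_BEAM):
    """Return (E', Q^2, tau, epsilon) for elastic lepton-proton
    scattering, neglecting the lepton mass."""
    theta = math.radians(theta_deg)
    # Outgoing lepton energy from two-body elastic kinematics
    e_prime = e_beam / (1.0 + (2.0 * e_beam / M) * math.sin(theta / 2.0) ** 2)
    q2 = 4.0 * e_beam * e_prime * math.sin(theta / 2.0) ** 2
    tau = q2 / (4.0 * M ** 2)
    eps = 1.0 / (1.0 + 2.0 * (1.0 + tau) * math.tan(theta / 2.0) ** 2)
    return e_prime, q2, tau, eps

# Example: a 40 degree scattering angle at the OLYMPUS beam energy
print(elastic_kinematics(40.0))
\end{verbatim}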
Having established conventions for the description of elastic lepton-proton scattering, the question of the cross section for the process and the insight
it offers regarding the structure of the proton may be addressed. Since the weak interaction is suppressed by the masses of the $W^\pm$ and $Z$ bosons
($M_{W^\pm,Z}\sim \mathcal{O}(100\:\text{GeV})$), it is negligible relative to the electromagnetic interaction (in terms of its contribution to the overall
interaction cross section). Additionally, since each photon-fermion interaction vertex in a quantum electrodynamics (QED) Feynman diagram carries a factor of
$\sqrt{\alpha} \approx \sqrt{\frac{1}{137}}$ to the contribution of that diagram to the matrix element for a given interaction (and thus a factor of $\alpha$ to
the cross section), often the approximation is made that only the lowest-order allowed diagram (that with the fewest vertices) is used to compute the matrix element for an interaction. In the case
of elastic {$e^\pm p$} scattering, the lowest-order approximation is single photon exchange with no additional initial or final state radiation, as shown in Figure \ref{fig:born}.
This is known as the Born approximation \cite{Born1926}.
To construct the most general cross section for lepton-proton elastic scattering, first consider the case of a light lepton (spin-$\frac{1}{2}$) scattering from a much more massive, spinless
target (e.g., a spin-0 nucleus or a ``spinless proton''). The only first-order QED diagram that contributes to the process is a diagram akin to that shown in Figure \ref{fig:born}.
Such a process is known as Mott scattering \cite{Mott425,Mott658}, and the cross section is straightforward to compute from the rules of QED \cite{peskin,griffiths}:
\begin{equation}
\sigma_\text{Mott} = \frac{\alpha^2 E' \cos^2\left(\frac{\theta}{2}\right)}{4E^3\sin^4\left(\frac{\theta}{2}\right)}.
\end{equation}
The problem of lepton-proton scattering then effectively reduces to properly adjusting the treatment of the proton to account for both its structure and spin.
\subsection{Elastic Form Factors}
To account for the effect of the proton's structure and spin, first consider the general invariant matrix element for the Feynman diagram in
Figure \ref{fig:born} (using the notation and conventions of Reference \cite{Arrington:2011dn}):
\begin{equation}
\mathcal{M}_\gamma = -\frac{e^2}{q^2}j_{\gamma\mu}J_\gamma^\mu,
\end{equation}
where $e$ is the electron charge. The current $j_{\gamma\mu}$ is that of the lepton, and is thus the standard current of a Dirac fermion:
\begin{equation}
j_{\gamma\mu} = \overline{u}_e\left(k'\right)\gamma_\mu u_e\left(k\right),
\end{equation}
where $u_e$ represents the lepton's Dirac spinor. The proton's current, $J_\gamma^\mu$, is more complicated. As first shown by Foldy in 1952 \cite{PhysRev.87.688},
the most general current for a spin-$\frac{1}{2}$ particle in QED that satisfies current conservation and Lorentz invariance may be written as:
\begin{equation}
J_\gamma^\mu = \overline{u}_p\left(p'\right)\left( \gamma^\mu F_1(Q^2) + \frac{i\sigma^{\mu\nu}q_\nu}{2M} F_2(Q^2) \right) u_p\left(p\right),
\end{equation}
where $u_p$ is the proton spinor and $F_1(Q^2)$ and $F_2(Q^2)$ are respectively Dirac and Pauli form factors for elastic scattering from the proton. In the Born
approximation, these are functions of $Q^2$ alone (making them Lorentz invariant) and they encode the internal behavior of the proton to all orders in QCD.
When discussing the structure of the proton it is most useful to recast the form factors as the following linear combinations of the Dirac and
Pauli form factors $F_1$ and $F_2$:
\begin{equation}
G_E(Q^2) = F_1(Q^2)-\tau F_2(Q^2),
\end{equation}
\begin{equation}
G_M(Q^2) = F_1(Q^2)+F_2(Q^2),
\end{equation}
which are respectively the electric and magnetic elastic form factors, and were defined by Hand, Miller, and Wilson \cite{ff5}. These form factors
are normalized such that $G_E(0) = 1$ and $G_M(0) = \mu_p = 2.793$, i.e., the static values of the proton's charge and magnetic moment in units where $e=c=\hbar=1$. Parameterizing
the form factors in this way leads to a simpler form for the final computed differential cross section in the Born approximation, known as the Rosenbluth
formula \cite{PhysRev.79.615}\footnote{When first computed by Rosenbluth in 1950, the formula did not take the form shown in Equation \ref{eq:Ros}, but the modern
representation of the formula is presented here.}:
\begin{equation}
\left( \td{\sigma}{\Omega}\right)_\text{Born} = \frac{\sigma_\text{Mott}}{\epsilon(1+\tau)}\left[ \epsilon G_E^2(Q^2) + \tau G_M^2(Q^2) \right].
\label{eq:Ros}
\end{equation}
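To make the use of Equation \ref{eq:Ros} concrete, the following Python sketch evaluates the Born cross section in natural units for given values of $G_E$ and $G_M$; the form factor values in the example call are placeholders for illustration only, not measured results.
\begin{verbatim}
import math

M = 0.938272               # proton mass, GeV
ALPHA = 1.0 / 137.035999   # fine structure constant

def born_cross_section(e_beam, theta, g_e, g_m):
    """Rosenbluth (Born) differential cross section in GeV^-2 per
    steradian for elastic lepton-proton scattering."""
    e_prime = e_beam / (1.0 + (2.0 * e_beam / M) * math.sin(theta / 2.0) ** 2)
    q2 = 4.0 * e_beam * e_prime * math.sin(theta / 2.0) ** 2
    tau = q2 / (4.0 * M ** 2)
    eps = 1.0 / (1.0 + 2.0 * (1.0 + tau) * math.tan(theta / 2.0) ** 2)
    # Mott cross section for a structureless, spinless target (with recoil)
    mott = (ALPHA ** 2 * e_prime * math.cos(theta / 2.0) ** 2 /
            (4.0 * e_beam ** 3 * math.sin(theta / 2.0) ** 4))
    return mott / (eps * (1.0 + tau)) * (eps * g_e ** 2 + tau * g_m ** 2)

# Placeholder form factor values, for illustration only
print(born_cross_section(2.01, math.radians(50.0), g_e=0.3, g_m=0.8))
\end{verbatim}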
\subsection{Physical Interpretation of the Elastic Form Factors}
In addition to simplifying the form of the Rosenbluth formula for the {$e^\pm p$} elastic cross section, the electromagnetic form factors $G_E$ and $G_M$ additionally offer some insight into
more intuitive properties of the proton's structure, namely the distribution of charge and magnetization within the proton. As first noted by Sachs in 1962 \cite{PhysRev.126.2256},
in a reference frame in which both the lepton and proton exchange three-momentum but no energy (and thus the lepton rebounds in the opposite direction from its initial trajectory with the same
energy)\footnote{This reference frame is known as the Breit frame or ``brick wall'' frame.}
the form factors $G_E$ and $G_M$ are the Fourier transforms of the charge and magnetization distributions of the proton. Note, however, that the velocity of this frame relative to the lab
frame varies as a function of $Q^2$ and that the transformation of spatial distributions between these frames is generally very complex. While model-dependent methods of conducting such
transformations have been established \cite{PhysRevC.66.065203}, the procedure is generally quite difficult and caution should be exercised when interpreting the direct physical meaning of $G_E$ and
$G_M$.
\section{Determination of the Proton Elastic Form Factors}
While a number of theoretical models exist for computing the proton elastic form factors (see Section 4 of Reference \cite{Perdrisat2007694} for an overview), none have been successful
in providing a complete and predictive description of the proton's elastic electromagnetic interactions due to the complexity of the underlying interactions. Thus, the determination
of the values of $G_E$ and $G_M$ has been an active experimental question since they were first formulated. The first measurements of quantities related to the form factors were
conducted by Hofstadter and McAllister at Stanford in the mid-1950s, who measured a single form-factor-like quantity (effectively a single factor modifying the Mott cross section, i.e., the combination of
the effects of two form factors) \cite{PhysRev.98.217,PhysRev.102.851,RevModPhys.28.214}. As theory progressed, it was realized that experiments were needed that separated the effects
of the two form factors. The technique of Rosenbluth separation was developed, allowing the independent extraction of $G_E^2$ and $G_M^2$. Measurements using this technique
began in the late 1960s \cite{ff5,ff6,ff14,ff1,ff2,ff3,ff4,ff12,ff7} and continued through the turn of the century \cite{ff8,ff9,ff10,ff11,ff15}. In the 1990s, the advent of polarized beams
and targets provided a new method of measuring the ratio $G_E/G_M$, used in experiments conducted at MIT-Bates and Jefferson Lab \cite{pol1,pol2,pol3,pol4,pol5,pol6,pol7,pol8,pol9,pol10,pol11,pol12}.
The two techniques are briefly
summarized here, while the results from each method are discussed in Section \ref{sec:discrep}.
\subsection{Rosenbluth Separation}
\label{sec:rossep}
The Rosenbluth separation technique takes advantage of the form of the Rosenbluth formula (Equation \ref{eq:Ros}) for the {$e^\pm p$} elastic cross section and the dependence of the form factors
on $Q^2$ alone. While the exact techniques used by various experiments differ slightly, Rosenbluth separation effectively amounts to rewriting Equation \ref{eq:Ros} in the form:
\begin{equation}
\left( \td{\sigma}{\Omega}\right)_\text{reduced} = \frac{\epsilon(1+\tau)}{\tau\sigma_\text{Mott}} \left( \td{\sigma}{\Omega}\right)_\text{exp} = G_M^2(Q^2) + \frac{\epsilon}{\tau} G_E^2(Q^2),
\label{eq:rsep}
\end{equation}
where $\left( \td{\sigma}{\Omega}\right)_\text{exp}$ is the experimentally measured value of the elastic {$e^\pm p$} cross section as a function of $\epsilon$ and $\tau$.
Since $\tau = \frac{Q^2}{4M^2}$ and $\epsilon = \left[1+2\left(1+\tau\right)\tan^2\left(\frac{\theta}{2}\right) \right]^{-1}$, measuring the cross section at fixed $Q^2$ over a variety of scattering
angles $\theta$ and plotting the values of the reduced cross section defined by Equation \ref{eq:rsep} as a function of $\epsilon$ results in a linear function. Fitting this function yields
$\frac{1}{\tau}G_E^2(Q^2) = \frac{4M^2}{Q^2}G_E^2(Q^2)$ as the slope and $G_M^2(Q^2)$ as the intercept. By conducting these measurements for a variety of $Q^2$ values, the form factors may
be mapped out. In most experiments the values of $Q^2$ and $\epsilon$ are selected by adjusting narrow acceptance single- and double-arm spectrometers and collecting
data in a number of spectrometer configurations \cite{grupen,fernow}. An example of the application of this technique from Reference \cite{ff8} is shown in Figure \ref{fig:rsep}. Rosenbluth
separation is the only technique currently available that allows independent measurement of the individual form factors.
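A minimal numerical sketch of such a separation is shown below, written in Python with purely illustrative reduced cross section values at a single fixed $Q^2$; a straight-line fit in $\epsilon$ returns $G_M^2$ as the intercept and $G_E^2/\tau$ as the slope, per Equation \ref{eq:rsep}.
\begin{verbatim}
import numpy as np

M = 0.938272                    # proton mass, GeV
Q2 = 2.5                        # GeV^2, fixed for this illustration
tau = Q2 / (4.0 * M ** 2)

# Illustrative (epsilon, reduced cross section) pairs; real input would be
# measured cross sections converted to the reduced form defined above
eps = np.array([0.2, 0.4, 0.6, 0.8])
reduced = np.array([0.60, 0.63, 0.66, 0.69])    # placeholder values

# Linear model: reduced = G_M^2 + (eps / tau) * G_E^2
slope, intercept = np.polyfit(eps, reduced, 1)
G_M_squared = intercept
G_E_squared = slope * tau

print("G_M^2 =", G_M_squared, " G_E^2 =", G_E_squared)
\end{verbatim}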
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.7\textwidth]{figures/adhivros.png}}
\caption[Example application of Rosenbluth separation]{Application of the Rosenbluth separation technique at eight values of $Q^2$. (Figure reproduced from
Reference \cite{ff8}.)}
\label{fig:rsep}
\end{figure}
\subsection{Polarization-Based Techniques}
The Rosenbluth formula (Equation \ref{eq:Ros}) and Rosenbluth separation assume unpolarized lepton beams and proton targets. As first observed by Akhiezer and Rekalo, the Rosenbluth
separation technique suffers from the dominance of the $\frac{\tau}{\epsilon}G^2_M$ term at high $Q^2$ (subjecting the extraction of $G_E$ to large uncertainty) \cite{ak1}. They, and others,
proposed methods involving experiments with longitudinally polarized electron beams on unpolarized proton targets and measuring the polarization of the recoil proton (i.e., the \textit{polarization
transfer} from the lepton to the proton) \cite{ak2,PhysRevC.23.363}. Additionally, techniques were proposed using unpolarized beams on polarized targets \cite{RevModPhys.41.236}.
Considering the case of a polarized beam on an unpolarized target to illustrate the method, the cross section for elastic $\vec{e}p\rightarrow e\vec{p}$ scattering in the Born
approximation may be calculated from the rules of QED in a similar fashion to the spin-averaged unpolarized case \cite{ak2,PhysRevC.23.363}. The differential cross sections
for the outgoing proton spin to be aligned longitudinally or transversely are, respectively:
\begin{equation}
\td{\sigma^{(L)}}{\Omega} = h\sigma_\text{Mott} \frac{E+E'}{M}\sqrt{\frac{\tau}{1+\tau}} \tan^2\left(\frac{\theta}{2}\right) G_M^2,
\end{equation}
\begin{equation}
\td{\sigma^{(T)}}{\Omega} = 2h\sigma_\text{Mott}\sqrt{\frac{\tau}{1+\tau}} \tan\left(\frac{\theta}{2}\right) G_E G_M,
\end{equation}
where $h$ is the electron helicity. The average longitudinal and transverse polarizations $P_L$ and $P_T$ of the outgoing protons are then proportional to their respective
outgoing spin cross sections. Taking the ratio of the expressions for the cross sections (and thus polarizations), the unknown electron helicity cancels to yield:
\begin{equation}
\frac{G_E}{G_M} = -\frac{E_\text{beam}+E'}{2M}\tan\left( \frac{\theta}{2} \right) \frac{P_T}{P_L}.
\end{equation}
This technique provides a robust measurement of the $\frac{\mu_pG_E}{G_M}$ ratio, aided by the cancellation of various systematic uncertainties due to the simultaneous measurement of the two polarizations with the same apparatus; when paired with Rosenbluth measurements of $G_M$, it provides the most sensitive method of extracting $G_E$ at high $Q^2$. A review
of the methods used to extract $\frac{\mu_pG_E}{G_M}$ from polarization experiments may be found in Section 3.2 of Reference \cite{Perdrisat2007694}.
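As a numerical cross-check of the expressions above, the short Python sketch below applies the ratio formula to an illustrative polarization measurement; the kinematics and the value of $P_T/P_L$ are placeholders rather than data from any of the cited experiments.
\begin{verbatim}
import math

M = 0.938272    # proton mass, GeV

def ge_over_gm(e_beam, e_prime, theta, pt_over_pl):
    """G_E/G_M from the measured transverse-to-longitudinal recoil
    polarization ratio, in the Born approximation."""
    return -(e_beam + e_prime) / (2.0 * M) * math.tan(theta / 2.0) * pt_over_pl

# Placeholder kinematics and polarization ratio, for illustration only
e_beam, theta = 2.01, math.radians(45.0)
e_prime = e_beam / (1.0 + (2.0 * e_beam / M) * math.sin(theta / 2.0) ** 2)
print(ge_over_gm(e_beam, e_prime, theta, pt_over_pl=-0.5))
\end{verbatim}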
\section{Existing Data and the Proton Form Factor Discrepancy}
\label{sec:discrep}
As noted previously, numerous experiments have measured the proton elastic form factors (or the ratio $\frac{\mu_pG_E}{G_M}$ in the case of the polarization-based measurements) over a range
of $0.05$ GeV$^2\lesssim Q^2 \lesssim 10$ GeV$^2$. A selection of Rosenbluth separation data for $G_E$ and $G_M$ is shown in Figures \ref{fig:ge} and \ref{fig:gm} respectively, reproduced
from Reference \cite{Perdrisat2007694}. The form factors are typically displayed, as in the figures, normalized to the dipole form factor:
\begin{equation}
G_D = \frac{1}{\left(1+\frac{Q^2}{0.71\:\text{GeV}^2}\right)^2},
\label{eq:dipff}
\end{equation}
which corresponds (in the Breit frame) to an exponentially falling charge or magnetization distribution. As can be seen in the figures, the dipole model roughly describes the form factor
data obtained via Rosenbluth separation below a few GeV$^2$, especially for $G_E$, while the data deviate from the dipole model at higher $Q^2$. As shown in Figure \ref{fig:disc}, the ratio
$\frac{\mu_pG_E}{G_M}$ measured using Rosenbluth separation is $\sim$1 and reasonably flat.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.7\textwidth]{figures/GE.png}}
\caption[Selection of Rosenbluth separation measurements of $G_E$]{Selection of experimental measurements of $G_E(Q^2)$ using Rosenbluth separation, normalized to the dipole form factor
$G_D$ (Equation \ref{eq:dipff}) and proton magnetic moment $\mu_p$.
(Original data: \cite{ff5,ff3,ff1,ff2,ff4,ff12,ff7,ff13,ff8,ff9,ff10,ff11}) (Figure reproduced from Reference \cite{Perdrisat2007694}).}
\label{fig:ge}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.7\textwidth]{figures/GM.png}}
\caption[Selection of Rosenbluth separation measurements of $G_M$]{Selection of experimental measurements of $G_M(Q^2)$ using Rosenbluth separation, normalized to the dipole form factor
$G_D$ (Equation \ref{eq:dipff}).
(Original data: \cite{ff5,ff6,ff14,ff3,ff1,ff2,ff4,ff12,ff7,ff15,ff8,ff9,ff10,ff11}) (Figure reproduced from Reference \cite{Perdrisat2007694}).}
\label{fig:gm}
\end{figure}
Data from polarization-based measurements, however, provide a starkly different measurement of $\frac{\mu_pG_E}{G_M}$. As Figure \ref{fig:disc} shows, the data from such experiments
cluster around a line that decreases as a function of $Q^2$ with a slope of approximately -0.14 GeV$^{-2}$. At $Q^2\gtrsim 5$ GeV$^2$, global fits to Rosenbluth separation data and
polarization-based data show more than a factor of two difference in $\frac{\mu_pG_E}{G_M}$. This is known as the proton form factor ratio discrepancy, and has been an active field
of research for both theorists and experimentalists.
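The size of the discrepancy can be illustrated numerically using only the approximate trends quoted above; the Python sketch below compares a flat Rosenbluth-type ratio of $\sim$1 with the linear polarization-transfer trend, with both treated as rough parameterizations rather than fits to any particular data set.
\begin{verbatim}
# Rough comparison of the two determinations of mu_p G_E / G_M
def ratio_polarization(q2, slope=-0.14):
    """Approximate linear trend of the polarization-transfer data
    (slope in GeV^-2)."""
    return 1.0 + slope * q2

def ratio_rosenbluth(q2):
    """Approximately flat trend of the Rosenbluth-separation data."""
    return 1.0

for q2 in (1.0, 3.0, 5.0):   # GeV^2
    print(q2, ratio_rosenbluth(q2) / ratio_polarization(q2))
    # The two determinations differ by more than a factor of two
    # for Q^2 of roughly 5 GeV^2 and above.
\end{verbatim}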
\section{Possible Causes of the Discrepancy and the Two-Photon Exchange Hypothesis}
\label{sec:posscause}
Given the significant discrepancy between the different methods of measuring the proton form factor ratio (and the consistency of measurements of the same type) discovered
in the 1990s, efforts were made both to verify the discrepancy with modern measurements and to explore possible causes of the discrepancy. Rosenbluth separation experiments
conducted in 2004-2005 (References \cite{ff10} and \cite{ff11}) and reanalysis of the existing data (References \cite{PhysRevC.68.034325} and \cite{PhysRevC.69.022201}) confirmed the discrepancy at values of $Q^2$ up to $\sim$8 GeV$^2$,
further increasing the call for explanations of the discrepancy
from the standpoint of theory and of understanding the physics implicit in the analysis methods used for each technique. In particular, the \textit{radiative corrections} applied
for each measurement were examined carefully.
In general, the term radiative corrections encompasses any shift applied to a raw measured cross section, form factor, etc. to account for the effect of making
the Born approximation. In practice, for {$e^\pm p$} scattering, this amounts to considering changes to the cross section that occur due to contributions from higher-order
Feynman QED diagrams that are experimentally indistinguishable from elastic scattering by single photon exchange, like those shown in Figures \ref{fig:hod} and \ref{fig:ihod}.
Note that the event types shown in Figure \ref{fig:ihod} may be indistinguishable in an experiment from an event with no externally emitted photon, due to the photon having very small
energy such that the deviation from elastic kinematics is smaller than the resolution of the detector. Past
experiments (prior to 2000) accounted for the effects of diagrams up to order $\alpha^2$ using the prescription of Mo and Tsai \cite{PhysRev.122.1898,MoRevModPhys.41.205}.
This prescription, however, did not account for the structure of the nucleon and made a number of other approximations. The method for applying radiative corrections to
elastic {$e^\pm p$} data was improved upon by Maximon and Tjon \cite{MaximonPhysRevC.62.054320}, who included the effect of the proton form factors in their calculations and
avoided several of the approximations made by Mo and Tsai, which altered the size of the radiative correction (itself typically a correction of $\mathcal{O}(10\%)$ to measured
cross sections) by several percent, an effect not large enough to explain the discrepancy \cite{PhysRevC.68.034325}. Radiative corrections are also critical to the
OLYMPUS result and the approach used for OLYMPUS is described in Section \ref{sec:radgen}. A detailed discussion of the OLYMPUS radiative corrections may be found
in References \cite{schmidt} and \cite{russell}.
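The reason a few-percent change in the radiative correction cannot resolve the discrepancy can be seen from the schematic way such corrections are applied, sketched below in Python; the values of $\delta$ are generic placeholders of the quoted $\mathcal{O}(10\%)$ size, not numbers from either prescription.
\begin{verbatim}
# Schematic application of a radiative correction:
#   sigma_measured = sigma_Born * (1 + delta)
#   =>  sigma_Born = sigma_measured / (1 + delta)
sigma_measured = 1.000        # arbitrary units
delta_old = 0.10              # illustrative O(10%) correction
delta_new = 0.12              # illustrative correction shifted by a few percent

born_old = sigma_measured / (1.0 + delta_old)
born_new = sigma_measured / (1.0 + delta_new)

# The extracted Born cross section shifts by only ~2%, far too little
# to account for the factor-of-two form factor ratio discrepancy.
print((born_old - born_new) / born_old)
\end{verbatim}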
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/a2diags.png}}
\caption[QED diagrams of up to order $\alpha^2$ that contribute to elastic {$e^\pm p$} scattering]{QED diagrams up to order $\alpha^2$ that contribute to elastic {$e^\pm p$} scattering.
(Figure reproduced from Reference \cite{MaximonPhysRevC.62.054320}.)}
\label{fig:hod}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.8\textwidth]{figures/inelasdiag.png}}
\caption[QED diagrams involving a soft external photon that contribute to elastic {$e^\pm p$} scattering]{QED diagrams involving a soft external photon that contribute to elastic {$e^\pm p$} scattering.
(Figure reproduced from Reference \cite{MaximonPhysRevC.62.054320}.)}
\label{fig:ihod}
\end{figure}
In these radiative corrections calculations, however, effects due to the two-photon exchange (TPE) diagrams in which both photons carry comparable energy (hard two photon exchange)
were not considered (the diagrams marked ``box'' and ``crossed-box'' in Figure \ref{fig:hod}). In particular, it is difficult to account for the behavior of the proton
between the proton-photon vertices, since in that leg of the diagram the proton may be off shell or possibly even in an excited state such as a $\Delta^+$. While early
models of two-photon exchange suggested that the contribution of TPE was considerably smaller than 1\% of the cross section, the validity of these models was limited
to values of $Q^2$ below $\sim$1 GeV$^2$ where the probability of hard TPE is smaller \cite{PhysRev.74.1759,PhysRev.102.537,PhysRev.106.561,PhysRev.113.741,1969190,PhysRev.180.1541,PhysRev.184.1860}.
\subsection{Theoretical Calculations of TPE and their Effect on the $\frac{\mu_pG_E}{G_M}$ Ratio}
Given that radiative corrections due to the TPE diagrams were the least understood among the corrections of order up to $\alpha^2$, renewed efforts were
made to better model this effect and examine the effect of such models' predictions on the form factor ratio
\cite{Blunden:2003sp,Guichon:2003qm,REKALO2004322,Chen:2004tw,Afanasev:2005mp,Blunden:2005ew, Kondratyuk:2005kk,Borisyuk:2006fh,TomasiGustafsson:2009pw}.
Descriptions of the details of these models are beyond the scope of this work, but an overview may be found in References \cite{Arrington:2011dn} and \cite{Carlson:2007sp}.
As an example, the effect of the model developed in Reference \cite{Blunden:2005ew} on $\frac{\mu_pG_E}{G_M}$ is shown in Figure \ref{fig:blundpred}, and as can be seen
such predictions tend to explain at least some of the discrepancy between the Rosenbluth separation and polarization-based measurements.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/blundcor.png}}
\caption[Example of a theoretical model for the effect of TPE on Rosenbluth separation $\frac{\mu_pG_E}{G_M}$ measurements]{Example of a theoretical model for the
effect of TPE on Rosenbluth separation $\frac{\mu_pG_E}{G_M}$ measurements, showing the shift of the Rosenbluth separation points
(``LT'') towards the polarization transfer points (``PT'') for two separate assumptions regarding the amount of TPE that occurs (filled markers). Note that the TPE model accounts for a significant fraction
of the form factor discrepancy. (Figure reproduced from Reference \cite{Blunden:2005ew}.)}
\label{fig:blundpred}
\end{figure}
Since QCD does not provide an unambiguous, calculable
method for modeling the proton, such predictions vary significantly; models range from accounting for none of the discrepancy to nearly all of it, as can be seen in the wide spread
of model predictions for {$\sigma_{e^+p}/\sigma_{e^-p}$} due to TPE in Figure \ref{fig:projections}. Thus, there exists a strong demand for experimental measurements of the contributions of TPE to
{$e^\pm p$} scattering.
\subsection{Experimental Signature of TPE}
\label{sec:estpe}
Given the hypothesis of TPE and an established need for experimental measurement of its contribution to {$e^\pm p$} elastic scattering, it was quickly realized that a comparison of {$e^- p$} and {$e^+ p$} elastic scattering
cross sections provides direct experimental access to the size of the TPE matrix elements $\mathcal{M}_{\gamma\gamma}$ corresponding to the box and crossed-box diagrams of Figure
\ref{fig:hod}. This can be approximately understood by considering the toy example of the calculation of the total squared matrix element in Figure \ref{fig:simpint}. Terms of the total
matrix element $\mathcal{M}$ that represent an interference of the one-photon and two-photon diagrams are proportional to $(\pm\alpha)^3$, where the $\pm$ is determined by the charge of the
lepton. Thus, the matrix element is shifted downward for electron scattering by these terms and upward for positron scattering, creating an asymmetry that could be measured
experimentally. Using $\mathcal{M}_\gamma$ to represent the single photon exchange matrix element (with positive sign, i.e., for {$e^+ p$} scattering) and $\mathcal{M}_{\gamma\gamma}$ for the TPE elements, the ratio
expressed in terms of an experimental measurement in Equation \ref{eq:rat} may be expressed in terms of the matrix elements as:
\begin{equation}
R_{2\gamma}\left(\epsilon,Q^2\right) = \frac{\sigma_{e^+}}{\sigma_{e^-}} \sim 1 + 4\alpha \frac{\mathcal{M}_{\gamma\gamma}}{\mathcal{M}_{\gamma}}.
\end{equation}
Thus, an experiment which can measure this cross section ratio at the relevant kinematics provides direct access to the value of the TPE contribution to the cross section,
which would put valuable constraints on theoretical models and offer insight into what fraction of the form factor ratio discrepancy can be explained by TPE. An important
note is that in addition to the Born/TPE interference contribution differing in sign between {$e^- p$} and {$e^+ p$} scattering, the interference between the Born diagram and the externally radiated photon (bremsstrahlung) diagrams (Figure \ref{fig:ihod}) also changes sign with the lepton charge, and thus careful correction for these effects is required in the analysis of any {$\sigma_{e^+p}/\sigma_{e^-p}$} experiment attempting to extract
the TPE contribution.
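The sign structure described above can be made explicit with the toy calculation below, in which the amplitudes are represented by real numbers; the relative size of the two-photon amplitude is a placeholder chosen only to show the scale of the resulting asymmetry.
\begin{verbatim}
# Toy illustration of the charge-odd interference between one- and
# two-photon exchange, treating the amplitudes as real numbers.
m_1gamma = 1.0      # Born (single photon exchange) amplitude, arbitrary units
m_2gamma = 0.005    # placeholder hard-TPE amplitude, ~0.5% of the Born term

def cross_section(lepton_charge):
    """|M_1gamma + sign * M_2gamma|^2; the interference term flips
    sign with the lepton charge."""
    return (m_1gamma + lepton_charge * m_2gamma) ** 2

r_2gamma = cross_section(+1) / cross_section(-1)   # sigma(e+ p) / sigma(e- p)
print(r_2gamma)   # ~ 1 + 4 * m_2gamma / m_1gamma for a small TPE amplitude
\end{verbatim}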
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/TPE.png}}
\caption[Contribution of TPE diagrams to the total {$e^\pm p$} matrix element]{A simplified visual representation of the contribution of the TPE Feynman diagrams to the total
matrix element for {$e^\pm p$} scattering. Interference terms between the one-photon exchange diagram and two-photon exchange diagrams contribute terms proportional to $(\pm\alpha)^3$ due
to the lepton vertices, changing sign for {$e^+ p$} and {$e^- p$} scattering.}
\label{fig:simpint}
\end{figure}
\section{Experiments Measuring {$\sigma_{e^+p}/\sigma_{e^-p}$} and Physics Goals of the OLYMPUS Experiment}
Given the spread in theoretical models for TPE and their effect on the value of {$\sigma_{e^+p}/\sigma_{e^-p}$} (Figure \ref{fig:projections}), experiments measuring {$\sigma_{e^+p}/\sigma_{e^-p}$} with an accuracy of 1\% or better
at values of $Q^2\gtrsim\mathcal{O}(1\:\text{GeV}^2)$ are required to differentiate the models, with preference towards higher $Q^2$ where the form factor discrepancy is largest.
Existing data on this quantity from the 1960s (shown in Figure \ref{fig:projections}) are far less precise than this goal, especially at higher values of $Q^2$
\cite{Yount:1962aa,Browman:1965zz,Anderson:1966zzf,Cassiday:1967aa,Bouquet:1968aa,Mar:1968qd,Bartel:1967aa}. Thus, three modern experiments have sought to measure {$\sigma_{e^+p}/\sigma_{e^-p}$} via complementary
experimental setups:
\begin{enumerate}
\item OLYMPUS at DESY, Hamburg, Germany \cite{Milner:2014},
\item CLAS at Jefferson Lab, Newport News, Virginia \cite{PhysRevLett.114.062003,ass}, and
\item VEPP-3 at Novosibirsk, Russia \cite{vepp3PhysRevLett.114.062005}.
\end{enumerate}
The relative kinematic reaches of these experiments are shown in Figure \ref{fig:reach}. Details on these experiments may be found in the cited references, but it is worthwhile
to note the fundamentally different approaches of the experiments. The VEPP-3 experiment operated at lower energies than OLYMPUS with a non-magnetic spectrometer (to avoid differences in {$e^- p$} and {$e^+ p$} acceptance), while the CLAS experiment
utilized a magnetic spectrometer with a simultaneous $e^+/e^-$ beam produced by pair production from photons.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/reach_fixedQ2.pdf}}
\caption[Kinematic reaches of the three TPE experiments]{Kinematic reaches of the three TPE experiments. Note that since the CLAS experiment did not have fixed beam energy, the
elastic events are spread across the $\left(\epsilon,Q^2\right)$ bins shown, dominated by the higher cross section in the bottom right of the bins. (Figure reproduced from Reference \cite{schmidt}.)}
\label{fig:reach}
\end{figure}
Results from the latter two experiments preceded OLYMPUS results, and are shown in Figure \ref{fig:crap}. As can be seen in the figure, these results are limited in statistics (requiring broad
binning) and, while suggestive of an upward trend in {$\sigma_{e^+p}/\sigma_{e^-p}$} with decreasing $\epsilon$, do not provide a definitive trend for the TPE contribution.
Given that OLYMPUS collected much higher statistics than either CLAS or VEPP-3 and reached higher $Q^2$, it is expected to provide a considerably more definitive ratio result.
It is the goal of OLYMPUS to measure {$\sigma_{e^+p}/\sigma_{e^-p}$} to a total (statistical+systematic)
uncertainty of better than 1\% across the full kinematic range of the experiment, as shown by the red projected bins of Figure \ref{fig:crap}. The remainder of this work discusses the OLYMPUS
analysis and results in detail.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/axelres.png}}
\caption[Results from VEPP-3 and CLAS with the OLYMPUS projections]{Results on {$\sigma_{e^+p}/\sigma_{e^-p}$} from CLAS \cite{PhysRevLett.114.062003} and VEPP-3 \cite{vepp3PhysRevLett.114.062005}, along with
projected uncertainties for OLYMPUS (limiting any bin to at most 1\% statistical uncertainty). Note that the different experiments cover different $Q^2$ ranges for the $\epsilon$ range shown, and so
are not directly comparable on this plot, but the data are chosen so as to provide a rough comparison. (Figure reproduced from Reference \cite{schmidt}.)}
\label{fig:crap}
\end{figure}
\chapter{The OLYMPUS Experiment}
\label{Chap3}
To construct an experimental setup suitable for the physics goals of OLYMPUS, a large
acceptance spectrometer was assembled, capable of exclusively reconstructing $e^\pm p$
elastic scattering events while precisely measuring the recorded luminosity for
each lepton species. To achieve this, the spectrometer consisted of the principal elements
of the Bates Large Acceptance Spectrometer Toroid (BLAST) \cite{Alarcon20001111c,Hasell:2009zza} combined
with several newly constructed luminosity monitors, operated at the DORIS III electron/positron storage
ring at the Deutsches Elektronen-Synchrotron (DESY) in Hamburg, Germany \cite{DORIStab,DORISrep}. This spectrometer
surrounded a gaseous hydrogen target, internal to the DORIS beamline. During the experiment, the target
was exposed to electron and positron beams from DORIS, with the beam
species changed approximately daily. Great care was taken to ensure that running conditions for the two different
lepton species were as identical as possible to avoid unwanted systematic differences between the two modes.
Data for the experiment were taken in two periods: January 20, 2012--February 27, 2012 (Run I) and October 24, 2012--January 2, 2013 (Run II). Approximately
4.5 fb$^{-1}$ of integrated luminosity was acquired, including various calibration runs that are not included
in the sample of data used to construct the {$\sigma_{e^+p}/\sigma_{e^-p}$} result. The vast majority of these data were collected in the second run, after several
improvements to the trigger, target system, and other detector elements were made between runs.
This chapter describes the important details of the experimental setup used to measure the elastic $\sigma_{e^+}/\sigma_{e^-}$ ratio, including both
the detectors used for the reconstruction of {$e^\pm p$} events over a large range of kinematics and the luminosity monitors. Also discussed
are the essential elements of the data acquisition system (DAQ) and the operation of the experiment, especially as they pertain to
the analysis of the detector data. Much of this chapter summarizes the complete published descriptions of the experiment in
References \cite{Milner:2014} and \cite{tdr}. Additional references that discuss certain elements of the system in greater detail
are cited in the relevant sections.
\section{Conventions for the Description of the Experiment}
\label{sec:conv}
Throughout this work, the ``OLYMPUS global'' coordinate system will be used to describe the positions, orientations, and trajectories
of various elements of the experiment, in addition to various ``local'' coordinate systems defined for specific detectors. Figure
\ref{fig:schem} shows the coordinate axes of the global system relative to the detector setup. The origin of this system
is defined to be the center of the target cell and serves as the reference for positions throughout the detector. The beam traversed the experimental setup
approximately along the $+z$-axis through the origin, up to measured beam offsets on the order of a millimeter and angles relative to the
axis on the order of 0.5 mrad. Thus, ``upstream'' refers to the $-z$-direction and ``downstream'' to the $+z$-direction, corresponding to
the movement of beam particles. In describing the layout of the experiment, the positive $x$-direction is referred to as ``beam left'' and
the negative $x$-direction as ``beam right'', or more succinctly, the left and right sides of the detector. Unless otherwise noted, references
to coordinate axes in the text and figures refer to this system.
Additionally, standard spherical
coordinate angles (the polar angle $\theta$ and azimuthal angle $\phi$) are used when convenient, especially when describing the kinematics
of particle trajectories as described in Section \ref{sec:escat}. Note that $\theta$ and $\phi$ for a tracked or simulated particle refer to the angles relative to a coordinate
system aligned with the global system but centered on the scattering vertex for the event in question.
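For concreteness, these angles are simply the standard spherical angles of a track's momentum direction evaluated in a frame parallel to the global axes, as in the brief Python sketch below; the function name and the choice of the $+x$ (beam left) axis as the zero of $\phi$ are consistent with the text's description of the axes and are not taken from the OLYMPUS software.
\begin{verbatim}
import math

def track_angles(px, py, pz):
    """Polar and azimuthal angles (in degrees) of a momentum vector in a
    frame aligned with the global axes but centered on the event vertex."""
    p = math.sqrt(px ** 2 + py ** 2 + pz ** 2)
    theta = math.degrees(math.acos(pz / p))    # polar angle from +z (beam direction)
    phi = math.degrees(math.atan2(py, px))     # azimuth from +x (beam left)
    return theta, phi

# Example: a forward-going track on the beam-left side of the detector
print(track_angles(0.5, 0.05, 1.0))
\end{verbatim}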
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.1\textwidth]{figures/allonbuttophalf.pdf}}
\caption[Solid-model of the OLYMPUS detector]{Solid model of the OLYMPUS detector, with essential components, an approximate scale,
and the orientation of the global coordinate
system labeled. The origin of the global coordinate system is at the center of the target chamber. The top four ($+y$) toroid coils have been removed to
make other detector systems visible. Various components that are replicated on each side of the detector are not labeled to avoid redundancy.}
\label{fig:schem}
\end{figure}
\section{The DORIS III $e^+/e^-$ Storage Ring}
A stable storage ring source of GeV scale positron and electron beams was critical to the OLYMPUS experiment goals. While
several accelerators were initially assessed as candidates for the location of the experiment \cite{Hillert2006}, the Doppel-Ring-Speicher
\footnote{Doppel-Ring-Speicher translates to ``double-ring storage'' in English.} (DORIS) III
storage ring at the Deutsches Elektronen-Synchrotron, Hamburg, Germany\footnote{From this point forward, the accelerator will be referred to as ``DORIS''.}
was selected as the facility most capable of delivering both the
desired beam conditions for the experiment and the laboratory support structure for the construction, commissioning, and
execution of the experiment \cite{DORIStab,DORISrep}.
Historically, DORIS operated as an electron-positron collider, at an energy of 3.5 GeV per beam between 1974 and 1978 and at 5.0 GeV per beam beginning in 1978. Over the course of the 1980s, the ring was converted
to a synchrotron light source, becoming a full-time light source in 1993. A key physics discovery, the mixing of the neutral $B$-mesons,
was made at DORIS in 1987 by the ARGUS experiment \cite{ALBRECHT1987245}. OLYMPUS, which began installation at the former site of the ARGUS
experiment in 2010, took data in 2012 and, as the last experiment to run at the facility, witnessed its final beams on January 2, 2013. Following
OLYMPUS, the accelerator was dismantled \cite{DORIShist}.
\subsection{OLYMPUS at DORIS}
Since OLYMPUS was the last experiment to run at DORIS, it was possible to make a number of modifications to the accelerator
to provide for the needs of the experiment. Additionally, because the accelerator was operated as a synchrotron light source
between the two OLYMPUS data runs, several constraints were placed on the design of the experiment. In particular, the target
system was installed as a permanent part of the beamline, and as such the system was required to handle a variety of conditions.
The experiment was placed in the large straight section of the ring, as seen in Figure \ref{fig:dorisover}. Several
changes were made to the ring and experiment design to facilitate both the goals of OLYMPUS and synchrotron light production:
\begin{itemize}
\item a large effort was made to reconfigure the operation of the beam for OLYMPUS running (2.01 GeV, with the beam in ten bunches),
\item several RF acceleration cavities were relocated away from the detector site,
\item extra quadrupole magnets were installed on each side of the detector in the beamline to reduce the beam width in the interaction
region and then return it to its original size for the synchrotron light creation elements of the ring, and
\item the OLYMPUS target was continuously cooled to allow the operation of the ring in the harsher conditions of synchrotron
light production (4.5 GeV beam in five bunches, at 150 mA currents).
\end{itemize}
The implementation of the target system to meet these requirements is discussed in Section \ref{sec:target} and in greater
detail in Reference \cite{Bernauer201420}. Several changes also were required to facilitate the frequent switching between
the lepton species of the beam that OLYMPUS required:
\begin{itemize}
\item the high voltage pulse power supplies for the pre-accelerator beam extraction line and DORIS injector kicker magnets
were heavily refurbished,
\item the septa magnets in the pre-accelerator and injection line were modified to operate in both polarities, and
\item remote control switches were constructed and installed for the magnet power supplies throughout the accelerator system.
\end{itemize}
The beam position in the interaction region was continuously measured by monitors installed on either side of the target chamber.
Additionally, a dipole reference magnet was installed downstream of the experiment in series with the ring's bending dipoles to continuously
measure the beam current. These data, along with a number of relevant parameters, were recorded by the accelerator archive systems and
made available to the OLYMPUS archiving systems.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.6\textwidth]{figures/doris_overhead.pdf}}
\caption[Overhead view of the DORIS III $e^+/e^-$ storage ring]{Overhead view of the DORIS III $e^+/e^-$ storage ring, showing the direction of beam circulation and
the locations of the OLYMPUS
spectrometer in the straight segment of the ring and the synchrotron light experiment
stations around the ring (which were not active during OLYMPUS data taking). The total length of the beamline was approximately 300 m \cite{Bernauer201420}.}
\label{fig:dorisover}
\end{figure}
\subsection{Beam Specifications}
\label{sec:beam}
During OLYMPUS operation, the beam was operated at $\sim$2.01 GeV and distributed in ten bunches. The beam species was
switched approximately daily, but in general a concerted effort was made to keep the running conditions identical for both
species. Because the beam injected into DORIS was at full energy during OLYMPUS running, OLYMPUS ran in ``top-up''
mode throughout the fall run. In this mode, leptons were added to the beam after the current dropped by only a few percent rather
than waiting until a significant fraction of the beam had decayed to refill. This allowed OLYMPUS to collect data at a much more
constant beam current, which helped to both increase the collected luminosity and maintain stable data-taking conditions. Top-up mode
was periodically interrupted during electron running, since approximately every 30 minutes the pre-accelerator system at DESY
was switched to positrons to fill the PETRA ring. During these times, the current dropped below the top-up level while waiting
for the system to permit a refill. Typical beam current levels during Run II over the course of a 14-hour period are
presented in Figure \ref{fig:bcur}, showing the stability of the current in top-up mode, the periodic PETRA fills, and the typical 20-minute daily
pause to switch the beam species. The beam
parameters for operation of the experiment are summarized in Table \ref{tab:beam}.
\begin{table}[thb!]
\begin{center}
\begin{tabular}{l|l}
Parameter & Value \\
\hline
\hline
Species & $e^+$ or $e^-$ (alternated $\sim$daily) \\
Energy & 2.01 GeV \\
Current & 60--65 mA (top-up mode) \\
Bunches & 10 \\
Bunch spacing & 96 ns (100 ns every fifth bunch) \\
Lifetime with gas in target & 40--55 minutes \\
Bunch length & 19.5 mm ($\sim$0.1 ns) \\
\end{tabular}
\end{center}
\caption[DORIS beam parameters for OLYMPUS operation]{A summary of the typical parameters of the DORIS beam during OLYMPUS data-taking.}
\label{tab:beam}
\end{table}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/beamcurrent.png}}
\caption[Typical beam current during OLYMPUS operation]{The beam current in mA as a function of time for a typical 14-hour period of OLYMPUS running,
showing the 20 minute pause in operation to
switch the beam species at 9:00. To the left of the pause, electron running is shown with the regular larger drops in current caused by the refills
of the PETRA ring. To the right, positron running is shown, which did not have the top-up interruptions since PETRA also used positrons. Outside these
interruptions, a stable beam current of around 62 mA was maintained \cite{uwe1}.}
\label{fig:bcur}
\end{figure}
\section{The OLYMPUS Internal Hydrogen Target}
\label{sec:target}
Lepton-proton scattering events were generated by placing a windowless, internal gaseous hydrogen
target in the DORIS beamline, which simultaneously provided a high target density of protons while minimizing
adverse effects on the beam due to the presence of the target (increased emittance, uneven bunch charges, etc.). The target
system consisted of several key components, including the hydrogen gas cell, a beam collimator, a cryogenic cooling system, an aluminum
scattering chamber enclosing the cell, wakefield suppression elements to prevent heating due to the large charge-per-bunch
of the DORIS beams, and a multi-stage vacuum system to remove hydrogen from the beamline to prevent spoiling the ring vacuum. The primary
goals of the target system were to present an effective thickness of $\sim$3$\cdot10^{15}$ H atoms/cm$^2$ to the beam in view of the detectors
while handling the intense heating conditions of the bunched DORIS beam.
A detailed discussion of these components may be found in \cite{Bernauer201420}, while an essential overview is provided
in this section. A discussion of the physics of the gas contained within the system and its effect on the experiment's
kinematic acceptance and luminosity is found in Section \ref{sec:sclumi}.
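As a rough consistency check (distinct from the slow control luminosity determination described later), the quoted target thickness and a typical stored beam current imply an instantaneous luminosity of order $10^{33}$ cm$^{-2}$s$^{-1}$, as estimated in the Python sketch below; the current value is a nominal assumption and the result ignores downtime and efficiency.
\begin{verbatim}
E_CHARGE = 1.602176634e-19    # elementary charge in coulombs

beam_current = 0.062          # A, a nominal ~62 mA stored current
target_thickness = 3.0e15     # effective H atoms per cm^2, as quoted above

beam_rate = beam_current / E_CHARGE       # leptons per second through the target
lumi = beam_rate * target_thickness       # instantaneous luminosity in cm^-2 s^-1
print(lumi)                               # ~1e33 cm^-2 s^-1

# Converted to inverse femtobarns per day (1 fb = 1e-39 cm^2), giving roughly
# 0.1 fb^-1 per day, broadly consistent with the few fb^-1 collected per run
print(lumi * 86400.0 * 1.0e-39)
\end{verbatim}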
\subsection{Scattering Chamber}
The main elements of the target system were contained within a scattering chamber, manufactured from a single solid block of aluminum
to provide high vacuum integrity. This chamber was 1200 mm long and 254 mm high. The chamber tapered from a width
of 245 mm at its upstream end to 114.3 mm at its downstream end, resulting in a trapezoidal prism shape for the chamber. This design increased
the visibility of the target cell within the chamber to the forward detector elements while still providing room for the installation of the collimator
and access ports at the upstream end. The chamber interfaced directly with the beamline
(and the beamline vacuum) at each end. The chamber was designed at the MIT-Bates Linear Accelerator Center, taking advantage of experience
gained designing similar targets for BLAST and other experiments \cite{Cheever:2006xt}. Figure \ref{fig:chamber} shows the key elements of the chamber design.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.8\textwidth]{figures/scattering_chamber_labels.pdf}}
\caption[Schematic of the OLYMPUS scattering chamber]{Schematic of the OLYMPUS scattering chamber, showing the principal elements of the design as viewed from beam
right \cite{Bernauer201420}.}
\label{fig:chamber}
\end{figure}
The chamber included several ports to allow access to the inside of the chamber, both for installation and/or repair work and to provide windows for scattered
particles to escape the target. The ports were sealed either with Atlas\footnote{Atlas Technologies, Port Townsend, WA, USA} explosion-bonded bimetallic flanges
or O-rings (in the case of the window ports). The target windows consisted of 0.25 mm thick 1100 aluminum foil, and subtended a polar angle range of
8{$^\circ$} to 100{$^\circ$} relative to the center of the target to provide a complete view of the cell to the detector systems (with the exception of the 12{$^\circ$} system
which could not view the most downstream portion). The chamber was mounted on an aluminum table, which supported its weight above the vacuum system
and provided screws for the alignment of the chamber to match the beamline.
\subsection{Target Cell}
The target cell, used to contain a considerable concentration of hydrogen in the vicinity of the beam before it escaped to the vacuum system, was a
600 mm long elliptical aluminum tube through which the beam passed directly. The elliptical cross section (27 mm wide by 9 mm high) was chosen
to mimic the envelope of the DORIS beam to mitigate any unwanted interactions of the beam with solid material in the target system. The target cells
used in the experiment were manufactured by molding two sheets of \SI{75}{\micro\meter} thick Goodfellow\footnote{Goodfellow Corporation, Coraopolis, PA, USA}
aluminum foil and mounting them in an aluminum support frame. The cells were constructed at Ferrara University/INFN, where similar cells were
constructed for the HERMES experiment \cite{Airapetian:2004yf}. A photograph of an OLYMPUS target cell mounted in the scattering chamber (with the windows and other
internal target system components removed) is shown in Figure \ref{fig:cellphoto}, and its connections to the other internal components
are shown in Figure \ref{fig:tarin}. More details on the manufacturing process for these cells are described in Reference \cite{Bernauer201420}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.8\textwidth]{figures/cellinchamber.png}}
\caption[Photograph of the OLYMPUS target cell]{The OLYMPUS target cell, mounted within the aluminum scattering chamber
(with the windows and wakefield suppressors removed) \cite{Bernauer201420}.}
\label{fig:cellphoto}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/chamber_inside.pdf}}
\caption[Components of the OLYMPUS target system within the scattering chamber]{Schematic of the components of the hydrogen target
system contained within the scattering chamber, including the gas inlet, target cell, connection to the cryogenic system,
collimator, and wakefield suppressors. For reference, the origin of the OLYMPUS coordinate system is the center of the target cell
directly beneath the inlet, and the beam passed from left-to-right (positive OLYMPUS $z$), approximately through the center of the wakefield suppressors,
collimator, and target cell as shown in the figure. The conal wakefield suppressors at each end connect smoothly to the beamline \cite{Bernauer201420}.}
\label{fig:tarin}
\end{figure}
\subsection{Collimator}
Also among the internal components of the target system shown in Figure \ref{fig:tarin} is a collimator placed on the upstream side of the
target cell to block beam halo from entering the OLYMPUS interaction region. The collimator was manufactured from a solid tungsten cylinder, with an outer
diameter of 82.55 mm and length of 139.7 mm. The bore of the collimator was elliptical in cross section, 25 mm by 7 mm at its upstream face and flaring
to the 27 mm by 9 mm size of the target cell and intermediate wakefield suppressor at its downstream end. The size of the collimator and its
bore were chosen based on the results of Monte Carlo simulation of beam halo particles and synchrotron radiation impinging on the collimator,
and considering the resulting predicted heat loads and forward scattering of unwanted particles into the interaction region. During the 4.5 GeV
synchrotron light production operation of DORIS, the collimator was exposed to $\sim$25 W of heating, which was well within the dissipation
capabilities of the cooling system. During OLYMPUS operation, the heat load was considerably less ($\sim$1 W).
\subsection{Wakefield Suppressors}
The final internal components of the system shown in Figure \ref{fig:tarin} were the wakefield suppression elements, which served to provide
a continuous and smooth electrical conductance connection between the beamline interfaces at each end of the target chamber and among the
internal target chamber components. This was required due to the bunched nature of the DORIS beam, which caused strong electromagnetic wakefields
that could induce considerable heating of the system if they were not provided a highly conductive path through the system. Three wakefield suppressors
were present in the system: an elliptical tube element between the cell and the collimator, and two conal elements which flared from elliptical profiles
at their interfaces with the target system to circular profiles at their interfaces with the beamline. The elements were coated with a thin layer
of silver to increase their conductivities, and their connections to other system elements included beryllium-copper spring cones to ensure good
electrical contact. Holes were drilled through the wakefield suppressors to allow the escape of beam gas, but these holes were placed as far as possible
from the beam to mitigate their impact on the conductivity of the system.
\subsection{Cryogenic System}
To mitigate the heating caused by the beam and to increase the target density by cooling the H$_2$ gas within the system, the target system
was actively cooled to temperatures below 75 K whenever beam was in the DORIS ring. This was achieved using a CryoMech\footnote{CryoMech, Inc., Syracuse, NY, USA}
AL230 coldhead and CP950 compressor system. The interface of the coldhead with the components of the target system can be seen in both
Figures \ref{fig:chamber} and \ref{fig:tarin}. The cooling system connected to the target cell assembly via a solid copper shunt coated with
indium at the interface with the cell assembly. The aluminum scattering chamber (exposed to the beam hall atmosphere) was at the temperature of the beam
hall air and was thus thermally insulated from the cooled elements. The system was capable of dissipating 36 W at 25 K, sufficient to
handle the heating caused by both synchrotron light production and OLYMPUS beam conditions. This was verified using seven Pt100
temperature sensors installed along the length of the target, which monitored the system temperature whenever beam was in the ring.
\subsection{Vacuum System}
Because hydrogen gas flowed directly into the beamline, a powerful vacuum system was required to remove the gas as it escaped
the target cell assembly, to avoid spoiling the vacuum necessary for the beam. The system included six turbomolecular pumps (split between
Osaka\footnote{Osaka Vacuum, Ltd., Osaka, Japan} TG 1100M and Edwards\footnote{Edwards, Crawley, United Kingdom} STP 1003C models). Since
the system was required to remove hydrogen from the ring at low pressures, four non-evaporable getters (NEGs) were installed with four of the
turbomolecular pumps. Because turbomolecular pumps operate using magnetically levitated rotors, they were required to be placed
outside of the OLYMPUS magnetic field (either along the beamline from the interaction region or in a pit below the experiment).
The placement of the pumps is shown in Figure \ref{fig:vac}. The pumps placed below the experiment were connected to the system with large-diameter
piping to insure a high conductance connection. As can be seen in the figure, the pumps were arranged in three stages, each of which reduced
the pressure by approximately an order of magnitude.
This reduced the $\sim$10$^{-6}$ Torr pressure inside the chamber to the $10^{-9}$ Torr pressure of the ring.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/vacuum.pdf}}
\caption[Diagram of the target system vacuum components]{Diagram of the vacuum components of the OLYMPUS target system, showing the six turbomolecular
pumps and their high conductance connections to the target chamber and beamline. The non-evaporable getters were placed at the locations
of turbomolecular pumps 1, 2, 5, and 7 to assist with the removal of hydrogen at low pressures \cite{Bernauer201420}.}
\label{fig:vac}
\end{figure}
\subsection{Hydrogen Gas Supply System}
The source of protons for {$e^\pm p$} scattering in the system (and of electrons for the luminosity monitoring processes) was molecular hydrogen gas
flowed into the middle of the target cell. The binding energies of hydrogen atoms and molecules are negligible compared to the GeV
scale beam energy, and thus the target effectively served as a free proton target. The H$_2$ gas used for the target was supplied by a
Parker\footnote{Parker-Hannifin Corporation, Haverhill, MA, USA} hydrogen gas generator, which produced ultra-pure ($<$200 ppb impurity) hydrogen
via the electrolytic dissociation of deionized water. Within the generator, the dissociation occurred in palladium tubes, which were opaque
to all atoms and molecules in the system except H$^+$ ions. The generator maintained a pressure of 20 psi on the supply line
to the target. The corresponding rate of gas production exceeded the target's
actual supply needs by orders of magnitude, but ensured positive pressure relative to the atmosphere outside the line.
The actual gas flow into the target was controlled by a system of remotely-controlled mass flow controllers (MFCs) and pneumatic solenoid valves
modeled on the system used for the BLAST target \cite{Cheever:2006xt}, and operated using the slow control system described in Section
\ref{sec:sc}. Two buffer vessels of known volume were used to calibrate the MFCs. The MFCs could provide reliably calibrated flow rates
between 0.1 and 1.0 standard cubic centimeters per minute (sccm), and the range used during the experiment running was approximately 0.4-0.8 sccm.
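As an illustration of the calibration principle, the Python sketch below converts a measured pressure rise in a sealed buffer vessel of known
volume into a flow rate in sccm, assuming ideal-gas behavior; the vessel volume, temperature, and pressure-rise values are placeholders rather
than the actual OLYMPUS calibration parameters.
\begin{verbatim}
# Sketch: estimating an MFC flow rate (sccm) from the pressure rise in a
# buffer vessel of known volume, assuming ideal-gas behavior.
# All vessel parameters and readings below are illustrative placeholders.

P_STD = 1013.25   # mbar, standard pressure
T_STD = 273.15    # K, standard temperature

def flow_rate_sccm(dp_dt_mbar_per_min, volume_cc, gas_temp_K):
    """Convert a pressure rise (mbar/min) in a sealed buffer vessel of
    known volume (cm^3) into standard cm^3 per minute."""
    return dp_dt_mbar_per_min * volume_cc * (T_STD / gas_temp_K) / P_STD

# Example: a 1000 cm^3 vessel at room temperature whose pressure rises by
# 0.5 mbar per minute corresponds to roughly 0.46 sccm.
print(flow_rate_sccm(0.5, 1000.0, 295.0))
\end{verbatim}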
\section{The OLYMPUS {$e^\pm p$} Spectrometer}
\label{sec:spect}
The OLYMPUS spectrometer used for the reconstruction of elastic {$e^\pm p$} events was predominantly constructed from various components
of the BLAST experiment \cite{Hasell:2009zza}. The detector consisted of gas drift chambers to the left and right of the beam
for particle trajectory reconstruction, scintillator panels beyond the drift chambers to provide timing and trigger information, and a
toroidal magnetic field to provide particle bending for momentum reconstruction and background suppression. The total acceptance of
the main detector package for {$e^\pm p$} events ranged over approximately $(25^\circ \leq \theta\leq 80^\circ)$ and
$(-15^\circ \leq \phi \leq 15^\circ) \cup (165^\circ \leq \phi \leq 195^\circ)$ in the particle scattering angles, corresponding to
the instrumentation of the left and right sides of the detector. This corresponds to ranges in the kinematic parameters of approximately
$(0.4 \leq \epsilon \leq 0.9)$ and $(0.6 \leq Q^2 \leq 2.2)$ GeV$^2/c^2$.
The essential details of these components are described in this section (and in more detail in References \cite{Milner:2014,tdr,Bernauer20169}).
\subsection{Toroidal Magnet}
\label{sec:tormag}
Momentum reconstruction for particles detected in the OLYMPUS detector was made possible by the generation of a toroidal magnetic
field around the target. Additionally, the field produced
by the toroid bent low-energy particles (produced by M{\o}ller and Bhabha scattering in the target and other sources of background tracks)
away from the detector systems to reduce the number of unwanted hits in the detectors. Eight water-cooled copper coils carrying a nominal
current of 5000 A arranged symmetrically around the beamline
produced this field. The lower four of these coils are shown in Figure \ref{fig:schem}, while Figure \ref{fig:magpho} shows the complete coil
configuration prior to installation of the magnet in the DORIS beamline. The coils pinched towards the beamline in their downstream
sections, but opened away from each other in the upstream sections to accommodate the target.
The peak field strength was approximately 0.3 T in the regions of the tracking detectors.
The coils were originally part of
the BLAST experiment, and the design of the magnet is described in detail in Reference \cite{Dow2009146}. A toroidal field was originally
chosen for the BLAST experiment to minimize the magnetic field near the beam, which was critical to the polarized target experiments
conducted at BLAST \cite{Hasell:2009zza,doi:10.1142}. While zeroing the field along the beamline was not critical to OLYMPUS,
the coils were aligned when installed to minimize the field in this region to avoid perturbing the beam.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/toroid_photo.jpg}}
\caption[The OLYMPUS toroid prior to installation]{Photograph of the OLYMPUS toroid prior to installation in the DORIS beamline, showing
the symmetric arrangement of the eight coils \cite{Milner:2014}.}
\label{fig:magpho}
\end{figure}
During data-taking, the toroid was set to produce a field that bent positively charged particles towards positive $\theta$
trajectories as they moved through the detector system. The strength of the field in the $y$ direction in the $y=0$ plane bisecting
the detector systems is shown in Figure \ref{fig:bymap}. While the original conception of the OLYMPUS experiment included regularly
switching the toroid polarity to reduce the systematic uncertainties associated with the relative acceptance of positron and electron
events, this was in practice infeasible. In the opposite polarity, low energy electrons originating from the target
were bent into the detectors, causing an intractable background that obscured the desired events. While efforts were made to mitigate this
effect to permit running with the second polarity (including increasing the field strength and physically shielding the detectors with material
to stop low energy particles), these changes were not sufficient to create a good environment for elastic event reconstruction. While some data
were taken with the opposite polarity for testing purposes, this dataset only represents approximately 13\% of all data collected, since it was
limited to low luminosity running in which the noise rate was sufficiently low to permit event reconstruction.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/toroid_data.pdf}}
\caption[Map of $B_y$ in the tracking detector region]{Map of the strength of the $B_y$ component of the OLYMPUS magnetic field in the global $y=0$
plane as measured in the field survey (Section \ref{sec:magsur}). The regions of strongest field occurred within the drift chamber tracking volumes,
while the field was much smaller in the vicinity of the target along the $z$-axis \cite{Milner:2014}.}
\label{fig:bymap}
\end{figure}
To ensure that the magnetic field was well understood for the purposes of track reconstruction and event simulation, a large effort was undertaken
after the conclusion of data-taking to survey the field throughout the detector volumes. This effort is described in Section \ref{sec:magsur} and
Reference \cite{Bernauer20169}.
\subsection{Time-of-Flight (ToF) Scintillator System}
The time-of-flight (ToF) scintillator system, also inherited from the BLAST experiment \cite{Hasell:2009zza}, consisted of 36 scintillator
bars (18 per side) arranged in walls on each side of the detector, as shown in Figure \ref{fig:schem}. These bars played the critical role of providing timing
signals for the trigger and the readout of the main detector elements, as described in Section \ref{sec:trig}. The bars were arranged in panels
of four, five, and nine bars on each side with the panels arranged to point the normal vector of the panel plane approximately towards the target.
The forward panel bars measured $119.4\:\text{cm}\times17.8\:\text{cm}\times2.54\:\text{cm}$, while the bars in the rear two panels were larger:
$180.0\:\text{cm}\times26.2\:\text{cm}\times2.54\:\text{cm}$. This arrangement guaranteed that the acceptance of the ToF bars for tracks
originating from the target completely included that of the drift chambers and 12{$^\circ$} luminosity telescopes.
Each bar consisted of a solid block of Bicron\footnote{Bicron, Solon, OH, USA}
BC-408 scintillator, a plastic scintillator designed for applications requiring fast response times (0.9 ns) over large areas such as in the OLYMPUS
experiment \cite{bicron}. While new BC-408 exhibits attenuation lengths on the order of two meters, due to the age of the scintillator bars
and their exposure to radiation throughout the BLAST and OLYMPUS experiments, the scintillators' attenuation in the bars during OLYMPUS
running was significantly worse. While efforts were made to replace the most damaged bars, a large analysis effort was made to properly
account for the state of the scintillator, which is described in detail in Reference \cite{russell}.
Each bar was instrumented with two 3-inch diameter Electron Tubes\footnote{Electron Tubes, Ltd., Ruislip, Middlesex, United Kingdom}
9822B02 photomultiplier tubes, connected to the top and bottom of each bar via Lucite\footnote{Lucite International, Southampton, Hampshire, United Kingdom}
light guides. The light guides were arranged to orient their connected PMTs approximately perpendicular to the toroidal magnetic field
to minimize the effect of the field on the PMT gain, as shown in Figure \ref{fig:tofpho}. The PMTs were also wrapped in mu-metal shielding to further reduce any such
effects. Signals from the detectors were passed to dedicated ADC and TDC channels for each PMT, processed using constant fraction discriminators
for the downstream 16 bars on each side and leading-edge discriminators for the upstream 2 bars on each side. This distinction arose because
the upstream bars were not included in the original design, and were added only after it was realized that a larger margin of safety was
desired to cover the full acceptance of the 12{$^\circ$} telescopes.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/tof_mount.jpg}}
\caption[Arrangement of the right side ToF detectors]{Photograph of the right side of the ToF detector system prior to installation of the detector
package within the DORIS beamline, showing the orientation of the PMTs
perpendicular to the direction of the toroidal magnetic field \cite{Milner:2014}.}
\label{fig:tofpho}
\end{figure}
The signals from the PMTs were rapidly digitized for processing by the trigger. When triggered, these signals produced a common-start signal
for the ToF TDCs and a common-stop signal for the drift chamber TDCs. The total ADC counts were recorded as a measure of the energy
deposition in the ToF bars, and the ToF TDC information for each bar could be used to estimate both the vertical position at which the bar was hit
and the flight time of the particle from the vertex to the bar. This analysis is described in Section \ref{sec:recon}.
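As a rough illustration of how the two TDC values from a bar can be combined, the following Python sketch extracts a vertical hit position
and a mean timing signal; the effective light propagation speed and sign conventions are assumptions for illustration, not the calibrated
values used in the analysis.
\begin{verbatim}
# Sketch: extracting a hit's vertical position and a mean timing signal from
# the top and bottom PMT TDC times of a ToF bar. The effective light speed
# in the scintillator and all conventions are illustrative assumptions.

V_EFF = 15.0  # cm/ns, assumed effective light propagation speed in the bar

def tof_hit(t_top_ns, t_bottom_ns, bar_length_cm):
    """Return (y position relative to bar center [cm], mean time [ns])."""
    # A hit closer to the top PMT gives a smaller t_top and a larger
    # t_bottom, so the time difference measures the vertical position.
    y = 0.5 * V_EFF * (t_bottom_ns - t_top_ns)
    # The mean time is independent of y; after subtracting the constant
    # propagation term it estimates the particle's arrival time at the bar.
    t_mean = 0.5 * (t_top_ns + t_bottom_ns) - 0.5 * bar_length_cm / V_EFF
    return y, t_mean
\end{verbatim}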
\subsection{Drift Chamber Tracking System}
\label{sec:thegdwcs}
The main detectors that provided hit position information for reconstruction of {$e^\pm p$} events were two large drift chambers mounted between the toroid
coils on the left and right sides of the detectors. Functioning as a typical drift chamber, charged particles passing through the gas contained within these
chambers induced ionization, and the electric field created by the wires held at potential throughout the chamber caused the resulting electrons to drift
towards dedicated detection wires. Better position resolution was achieved by recording the signal time using TDCs, with a common-stop signal provided by the fast signal of the time-of-flight
system. Since drift speeds of the electrons in the gas were on the order of 5-6 cm/\SI{}{\micro\second}, the arrival of the drifting electrons across the centimeter-scale cells occurred
well after the ToF signal. This extended drift time was converted to a distance from the wire at which the particle passed (Section \ref{sec:ttd}), which provided positional
information for track reconstruction.
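The simplest possible form of such a conversion is sketched below in Python, assuming a constant drift speed and a single per-wire time offset;
the actual OLYMPUS TTD calibration (Section \ref{sec:ttd}) is nonlinear and depends on the gas composition and magnetic field, so the values
here are purely illustrative.
\begin{verbatim}
# Sketch: a linear time-to-distance (TTD) conversion for a drift chamber hit,
# assuming a constant drift speed. The bin width, drift speed, and per-wire
# offset are placeholders, not the OLYMPUS calibration values.

TDC_BIN_NS = 0.5          # assumed TDC bin width
V_DRIFT_CM_PER_US = 5.5   # assumed constant drift speed

def drift_distance_cm(tdc_counts, t0_counts):
    """Convert a common-stop TDC value into a drift distance from the wire."""
    # With a common stop, earlier wire signals (shorter drift) give larger
    # TDC values, so t0_counts is the value corresponding to zero drift time.
    drift_time_ns = (t0_counts - tdc_counts) * TDC_BIN_NS
    return max(drift_time_ns, 0.0) * 1e-3 * V_DRIFT_CM_PER_US
\end{verbatim}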
Once again, these detectors were inherited from the BLAST experiment \cite{Hasell:2009zza}, but
underwent a complete refurbishment for OLYMPUS. The refurbishment included a complete rewiring of the chambers as well as new voltage distribution electronics and connections
for all wires. Each chamber consisted of three trapezoidal frustum-shaped aluminum frames joined to create a single
gas volume, a schematic of which is shown in Figure \ref{fig:dframe}. Thin plastic windows closed off the gas volume on the inner and outer faces of the connected frame
assemblies. Each of the three frames contained two layers of wire ``cells''. A cell
was formed by an array of wires that were arranged to form a rectangular box region used to create an electric drift region, as shown in Figure \ref{fig:cells}. The wires were held at stepped voltages
from ground at the boundary between two cells to 2800 V at the center of a cell. In the center of the cell, an additional column of wires contained
three sense wires held at 3900 V to attract drifting electrons to generate the detector's signals. To help resolve the ambiguity of a single time recorded on a sense wire,
the three sense wires were offset from the center plane of the cell
by $\pm0.5$ mm to create a small difference in drift times from each side of the cell.
Each cell was 78 mm by 40 mm, and extended from the top inner face of the frame to the bottom
inner face. The two layers of cells in each frame extended parallel to each other, 20 mm apart, arranged such that the wires in each layer were at $\pm5^\circ$ from vertical to
create a stereo angle for 2D position reconstruction. A total of 318 cells were present in the two drift chambers, containing 954 sense wires and approximately 10,000 total wires.
The chambers were filled with an Ar:CO$_2$:C$_2$H$_6$O gas mixture at a ratio of approximately 87.4:9.7:2.9. The argon and carbon dioxide were mixed at a 9:1 ratio by a dedicated
mixing system, while the ethanol was introduced by bubbling the mixed gas through liquid ethanol at 5 $^\circ$C. Unfortunately, this method of introducing ethanol to the system
resulted in fluctuation in the concentration of ethanol during data-taking, which altered the drift properties of the gas. This required careful calibration of the drift chamber
time-to-distance calibration, as described in Section \ref{sec:ttd}\footnote{The introduction of ethanol to the drift chamber gas was part of an attempt to reduce the number of noise
hits in the drift chambers, particularly in the inner layers. This, however, was not a sound decision from the standpoint of achieving that goal. While it did confer the benefit
of slowing the drift speed of electrons in the gas (thus increasing the resolution somewhat), ethanol is typically introduced to drift chambers so as to reduce the amount of carbon
``whisker'' build-up on the wires due to other organic compounds in the drift gas, which can cause dark currents on the signal lines \cite{blum}.
For an Ar:CO$_2$ mixture, however, whisker growth is not expected and has not been reported to be observed \cite{dont}. Thus, the introduction of ethanol was very unlikely to reduce the level
of noise in the OLYMPUS drift chambers, and the instability of the ethanol concentration was certainly a greater detriment than the minor benefit of decreased drift speed.}.
The gas pressure in the chambers was maintained at about 1 inch
of water above local atmosphere to create positive pressure within the chambers, corresponding to a flow rate of approximately 5 L/min.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.8\textwidth]{figures/wc_assembly_iso.pdf}}
\caption[Schematic of an assembled OLYMPUS drift chamber]{An isometric schematic of one of the drift chambers, illustrating how the three frames were assembled to create
a single gas volume \cite{Milner:2014}.}
\label{fig:dframe}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.95\textwidth]{figures/dc_field_3500.png}}
\caption[Lines of drift within the OLYMPUS drift chamber cells]{A view along the direction of the wires forming the rectangular drift cells, with the lines of electron drift induced in the cells
by the voltages on the wires. Note that the lines predominantly tend towards the three sense wires on which signals were recorded. The drift lines are tilted relative to the axes of the cells
due to the interaction of the OLYMPUS magnetic field with the drifting electrons, creating a drift component in the direction of $\mathbf{E}\times\mathbf{B}$ \cite{Milner:2014}.}
\label{fig:cells}
\end{figure}
Pulse signals from arriving drift electrons on sense wires were first decoupled from the high voltage on the voltage distribution electronics, and then discriminated and amplified
on dedicated front-end electronics attached directly to distribution boards on the chambers. The discriminated signals were passed to LeCroy\footnote{Teledyne LeCroy, Chestnut Ridge, NY, USA}
1877 multi-hit TDC modules. Conversion of recorded drift times to distances from the wires and the method of reconstructing those distances
into trajectories are discussed in Section \ref{sec:recon}.
\section{$12^\circ$ Tracking Telescope Luminosity Monitors}
To provide one method of luminosity measurement for the experiment, a dedicated telescope tracking system was developed for the reconstruction
of elastic {$e^\pm p$} scattering events in which the lepton is scattered with $\theta\approx 12^\circ$. The system tracked leptons independently of the
drift chambers, while the drift chamber data were used to reconstruct the corresponding proton near the upper end of the chambers' $\theta$ acceptance.
Hit positions of passing charged particles were reconstructed in the detector plane elements of the telescope, and these hits were reconstructed
to find the particles' trajectories, as described in Sections \ref{sec:12hit} and \ref{sec:12track}.
The system took advantage of the rapid increase of the elastic {$e^\pm p$} cross section at small $\theta$ to accumulate a high-statistics sample of events.
Using the system as a luminosity measurement requires the assumption that TPE is small at forward angles; absent this
assumption, and given another method of normalizing the luminosity between species, the 12{$^\circ$} telescopes instead provide an additional data point
for the measurement of {$\sigma_{e^+p}/\sigma_{e^-p}$} forward of the drift chambers.
Two telescopes were constructed as units and then mounted to the forward side of each of the drift chambers. Each system consisted of three gas electron
multiplier (GEM) detectors interleaved with three multi-wire proportional chambers (MWPCs) for tracking, with two scintillator tiles instrumented with
silicon photomultipliers (SiPMs), and a Pb glass calorimeter array for triggering. The essential layout of these systems is shown in Figure
\ref{fig:fordet}, while Figure \ref{fig:12photo} shows a photograph of one of the telescopes with its mounting brackets and readout electronics.
Six planes and two detector technologies were utilized to provide high resolution
tracking of the high momentum forward events, and to provide a mechanism of cross-checks and calibration within the detector system. Ultimately,
the GEM detectors were not used in the final luminosity analysis due to problems with the stability of their efficiencies (see Section
\ref{sec:hahahaha}), but were useful in the calibration of the system and are being considered for use in future experiments \cite{refId0,Balewski:2014pxa}.
This section describes the essential details of these detector systems, while the luminosity analysis of the elastic {$e^\pm p$} scattering events
recorded by the 12{$^\circ$} telescopes in conjunction with back-angle portions
of the drift chamber tracking system is presented in Section \ref{sec:12lumi}. The value of {$\sigma_{e^+p}/\sigma_{e^-p}$} at $\sim$12{$^\circ$} using alternate
luminosity measurements is considered in Section \ref{sec:12TPE}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.95\textwidth]{figures/12degree_view_biglabels.pdf}}
\caption[Diagram of the forward detector systems]{Schematic view from above of the forward detector systems, including the 12{$^\circ$} luminosity monitor telescopes
and Symmetric M{\o}ller/Bhabha calorimeter, relative to the target chamber and beamline \cite{Milner:2014}.}
\label{fig:fordet}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/12Degree.jpg}}
\caption[Photograph of the right-side 12{$^\circ$} tracking telescope]{Photograph of the right-side 12{$^\circ$} tracking telescope, showing the detector elements
and their readout electronics \cite{Milner:2014}.}
\label{fig:12photo}
\end{figure}
\subsection{Silicon Photomultiplier (SiPM) Scintillator Tiles}
\label{sec:sipm}
The main trigger signals for the 12{$^\circ$} telescopes were provided by two $120\:\text{mm}\times120\:\text{mm}\times4\:\text{mm}$
scintillator tiles in each telescope, consisting of solid blocks of Eljen\footnote{Eljen Technology, Sweetwater, TX, USA} EJ-204 plastic
scintillator. Charged particles passing through the planes induced scintillation light, which was detected by the instrumentation of the planes to produce
a recorded signal. As shown in Figure \ref{fig:fordet}, the tiles were placed at the target ends of the telescopes
and between the last two tracking planes in each telescope so that they would not be the limiting elements in the acceptance of
the detectors. Each tile was instrumented with two Hamamatsu\footnote{Hamamatsu Photonics K.K., Hamamatsu, Japan} silicon photomultiplier (SiPM)
multi-pixel photon counters (MPPCs) mounted on opposite sides of the square tiles to homogenize the total recorded light yield across the horizontal
acceptance of the tiles. Additionally, each tile was wrapped in Millipore\footnote{EMD Millipore, Billerica, MA, USA} Immobilon-P diffuse reflector
to further boost the light yield. Analog signals from the MPPCs in each tile were summed and passed to a constant fraction discriminator
to produce the effective fast trigger signal from each tile. Basic parameters of the SiPM tiles and their operation are summarized in Table
\ref{tab:sipm}.
The operation of the 12{$^\circ$} system trigger is described in Section \ref{ss:12dtrig} and the performance of the scintillator tiles
is analyzed in Section \ref{sec:12perf}, but in general the tiles provided a very high efficiency ($>$99\%), uniform trigger for the telescopes
throughout data-taking.
\begin{table}[thb!]
\begin{center}
\begin{tabular}{l|l}
Parameter & Value \\
\hline
\hline
Scintillator type & Eljen EJ-204 \\
Size & $120\:\text{mm}\times120\:\text{mm}\times4\:\text{mm}$ \\
Reflective coating & Millipore Immobilon-P \\
SiPM type & Hamamatsu S10931-050P (3600 pixels) \\
Typical gain & $7.5\cdot 10^5$ \\
Typical bias voltage & 72 V \\
Typical dark count rate & $\sim$0.8 MHz \\
Preamplification & 25x \\
\end{tabular}
\end{center}
\caption[Parameters of the SiPM tiles and their operation]{A summary of essential parameters of the SiPM tiles used in the 12{$^\circ$} telescope
trigger and their operation \cite{dief1}.}
\label{tab:sipm}
\end{table}
\subsection{Lead Glass Calorimeters}
\label{sec:lg}
Lead glass calorimeters were mounted to the steel beams of the detector frame behind each 12{$^\circ$} telescope to
provide an independent trigger for the system for calibration and performance evaluation of the SiPM scintillator
tile trigger system.
Each calorimeter consisted of three rectangular prisms of lead glass instrumented with photomultiplier tubes at one
end. Particles passing through the 12{$^\circ$} telescopes would shower in the steel beam beyond the telescope, and the calorimeters
recorded the remnants of this shower exiting the beam as shown in Figure \ref{fig:lgevent}. The signal from
this system was relatively noisy, with a number of events that did not correspond to trackable 12{$^\circ$} events, due to the
conditions in the detector hall, but the sample was sufficient to evaluate the performance of the SiPM tile trigger, as
described in Section \ref{sec:12perf}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/lgevent.png}}
\caption[Simulated lead glass trigger event]{Overhead view of a simulated $e^+p$ event in the left 12{$^\circ$} telescope,
showing the showering in the steel beam behind the telescope
(photons shown in green) and the resulting energy deposits from electrons in the penetrating shower (red).
\label{fig:lgevent}
\end{figure}
\subsection{Gas Electron Multiplier (GEM) Detectors}
\label{sec:gemdet}
Three of the tracking elements in each telescope were triple gas electron multiplier (GEM) detectors \cite{Sauli20162}. A relatively
new detector technology, a GEM detector amplifies the initial ionization of gas by a charged particle by drifting the ionization electrons
towards small holes ($\sim$\SI{100}{\micro\meter} diameter) in a copper-coated Kapton\footnote{E. I. du Pont de Nemours and Company, Wilmington, DE, USA} foil,
which is held at a fixed potential to attract the drifting electrons \cite{grupen}. The high field gradients in the vicinity of the holes,
shown in Figure \ref{fig:gemfield}, induce an avalanche of secondary electrons to multiply the initial ionization signal by factors on the order of 1000.
Several such foils may be stacked together with order millimeter spacing and stepped potentials to further multiply the signal while avoiding high potential
gradients across the whole detector which could cause unwanted discharge. The final multiplied electron signal is typically collected on a readout
plane beyond the final multiplication foil, which is arranged in a pixel or strip layout to provide precise hit position information by
leveraging the high statistics of the position distribution of the arriving electrons.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.75\textwidth]{figures/gemfield.png}}
\caption[Electric field in the vicinity of a GEM foil]{Electric field in the vicinity of holes of a GEM foil.
Note the high field gradients in the immediate vicinity of the small holes. (Figure reproduced from \cite{Sauli20162}.)}
\label{fig:gemfield}
\end{figure}
The OLYMPUS GEM detectors were designed at the MIT-Bates Linear Accelerator Center and assembled at Hampton University. Each detector unit
consisted of three GEM foils (with the amplification holes) between a cathode foil and a readout foil. The total active area of each detector
was $100\:\text{mm}\times100\:\text{mm}$. The readout board and GEM foils were spaced 2 mm from one another, with a 3 mm gap between
the cathode and the first multiplication foil, the arrangement of which is shown in Figure \ref{fig:gemex}. The five foils were enclosed in a pressurized volume filled with a premixed Ar:CO$_2$ 70:30
gas mixture such that none of the foils were subjected to a pressure gradient that could cause deformation. The GEM foils, manufactured by
TechEtch\footnote{TechEtch, Inc., Plymouth, MA, USA}, consisted of \SI{50}{\micro\meter}-thick Kapton coated with \SI{5}{\micro\meter} of
copper on each side. The holes in the foil were \SI{70}{\micro\meter} in diameter, arranged in a triangular pattern over the entirety of the
foil with \SI{140}{\micro\meter} spacing. The readout boards (also from TechEtch) consisted of strips and pads of copper arranged to provide
two dimensional information from charge collection on a Kapton substrate. The pattern of strips and pads on a readout foil is shown in Figure
\ref{fig:gemro}. The pitch of pads/strips along each direction was \SI{400}{\micro\meter}
and the relative sizes of the strips and pads was optimized to balance the charge sharing between the strips and pads. Each readout board
had 250 strip/pad channels along each of the two plane dimensions.
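A common way to exploit this strip-level charge information is a charge-weighted centroid over a cluster of neighboring strips; the Python sketch
below illustrates the idea for one readout dimension. The clustering, thresholds, and channel mapping are omitted here, and the dedicated OLYMPUS
hit-finding algorithm discussed in Section \ref{sec:12lumi} is more involved.
\begin{verbatim}
# Sketch: a charge-weighted centroid for locating a GEM hit along one readout
# dimension from the per-strip ADC values of a cluster of strips. The pitch
# matches the readout described above; the charges below are illustrative.

STRIP_PITCH_MM = 0.4

def cluster_centroid_mm(strip_indices, strip_charges):
    """Return the charge-weighted hit position (mm) of a strip cluster."""
    total = sum(strip_charges)
    if total <= 0:
        return None
    weighted = sum(i * q for i, q in zip(strip_indices, strip_charges))
    return STRIP_PITCH_MM * weighted / total

# Example: charge shared over three neighboring strips.
print(cluster_centroid_mm([120, 121, 122], [350.0, 900.0, 410.0]))
\end{verbatim}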
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.95\textwidth]{figures/gem_explode.png}}
\caption[Exploded view showing the components of an OLYMPUS GEM detector]{ Exploded view schematic of one of the OLYMPUS triple-GEM detectors,
showing the component layers from the readout electronics and foil on the left, the three GEM foils with their support frames in the middle, and the
cathode foil and pressure containment layer on the right \cite{Milner:2014}.}
\label{fig:gemex}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.8\textwidth]{figures/gemreadout.png}}
\caption[Micrograph of the GEM detector readout plane pattern]{ Micrograph of the GEM readout plane layout, showing the perpendicular strip/pad
pattern with the pads connected by vias beneath the strips. Note the small defect near the center, a short between a pad and
a strip that was repaired by breaking the connection \cite{kohl1}.}
\label{fig:gemro}
\end{figure}
The signals from the readout pads and strips of a single plane were routed to four readout cards (two per dimension), each with 128 channels (leaving 12 total
unused channels on each GEM detector). Each card included an APV25-S1 analog pipeline chip \cite{French:2001xb}, which sampled each of the 128 channels
at a 40 MHz rate. When triggered, the pipeline was read out in a single multiplexed data line to the data acquisition system managed by a multi-purpose
digitizer (MPD) \cite{Musico:2011lia}. The digitizer included VME-based fast ADCs and a field-programmable gate array (FPGA) for the creation
and readout of the GEM's data signal.
The analysis of the GEM detector data, including the development of a new dedicated hit-finding algorithm for the detectors, is discussed in
Section \ref{sec:12lumi}. While the GEM detectors were not used in the final analyses of this work (as discussed in Section \ref{sec:hahahaha}),
an effort was undertaken to understand their output for the purposes of using GEM data for calibration when possible and to facilitate their
possible use in future experiments.
\subsection{Multi-Wire Proportional Chambers (MWPCs)}
\label{ss:mwpcdet}
The remaining three tracking planes in each telescope consisted of multi-wire proportional chambers (MWPCs), constructed at the Petersburg Nuclear Physics Institute (PNPI). Each detector
consisted of three planes of anode sense wires, each with a cathode wire plane on each side at 2.5 mm spacing. Similar to the drift chambers in principle,
particles passing through the MWPC planes
induced ionization in the gas filling the detectors. This created electrical signals on the sense wires as the electrons and ions drifted towards the wires, which
were held at high voltage. For the MWPCs, however, no timing or other signal information was maintained and thus only hit/no hit information
was available from each wire. The sense wires in each of the three
anode planes were \SI{25}{\micro\meter} in diameter, consisted of gold-plated tungsten, and were spaced by 1 mm along the plane. The cathode wires
were \SI{90}{\micro\meter}-diameter beryllium bronze wires and were spaced by 0.5 mm. The middle anode wire plane was oriented so that the wires
were vertical, while the wires in the other two planes were oriented at $\pm30^\circ$ from vertical to provide two-dimensional reconstruction information,
while prioritizing the resolution in the horizontal position as it was most important to momentum reconstruction. The wires were connected to a CROS-3
readout system, detailed in Reference \cite{Uvarov:cros3}, that interfaced with the main OLYMPUS DAQ.
The anode planes were contained within a pressure volume filled with an Ar:CO$_2$:CF$_4$ 65:30:5 mixture to provide the medium for ionization.
This gas mixture and the anode plane operating voltage (3200 V) were chosen by consideration of simulations of the detectors
using the GARFIELD gas ionization and drift software package \cite{Veenhof:1998tt} and experience operating similar detectors at the HERMES
experiment \cite{Andreev:2001kr}.
Hit information from the MWPCs consisted of ``yes/no''-type decisions from the wires in each plane, which limited their resolution but greatly
simplified their operation and readout. Combining the information from all three planes provided better hit resolution in the horizontal direction and strong
noise rejection. The performance of the MWPCs and analysis of their data, including their critical role in the determination of the 12{$^\circ$} luminosity estimate,
are discussed in detail in Section \ref{sec:12lumi}.
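As an illustration of how the three planes can be combined, the following Python sketch reconstructs a 2D hit position from the coordinates
measured by the $-30^\circ$, vertical, and $+30^\circ$ planes, using the vertical plane as a consistency check; the sign conventions and
tolerance are assumptions for illustration only.
\begin{verbatim}
# Sketch: combining hits from the three MWPC anode planes (wires at -30, 0,
# and +30 degrees from vertical) into a 2D hit position. The coordinate
# measured by a plane is taken perpendicular to its wire direction; the
# conventions and tolerance below are illustrative assumptions.

import math

ANGLE = math.radians(30.0)

def mwpc_hit_2d(u_minus, u_vertical, u_plus, tolerance=1.0):
    """Combine measured coordinates (mm) from the -30 deg, vertical, and
    +30 deg planes into (x, y); return None if the planes disagree."""
    x = (u_plus + u_minus) / (2.0 * math.cos(ANGLE))
    y = (u_minus - u_plus) / (2.0 * math.sin(ANGLE))
    # The vertical plane measures x directly; use it to reject noise
    # combinations that do not form a consistent triple.
    if abs(x - u_vertical) > tolerance:
        return None
    return x, y
\end{verbatim}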
\section{Symmetric M{\o}ller/Bhabha Scattering Luminosity Monitor}
As a second source of luminosity monitoring, OLYMPUS included a forward ($\theta\approx 1.3^\circ$) calorimeter system designed for the detection of
symmetric M{\o}ller/Bhabha (SYMB) scattering events. The detector was designed to register scattering events between
beam leptons and the atomic electrons of the hydrogen gas in the target in which
the outgoing particles scatter at symmetric angles relative to the beam direction.
Between the two beam species, three principal processes contribute to this measurement:
\begin{enumerate}
\item elastic $e^-e^-\rightarrow e^-e^-$ scattering (M{\o}ller scattering) \cite{moller},
\item elastic $e^+e^-\rightarrow e^+e^-$ scattering (Bhabha scattering) \cite{bhabha}, and
\item $e^+e^-\rightarrow \gamma\gamma$ annihilation.
\end{enumerate}
Additionally, and critically to the final method of luminosity determination from this system as described in Section \ref{sec:symblumi},
the detector also registered the very far forward leptons from {$e^\pm p$} events. As particles entered the SYMB detector, they induced \v{C}erenkov radiation,
which was detected by the photomultipliers that instrumented the system. At such small $\theta$, the cross sections for these processes
are very large (on the order of a barn/sr), and thus the SYMB calorimeter was subject to very high rates. Consequently, it operated at
a much higher rate than the rest of the detector, internally counting events using a digital histogramming system whenever
energy was registered in one or both sides of the calorimeter. The nature of these histograms and the conditions under which they are filled
are described in Section \ref{sec:symbro}. This section covers the essential details of the SYMB and its operation,
while Reference \cite{PerezBenito20166} provides a full description of the system. The analysis of the SYMB data and the resulting
luminosity estimate is summarized, although not detailed, in Section \ref{sec:symblumi}.
Figure \ref{fig:fordet} shows the placement of the SYMB system near the beamline and approximately three meters from the target chamber,
while Figure \ref{fig:symbsc} shows a more detailed schematic of the components of the system. The detectors were constructed at
Johannes Gutenberg-Universit\"{a}t using expertise and materials from the A4 experiment \cite{KOBIS1998625,A4M}. Two calorimeters were
constructed, each consisting of a $3\times 3$ array of PbF$_2$ crystals. Each crystal was approximately $26\:\text{mm}\times26\:\text{mm}\times160\:\text{mm}$
in size. In the $3\times 3$ array, the long dimension of the crystals was placed parallel to the direction of incoming particles and corresponded to
approximately 15 radiation lengths. The 78 mm square cross section of the array contained approximately two Moli\`{e}re radii \cite{Baunack:2011pb}.
Examples of the crystals and one of the SYMB nine-crystal assemblies are shown in Figure \ref{fig:crys}.
Since PbF$_2$ is a pure \v{C}erenkov radiator, the response time
of the crystals is extremely fast (as there is no scintillation delay), making it an ideal material for a high-rate electromagnetic calorimeter
\cite{ANDERSON1990385}. Millipore reflective paper was wrapped around each individual crystal to increase the contained light yield.
Each of the 18 crystals was instrumented with a
Philips\footnote{Koninklijke Philips N.V., Amsterdam, the Netherlands} XP 29000/01 PMT to collect light produced in the crystals, chosen for their
fast (20 ns) response time. This response time was the limiting factor in the SYMB operation rate, and thus the detector could operate at up
to 50 MHz. Each PMT was connected to a dedicated ADC channel to measure the effective energy deposited for an event.
Lead collimators, 100 mm thick with 20.5 mm diameter holes, were placed in front of each array to shield the crystals and
define the acceptance for symmetric events. Since the collimator holes defined the acceptance of the calorimeters, they were carefully
surveyed.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/figmoller.pdf}}
\caption[Schematic of the SYMB calorimeter system]{Schematic overhead view of the SYMB detector system, showing the approximate placement
of the collimators, crystal arrays, and shielding boxes of the detector. Note that the horizontal scale of the figure is broken in the
beamline region to illustrate the placement of the detector relative to the target \cite{Milner:2014}.}
\label{fig:symbsc}
\end{figure}
\begin{figure}[thb!]
\includegraphics[width=0.49\textwidth]{SYMB_Crystals.jpeg}
\includegraphics[width=0.49\textwidth]{SYMB_Calor.jpeg}
\caption[Photographs of the SYMB calorimeter crystals and assembly]{Three of the PbF$_2$ crystals of the SYMB detector (left) and one of the nine-crystal calorimeter
assemblies instrumented with PMTs (right) \cite{Milner:2014}.}
\label{fig:crys}
\end{figure}
\section{Slow Control System}
\label{sec:sc}
Elements of the experiment slower than the event trigger rate were controlled by a slow control system that monitored, set, and recorded several
parameters at all times during the operation of the experiment. These parameters included voltages set via communication with high voltage supplies,
temperature readings, the target gas supply system settings, DORIS beam parameters, and a variety of other settings. The software linking the slow
control system to the hardware elements of the experiment was implemented using the Experimental Physics and Industrial Control System (EPICS) \cite{epics}.
In addition to the EPICS system, a server was implemented in Python using the Flask micro-framework\footnote{\url{http://flask.pocoo.org/}}.
Finally a JavaScript- and HTML-based graphical interface for this server provided the ability to control the system from any computer with secure access
to the internal network at DESY (and appropriate logins for the OLYMPUS system). The software was run on three dedicated Linux VME computers, with appropriate interface
cards for the various hardware elements of the experiment. The user interface included the ability to change settings such as voltages,
gas flow, vacuum controls, etc., real-time plots and warnings regarding the operational conditions of the detector and beam,
as well as a variety of presets for all parameters appropriate for different running conditions. This made the experiment
easy to monitor and operate, not only from the control room at DORIS (which was staffed at all times during the experiment), but from anywhere with Internet connectivity.
Additionally, the slow control continuously sampled readout information from various systems and recorded it in a PostgreSQL database, which was synchronized
with the data acquisition system so that the readout history could be associated with the recorded detector data.
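To give a flavor of the architecture, the sketch below shows a minimal Flask endpoint bridging a web request to an EPICS process variable using
the pyepics bindings; the route, PV name, and overall structure are hypothetical and do not reproduce the actual OLYMPUS server code.
\begin{verbatim}
# Sketch: a Flask endpoint bridging a web interface to an EPICS process
# variable, in the spirit of the OLYMPUS slow control server. The PV name
# and route are hypothetical placeholders.

from flask import Flask, jsonify, request
from epics import caget, caput  # pyepics channel-access bindings

app = Flask(__name__)

TARGET_FLOW_PV = "OLY:TARGET:MFC1:FLOW"  # hypothetical PV name

@app.route("/target/flow", methods=["GET", "POST"])
def target_flow():
    if request.method == "POST":
        # Set a new flow setpoint (sccm) from the web interface.
        caput(TARGET_FLOW_PV, float(request.form["sccm"]))
    # Always return the current readback so the GUI can update its display.
    return jsonify({"flow_sccm": caget(TARGET_FLOW_PV)})
\end{verbatim}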
\section{The OLYMPUS Trigger and Readout}
\label{sec:trig}
The relatively high luminosity associated with the fixed-target nature of the OLYMPUS experiment, in conjunction
with the fact that a complete detector readout required much more time ($\sim$0.1 ms) than the beam bunch spacing, required
the experiment to employ a trigger system to select events of interest for readout from the fast detector system information of the experiment
and reject events that would not correspond to elastic {$e^\pm p$} events.
The primary trigger signals were the discriminated signals from the ToF scintillators and the 12{$^\circ$} telescope scintillators, although information
from the drift chambers, lead glass calorimeters, and DORIS accelerator information also contributed to the system. Notably, the DORIS
bunch clock provided the reference time signal for the ToF and drift chamber TDCs. When a trigger condition was met, information from all
relevant detector channels was read out and the system was gated (i.e., prevented from collecting further events) until the readout was complete.
The time the trigger was closed for readout amounted to ``dead time'', which was typically $\sim$30\% during normal OLYMPUS
operation. Thus, in order to maximize the usefulness of the dataset, the trigger was implemented to increase
the fraction of recorded events that included reconstructible elastic {$e^\pm p$} events, while simultaneously providing a sufficient number
of events for various testing and calibration purposes.
In addition to the main trigger, which focused on selecting elastic {$e^\pm p$} events, a number of other triggers provided more open configurations at a prescaled rate. The trigger was implemented
using a VME FPGA, which permitted the combination of up to 16 fast input signals to produce up to 16 output trigger conditions
in parallel that could be passed to the data acquisition system. Additionally, each of the output conditions could be prescaled to reduce the rate at which
an individual condition produced a detector readout below the natural rate of occurrence for that condition. This was useful for certain test trigger conditions, which
would have dominated the data sample without prescaling. The critical elements
of the trigger system and conditions are described in the following subsections.
\subsection{Main Kinematic Trigger and Second-Level Drift Chamber Trigger}
The main OLYMPUS trigger combined information from the ToF scintillators and the drift chambers to produce events with an
enhanced purity of elastic {$e^\pm p$} events. The basic trigger condition required that a ToF bar on each side of the detector
recorded a hit (defined as coincidence between the top and bottom PMTs of each bar), such that the left and right bar pairing
corresponded to a conceivable left/right pairing for the kinematics of elastic scattering from the gas target. The map of such
allowed pairings is shown in Figure \ref{fig:trigcon}. The allowed left/right pairings were determined by Monte Carlo simulation of both
{$e^- p$} and {$e^+ p$} events. All pairings that occurred for elastic events in the simulation of either species were allowed for all runs,
and the allowed block of bars corresponding to each bar on the opposite side was extended by a buffer of
one bar on each end to avoid rejecting good elastic {$e^\pm p$} events.
The primary advantage of this system was that it disallowed events that paired very forward bars on each side, since these bars
had the highest hit rates but kinematically could not correspond to elastic events. This trigger was not prescaled (i.e., it was allowed to
induce data readout at its natural rate of occurrence).
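In software terms, the kinematic condition amounts to a lookup of allowed left/right bar pairings of the kind sketched below in Python;
the allowed ranges shown are invented placeholders, whereas the real map was derived from elastic {$e^\pm p$} Monte Carlo with the one-bar
buffer described above.
\begin{verbatim}
# Sketch: applying a kinematic left/right ToF pairing map of the type shown
# in the trigger map figure. The allowed-bar ranges below are invented
# placeholders; the real map came from elastic e+-p simulation.

ALLOWED_RIGHT_BARS = {
    # left bar index -> right bar indices allowed in coincidence
    0: range(10, 18),
    1: range(9, 17),
    # ... one entry per left bar, filled from simulation
}

def kinematic_trigger(left_bar, right_bar):
    """True if this left/right ToF coincidence is an allowed elastic pairing."""
    allowed = ALLOWED_RIGHT_BARS.get(left_bar)
    return allowed is not None and right_bar in allowed
\end{verbatim}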
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.5\textwidth]{figures/triggermap.pdf}}
\caption[Map of allowed OLYMPUS trigger conditions]{Map of the corresponding left and right fast trigger signal pairs from the OLYMPUS
ToF bars (labeled by index) and 12{$^\circ$} telescope scintillators (labeled L12 and R12, for left and right) that would result in a detector
readout (green) versus event rejection (red) for the main kinematic and 12{$^\circ$} luminosity triggers. This configuration was used for the
majority of production data-taking runs.}
\label{fig:trigcon}
\end{figure}
Additionally, during Run I it was determined that the restrictions of the kinematic left/right ToF pairings in the trigger still
resulted in a high number of events with insufficient data in the drift chambers to permit reconstruction. Thus, for the primary
production run, a ``second-level'' trigger element was added to the main trigger which required that at least one wire in each of the middle
and outer drift chamber regions on each side recorded a good time for an event to be accepted. Since events without such information could
not be reconstructed anyway, this condition did not reject any events that could be considered part of the final data sample. The condition
reduced the false trigger rate due to ToF noise, beam hall radiation conditions, etc. by approximately a factor of ten, which greatly
improved the dead time of the experiment to allow a much higher efficiency of luminosity collection.
\subsection{$12^\circ$ Telescope Trigger}
\label{ss:12dtrig}
The second primary element of the trigger system was the dedicated trigger for the 12{$^\circ$} telescopes, which required the coincidence
of both SiPM scintillator tiles (Section \ref{sec:sipm}) in one of the 12{$^\circ$} arms and a ToF hit (top/bottom PMT coincidence) in one of the back seven bars
on the opposite side. These conditions are shown as ``L12'' and ``R12'' in Figure \ref{fig:trigcon}. This trigger was also not prescaled
so as to produce the highest possible statistics for the 12{$^\circ$} elastic {$e^\pm p$} luminosity measurement. The system could also be triggered
using the lead glass calorimeters mounted behind the telescope, although this trigger was prescaled due to its high rate of undesired
events and was used predominantly for measurement of the SiPM tile efficiencies (Section \ref{sec:12eff}).
\subsection{SYMB Readout}
\label{sec:symbro}
The readout of the SYMB calorimeter was handled independently of the main trigger system due to the way in which the SYMB detector produced
digital histograms of events internally. More detail on this system may be found in Reference \cite{PerezBenito20166}. The detector was
designed to rapidly sum the light yield in the nine crystals on each side of the calorimeter, apply basic conditions to the recorded
yield in each side, and then record the event in a digital histogram if all conditions were met. Each digital histogram was
essentially a 2D histogram of the energy recorded by each side of the detector system (represented by the total ADC count).
The readout system was based on the similar
system used for the A4 experiment, where the crystals used in the SYMB detector were originally designed and used \cite{KOBIS1998625,A4M}.
The detector could operate at a rate up to 50 MHz, which was predominantly limited by the 20 ns response time of the PMTs used
to instrument the PbF$_2$ crystals. Two basic conditions were applied to the SYMB PMT signals to generate a readout in the digital histograms:
\begin{enumerate}
\item that the total energy deposited in the nine crystals of one of the calorimeters exceeded a given threshold, and
\item that the central crystal of the array meeting the threshold requirement have the largest recorded energy deposition
of the nine (the ``local max'' requirement).
\end{enumerate}
If either side of the detector met these requirements, both sides were read out to fill either the left- or right-master histogram
depending on which side met the conditions. If both sides simultaneously met the conditions, the coincidence histogram was filled.
These histograms were filled at a rate much higher than the main trigger rate, and thus the data acquisition system did not record
the histograms on an event-by-event basis. Instead, the integrated histograms were read out approximately every 70,000 events (or
at the end of a run) to limit the readout time spent on SYMB data acquisition.
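The filling logic described above can be summarized by the following Python sketch; the threshold, ADC-to-bin scaling, and histogram
dimensions are placeholder assumptions, and the real system implemented this logic in hardware at rates up to 50 MHz.
\begin{verbatim}
# Sketch: the SYMB digital-histogram filling logic written out in Python.
# Thresholds, ADC scaling, and binning are placeholders for illustration.

import numpy as np

N_BINS = 256
ADC_PER_BIN = 32.0   # assumed ADC counts per histogram bin
THRESHOLD = 500.0    # assumed threshold on the nine-crystal ADC sum

left_master = np.zeros((N_BINS, N_BINS), dtype=np.uint64)
right_master = np.zeros((N_BINS, N_BINS), dtype=np.uint64)
coincidence = np.zeros((N_BINS, N_BINS), dtype=np.uint64)

def side_passes(crystals):
    """Sum-threshold and central-crystal local-max conditions for one
    3x3 array of per-crystal ADC values."""
    return crystals.sum() > THRESHOLD and crystals[1, 1] == crystals.max()

def energy_bin(crystals):
    return min(int(crystals.sum() / ADC_PER_BIN), N_BINS - 1)

def fill(left, right):
    """Fill the appropriate digital histogram for one event."""
    i, j = energy_bin(left), energy_bin(right)
    l_ok, r_ok = side_passes(left), side_passes(right)
    if l_ok and r_ok:
        coincidence[i, j] += 1
    elif l_ok:
        left_master[i, j] += 1
    elif r_ok:
        right_master[i, j] += 1
\end{verbatim}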
\subsection{Additional Triggers}
\label{ss:addtrig}
Other triggers used during the experiment, typically designed to provide information for tests, calibration, etc. for the detectors,
were prescaled to reduce their rate of occurrence in the dataset or were used for occasional dedicated test runs. Such triggers included:
\begin{enumerate}
\item conditions involving looser requirements on combinations of ToF hits, including allowing hits with only one PMT firing in a bar (either
with or without a coincident hit in the opposite side of the detector), useful for calibration of the ToFs,
\item triggers allowing the readout of the 12{$^\circ$} telescopes with only registered SiPM or lead glass hits (i.e., with no requirement of
a ToF bar),
\item a ``clock'' trigger for random readout of the detector, and
\item dedicated triggers for recording of cosmic ray events or other special event topologies.
\end{enumerate}
Such triggers will be described as necessary in the description of any calibrations for which they were used. In general, these
special triggers were employed with varying prescale factors throughout the experiment, but the main triggers (the kinematic and
SiPM tile 12{$^\circ$} trigger) were run without prescaling for all data production runs.
\subsection{Readout}
The readout system was implemented by the Universit\"{a}t Bonn group, based on experience developing similar systems for operation
of the ELSA accelerator and its experiments \cite{Hillert2006,bonndaq}. The system ran on VME CPU modules, and operated as a ``synchronous system'';
when a trigger occurred all detectors were read out simultaneously and no further triggers were accepted until the readout was complete.
While such a system increased the dead-time of the experiment (the fraction of the running time in which the trigger was gated and events were
not accepted), the advantage of avoiding possible data synchronization errors was judged to be of sufficient benefit to the experiment to justify
the increased dead-time. The readout systems for the detectors were arranged in a master-slave architecture, in which a master module managed a number
of slave modules responsible for interaction with particular systems and handled the gating of the trigger until all slave modules completed their readout.
The data acquisition system collected all relevant detector readout data for a given event (ADC counts, TDC counts, slow control parameters, etc.), and produced
an output ZEBRA format file \cite{zebra}. This ZEBRA file was then converted to a ROOT tree format to facilitate easier access to the data \cite{Brun1997}.
More information on this system can be found in References \cite{Milner:2014} and \cite{bonndaq}.
\section{Experiment Operation}
During data-taking, a two- or three-person team operated the experiment 24 hours a day from the control room in the DORIS ring building, which was shared
with the DESY accelerator operators to facilitate communication between the OLYMPUS and accelerator operators. The OLYMPUS shift crew was responsible for operating
the slow control system and monitoring its various readouts, as well as continuously monitoring the data using low-level analysis that was conducted as data were taken.
A dedicated run plan dictated the configuration of the beam, detector, etc., to maintain stable running during the main production periods. The experiment
collected approximately 4.5 fb$^{-1}$ of data during Run II (which comprises the data set for this work), of which approximately 3.1 fb$^{-1}$ is considered
excellent data in which the configuration and condition of the detector were optimal. It is this latter dataset that is used in this work for the analyses presented, although
at a later date an effort may be made to include portions of the data not used. The collection of luminosity over time for Run II is shown in
Figure \ref{fig:lot}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/lumifall.pdf}}
\caption[Approximate integrated luminosity collected by OLYMPUS during Run II]{The integrated luminosity collected by the experiment, as measured by the slow control
(Section \ref{sec:sclumi}), during the second OLYMPUS run, separated by the various beam/toroid configurations \cite{Milner:2014}.}
\label{fig:lot}
\end{figure}
\chapter{Analysis Strategy, Detector Calibration, and Monte Carlo Simulation}
\label{Chap4}
In the original design of the OLYMPUS experiment \cite{tdr}, the plan for the operation and analysis of the experiment involved regularly switching the
polarity of the toroidal magnet and computing $R_{2\gamma}$ as a super-ratio of the experimentally measured elastic event counts $N_{e^\pm p,\pm B}$ in the four combinations of
species and toroid polarity, normalized to the luminosity collected for each orientation:
\begin{equation}
R_{2\gamma} = \frac{\sigma_{e^+p}}{\sigma_{e^-p}} \approx \sqrt{ \frac{N_{e^+p,+B}N_{e^+p,-B}}{N_{e^-p,+B}N_{e^-p,-B}}\cdot \frac{\mathcal{L}_{e^-p,+B}\mathcal{L}_{e^-p,-B}}{\mathcal{L}_{e^+p,+B}\mathcal{L}_{e^+p,-B}}}.
\label{eq:idrat}
\end{equation}
Measuring $R_{2\gamma}$ using the method of Equation \ref{eq:idrat} confers several advantages, primarily relating to the cancellation of systematic effects due to the differences
in the relative acceptance of {$e^+ p$} and {$e^- p$} events due to physical detector bounds and detector efficiency. This cancellation occurs because an {$e^+ p$} event of given kinematics traversing the
detector in positive field polarity takes the same path through the detector as an {$e^- p$} event with the same kinematics in negative polarity (and vice versa), while two such events
traversing the detector in the same field polarity take slightly different paths through the system due to their opposite bending, and thus are subject to different acceptances,
detection efficiencies, etc. Since the acceptance of the OLYMPUS detector for exclusive {$e^\pm p$} events is primarily dominated by the acceptance of the lepton, the slight difference
in proton acceptances for {$e^- p$} and {$e^+ p$} events of the same vertex kinematics that does not cancel between the field polarity configurations was expected to be negligible.
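For concreteness, the evaluation of Equation \ref{eq:idrat} from luminosity-normalized yields can be written as the short Python sketch below;
the keying of the inputs by species and polarity is purely illustrative.
\begin{verbatim}
# Sketch: evaluating the super-ratio of the equation above from yields and
# luminosities in the four species/polarity combinations. The dictionary
# keys and any input numbers are illustrative only.

import math

def super_ratio(counts, lumis):
    """counts and lumis are dicts keyed by (species, polarity),
    e.g. ('e+', '+B'); returns the super-ratio estimate of R_2gamma."""
    num = (counts[('e+', '+B')] * counts[('e+', '-B')] *
           lumis[('e-', '+B')] * lumis[('e-', '-B')])
    den = (counts[('e-', '+B')] * counts[('e-', '-B')] *
           lumis[('e+', '+B')] * lumis[('e+', '-B')])
    return math.sqrt(num / den)
\end{verbatim}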
This approach, while ideal in principle, suffers from a number of complications that make it impractical:
\begin{enumerate}
\item In the negative field polarity M{\o}ller scattering electrons are swept into the first layers of the drift chamber tracking system, causing a saturation of
certain regions of the detector and spoiling the cancellation of tracking efficiency between the two polarities, as described in Section \ref{sec:tormag}.
\item The reversal of the magnetic field reverses the direction of the $\mathbf{E}\times\mathbf{B}$ ionization electron drift in the drift chambers, significantly changing the calibration of recorded drift
time-to-distance (TTD) from the wire at which a particle passed. This required change in detector calibration between polarities again spoils the ideal cancellation of efficiencies.
See Figure \ref{fig:cells} for an illustration of the effect on the drift lines and Section \ref{sec:ttd} for a description of the TTD calibration.
\item The negligibility of the different acceptance for protons from {$e^+ p$} and {$e^- p$} events of the same kinematics in opposite toroid polarities depends on having uniform efficiency for
track reconstruction throughout the acceptance. As will be discussed in Section \ref{sec:specperf}, this was not the case for OLYMPUS (and would likely be an impractical requirement due to the small likelihood
of individual channels, voltage supplies, etc. in a 954-channel system performing identically).
\end{enumerate}
For these reasons, it was determined that OLYMPUS could achieve better systematic uncertainties by taking data with a single toroid polarity setting, and making a detailed effort
to properly account for the acceptance differences of {$e^- p$} and {$e^+ p$} events in simulation.
Conducting a measurement of $R_{2\gamma}$ based on Equation \ref{eq:rat} demands careful consideration of all aspects of the detector system and the analysis that affect the relative
acceptance of {$e^- p$} and {$e^+ p$} events and proper implementation of such effects in the Monte Carlo simulation. This chapter describes the essential strategy used in the OLYMPUS
analysis to achieve this goal, including essential information on detector survey and calibration, a brief
introduction to event reconstruction in the main spectrometer, and a description of the advanced Monte Carlo simulation employed in the analysis. Chapters \ref{Chap5} and \ref{Chap6}
provide more detailed information on the specifics of the luminosity and main cross section ratio analyses, including the measurement and simulation implementation of effects such as detector
efficiencies, resolutions, etc.\ for individual detector systems.
\section{Calibration of the Spectrometer Position and Magnetic Field}
\label{sec:speccal}
Because OLYMPUS operated using a single field polarity, knowledge of the positions of the detectors, their solid angle coverage, and the magnetic field
throughout the detector volumes was crucial to properly determining the acceptance of the OLYMPUS detector for {$e^+ p$} and {$e^- p$} events. The acceptance of the detector was
determined by careful survey of both the detector positions and the magnetic field, and great care was taken to properly represent the results of these measurements in
the representations of the detector system used for the Monte Carlo simulation and particle track reconstruction. This section describes the essential methods of the position
and field surveys and their implementations in the OLYMPUS analysis.
\subsection{Detector Position Survey and Modeling}
\label{sec:detmod}
The primary optical surveys of the detectors, target chamber, and beamline elements were performed by the DESY survey and alignment group (MEA2) \cite{mea2}. The surveys were conducted
using a laser tracker system, which provided precise measurement of a polar and azimuthal angle relative to the mounted position of the tracker and a laser-ranged distance from the tracker
to reflective prism targets placed on all elements of the detector, beamline, support frames, detector hall walls and floors, etc. The position of the laser tracker relative
to the center of the OLYMPUS coordinate system (Section \ref{sec:conv}) could be reconstructed via measurements of known points throughout the detector hall, which were conducted regularly
throughout the survey and anytime the laser tracker was moved. This allowed reconstruction of any measured point in the OLYMPUS coordinate system from the information provided by
the laser tracker. All detector elements, the target chamber, and beamline elements were surveyed at least twice: once in late 2011 prior to Run I and again after Run II in Spring 2013.
Certain individual detector elements were surveyed more frequently in the interim to check for shifting positions.
Physical objects that were part of the detector, target system, beamline, etc. were modeled for the analysis using Geometry Description Markup Language (GDML), an XML-based language
for the description of geometries that is compatible with both the ROOT and GEANT4 frameworks \cite{gdml,Brun1997,Agostinelli:2002hh}. This allowed the use of a single geometry model
in all OLYMPUS analysis applications (simulation, track reconstruction, visualization, etc.), greatly reducing the possibility of geometric errors in the analysis. The solid models of experiment elements
in the geometry model were constructed on the basis of both survey data and original design specifications for the elements. The placement and orientations of the elements in the
geometry were determined from the survey data. The survey data were converted to coordinates in the OLYMPUS global coordinate system, and a global fit was performed for the corresponding
points on the modeled detector elements to determine their positions and rotations. Redundancy and overdetermined placements in the survey dataset allowed for the identification and removal
of inconsistencies and erroneous survey data. The final survey fit provided accurate placement of objects in the solid model to better than \SI{100}{\micro\meter} for most elements, although
some elements with less-determined survey data (such as the GEM planes) had slightly higher uncertainties \cite{bernauer3}. Complete details on the implementation of the OLYMPUS solid
model may be found in Reference \cite{oconnor}.
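As a purely illustrative sketch of how a single GDML geometry file can be shared between the ROOT and GEANT4 frameworks (the file name and surrounding code are hypothetical
placeholders and are not the actual OLYMPUS analysis code), consider:
\begin{verbatim}
// Illustrative sketch only: loading one shared GDML geometry model into both
// ROOT and GEANT4.  The file name "olympus.gdml" is a placeholder.
#include "TGeoManager.h"        // ROOT geometry manager
#include "G4GDMLParser.hh"      // GEANT4 GDML reader
#include "G4VPhysicalVolume.hh"

// ROOT side (e.g., for visualization or track reconstruction applications).
TGeoManager* LoadGeometryForRoot(const char* path)
{
    return TGeoManager::Import(path);  // builds the TGeo geometry from GDML
}

// GEANT4 side: the same file provides the world volume for the simulation.
G4VPhysicalVolume* LoadGeometryForGeant4(const G4String& path)
{
    G4GDMLParser parser;
    parser.Read(path);              // parses solids, logical volumes, placements
    return parser.GetWorldVolume(); // world volume handed to the run manager
}
\end{verbatim}
Loading the same description into every application is what guarantees that the simulation, the reconstruction, and any visualization see an identical geometry.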
\subsection{Beam Position}
At all times during OLYMPUS data-taking, the position of the DORIS beam was monitored by two beam position monitors (BPMs), placed slightly upstream and downstream of
the target chamber along the beamline. This provided a measurement of the central position of the DORIS beam at two points near the OLYMPUS target with an uncertainty
of $\sim$\SI{100}{\micro\meter}. The high degree of precision was obtained by conducting a detailed survey of the positions of the BPMs, as described in the
previous section, and then performing a series of calibration measurements after they had been removed from the beamline \cite{bpm}. The calibration was conducted by
mounting the BPMs with a current-carrying wire passed through them, which simulated the beam current. The position of this wire was varied and the BPM readout was matched
to the surveyed position of the wire. The wire position was varied well beyond the range of beam positions that occurred during OLYMPUS data-taking, but measurements were
focused on the regions most relevant to the experimental conditions. The surveyed wire positions were fitted in a similar fashion as the detector survey data so as to produce a mapping between
the BPM readout data and beam positions in the OLYMPUS global coordinate system. Additional tests, such as reversing the direction of current in the wire, were conducted so as to simulate the
oppositely-charged beam species to provide estimates of systematic effects.
\subsection{Magnetic Field Survey and Modeling}
\label{sec:magsur}
Since electrons and positrons bend in opposite directions in a magnetic field, precise knowledge of the OLYMPUS magnetic field was critical to determining the detector acceptance
for implementation in the simulation and tracking. To achieve this, a detailed survey of the OLYMPUS magnetic field was conducted in situ before moving the toroid coils after
the experiment. The detector elements (chosen to be non-magnetic and thus neither influence nor be influenced by the field) were removed, permitting access to probe all areas
within the volumes corresponding to the detector acceptance. This effort is described in detail in References \cite{Bernauer20169} and \cite{schmidt}, but is briefly summarized here.
The measurement was conducted by mounting a three-dimensional Hall probe on a system of translation tables and support brackets that allowed movement
of the probe throughout the volume relevant to OLYMPUS track trajectories, from the target chamber to the locations of the ToF scintillators (the outermost detectors). Positions were
systematically scanned in 50 mm steps in the inner tracking region (which affects trajectories most strongly) and 100 mm steps in the outer region in all three spatial dimensions. Approximately
36,000 points were surveyed. During this survey, the position of the Hall probe was constantly monitored using a system of theodolites so that its position at each measurement point could be reconstructed. After the measurements
were performed, a fitting procedure similar to that used for the detector elements was used to reconstruct the probe positions corresponding to the field measurements.
Since the implementation of the field for tracking and reconstruction requires knowledge of the field at all locations (rather than a grid of points), a model was developed
to compute the field at any location in the detector system. This model consisted of approximating the OLYMPUS toroid coils as filaments and computing the field due to the current using
the Biot-Savart law. This allowed the computation of the magnetic field at arbitrary locations and additionally provided the capability of numerically computing the spatial
derivatives of the field components. The placement of the toroid coils in the model was determined by initially allowing their positions to float and fitting their
positions and rotations so as to best fit the surveyed position/field points. This fit achieved an average residual of 18.7 G, and the agreement was in general much better throughout the critical
tracking volumes. The coil model, however, was not a good approximation near the coils (where the approximation of modeling the toroid elements as thin filaments broke down) and thus
the field was interpolated directly from the data in such regions.
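To illustrate the filament approach, the following sketch sums the Biot-Savart contributions of short straight current segments approximating a coil; it is illustrative only
(the segment coordinates and current are placeholders, not the surveyed OLYMPUS coil geometry):
\begin{verbatim}
// Sketch of a filament-model field calculation via the Biot-Savart law.
// The coil is approximated as a polyline of straight current segments.
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return {a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]};
}

// Field (tesla) at point p due to current I (amperes) along the polyline
// "filament" (points in meters), summing small straight sub-segments.
Vec3 biotSavart(const std::vector<Vec3>& filament, double I, const Vec3& p)
{
    constexpr double mu0_over_4pi = 1e-7;  // T*m/A
    Vec3 B{0.0, 0.0, 0.0};
    for (std::size_t i = 0; i + 1 < filament.size(); ++i) {
        const Vec3& a = filament[i];
        const Vec3& b = filament[i + 1];
        Vec3 dl{b[0]-a[0], b[1]-a[1], b[2]-a[2]};                  // segment vector
        Vec3 mid{0.5*(a[0]+b[0]), 0.5*(a[1]+b[1]), 0.5*(a[2]+b[2])};
        Vec3 r{p[0]-mid[0], p[1]-mid[1], p[2]-mid[2]};             // segment -> point
        double rmag = std::sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
        Vec3 dlxr = cross(dl, r);
        double f = mu0_over_4pi * I / (rmag*rmag*rmag);            // dB = mu0 I dl x r / (4 pi r^3)
        for (int k = 0; k < 3; ++k) B[k] += f * dlxr[k];
    }
    return B;
}
\end{verbatim}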
The model calculation, however, was too slow to be used directly for
simulating or reconstructing particle trajectories for the OLYMPUS analysis, and so a fast and precise
interpolation scheme was developed\footnote{The remainder of this section is predominantly reproduced from Reference \cite{Bernauer20169}, which was written by and describes
the work of the author.}. The coil model was used to pre-calculate the magnetic
field vector and its spatial derivatives on a regular $50$~mm~$\times$~$50$~mm~$\times$~$50$~mm grid covering the entire
spectrometer volume, so that the field could be interpolated between surrounding grid points.
The interpolation scheme had to balance several competing goals:
\begin{itemize}
\item minimizing the memory needed to store the field grid,
\item minimizing computation time for field queries, and
\item faithfully reproducing the coil model in both the field and its derivatives.
\end{itemize}
To achieve this, an optimized tricubic spline interpolation scheme was developed based
on the routine of Lekien and Marsden \cite{NME:NME1296}. For each point $P$ in the grid,
24 coefficients were calculated using the coil model (8 per component of the vector magnetic field):
\begin{multline}
C_{i,P} = \left\{ B_i, \pd{x}{B_i}, \pd{y}{B_i},\pd{x\partial y}{B_i},\pd{z}{B_i},\pd{x\partial z}{B_i},\pd{y\partial z}{B_i},\pd{x\partial y \partial z}{B_i}\right\}
\\ \text{for} \:\: i\in \left\{ x,y,z \right\}.
\end{multline}
For the interpolation, it is convenient to consider the grid in terms of boxes defined
by eight grid points, as shown in Figure \ref{fig:grid}, and define box-fractional
coordinates $x_f,y_f,z_f \in [0,1] $ parallel to the global axes spanning each box.
Each point in the grid is labeled with an integer index $j$, such that stepping from
point $P_j$ one unit in $x$ reaches point $P_{j+1}$. Stepping one unit in $y$ from
point $P_j$ reaches $P_{j+n_x}$, where $n_x$ is the size of the grid in the $x$ direction.
Stepping from point $P_j$ one unit in $z$ reaches point $P_{j+n_xn_y}$, where $n_y$ is the
size of the grid in $y$ direction. Then, a local tricubic spline can be defined for each
field component in the box:
\begin{equation}
B_i(x,y,z) = \sum_{l,m,n=0}^3 a_{i,lmn} x_f^ly_f^mz_f^n \:\:\:\: i\in \left\{ x,y,z \right\},
\end{equation}
where the coefficients $\left\{a_{i,lmn}\right\}$ are functions of
the set of the 64 parameters $\left\{C_{i,P}\right\}$, where $P$ is any of the eight grid points at
the vertices of the box.
This function is a 64-term polynomial for each box and is $C^1$ at the box boundaries.
The coefficients $\left\{a\right\}$ can be computed from the parameters $\left\{C_{i,P}\right\}$ following the
prescription in Reference \cite{NME:NME1296}. This prescription, however, requires three $64\times64$ matrix
multiplications per box. Once completed for a given grid box, these multiplications can be stored
for future use, but this adds to the size of the grid in memory, approaching a factor of 8 for
large grids.
\begin{figure}[hptb]
\centerline{\includegraphics[width=0.8\textwidth]{gridcube.eps}}
\caption[Grid box indexing scheme for the magnetic field interpolation]{A generalized box in the interpolation grid identified by its lowest-indexed grid point $P_j$, where $n_x$ and
$n_y$ are the $x$ and $y$ dimensions of the grid in units of grid points. \label{fig:grid}}
\end{figure}
To avoid these costs, the spline was refactored so that the parameters $C_{i,P}$ can be used
directly as coefficients. The resulting basis functions take the form:
\begin{gather}
f_0(x_i) = \left(x_i-1\right)^2 \left(2x_i+1\right) \\
f_1(x_i) = x_i\left(x_i-1\right)^2 \\
f_2(x_i) = x_i^2\left(3-2x_i\right) \\
f_3(x_i) = x_i^2\left(x_i-1\right)
\end{gather}
where $ i\in \left\{ x_f,y_f,z_f \right\}$. The interpolation then takes the form:
\begin{equation}
\label{eq:spline}
B_i(x,y,z) = \sum_{l,m,n=0}^3 b_{i,lmn} f_l(x_f) f_m(y_f) f_n(z_f) \:\:\:\: i\in \left\{ x,y,z \right\},
\end{equation}
where each coefficient $\left\{b_{i,lmn}\right\}$ is one of the parameters $C_{i,P}$. The correspondence
between $\left\{b_{i,lmn}\right\}$ and $C_{i,P}$ is shown in Table \ref{tab:coef}.
\begin{table}[hptb]
\tabcolsep=0.15cm
\begin{center}
\begin{tabular}{l | c c c c c c c c}
& $B_i$ & $ \pd{x}{B_i}$ & $ \pd{y}{B_i}$ & $\pd{x\partial y}{B_i}$ & $\pd{z}{B_i}$ & $\pd{x\partial z}{B_i}$ & $\pd{y\partial z}{B_i}$ & $\pd{x\partial y \partial z}{B_i}$ \\
\hline
$P_{j}$ & 000 & 100 & 010 & 110 & 001 & 101 & 011 & 111 \\
$P_{j+1}$ & 200 & 300 & 210 & 310 & 201 & 301 & 211 & 311 \\
$P_{j+n_x}$ & 020 & 120 & 030 & 130 & 021 & 121 & 031 & 131 \\
$P_{j+n_x+1}$ & 220 & 320 & 230 & 330 & 221 & 321 & 231 & 331 \\
$P_{j+n_xn_y}$ & 002 & 102 & 012 & 112 & 003 & 103 & 013 & 113 \\
$P_{j+n_xn_y+1}$ & 202 & 302 & 212 & 312 & 203 & 303 & 213 & 313 \\
$P_{j+n_xn_y+n_x}$ & 022 & 122 & 032 & 132 & 023 & 123 & 033 & 133 \\
$P_{j+n_xn_y+n_x+1}$ & 222 & 322 & 232 & 332 & 223 & 323 & 233 & 333 \\
\end{tabular}
\end{center}
\caption[Mapping of coefficients for the tricubic spline defined by Equation \ref{eq:spline}]{Mapping of the coefficients $\left\{b_{i,lmn}\right\}$ (defined in Equation
\ref{eq:spline}) to the field values and
derivatives at the grid points contained in the box with lowest-indexed point $P_j$. Entries
in the table are the values of $lmn$ corresponding to each combination of point and
coefficient on the interpolation box. \label{tab:coef}}
\end{table}
With this interpolation scheme, the procedure for querying the field map consisted
of determining the grid box containing the queried point, computing the box fractional
coordinates of the queried point in that box, and then applying the tricubic spline
interpolation for each of the three field components independently. Special care was
taken to optimize the speed and number of arithmetic operations in the routine (e.g., by pre-computing factors
such as the basis functions that are used repeatedly and by converting division operations to
multiplications whenever possible). Additionally, the coefficients for each grid
point were arranged in a single array so that, for any box in the grid, the 192 coefficients
associated with the 8 points of the box occurred in 16 contiguous blocks of 12 coefficients,
permitting rapid reading of the entire array and facilitating single instruction, multiple data (SIMD)
computing, further increasing the speed of field queries. This scheme provided a precise and fast field
implementation for the detector model for both simulation and track reconstruction.
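As an illustration of how the refactored basis functions are used, the following sketch shows the one-dimensional analogue of Equation \ref{eq:spline}: the interpolant is built
directly from the field value and derivative stored at the two grid points bounding the query, and the three-dimensional scheme applies the same basis functions as a tensor product
in $x$, $y$, and $z$. The code is illustrative only and is not the analysis implementation.
\begin{verbatim}
// One-dimensional analogue of the refactored spline interpolation.
#include <cmath>

// Basis functions from the text, evaluated at the box-fractional coordinate
// t in [0,1].
double f0(double t) { return (t - 1.0) * (t - 1.0) * (2.0 * t + 1.0); } // value at t = 0
double f1(double t) { return t * (t - 1.0) * (t - 1.0); }               // derivative at t = 0
double f2(double t) { return t * t * (3.0 - 2.0 * t); }                 // value at t = 1
double f3(double t) { return t * t * (t - 1.0); }                       // derivative at t = 1

// Interpolate B between grid points x0 and x0 + h, given the stored values
// B0, B1 and derivatives dB0, dB1 (with respect to x) at those points.
double interpolate1D(double x, double x0, double h,
                     double B0, double dB0, double B1, double dB1)
{
    double t = (x - x0) / h;  // box-fractional coordinate
    // Derivatives are rescaled by h because the basis is defined on [0,1].
    return B0 * f0(t) + dB0 * h * f1(t) + B1 * f2(t) + dB1 * h * f3(t);
}
\end{verbatim}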
\section{Event Reconstruction}
\label{sec:recon}
For exclusive elastic {$e^\pm p$} event reconstruction, data were combined from the drift chambers and ToF scintillators, in conjunction with the trajectory bending caused
by the magnetic field, to provide complete reconstruction of particle trajectories (the event vertex and momentum vector at the vertex). While track reconstruction
algorithms for these detectors existed from the BLAST experiment \cite{crawford}, OLYMPUS operated with higher particle energies, different background conditions, different drift gas properties, and
more demanding precision goals than BLAST. Thus, new tracking algorithms were required.
In essence, track reconstruction in the OLYMPUS detector amounted to finding the best solution for a particle trajectory given a set of loci derived from the drift chamber
and time-of-flight scintillator data, together with knowledge of the magnetic field throughout the volume of the detector. The locus of points corresponding to a ToF
hit consisted of a horizontal band of points across a bar with a good top-bottom PMT hit combination, reconstructed from the time difference of the hits in the top and bottom PMTs of the bar.
These ToF hits had relatively low resolution ($\sigma\approx 10$ cm in the vertical direction and limited to the width of the bar in the horizontal direction), but provided valuable information when
properly weighted in the track reconstruction. The locus of points from a wire chamber hit consisted of two lines parallel to the wire with a valid time recorded, approximately equidistant from the wire in the
plane parallel to the wire chamber faces. The locus consisted of two lines due to the ambiguity of a single drift time, which can correspond to a track passing either upstream or downstream of the wire. This
ambiguity was resolved for each wire hit through a combination of the 0.5 mm stagger between successive wire planes, use of hits in adjacent cells to limit a track location, and global fit information.
The conversion of the drift time recorded on a wire to the distance of the locus from the wire is in general a complicated problem, and is discussed in the next section.
Ideally, a reconstructed track in the OLYMPUS detector had valid wire times in all 18 wire planes and a good ToF hit fitting the trajectory suggested by those wire hits. Often a track had
multiple valid hits in a single wire plane due to the track crossing the boundary between two wire chamber cells and ionizing gas in the active regions of both, which provided a valuable constraint
on the location of the track by automatically resolving the wire-side decision for that plane. In practice, inefficiencies in the chambers caused a track to have fewer than 18 hits in the drift
chambers, but typically a track could be reconstructed with fewer hits as in the example event reconstruction shown in Figure \ref{fig:elase}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/elasticelectron.png}}
\caption[Event display of a reconstructed {$e^- p$} event]{A reconstructed elastic {$e^- p$} event where the electron (red track) was detected in the left drift chamber
and the proton (blue track) in the right drift chamber. The cyan lines in the ToF bars represent the loci of points derived from the PMT time difference, which were used to
estimate the vertical position of the ToF hit. The color-filled sections of the drift chamber planes indicate wires with valid times for the event, color-coded
by the number of wires with valid times in a given cell (green indicates all three wires in a cell fired, blue two wires, and red one). The scattering chamber and toroid coils have been
removed from the display for clarity.}
\label{fig:elase}
\end{figure}
\subsection{Time-to-Distance (TTD) Conversion for the Drift\\Chambers}
\label{sec:ttd}
The conversion of recorded drift times (effectively the elapsed time from the ionization of the gas in the drift chamber to the arrival of the signal on the wire after drift) to the
corresponding distance from the wire at which the track passed in the drift chambers (the time-to-distance (TTD))
was a complicated function of the electric fields generated by the wires, the magnetic
field in the vicinity of the individual wires, the angle at which the track passed relative to the normal of the relevant wire plane, and individual irregularities of the wires.
Due to the complexity of the problem, several models were tested for the TTD conversion. These models varied from models of very few parameters fit only to simulations of the drift
gas using the Monte Carlo frameworks GARFIELD and MAGBOLTZ \cite{Veenhof:1998tt,Biagi1999234} to spline models of hundreds of parameters fit iteratively to the experiment data.
Ultimately, it was found that a model between these two extremes yielded the best results, both in terms of final tracking resolutions and ability to properly reconstruct the maximum number
of tracks.
The goal of the TTD function is to convert the time recorded by a wire to the position in the plane of the wire at which the track passed. Figure \ref{fig:drift} shows an
example of a simulated track passing through an OLYMPUS drift chamber cell as viewed looking along the wires. Using the coordinate system specified in the figure, the goal of the TTD
is to reconstruct the position of the particle trajectory in the $y=1$ cm, $y=2$ cm, and $y=3$ cm planes. Assuming the ideal condition in which there are no noise times for a wire, the
earliest time on a wire will correspond to an ionization from the passing track. Even ignoring the stochastic nature of the points along the trajectory that have ionizations, the earliest
ionization point with the shortest drift time to the wire is in general not in the wire plane and is a function of the local magnetic field (which gives rise to the $\mathbf{E}\times\mathbf{B}$
drift in the local $y$ direction) and the angle relative to the local $y$ axis at which the track passes.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.9\textwidth]{figures/electrontrack.png}}
\caption[Simulated lines of electron drift in an OLYMPUS drift chamber cell]{Simulated lines of electron drift (yellow) from points of ionization along a particle trajectory (open green circles)
to the wires in a simulated OLYMPUS drift chamber cell using GARFIELD for the drift simulation and MAGBOLTZ for the determination of the drift gas properties \cite{Veenhof:1998tt,Biagi1999234}. The drift
lines are angled relative to the symmetric axes of the drift cell due to the $\mathbf{E}\times\mathbf{B}$ drift induced by the local magnetic field. The
three points at which drift lines converge correspond to the positions of the sense wires. The green dashed lines represent isochrons, lines of equal drift time to a given wire.}
\label{fig:drift}
\end{figure}
The basic behavior of drifting electrons in an OLYMPUS drift cell may be understood by considering the electric field the electrons experience as they traverse the cell, as shown in Figure
\ref{fig:cellEx}. For the bulk of the cell, the arrangement of wires provides an approximately uniform drift field of $\sim$600 V/cm, which, in conjunction with the resistance of the gas to drift, gives rise
to an approximately constant drift velocity for the electrons. In this region, the TTD function is approximately linear for all other conditions such as magnetic field strength and track incidence
angle held fixed. The TTD function must model different behaviors near the sense and ground wires. Near the sense wire, the drift field rapidly increases, causing the drift electrons to accelerate
towards the wire. This effectively compacts the drift distances from the region near the wire into a shorter range of times than in the linear region.
Near the ground planes, the relatively weak field causes electrons in this region to accelerate only slowly, spreading similar distances from the wire across a wide
range of drift times.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.00\textwidth]{figures/cellEx_projections.png}}
\caption[Calculated electric field strength for a simulated drift chamber cell]{Calculated electric field strength in the local $x$ direction for a middle wire in a simulated OLYMPUS drift cell. The wire
in question is at $x\approx12$ cm and the corresponding ground wires are at $x\approx8$ and $x\approx16$ cm. For the bulk of the cell the electric field is roughly constant, but changes rapidly in the
vicinity of the sense wires and ground planes.}
\label{fig:cellEx}
\end{figure}
The final TTD model used in the OLYMPUS analysis accounted for these effects by modeling distance from the wire as a function of recorded drift time as a cubic function near
the wire, a linear function in the bulk of the cell, and a steeper linear plateau near the ground planes. Additionally, this function was adjusted by the trigonometric factors
introduced by the angle of the Lorentz drift and the angle of the track relative to the wire plane. The function was given additional freedom along the length of each wire
by making all parameters of the function polynomials in the global $\phi$ of the track. More details on the derivation and exact forms of these functions may be found in
Reference \cite{schmidt}.
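The following sketch illustrates the piecewise structure described above (cubic near the wire, linear in the bulk, steeper linear plateau near the ground planes, with a simplified
incidence-angle correction). All parameter values are hypothetical placeholders; in the analysis the parameters were fit per wire side and varied as polynomials in the track $\phi$,
and the angular correction also folds in the Lorentz drift angle.
\begin{verbatim}
// Sketch of a piecewise time-to-distance function; all values are placeholders.
#include <cmath>

struct TTDParams {
    double tCubic   = 40.0;    // ns, end of the near-wire cubic region
    double tPlateau = 400.0;   // ns, start of the near-ground-plane plateau
    double c1 = 8.0e-3, c2 = -6.0e-5, c3 = 2.0e-7;  // cubic coefficients (cm/ns^k)
    double vBulk    = 5.0e-3;  // cm/ns, drift speed in the linear bulk
    double vPlateau = 1.2e-2;  // cm/ns, effective slope near the ground plane
};

// Distance from the wire (cm) for drift time t (ns) and track incidence
// angle alpha relative to the wire-plane normal.
double timeToDistance(double t, double alpha, const TTDParams& p)
{
    auto cubic = [&](double tt) { return p.c1*tt + p.c2*tt*tt + p.c3*tt*tt*tt; };
    double d;
    if (t < p.tCubic) {
        d = cubic(t);                                           // near-wire region
    } else if (t < p.tPlateau) {
        d = cubic(p.tCubic) + p.vBulk * (t - p.tCubic);         // linear bulk (continuous join)
    } else {
        d = cubic(p.tCubic) + p.vBulk * (p.tPlateau - p.tCubic)
            + p.vPlateau * (t - p.tPlateau);                    // steeper plateau
    }
    // Simplified geometric correction for the incidence angle only.
    return d / std::cos(alpha);
}
\end{verbatim}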
Rather than attempt to determine the parameters of the functions in each region based on the ideal physics of the drift cell (i.e., the Lorentz angle, the drift velocities,
the radius of the cubic acceleration region, the position of the start of the plateau region, etc.), the parameters were iteratively fit to data tracks.
This allowed considerably more freedom in accounting for imperfections in cell voltages, wire time offsets, and the effects of the magnetic field in three dimensions. This also
allowed for better handling of the fact that, as discussed in Section \ref{sec:thegdwcs}, the fraction of ethanol in the drift gas fluctuated and thus caused changes in the TTD relation.
To account for this, the data runs were examined by hand and grouped according to the width of their drift time distributions. A fit of the TTD model to a GARFIELD simulation was used as a seed for the
first iteration of tracking the data. Each individual wire was fit to two functions for the TTD on the upstream and downstream sides of the wire for each group to account for imperfections
in individual wires and cells. This seed was constructed for the TTD group whose distribution width was closest to the average among the groups; the iterated solution
for that group was then used as the seed for the groups nearest to it in width, those groups were iterated to produce a solution, which in turn seeded the next group, and so on. Each group was
iterated (tracked using the results of the previous TTD fit and then refitted to the TTD model) at least twice, after which it was found that the
fit residuals did not typically continue to improve. Typically, average residuals between the TTD function and the reconstructed track distances were on the order of a few tenths of a millimeter.
Examples of the resulting TTD functions are shown in Figure \ref{fig:ttdex}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.00\textwidth]{figures/cell_003.png}}
\caption[Example fitted TTD functions for two drift chamber cells]{Example TTD fit results for two cells that were near the downstream ends of the drift chambers in the same positions
left and right of the beamline for fixed track $\phi$. Note the spread in the TTD over several millimeters caused by the angle of incidence of the track. For this case, the functions in the
corresponding left and right cells are extremely similar, as would be expected in the absence of any imperfections in wire voltages, placement, etc.}
\label{fig:ttdex}
\end{figure}
\subsection{Track Reconstruction}
\label{sec:track}
With the TTD functions established, wire hit loci could be passed to a tracking algorithm for the full reconstruction of particle trajectories. This process is covered
in considerably more detail in References \cite{schmidt} and \cite{russell}, but is briefly described here for completeness. Multiple tracking algorithms were developed
for OLYMPUS, in part as a control on systematic uncertainties due to tracking efficiency and in part due to the challenging nature of OLYMPUS tracking (caused by the high
rates of noise hits in the inner portions of the drift chambers, the ambiguity of the upstream/downstream decision for hits on an individual wire, etc.).
The tracking algorithms used two common components, which were designed to reduce noise and increase the speed of track reconstruction. The first of these was
a pattern library of the combinations of wire hits that could reasonably correspond to a track of given kinematics so as to eliminate wires from consideration that
were not included in a possible track combination and to avoid attempting to track events in which no pattern was present \cite{DELLORSO1990436}. This pattern library was generated from a very large set of
simulated events extending throughout and beyond the kinematic ranges and particle species possible for the OLYMPUS running conditions. This additionally prevented tracks from events with combinations
of noise hits from mimicking elastic tracks that could contaminate the sample and were difficult to simulate. A toy example of a matched pattern and a rejected event are shown
in Figure \ref{fig:pattern}. To account for inefficiencies in the wire chambers, patterns were allowed a tolerance of missing one complete cell layer (i.e., one of the six) in matching
library patterns.
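A minimal sketch of a pattern-library lookup of this kind is shown below; the cell encoding, mask width, and handling of the missing-layer tolerance are illustrative assumptions
rather than the actual OLYMPUS implementation.
\begin{verbatim}
// Sketch: each candidate event is reduced to the set of drift cells with hits,
// encoded as a bit mask, and compared against a precomputed pattern library
// generated from simulated tracks.  The one-missing-layer tolerance is assumed
// here to be handled by also storing appropriately degraded patterns when the
// library is generated.
#include <cstdint>
#include <unordered_set>
#include <vector>

using CellMask = std::uint64_t;  // one bit per drift cell considered

// Returns true if the event's hit cells contain any library pattern.
bool matchesPattern(const std::vector<int>& hitCells,
                    const std::unordered_set<CellMask>& library)
{
    CellMask mask = 0;
    for (int cell : hitCells) mask |= (CellMask{1} << cell);
    // A pattern matches if all of its cells are present in the event mask.
    for (CellMask pattern : library)
        if ((mask & pattern) == pattern) return true;
    return false;
}
\end{verbatim}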
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.7\textwidth]{figures/toypattern.png}}
\centerline{\includegraphics[width=0.7\textwidth]{figures/nopattern.png}}
\caption[Toy example of the tracking pattern library function]{Toy example of tracking pattern matching for the OLYMPUS track reconstruction algorithms. In the top figure, the six cells
obvious to the eye as corresponding to a track form a pattern match, while the noise hits from the three cells to the left of the pattern were not passed to the track fitter. In
the bottom figure, no pattern was matched and thus no attempt was made to track the event \cite{oconnor1}.}
\label{fig:pattern}
\end{figure}
The second common component, known as \textit{Fasttrack}, was a model fit to simulation data that provided the positions in the wire planes corresponding to given initial track parameters. Other
tracking routines (including that used in the 12{$^\circ$} system (Section \ref{sec:12track})) iteratively simulate particle trajectories to minimize the residuals between the trajectory and
hit positions. Such an approach, however, would be much too slow for the OLYMPUS reconstruction. To avoid the need of iteratively propagating simulated trajectories, a dense library of simulated
events was generated and a generalized spline of the initial simulated track parameters was fit to the positions of the trajectories in each wire plane. Then, when reconstructing tracks, the
spline function was queried for a given set of kinematic parameters to interpolate the resultant wire plane locations of the trajectory. In addition to providing much higher speed than
simulated propagation, the spline function also allowed the computation of derivatives with respect to track parameters, which is useful for many tracking algorithms.
With these tools in place, multiple algorithms were considered for the final fit of trajectories to data. The method used for the main final analysis was based on the
Elastic Arms Algorithm (EAA) \cite{OHLSSON1,OHLSSON2}. The essential principle of the algorithm is to begin with template tracks (i.e., the ``arms'') and deform them through
an iterative effective temperature cooling procedure to produce a global best fit to the data. Numerous additions to this procedure were made in order to optimize the algorithm for the purposes
of OLYMPUS, which are discussed in References \cite{schmidt} and \cite{russell}. In general, this algorithm performed well for OLYMPUS, which is discussed quantitatively in
Section \ref{sec:reconrev}.
The other algorithms created for OLYMPUS included early iterations in which iterative simulated propagation was used (the predecessor of the 12{$^\circ$} system tracking algorithm
discussed in Section \ref{sec:12track}) and approaches involving a local fitting of track elements to collections of hits to then build complete trajectories. While these algorithms
were used as checks of the main EAA algorithm, they were not used for the final analyses presented in this work and thus are not discussed further here.
\section{The OLYMPUS Monte Carlo Simulation}
\label{sec:sim}
As noted previously, the analysis strategy of OLYMPUS required that the simulation be an accurate and detailed representation of the experiment and the experimental
conditions in all conceivable ways. In particular, the results of simulation were converted to the format of the raw data (i.e., TDC counts, ADC counts, etc.) after application
of the relevant resolutions on such quantities and then reconstructed using exactly the same methods as used on raw data. In this way, biases from simulation approximations
were minimized, providing a robust framework for the comparison of data and simulation. This section describes the procedure by which ``simulated raw data'' were produced from
the Monte Carlo framework; the reconstruction and analysis proceeded as described for the raw data for all elements of the experiment.
One of the notable advantages of this approach is that it permits a complete treatment of the effects of radiative corrections on the final data analysis, since the size of the radiative
corrections for any experiment depends on the acceptance and resolution of the detector systems used and the way in which events are selected from the data sample.
This arises from the fact that radiative events are only distinguishable from
purely elastic events if they change the momentum of a particle from its elastic value by an amount that the detector can resolve.
Additionally, since radiative events may alter trajectories, they may be pushed into or out of the acceptance or into regions of different detection efficiency
relative to the pure elastic trajectory. The OLYMPUS analysis accounts for all such effects by fully representing the detector in simulation, including its physical acceptance, efficiencies,
and resolutions and by applying identical analyses to the data and simulation so that the effects of the choices made in the selection of elastic events are equally represented in data and
simulation. Although full Monte Carlo treatment of radiative corrections is common in modern higher energy experiments, most existing treatments of radiative corrections for {$e^\pm p$} scattering
are designed for single-arm (inclusive), high momentum resolution experiments and only allow for an adjustment of the size of the correction based on a single event selection parameter (typically
effectively amounting to the deviation of the lepton energy from the purely elastic energy) \cite{PhysRev.122.1898,MeisterPhysRev.130.1210,MaximonPhysRevC.62.054320,MoRevModPhys.41.205,PhysRevC.64.054610}.
The full Monte Carlo method used for OLYMPUS provides a higher level of confidence in the proper handling of radiative effects,
especially those that are opposite in sign for electrons and positrons, than classical methods by allowing radiative corrections to be properly treated in an exclusive event selection
and reducing the uncertainties associated with the single-arm momentum resolution of the experiment. The VEPP-3 TPE experiment utilized a similar approach to radiative corrections
as OLYMPUS and developed an {$e^\pm p$} radiative event generator \cite{vepp3PhysRevLett.114.062005,esepp}, while the CLAS TPE experiment applied a more classical approach
to radiative corrections to their data \cite{ass,PhysRevC.64.054610}.
\subsection{Procedure}
The procedure of simulating events and producing simulated detector data proceeded in three steps:
\begin{enumerate}
\item generation of event vertices (initial positions and momenta of particles in the event),
\item propagation of all generated particles through the detector geometry using GEANT4 with all relevant physics processes activated \cite{Agostinelli:2002hh}, and
\item digitization of the energy depositions, positions, etc. recorded by GEANT4 into ``raw data'' quantities.
\end{enumerate}
The propagation step of the procedure utilized the well-verified physics processes simulated by GEANT4, and depended critically on the detailed representation of the detector in the simulation
as discussed in Section \ref{sec:speccal}. The event generation and digitization procedures were developed particularly for the purpose of the OLYMPUS simulation and are
discussed in the following sections.
\subsection{Event Generation}
\label{sec:gen}
The first step in the OLYMPUS simulation was to generate event vertices by a Monte Carlo procedure. In general, this amounted to specifying a physical process to occur in the
detector system, drawing an event vertex from a specified target distribution, and drawing the momenta of all particles involved in the process from the distributions relevant
to the physics process. Options for the target distribution included realistic distributions generated from simulation of the target system (Section \ref{sec:tarsim}), fixed-vertex generation,
events generated isotropically in the target region, and approximations to the simulated target distribution. Options for the particle/momentum distributions included purely elastic
{$e^\pm p$} (Born approximation) kinematics, various test generators with isotropic momentum/angle distributions, an implementation of the ESEPP generator produced by the VEPP-3
TPE experiment \cite{esepp}, a new radiative {$e^\pm p$} generator developed for OLYMPUS (see next section), as well as generators for $e^\pm e^-$ events for the SYMB system that
included a new radiative generator for such processes \cite{spuds}.
Notably, most of the OLYMPUS generators were \textit{weighted} in that each event in simulation carries a scalar weight, which represents its contribution to the integral over
all Monte Carlo events that is compared with data. This approach provides two significant advantages:
\begin{enumerate}
\item events may be drawn more isotropically in phase space and then weighted appropriately so as to reduce the statistical error of the simulation in areas
of phase space where the true cross section is small (such as high $\theta$ in {$e^\pm p$} scattering) without simulating far more events than necessary in high cross section regions, and
\item an event can carry multiple weights corresponding to different models of radiative corrections, proton form factors, physical approximations, etc., allowing a single sample of
simulation events to test multiple physics models simultaneously without propagating, digitizing, and reconstructing the simulation events multiple times.
\end{enumerate}
The first advantage allowed OLYMPUS to achieve a very high statistical precision on the simulation throughout the entirety of the accepted phase space so as to make the statistical uncertainty due
to the Monte Carlo effectively negligible next to the data statistical and systematic uncertainties. The second provided a rich platform for the use of various radiative corrections models,
form factor models, etc. with the OLYMPUS data, which simplifies the comparison of OLYMPUS results with previous experiments and provides a strong indicator of the systematic uncertainties
on the OLYMPUS results associated with such effects.
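A minimal sketch of this weighted-event bookkeeping is shown below: each generated event carries one weight per physics model, so a single pass over the sample yields one
weighted histogram per model. The field and model names are illustrative assumptions only.
\begin{verbatim}
// Sketch of multi-weight event accumulation; names are placeholders.
#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct GeneratedEvent {
    double leptonTheta;                     // generated lepton polar angle
    std::map<std::string, double> weights;  // e.g. "born", "mo_tsai", "maximon_tjon"
};

// Fill one histogram per weight model from a single pass over the sample.
void fillYields(const std::vector<GeneratedEvent>& sample,
                std::map<std::string, std::vector<double>>& hists,
                double thetaMin, double thetaMax, std::size_t nBins)
{
    for (const auto& ev : sample) {
        if (ev.leptonTheta < thetaMin || ev.leptonTheta >= thetaMax) continue;
        std::size_t bin = static_cast<std::size_t>(
            (ev.leptonTheta - thetaMin) / (thetaMax - thetaMin) * nBins);
        for (const auto& [model, w] : ev.weights) {
            auto& h = hists[model];
            if (h.size() != nBins) h.assign(nBins, 0.0);
            h[bin] += w;  // each model accumulates its own weighted yield
        }
    }
}
\end{verbatim}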
For production simulation, i.e., the simulation results designed for the direct data/Monte Carlo comparison, the target distribution was specified as the resulting distribution
from the molecular flow Monte Carlo simulation described in Section \ref{sec:tarsim} and the physics processes were determined by the OLYMPUS radiative event generator.
\subsubsection{The OLYMPUS Treatment of Radiative Corrections and Radiative Event Generator}
\label{sec:radgen}
As previously noted, a new radiative {$e^\pm p$} event generator was developed for use with the OLYMPUS simulation framework. The development, testing, and application of this
generator are discussed in great detail in References \cite{schmidt} and \cite{russell}, while only the essential details are provided here. This generator was designed
to implement a variety of prescriptions for radiative corrections and the proton form factor, while providing a direct interface with the OLYMPUS analysis framework. In general, previous prescriptions
for radiative corrections to elastic {$e^\pm p$} scattering took the basic form:
\begin{equation}
\text{d}\sigma_\text{exp.} = \text{d}\sigma_\text{Born}\cdot\left( 1 + \delta(\theta,\Delta E)\right),
\end{equation}
where $\text{d}\sigma_\text{exp.}$ is the experimentally measured elastic cross section, $\text{d}\sigma_\text{Born}$ is the Born (single-photon exchange) cross section, and $\delta$ is the
radiative correction factor, which is typically computed as a function of the scattering angle $\theta$ and the effective energy resolution for the distinction of
radiative events for the experiment $\Delta E$. Typically, results for elastic {$e^\pm p$} scattering were reported as the extracted Born cross section after the subtraction of the correction
$\delta$ for comparison between experiments. The first of the prescriptions for computing the correction $\delta$ was developed by Mo and Tsai in the 1960s \cite{PhysRev.122.1898,MoRevModPhys.41.205}.
Additionally, Meister and Yennie developed an approach based on the early work of Tsai that made additional approximations for the purposes of facilitating computation \cite{MeisterPhysRev.130.1210}.
These correction prescriptions were the standard for nearly four decades until Maximon and Tjon published a new prescription in 2000 that reduces the number
of approximations made relative to Mo and Tsai, accounts approximately for the structure of the proton via the introduction of the dipole form factor (Equation \ref{eq:dipff}),
and reformulates the contributions due to soft two-photon exchange \cite{MaximonPhysRevC.62.054320}. Each of these prescriptions was designed for single-arm (inclusive) experiments,
and is thus formulated to rely on a cut in the lepton $\Delta E$. A prescription developed by Ent \textit{et al.}\ in 2001 reformulates the procedure of Mo and Tsai to produce
a method that allows the calculation of $\delta$ as a function of the missing energy reconstructed from exclusive detection of the lepton and proton, extending the applicability
of such models to coincidence experiments \cite{PhysRevC.64.054610}.
An extension to models for elastic {$e^\pm p$} radiative corrections, first proposed by Yennie \textit{et al.}\ in 1961 \cite{YENNIE1961379}, replaces the $(1+\delta)$ correction with a factor
of $\exp(\delta)$, a procedure known as \textit{exponentiation}. As demonstrated in Reference \cite{YENNIE1961379}, this allows for the correction to account for the emission
of multiple soft photons (i.e., those that do not distinguishably change the event kinematics) and prevents the infrared divergence that occurs for the $(1+\delta)$ prescription, which
effectively only treats single photon emission. The soft approximation breaks down, however, as $\Delta E$ increases, necessitating a transition to consideration of such higher
$\Delta E$ events as hard bremsstrahlung processes.
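To make the distinction concrete, the exponentiated prescription replaces the first-order correction factor by its resummed form,
\begin{equation*}
\text{d}\sigma_\text{exp.} = \text{d}\sigma_\text{Born}\, e^{\delta} = \text{d}\sigma_\text{Born}\left( 1 + \delta + \frac{\delta^2}{2!} + \frac{\delta^3}{3!} + \dots \right),
\end{equation*}
so that the linear term reproduces the $(1+\delta)$ prescription while the higher-order terms correspond to the emission of multiple soft photons.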
The OLYMPUS radiative generator incorporates a wide variety of these prescriptions via the calculation of multiple weights. The details of the implementations of these
methodologies are discussed in References \cite{schmidt} and \cite{russell}, but the essential capabilities provided by the generator include the following:
\begin{enumerate}
\item implementation of the prescriptions of Mo and Tsai, Meister and Yennie, and Maximon and Tjon,
\item exact tree-level calculation of the bremsstrahlung matrix element (avoiding the soft photon approximation made by previous approaches),
\item separate weights for exponentiated and non-exponentiated corrections,
\item treatment of the vacuum polarization diagrams either from calculations including all leptons in the loop or using a data-driven approach (which, in principle,
includes all possible particles in the loop) \cite{vacpolweb,vacpolpres},
\item weights representing the Born and soft-photon approximations, and
\item proper application of radiative corrections for different proton elastic form factor models.
\end{enumerate}
These various assumptions are represented across multiple weights computed for each simulated event, allowing the OLYMPUS data to be analyzed under a wide variety of
radiative corrections models and physical assumptions, facilitating both comparison to previous elastic {$e^\pm p$} data and providing extensive insight into the systematic effects
of radiative corrections for the final OLYMPUS results.
The generator was tested extensively prior to use for the final OLYMPUS analysis, including comparisons to the ESEPP generator \cite{esepp} and comparisons to the model
of Maximon and Tjon \cite{MaximonPhysRevC.62.054320} in the appropriate regions of phase space, as described in References \cite{schmidt} and \cite{russell}. In general,
the OLYMPUS radiative generator performed extremely well under all tests and provided a robust platform for the analysis of OLYMPUS results.
\subsection{Digitization of the Detector System}
In general, every effort was made to model the elements of the OLYMPUS experiment in simulation so as to accurately represent the conditions under which the experiment operated.
This involved accounting for a number of factors via the generation of simulation events for each individual data run (each of which included $\sim$1$\cdot 10^{6}$ triggers) that
properly accounted for any effects such as beam position, target gas temperature, etc. that were subject to time variation. Such parameters were provided to the simulation from
the slow control data for each data run, and they were used to adjust the conditions such as the target gas density, the position of the beam with respect to the locations
of generated events, the magnetic field strength, etc.\ on an event-by-event basis to properly negate the possibility of such effects altering the final data/simulation comparison.
Regarding the digitization of detector elements, individual systems were treated so as to properly account for their resolutions and efficiencies (as measured from experimental data)
in the generation of simulated hits. The exact procedures used for different systems are discussed in the relevant sections describing the analyses using each detector, but in general
this was achieved by applying appropriate resolution smearing to simulated TDC and ADC values, testing simulated hits against data-driven efficiency maps, and eliminating from the final
analysis data corresponding to times when detectors were behaving unpredictably and could not be properly modeled in this fashion. This approach created a detailed model
of the OLYMPUS experiment in simulation that accounted for detector imperfections and a wide variety of time-varying effects that could otherwise introduce systematic uncertainties,
and that provided a means of completely accounting for the effects of detector acceptance and elastic event selection in the treatment of the radiative corrections applied to the experiment results.
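The following sketch illustrates a single-channel digitization step of this kind (resolution smearing of a simulated hit time, conversion to TDC counts, and a test against a
measured efficiency); the calibration constants and code structure are hypothetical placeholders rather than the actual digitization code.
\begin{verbatim}
// Sketch of single-channel digitization; all constants are placeholders.
#include <cstdint>
#include <optional>
#include <random>

struct ChannelCalib {
    double sigmaT   = 0.5;   // ns, measured time resolution of the channel
    double nsPerTdc = 0.1;   // ns per TDC count (placeholder conversion)
    double eff      = 0.97;  // measured hit efficiency for this channel
};

// Returns the simulated raw TDC value, or nothing if the hit is lost to the
// channel inefficiency.
std::optional<std::uint16_t> digitizeHit(double trueTime, const ChannelCalib& c,
                                         std::mt19937& rng)
{
    std::uniform_real_distribution<double> flat(0.0, 1.0);
    if (flat(rng) > c.eff) return std::nullopt;      // efficiency test

    std::normal_distribution<double> smear(0.0, c.sigmaT);
    double measured = trueTime + smear(rng);         // resolution smearing
    return static_cast<std::uint16_t>(measured / c.nsPerTdc);
}
\end{verbatim}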
\chapter{Determination of the Luminosity}
\label{Chap5}
As noted in Chapter \ref{Chap1}, the OLYMPUS result on {$\sigma_{e^+p}/\sigma_{e^-p}$} depends equally on two key elements of the data: the measurement of relative {$e^\pm p$}
rates as a function of angle and the measurement of the relative luminosity collected between the two species modes.
Ideally, each of these quantities should be known individually to better than $\pm1\%$ total (statistical and systematic) uncertainty
so as to provide an overall uncertainty of less than $\pm1\%$ on the final measurement. To this end, OLYMPUS
employed three functionally independent methods of luminosity determination:
\begin{enumerate}
\item calculation of the luminosity from the effective target density and beam current (the ``slow control luminosity''),
\item dedicated (separate from the main tracking system) forward ($\epsilon \approx 0.98$, $\theta \approx 12^\circ$) elastic {$e^\pm p$} event
reconstruction (the ``12{$^\circ$} luminosity''),
\item and very forward ($\theta\approx 1.3^\circ$) integrating calorimetric measurements of elastic $e^\pm e^-$ (M{\o}ller and Bhabha
scattering \cite{moller,bhabha}), $e^+e^-\rightarrow \gamma\gamma$ annihilation, and elastic {$e^\pm p$} events (the ``Symmetric M{\o}ller/Bhabha (SYMB) luminosity'').
\end{enumerate}
Each of these methods was sensitive to different physics processes and systematic uncertainty contributions, making
them a comprehensive and complementary set of measurements for the luminosity determination. In particular, the slow control
determination provided a real-time estimate of the luminosity during data-taking independent of event reconstruction. The 12{$^\circ$} system provided
a direct normalization of elastic {$e^\pm p$} with detectors independent from the drift chambers in a region where two-photon exchange (TPE) is expected to
be small. The SYMB calorimeter used extremely forward {$e^\pm p$} scattering (where TPE is expected to be extremely small) in conjunction
with the independent $e^\pm e^-$ process to provide an overall normalization decoupled from the TPE measurement of interest.
Throughout this chapter, reference will be made to the ``absolute luminosity'' and the ``species-relative'' luminosity ratio measurements,
with important distinctions drawn between them. For clarity, the absolute luminosity $\mathcal{L}_{e^\pm}$ for a given period of data-taking refers to the value
of the integrated luminosity that would be used in extracting the absolute cross section of elastic events:
\begin{equation}
\sigma_{e^\pm p}\left(\epsilon,Q^2\right) = \frac{N_{e^\pm p}\left(\epsilon,Q^2\right)}{\mathcal{L}_{e^\pm}},
\label{eq:abslumi}
\end{equation}
where $N_{e^\pm p}\left(\epsilon,Q^2\right)$ is the number of elastic {$e^\pm p$} events reconstructed in a given data
bin characterized by the phase space point $\left(\epsilon,Q^2\right)$ (assuming perfect acceptance) and $\sigma_{e^\pm p}\left(\epsilon,Q^2\right)$
is the total integrated cross section for elastic {$e^\pm p$} scattering into that phase space bin. The species-relative luminosity
ratio is then defined as the dimensionless ratio of the absolute luminosities for each lepton species:
\begin{equation}
R_\text{lumi} = \frac{\mathcal{L}_{e^+}}{\mathcal{L}_{e^-}},
\label{eq:rellumi}
\end{equation}
and ultimately it is this quantity that appears in Equation \ref{eq:rat} and is critical to the determination of
$R_{2\gamma}$. Notably, many systematic uncertainties cancel in the ratio of the luminosities to allow much more
precise determination of $ R_\text{lumi}$ than of the individual luminosities. This is discussed in detail in Section
\ref{ss:12sys}. Since the OLYMPUS data are in principle valuable for certain measurements involving the absolute
cross section (such as elastic form factor measurements), the absolute luminosity is considered in addition to
$ R_\text{lumi}$ to provide it for possible future use.
The author's primary effort regarding the luminosity analysis consisted of the determination of the slow control
and 12{$^\circ$} luminosity estimates, and thus this chapter will cover each of those in detail in Sections
\ref{sec:sclumi} and \ref{sec:12lumi}. Additionally, Section \ref{sec:tarsim} addresses the efforts that were made
to properly determine and simulate the shape of the target gas distribution generated by the OLYMPUS target system. This
had important implications for the relative acceptance of the two lepton species in the various detector systems.
Section \ref{sec:symblumi} provides a brief discussion of the SYMB system
luminosity analysis and the results arising from it. Section \ref{sec:alllumi} summarizes the results of the different
luminosity analyses and sets the stage for the primary OLYMPUS results on {$\sigma_{e^+p}/\sigma_{e^-p}$}.
\section{Slow Control Luminosity and Target Gas Distribution Determination}
\label{sec:sclumi}
The slow control luminosity determination served to provide a real-time estimate of the luminosity during
OLYMPUS data-taking, independent of any of the event triggers and reconstruction. While not as precise as the other methods, for either the
absolute or species-relative luminosity, the slow control estimate provided an important baseline for run simulations
and a cross check for the more precise measurements. Additionally, as part of the slow control luminosity determination, concerted
efforts were made to understand the shape of the target gas distribution used in the experiment, which affected both the luminosity
and main {$\sigma_{e^+p}/\sigma_{e^-p}$} results via the varying acceptance for {$e^- p$} and {$e^+ p$} events as a function of position in the target. This was achieved
by developing a gas molecular flow Monte Carlo simulation with accurate representations of both the target system geometry and the
physics of the gas within the system. This section will cover the various elements of the slow control luminosity determination in
detail, paying particular attention to the target gas simulation.
\subsection{Principle of the Measurement}
The slow control luminosity determination makes use of the basic definition of instantaneous luminosity
for a fixed target experiment \cite{bettini}:
\begin{equation}
\td{\mathcal{L}_\text{SC}}{t} = \frac{I_\text{beam}(t)}{e}\int_\text{target}\rho_p(z,t)\:\mathrm{d} z = \frac{2I_\text{beam}(t)}{e}\int_\text{target}\rho_{\text{H}_2}(z,t)\:\mathrm{d} z,
\label{eq:lumidef}
\end{equation}
where $I_\text{beam}(t)$ is the beam current, $e$ the electron charge, $\rho_p(z,t) = 2\rho_{\text{H}_2}(z,t)$ is the density of protons (twice the density of hydrogen molecules)
in the target along the beam, and the integral runs over the beam path $z$ through the target. Thus, a good estimate of the luminosity using
this method includes measurement of the beam current as a function of time and a model for the number and spatial distribution of molecules present in the target as a
function of the target input gas flow, temperature, and geometry.
During data-taking, the target and beam conditions were monitored using the slow control system described in Section \ref{sec:sc}. On an event-by-event
basis, the increment to the integrated slow control luminosity was calculated as the following variation of the integral of Equation \ref{eq:lumidef}:
\begin{equation}
\Delta\mathcal{L}_\text{SC} = Q_{\text{H}_2} \cdot \frac{I_\text{beam}}{e} \cdot \Delta t_\text{DTC} \cdot n_T \cdot\sqrt{\frac{75\:\text{K}}{T}},
\label{eq:sclcalc}
\end{equation}
where $Q_{\text{H}_2}$ is the input flow rate of H$_2$ molecules into the target cell in standard cubic centimeters
per minute (sccm)\footnote{The unit standard cubic centimeter per minute is defined as the flow rate of 1 cubic centimeter of gas at temperature 0 $^\circ$C
and pressure 1.01 bar passing a given point per minute. For reference, this defines $1\:\text{sccm}=4.477962\cdot 10^{17}\:\text{particles/s}$. For 1 sccm of
H$_2$ flow, note that the rate of proton flow is twice the gas particle flow.}, $I_\text{beam}$ is the beam current in A, $e$ the electron charge in C,
$\Delta t_\text{DTC}$ the trigger livetime (i.e., the elapsed time after being ``dead-time-corrected'' (DTC) for the time during detector readout when
the event triggers were not open), $n_T$ the effective total thickness of the gas target in cm$^{-2}$ presented to the beam as calculated by the target
simulation described in Section \ref{sec:tarsim} for 75 K and 1 sccm H$_2$ flow rate, and $T$ is the target temperature in K. Note that the integrated
luminosity scales linearly with the beam current (number of beam particles on target), trigger live time, and gas flow rate into the target (number of particles
in the target), as would be intuitively expected. The $1/\sqrt{T}$ dependence of the target thickness arises from the fact that the H$_2$ molecules
rapidly thermalize with the target walls, and thus their average velocity goes as the $\sqrt{T}$ behavior of the mean of the Maxwell-Boltzmann distribution \cite{maxwell1,maxwell2},
effectively changing the average amount of time that a single hydrogen molecule spends in the target by the inverse factor (further discussed in
Section \ref{sec:tarsim}). The total integrated slow control luminosity for a running period is simply the sum over the values of $\Delta\mathcal{L}_\text{SC}$ for
each event in the period. Also, note that
$\Delta t_\text{DTC}$ may be replaced by $\Delta t$, the simple elapsed time, to convert to a measure of the ``delivered'' integrated luminosity,
but that the ``collected'' luminosity is more relevant since it corresponds to the time when the detector was actually active
and thus to the luminosity that the detector system actually measured.
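A direct transcription of Equation \ref{eq:sclcalc} into code is straightforward; the sketch below is illustrative only, and the effective thickness per sccm at 75 K
must be supplied by the target simulation of Section \ref{sec:tarsim}.
\begin{verbatim}
// Sketch of the per-event slow-control luminosity increment (Eq. eq:sclcalc).
#include <cmath>

// Returns the luminosity increment in cm^-2 for one event.
double deltaLumiSC(double flowSccm,      // Q_H2, target input flow in sccm
                   double beamCurrentA,  // I_beam in amperes
                   double liveTimeS,     // dead-time-corrected elapsed time, s
                   double targetTempK,   // T in kelvin
                   double nT75)          // cm^-2 per sccm at 75 K, from simulation
{
    constexpr double e = 1.602176634e-19;  // elementary charge, C
    double beamRate = beamCurrentA / e;    // beam particles per second
    return flowSccm * beamRate * liveTimeS * nT75 * std::sqrt(75.0 / targetTempK);
}
\end{verbatim}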
\subsection{Simulation of the Target Gas Distribution}
\label{sec:tarsim}
The target gas distribution used in the OLYMPUS simulation was determined using a newly-developed, standalone Monte Carlo
simulation of the molecular flow of hydrogen molecules within the target system. This section reviews the physics
of gas flow relevant to the system, explains the limitations of standard gas density calculations that make them insufficient
for the requirements of the experiment, and describes the new simulation.
\subsubsection{Relevant Details Regarding the Target System}
The internal components of the target system are shown in Figure \ref{fig:tarin}, including the various system components
in which hydrogen gas was contained and subject to exposure to the beam. Qualitatively, gas density was highest in the region
of the target cell directly below the inlet and tapered towards the edges of the system where gas could escape into the beamline
via holes in the wakefield suppressors or their connections to the beamline at the ends. The flow rate was controlled by
mass flow controllers (as detailed in Section \ref{sec:target}) and experimental data were only taken when the gas
flow was in a steady state (rate of input from the inlet matching the rate of gas collection by the pumps), as indicated by
the achievement of steady pressure in the target system. The temperature of the target cell assembly was measured as part of the slow control
system by seven equally-spaced thermocouples along the system parallel to the beam. Unfortunately, not all of these thermocouples were calibrated
for absolute temperature measurements.
\subsubsection{Physics of the Target System Gas}
Examining the components of Equation \ref{eq:sclcalc}, the $\Delta t_\text{DTC} $ was provided by the OLYMPUS trigger system (Section \ref{sec:trig}),
$I_\text{beam}$ was provided by the DESY accelerator system, and $Q_{\text{H}_2}$ and $T$ were monitored by the target and gas flow system (Section
\ref{sec:target} and Reference \cite{Bernauer201420}). The effective target thickness, however, is a much more complicated factor, involving
the dynamics of gas flow in conjunction with the specifics of the OLYMPUS target geometry and vacuum system. The physics of gas flow varies
considerably as a function of the gas pressure, nature of the conduit, and nature of the gas. A complete discussion of these phenomena may
be found in Reference \cite{rothvac}, but a brief discussion of the topic is provided here to establish the physics important to the flow regimes
present in the OLYMPUS target. The flow regimes of sparse gases are typically classified by the \textit{Knudsen number} \cite{Knudsen,0034-4885-49-10-001}:
\begin{equation}
\text{Kn} = \frac{\lambda}{D},
\end{equation}
the dimensionless ratio of the mean free path $\lambda$ between gas molecule collisions and the typical diameter $D$ of the conduit. For
$\text{Kn}\gtrsim0.5$, the behavior of the gas is dominated by its interaction with the conduit and intermolecular interactions are
considered negligible, a region known as \textit{molecular flow} where the dynamics of the gas flow simplify considerably. Ideally,
the OLYMPUS target system would be in this regime since this would make an accurate representation of the target gas much more
feasible. To determine if this was the case, the hydrogen may be approximated as an ideal gas with velocities distributed according
to the Maxwell-Boltzmann distribution. In this case, the mean free path may be analytically calculated:
\begin{equation}
\lambda = \frac{k_BT}{\sqrt{2}\pi\xi^2P} = 2.33\cdot 10^{-20}\:\left[ \frac{\text{Torr}\cdot\text{cm}^3}{\text{K}} \right]\:\frac{T}{\xi^2 P},
\end{equation}
where $k_B$ is the Boltzmann constant, $P$ the gas pressure, and $\xi$ the effective molecule diameter. This approximation was verified experimentally
by Sutherland and others in thermodynamic conditions similar to those in the OLYMPUS target \cite{sutherland}.
The region of highest pressure in the target gas system, and thus lowest mean free path, is the narrowest aperture (examining only the parts
near the cell that immediately affect the distribution of gas in the cell). This region corresponded to the inlet tube,
as shown in Figure \ref{fig:tarin}. The inlet was several centimeters long and of diameter $D=0.5$ cm, and was cooled by the cryogenic system
to temperatures as low as 35 K (in the absence of beam heat load on the system). While the pressure inside the inlet is difficult to estimate
and was not directly measured, note that for the hydrogen molecular diameter experimentally determined to be $\xi=4.04$ \AA~\cite{weast}
and the conservative (resulting in the lowest mean free path) case of $T=35$ K:
\begin{equation}
\lambda P = 5 \cdot 10^{-4}\:\text{cm}\cdot\text{Torr}.
\end{equation}
Thus, to consider the gas to be in the molecular flow regime ($\text{Kn} = \frac{\lambda}{D} \gtrsim 0.5$) requires:
\begin{equation}
P \lesssim 2 \cdot 10^{-3}\:\text{Torr}.
\end{equation}
Since this is three orders of magnitude larger than the pressure measured in the scattering
chamber during normal running conditions (a region of only approximately 1--2 orders of magnitude greater in volume than the inlet) \cite{Bernauer201420},
it is reasonable to assume that the gas was characterized by molecular flow throughout the target system.
\subsubsection{Molecular Flow}
As noted, in the molecular flow regime it is assumed that interactions between molecules in the gas are negligible
and interactions with the walls of the conduit dominate the behavior of the flow. In particular, for the case of
the OLYMPUS target system where the gas is relatively low in density and the conduit walls (the target cell, inlet,
wakefield suppressors, etc.) are being actively cooled, the gas rapidly thermalizes to the temperature of the conduit.
The gas can be understood as a collection of individual molecules that propagate along straight lines between
collisions with the walls of the conduit with speeds distributed according to the Maxwell-Boltzmann distribution
of temperature $T$ (hence the $1/\sqrt{T}$ dependence of Equation \ref{eq:sclcalc}). Note that the collisions
at the walls of the conduit are not reflection-like (i.e., angle of incidence equal to the angle of reflection),
since the walls are fundamentally ``rough'' on the scale of molecular collisions. The rebounds from the wall
are typically modeled to be distributed as the cosine of the angle from the surface normal vector independent
of the incident angle, with no preference for the azimuthal angle relative to the normal vector (known as ``Knudsen's cosine
law'' \cite{kcos}). Note that the preference is to rebound normal to the surface, with vanishing probability to rebound
along the surface. While it is not excluded that there may be a finite time between collision with the wall and the reemission
of the molecule, this effect will equilibrate and become negligible after the gas flow is well established and many
molecules are in the system.
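As an illustration of this re-emission model, the following C++ sketch samples a re-emitted velocity according to Knudsen's cosine law with speed drawn from a Maxwell-Boltzmann distribution at the local wall temperature. It is a minimal sketch, not the actual TargetSim implementation.
\begin{verbatim}
// Minimal sketch of diffuse re-emission from a wall: direction drawn from
// Knudsen's cosine law and speed from a Maxwell-Boltzmann distribution at the
// wall temperature. Directions are returned in a local frame whose +z axis is
// the surface normal; rotation into the global frame is left to the caller.
// Not the actual TargetSim implementation.
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

Vec3 sampleReemission(double T_K, double mass_kg, std::mt19937& rng)
{
    const double kB = 1.380649e-23;              // Boltzmann constant [J/K]
    const double pi = std::acos(-1.0);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    // Knudsen cosine law: p(theta) ~ cos(theta) per unit solid angle, i.e.
    // sin^2(theta) is uniformly distributed.
    const double sinTheta = std::sqrt(uni(rng));
    const double cosTheta = std::sqrt(1.0 - sinTheta * sinTheta);
    const double phi = 2.0 * pi * uni(rng);

    // Maxwell-Boltzmann speed: magnitude of a 3D Gaussian velocity vector.
    std::normal_distribution<double> gauss(0.0, std::sqrt(kB * T_K / mass_kg));
    const double vx = gauss(rng), vy = gauss(rng), vz = gauss(rng);
    const double speed = std::sqrt(vx * vx + vy * vy + vz * vz);

    return {speed * sinTheta * std::cos(phi),
            speed * sinTheta * std::sin(phi),
            speed * cosTheta};
}
\end{verbatim}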
\subsubsection{Conductance Modeling}
Having established the molecular flow nature of the flow of the gas through the target system, the traditional approach to
determining the resultant gas density inside the target cell is to define the \textit{conductance} $C$ of the target cell
(with dimensionality of volume per unit time), analogous to
an electrical conductance. By construction, this conductance obeys a corresponding molecular flow analog of Ohm's law:
\begin{equation}
C = \frac{Q_T}{\Delta P},
\end{equation}
where $Q_T$ is the \textit{throughput} (with dimensionality pressure times volume per unit time)
and $\Delta P$ the pressure difference across the system. A more useful quantity than the
throughput is the \textit{pumping speed} $S$, defined as the throughput divided by the pressure
at the entrance to the system (in this case the inlet of the target cell)\footnote{The name ``pumping
speed'' arises from the fact that the system of interest often terminates at a vacuum pump, but this need
not be the case and the term can describe the rate of gas flow through any conduit.}.
Defining $P_\text{in}$ as the inlet pressure to the target cell and $P_\text{out}$ as the pressure
at the exit to the cell, then $S=Q_T/P_\text{in}$ and:
\begin{equation}
C = \frac{S P_\text{in}}{P_\text{in}-P_\text{out}} = \frac{S}{1-\frac{P_\text{out}}{P_\text{in}}}.
\end{equation}
This result is important in that it implies that for $P_\text{out}\ll P_\text{in}$, the pumping speed
approaches the conductance, independent of the precise value of the conductance. Thus, while the pressures
in the target inlet tube and at the exit of the target cell are not well known, the condition
$P_\text{out}\ll P_\text{in}$ is satisfied, and so the conductance of the system may be calculated to determine the
rate at which molecules pass through the cell (and thus the density of molecules inside the cell).
The computation of the conductance of an arbitrary conduit, however, is quite difficult. Typically,
such calculations require computation of integrals over all possible straight-line particle trajectories between collision points,
rely on assumptions that the geometry has uniform cross section, and ignore boundary conditions (i.e., assume
the conduit is very long). Analytical calculations for several simple geometries may be found in Reference
\cite{rothvac}. For the original OLYMPUS simulation, the target density was modeled using the Steckelmacher analytical computation
of the conductance for a long tube of elliptical cross section \cite{0022-3727-11-4-011} to model the OLYMPUS
target cell. Since the model had constant cross section, the resulting predicted gas density distribution was triangular,
peaking at the inlet and linearly declining to zero at each end of the target cell (similar to a voltage dropping
across a long, uniform resistor), shown as the ``Old Slow Control MC'' distribution in Figure \ref{fig:tardists}.
While initially expected to provide a sufficiently good model of the true
target density, comparison of analyses of simulation and data via tracking in multiple detectors indicated that
the simulated triangular target model did not predict the shape of the measured distribution well near the ends of the cell
and underestimated the density of the target by approximately 20\%.
In retrospect, this model failed because it ignored the end conditions of the target cell, whose ends connect directly to the wakefield suppressors.
While the wakefield suppressors were manufactured with holes to allow the escape of gas (see Figures 7--9 in Reference
\cite{Bernauer201420}), the model severely underestimated the containment of gas by these elements and could not
make any prediction regarding the shape of the gas distribution in the regions near the ends of the cell. Additionally, the effective
conductance for gas escape was not the same at each end of the cell due to the connection of the collimator, and thus
the distribution of gas was additionally not symmetric about the inlet port. It was,
however, completely impractical to use analytical methods to compute the conductance of the complete target system
geometry, due to both the elliptical cone shapes of the collimator bore and wakefield suppressors and the lack
of gas escape handling in such calculations.
\subsubsection{Simulation of Molecular Flow and Conduit Geometry}
Because the acceptance of {$e^- p$} and {$e^+ p$} events in the detectors can differ as a function of
target vertex position, and because of the desire to have the slow control luminosity calculation more accurately
represent the overall target thickness, a new method was developed for computing the target density to
ensure an accurate representation in the simulation. As discussed, the complexity of the complete geometry
made any attempt at an analytical approach infeasible, which naturally suggests Monte Carlo simulation methods.
While there exists commercial software for molecular flow \cite{comsol}, this software is quite expensive
and designed for engineering applications that do not necessarily have similar conditions to internal
gas targets in nuclear and particle physics experiments. For this reason, a new, standalone molecular
flow and geometry simulation known as ``TargetSim'' written in C++ was developed for the purpose of the OLYMPUS experiment\footnote{The
source code for TargetSim, along with basic documentation, is available from the author for academic applications.}.
In general, TargetSim follows the approach of propagating particles (i.e., molecules or atoms of gas) of given mass through a modular geometry.
Molecules traverse the geometry starting from a source distribution defined by the user (i.e., a position and velocity
distribution for particles entering the system). Each input particle is individually tracked to its next collision with a
wall of the geometry or until it reaches a designated exit of its current geometry unit. At the exit to a geometry unit,
a particle may pass into another geometry unit, in which case its next intersection or exit point is calculated for that unit,
or out of the system (i.e., to a location where it will reach a vacuum pump). At each wall collision point, the gas molecule
is assumed to thermalize with the local temperature of the wall, and is then re-emitted with speed distributed according
to the Maxwell-Boltzmann distribution and direction according to Knudsen's cosine law. This process is iterated for each
particle until it reaches a system-exit condition. Such a propagated path in the OLYMPUS target system is shown in Figure
\ref{fig:typpath}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/typpath.png}}
\caption[Simulated path of a hydrogen gas molecule in the target system]{Simulated path of a hydrogen molecule inside the OLYMPUS
target system, beginning at the top of the inlet, passing into the upstream portion of the target cell, and eventually leaving
the system through the downstream wakefield suppressor. Coordinates are approximately the OLYMPUS global coordinates.}
\label{fig:typpath}
\end{figure}
Along the particle's path, the time occupation of the particle
in three-dimensional spatial bins is recorded. Then, at the end of the trajectory, the time occupation histogram
may be divided by the total time the particle spent in the system to create a position density distribution histogram for
a single particle. For a collection of many particles, the average of such distributions will converge to the position distribution
of the target system under steady-state conditions. Furthermore, the average time a particle spends in the system may
be used in conjunction with the steady-state input flow rate to determine the average number of particles inside
the system at a given time, and thus to place an overall normalization on the position distribution as a function of
flow rate for direct implementation in the simulation and slow control luminosity calculation.
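This bookkeeping is, in effect, an application of Little's law: the mean number of molecules occupying a spatial bin equals the steady-state input rate multiplied by the mean time a molecule spends in that bin. The following C++ sketch illustrates one equivalent way of carrying out the accumulation and normalization; it is illustrative only and does not reproduce the actual TargetSim code.
\begin{verbatim}
// Illustrative bookkeeping for the density estimate (not the actual TargetSim
// code): occupation time per spatial bin is accumulated for each simulated
// molecule, and the steady-state input rate sets the overall normalization.
#include <vector>

struct DensityTally {
    explicit DensityTally(int nBins) : binTime(nBins, 0.0) {}

    std::vector<double> binTime;   // summed occupation time per spatial bin [s]
    long nParticles = 0;           // number of simulated molecules

    // Record the time dt that a molecule spends in a given bin along its path.
    void addSegment(int bin, double dt) { binTime[bin] += dt; }

    // Call once per simulated molecule when it exits the system.
    void endParticle() { ++nParticles; }

    // Mean number of molecules occupying a bin in steady state:
    // (input rate) x (mean time per molecule spent in that bin).
    double meanMoleculesInBin(int bin, double inputRatePerSecond) const {
        return inputRatePerSecond * binTime[bin] / nParticles;
    }
};
\end{verbatim}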
The definition of the geometry in TargetSim is a user input to the program, which must include the following information for each
unit in the geometry:
\begin{enumerate}
\item equations or inequalities defining the surfaces of the geometry that a particle may strike,
\item a method of calculating the surface normal vector at any point on the strikable surfaces of the geometry,
\item a defined temperature function for each strikable point on the geometry,
\item an analytical or numerical method for computing the intersection of a line (i.e., a particle trajectory) with the strikable surfaces, and
\item conditions defining which points on the strikable surfaces are exit points and into which geometry unit or system exits those exit points pass
the particle.
\end{enumerate}
Geometry units are defined as instances of a general C++ geometry unit class with member variables and functions that provide
the above information to a propagator function which handles the Monte Carlo drawing of new particle directions, generates the
time occupation histograms, and records other requested information about the particle trajectory. The version of TargetSim used
for OLYMPUS includes implementations of cylindrical tubes, elliptical tubes, and elliptical cones as geometry units (including
some elements with holes that correspond to system exit locations). In principle, however, any geometry description that can be
implemented to satisfy the requirements above can be used as a geometry unit, thus making TargetSim extremely flexible to handle
a variety of molecular flow systems.
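For illustration, the requirements above map naturally onto an abstract C++ base class of roughly the following form; this is a hedged sketch of such an interface, not the actual TargetSim class definition.
\begin{verbatim}
// Illustrative sketch of a geometry-unit interface satisfying the five
// requirements listed above; the actual TargetSim class differs in detail.
struct Vec3 { double x, y, z; };

struct Intersection {
    bool   found;        // does the trajectory strike this unit?
    Vec3   point;        // intersection point on a strikable surface
    double pathLength;   // distance from the starting point
};

class GeometryUnit {
public:
    virtual ~GeometryUnit() = default;

    // (1)+(4) Intersection of a straight trajectory with the strikable surfaces.
    virtual Intersection intersect(const Vec3& start, const Vec3& dir) const = 0;

    // (2) Outward surface normal at a point on a strikable surface.
    virtual Vec3 normalAt(const Vec3& point) const = 0;

    // (3) Local wall temperature at a point on a strikable surface.
    virtual double temperatureAt(const Vec3& point) const = 0;

    // (5) If the point is an exit, report the index of the next geometry unit
    //     (or a sentinel such as -1 for a system exit).
    virtual bool isExit(const Vec3& point, int& nextUnit) const = 0;
};
\end{verbatim}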
\subsubsection{Simulation Implementation and Results}
For the OLYMPUS implementation of TargetSim, gas particles were generated at the top of the cylindrical hydrogen inlet tube, 250 mm above the top of
the elliptical target cell, at room temperature with trajectories distributed as $\cos\theta$ relative to the downward pointing
vector along the inlet tube, which has been shown to be a good approximation for molecular flow in a long cylindrical tube \cite{ZHANG2012513}. The geometrical
elements included in addition to the inlet were the elliptical target cell, the elliptical cone internal bore of the collimator, and the three
wakefield suppressors (all shown in Figure \ref{fig:tarin}). The system exit conditions were at the holes of the wakefield suppressors and the ends of the
wakefield suppressors that attach to the beamline. Simulations in which the cylindrical beamline was expanded beyond the wakefield suppressors and in
which the target chamber was implemented as a box containing the system were conducted so as to assess the probability of a particle
reentering the system after one of the aforementioned exit conditions. This effect was found to be negligible, and so to increase the speed of the simulation,
additional beamline and target chamber elements were not included in the geometry. The surfaces of the system were assumed to be
at constant temperature, as suggested by the limited calibrated temperature information available from the thermocouples.
The results of the simulation for typical OLYMPUS running conditions are shown in Figure \ref{fig:tardists}, including a comparison
to the results of the Steckelmacher elliptical tube conductance calculation. As can be seen, the target simulation predicts
the $\sim$20\% increase in density relative to the conductance calculation that was suggested by other detector systems, and predicts
a more complicated shape for the target gas distribution extending beyond the $\pm300$ mm of the target cell. Note that the upstream
($-z$) end of the distribution is especially important to the 12{$^\circ$} luminosity telescope measurement since events from this range
may be in the telescope tracking acceptance. The simulation reproduces the $1/\sqrt{T}$ dependence of the gas occupation of the target
expected from the Maxwell-Boltzmann distribution of velocities. Thus, this factor may be computed analytically, as previously claimed
in the discussion of Equation \ref{eq:sclcalc} so as to avoid requiring simulations at each individual measured temperature from the dataset.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.2\textwidth]{figures/TargetDistribution.png}}
\caption[Predicted target gas distributions from TargetSim]{Predicted target gas density distributions for the region of the system near
the beam from TargetSim, for 0.6 sccm input flow and a range of typical constant temperatures for running conditions, compared to the prediction
from the elliptical tube conductance calculation of Reference \cite{0022-3727-11-4-011} (``Old Slow Control MC at 75 K''). Note that while the central regions of the simulation
predictions are approximately triangular as predicted by the conductance calculation, the distributions are greater in magnitude and asymmetric
relative to the triangular conductance prediction due to the increased and asymmetric resistance to flow of the additional target system
components on each end of the cell.}
\label{fig:tardists}
\end{figure}
To implement the results of the target simulation in the main OLYMPUS simulation to properly represent the vertex distribution of tracks,
the shape of the distribution was parametrized using piecewise polynomial fits. The distribution was normalized
using the effective total target thickness of $n_T = 7.9095\cdot 10^{15}$ protons/cm$^2$ predicted by the simulation at $T=75$ K and $Q_{\text{H}_2}=1.0$
sccm and then correcting for the measured flow and temperature as a function of time using Equation \ref{eq:sclcalc}. Comparison of the parametrization of
the simulation to data is complicated by the fact that the extended angular acceptance of the OLYMPUS tracker distorts the reconstructed $z$ vertex
position. By narrowing the event selection to a small $\theta$ range, however, data and the raw target distribution prediction may be approximately
compared. Such a comparison is shown in Figure \ref{fig:tarcomp}, for reconstructed elastic events selected for lepton $\theta$ within 0.5{$^\circ$} of 32{$^\circ$} .
In general, the simulation predicts the shape of the distribution very well, including the slopes on each side of the triangle and the behavior near
the ends of the cell.
Due to the strong indications that TargetSim produces a better target distribution prediction (both in terms of shape and normalization) than
previous methods, the results of TargetSim were incorporated into all generated simulation datasets for the OLYMPUS experiment.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.2\textwidth]{figures/tarcomp.pdf}}
\caption[Comparison of target distribution simulation to data]{Track vertex $z$ distribution for elastic events with lepton $\theta$ within 0.5{$^\circ$} of 42$^\circ$,
compared to the predicted distribution from the raw target simulation distribution. Note that the simulation distribution is normalized to have
equal integral to the data distribution for comparison of the distribution shapes in this plot.}
\label{fig:tarcomp}
\end{figure}
\subsection{Systematic Uncertainty and Discussion of the Slow\\Control Luminosity}
\label{sec:scsum}
Note that, despite the apparent success of TargetSim in predicting the shape and normalization of the gas density of the target,
several uncertainties make it unreasonable to use the slow control luminosity on its own as a precise luminosity determination. As
previously noted, the temperature measurements along the target cell were not well calibrated, and thus it is impossible
to determine the exact temperature of the target cell as a function of time and whether or not the temperature was uniform across the system at any given
time. Even a very plausible 5 K shift in temperature alters the absolute slow control luminosity by $\sim$3.2\%. Given that the target
temperatures during electron running were $\sim$10 K higher than during positron running due to different beam conditions, any non-linearity
or other misunderstood aspects of the temperature measurement could induce a false asymmetry in the slow control luminosity measurement
on the order of a percent. While beam energy was well constrained, the input flow was calibrated by filling buffer volumes in the gas
supply system, a process with only percent level precision and that doesn't account for any losses in the several-meter-long input line
from the supply system to the gas inlet. Adding these to the fact that the simulation is not yet definitively experimentally verified,
it is reasonable to ascribe a systematic uncertainty to the slow control luminosity of $\delta_{\text{SC,abs}}=\pm 5\%$ absolute
and $\delta_{\text{SC,rel}}=\pm 2\%$ relative.
While not viable as a standalone precision measurement, the slow control luminosity provided a valuable approximate
benchmark for the other luminosity monitors, and more
importantly the results of TargetSim provided a more accurate representation of the target gas distribution for use in the simulation.
The versatility and generality of the TargetSim code make it a good candidate for use in future experiments with molecular
flow regime gas targets and other applications.
\section{Luminosity Determined Using the 12$^\circ$ System}
\label{sec:12lumi}
To take advantage of the rapidly increasing elastic lepton-proton scattering cross section
at small lepton scattering angles ($\theta$), a dedicated tracking system consisting of two six-plane
telescopes was constructed as a means of providing a luminosity normalization point for
the measurement of {$\sigma_{e^+p}/\sigma_{e^-p}$} in the main tracking volume. While subject to possible differences
in the {$e^- p$} and {$e^+ p$} elastic cross sections due to TPE, these effects are universally expected to be
small (i.e., less than the experimental precision) at the kinematics accepted by the telescopes.
This uncertainty is addressed in Section \ref{ss:12sys} and an extraction of the value of
{$\sigma_{e^+p}/\sigma_{e^-p}$} from the system using an independent luminosity determination from the SYMB is presented
in Section \ref{sec:12TPE}.
The detectors and trigger of the 12{$^\circ$} telescopes are described in Chapter \ref{Chap3}, while
this section addresses the analysis of the data from the system. The analysis included full representation of the system in
the OLYMPUS Monte Carlo simulation, hit reconstruction,
track reconstruction, and event selection to produce a final yield of elastic {$e^\pm p$} events for the luminosity
determination. Additionally, this section discusses the performance of the system, the comparison of reconstructed
data to Monte Carlo simulation, and the resulting luminosity extraction.
Please note that throughout this section, ``12$^\circ$'' will be used as a shorthand to refer to events in which the lepton
is reconstructed in the aforementioned tracking telescopes and to the detector system as a whole, even though the actual lepton scattering angles
accepted by the telescope varied over a range of several degrees around $\theta=12^\circ$.
\subsection{Principle of the Measurement}
Since the cross section for elastic {$e^\pm p$} scattering increases rapidly as $\theta$ decreases, scattering events at forward angles are a
natural choice for statistically precise luminosity measurements due to the high rate of events that can be sampled. With this principle
in mind, OLYMPUS included dedicated forward tracking elements to expand the acceptance for elastic {$e^\pm p$} events to scattering angles as small as 9{$^\circ$} for positrons
and 11{$^\circ$} for electrons. The most backward-going recoiling protons from elastic {$e^\pm p$} events where the lepton is accepted by the 12{$^\circ$} telescope had $\theta\approx76^\circ$,
meaning they were within the drift chamber acceptance described in Section \ref{sec:spect}. This allowed the exclusive reconstruction of elastic events, providing
a strong lever against background contamination while maintaining a high statistics data sample. Figure \ref{fig:typ12} shows an event display of such an event
from the data set.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/typ12event.png}}
\caption[Event display of a typical 12{$^\circ$} telescope event]{A reconstructed elastic {$e^- p$} event where the electron (red track) was detected in the right 12{$^\circ$} telescope
and the proton (blue track) in the left drift chamber. Note the proper rejection of uncorrelated hits in the drift chamber to find the good selection of hits that
corresponded to the proton track. The scattering chamber and toroid coils have been removed from the display for clarity.}
\label{fig:typ12}
\end{figure}
In essence, the determination of the luminosity from the 12{$^\circ$} system involved the reconstruction of possible particle hits and tracks in the telescopes and drift chambers for events
passing the 12{$^\circ$} trigger (Section \ref{ss:12dtrig}), and then testing the resulting {$e^\pm p$} pairs (with the lepton species taken to be that of the beam) for elastic kinematics via a series
of cuts (Section \ref{sec:12ana}). Similar to the main {$e^\pm p$} analysis method (Section \ref{sec:mainana}), this procedure was followed for both experimental data
and digitized Monte Carlo data. Then, for a given simulated integrated luminosity $\mathcal{L}_\text{MC}$ corresponding to a set of data (correctly simulated beam parameters,
detector efficiencies, etc.), the measured integrated luminosity in the 12{$^\circ$} system for the set of data is simply a function of the number of elastic events $N$ accepted in data and simulation
and the simulated luminosity:
\begin{equation}
\mathcal{L}_{\text{12}^\circ} = \frac{N_\text{data}}{N_\text{MC}\left(\mathcal{L}_\text{MC}\right)} \cdot \mathcal{L}_\text{MC}.
\label{eq:l12}
\end{equation}
While simple in principle, this method requires a deep understanding of the conditions under which data were taken so that the simulation
properly replicates any conditions that could have affected the elastic event yield. While the simulation makes every attempt
to faithfully reproduce the data, as described in Section \ref{sec:sim}, simulation parameters are in general considered as possible
sources of systematic uncertainty and are exhaustively analyzed in Section \ref{ss:12sys}.
In a single data run file (typically with $\sim$1.0$\cdot 10^6$ triggers, lasting $\sim$20 minutes), approximately
10,000 total accepted {$e^- p$} events or 19,000 accepted {$e^+ p$} events (due to the differences in the trigger noise conditions between beam species)
were recorded. Since simulation could be run to arbitrary statistical precision, the data rate determined the overall statistical uncertainty
on the 12{$^\circ$} luminosity estimate: approximately 1\%/run or 0.5\%/hour. Combined over the entire data set, the statistical precision
is on the order of 0.01\% and is thus negligible compared to various systematic uncertainties.
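For reference, treating $N_\text{data}$ and $N_\text{MC}$ as independent Poisson-distributed counts, the relative statistical uncertainty of Equation \ref{eq:l12} is
\[
\frac{\delta\mathcal{L}_{12^\circ}}{\mathcal{L}_{12^\circ}} \approx \sqrt{\frac{1}{N_\text{data}}+\frac{1}{N_\text{MC}}},
\]
which, for $N_\text{data}\sim10^4$ per run and much larger simulated statistics, reproduces the approximate per-run figure quoted above.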
\subsubsection{Constraints of a Single Arm Measurement}
For the purpose of the analysis presented in this work, the luminosity was determined via the use of exclusively reconstructed {$e^\pm p$} events
rather than with inclusive events in which the lepton is reconstructed in one of the 12{$^\circ$} telescopes, but no requirement is placed on the proton. While
the latter has the advantage that it would completely separate the 12{$^\circ$} measurement from dependence on drift chamber data and make its independence
as a monitor more robust, the differing beam environments between $e^-$ and $e^+$ running made a single-arm inclusive measurement extremely difficult.
In particular, during electron running the rate of hits in the ToF bars that were part of the 12{$^\circ$} system trigger was considerably higher
than during positron running, which led to an increased rate of triggers from non-elastic events for $e^-$ running.
When attempting a single-arm
analysis, it was found that the ratio of the positron rate to the electron rate was approximately 2\% lower than the ratio found by an exclusive
analysis due to this increased background in the electron sample. While using information such as ToF meantime and energy deposition could
recover some of this difference, it could not discriminate against events with multiple ToF hits in the 12{$^\circ$} trigger window and the simplicity of the
ToF-only data did not provide a strong separation of good elastically scattered protons. Additionally, as will be discussed in Section \ref{sec:hahahaha},
the final analysis made use only of the MWPCs as tracking elements. Due to this, the resolution on the kinematic parameters of the reconstructed
lepton was quite limited. This made a background subtraction scheme like the one used for the main analysis (Section \ref{sec:backsub}) a dubious
approach. Consequently, it was determined that the systematic uncertainty inherent in an inclusive measurement was greater than the systematic
uncertainty introduced by requiring proton track information from the drift chambers. Thus, this work predominantly considers the exclusive measurement.
\subsubsection{Possible Contribution of TPE}
\label{sec:12posstpe}
Fundamentally, the measurement of the luminosity using elastic {$e^\pm p$} scattering in the 12{$^\circ$} system as a normalization point for the measurement of
{$\sigma_{e^+p}/\sigma_{e^-p}$} is limited by the fact that it involves the same physics process that is being examined for TPE contributions. While the vast majority of theoretical
and phenomenological models predict that TPE should be small (or at least negligible compared to systematic effects) in the kinematic region accepted
by the 12{$^\circ$} system \cite{BerFFPhysRevC.90.015206,Chen:2007ac, Guttmann:2010au,Blunden:2003sp,Afanasev:2005mp,Chen:2004tw,
Blunden:2005ew, Kondratyuk:2005kk, Borisyuk:2006fh,
TomasiGustafsson:2009pw}, there exists very little experimental evidence to validate these predictions. The effect of this assumption is considered as a systematic uncertainty
for the luminosity determination, and is discussed in detail in Section \ref{ss:tpe12sys}. Also considered in this work, however, is the measurement of the value of {$\sigma_{e^+p}/\sigma_{e^-p}$} in
this kinematic region, using the SYMB system to provide the luminosity normalization, and this is discussed in Section \ref{sec:12TPE}.
\subsection{Discussion of the Exclusion of the GEM Detectors}
\label{sec:hahahaha}
As previously noted, the GEM detectors that were part of the 12{$^\circ$} telescopes were not utilized in the luminosity analysis presented in this work. When closely examined
in the course of studying their performance during data-taking, it was found that they exhibited a strong, time-dependent variation in their efficiency for detection
of particles on the order of 10\%. The essence of this issue is illustrated in Figure \ref{fig:gemblow}, which shows the variation in the number of six-plane accepted tracks (i.e., events in which
each GEM contributed a hit) relative to the number of accepted tracks using only the MWPC hits. While relatively few six-plane tracks would be expected (both due
to the smaller acceptance covered by all six planes relative to the MWPCs alone and due to single-plane inefficiencies), the value of the ratio
would be expected to remain constant to within statistical variation over the course of data-taking. Furthermore, this variation in efficiency was
found to be almost entirely correlated between the GEM planes within a telescope (i.e., all three GEMs dropped in efficiency
together, possibly on an event-by-event basis). This was discovered by noting that the structures present in Figure \ref{fig:gemblow} persist even when relaxing the tracking conditions
to demand only four planes. To require four planes, however, at least one GEM must supply a hit for the track. Requiring even a single GEM induced the structure in the measured
yield of elastic events, indicating that the efficiency of the GEM planes varied in a strongly correlated way.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/Apr22all6Monlyratioleft.png}}
\caption[Time-varying efficiency of the GEM detectors]{Ratio of accepted six-plane 12{$^\circ$} system tracks in the left arm to the number of accepted MWPC-only three plane tracks
as a function of OLYMPUS run index. The strong structures that vary with run index indicated a severe problem with the GEMs, which prevented their use for the final luminosity
analysis.}
\label{fig:gemblow}
\end{figure}
When this was discovered, a large effort was undertaken to attempt to determine its cause and rectify the issue to permit the use of the GEMs in the 12{$^\circ$}
analysis. The algorithms for hit-finding in the GEMs (Section \ref{sec:12hit}) and track reconstruction in the 12{$^\circ$} telescopes (Section \ref{sec:12track}) were completely redone
in an attempt to solve the problem. A great deal of improvements were made through this process, including improving the overall efficiency of the GEM planes and tracking, but
no changes that were made significantly affected the observed time dependence in the GEM hit yields. While the various changes in the efficiency over time could be roughly
correlated with changes in beam conditions, these effects were not sufficiently quantifiable to produce a solution. Based on this, it is theorized that the root cause of the issue
was a saturation effect in the GEM readout electronics that caused a varying, unknown deadtime for GEM hits, which gave rise to the observed inefficiency effects.
Ultimately, the exact cause of this time-varying efficiency was not definitively identified, but seemed to be associated with the readout system of the GEMs and its behavior
as a function of hit rate/beam conditions. This should be diligently kept in mind when considering the use of the OLYMPUS GEMs in future experiments \cite{refId0,Balewski:2014pxa}.
Due to the correlation in this efficiency variation, the GEMs were fundamentally prevented from measuring event rates (absolute or
relative). Any attempt to correct for the time-variation (i.e., by varying the simulated efficiency of the telescope and/or planes over time)
would amount to a manual scaling of the simulated yield, effectively nullifying the measurement of the elastic rate as an indicator of the luminosity.
While it is conceivable that GEM hits could be used if present without requiring them, this still would require implementing the time variance in the efficiency in
simulation in order to replicate the tracking resolution of the data in the simulation. Due to the uncertainty this would introduce if not properly accounted for
(and the fact that it would still introduce a manually-inserted variation in detector response into the simulation), it was decided that the GEMs could not be
part of the actual yield determination in the 12{$^\circ$} system. Hit information from the GEMs was still useful in tracking for applications in which absolute rate information
was not required, such as measuring the efficiencies of the MWPCs and SiPM scintillators (after checking to ensure no cross-system correlation was present) and measuring detector
misalignment using tracks.
\subsection{Hit Reconstruction}
\label{sec:12hit}
The first step in reconstructing particle trajectories for particles in the 12{$^\circ$} system was to generate hit positions in the detector planes from the system's raw data.
A new hit-finder for the GEMs was written from scratch, partially in an attempt to remedy the issues described in Section \ref{sec:hahahaha}. While the new algorithm
did not remedy the time-dependent issue, it did significantly improve the performance of the GEMs in many regards. Due to this, and the fact that the GEMs may be used in future
experiments, this new hit-finder is described here. For the MWPCs, a functional hit-finder was available based on the code used for the HERMES experiment MWPCs \cite{Ackerstaff1998230},
and so only the basic elements of this algorithm and the improvements made are discussed.
\subsubsection{GEM Detectors}
\label{sec:gemhit}
While the GEMs were not used in the final analysis, a large effort was undertaken to improve their hit-finding routines relative to the original algorithms used, not only in the process
of attempting to solve the issues discussed in Section \ref{sec:hahahaha}, but also to increase their resolution and efficiency for usage in calibration analyses. The two crossing
patterns of the readout planes of the GEMs (described in Section \ref{sec:gemdet}) essentially provided independent 1D hit information. An example of such a 1D hit is shown in
Figure \ref{fig:g1d}. The basic strategy used in hit-finding for GEM detectors like the ones used in OLYMPUS is to find hit candidates (peaks/maxima) in the 1D data samples,
and then combine 1D hits in the same planes to form 2D hit candidates. Note, however, that the number of candidates scales multiplicatively with the number of 1D hits from each
axis. To combat this, most GEM systems (including the OLYMPUS detectors) are designed to share the charge signal as equally as possible between the dimensions of the readout so
that the magnitude of the 1D hits can be compared so as to better indicate which pairs of 1D hits go together to properly form a 2D hit \cite{1748-0221-7-03-C03042,kgem}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.95\textwidth]{figures/gem1d.png}}
\caption[Example of a 1D GEM hit in the raw ADC data]{An example of how a charged particle passing through one of the OLYMPUS GEM planes creates a signal visible in the 1D data from one of
the strip/pad patterns on the readout plane. The blue points correspond to the raw ADC counts registered in each ADC channel (each channel corresponds to a single readout strip or connected
strip of pads), and the green represents the ADC data after a baseline/pedestal subtraction. Red points indicate a first pass attempt at identifying local maxima in the data as a first step in
hit-finding. In this event, a clearly identifiable hit occurs around APV Channel 210 with a possible smaller hit around Channel 240. The vertical dotted line represents the break between the two
separate APV cards used to read out a single dimension of the plane.}
\label{fig:g1d}
\end{figure}
The OLYMPUS GEMs presented several specific challenges with regard to hit-finding that were addressed in the improved hit-finder (relative to the original software used on the experiment)
that is described here:
\begin{enumerate}
\item strips with weak amplification and/or bad data transfer, which occurred both randomly among the ADC channels and in a periodic pattern at the edges of the four connectors
used to read out each APV (seen as a grid pattern in Figure \ref{fig:badeff}),
\item different ADC baseline/pedestals between the two APVs covering a single plane dimension, which made it difficult to reconstruct hits at the boundary between two APVs (seen
in the blue points in Figure \ref{fig:g1d}),
\item channel-to-channel ADC pedestal variation on top of the common-mode variation, and
\item other large-scale problems such as failed APV cards on the middle GEM in the left telescope.
\end{enumerate}
The new hit-finding algorithm addressed these issues to significantly increase the overall efficiencies of the detectors and to reduce the structural inefficiencies shown
in Figure \ref{fig:badeff}, but did not affect the time-dependent behavior of the GEM hit yields discussed in Section \ref{sec:hahahaha}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.4\textwidth]{figures/gem5oldeff.pdf}}
\caption[Example GEM efficiency map using a previous hit-finding algorithm]{Efficiency for hit reconstruction of GEM 5 (the furthest downstream GEM in the right telescope), using
the previous hit-finding algorithm. Note the periodic occurrence of inefficient regions, as well as other random strips with low efficiencies caused by the issues discussed in the text.
The improved efficiency of this GEM, as well as of the others, is shown in Figure \ref{fig:gemeff}.}
\label{fig:badeff}
\end{figure}
The new GEM hit-finder proceeded as follows\footnote{The source code (C++) for this hit-finding algorithm, along with basic documentation, is available from the author for academic
applications.}:
\begin{enumerate}
\item Raw channel-by-channel ADC counts were mapped to their corresponding local coordinates in the GEM planes ($x$ and $y$).
\item A first pass was made over all events with a 12{$^\circ$} trigger in a given data run file, in which a line was fit to the ADC count as a function of channel number (properly ordered)
for each APV card (two cards per dimension per plane) to calculate the common-mode pedestal for the channels. A rudimentary 1D peak-finding algorithm was used to remove points
near a possible hit so as to avoid biasing the baseline fit. This procedure is illustrated by the magenta (without peak removal) and red (with peak removal) lines shown in Figure
\ref{fig:g1d}. The previous hit finding algorithm did not correct for possible hits in the baseline removal, which made the procedure prone to rejecting hits, especially
near the edge of an APV card. This created regions of strong inefficiency along the central axes of the planes.
\item After the first pass over the data, the mean and variance over all baseline fits of the separation of a single ADC channel count from the fit baseline on each event were calculated.
While the baseline fluctuated event-by-event, it was found that single channels exhibited predictably low or high counts relative to neighboring channels with a width that varied by
channel as well, as can be seen in Figure \ref{fig:c2c}. The mean deviations and widths of the distributions for each channel were saved to adjust the pedestal subtraction from the value
suggested by the baseline subtraction alone. No such channel-to-channel correction was made in the previous algorithm, which allowed the new algorithm to recover hits from channels
with lower average counts and reject spurious highs from high-count channels.
\item The data were then passed over a second time, again event-by-event. Each ADC channel was adjusted by the baseline+deviation pedestal computed in the first pass. Local maxima in the ADC
channels were identified (using thresholds based on the widths from Step 3), and all pairings of $x$ and $y$ maxima were considered. Pairings were scored based on the relative match of
the $x$ and $y$ signal strengths as well as the absolute total strength of the pairing. The user then chose how to select which hits to pass to the tracking algorithm from among the scored
hits (e.g., with minimum accepted scores, passing the highest-scoring hits, etc.). A visualization of identified hits with comparison to the hits identified by the old algorithm is
shown in Figure \ref{fig:ghits}.
\end{enumerate}
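To make Step 4 concrete, the following C++ sketch illustrates the pairing and scoring of 1D hits into 2D hit candidates based on charge matching; the names and the particular scoring function are illustrative, and the actual OLYMPUS hit-finder differs in detail.
\begin{verbatim}
// Illustrative sketch of the 1D-to-2D pairing step (Step 4 above): local maxima
// found separately in the x and y strip data are paired, and each pairing is
// scored by charge matching and total signal. The scoring function and names
// are illustrative; the actual OLYMPUS hit-finder differs in detail.
#include <algorithm>
#include <cmath>
#include <vector>

struct Hit1D { double position; double charge; };   // pedestal-subtracted ADC sum
struct Hit2D { double x, y; double score; };

std::vector<Hit2D> pairHits(const std::vector<Hit1D>& xHits,
                            const std::vector<Hit1D>& yHits)
{
    std::vector<Hit2D> candidates;
    for (const Hit1D& hx : xHits) {
        for (const Hit1D& hy : yHits) {
            const double total = hx.charge + hy.charge;
            if (total <= 0.0) continue;
            // Charge is shared nearly equally between the two readout
            // dimensions, so well-matched charges indicate a true pairing.
            const double asymmetry = std::fabs(hx.charge - hy.charge) / total;
            candidates.push_back({hx.position, hy.position,
                                  total * (1.0 - asymmetry)});
        }
    }
    // Highest-scoring candidates are passed to tracking; the selection policy
    // (minimum score, top-N, etc.) is left to the user, as described above.
    std::sort(candidates.begin(), candidates.end(),
              [](const Hit2D& a, const Hit2D& b) { return a.score > b.score; });
    return candidates;
}
\end{verbatim}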
This new hit-finder succeeded in increasing the overall efficiencies of the GEMs by 5--10\% and greatly reduced the structure in the efficiencies across the planes. The resulting efficiencies
are shown in Figure \ref{fig:gemeff} in Section \ref{sec:12eff}. While the time-dependence of the GEM efficiencies did not ultimately arise from a hit-finding issue, this improved hit-finder
was useful in allowing the GEMs to provide better data for calibration of the 12{$^\circ$} system and should be useful for future experiments that use the OLYMPUS GEMs.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/chantochan.pdf}}
\caption[Channel-to-channel noise in the GEM ADC counts]{Histogram of the average deviation of the ADC count in each channel along one dimension of one of the GEMs. As can be seen,
channels typically exhibited a clear mean deviation that could be used to correct the pedestal subtraction for that channel and a definable width that could be used to set the noise
threshold for hit candidates for each channel.}
\label{fig:c2c}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.00\textwidth]{figures/gem2d.png}}
\caption[Hits found by the new GEM hit-finder for a single event]{Data from the right GEM telescope for a single event (2D histograms of ADC counts) showing the hits identified
by the original GEM hit-finding algorithm (white stars) and the new algorithm (circles, colored red, yellow, or green by increasing hit score). The three strongest hits from each
plane reconstruct well to an elastically scattered lepton that would have been missed by the old algorithm due to the missed hits in the upstream (US) and downstream (DS) planes. The upstream
hit was missed in the old algorithm due to its $x$ location near the APV card boundary, while the downstream hit was missed due to its relatively weak signal and being near the $y$ APV card
boundary. For the latter, the new baseline subtraction methods allowed the hit to be separated from the noise.}
\label{fig:ghits}
\end{figure}
\subsubsection{MWPC Detectors}
Due to the relative simplicity of the MWPC hit information (see Section \ref{ss:mwpcdet}), hit-finding for the MWPCs did not require
the same complexity of analysis as the GEM hit-finder. Initial hit decisions for single wires were produced by software adapted by the PNPI group from that used for the HERMES experiment
MWPCs \cite{Andreev:2001kr,Ackerstaff1998230}. While, in principle, single-wire 1D hits could be passed to the track reconstruction algorithm used for the 12{$^\circ$} system, the combination of
information from the three wire planes in a chamber provided an important means of rejecting noise hits. Recalling that the MWPC wires in each chamber were arranged in three planes
such that the wires were oriented at $-30^\circ$, $0^\circ$, and $+30^\circ$ relative to vertical, local coordinates $U$, $X$, and $V$ may be defined
perpendicular to the wire orientations in each plane, in addition to local $x$ and $y$ coordinates perpendicular to the sides of the chamber face. This system is illustrated
in Figure \ref{fig:uxv}. The single wire hit decisions were given corresponding $U/X/V$ coordinates, which can then be converted to the plane coordinates via the following
linear combinations:
\begin{equation}
x_X = X,
\end{equation}
\begin{equation}
x_{UV} = \frac{U+V}{\sqrt{3}},
\end{equation}
\begin{equation}
y_{UV} = U - V.
\end{equation}
Note that since there are only three inputs $U$, $X$, and $V$, only three linearly independent combinations forming $x$ and $y$ exist; any other construction
(e.g., $y_{XV} = \sqrt{3}X-2V = \sqrt{3}\left(x_X-x_{UV}\right)+y_{UV}$) is a linear combination of those above. In constructing MWPC hits, all combinations of hits on the $U$, $X$, and $V$
planes within a single chamber on a given event were considered. Any combinations with $\left|x_X-x_{UV}\right|>4$ mm were rejected as bad combinations. The 4 mm
cutoff was chosen by studying tracks that were reconstructed with the other five planes of the telescope, and identifying hits in the unused sixth plane that appeared
correlated with the track (i.e., that were good hits). The boundary was chosen to ensure that such good hits were not cut. While this cut was wide, it was chosen
to exclude as few good hits as possible since hits from all three MWPCs in a telescope were required for tracks in the final analysis. The resulting higher hit multiplicity was handled
by the tracking algorithm described in the next section.
The final local hit coordinates for a given plane that were passed to the tracking algorithm were a weighted average of the independent $x$ constructions (since $x$ corresponded to
the bending direction of the field and thus was the direction in which maximum resolution was desired), while $y_{UV}$ was used for the $y$ coordinate. That is:
\begin{equation}
x = \frac{2x_X+3x_{UV}}{5},
\label{eq:mwpcx}
\end{equation}
\begin{equation}
y = y_{UV}.
\label{eq:mwpcy}
\end{equation}
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.75\textwidth]{figures/MWPCplane.pdf}}
\caption[Local coordinate system for the MWPC planes]{Local coordinate system used for the MWPC detector hit finding. The $U$, $X$, and $V$ coordinates measure distances
along the perpendiculars to the different wire orientations, which may be converted via linear combinations to the local coordinates $x$ and $y$ that are used for the final
hit positions. The local $x$ coordinate points away from the beamline in both telescopes, while $y$ points up in the left MWPCs and down in the right MWPCs.}
\label{fig:uxv}
\end{figure}
\subsection{Tracking in the $12^\circ$ Telescope}
\label{sec:12track}
While the reconstruction of the scattered proton in 12{$^\circ$} events was conducted using the main drift chamber tracker
described in Section \ref{sec:recon}, the 12{$^\circ$} telescopes utilized a separate tracking system that allowed
greater flexibility than the main tracker. This was possible due to the relative simplicity of reconstruction in
the 12{$^\circ$} telescopes compared to the drift chambers in which the reconstruction is complicated by the uncertainty
in the drift time-to-distance calibration and the duplicity of each recorded hit. While the drift chamber tracking was
constrained to generate event vertices along the line of the beam due to its kinematic look-up library (``fasttrack''),
the 12{$^\circ$} tracker had full freedom to determine all five kinematic parameters of tracks passing through the telescopes. This allowed
the surveyed positions and orientations of the 12{$^\circ$} system to be well-verified, since the tracking was able to reproduce measured
beam positions and expected target distributions with no input of such information.
The first stage of the 12{$^\circ$} tracker was a candidate forming algorithm, which was necessary due to the very high rate of
particles scattered in the regions forward of the drift chambers (either from the target region or upstream of experiment) that
caused a high rate of noise hits in the 12{$^\circ$} detector planes. For the tracking used in the final analysis, only the MWPC hits
were used and so track candidates were first formed of all possible combinations of a hit in each of the three planes in the telescope
for a given event. For each of these three hit combinations, the sagitta (i.e., the distance of the middle hit from the line connecting
the hits in the inner and outer planes) was computed. Using a large library of simulated events created using the radiative elastic event generator
for both lepton species, the expected distribution of the value of the sagitta of MWPC tracks for good events was constructed. The distribution of
sagittas in simulation is shown in Figure \ref{fig:mwpcsag}. While the peak value of the sagitta for elastic events was slightly offset between the two
species due to the magnetic field, both peaked near 2 mm and had similar widths. Visual inspection of events with sagittas greater than 5 mm indicated
that the vast majority of these events involved hard scattering from a metallic element of the detector, and thus these events were not good events
for the purpose of the analysis. Thus, for both species a cut was placed at 5 mm for the maximum value of the sagitta for a candidate to be passed to the track
fitting algorithm. In data, this cut removed 25\%--30\% of three-hit candidates (predominantly those with sagittas greater than 10 mm, a region where simulation indicates there
were no good events). When tracking with the GEMs, additional information regarding the correlation of hit positions in adjacent GEM/MWPC pairs could be used to
further clean the track candidate sample; this is not discussed in detail here as it was not a component of the final analysis.
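For reference, the sagitta of a three-hit candidate is simply the point-to-line distance of the middle hit from the chord joining the inner and outer hits, as in the following C++ sketch (names are illustrative; this is not the actual analysis code).
\begin{verbatim}
// Illustrative computation of the sagitta of a three-hit candidate: the
// distance of the middle hit from the chord joining the inner and outer hits.
// Coordinates are 3D hit positions in a common frame; units are mm.
#include <cmath>

struct Point3 { double x, y, z; };

double sagitta(const Point3& inner, const Point3& middle, const Point3& outer)
{
    // Chord from the inner to the outer hit.
    const double dx = outer.x - inner.x, dy = outer.y - inner.y, dz = outer.z - inner.z;
    // Vector from the inner hit to the middle hit.
    const double mx = middle.x - inner.x, my = middle.y - inner.y, mz = middle.z - inner.z;
    // Point-to-line distance: |m x d| / |d|.
    const double cx = my * dz - mz * dy;
    const double cy = mz * dx - mx * dz;
    const double cz = mx * dy - my * dx;
    return std::sqrt(cx * cx + cy * cy + cz * cz) /
           std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Candidates with sagitta greater than 5 mm were rejected before track fitting.
bool passesSagittaCut(const Point3& a, const Point3& b, const Point3& c)
{
    return sagitta(a, b, c) <= 5.0;
}
\end{verbatim}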
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/mwpcsag.pdf}}
\caption[Distribution of track sagittas in the MWPC telescopes (simulated)]{Distribution of the sagittas for track candidates consisting of three MWPC hits in simulation for each
species in simulation. The clear peak value, similarity of the distributions to those seen in data, and verification that events in the long tail of the distribution corresponded
to unwanted events permitted the sagitta to serve as a useful cut against unwanted hit combinations to save time when tracking events.}
\label{fig:mwpcsag}
\end{figure}
The hits belonging to a selected candidate were passed as local coordinates in the detector planes to the tracking algorithm. The tracking algorithm utilized
the GEANT4E extension to GEANT4, that propagated tracks through the simulated OLYMPUS geometry and magnetic field according to the most probable energy loss rather
than determining energy losses via Monte Carlo \cite{geant4e,Agostinelli:2002hh}. In this scheme, a track propagated through a given geometry and field with given
initial conditions behaves deterministically as it traverses the detector system. This propagation was iterated with the initial conditions ($\theta$, $\phi$, $\left|\mathbf{p}\right|$,
$y$, and $z$ at the event vertex) free to vary so as to minimize the residuals between the recorded hit positions in data and the propagated hit positions generated by
the GEANT4E simulation. The minimization of the residuals was conducted using a Levenberg-Marquardt minimization routine, as implemented by the C/C++ Minpack libraries \cite{levmar,minpack,cminpack}.
While relatively slow compared to other tracking algorithms, the small number of 12{$^\circ$} events in the OLYMPUS trigger sample (compared to those handled by the drift chamber tracking)
made this approach practical, and its complete kinematic freedom was valuable for performing alignment calibrations, assessing resolutions, etc.
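Schematically, the fit can be thought of as minimizing a residual vector of the following form over the five vertex parameters. The sketch below is illustrative only: \texttt{propagateToPlane} is a hypothetical stand-in for the GEANT4E propagation (declared but not implemented here), and the residuals would be handed to a Levenberg-Marquardt minimizer such as Minpack.
\begin{verbatim}
// Skeleton of the residual vector minimized by the 12-degree track fit.
// propagateToPlane() is a hypothetical stand-in for the GEANT4E propagation
// (declaration only); the residuals would be handed to a Levenberg-Marquardt
// minimizer such as Minpack. Illustrative only.
#include <array>
#include <vector>

struct TrackParams { double theta, phi, momentum, y0, z0; };  // vertex parameters
struct PlaneHit    { int planeId; double x, y; };             // measured local hit

// Hypothetical deterministic propagation of a track with the given parameters
// to a detector plane, returning the predicted local (x, y) intersection.
std::array<double, 2> propagateToPlane(const TrackParams& p, int planeId);

std::vector<double> residuals(const TrackParams& p,
                              const std::vector<PlaneHit>& hits)
{
    std::vector<double> r;
    for (const auto& h : hits) {
        const std::array<double, 2> pred = propagateToPlane(p, h.planeId);
        r.push_back(h.x - pred[0]);   // minimized over (theta, phi, |p|, y0, z0)
        r.push_back(h.y - pred[1]);
    }
    return r;
}
\end{verbatim}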
\subsection{System Performance}
\label{sec:12perf}
This section discusses various relevant aspects of the performance of the 12{$^\circ$} system during the OLYMPUS data runs and the methods used
to make the performance assessments. In general, the system performed consistently and well throughout data-taking (with the exception of
the previously mentioned GEM issues). The redundancy of the system with the inclusion of GEM data allowed precise assessment of the efficiencies
and resolutions of the trigger and MWPC planes, providing high confidence in the simulation implementation of those detectors.
\subsubsection{Detector Efficiencies}
\label{sec:12eff}
To provide an accurate measure of both the absolute and species-relative luminosities, it was critical to
determine and properly simulate the efficiencies of the various components of the 12{$^\circ$} system. In general, these
efficiencies were measured to very high precision using fully reconstructed events with the detector of interest
removed, which was made possible by the high redundancy of the 12{$^\circ$} telescopes. The methods and results of the
efficiency determinations for each subsystem are described in this section.
\paragraph{SiPM Scintillator Planes}
The performance of the SiPM trigger planes was evaluated using the lead glass calorimeters and the associated
trigger described in Sections \ref{sec:lg} and \ref{ss:12dtrig}. Since electron and positron events were
distributed differently across the trigger planes, it was important to measure any inconsistencies in efficiency
across the planes for implementation in the simulation.
A large sample of lead glass trigger events was compiled, and all possible six-plane tracks from the 12{$^\circ$} telescopes
in this sample were constructed. Six-plane tracks were used so as to achieve the best projected position resolution
for the tracks in the scintillator planes. The standard elastic sample cuts for the 12{$^\circ$} analysis (described
in Section \ref{sec:12ana}) were used to ensure that the tracks represented good, relevant events for the purpose
of the study. For each track, the trajectory was propagated using the GEANT4E tracker to the planes
of the scintillator tiles. For each plane, if the plane recorded a hit for that event the corresponding position
bin was marked efficient for the event. Approximately 3 million tracked events per telescope were used to generate the efficiency
maps shown in Figure \ref{fig:sipmeff}, which were implemented in the digitization of the 12{$^\circ$} trigger for the
simulation. In the regions of the planes corresponding to the kinematic acceptance of the telescopes
for tracks, the efficiency of the planes was in excess of 97\% and typically higher ($\gtrsim$99\%). By comparing subsets
of the data sample, it was determined that no significant time dependence or beam species/running condition dependence
affected the SiPM plane efficiencies.
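A minimal sketch of this efficiency-map construction (with assumed bin edges and a pre-computed hit-match flag) might look as follows; the real implementation differs in detail but follows the same tried/fired bookkeeping.
\begin{verbatim}
# Sketch of the efficiency-map construction: each good track is projected to
# the plane under study, the corresponding position bin is counted as "tried",
# and additionally counted as "fired" if the plane recorded a matching hit for
# that event.  Bin edges and the hit-matching step are assumed details.
import numpy as np

def efficiency_map(proj_xy, fired, x_edges, y_edges):
    """proj_xy: (N, 2) projected track positions in local plane coordinates.
    fired: length-N boolean array, True if the plane had a hit for the event.
    Returns the per-bin efficiency (NaN where no tracks landed)."""
    tried, _, _ = np.histogram2d(proj_xy[:, 0], proj_xy[:, 1],
                                 bins=[x_edges, y_edges])
    hit, _, _ = np.histogram2d(proj_xy[fired, 0], proj_xy[fired, 1],
                               bins=[x_edges, y_edges])
    with np.errstate(invalid="ignore"):
        return np.where(tried > 0, hit / tried, np.nan)
\end{verbatim}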
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/sipmeff.pdf}}
\caption[Efficiencies of the SiPM scintillator tiles for the 12{$^\circ$} trigger]{Efficiency maps for the four SiPM-instrumented
scintillator tiles used for the 12{$^\circ$} trigger. The $x$ and $y$ coordinates in each plot represent the local detector coordinates
of each plane, which follow the convention in which $x$ points away from the beamline and $y$ points up in the left sector
and down in the right. Note that due to acceptance constraints (determined by the outermost detectors in the
telescopes, MWPCs 2 and 5), the inner planes (0 and 2) in each telescope are not completely illuminated by tracks.}
\label{fig:sipmeff}
\end{figure}
\paragraph{MWPC Detectors}
As the MWPC planes were critical to the ultimate measurement of the luminosity in the 12{$^\circ$} system, proper implementation of their
efficiencies in the simulation was also critical for the same reasons as for the SiPM plane efficiencies.
Due to the redundancy of the 12{$^\circ$} tracking system, it was straightforward to measure the MWPC plane efficiencies using the main
trigger and tracking with hits from the other five planes in the telescope aside from the plane being assessed. For a large
fraction of the data set (including data from all time periods used in the final analysis), tracks were reconstructed in the
12{$^\circ$} system using all five-plane combinations to check the efficiency of the sixth unused plane in each set using the same
methodology as described for the SiPM planes. The MWPC planes were found to have extremely high and consistent efficiency, both
across the face of each plane and as a function of time. Notably, the three planes in the right sector telescope had five known
inactive wires (due to bad connections, malfunctioning ADC channels, etc.) that were easily identified by this method, serving as a check
of its validity. Figure \ref{fig:mwpceff} presents the efficiency maps for each plane.
\begin{sidewaysfigure}
\centerline{\includegraphics[width=1.05\textwidth]{figures/mwpceff.pdf}}
\caption[Efficiencies of the MWPC detectors]{Efficiency maps for the six multi-wire proportional chambers used in the 12{$^\circ$}
luminosity telescopes. The $x$ and $y$ coordinates in each plot represent the local detector coordinates
of each plane, which follow the convention in which $x$ points away from the beamline and $y$ points up in the left sector
and down in the right. It is notable that all six chambers exhibit remarkably consistent and high ($>$98\%) efficiency away from the five known
inactive wires in the right side detectors (Planes 3--5).}
\label{fig:mwpceff}
\end{sidewaysfigure}
\paragraph{GEM Detectors}
The same method used for the MWPC efficiencies was applied to measure the efficiencies of the individual GEM planes, the results
of which are shown in Figure \ref{fig:gemeff}. While the efficiency with the new hit-finding algorithm was improved significantly
over that with the old algorithm (Figure \ref{fig:badeff}), the time-dependence problem of the overall efficiency was not solved, as previously
discussed. Some ``striping'' from weak channels remained in the new efficiency maps (along with small regions of lower efficiency due to defects
in the readout planes of GEMs 1 and 4 and a larger region in GEM 1 due to a known bad APV), but in general the new hit-finder recovered
a significant number of hits in the regions in which the GEM data were challenging.
\begin{sidewaysfigure}
\centerline{\includegraphics[width=1.00\textwidth]{figures/gemeff.pdf}}
\caption[Average efficiencies of the GEM detectors]{ Efficiency maps for the six GEMs used in the 12{$^\circ$}
luminosity telescopes, using the new GEM hit-finder described in Section \ref{sec:gemhit}. Note that GEM 1 operated with a known bad APV for its $+x$ data (and no instrumentation for its
$y$ data), and the inner GEMs (0 and 3) were not completely illuminated due to constraints in the acceptance from the other detector and trigger planes. The efficiency with the new algorithm was significantly better than
that with the previous hit-finding method (Figure \ref{fig:badeff}). The efficiency data shown here was sampled for specific runs, but was found to vary approximately uniformly across each
plane as a function of time as discussed in Section \ref{sec:hahahaha}.
The $x$ and $y$ coordinates in each plot represent the local detector coordinates
of each plane, which follow the convention in which $x$ points away from the beamline and $y$ points up in the left sector
and down in the right. }
\label{fig:gemeff}
\end{sidewaysfigure}
\subsubsection{Hit Resolution}
The hit resolution in the MWPCs was fundamentally limited by the discrete nature of the detector's readout, which provided hits at specific wire locations spaced by
approximately 1 mm. While the placement of the wires was not exact (due to the wires being soldered by hand to the detector frames), the uncertainty on this placement ($\mathcal{O}(0.1\:\text{mm})$)
was considerably less than the wire spacing and randomly distributed about the nominal positions, and thus effectively negligible for the
overall hit position resolution. Propagating an uncertainty of 0.5 mm on each individual MWPC plane coordinate ($U$, $X$, and $V$) through Equations \ref{eq:mwpcx} and \ref{eq:mwpcy},
and approximately accounting for the correlation of hits in the three planes, results in the approximate reconstructed hit uncertainties:
\begin{equation}
\Delta x \approx 0.25\:\text{mm},
\end{equation}
\begin{equation}
\Delta y \approx 0.50\:\text{mm},
\end{equation}
in the local MWPC coordinates. In general, this calculation was found to be consistent with both data and simulation. Regarding the GEM hit resolution, a careful
study was not completed for this work since the GEM hit-finder was not fully optimized after it became clear that the use of the GEMs in final luminosity analyses
was precluded. In general, however, resolutions of order \SI{50}{\micro\meter} are likely achievable.
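For reference, the propagation above follows the standard first-order error formula. Writing a reconstructed coordinate generically as $c = c(U, X, V)$, with the explicit forms given by Equations \ref{eq:mwpcx} and \ref{eq:mwpcy}, the assumed relation is
\begin{equation}
\Delta c^2 \approx \sum_{i \in \{U, X, V\}} \left(\frac{\partial c}{\partial i}\right)^2 \Delta i^2 + 2\sum_{i < j} \frac{\partial c}{\partial i}\frac{\partial c}{\partial j}\,\mathrm{cov}(i, j),
\end{equation}
with $\Delta U = \Delta X = \Delta V = 0.5$ mm and the covariance terms approximating the correlation of hits in the three planes noted above.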
\subsubsection{Lepton Tracking Efficiency}
A lower bound on the lepton tracking efficiency was found by comparing the number of simulation events for which the generated kinematics were within
the cuts described in Section \ref{sec:12ana} and a candidate set of hits was produced according to the scheme discussed in Section \ref{sec:12track} to
the number of successfully tracked leptons from such candidates. This study found that the tracking algorithm produced a valid lepton track for 96.67\% of such candidates,
with a very small lepton species difference (discussed in Section \ref{ss:12sys}). Note, however, that this is a conservative lower bound on the tracking efficiency
since many of the events missed correspond to events in which the lepton underwent a hard scatter from an element of the target or detector system. This process occurs
in both simulation and data, and such events cannot be reasonably reconstructed and used in an elastic {$e^\pm p$} event sample. Visual inspection of such events indicated that
tracks missed in the inefficient sample very frequently corresponded to such hard-scatter anomalies.
\subsubsection{Lepton Tracking Resolutions}
The resolution of the tracking in the 12{$^\circ$} system (as it pertains to the analysis in which the GEM detectors were not included) was fundamentally limited
by the resolution of the MWPC hit reconstruction rather than limitations of the tracking scheme. Table \ref{tab:12res} summarizes some of the main tracking resolution
parameters estimated from the combined dataset used in the analysis. The resolutions were not found to vary between beam species, and the simulation digitization was
developed so as to properly replicate the data resolutions to the extent possible. Examples of the distributions of parameters presented in the table may be found in Section \ref{sec:12comp}.
Since the MWPCs were designed to sacrifice resolution in the vertical direction for extra sensitivity to the horizontal direction (the direction of the field bending), the resolution
in reconstructed $\theta$ was considerably better than that in $\phi$. In general, the momentum resolution is quite wide. While this complicates event selection, it also reduces
uncertainties due to radiative corrections that could otherwise be large due to the relatively narrow acceptance of the telescopes. Naturally, reconstructed parameters involving
{$e^\pm p$} pairs convolve the resolutions of the 12{$^\circ$} and drift chamber tracking, which is briefly discussed in Section \ref{sec:recon} but covered in more detail in References \cite{schmidt}
and \cite{russell}. Inclusion of the GEM detectors improves the resolution by a significant
factor (nearly an order of magnitude in some parameters), but no complete quantitative study of this has been completed.
\begin{table}[thb!]
\begin{center}
\begin{tabular}{|l|c|}
\hline
Parameter & Estimated Resolution \\
\hline\hline
Vertex $y$ position & 3.5 mm \\
\hline
Vertex $z$ position & 50 mm \\
\hline
Lepton momentum & 310 MeV \\
\hline
$\theta$ & 0.1{$^\circ$} \\
\hline
$\phi$ & 0.4{$^\circ$} \\
\hline
\end{tabular}
\end{center}
\caption[Approximate tracking resolutions in the 12{$^\circ$} telescopes]{A summary of approximate estimates of the resolution achieved for several kinematic parameters
for the tracking of {$e^\pm p$} leptons in the 12{$^\circ$} telescopes.}
\label{tab:12res}
\end{table}
\subsection{Method of Analysis}
\label{sec:12ana}
While single-arm inclusive measurements of the luminosity using the 12{$^\circ$} system were investigated, it was ultimately determined that an exclusive measurement (requiring
the detection of the protons in the {$e^\pm p$} scattering events) was the method that could achieve the smallest overall systematic uncertainty in the final absolute and relative
luminosity determinations. This is primarily due to the fact that in a single-arm measurement, contamination from background (i.e., non-elastic {$e^\pm p$} events that mimic
the signal of such an event (a $\sim$1.9 GeV forward lepton with a rearward ToF hit)) would be an appreciable fraction of the sample. While a simple cut on the meantime of the ToF hit
rejected a large fraction of this background, the relatively poor resolution associated with tracking using only the three MWPC planes in each telescope made it difficult to properly
model and subtract the remaining background due to random ToF hits in the appropriate timing window. Since it was known that rates of random ToF hits during electron beam operation were
notably higher than during positron running, it was determined that an inclusive measurement would entail a larger systematic uncertainty than an exclusive measurement (which introduces
uncertainty from the tracking of the proton). An exclusive measurement does, however, heavily suppress the background leaving a very clean {$e^\pm p$} sample. Tests following the background
subtraction scheme described in Section \ref{sec:backsub} determined that for the exclusive selection of 12{$^\circ$} events the background fraction was $\mathcal{O}(0.1\%)$ and that the
species-relative difference in the background was on the order of the statistics of the entire dataset ($\mathcal{O}(0.01\%)$) and thus negligible.
The final analysis combined leptons tracked using the methods described in Section \ref{sec:12track} and protons tracked in the drift chambers using
the elastic arms algorithm tracking scheme discussed in Section \ref{sec:track}. The methodology and application to both simulation and data are described
in the following sections. This scheme produced the values of the event counts $N_\text{data}$ and $N_\text{MC}$ that were used in Equation \ref{eq:l12}
to determine the integrated luminosity.
\subsubsection{Exclusive Event Selection Scheme}
The exclusive event selection proceeded as follows, looping over all data and simulated event triggers:
\begin{enumerate}
\item Determine if the trigger condition met the 12{$^\circ$} trigger requirements (Section \ref{ss:12dtrig}), and reject the event if not.
\item Using the information from the trigger (which determined which 12{$^\circ$} telescope was active for the event), search for a hit in
one of the valid trigger ToF bars (Figure \ref{fig:trigcon}) with a large enough meantime ($\gtrsim12$ ns) to allow the possibility that
the hit corresponded to an elastically scattered proton. This cut was chosen so as to be only a very wide selection against fast lepton backgrounds
from the target and to err on the side of inclusiveness rather than rejecting possible protons. If no such ToF hit was present, the event was rejected.
\item Create all possible pairings of properly charged leptons according to the track bending (i.e., $e^+$ or $e^-$ as determined by the beam species) in the specified
telescope and identified proton tracks in the opposite side drift chambers. If no such pair exists, reject the event.
\item From among the created pairs (multiple-pair events accounted for $\ll$1\% of events at this step), select the pair that minimizes the resolution-weighted sum of the cut
parameters listed in the next step.
\item Check if the event passes the following cuts (the chosen values of these cuts based on a comprehensive systematic uncertainty analysis are described in the fiducial and
elastic cut portions of Section \ref{ss:12sys}):
\begin{enumerate}
\item Fiducial cut on the reconstructed position of the event vertex in global $y$ as determined by the lepton track (the proton tracks did not have flexibility in this dimension)
\item Fiducial cut on the reconstructed position of the event vertex in global $z$ as determined by the average of the two tracks
\item Elastic kinematics cut on the coplanarity ($\Delta\phi\approx180^\circ$) of the two tracks
\item Elastic kinematics cut on the correlation of the placement of the global $z$ event vertex of the two tracks ($\Delta z$)
\item Elastic kinematics cut on the beam energy reconstructed from the angles of the two tracks, assuming elastic kinematics ($E_{\text{beam},\theta}\approx 2010$ MeV)
\item Elastic kinematics cut on the single-arm missing energy of the lepton divided by the square of the expected
elastic energy computed from the reconstructed $\theta$ ($\Delta E'_\theta/E'^2\approx 0$)
\end{enumerate}
\item Add the event to the count of elastic events $N$ if all steps are passed.
\end{enumerate}
Note that only one cut involving reconstructed momentum was used, due to the relatively poor momentum resolution of the MWPCs and the difficulty of precisely tracking
particles in the back-angle portions of the drift chamber, where the tracks traverse the wire layers nearly perpendicularly (resulting in few crossings between cells, which
would otherwise help constrain the track positions). In the course of determining the final event selection method, a number of cut schemes were tested and the boundaries
of the cuts were systematically varied as described in Section \ref{ss:12sys}. This methodology was chosen for its robustness against the weaknesses of the 12{$^\circ$} measurement (poor
momentum resolution) and its stability under variation of the cuts; a schematic of the selection logic is sketched below.
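The following Python sketch summarizes the selection logic; every field name and cut value is a hypothetical placeholder (the actual boundaries were fixed by the systematic studies of Section \ref{ss:12sys}), and it is intended only to make the flow of the selection concrete.
\begin{verbatim}
# Schematic of the exclusive selection described above, operating on a list of
# pre-built event dictionaries.  All field names and cut values are
# hypothetical placeholders; the sketch only illustrates the flow of the
# selection, not the actual cut boundaries.
def passes_cuts(pair, cuts):
    return (abs(pair["vertex_y"]) < cuts["y_fid"]          # lepton-track vertex y
            and abs(pair["vertex_z"]) < cuts["z_fid"]      # two-track average z
            and abs(pair["delta_phi"] - 180.0) < cuts["coplanarity"]
            and abs(pair["delta_z"]) < cuts["delta_z"]
            and abs(pair["e_beam_from_angles"] - 2010.0) < cuts["e_beam"]
            and abs(pair["de_over_e2"]) < cuts["de_over_e2"])

def count_elastic(events, cuts):
    n_elastic = 0
    for event in events:
        if not event["is_12deg_trigger"]:
            continue
        if not event["has_slow_tof_hit"]:          # meantime of ~12 ns or more
            continue
        pairs = event["lepton_proton_pairs"]       # properly charged lepton + proton
        if not pairs:
            continue
        # Choose the pair minimizing the resolution-weighted sum of the cut
        # parameters (multiple-pair events are << 1% of the sample).
        best = min(pairs, key=lambda p: p["weighted_cut_sum"])
        if passes_cuts(best, cuts):
            n_elastic += 1
    return n_elastic
\end{verbatim}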
\subsubsection{Simulation of 12{$^\circ$} Events}
Simulated 12{$^\circ$} events (used to produce $N_\text{MC}$) were treated identically to data events, following the analysis strategy discussed in Chapter \ref{Chap4}. The simulation
was conducted in the same framework as described for the main analysis in Section \ref{sec:sim}, and
the creation of digitized hits in the ToFs and drift chambers (i.e., values from simulation converted to mimic the structure of the experimental data) was conducted using
the same methods as for the main {$\sigma_{e^+p}/\sigma_{e^-p}$} analysis. The digitization of the 12{$^\circ$} system elements was straightforward, due to the discretized nature
of the MWPC $U$, $X$, and $V$ plane hits discussed in Section \ref{sec:12hit}. Recorded energy depositions from the primary particles were associated with discrete wire positions
and then simulated MWPC hits were reconstructed in an identical fashion to data hits. Based on the distribution of hits from data that registered on multiple adjacent wires, simulated
MWPC hits were also allowed to register on multiple wires, at rates randomly drawn from the data distribution. SiPM plane digitization for the trigger was also straightforward, in that it only
involved matching the energy deposition threshold in simulation to a value similar to experimental conditions. Due to the fact that good 12{$^\circ$} events produced SiPM hits well above the
experimental threshold, the choice of this threshold in simulation was effectively negligible. After hits were constructed in the SiPM planes and MWPCs, they were tested by position
against the efficiency maps presented in Section \ref{sec:12eff} using a random number drawn against the efficiency. Hits passing the efficiency test were packaged in the same format
as the hits produced from data, and from this point simulation data were treated identically to experimental data, ensuring a robust comparison.
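A minimal sketch of the efficiency test applied to each digitized hit (assuming the measured efficiency map is stored as a two-dimensional array with known bin edges) is shown below.
\begin{verbatim}
# Sketch of the efficiency test applied to each digitized simulation hit: the
# hit is kept only if a uniform random draw falls below the measured
# efficiency of the map bin containing the hit.  The bin lookup is an assumed
# implementation detail.
import numpy as np

rng = np.random.default_rng()

def keep_hit(x, y, eff_map, x_edges, y_edges):
    ix = np.clip(np.searchsorted(x_edges, x) - 1, 0, eff_map.shape[0] - 1)
    iy = np.clip(np.searchsorted(y_edges, y) - 1, 0, eff_map.shape[1] - 1)
    eff = eff_map[ix, iy]
    return bool(np.isfinite(eff)) and rng.random() < eff
\end{verbatim}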
\subsection{Comparison of Data and Simulation}
\label{sec:12comp}
This section provides several figures showing distributions of various kinematic parameters reconstructed by the 12{$^\circ$} telescope analysis for both lepton species
in data and simulation so as to provide a basic overview of the nature of the data and establish the validity of the data/simulation comparison. While plots of
all kinematic parameters would fill many pages, several representative plots that are particularly relevant to the analysis are shown. In general, the data/simulation
agreement is very good to within resolutions and the conservative cuts applied to the data avoid regions of disagreement due to slight differences in resolution. Note
that since the final 12{$^\circ$} luminosity measurement (Section \ref{sec:12res}) found a result several percent above the slow control estimate for the absolute luminosity (which was used to generate
the simulation), the integrals of the distributions shown differ by that factor. Uncertainties caused by these residual data/simulation differences were accounted for when
considering the systematic uncertainties due to the various applied cuts. The distributions presented are:
\begin{itemize}
\item $Q^2$ reconstructed from the lepton angle (Figure \ref{fig:12q2}),
\item vertex $z$ reconstructed by the lepton track (Figure \ref{fig:12z}),
\item reconstructed lepton momentum (Figure \ref{fig:12mom}), and
\item $\Delta E'_\theta/E'^2$, the single arm lepton missing energy relative to the expected energy at the reconstructed $\theta$ (Figure \ref{fig:12de}).
\end{itemize}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/12c_q2.pdf}}
\caption[Distributions of $Q^2$ for 12{$^\circ$} leptons in data and simulation]{Distributions of reconstructed $Q^2$ for 12{$^\circ$} leptons of each species, in data and simulation
for the entirety of the data sample.}
\label{fig:12q2}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/12c_z.pdf}}
\caption[Distributions of vertex $z$ for 12{$^\circ$} leptons in data and simulation]{Distributions of reconstructed vertex $z$ for 12{$^\circ$} leptons of each species, in data and simulation
for the entirety of the data sample.}
\label{fig:12z}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/12c_mom.pdf}}
\caption[Distributions of reconstructed momentum for 12{$^\circ$} leptons in data and simulation]{Distributions of reconstructed momentum for 12{$^\circ$} leptons of each species, in data and simulation
for the entirety of the data sample.}
\label{fig:12mom}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/12c_de.pdf}}
\caption[Distribution of single-arm missing energy for 12{$^\circ$} leptons in data and simulation]{Distributions of the single-arm missing energy $\Delta E'_\theta/E'^2$
for 12{$^\circ$} leptons of each species, in data and simulation for the entirety of the data sample.}
\label{fig:12de}
\end{figure}
\subsection{Systematic Uncertainties}
\label{ss:12sys}
Since any uncertainty in the luminosity determination directly affects the uncertainty on
the final {$\sigma_{e^+p}/\sigma_{e^-p}$} ratio result, it was critical to consider all possible causes of systematic
uncertainty in the extraction of the 12{$^\circ$} luminosity, especially any such factors that affect
the relative luminosity of the lepton species. Naturally, the uncertainty of the species-relative
luminosity determination is considerably smaller than the uncertainties of an absolute luminosity determination
due to the numerous systematic effects that are the same for both species, making the 12{$^\circ$} an effective system
for the task at hand (although not necessarily a precise absolute luminosity monitor). Note that no systematic effects that
explicitly arise from the GEM detectors are formally discussed or computed since the GEM data did not directly contribute to
the luminosity measurements in this work. The systematic uncertainties for the
12{$^\circ$} system are discussed in the following sections, and the absolute and relative luminosity
uncertainties are summarized in Table \ref{tab:12ds}. A discussion of the interpretation and implications of these
estimates may be found at the end of this section following a discussion of the details of each contribution estimate.
Regarding the methods used to compute the systematic uncertainties and the way in which they should be
interpreted, the values presented in this section are estimates of the maximum range
that the values of the extracted absolute and relative luminosities could have due to conservative
estimates of the plausible range over which each effect may vary. Due to this, these are not to be interpreted
as Gaussian uncertainties in most cases (as many of the effects are non-Gaussian), and, in general, the
interpreted Gaussian uncertainties corresponding to the values in Table \ref{tab:12ds} are less than or equal to the
presented values (as is discussed at the end of this section). Only effects found to amount to a contribution of 0.01\%
or greater are discussed here, since this is
the order of the overall statistical uncertainty of the dataset of 12{$^\circ$} events.
\begin{table}[thb!]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
Uncertainty Source & Relative (\%) & Absolute (\%) \\
\hline\hline
ToF trigger efficiency ($\delta_{\epsilon_\text{ToF}}$) & $\pm0.19$ & $\pm0.25$ \\
\hline
SiPM trigger efficiency ($\delta_{\epsilon_\text{SiPM}}$) & $\pm0.01$ & $\pm0.10$ \\
\hline
MWPC plane efficiency ($\delta_{\epsilon_\text{MWPC}}$) & $\pm0.01$ & $\pm0.05$ \\
\hline
Magnetic field ($\delta_{B}$) & $\pm0.15$ & $\pm0.35$ \\
\hline
Lepton tracking efficiency ($\delta_{\epsilon_{e,\text{track}}}$) & $\pm0.18$ & $\pm0.86$ \\
\hline
Proton tracking efficiency ($\delta_{\epsilon_{p,\text{track}}}$) & $\pm0.10$ & $\pm0.80$ \\
\hline
Beam position/slope ($\delta_\text{BPM}$) & $\pm0.01$ & $\pm0.01$ \\
\hline
Beam energy ($\delta_{E_\text{beam}}$)& $\pm 0.02$ & $\pm 0.02$ \\
\hline
Detector position ($\delta_\text{det}$) & $\pm 0.02$ & $\pm 0.20$ \\
\hline
Fiducial cuts ($\delta_\text{fid}$) & $\pm 0.12$ & $\pm 0.22$ \\
\hline
Elastic cuts ($\delta_\text{elas}$) & $\pm 0.27$ & $\pm 1.63$ \\
\hline
Radiative corrections ($\delta_\text{rad}$) & $\pm0.08$ & $\pm0.45$ \\
\hline
Elastic form factors ($\delta_\text{ff}$) & $\pm0.14$ & $\pm1.20$ \\
\hline
TPE at $\theta = 12^\circ$ ($\delta_\text{TPE}$)* & $\pm0.10 $ & $\pm0.10 $\\
\hline\hline
Total including TPE uncertainty ($\delta_{12^\circ,\text{TPE}}$) & $\pm0.47\%$ & $\pm2.44\%$ \\
\hline
Total without TPE uncertainty ($\delta_{12^\circ}$) & $\pm0.46\%$ & $\pm2.44\%$ \\
\hline
\end{tabular}
\end{center}
\caption[Systematic uncertainties of the 12{$^\circ$} luminosity determination]{A summary of the contributions to the systematic uncertainty
in the determination of {$\sigma_{e^+p}/\sigma_{e^-p}$} and of the absolute single-species luminosity
in the 12{$^\circ$} monitors in percent, as discussed in detail in
Section \ref{ss:12sys}. Absolute uncertainties are averaged between the species for the purpose of quoting a single number. These uncertainties may be
considered to be independent, in general, and thus are added in quadrature to produce the total
uncertainty estimate. Note that the TPE uncertainty (marked by *) contributes when the 12{$^\circ$} result is used as a determination
of the relative luminosity for the {$\sigma_{e^+p}/\sigma_{e^-p}$} result, but is not included for a measurement of TPE at $\theta \approx 12^\circ$ using an
independent luminosity extraction.}
\label{tab:12ds}
\end{table}
\subsubsection{ToF Trigger Efficiency}
Due to the fact that protons from $e^+p$ and $e^-p$ scattering events are distributed differently
among the ToF scintillator bars at backwards angles (Figure \ref{fig:12tdist}) and the fact that a recorded ToF hit is
required for the 12{$^\circ$} trigger (Section \ref{ss:12dtrig}), any unaccounted for anisotropy among the efficiencies
of the ToF bars would introduce a shift in the relative {$\sigma_{e^+p}/\sigma_{e^-p}$} measurement. Since the 12{$^\circ$} trigger is only
concerned with the efficiency of generating hits for protons of relatively low momentum ($\sim$10$^2$ MeV/$c$) in the ToFs
(which deposit a large amount of energy in the
scintillator) and since those protons strike the central region of the bar (the most efficient region for generating
a signal in both photomultipliers), the expected efficiency for the ToF
element of the 12{$^\circ$} trigger is essentially unity from physics considerations alone. Due to issues with the PMT
couplings and electronics used to read out the ToF bars, however, some inefficiencies were likely present in the
experiment that are difficult to account for in the simulation, which assumes a nearly perfect efficiency for
12{$^\circ$} event protons.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/12tofhitdist.png}}
\caption[Distributions of proton ToF hits for $e^+p$ and $e^-p$ 12{$^\circ$} events]{Distributions of the ToF bars struck
by the protons in elastic $e^+p$ and $e^-p$ events in which the lepton was recorded by the left 12{$^\circ$} telescope, normalized for comparison
between events of the different lepton species. Note that 12{$^\circ$} $e^+p$ events have protons at more backward
angles (higher ToF index) than $e^-p$ events due to the toroidal field which caused positively charged particles
to be out-bending. Due to the differences in these distributions, any anisotropy in ToF bar efficiencies could
introduce a systematic shift in the {$\sigma_{e^+p}/\sigma_{e^-p}$} determination in the 12{$^\circ$} system.}
\label{fig:12tdist}
\end{figure}
Via estimates of the ToF efficiencies from both dedicated triggers (Section \ref{ss:addtrig}) and ``scintillator sandwich'' measurements,
the efficiency for proton detection was conservatively estimated to be in excess of 99.5\% for the protons
of interest \cite{russell1}. Since the Monte Carlo assumed near-perfect efficiency for the events
of interest, the effect of unaccounted-for inefficiencies in the simulation could be estimated by rejecting events
in specific bars with an artificial efficiency of 99.5\%, and examining the effect on the electron-positron ratio
and the absolute luminosity estimates. This rejection was computed for all subsets of the seven bars that are a
part of the 12{$^\circ$} trigger for each side (i.e., reducing the efficiency of all single bars, pairs of bars, triplets of bars, etc.).
The resulting effect on the {$\sigma_{e^+p}/\sigma_{e^-p}$} ratio is shown in Figure \ref{fig:tofsys}. Since the maximal error on the ratio can
occur for multiple combinations of small numbers of bars, the systematic uncertainty of {$\sigma_{e^+p}/\sigma_{e^-p}$} in the 12{$^\circ$} system
was conservatively assigned to be the maximum value of the deviation in the ratio under this method ($\delta_{\epsilon_\text{ToF,rel}} = 0.19\%$).
The uncertainty of the absolute luminosity for each species from this effect was taken to be the mean effect from all 0.5\% efficiency
drop combinations, i.e., $\delta_{\epsilon_\text{ToF,abs}} = 0.25\%$.
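The scan over bar subsets can be sketched as follows; the per-bar yields are hypothetical inputs standing in for distributions like those of Figure \ref{fig:12tdist}, and the function simply reports the largest shift of the yield ratio when the chosen bars are degraded to 99.5\% efficiency.
\begin{verbatim}
# Sketch of the ToF-efficiency systematic scan: an artificial 99.5% efficiency
# is applied to every subset of the seven trigger bars, and the largest shift
# of the e+/e- yield ratio is recorded.  The per-bar yields are hypothetical
# inputs standing in for the measured hit distributions.
from itertools import combinations
import numpy as np

def max_ratio_shift(n_pos, n_neg, eff=0.995):
    """n_pos, n_neg: arrays of elastic yields per trigger ToF bar."""
    nominal = n_pos.sum() / n_neg.sum()
    worst = 0.0
    for k in range(1, len(n_pos) + 1):
        for subset in combinations(range(len(n_pos)), k):
            w = np.ones(len(n_pos))
            w[list(subset)] = eff                  # degrade the chosen bars
            ratio = (w * n_pos).sum() / (w * n_neg).sum()
            worst = max(worst, abs(ratio / nominal - 1.0))
    return worst
\end{verbatim}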
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/12tofsys.png}}
\caption[Effect of ToF efficiency uncertainty on the 12{$^\circ$} luminosity ratio]{Estimated possible effect on the extracted 12{$^\circ$} luminosity ratio due to
uncertainty in the efficiency of the ToF trigger. The uncertainty of 0.5\% in the efficiency of each bar was applied for each
subset of bars involved in the trigger, as described in the text. The maximal deviation was $\delta_{\epsilon_\text{ToF,rel}} = 0.19\%$.}
\label{fig:tofsys}
\end{figure}
\subsubsection{SiPM Trigger Efficiency}
While electron and positron tracks in the 12{$^\circ$} telescopes illuminated approximately the same total regions of the SiPM planes due to the telescope
acceptance being constrained by other elements, the distributions of the illumination of the planes by the two different species may
be different due to the change in the cross sections for elastic scattering across the ranges of $\theta$ accepted for each species. The SiPM
plane efficiencies computed in Section \ref{sec:12eff} and shown in Figure \ref{fig:sipmeff} are in general quite good ($>$99\%) throughout
their active regions. The small anisotropies in the efficiency could lead to an artificial difference in the extracted electron and positron
luminosities. The anisotropies in the SiPM plane efficiencies are on the percent level; taking the behavior of SiPM Planes 1 and 3 as
the worst-case scenarios, a model of a 1\% efficiency drop (with an uncertainty on the size of the drop of 0.5\%)
in a band of local $x$ of width $\sim$30 mm provides a conservative test case for determining the effect on the 12{$^\circ$} trigger. The relative
normalized illumination of the SiPMs by $e^+$ and $e^-$ hits differs by at most 0.1\%, as determined by simulation. So, assuming
the maximally asymmetric case between species, in which the lepton hit distributions differ by 0.1\% in the aforementioned
uncertain region (corresponding to 30\% of the detector plane), the resulting ratio of the number of accepted hits between lepton
species is shifted by less than 0.01\%. Thus, even with a very conservative estimate of the uncertainty in the knowledge of the
SiPM plane efficiencies, the high efficiency, the uniformity of that efficiency, and the small acceptance of the telescopes (which makes the hit distributions
for each species very insensitive to differing slopes of the cross sections as a function of $\theta$) ensure that the effect
on the relative extracted luminosity is small; it may be conservatively quoted as $\delta_{\epsilon_\text{SiPM,rel}}=0.01\%$.
Assigning an overall conservative uncertainty of $0.1\%$ to the overall efficiencies of the individual SiPM planes from the statistics of the
estimate in Section \ref{sec:12eff} and track reconstruction, this directly contributes an overall trigger efficiency uncertainty of
$\delta_{\epsilon_\text{SiPM,abs}}=0.1\%$ to the absolute luminosity measurement.
\subsubsection{MWPC Plane Efficiency}
The uncertainty introduced due to MWPC plane inefficiency is similar in nature to that introduced by the SiPM plane inefficiency, but the
MWPC planes, especially in the left telescope, exhibited even more consistent efficiency than the SiPM planes, as discussed in
Section \ref{sec:12eff} and shown in Figure \ref{fig:mwpceff}. In the left telescope
(MWPC Planes 0--2) the anisotropies are on the scale of a few tenths of a percent and of millimeter extent, but are relatively randomly distributed across
the planes rather than the larger scale bands in the SiPMs. Given this difference, but still accounting for the fact that all three
MWPCs in a telescope must have hits to generate a track, the uncertainty of the relative luminosity determination is estimated
to be no more than half the value more rigorously computed for the SiPMs: $\delta_{\epsilon_\text{MWPC,rel,left}}=0.005\%$.
For the right side MWPCs (Planes 3--5), the situation is somewhat different due to the presence of the inactive wires, which amount
to hard acceptance edges for 12{$^\circ$} events. While the possibility of using two-wire hits to mitigate the effect of inactive wires
was explored, it was found that the spurious hits due to noise introduced by accepting such hits would amount to a more detrimental
effect than implementing a model of the inactive wires in the simulation to account for their effect. To estimate the possible
errors introduced by uncertainty in the simulation model, first note that the inactive wires correspond to about 0.8\% of the total
active area of the three planes. Then, if the illumination of these regions differs between species on the
level of 0.1\% and the uncertainty in the placement and inefficiency of these regions (hits may be constructed in these regions
due to hits on the surrounding wires in the plane containing the inactive wire) is assumed to be $\pm$5\% for the sake of providing
a wide estimate, the induced asymmetry remains smaller than the quoted left side relative uncertainty.
As an additional test, the simulation of the MWPC inactive wires was tested in two separate ways. In the first method (the method
ultimately used for the analysis), simulation hits were treated on the wire-plane level (as data hits are treated in the main analysis)
and thus the inactive wires provided no hit information and lowered the efficiency in the region of the hit reconstruction planes
surrounding them. In the second method, the inefficiency due to the wires was implemented via the maps shown in Figure \ref{fig:mwpceff}.
The difference in the overall accepted track rates in the simulation between these methods was found to be on the order of 0.01\%. While
the wire-plane hit reconstruction digitization method demonstrably better mimics the data (both in terms of methodology and in comparison
of the resulting hit and track distributions), the difference between the methods is taken as the uncertainty in this case. Since the
estimates for the left and right sides differ but are each small effects in the overall uncertainty, for the simplicity of assigning a single number to
the MWPC efficiency effect, the uncertainty is estimated as $\delta_{\epsilon_\text{MWPC,rel}}=0.01\%$ for both sides. Since this is considerably
smaller than other uncertainties, this choice does not significantly impact the final systematic uncertainty estimate.
The MWPC efficiencies were computed with extremely high statistics and with very precise five-plane tracking, as discussed in
Section \ref{sec:12eff}, and thus the overall uncertainty of the MWPC efficiencies is at most 0.02\%. Since all three MWPC
planes in a telescope are required to reconstruct an accepted 12{$^\circ$} event, the absolute luminosity uncertainty from this source
is conservatively estimated as $\delta_{\epsilon_\text{MWPC,abs}}=0.05\%$.
\subsubsection{Magnetic Field}
As described in Section \ref{sec:magsur} and Reference \cite{Bernauer20169}, a large effort was undertaken to properly model the
magnetic field of the OLYMPUS spectrometer for simulation and track reconstruction since any uncertainty in the field directly
corresponds to uncertainty in the acceptance of electron and positron scattering events. Uncertainty in the field in the 12{$^\circ$}
telescope region can occur due to uncertainty in the measurement of the vector components of the field, uncertainty in the
reconstructed position of the Hall probe used for the measurements, and errors/residuals in the field model used to fit and
interpolate the field for the simulation and reconstruction.
The region surrounding the 12{$^\circ$} telescopes is among the hardest to model due to the telescopes occupying the region near the
``pinch'' of the toroid (i.e., the region where the coils most-closely approach each other). As noted in the description of
the field model, regions near the coils are sensitive to the thin filament model used to approximate the toroid coils resulting
in residuals between the field model and the survey measurements. Field components in the OLYMPUS $y$ direction are the strongest experienced
by 12{$^\circ$} tracks and affect the in/out-bending of tracks. Field components in the OLYMPUS $x$ direction result in approximately azimuthal bending. Components
of the field along $z$ are nearly parallel to 12{$^\circ$} tracks and are less than 50 G throughout the region experienced by 12{$^\circ$} tracks, and
thus any uncertainties in this field are negligible. Additionally, the $x$ component of the field is also very small in this
region, especially in the area near the target cell where deflections would most strongly affect the telescope acceptance. Simulations
indicate that even $\sim$100\% shifts in the magnitude of $B_x$ have a smaller effect than realistic uncertainties in $B_y$,
and thus $B_x$ uncertainties are sub-dominant.
Having established $B_y$ as the most critical element of the field for the 12{$^\circ$} system, the magnitudes of uncertainties in this
region were examined. Figure \ref{fig:fy} shows the field model calculation for the magnitude of $B_y$ in the region, while
Figure \ref{fig:fyres} shows the residual in $B_y$ between the field model and the survey measurements. Comparing the gradients
in $B_y$ experienced by 12{$^\circ$} tracks between survey points (spaced by either 5 or 10 cm), and noting that the uncertainty of
the position of the survey points is at least an order of magnitude smaller than the spacing, any shift in the field grid relative
to the true position may be neglected compared to the residuals between the survey and the model shown, which are on the order of tens of gauss
(or a few percent of the magnitude of $B_y$). It should be noted, however, that the residuals are not uniformly distributed, and
exhibit two key elements that lessen their effect:
\begin{enumerate}
\item residuals along the first meter of the track, where deflections most strongly affect the acceptance, are extremely small compared
to the pinch region, and
\item tracks experience regions of positive and negative residuals of approximately equal magnitude as they approach the telescopes,
leading to at least some cancellation of effects on the acceptance due to these regions.
\end{enumerate}
Thus, a model for determining the uncertainty in acceptance due to field uncertainties that systematically scales
the field on the order of a percent represents a conservative estimate of the acceptance uncertainty since a full systematic
shift in all field components separates the acceptances of the two lepton species to a significantly higher degree than
the observed deviations in the field model.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/fieldy.pdf}}
\caption[Magnetic field model for $B_y$ in the 12{$^\circ$} region]{Field model interpolation calculation for $B_y$ in the region
important for the 12{$^\circ$} telescopes, on a triangulated grid of the survey measurement points. Approximate positions of the
target cell and MWPC tracking planes are marked in blue for reference.}
\label{fig:fy}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/fieldyres.pdf}}
\caption[Survey data/field model residuals for $B_y$ in the 12{$^\circ$} region]{Residual between the field model interpolation calculation for $B_y$ and
the measurements at surveyed points in the region
important for the 12{$^\circ$} telescopes, on a triangulated grid of the survey measurement points. Approximate positions of the
target cell and MWPC tracking planes are marked in blue for reference.}
\label{fig:fyres}
\end{figure}
To examine the effects of field scaling uncertainty, simulations were conducted in which a set of generated events was propagated
using several scaled fields (implemented by varying the toroid current over $\pm$5\% from its nominal value) but reconstructed
using the nominal field model. The effects on the species-relative and absolute luminosity extractions
are shown in Figures \ref{fig:magsys} and \ref{fig:magsysabs}, respectively. Linear models were found to be good descriptions for the
variation in the extracted relative luminosity and the individual species absolute luminosities as a function of toroid current, and
the resulting fits are shown in the figures. The integrated average residual seen by a 12{$^\circ$} track is considerably less than the 18 G average
RMS residual quoted in Reference \cite{Bernauer20169}, indicating that a shift of the field model of 0.5\% in the tracking/bending regions
(i.e., a shift in the toroid current of 25 A) represents a conservative estimate of the field error. Thus, using the results of the simulations
shown in Figures \ref{fig:magsys} and \ref{fig:magsysabs}, the relative and absolute systematic uncertainties due to the field may be estimated
as $\delta_{B,\text{rel}} = \pm0.15\%$ and $\delta_{B,\text{abs}} = \pm0.35\%$, respectively.
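A minimal sketch of this estimate, assuming illustrative (not actual) simulation points, is given below: the extracted quantity is fit linearly as a function of toroid current and the fit slope is evaluated over the $\pm$25 A band taken as the conservative field uncertainty.
\begin{verbatim}
# Sketch of the field-scaling systematic estimate: the quantity extracted from
# simulations run with scaled fields is fit linearly versus toroid current, and
# the slope is evaluated over the +/-25 A (0.5%) band taken as the conservative
# field uncertainty.  The sample points are toy values chosen only to have a
# slope of roughly the observed size; they are not the points of the figures.
import numpy as np

currents = np.array([4750.0, 4875.0, 5000.0, 5125.0, 5250.0])   # A (toy)
ratios   = np.array([0.9850, 0.9925, 1.0000, 1.0075, 1.0150])   # toy values

slope, intercept = np.polyfit(currents, ratios, 1)
nominal = slope * 5000.0 + intercept
delta_rel = abs(slope) * 25.0 / nominal       # fractional shift over +/-25 A
print(f"shift for a 0.5% field-scale error: {100 * delta_rel:.2f}%")
\end{verbatim}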
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/magsys.pdf}}
\caption[Effect of magnetic field uncertainty on the relative 12{$^\circ$} luminosity]{Effect of scaling the magnetic field (from the nominal setting of
5000 A for the toroid current) on the extraction of {$\sigma_{e^+p}/\sigma_{e^-p}$} in the 12{$^\circ$} system. Note that the error bars on the points are statistical, but that
the statistical uncertainty between points is highly correlated due to the fact that each point arises from the same generated events.}
\label{fig:magsys}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/magsysabs.pdf}}
\caption[Effect of magnetic field uncertainty on the absolute 12{$^\circ$} luminosity]{Effect of scaling the magnetic field (from the nominal setting of
5000 A for the toroid current) on the extraction of the absolute luminosities in the 12{$^\circ$} system. Note that the error bars on the points are statistical, but that
the statistical uncertainty between points is highly correlated due to the fact that each point arises from the same generated events.}
\label{fig:magsysabs}
\end{figure}
\subsubsection{Lepton Tracking Efficiency}
Given the way in which the leptons in the 12{$^\circ$} arm were tracked with the GEANT4E tracker and the minimal number of hits
available for each track in the final 12{$^\circ$} analysis (three total hits in a telescope from the three MWPC planes), a track
could be fit to the vast majority of 12{$^\circ$} events having a hit in each of the three planes that passed the track candidate
selection criteria (Section \ref{sec:12track}). This was due to the kinematic flexibility
of the tracker in fitting to the three points, aside from rare failure modes of either GEANT4E or the Levenberg-Marquardt minimization
routine. Whenever a lepton track was produced for an event, it was considered individually and with all possible proton track pairings,
and thus the systematics of the quality of the reconstruction are captured in the consideration of the effects of the elastic
and fiducial cuts applied to the tracks.
To estimate the possible effect on the relative luminosity extraction, the rate of successful tracking for track candidates
was examined for each species in the simulation. Note that not all hits produced in simulation are components of a desired
lepton track due to hits produced by secondaries, unusual hard scattering that causes tracks to deviate heavily from standard trajectories,
etc., and thus some rejection of events is expected. For positron runs the simulation candidate-to-track efficiency was found
to be 96.76\%, while for electrons the efficiency was slightly lower at 96.58\%. While it is extremely unlikely that this difference
is entirely due to an asymmetry in the rejection of good events, as indicated by visual inspection of such events,
the contribution to the systematic uncertainty due to this effect is conservatively estimated as the difference in these efficiencies:
$\delta_{\epsilon_{e,\text{track,rel}}} = 0.18\%$.
An exact determination of the effect on the absolute luminosity extraction is difficult since the tracking inefficiency is not
necessarily representative of the fraction of ``good events'' lost due to lepton tracking failures. To assess the fraction of
events in the sample of missed tracks that likely would have been counted as a good event if reconstructed, approximately
100 such events were inspected by eye using the visualization routine for the simulation propagation (see Appendix \ref{chap:ed}).
Making a generous assessment, perhaps a quarter of such tracks were within the acceptance of the telescope but ultimately not
reconstructed. Thus, the uncertainty of the absolute luminosity from this effect is taken to be a quarter of the overall
inefficiency: $\delta_{\epsilon_{e,\text{track,abs}}} = 0.86\%$.
\subsubsection{Proton Tracking Efficiency}
While the effect of proton tracking has been minimized to the extent possible in the analysis due to the difficulty of tracking
the high $\theta$ protons in 12{$^\circ$} events (as discussed in Section \ref{sec:12ana}), the uncertainty in the reconstruction efficiency for the exclusive events may
provide some species-relative effects due to the difference in the relative distributions of the protons in $e^+p$ and $e^-p$ scattering (i.e.,
the protons from each event type are distributed differently in the drift chamber and thus sample different cells). As discussed
in Section \ref{sec:recon}, two methods of track reconstruction were utilized for the OLYMPUS drift chambers and thus available
for reconstruction of the protons in 12{$^\circ$} events: EAA and SA. The EAA
tracker used for the reconstruction of the leptons in the main 12{$^\circ$} analysis is quite efficient in the region relevant to 12{$^\circ$}
events, but the SA tracker suffers from notable inefficiencies in the region. Due to this, the SA tracker is not useful for a final
analysis, but may be used to estimate the magnitude of the effects of drift chamber and tracking inefficiency on the 12{$^\circ$} result.
To make such an estimate, the 12{$^\circ$} analysis was re-run using protons reconstructed using the SA tracker and the resulting difference between
the SA and EAA analyses was used to estimate the effect of proton tracking efficiency. Note that at the time this analysis was performed
the SA tracker was not in its final state and improved somewhat in its performance at back angles for subsequent analyses.
While the absolute values of the measured
$e^\pm p$ cross sections drop on the order of a percent in the SA analysis due to the missing protons, the change in the relative
measurement is not as drastic and can be used as an estimate in the uncertainty caused by a large inefficiency in the proton reconstruction
on the relative measurement. The induced differences in the value of {$\sigma_{e^+p}/\sigma_{e^-p}$} at $\theta \approx 12^\circ$ due to the two different
tracking methods are shown in Table \ref{tab:12saeaa}. The size of the effect was found to be on the order of 0.1\%. Since the
SA tracking used for this analysis represents a significantly worse proton reconstruction capability than the EAA tracking used
for the main 12{$^\circ$} analysis, this was taken as a conservative estimate of the effect of proton tracking on the luminosity
ratio extraction: $\delta_{\epsilon_{p,\text{track,rel}}} = 0.2\%$. For the estimate of the effect on the absolute extraction, approximately
half of the decrease in the absolute extraction between SA and EAA reconstruction was taken as a rough estimate of this uncertainty:
$\delta_{\epsilon_{p,\text{track,abs}}} = 0.8\%$.
\begin{table}[thb!]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Event side & {$\sigma_{e^+p}/\sigma_{e^-p}$} with EAA & {$\sigma_{e^+p}/\sigma_{e^-p}$} with SA & Difference\\
\hline\hline
Lepton left & 1.000 & 1.002 & 0.2\% \\
\hline
Lepton right & 0.994 & 0.993 & 0.1\% \\
\hline
\end{tabular}
\end{center}
\caption[Difference in the determination of {$\sigma_{e^+p}/\sigma_{e^-p}$} in the 12{$^\circ$} system between EAA and SA proton tracking]{Difference in the determination of {$\sigma_{e^+p}/\sigma_{e^-p}$} in the 12{$^\circ$} system between EAA and SA proton tracking.}
\label{tab:12saeaa}
\end{table}
\subsubsection{Beam Position and Slope}
The quoted uncertainty of the OLYMPUS beam position reconstruction from the beam position monitor surveys and model fits was
\SI{100}{\micro\meter} on the absolute beam positions and \SI{20}{\micro\meter} on the species-relative shift error \cite{bernauer1}.
To assess the sensitivity of the 12{$^\circ$} system to errors in the knowledge of the beam position, simulated data sets were generated at a
variety of beam shifts that exceeded the aforementioned expected position uncertainty by an order of magnitude (i.e., shifts of several
mm). Figures \ref{fig:bxsys} and \ref{fig:bysys} show the shifts in the absolute rates for each lepton species in each 12{$^\circ$} telescope
for shifts in the beam $x$ and $y$ positions, respectively, while the other position dimension is held fixed. Taking the largest
slopes as a conservative estimate of the effects of beam position results in an uncertainty in the absolute rate due to beam position
uncertainty of approximately $0.027\%$/mm and an uncertainty in the species-relative luminosity of $0.033\%$/mm. In addition to the
data shown in the figures, several other beam positions and slopes on the order of several mm at each BPM were tested as well and observed
to exhibit no effects larger than those shown in the figures. It is also notable that the 12{$^\circ$} system is insensitive to any uncertainties
in the shape of the beam profile (i.e., something deviating from the Gaussian envelope described in Section \ref{sec:beam}) since such effects
are on much smaller length scales than the beam shifts described in this section.
Since the uncertainty in the beam position reconstruction (both absolute and relative) is approximately an order of magnitude less than
\SI{1}{\mm}, the systematic uncertainties of the relative and absolute luminosity measurements in the 12{$^\circ$} system
due to the beam position and slope (e.g., $0.033\%/\text{mm} \times 0.1\:\text{mm} \approx 0.003\%$) are very conservatively estimated to be $\delta_\text{BPM,rel} = \pm0.01\%$ and $\delta_\text{BPM,abs} = \pm0.01\%$.
Thus, the 12{$^\circ$} monitors were extremely robust to changes in beam position and slope providing a good complement to the SYMB system, which exhibits
a much more notable dependence on beam shifts.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/beamx.pdf}}
\caption[Effect of beam $x$ shifts on the 12{$^\circ$} luminosity]{Effects of varying the simulated beam $x$ position on the rates for each species
in each 12{$^\circ$} telescope for fixed $y_\text{beam} =0$. Note that the error bars are slightly overestimated due to correlation between the
Monte Carlo data sets. For each combination, the shifts in rates are at most on the order of 0.1\%/mm in both the absolute and species-relative
measurements.}
\label{fig:bxsys}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/beamy.pdf}}
\caption[Effect of beam $y$ shifts on the 12{$^\circ$} luminosity]{Effects of varying the simulated beam $y$ position on the rates for each species
in each 12{$^\circ$} telescope for fixed $x_\text{beam} =0$. Note that the error bars are slightly overestimated due to correlation between the
Monte Carlo data sets. For each combination, the shifts in rates are at most on the order of 0.1\%/mm in both the absolute and species-relative
measurements.}
\label{fig:bysys}
\end{figure}
\subsubsection{Beam Energy}
The $1\sigma$ uncertainty of the DORIS beam energy was determined by the DESY accelerator group to be $\pm 0.1$ MeV for each
species ($E_\text{beam}\approx 2010$ MeV) \cite{brinker1}. While the two species beams had slightly different measured energies ($\Delta E\approx0.5$ MeV), this was accounted
for in the generation of simulated events using the measured values of the DORIS dipole magnet current saved in the slow control database throughout
data-taking. Thus, it is the uncertainty on the knowledge of the relative absolute energies (rather than the absolute
energy) difference that affects the uncertainty of the final results. The 0.1 MeV uncertainty on the individual beam energies was estimated by
testing the DORIS beam with various perturbations and was continuously stabilized with a system of correction magnets. Due to the importance
of precision for the OLYMPUS results, this control of the beam energy was significantly better than the $\sim$5 MeV ($\sim$0.1\%) beam energy
uncertainty that was present when DORIS was operated as in $e^+e^-$ collider mode for the ARGUS experiment \cite{Albrecht:1996gr,DORIStab}. To estimate the magnitude of the
effect of this uncertainty on the extracted luminosity in the 12{$^\circ$} system, the Rosenbluth cross section (Equation \ref{eq:Ros}) was computed as a
function of $\theta$ for the nominal beam energy and beam energies $\pm 1\sigma$ from it. Note that shifts in radiative corrections due to the
beam energy are considerably smaller effects, and thus the variation in the Rosenbluth cross section is a sufficient approximation for this
estimate. Figure \ref{fig:ebeamsys} shows the ratio of this
cross section at $+1\sigma$ beam energy to that at $-1\sigma$, i.e., the effective shift that would occur in the ratio if one lepton species
had a beam energy lower than expected from the energy measurement by 0.1 MeV and the other higher by 0.1 MeV. The maximum deviation of this ratio in the 12{$^\circ$} acceptance
is approximately 0.04\%, and the uncertainty of {$\sigma_{e^+p}/\sigma_{e^-p}$} at 12{$^\circ$} may thus be reasonably estimated as $\delta_{E_\text{beam,rel}} = 0.02\%$, with
a similar effect on the absolute cross section: $\delta_{E_\text{beam,abs}} = 0.02\%$.
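A rough cross-check of this estimate can be made with a few lines of Python; here the dipole form factor is used as a simple stand-in for the Bernauer parametrization employed in the actual calculation, which is sufficient to reproduce the order of magnitude of the effect.
\begin{verbatim}
# Rough cross-check of the beam-energy sensitivity: evaluate the Rosenbluth
# cross section near theta = 12 degrees at E_beam +/- 0.1 MeV and inspect the
# ratio.  The dipole form factor below is a simple stand-in for the Bernauer
# parametrization used in the actual estimate.
import numpy as np

M_P = 0.938272        # proton mass [GeV]
ALPHA = 1.0 / 137.036
MU_P = 2.79285

def rosenbluth(E, theta):
    """Elastic e-p cross section (arbitrary overall units), beam energy E in
    GeV and lepton scattering angle theta in radians."""
    s2 = np.sin(theta / 2.0) ** 2
    t2 = np.tan(theta / 2.0) ** 2
    Eprime = E / (1.0 + 2.0 * E * s2 / M_P)
    Q2 = 4.0 * E * Eprime * s2
    tau = Q2 / (4.0 * M_P ** 2)
    GE = (1.0 + Q2 / 0.71) ** -2            # dipole stand-in
    GM = MU_P * GE
    mott = ALPHA ** 2 * (1.0 - s2) / (4.0 * E ** 2 * s2 ** 2) * (Eprime / E)
    return mott * ((GE ** 2 + tau * GM ** 2) / (1.0 + tau)
                   + 2.0 * tau * GM ** 2 * t2)

theta = np.radians(np.linspace(10.5, 13.5, 31))
ratio = rosenbluth(2.0101, theta) / rosenbluth(2.0099, theta)
print(np.abs(ratio - 1.0).max())   # fractional shift, of order a few times 1e-4
\end{verbatim}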
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/espread12deg.pdf}}
\caption[Effect of $E_\text{beam}$ uncertainty on the Rosenbluth cross section near $12^\circ$ ]{Ratio of the Rosenbluth $e^\pm p$ cross section (using the Bernauer
form factor parametrizations \cite{BerFFPhysRevC.90.015206}) in the vicinity of the 12{$^\circ$} acceptance for a beam of $E_\text{beam} = 2010+0.1$ MeV to that at $E_\text{beam} = 2010-0.1$ MeV, i.e. $\pm 1\sigma$
around the nominal beam energy.}
\label{fig:ebeamsys}
\end{figure}
\subsubsection{Detector Position}
Due to the large number of degrees of freedom available for uncertainties in the positions of detectors relevant to 12{$^\circ$}
event reconstruction (positions and rotations of each of the MWPC and SiPM planes, plus additional degrees of freedom
from the ToFs and drift chambers), it was neither practical nor instructive to approach this uncertainty via a simulation-based method.
Notably, the acceptance of the 12{$^\circ$} system is almost entirely dependent on the position of the outermost plane used in each telescope
(i.e., MWPC planes 2 and 5) since they cover the smallest solid angle ranges relative to the other detector elements (both in the
12{$^\circ$} and the proton tracking detectors). Thus, the dominant uncertainties from acceptance issues related to poorly reconstructed
detector positions arise from the outermost planes. While there could be additional uncertainties from poor track reconstruction due
to uncertain knowledge of the offsets between planes, these were minimized by aligning the detectors relative to one another using both events at
nominal magnetic field and straight tracks from zero-field runs, tracking in the other planes while excluding the one being studied.
For this reason, uncertainties in the placement of the outermost planes may be considered the dominant source of detector
position systematic error. Since for even only one plane there are six degrees of freedom in the placement of the detector, a simulation-based
approach still would require a relatively impractical computing investment and thus a method involving considerations of the
effects of geometry and the $e^\pm p$ cross section is used instead.
First, considering the rotational degrees of freedom of the detectors, basic geometrical arguments may be used to estimate the size of the effect. Leptons
striking the 12{$^\circ$} detector planes do so approximately perpendicularly to the plane, and so any rotation of the plane relative to this standard
incidence angle would reduce the effective acceptance of the plane. While, of course, not all 12{$^\circ$} tracks strike the planes truly
perpendicularly, this approximate case is sufficient for estimating the size of the effect. For a small rotation by angle $\psi$ about an axis
in the plane of the MWPC detector, the acceptance area presented to an incoming track varies as $\cos\psi$. Given that uncertainties of the
rotations are on the order of 0.2$^\circ$ for the MWPC survey \cite{bernauer2}, this would correspond to a change of less than
one part in $10^5$ in the acceptance. Thus, the uncertainty due to errors in the rotational placements is nearly certain
to be minimal for both the relative and absolute luminosities.
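For concreteness, the fractional acceptance reduction corresponding to a rotation at the level of the quoted survey uncertainty is
\[
1-\cos\psi\,\Big|_{\psi\,=\,0.2^\circ} \;=\; 1-\cos\!\left(3.5\times10^{-3}\ \mathrm{rad}\right) \;\approx\; 6\times10^{-6},
\]
well below one part in $10^5$.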
To consider the positional degrees of freedom of the detectors, geometric considerations in conjunction with a
Rosenbluth cross section calculation (similar to that conducted
for the beam energy uncertainty estimate) were used. The residuals of the MWPC position reconstruction from the survey were approximately
\SI{200}{\micro\meter} \cite{bernauer2}, and thus shifts of that order were considered for each of the three available degrees of
freedom. The outermost MWPCs were nominally located \SI{2.68}{\meter} from the center of the target along the $\theta=12^\circ$
lines in the OLYMPUS $x$-$z$ plane, and the survey positions differed from these values on the order of several mm. At this
position, the MWPC planes' extent (110 mm$\times$110 mm) covers a geometrical solid angle (not necessarily corresponding to a track
phase space solid angle) range of $\phi = \pm5.6^\circ$ and $\theta = 12\pm1.2^\circ$. Shifting the detector towards or away from
the target along the 12{$^\circ$} line changes these ranges on the order of 0.04\%/mm, as can be seen in Figure \ref{fig:zs}, and so is on the order
of hundredths of a percent for shifts on the order of \SI{200}{\micro\meter}. Similarly, the results of shifting the detector up or
down are shown in Figure \ref{fig:ys}. In this case, the results are even smaller, amounting to changes of order $10^{-7}$ for shifts
of hundreds of micrometers. Since shifts in neither of these directions significantly change the cross section of $e^\pm p$ events sampled
(the first does not change the mean or central value of $\theta$, while the second is predominantly a shift in $\phi$ under which
the cross section is invariant), these geometrical factors are a good estimator of the cross section change due to such shifts.
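The quoted scaling is consistent with simple geometry: with plane half-width $a = 55$ mm at a distance $r \approx 2680$ mm along the 12{$^\circ$} line, the angular half-ranges are $\theta_{1/2} \approx \arctan(a/r) \approx 1.2^\circ$ and $\phi_{1/2} \approx \arctan\!\left(a/(r\sin 12^\circ)\right) \approx 5.6^\circ$, both of which scale approximately as $1/r$, so that
\[
\frac{1}{\theta_{1/2}}\left|\frac{\mathrm{d}\theta_{1/2}}{\mathrm{d}r}\right| \;\approx\; \frac{1}{r} \;\approx\; \frac{1}{2680\ \mathrm{mm}} \;\approx\; 0.04\%/\mathrm{mm},
\]
in agreement with Figure \ref{fig:zs}.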
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/awayshift.pdf}}
\caption[Effect on angular acceptance of the 12{$^\circ$} system for shifts toward/away from the target center]{Effect on the solid angle encompassed
by the outermost MWPC planes when shifted toward or away from the target cell center along the 12{$^\circ$} line. The effect in both
$\theta$ and $\phi$ is approximately 0.04\%/mm.}
\label{fig:zs}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/udshift.pdf}}
\caption[Effect on angular acceptance of the 12{$^\circ$} system for shifts up/down]{Effect on the solid angle encompassed
by the outermost MWPC planes when shifted up or down. The effect in both
$\theta$ and $\phi$ is $\sim$1$\cdot 10^{-7}$ for shifts on the order of \SI{200}{\micro\meter}.}
\label{fig:ys}
\end{figure}
For the final direction of position shift for an MWPC plane, perpendicular to the $\theta = 12^\circ$ line in the $y=0$ plane (i.e., the local $x$ of the
detectors), more care must be taken since this involves a shift in the $\theta$ acceptance of the detector and thus a change in the sampled cross
section, which rapidly changes as a function of $\theta$. Additionally, different $\theta$ ranges are sampled for each species, further
adding possible sources of uncertainty. For reference, the normalized $\theta$ distributions accepted for each species are shown in Figure
\ref{fig:tddist}.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/theta12deg.pdf}}
\caption[Normalized lepton $\theta$ distributions of accepted $e^+p$ and $e^-p$ events in the 12{$^\circ$} luminosity system]{Normalized distributions of
the lepton $\theta$ for electron- and positron-proton elastic scattering events accepted by the
12{$^\circ$} system.}
\label{fig:tddist}
\end{figure}
A shift in this direction corresponds to a shift in $\theta$ of approximately $0.02^\circ$/mm for small displacements, with
a negligible change in the total $\theta$ coverage (and, of course, no change in $\phi$). Since the distributions
shown in Figure \ref{fig:tddist} are of approximately the same width, the cross section at the mode angle of each species' acceptance
is taken as an approximate stand-in for the cross section over the acceptance, and the resultant shifts in the $e^+p$ and $e^-p$
cross sections for $\theta$ shifts on the order of several hundredths of a degree were computed using the Rosenbluth
formula. The expected effects of such shifts on the simulated cross section, and thus
on the luminosity, are shown for the individual lepton species and for the ratio of the species in Figure \ref{fig:ts}. Since a
\SI{200}{\micro\meter} shift corresponds to a shift of approximately $0.004^\circ$, the resultant effect of this uncertainty
on the species-relative luminosity extraction is 0.014\%.
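The scale of this effect follows from the small-angle relation between a transverse displacement $\Delta x$ of the plane and the corresponding shift in the sampled scattering angle at the detector distance $r \approx 2680$ mm:
\[
\Delta\theta \;\approx\; \frac{\Delta x}{r} \;=\; \frac{1\ \mathrm{mm}}{2680\ \mathrm{mm}} \;\approx\; 0.02^\circ\ \mathrm{per\ mm},
\qquad
\Delta\theta\!\left(\SI{200}{\micro\meter}\right) \approx 0.004^\circ.
\]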
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/thetashift.pdf}}
\caption[Effect of shifts in the $\theta$ acceptance of the 12{$^\circ$} telescopes]{Effect on the simulated cross section
in the 12{$^\circ$} acceptance, and thus on the luminosity extraction, due to shifts in the $\theta$ acceptance of the detectors
away from nominal for each species individually and the ratio. Note that expected shifts from survey uncertainties are
considerably smaller than the range of shifts presented.}
\label{fig:ts}
\end{figure}
Combining the various effects discussed in this section, and noting that for both the relative and absolute luminosities shifts
in effective $\theta$ are the dominant contributors, the estimates for the systematic uncertainties due to detector position
are $\delta_\text{det,rel} = 0.02\%$ and $\delta_\text{det,abs} = 0.20\%$.
\subsubsection{Fiducial Cuts}
Since the 12{$^\circ$} telescopes had limited acceptance and poor reconstruction resolution due to the use of only the MWPC planes
in the final analysis, fiducial cuts were only made on the reconstructed $y$ and $z$ positions of the lepton vertex in
the OLYMPUS global coordinate system as described in Section \ref{sec:12ana}. Note that any attempt to make a fiducial cut in $\phi$ is extremely dangerous:
deviations of the OLYMPUS magnetic field from a perfect toroidal field, as experienced by 12{$^\circ$} tracks, focus electrons
and defocus positrons in $\phi$, and thus any fiducial cut within the reconstructed distributions explicitly introduces
a false shift between the species. The possible systematic effects of this focusing, however, are accounted for in the
previous considerations of the effect of the magnetic field and shifts in the detector position/acceptance.
For both the $y$ and $z$ fiducial cuts, the cuts were placed at the widest reasonable values, given the system's limited
position reconstruction resolution, guided by the simulation distributions, which were generally free of background
and mis-reconstructed tracks. To test the effects of these cuts, each was then tightened over a range of
reasonable values. A sample histogram of reconstructed $y$ vertex positions (for left-going electrons)
with a Gaussian fit is presented in Figure \ref{fig:sampy}, which is representative of these distributions in the
different configurations. Note that the nominal fiducial cut was symmetric and placed well into the tails
of the distributions at $\pm12$ mm to avoid cutting good events in the tails. The effect of varying the $y$ fiducial cut is shown in Figure \ref{fig:yfid}.
Taking the smallest reasonable cut as $\pm8$ mm, since at cuts smaller than that the good Gaussian region of the distribution
is cut into, the maximal effect of reasonable cuts on the absolute luminosity is 0.2\% while the relative luminosity is
changed by at most 0.07\%.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.2\textwidth]{figures/sampy.pdf}}
\caption[Distribution of $y$ vertex positions for left-going 12{$^\circ$} electrons]{Distribution of
reconstructed $y$ vertex positions for left-going 12{$^\circ$} electrons with a Gaussian fit applied
for reference.}
\label{fig:sampy}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/yfid.pdf}}
\caption[Effect of vertex $y$ fiducial cut on the extracted 12{$^\circ$} luminosity]{Ratio of the absolute and species-relative
luminosities extracted in the 12{$^\circ$} for varying $y$ fiducial cuts to the value at the nominal cut of $\pm12$ mm. Cuts smaller
than 8 mm begin to cut into the Gaussian region shown in Figure \ref{fig:sampy} and are thus not considered reasonable cuts.}
\label{fig:yfid}
\end{figure}
The question of the $z$ fiducial cut is somewhat more complicated than the $y$ cut due to the fact that the target
distribution has an irregular shape (described in Section \ref{sec:tarsim}) and the poor 12{$^\circ$} resolution
caused by the limitation to three-plane tracks in combination with the small-angle lever arm for this reconstruction.
Additionally, the acceptance of the 12{$^\circ$} events is constrained by the target chamber on the downstream side, which, in combination with the magnetic
field, creates a difference in the reconstructed $z$ distributions for the two species. The reconstructed distributions
for each species are shown in Figure \ref{fig:12z}. The nominal fiducial cut was placed as $-380$ mm $<z<250$ mm. Since
the 12{$^\circ$} system has an unrestricted view of the upstream end of the target, the cut on that end has considerably less effect
than the downstream cut, as can be seen in the distributions. To estimate the effect of the downstream cut, the distributions were
examined and an upper bound of $z=225$ mm was determined to be the tightest reasonable cut.
The results are summarized in Table \ref{tab:12zcut}. The effect on both the absolute and relative luminosities was found to be
on the order of 0.1\%.
\begin{table}[thb!]
\begin{center}
\begin{tabular}{|l|c|}
\hline
Luminosity measurement & Nominal/Tight Cut Ratio \\
\hline\hline
$e^+$ Absolute & 1.0003 \\
\hline
$e^-$ Absolute & 0.9991 \\
\hline
$e^+/e^-$ Relative & 1.0012 \\
\hline
\end{tabular}
\end{center}
\caption[Effect of varying the downstream $z$ fiducial cut for 12{$^\circ$} events]{Change to the absolute and relative 12{$^\circ$} luminosity extractions
that occurs when making a downstream $z$ fiducial cut at 225 mm relative to the nominal cut at 250 mm.}
\label{tab:12zcut}
\end{table}
Taking the two fiducial cut effects together by combining them in quadrature, the contributions to the absolute and relative
uncertainties are estimated as $\delta_\text{fid,abs} = 0.22\%$ and $\delta_\text{fid,rel} = 0.12\%$.
\subsubsection{Elastic Cuts}
Due to the relatively poor resolution of the 12{$^\circ$} telescope for most kinematic variables, the philosophy for the
final analysis regarding cuts on such variables was to keep such cuts wide so as to avoid cutting into regions
of good data. The fact that background contributions were very small in exclusively reconstructed 12{$^\circ$} events permits
such a philosophy, but it is important to assess the possible effects of such cuts by examining the effects of tightening
cut boundaries. As described in Section \ref{sec:12ana}, the four kinematic elastic cuts made in the 12{$^\circ$} analysis were:
\begin{enumerate}
\item coplanarity ($\Delta\phi$),
\item vertex $z$ correlation ($\Delta z$),
\item beam energy reconstructed from the two track $\theta$ values assuming elastic kinematics ($E_{\text{beam},\theta}$), and
\item the single-arm missing energy of the lepton relative to the expected elastic energy from the reconstructed $\theta$, divided by
the square of that energy ($\Delta E'_\theta/E'^2$); the standard kinematic relations underlying these last two variables are sketched below.
\end{enumerate}
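For reference, the last two variables rely on standard elastic two-body kinematics (the precise definitions used in the analysis are those of Section \ref{sec:12ana}); in the limit of negligible lepton mass, the standard relations are
\[
E_{\text{beam},\theta} \;=\; M_p\left[\cot\!\left(\frac{\theta_\ell}{2}\right)\cot\theta_p - 1\right],
\qquad
E'_\theta \;=\; \frac{E_\text{beam}}{1 + \left(2E_\text{beam}/M_p\right)\sin^2\!\left(\theta_\ell/2\right)},
\]
where $\theta_\ell$ and $\theta_p$ are the lepton and proton polar angles and $M_p$ is the proton mass.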
Following a similar procedure as was used for the assessment of the fiducial cuts, the boundaries for each of these
cuts were tightened over a reasonable range and the effect on the resultant luminosity extraction was computed. Since each of these
cuts was quite broad, tightening the cuts has a larger systematic effect on the luminosity extraction than broadening the cuts
(since relatively few events are outside the cuts), and thus the following studies primarily concern examination
of the effects of tighter cuts.
The nominal coplanarity ($\Delta\phi$) cut was placed conservatively at $\pm4.5^\circ$ around $180^\circ$ due to the lack
of resolution in the lepton $\phi$ reconstruction and general uncertainty in the quality of proton reconstruction.
This cut was tightened to $\pm3.0^\circ$ in four steps to examine the uncertainty associated with this cut, where
$\pm3.25^\circ$ was judged to be the narrowest reasonable cut. The results of this study are shown in Figure \ref{fig:phicut}.
Tightening the cut lowers the absolute
luminosity estimate (as the data resolution in $\phi$ is slightly broader than in simulation), but leaves the relative
luminosity stable. The maximum effect on the luminosity from this cut is determined to be 0.01\% relative and 0.15\% absolute.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/phicutsys.pdf}}
\caption[Effect of coplanarity cut on the extracted 12{$^\circ$} luminosity]{Ratio of the absolute and species-relative
luminosities extracted in the 12{$^\circ$} system for varying $e^\pm p$ track coplanarity ($\Delta\phi$) cuts to the value at the nominal cut of $\pm4.5^\circ$.}
\label{fig:phicut}
\end{figure}
Varying the cut on the vertex $z$ correlation between the lepton and proton track from the nominal value of $\pm 200$ mm produced
effects similar to those observed for the coplanarity cut. Figure \ref{fig:zcut} shows the results of the $z$ cut study, in which the
cut was varied towards the minimum reasonable value of $\pm130$ mm. From this information, the maximum effect on the luminosity
extraction is estimated as 0.05\% relative and 0.8\% absolute.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/zcutsys.pdf}}
\caption[Effect of vertex $z$ correlation cut on the extracted 12{$^\circ$} luminosity]{Ratio of the absolute and species-relative
luminosities extracted in the 12{$^\circ$} for varying vertex $z$ correlation cuts ($\Delta z$) to the value at the nominal cut of $\pm200$ mm.}
\label{fig:zcut}
\end{figure}
The final two cuts ($E_{\text{beam},\theta}$ and $\Delta E'_\theta/E'^2$) are heavily correlated as they both involve the expected
elastic energy of the lepton from the lepton $\theta$. First they will be considered separately in a fashion similar to the previous
two cut studies, and then their correlation will be discussed to determine the overall contribution from these cuts to the systematic
uncertainty. For the $E_{\text{beam},\theta}$ cut, the range of the cut was varied from the nominal value of $1950\pm650$ MeV down
to $1950\pm300$ MeV, while the nominal cut of $\Delta E'_\theta/E'^2 < 5\cdot 10^{-4}$ MeV$^{-1}$ was varied down to $\Delta E'_\theta/E'^2 < 2\cdot10^{-4}$ MeV$^{-1}$.
Since these cuts are very sensitive to multiple aspects of the detector resolution, the studied cut range was extended well beyond
values considered reasonable. Figures \ref{fig:ebangsys} and \ref{fig:deesys} show the results of the studies for the
$E_{\text{beam},\theta}$ and $\Delta E'_\theta/E'^2$ cuts, respectively.
For the $E_{\text{beam},\theta}$ cut (Figure \ref{fig:ebangsys}), the effect on the relative luminosity is quite flat even for the tightest cuts, and thus
the contribution of error to the relative luminosity from this effect alone may be estimated to be 0.18\% (the maximum
deviation from unity in the study). For the absolute luminosity, the deviation from the nominal value
quickens for cuts below a half-width of 450 MeV, indicating that this region is not a valid cut region due to its sensitivity
to distribution rapidly changing in this area. The deviation from unity at $\pm450$ MeV is thus taken as the estimate
of the absolute uncertainty due to this effect: 1.40\%.
Varying the $\Delta E'_\theta/E'^2$ cut (Figure \ref{fig:deesys}) causes a smaller absolute effect than the $E_{\text{beam},\theta}$ cut, but
notably shows different behaviors between species. Due to the poor momentum resolution for reconstructing the leptons, a cut
at $\Delta E'_\theta/E'^2 < 3.5\cdot10^{-4}$ MeV$^{-1}$ was considered the tightest reasonable cut, resulting in an uncertainty
of 0.20\% for the relative luminosity and 0.24\% for the absolute value for this effect alone.
Comparing the values from these two effects, they individually indicate a similar magnitude of uncertainty for the relative
luminosity while differing in the absolute luminosity contribution. The larger effect of $E_{\text{beam},\theta}$ on the absolute
luminosity is likely due to uncertainty in the proton tracking in data, which broadens this distribution relative to simulation in
which the recoil protons are more accurately reconstructed. While it is known that these effects are correlated, it is difficult to
fully decouple them due to the complexity of the variables. Thus, as a very conservative estimate, they are considered independent
and are added in quadrature for the final estimate for elastic cut uncertainty contribution to the luminosity.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/ebangsys.pdf}}
\caption[Effect of reconstructed beam energy from angles cut on the extracted 12{$^\circ$} luminosity]{Ratio of the absolute and species-relative
luminosities extracted in the 12{$^\circ$} system for varying cuts on the beam energy reconstructed from track angles assuming elastic kinematics ($E_{\text{beam},\theta}$)
to the value at the nominal cut of $1950\pm650$ MeV.}
\label{fig:ebangsys}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/deesys.pdf}}
\caption[Effect of lepton missing energy cut on the extracted 12{$^\circ$} luminosity]{Ratio of the absolute and species-relative
luminosities extracted in the 12{$^\circ$} system for varying cuts on missing energy of the lepton relative to the expected
elastic energy from the lepton $\theta$ over the square of the energy ($\Delta E'_\theta/E'^2$)
to the value at the nominal cut of $<$5$\cdot 10^{-4}$ MeV$^{-1}$.}
\label{fig:deesys}
\end{figure}
Combining the four cut variation effects in quadrature, the total uncertainties due to the elastic cuts
amount to $\delta_\text{elas,rel} = 0.27\%$ and $\delta_\text{elas,abs} = 1.63\%$ for the relative and absolute
extractions, respectively.
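Explicitly, combining the four contributions quoted above (coplanarity, vertex $z$ correlation, $E_{\text{beam},\theta}$, and $\Delta E'_\theta/E'^2$) gives
\[
\delta_\text{elas,rel} = \sqrt{0.01^2+0.05^2+0.18^2+0.20^2}\,\% \approx 0.27\%,
\qquad
\delta_\text{elas,abs} = \sqrt{0.15^2+0.8^2+1.40^2+0.24^2}\,\% \approx 1.6\%.
\]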
The fact that these uncertainties are large relative to other sources demonstrates
that the GEMs, had their use been feasible, would have been a valuable contribution to the 12{$^\circ$} measurement due to the
large improvement they provide in tracking resolution. It is notable that the MWPCs, originally intended to provide
alignment/efficiency measurement capability for the GEMs and extra low-resolution track points, nevertheless performed
well enough to provide a luminosity measurement of sufficient precision to satisfy the OLYMPUS goals.
\subsubsection{Radiative Corrections}
To assess the effect of radiative corrections on the extracted relative and absolute luminosities, the experiment's
simulation included a number of radiative corrections schemes, implemented via multiple stored event weights as
described in Section \ref{sec:radgen}. Figure \ref{fig:rcsys} shows the effect on the extracted species
relative luminosity for the available radiative corrections schemes, while Figure \ref{fig:rcsysabs} shows
the effect on the absolute luminosity extractions for each species. The simulated data shown in the figures
includes most of the Run II dataset (several thousand runs) and therefore the statistical uncertainty, which is
also highly correlated between the points, is extremely small relative to the variation in the points due to the weights.
Several models included in the radiative corrections (in particular the Born approximation and soft photon approximation (SPA))
are included only for comparison with historical experimental data and are not considered realistic models. Additionally,
the inclusion of full vacuum polarization effects derived from $e^+e^-\rightarrow\text{hadrons}$ cross sections is
preferred to the inclusion of only lepton vacuum polarization effects \cite{Actis2010,vacpolweb,vacpolpres}. Further details on the different radiative
corrections schemes included in the figures may be found in Section \ref{sec:radgen}.
After discounting the aforementioned unrealistic models, the spread in the remaining results was
used to estimate the systematic uncertainty introduced by radiative corrections. The maximally different results
occurred between the Mo \& Tsai (Reference \cite{MoRevModPhys.41.205}) correction under the photon $\Delta E$ method and the methods with full vacuum polarization
included for both the absolute and relative measurements. The radiative correction uncertainty was thus conservatively
taken to be the maximal spread: $\delta_\text{rad,rel} = \pm0.08\%$ and $\delta_\text{rad,abs} = \pm0.45\%$.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/rcsys.pdf}}
\caption[Effect of radiative corrections uncertainty on the 12{$^\circ$} luminosity ratio]{Effects of applying different
radiative corrections schemes to the simulation on the resulting extracted relative 12{$^\circ$} luminosity. The maximal deviation
between realistic models was $\delta_\text{rad,rel} = \pm0.08\%$. Details on the three main
correction schemes included above (Maximon \& Tjon, Meister \& Yennie, and Mo \& Tsai) may be found in
\cite{MaximonPhysRevC.62.054320}, \cite{MeisterPhysRev.130.1210},
and \cite{MoRevModPhys.41.205}, respectively, and a more detailed discussion of these methods may be found in
Section \ref{sec:radgen}.}
\label{fig:rcsys}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/rcsysabs.pdf}}
\caption[Effect of radiative corrections uncertainty on the absolute 12{$^\circ$} luminosity]{Effects of applying different
radiative corrections schemes to the simulation on the resulting extracted absolute 12{$^\circ$} luminosity for each species. The maximal deviation
between realistic models was $\delta_\text{rad,abs} = \pm0.45\%$. Details on the three main
correction schemes included above (Maximon \& Tjon, Meister \& Yennie, and Mo \& Tsai) may be found in References
\cite{MaximonPhysRevC.62.054320}, \cite{MeisterPhysRev.130.1210},
and \cite{MoRevModPhys.41.205}, respectively, and a more detailed discussion of these methods may be found in
Section \ref{sec:radgen}.}
\label{fig:rcsysabs}
\end{figure}
\subsubsection{Elastic Form Factors}
Due to the fact that the $Q^2$ range accepted by the 12{$^\circ$} telescopes for each of the lepton species is slightly different
in the single toroid polarity, there is an uncertainty introduced by the uncertainty in the magnitude of the proton
elastic form factors and by their variation at small $Q^2$. Figure \ref{fig:12degq2} shows the distribution in $Q^2$ of
events of each lepton species in the 12{$^\circ$} telescopes, which notably exhibits a shift of approximately 0.3 GeV$^2/c^2$ between
the two event types. Thus, the form factor (a function of $Q^2$) will not cancel completely in the ratio of the integrated acceptances
for each species. Additionally, any uncertainty in the magnitude of the form factors directly contributes systematic uncertainty
to the absolute luminosity extraction.
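For orientation on the scale of these distributions, evaluating the elastic kinematics at the nominal central angle (a simple single-point check, not an acceptance average) gives, at $\theta = 12^\circ$ and $E_\text{beam} = 2.01$ GeV,
\[
E' = \frac{E_\text{beam}}{1+\left(2E_\text{beam}/M_p\right)\sin^2(\theta/2)} \approx 1.92\ \mathrm{GeV},
\qquad
Q^2 = 4E_\text{beam}E'\sin^2(\theta/2) \approx 0.17\ \mathrm{GeV}^2/c^2;
\]
the cross-section-weighted mean over the acceptance is slightly lower, consistent with the $Q^2 \approx 0.165$ GeV$^2/c^2$ quoted in Section \ref{ss:tpe12sys}.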
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/Q2_12deg.pdf}}
\caption[Normalized $Q^2$ distributions of accepted $e^+p$ and $e^-p$ events in the 12{$^\circ$} luminosity system]{Normalized distributions of
the $Q^2$ computed from the reconstructed lepton $\theta$ for electron- and positron-proton elastic scattering events accepted by the
12{$^\circ$} system. Note the systematic $\sim$0.3 GeV$^2/c^2$ shift between the acceptance ranges of the two species and the
longer tail at higher $Q^2$ values for electrons caused by the in-bending of the negatively-charged electrons from the upstream end of the target cell.}
\label{fig:12degq2}
\end{figure}
To estimate the uncertainty in the 12{$^\circ$} luminosity estimate due to uncertainties in the proton form factors, a similar
method was used as in the previous section to estimate the effects of radiative corrections uncertainties. The OLYMPUS
simulation includes additional event weights for several form factor models, including the Bernauer et al.\ fits
(Reference \cite{BerFFPhysRevC.90.015206}) and the Kelly model (Reference \cite{KellyPhysRevC.70.068202}), as well as dipole and
point-like proton models for reference. Most notably, the Bernauer model exhibits a structure in $G_M$ in the vicinity
of $Q^2 \approx 0.2$ GeV$^2/c^2$ (Figure 20.b.\ in Reference \cite{BerFFPhysRevC.90.015206}) that is not present in the majority of form factor fits, which creates disagreement among the
predictions of different form factor models within the 12{$^\circ$} system acceptance. The spread in the resulting luminosities between the Bernauer model and the Kelly
model, which does not exhibit the aforementioned structure in $G_M$, was taken as an estimate of the systematic uncertainty due to the form factors, discounting the spread due to the
unrealistic point-like proton and dipole form factors. Figures \ref{fig:ffsys} and \ref{fig:ffsysabs} show the results of this study
for the species-relative and absolute luminosity extractions, respectively, in which it was found that the uncertainties due
to the form factors could be estimated as $\delta_\text{ff,rel} = \pm0.14\%$ and $\delta_\text{ff,abs} = \pm1.2\%$.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/ffsys.pdf}}
\caption[Effect of form factor uncertainty on the relative 12{$^\circ$} luminosity]{Effects of applying different
form factor models to the simulation on the resulting extraction of the species-relative 12{$^\circ$} luminosity
ratio, including the unphysical dipole and point proton models. The systematic uncertainty was estimated as the spread in the Bernauer and Kelly parametrizations
(References \cite{BerFFPhysRevC.90.015206} and \cite{KellyPhysRevC.70.068202}, respectively): $\delta_\text{ff,rel} = \pm0.14\%$.
Note that radiative corrections were applied to these results using the default method (exponentiation,
Maximon \& Tjon \cite{MaximonPhysRevC.62.054320}) in each case, but that the effects of the changing form factor were accounted for in calculating
the radiative corrections.}
\label{fig:ffsys}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.0\textwidth]{figures/ffsysabs.pdf}}
\caption[Effect of form factor uncertainty on the absolute 12{$^\circ$} luminosity]{Effects of applying different
form factor models to the simulation on the resulting extraction of the absolute 12{$^\circ$} luminosity
for each species, including the unphysical dipole and point proton models. The systematic uncertainty was estimated as the spread in the Bernauer and Kelly parametrizations
(References \cite{BerFFPhysRevC.90.015206} and \cite{KellyPhysRevC.70.068202}, respectively): $\delta_\text{ff,abs} = \pm1.2\%$.
Note that radiative corrections were applied to these results using the default method (exponentiation,
Maximon \& Tjon \cite{MaximonPhysRevC.62.054320}) in each case, but that the effects of the changing form factor were accounted for in calculating
the radiative corrections.}
\label{fig:ffsysabs}
\end{figure}
\subsubsection{TPE at 12{$^\circ$} }
\label{ss:tpe12sys}
As noted in Section \ref{sec:12posstpe}, the underlying physics process measured by the 12{$^\circ$} system is the same process ($e^\pm p$ elastic scattering)
that is under examination in the main detector for TPE contributions and thus it is not excluded that a difference in the $e^+p$ and
$e^-p$ cross sections at $\theta \approx 12^\circ$ due to TPE could exist and systematically shift any attempt to determine the
relative luminosity from this measurement. To estimate the possible size of this shift, several methods may be considered:
\begin{enumerate}
\item comparison with the other OLYMPUS luminosity measurements (amounting to a measurement of {$\sigma_{e^+p}/\sigma_{e^-p}$} at $Q^2 \approx 0.165$ GeV$^2/c^2$, $\epsilon \approx 0.98$
as is discussed in Section \ref{sec:12TPE}),
\item comparison with existing data, and
\item examination of the spread of theoretical predictions for the value of {$\sigma_{e^+p}/\sigma_{e^-p}$} in the 12{$^\circ$} region.
\end{enumerate}
The question of the first point is addressed in Section \ref{sec:12TPE} and effectively does not treat the 12{$^\circ$} result as a
luminosity measurement, so here this uncertainty is estimated from outside sources. From the standpoint of data
constraints, the VEPP-3 experiment (Reference \cite{vepp3PhysRevLett.114.062005}) used forward elastic $e^\pm p$ scattering for
luminosity normalization in a similar fashion as OLYMPUS would use the 12{$^\circ$} data as a standalone luminosity normalization and
thus does not provide a useful constraint on the value of {$\sigma_{e^+p}/\sigma_{e^-p}$} at high $\epsilon$/low $Q^2$. The CLAS experiment, which
used photon-induced $e^+e^-$ pair production to balance luminosities, provides a measurement of {$\sigma_{e^+p}/\sigma_{e^-p}$} in the small angle bin with
average $Q^2 = 0.232$ GeV$^2/c^2$, $\epsilon = 0.915$ of $R_{2\gamma} = 0.991\pm 0.009$ (including systematic and statistical uncertainty) and
a variety of measurements at $\epsilon > 0.88$ and $Q^2<0.9$ GeV$^2/c^2$ that all scatter around $R_{2\gamma}=1$ within uncertainties on the
order of 1\% \cite{rimal,ass}. Most of the data prior to the modern experiments, predominantly from the 1960s, consists of data at low $Q^2$ but
higher $\epsilon$ (References \cite{Yount:1962aa,Browman:1965zz,Anderson:1966zzf}) or are at much higher energies/$Q^2$ (References \cite{Cassiday:1967aa,
Bouquet:1968aa,Mar:1968qd}). One older experiment (Reference \cite{Bartel:1967aa}) provides a measurement at a comparable kinematic
point to the 12{$^\circ$} system of $R_{2\gamma}=1.012$, but quotes uncertainties on the order of 3.0\%. Thus, at best, existing data constrains this uncertainty to order 0.5\%, and
no precise measurement of {$\sigma_{e^+p}/\sigma_{e^-p}$} at very high $\epsilon$ exists in previous data.
Examining various theoretical and phenomenological models for TPE in this region (References \cite{BerFFPhysRevC.90.015206,Chen:2007ac, Guttmann:2010au,Blunden:2003sp,Chen:2004tw,Afanasev:2005mp,
Blunden:2005ew, Kondratyuk:2005kk, Borisyuk:2006fh,TomasiGustafsson:2009pw}), most predict $R_{2\gamma}$ to be near unity but vary on the order of
a tenth of a percent, as exemplified in Figure \ref{fig:projections}, and thus are all roughly consistent with the aforementioned experimental
data. Taking into account these available sources of experimental and theoretical estimates, it is reasonable to apply an additional 0.1\% uncertainty
on the relative and absolute luminosities when the 12{$^\circ$} is taken as a standalone normalization point for the data set:
$\delta_\text{TPE,abs} = \delta_\text{TPE,rel} = \pm 0.1\%$. Any contribution OLYMPUS can make in reducing this uncertainty via
the combination of multiple luminosity measurements would be valuable for the constraint of the VEPP-3 normalization point
and the various models.
\subsubsection{Additional Discussion on Systematic Uncertainties}
As can be seen in Table \ref{tab:12ds}, the 12{$^\circ$} system total systematic uncertainty for the relative luminosity is well below 1\% and thus
provides a sufficiently good measurement for the {$\sigma_{e^+p}/\sigma_{e^-p}$} analysis to meet the OLYMPUS goals. Additionally, the system performed admirably
in providing an approximate absolute luminosity measurement (or cross section measurement if combined with an alternate luminosity measurement).
Furthermore, note that the dominant systematics (ToF trigger efficiency, lepton tracking efficiency, magnetic field, and elastic cuts)
are all estimated in a non-Gaussian fashion and thus represent ``box-like'' limits. Those wishing to interpret the systematic uncertainty
as a Gaussian $1\sigma$ error could approximate this by scaling by $1/\sqrt{12}$, the standard deviation of a uniform distribution
of unit width, although this is not necessarily a correct interpretation or a good estimate of the systematic uncertainty. Certain contributions
to the uncertainty are correlated between the left and right telescopes, reducing somewhat
the estimate of the relative uncertainty between the telescopes. The details of this will be discussed in the comparison of the left
and right side results in Section \ref{sec:12res}.
\subsection{Results}
\label{sec:12res}
With the complete analysis method and systematic uncertainty determination, the final results for the 12{$^\circ$} luminosity may be considered.
The results in this section are quoted as ratios of the measured 12{$^\circ$} luminosity (calculated separately for detection of the lepton in
the left and right telescopes and for the combined statistics of both sides) to the luminosity used to generate the simulation event sets
(i.e., the slow control luminosity for a given run). This amounts to reformulating Equation \ref{eq:l12} as:
\begin{equation}
\frac{ \mathcal{L}_{\text{12}^\circ} } {\mathcal{L}_\text{MC} } = \frac{\mathcal{L}_{\text{12}^\circ}}{\mathcal{L}_\text{SC}} = \frac{N_\text{data}}{N_\text{MC}\left(\mathcal{L}_\text{SC}\right)}.
\end{equation}
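The species-relative quantities quoted below follow from the same ratio evaluated separately for each species, i.e.,
\[
\frac{\mathcal{L}_{\text{12}^\circ,e^+}/\mathcal{L}_{\text{12}^\circ,e^-}}{\mathcal{L}_{\text{SC},e^+}/\mathcal{L}_{\text{SC},e^-}}
= \frac{N_{\text{data},e^+}/N_{\text{MC},e^+}\!\left(\mathcal{L}_{\text{SC},e^+}\right)}{N_{\text{data},e^-}/N_{\text{MC},e^-}\!\left(\mathcal{L}_{\text{SC},e^-}\right)}.
\]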
Note that the value of ${ \mathcal{L}_{\text{12}^\circ} }/{\mathcal{L}_\text{SC} }$ is expected to be unity only within the uncertainty of the slow control luminosity measurement, which, as discussed
in Section \ref{sec:scsum}, is on the order of several percent in both the relative and absolute luminosities.
To provide a scale in units of integrated luminosity, a typical data run consisted of $\sim$1.5$\cdot 10^{36}$ cm$^{-2}$ of recorded integrated luminosity. Thus, the entire data
set used for the analysis in this work ($\sim$2200 runs) corresponds to $\sim$3.1$\cdot 10^{39}$ cm$^{-2}$ ($\sim$3.1 fb$^{-1}$) of data, approximately equally split between the two lepton species.
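(For the unit conversion, $1\ \mathrm{fb} = 10^{-39}\ \mathrm{cm}^2$, so an integrated luminosity of $3.1\times10^{39}\ \mathrm{cm}^{-2}$ corresponds to 3.1 fb$^{-1}$.)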
The results of the 12{$^\circ$} luminosity estimates are summarized in Table \ref{tab:12results}, showing the measurements and associated uncertainties (as determined in Section \ref{ss:12sys})
for detection of the lepton in the left and right arms as well as the measurement combining the statistics of the two samples. Figures \ref{fig:12left}, \ref{fig:12right}, and \ref{fig:12comb}
show the run-by-run estimates for the left arm, right arm, and combined measurements respectively. For each run-by-run sample, the values are histogrammed (weighted by their statistical significance)
to provide a means of determining the effective values of ${ \mathcal{L}_{\text{12}^\circ} }/{\mathcal{L}_\text{SC} }$ for the full dataset.
Statistical errors on the datasets were computed at the 95\% confidence bound on the fit to the mean of this histogram. For all estimates, the statistical uncertainty is effectively negligible
relative to the systematic uncertainties. In general, the widths of the histogrammed distributions of run-by-run
luminosities were found to be only slightly larger ($\sim$0.1\%) than the mean statistical error associated with the estimate from individual runs (1.5\% and 1.7\% for single-telescope measurements of
{$e^+ p$} and {$e^- p$} runs respectively) and the distributions are well represented by Gaussian distributions,
indicating that only minor time-varying systematic effects were present in the slow control luminosity. As will be discussed in Section \ref{sec:alllumi},
the SYMB run-by-run estimates show features similar to those of the 12{$^\circ$} estimates, indicating that the run-by-run variance was indeed due to effects in the slow control estimate rather than the 12{$^\circ$} estimate.
\begin{table}[htb!]
\begin{center}
\begin{tabular}{|l|c|}
\hline
Measurement & Value \\
\hline\hline
$\mathcal{L}_{\text{12}^\circ,e^+,\text{L}}/\mathcal{L}_{\text{SC},e^+}$ ($e^+$ Left + $p$ Right) & $1.0538 \pm 0.0003\:(\text{stat.}) \pm 0.0244\:(\text{syst.})$ \\
\hline
$\mathcal{L}_{\text{12}^\circ,e^-,\text{L}}/\mathcal{L}_{\text{SC},e^-}$ ($e^-$ Left + $p$ Right ) & $1.0525 \pm 0.0003\:(\text{stat.}) \pm 0.0244\:(\text{syst.})$ \\
\hline
$(\mathcal{L}_{\text{12}^\circ,e^+,\text{L}}/\mathcal{L}_{\text{12}^\circ,e^-,\text{L}})/(\mathcal{L}_{\text{SC},e^+}/\mathcal{L}_{\text{SC},e^-})$ & $1.0012 \pm 0.0004\:(\text{stat.}) \pm 0.0046\:(\text{syst.})$ \\
\hline\hline
$\mathcal{L}_{\text{12}^\circ,e^+,\text{R}}/\mathcal{L}_{\text{SC},e^+}$ ($e^+$ Right + $p$ Left) & $1.0418 \pm 0.0003\:(\text{stat.}) \pm 0.0244\:(\text{syst.})$ \\
\hline
$\mathcal{L}_{\text{12}^\circ,e^-,\text{R}}/\mathcal{L}_{\text{SC},e^-}$ ($e^-$ Right + $p$ Left) & $1.0374 \pm 0.0003\:(\text{stat.}) \pm 0.0244\:(\text{syst.})$ \\
\hline
$(\mathcal{L}_{\text{12}^\circ,e^+,\text{R}}/\mathcal{L}_{\text{12}^\circ,e^-,\text{R}})/(\mathcal{L}_{\text{SC},e^+}/\mathcal{L}_{\text{SC},e^-})$ & $1.0042 \pm 0.0004\:(\text{stat.}) \pm 0.0046\:(\text{syst.})$ \\
\hline\hline
$\mathcal{L}_{\text{12}^\circ,e^+}/\mathcal{L}_{\text{SC},e^+}$ & $1.0478 \pm 0.0002\:(\text{stat.}) \pm 0.0244\:(\text{syst.})$ \\
\hline
$\mathcal{L}_{\text{12}^\circ,e^-}/\mathcal{L}_{\text{SC},e^-}$ & $1.0447 \pm 0.0002\:(\text{stat.}) \pm 0.0244\:(\text{syst.})$ \\
\hline
$(\mathcal{L}_{\text{12}^\circ,e^+}/\mathcal{L}_{\text{12}^\circ,e^-})/(\mathcal{L}_{\text{SC},e^+}/\mathcal{L}_{\text{SC},e^-})$ & $1.0030 \pm 0.0003\:(\text{stat.}) \pm 0.0046\:(\text{syst.})$ \\
\hline
\end{tabular}
\end{center}
\caption[Summary of the 12{$^\circ$} luminosity results]{Summary of the results of the measurement of the luminosity in the 12{$^\circ$} system for the measurements using the left and right telescopes for the detection
of the scattered lepton separately, as well as the combined result. Determination of the quoted systematic uncertainties is detailed in Section \ref{ss:12sys}. The statistical errors represent the 95\%
confidence interval for the average value of the ratio computed over the entire data set. The measurements are presented as the average adjustment to apply to the integrated slow control luminosity (SCL)
for the entire data set so as to compute the estimate of the integrated luminosity from the 12{$^\circ$} measurements for each lepton species. The $e^+/e^-$ ratios presented here would be measurements
of {$\sigma_{e^+p}/\sigma_{e^-p}$} assuming perfect relative slow control luminosity ($\mathcal{L}_{\text{SC},e^+}/\mathcal{L}_{\text{SC},e^-}=1$), but are better interpreted as normalization points assuming no TPE at 12{$^\circ$}
for the relative luminosity or as measurements of
${N_{e^+p,\text{data} }\left(\epsilon,Q^2\right)}/{N_{e^- p,\text{data}}\left(\epsilon,Q^2\right)}$ at $\epsilon \approx 0.98$, $Q^2 \approx 0.165$ GeV$^2$ that may be normalized by
an independent luminosity measurement (e.g., that of the SYMB system (Section \ref{sec:symblumi})) to compute $R_{2\gamma}$ as in Equation \ref{eq:rat}. See Section \ref{sec:12TPE} for the latter analysis.}
\label{tab:12results}
\end{table}
In general, the absolute luminosities determined by the 12{$^\circ$} systems are several percent above the slow control estimate, while the species-relative luminosity determination is within a few tenths
of a percent of unity. The left and right estimates agree well to within uncertainties, providing further evidence for the validity of the measurements.
These estimates provide an effective normalization for the main OLYMPUS results and, with the addition of the SYMB luminosity estimate, an additional measurement
of {$\sigma_{e^+p}/\sigma_{e^-p}$} in the vicinity of $\theta\approx 12^\circ$, as will be discussed in Chapters \ref{Chap6} and \ref{Chap7}.
\begin{sidewaysfigure}
\centerline{\includegraphics[width=1.05\textwidth]{figures/12lumileft.pdf}}
\caption[Luminosity determined by the left 12{$^\circ$} arm by run]{Run-by-run estimate of the integrated luminosity collected by the OLYMPUS experiment relative to the slow control estimate,
as measured by examining {$e^\pm p$} events in which the lepton was detected in the left 12{$^\circ$} telescope in coincidence with a proton in the right drift chamber, subject to the analysis described
in Section \ref{sec:12ana}. The run-by-run values are histogrammed in the right-hand plot and fitted to Gaussian
distributions for each species to produce the estimates of the luminosities for each species over the full data set.}
\label{fig:12left}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\centerline{\includegraphics[width=1.05\textwidth]{figures/12lumiright.pdf}}
\caption[Luminosity determined by the right 12{$^\circ$} arm by run]{Run-by-run estimate of the integrated luminosity collected by the OLYMPUS experiment relative to the slow control estimate,
as measured by examining {$e^\pm p$} events in which the lepton was detected in the right 12{$^\circ$} telescope in coincidence with a proton in the left drift chamber, subject to the analysis described
in Section \ref{sec:12ana}. The run-by-run values are histogrammed in the right-hand plot and fitted to Gaussian
distributions for each species to produce the estimates of the luminosities for each species over the full data set. The outlier positron runs in the vicinity of Run 6050 were taken while
the MWPC data acquisition was malfunctioning, and thus they are excluded from the estimate.}
\label{fig:12right}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\centerline{\includegraphics[width=1.05\textwidth]{figures/12lumiboth.pdf}}
\caption[Luminosity determined by the combined left and right 12{$^\circ$} arm results by run]{Run-by-run estimate of the integrated luminosity collected by the OLYMPUS experiment relative
to the slow control estimate,
as measured by examining {$e^\pm p$} events in which the lepton was detected in either 12{$^\circ$} telescope in coincidence with a proton in the opposite side drift chamber, subject to the analysis described
in Section \ref{sec:12ana}. The run-by-run values are histogrammed in the right-hand plot and fitted to Gaussian
distributions for each species to produce the estimates of the luminosities for each species over the full data set. The outlier positron runs in the vicinity of Run 6050 were taken while
the MWPC data acquisition was malfunctioning, and thus they are excluded from the estimate.}
\label{fig:12comb}
\end{sidewaysfigure}
\section{Luminosity Determined Using the SYMB System}
\label{sec:symblumi}
As originally designed, the symmetric M{\o}ller/Bhabha calorimeter system was to provide a luminosity measurement completely independent from {$e^\pm p$} scattering, examining only
the processes $e^\pm e^-\rightarrow e^\pm e^-$ and $e^+ e^-\rightarrow \gamma\gamma$ involving beam leptons and atomic electrons from the hydrogen gas within the target. This method
of measuring the luminosity would avoid any assumptions regarding the magnitude of TPE effects in {$e^\pm p$} scattering at forward angles that are necessary when using the 12{$^\circ$} measurement
as a luminosity normalization. The original measurement principle was to count such lepton-lepton scattering events in which the outgoing particles scatter symmetrically about the
beam axis ($\theta\approx 1.29^\circ$ at $E_\text{beam}=2.01$ GeV) and are then detected in coincidence in the left and right calorimeters via the deposit of approximately 1 GeV of energy
in each, and, as in the other OLYMPUS analyses, to compare the measured rates to those expected from Monte Carlo simulation. The collimators in front of the calorimeters limited the acceptance
of the detector to such events near the symmetric scattering angle. This method was to provide a very high
statistics measurement of the integrated luminosity using the very high rate of forward lepton-lepton scattering, with statistical uncertainties far smaller than any other element of the analysis.
Unfortunately, for a number of reasons briefly described in the next section and detailed in Reference \cite{oconnor}, this method of luminosity determination was untenable for the OLYMPUS analysis
in that it would have introduced an unacceptably large systematic uncertainty to the final {$\sigma_{e^+p}/\sigma_{e^-p}$} analysis.
To provide a luminosity measurement from the SYMB system that avoided the problems associated with the original method, a new analysis was developed based on the ratio of the rates of multiple
event types detected in the SYMB calorimeters. In particular, two types of events were considered for the analysis:
\begin{itemize}
\item coincidence symmetric lepton-lepton scattering events (as would have been used in the original analysis), and
\item the detection of \textit{simultaneous} (i.e., from the same beam bunch) coincident symmetric leptons and, additionally, a single lepton with energy $\sim$2 GeV corresponding
to a very forward elastic {$e^\pm p$} scatter, resulting in a total deposition of $\sim$1 GeV in one calorimeter and $\sim$3 GeV in the other.
\end{itemize}
While this method greatly reduces the statistical power of the measurement, taking the ratio of multiple event types reduces systematic uncertainties associated with detector efficiency
and eliminates the need to simulate the three separate lepton-lepton event types. This analysis, referred to as the \textit{multi-interaction event} (MIE) method,
is described along with its results in Section \ref{sec:mielumi}; Reference \cite{schmidt}
provides complete detail on the method.
In general, this section provides a brief summary of the analyses and results from the SYMB system for the purpose of establishing the necessary results for the main analysis
described in the remaining chapters. Complete discussions of these analyses may be found in the theses of O'Connor and Schmidt (References \cite{oconnor} and \cite{schmidt}).
\subsection{Discussion of the Untenability of the Original Coincident Event Analysis}
As originally conceived, the SYMB analysis would have involved comparing the rates of symmetric $e^-e^-\rightarrow e^-e^-$ scattering for electron beam running
to the rates of symmetric $e^+e^-\rightarrow e^+e^-$ and $e^+e^-\rightarrow \gamma\gamma$ for positron beam running, normalized to simulation of all three processes
in a similar fashion as the other OLYMPUS analyses. This concept was based on a forward calorimeter luminosity monitoring scheme used by the HERMES experiment to
measure the relative luminosity of positron beams of different polarizations incident on a hydrogen target via $e^+e^-\rightarrow e^+e^-$ and $e^+e^-\rightarrow \gamma\gamma$
events \cite{Benisch2001314}. In converting this concept to a measurement of the relative luminosities of different beam species, however, several important
differences were not properly considered. In particular, the cross sections for the aforementioned processes change rapidly as a function of $\theta$ in the region
of the SYMB detectors, and additionally the cross sections for the different processes vary in $\theta$ with different slopes in the region of interest ($\theta\approx1.3^\circ$),
as shown in Figure \ref{fig:cs3}. This difference makes the original method extremely sensitive to small shifts in beam position, beam slope, and the placement of the detectors
(both in absolute space and relative between the two collimators). Additionally, the comparison of cross sections for multiple processes involves the simulation of all such processes
and the application of proper radiative corrections methods for each, introducing an additional large systematic uncertainty to any luminosity result using such a method. Reference \cite{oconnor}
provides a full analysis of these effects. It was determined that, given the survey uncertainty, beam position uncertainty, available radiative corrections schemes at the time
of the analysis, and other systematic effects, there was at least a 2.8\% systematic uncertainty in any species-relative luminosity measurement made using this method. Use of such
an imprecise luminosity would spoil the resulting uncertainty on the $R_{2\gamma}$ determination well beyond the 1\% uncertainty goals of the experiment. Analyses that attempted to
use this method found values of $\mathcal{L}_{e^+}/\mathcal{L}_{e^-}$ that deviated by several percent or more from the measurements of all other systems, and additionally found
that the measured rate varied inexplicably by several percent over time.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.9\textwidth]{figures/CS3.pdf}}
\caption[Tree-level differential cross sections for SYMB processes]{Calculated tree-level differential cross sections for M{\o}ller scattering (red), Bhabha scattering (blue),
pair annihilation (magenta), and {$e^- p$} elastic scattering (black) in the vicinity of the SYMB calorimeter acceptance.
(Figure reproduced from \cite{PerezBenito20166}.)}
\label{fig:cs3}
\end{figure}
In addition to the inherent sources of uncertainty in this method, it is also believed that the electronics used in the SYMB detector to assess the local-max condition (requiring
that the central crystal in the calorimeter have the highest energy deposited) and the left/right coincidence module may have had failure modes that affected the detector during
data-taking. Regarding the latter, the coincidence histogram showed behavior away from the main (1 GeV, 1 GeV) peak that could not be reproduced in simulation, and was possibly
due to a small timing error in the window of coincidence used by the coincidence module between the two calorimeters. Regarding the former, it was hypothesized that for events with
large energy deposition in a single calorimeter, the comparator module which determines if the central crystal has the highest deposition could be saturated at its maximum value leading
to an erroneous local maximum event rejection \cite{compsat}. These two possible issues pose additional concerns for the coincidence measurement, and must be avoided or suppressed
in any alternative methods of extracting a luminosity measurement from the SYMB.
\subsection{The Multi-Interaction Event Luminosity Determination}
\label{sec:mielumi}
With the originally proposed method for producing a luminosity analysis from the SYMB system invalidated, other methods were sought to provide a means
of estimating the luminosity using the SYMB that was immune to the problems that were believed to be responsible for the failure of the coincidence
method. A method was developed that took advantage of the fact that, in addition to the (1 GeV, 1 GeV) peak corresponding to coincident symmetric
leptons, the SYMB single-side-master histograms showed additional peaks corresponding to other event types (e.g., occurrence of two lepton-lepton
events in coincidence creating a peak at (2 GeV, 2 GeV), a lepton-lepton event in coincidence with an {$e^\pm p$} event in which the {$e^\pm p$} lepton deposits
$\sim$2 GeV in one of the calorimeters creating a peak at (1 GeV, 3 GeV), etc.). An example of the left-master histogram showing these
peaks is shown in Figure \ref{fig:lmmie}. The MIE analysis compares the relative rates of symmetric lepton-lepton events and of events in which
a symmetric lepton-lepton event occurs in coincidence with an elastically scattered {$e^\pm p$} lepton in the right side calorimeter, as recorded in the left-master
histogram (the (1,1) and (1,3) peaks in Figure \ref{fig:lmmie}), normalized to the simulated rate for the {$e^\pm p$} leptons (noting that the coincident lepton-lepton
rate cancels in the ratio, eliminating the need to simulate it). This offers several important advantages over the original lepton-lepton coincidence method:
\begin{enumerate}
\item By taking a ratio of rates, any species-dependent detector efficiency variations are canceled to first order.
\item By using one of the side-master histograms, any unknown problems with the electronics for the coincidence histogram are irrelevant.
\item Since the high energy deposition (3 GeV) occurs in the right calorimeter while the left side receives only 1 GeV for the events of interest,
the left side is not near the comparator saturation region in which its local-max condition may erroneously fail. Since the method examines
the left-master plot, no requirements are placed on the right-side deposition, making it irrelevant if the right side local-max comparison fails
due to the high energy deposition.
\item The need to simulate the three processes of M{\o}ller scattering, Bhabha scattering, and pair annihilation is eliminated, leaving only simulation
of the {$e^\pm p$} lepton rate.
\item The MIE method is considerably less sensitive to effects that introduce very large systematics to the original coincidence method such as beam position and detector position
since the rate of coincident leptons cancels in the ratio.
\end{enumerate}
The disadvantages of the MIE method include a significant reduction in statistical precision relative to the coincidence method (since it depends on two events being recorded
from a single beam bunch) and the fact that only the left-master histogram provides the necessary (1 GeV, 3 GeV) peak needed for the analysis since the right-master histogram
ADC range was not set to include 3 GeV events from the left side. These drawbacks, however, are greatly outweighed by the robustness of the method.
\begin{figure}[thb!]
\centerline{\includegraphics[width=0.9\textwidth]{figures/leftmaster.png}}
\caption[Example of the SYMB left-master histogram in data]{The left-master histogram from SYMB data, which was filled for any event in which the left SYMB calorimeter
met its energy deposition (ADC count) threshold with the center crystal having the highest deposition regardless of the ADC count in the right calorimeter. The visible
peaks correspond to the various combinations of scattering events that can occur in coincidence (i.e., from the same beam bunch), as described in the text.
(Figure reproduced from \cite{schmidt}.)}
\label{fig:lmmie}
\end{figure}
This analysis is detailed in full in Reference \cite{schmidt}, but the essential details are provided here for the purpose of establishing the method
for use as one of the luminosity estimates for the final $R_{2\gamma}$ result.
\subsubsection{Principle of the Measurement}
As noted, the MIE method compares the number $N_{(1,1)}$ of symmetric lepton-lepton events (i.e., M{\o}ller scattering, Bhabha scattering, and pair annihilation) to
the number $N_{(1,3)}$ of events in which a symmetric lepton-lepton event occurs in the same beam bunch as an elastically scattered {$e^\pm p$} lepton reaching the right side calorimeter,
as recorded by the left-master histogram, normalized to the expected cross section of {$e^\pm p$} leptons in the right calorimeter from simulation $\sigma^\text{MC}_{e^\pm p\rightarrow \text{R}}$
and the number of beam bunches $N_b$ in the data sample. As derived in Reference \cite{schmidt}, this estimate must also account for the variance in integrated
luminosity delivered by single beam bunches, $v_b$, and for the average cubed integrated luminosity of single beam bunches, $\left< \mathcal{L}_b^3 \right>$, which enter the formula in
higher-order terms: the likelihood of multiple simultaneous events fluctuates with the beam bunch charge, and more
than two events may occur simultaneously (which ``moves'' events out of the peaks of interest). In principle, these higher-order terms also depend on the total luminosity (the quantity
being measured) and the cross section for all processes that may enter the measurement, $\sigma_\text{tot}$. These may be safely estimated as the slow control luminosity
and the dominant (1,1) cross section, respectively, since they enter only in the higher-order terms, making any error in their determination a small effect. The result of the derivation
in Reference \cite{schmidt} provides the following formula for the integrated luminosity of a given data sample using the MIE method:
\begin{equation}
\mathcal{L}_\text{MIE} = \frac{N_{(1,3)}N_b}{N_{(1,1)}\sigma^\text{MC}_{e^\pm p\rightarrow \text{R}}} - \frac{v_bN_b^2}{\mathcal{L}_\text{SC}} -
N_b\sigma_\text{tot}\left[\left(\frac{v_bN_b}{\mathcal{L}_\text{SC}} + \frac{\mathcal{L}_\text{SC}}{N_b} \right)^2 - \frac{N_b\left< \mathcal{L}_b^3 \right>}{\mathcal{L}_\text{SC}} \right].
\end{equation}
For reference, the second term is approximately 1\% of the leading term, while the third term is approximately 0.1\%. Since the uncertainty of the measurement is on the order of
the third term, consideration of higher orders was unnecessary.
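As a schematic illustration only, the formula above may be evaluated term by term as in the following Python sketch; the function and variable names are illustrative placeholders and are not drawn from the actual OLYMPUS analysis code.
\begin{verbatim}
def mie_luminosity(N_13, N_11, N_b, sigma_mc_R, sigma_tot, L_sc, v_b, mean_Lb3):
    """Evaluate the MIE luminosity formula term by term (illustrative sketch).

    N_13, N_11 : counts in the (1 GeV, 3 GeV) and (1 GeV, 1 GeV) peaks
    N_b        : number of beam bunches in the data sample
    sigma_mc_R : simulated cross section for e+-p leptons reaching the right calorimeter
    sigma_tot  : total cross section entering the measurement (approx. the (1,1) value)
    L_sc       : slow control integrated luminosity
    v_b        : variance of the single-bunch integrated luminosity
    mean_Lb3   : average cubed single-bunch integrated luminosity
    """
    leading = N_13 * N_b / (N_11 * sigma_mc_R)
    second = v_b * N_b**2 / L_sc
    third = N_b * sigma_tot * ((v_b * N_b / L_sc + L_sc / N_b)**2
                               - N_b * mean_Lb3 / L_sc)
    return leading - second - third
\end{verbatim}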
The beam parameters $N_b$ and $v_b$ were provided by the DORIS beam parameter archives associated with the slow control system.
The procedure for determining the count rates in data ($N_{(1,1)}$ and $N_{(1,3)}$) is detailed in Reference \cite{schmidt},
but essentially amounted to fitting the centroids of the relevant peaks in the left-master histogram, placing rectangular box cuts around those centroids, and then
integrating the boxed regions. It was determined that the choice of box size significantly affected the absolute value of $\mathcal{L}_\text{MIE}$, but did not as significantly
affect the species-relative measurement $\mathcal{L}_{\text{MIE},e^+}/\mathcal{L}_{\text{MIE},e^-}$. Given that no clear method existed to determine the exact proper size
of the box cut, no absolute luminosity determination is quoted for the MIE method, and the variation of the box cut size was tested as a systematic uncertainty of the relative measurement.
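As an illustration of this counting step, the following sketch integrates a two-dimensional histogram inside a rectangular box centered on a fitted peak centroid; the interface is hypothetical and the sketch is not taken from the actual analysis code.
\begin{verbatim}
import numpy as np

def box_count(hist2d, x_edges, y_edges, center, half_width):
    """Sum the counts of a 2D histogram inside a rectangular box.

    hist2d     : counts array of shape (len(x_edges)-1, len(y_edges)-1)
    center     : (x, y) fitted peak centroid in axis units
    half_width : (dx, dy) half-widths of the box cut in axis units
    """
    in_x = (x_edges[:-1] >= center[0] - half_width[0]) & (x_edges[1:] <= center[0] + half_width[0])
    in_y = (y_edges[:-1] >= center[1] - half_width[1]) & (y_edges[1:] <= center[1] + half_width[1])
    return hist2d[np.ix_(in_x, in_y)].sum()
\end{verbatim}
Varying the half-widths directly changes the returned count, which illustrates the origin of the box-size sensitivity discussed above.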
\subsubsection{Systematic Uncertainties}
\label{sec:miesys}
As previously noted, the uncertainty on the absolute measurement of the luminosity from the MIE method is quite large (at least several percent)
due to the large variation in absolute count rates that occurs when the sizes of the cut boxes are varied. Additionally, there is no a priori methodology
for determining the ``correct'' box size, and thus it is extremely difficult to assign a specific uncertainty to the absolute luminosities.
The analysis is capable, however, of producing an estimate of the species-relative luminosity with
uncertainty small enough to meet the goals of OLYMPUS.
The main sources of uncertainty for the species-relative MIE luminosity estimate are summarized in Table \ref{tab:miesys}, as analyzed in Reference \cite{schmidt}.
The total systematic of the MIE analysis for the species-relative luminosity was found to be $\delta_\text{MIE}=\pm 0.27\%$, dominated by the contributions from
uncertainty in the beam position and slope, uncertainty in the modeled positions and orientations of the calorimeters and collimators, and the choice of the size
of the cut boxes used in the analysis. Estimates of the effects were conducted in a manner similar to those described in Section \ref{ss:12sys} for the 12{$^\circ$}
system (the exact methods are described in \cite{schmidt}). Due to the relatively small magnetic field in the region traversed by particles from the target to the SYMB ($\lesssim50$ G),
the magnetic field plays only a small role in the MIE uncertainty, while beam energy and radiative corrections are comparably sized effects for the MIE and 12{$^\circ$} analyses. Since the {$e^\pm p$} events
in the SYMB calorimeters are much further forward than those in the 12{$^\circ$} system ($\epsilon = 0.99975$, $Q^2 = 0.002$ GeV$^2$), any uncertainty due to TPE effects is expected to be
much smaller than other effects that were tested (since the TPE uncertainty was 0.1\% and the TPE contribution must go to zero at $\epsilon = 1$).
\begin{table}[thb!]
\begin{center}
\begin{tabular}{|l|c|}
\hline
Uncertainty Source & Relative (\%) \\
\hline\hline
Beam position/slope ($\delta_\text{BPM}$) & $\pm0.21$ \\
\hline
Detector/collimator position ($\delta_\text{geo}$)& $\pm0.13$ \\
\hline
Cut box sizes ($\delta_\text{cuts}$) & $\pm0.10$ \\
\hline
Magnetic field ($\delta_{B})$ & $\pm0.05$ \\
\hline
Radiative corrections ($\delta_\text{rad}$) & $\pm0.03$ \\
\hline
Beam energy ($\delta_{E_\text{beam}}$) & $\pm0.01$ \\
\hline\hline
Total ($\delta_\text{MIE}$) & $\pm0.27$ \\
\hline
\end{tabular}
\end{center}
\caption[Systematic uncertainties of the SYMB MIE luminosity determination]{A summary of the contributions to the systematic uncertainty
in the determination of {$\sigma_{e^+p}/\sigma_{e^-p}$} from the SYMB MIE luminosity estimate. These uncertainties may be
considered to be independent, in general, and thus are added in quadrature to produce the total
uncertainty estimate.}
\label{tab:miesys}
\end{table}
Due to the relative simplicity of the data output of the SYMB system, there are relatively few identifiable possible causes of systematic uncertainties for the
MIE analysis. The system, however, is very sensitive to effects such as beam position and detector geometry (effects to which the 12{$^\circ$} system is quite insensitive).
In general, the sensitivities to systematic effects are very complementary between the 12{$^\circ$} and MIE analyses, with large effects such as tracking efficiency and magnetic
field in the 12{$^\circ$} system being either irrelevant or much smaller effects in the MIE analysis. The statistical uncertainty of the MIE analysis is comparable to that of
the 12{$^\circ$} analysis, and thus effectively negligible. Due to this, the 12{$^\circ$} and MIE luminosity estimates provide an important cross check for the luminosity used in the
final {$\sigma_{e^+p}/\sigma_{e^-p}$} analysis and offer the opportunity to present a measurement of {$\sigma_{e^+p}/\sigma_{e^-p}$} in the vicinity of 12{$^\circ$} as well (Section \ref{sec:12TPE}).
\subsubsection{Results}
\label{sec:mieres}
Since the MIE analysis is only able to use the left-master histogram (the right-master histogram range was not set so as to include the necessary (3 GeV, 1 GeV) peak
for the MIE calculation), the MIE analysis produces a single estimate for the relative luminosity. Similarly to the 12{$^\circ$} results (Section \ref{sec:12res}), this estimate is quoted as a ratio
relative to the slow control luminosities for each species. As computed in Reference \cite{schmidt}, the estimate of the species-relative luminosity ratio relative to slow control
for the dataset of interest was:
\begin{equation}
\frac{\mathcal{L}_{\text{MIE},e^+}}{\mathcal{L}_{\text{MIE},e^-}} \cdot\frac{\mathcal{L}_{\text{SC},e^-}}{\mathcal{L}_{\text{SC},e^+}} = 1.0055 \pm 0.0010\:(\text{stat.}) \pm 0.0027\:(\text{syst.}).
\label{eq:mie}
\end{equation}
This result is very consistent with the 12{$^\circ$} estimate and the expected uncertainties associated with the slow control determination. This result may be used either as
a luminosity normalization point at very forward angles, or combined with the 12{$^\circ$} estimate to provide a high-confidence estimate for the main result. The run-by-run luminosity
estimate from the MIE method and the projected fit distributions are shown in Figure \ref{fig:mie}.
\begin{sidewaysfigure}
\centerline{\includegraphics[width=1.05\textwidth]{figures/mielumi.pdf}}
\caption[Luminosity determined by the multi-interaction event SYMB analysis by run]{Run-by-run estimate of the integrated luminosity collected by the OLYMPUS experiment relative to the slow control estimate,
as measured by the SYMB multi-interaction event analysis. The data and analysis methods are from Reference \cite{schmidt}. The run-by-run values are histogrammed in the right-hand plot and fitted to Gaussian
distributions for each species to produce the estimates of the luminosities for each species over the full data set.}
\label{fig:mie}
\end{sidewaysfigure}
\section{Discussion of the Luminosity Analyses}
\label{sec:alllumi}
In general, despite the issues with the performance of the GEMs in the 12{$^\circ$} telescopes and the SYMB coincidence event analysis, the species-relative luminosity measurements for OLYMPUS
achieved the necessary level of uncertainty to permit an overall uncertainty on the measurement of $R_{2\gamma}$ of less than 1\%. The excellent agreement of the MWPC-only 12{$^\circ$}
{$e^\pm p$} analysis and the MIE analysis in the SYMB system provides a high degree of confidence in the luminosity measurements, since each system is subject to very different
systematic uncertainties. Figure \ref{fig:12mie} shows the run-by-run ratio of the combined left/right 12{$^\circ$} and MIE estimates. Notably, this run-by-run ratio shows less variation in time
than either individual luminosity estimate relative to the slow control luminosity, indicating that the two methods captured systematic effects that were not accounted for in the slow control
estimate. While the MIE method sacrifices the complete exclusion of {$e^\pm p$} scattering that would have been part of the coincidence symmetric lepton-lepton scattering
method, the reduction in uncertainty provided by the MIE analysis (via the cancellation of efficiency uncertainties in the ratio of counts, the reduction in the number
of required simulated physics processes, and the low uncertainty on the estimate of possible TPE at such forward kinematics, $\epsilon = 0.99975$) makes the estimate very robust. Additionally,
the careful analysis performed for the 12{$^\circ$} system provided not only a high precision relative luminosity measurement but also an absolute luminosity estimate for each species with uncertainty
of only a few percent, which may be useful for future physics analyses with the OLYMPUS data.
For the {$\sigma_{e^+p}/\sigma_{e^-p}$} analysis that follows in the remaining chapters, the luminosity measurements may be combined into a single average normalization point or taken individually to provide
either an estimate of the systematic effects of the variation of the relative luminosity scale or a measurement of $R_{2\gamma}$ in the vicinity of $\theta=12^\circ$. These different
cases are discussed in Section \ref{sec:thegoddamnresults}. For the case of the averaged single normalization point, the two analyses (weighted by their uncertainties) provide the following estimate of
the species-relative integrated luminosity relative to the slow control luminosity over the full dataset under consideration:
\begin{equation}
\frac{\mathcal{L}_{e^+}}{\mathcal{L}_{e^-}} \cdot\frac{\mathcal{L}_{\text{SC},e^-}}{\mathcal{L}_{\text{SC},e^+}} = 1.0048 \pm 0.0024\:(\text{combined stat. + syst.}).
\label{eq:avrellumi}
\end{equation}
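As a point of reference, a standard inverse-variance combination (one common way to weight estimates by their uncertainties) may be sketched as follows; this is illustrative only and is not drawn from the actual analysis code, and the exact weighting used for Equation \ref{eq:avrellumi} is that described in the text.
\begin{verbatim}
import numpy as np

def combine_estimates(values, sigmas):
    """Inverse-variance weighted mean of independent estimates (illustrative).

    values : central values of the individual luminosity ratios
    sigmas : total (stat. + syst.) uncertainties of those ratios
    """
    w = 1.0 / np.asarray(sigmas, dtype=float)**2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    sigma = 1.0 / np.sqrt(np.sum(w))
    return mean, sigma
\end{verbatim}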
\begin{sidewaysfigure}[htb!]
\centerline{\includegraphics[width=1.05\textwidth]{figures/12overmie.pdf}}
\caption[Ratio of the 12{$^\circ$} and MIE luminosities by run]{Run-by-run ratio of the combined left+right 12{$^\circ$} luminosity (Figure \ref{fig:12comb}) and the MIE luminosity (Figure \ref{fig:mie}).
The run-by-run values are histogrammed in the right-hand plot and fitted to Gaussian
distributions for each species to produce the estimates of the luminosities for each species over the full data set.}
\label{fig:12mie}
\end{sidewaysfigure}
\chapter{The $e^+p/e^-p$ Cross Section Ratio Analysis}
\label{Chap6}
The final piece of the OLYMPUS result for $R_{2\gamma}$ is the analysis of the elastic {$\sigma_{e^+p}/\sigma_{e^-p}$} ratio in the main tracking volumes
of the detector. As has been discussed previously, this involves not only a robust method of selecting elastic {$e^+ p$} and {$e^- p$} events
from the data sample, but also proper representation of the detector system in the simulation so that the elastic selection method
may be applied equally to the data and the simulation. This chapter discusses the performance of the main tracking detectors (the
drift chambers and time-of-flight (ToF) scintillators), the implementation of the tracking detectors in the simulation, and a method
of selecting elastic events and conducting background subtraction to produce a result for {$\sigma_{e^+p}/\sigma_{e^-p}$} over the full acceptance of the detector.
Additionally, a preliminary estimate of the systematic uncertainty in the {$\sigma_{e^+p}/\sigma_{e^-p}$} analysis is discussed in Section \ref{sec:mainsys}.
The results of this analysis are presented in Chapter \ref{Chap7}.
Note that the analysis described here is only one of several conducted using the OLYMPUS data to produce a {$\sigma_{e^+p}/\sigma_{e^-p}$} result. Multiple
analyses using unique methods were conducted by different members of the OLYMPUS collaboration so as to provide an estimate of the
systematic uncertainty in the result due to choices made in the elastic event selection and background selection. These analyses utilized
different methods for particle-type identification, combinations of kinematic cuts to produce the elastic event sample, models for the remaining
background after cuts, and orderings of the various steps in the analysis, providing a robust examination of the effects of analysis
decisions on the final result. Information on several of these analyses may be found in References \cite{schmidt}, \cite{russell}, and \cite{oconnor},
and Section \ref{sec:indana} discusses the comparison of the analyses.
\section{Spectrometer Performance and Modeling in Simulation}
\label{sec:specperf}
Characterization of the detectors involved in the reconstruction of elastic {$e^\pm p$} events was critical to the analysis, as it allowed
a detailed implementation of the detector response in simulation so as to ensure that the simulated detector accurately represented
the acceptance of the detector during the experiment. In particular, it was critical to model the efficiency and resolutions of the
drift chambers and ToFs in detail so that reconstruction of simulated tracks could occur on an equal footing with tracks in experimental data.
This section describes the measurements and implementation of these parameters in simulation, particularly focusing on the drift chamber
efficiencies.
\subsection{Drift Chambers}
\label{sec:wcperf}
Since the drift chambers were the main reconstruction detectors for OLYMPUS, it was critical to properly model them in the Monte Carlo,
especially with regard to efficiencies (which affected the effective acceptance of the detector) and resolutions (which affected the validity
of applying the same elastic event selections to data and simulation). Each of these quantities was measured throughout the chamber using the
experimental data and globally-fit track information.
\subsubsection{Efficiency}
If the drift chambers had been highly efficient for track detection throughout their volumes, they would have affected the acceptance of the detector
very minimally. This, however, was not the case for the conditions of the OLYMPUS experiment. Furthermore, because the beam environment differed
between {$e^- p$} and {$e^+ p$} experiment modes (predominantly through the backgrounds caused by M{\o}ller and Bhabha scattering of beam leptons from the atomic
electrons in the target gas which caused hits in the innermost drift chamber layers), it was critical to precisely measure the efficiencies in each running
mode to avoid an artificial shift in {$\sigma_{e^+p}/\sigma_{e^-p}$} due to an asymmetry in drift chamber efficiencies in the two modes. In particular, the efficiency of the drift
chambers was lowest in the innermost layers, where the wires were exposed to higher rates from relatively low-energy particles produced in the target that were nevertheless not of
low enough energy to be completely contained away from the chambers by the magnetic field. An example of the inner chamber efficiency is shown in Figure \ref{fig:badwc}; in this layer the efficiency was
additionally reduced by a defective high voltage supply card in the central region. In the outer layers, the efficiency was typically much higher.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/SL0Lallthree.pdf}}
\caption[Efficiency for all three wires to fire in the left innermost drift chamber cell layer]{Probability for all three wires in the innermost layer of left drift chambers cells
to fire for a track in {$e^- p$} running as a function of position in the drift chamber plane. In addition to a reduction in efficiency due to high rates (visible at distances of $<600$ mm from the
first wire), this layer contained a defective high voltage supply card that significantly reduced the efficiency of a 390 mm block in a correlated way. Additionally, a single
defective wire channel at a distance of 1170 mm reduced the all-three hit probability in that region.}
\label{fig:badwc}
\end{figure}
An additional factor of consideration was that blocks of five adjacent wire chamber cells shared a single high voltage distribution card and discriminator card.
Thus, issues with either the high voltage applied to the wires or the low voltage that powered a discriminator card could simultaneously affect up to 15 wires,
introducing the possibility of correlated inefficiencies between wires. While studies indicated that correlation did not occur between wires that did not share a card,
significant efficiency correlations were observed for wires connected to the same card. Since a track in the drift chambers can be constructed without hits in each layer,
it was critical to model such correlation effects so as to avoid over-predicting the number of reconstructible events in simulation.
To account for such effects, the drift chambers were not modeled with simple efficiency maps for each layer in the same fashion as the planes of the 12{$^\circ$} telescopes
(Section \ref{sec:12eff}). For each cell layer in the wire chambers (three single-wire layers), eight maps were calculated corresponding to the $2^3$ possible combinations
of hit/no hit for the three wire layers. To determine the efficiency for a given cell layer, the data was tracked with the layer completely excluded from consideration (both
in terms of contributing hits to possible tracks and in being required to match a track pattern (Section \ref{sec:track})). The data used for the efficiency mapping were a sample
of $\sim$100 data runs for each lepton species, sampled evenly across the entirety of the dataset (approximately 10\% of the full dataset used for the analysis). Each data run was
tracked six times, with a different cell layer masked for each reconstruction (removing the corresponding layers on both the left and right sides). A rough elastic selection
was applied to the tracks to assure reasonable track quality, and the masked layer was checked for hits corresponding to the selected track. Hits were required to be within
several times the resolutions described in the next section to be counted as associated with a track and thus mark a wire efficient. This cutoff was found to be very stable once outside
of the resolution peak. As described, for each layer the probability for each of the eight wire hit combinations was calculated as a function of position in the cell layer plane and maps
of these probabilities were generated for each layer. In the simulation, tracks were tested at the cell layer level using a random draw against these maps to determine which hits
(if any from the track) would be passed from that cell layer to the reconstruction of simulated events. Separate sets of maps were constructed for positron and electron beam operation since
the noise conditions in the innermost layers differed significantly enough to induce an efficiency difference. An example set of maps for an outer layer, where the efficiency was
quite uniform and high, is shown in Figure \ref{fig:wcsl5}.
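The random draw against the correlated maps can be sketched as follows; the map lookup and variable names are hypothetical and the sketch is not taken from the actual simulation code.
\begin{verbatim}
import numpy as np

# The eight hit/no-hit combinations for the three wires of a cell layer,
# in the same order as the corresponding probability maps.
COMBINATIONS = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def draw_hit_combination(combo_probs, rng=np.random.default_rng()):
    """Randomly select which wires of a cell layer register hits for a
    simulated track, given the eight correlated map probabilities at the
    track's position in the cell-layer plane (illustrative sketch)."""
    p = np.asarray(combo_probs, dtype=float)
    p = p / p.sum()                      # guard against rounding in the maps
    idx = rng.choice(len(COMBINATIONS), p=p)
    return COMBINATIONS[idx]             # e.g., (1, 0, 1): wires 0 and 2 fire
\end{verbatim}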
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/SL5Lmaps.pdf}}
\caption[Set of correlated efficiency maps for the outermost cell layer in the left chamber]{Probability maps for the eight possible combinations of hits in the outermost left cell layer
during {$e^+ p$} running, where $x$ is the distance from the first wire and $\phi$ is the track azimuthal angle. The three layers are labeled 0, 1, and 2, and the plot titles indicate which
wires are hit in a given map. Note that the dominant probability is for all three wires to fire, outside of
the region of a known disconnected cell at $x\approx1400$ mm. Additionally, lower single wire efficiencies occur at two other locations but do not cause a significant correlated probability of
missing all three hits in the cell layer.}
\label{fig:wcsl5}
\end{figure}
Wires that were known to be disconnected or to have malfunctioning readout equipment were excluded from the analysis of both data and simulation entirely, rather
than implementing 0\% efficiency in the maps. This approach is superior due to the fact that while a track may pass predominantly through the region of a deactivated
wire, it may also pass through the active region of an adjacent wire and produce a hit that reconstructs to the position in the deactivated cell. This was especially
common in the forward portions of the drift chambers where tracks passed wires with large angles relative to the normal vector of the wire planes. Thus, the ``dead'' regions
due to inactive wires had soft efficiency edges rather than hard cutoffs in acceptance, which was modeled in the maps as described. This complete model of the drift
chamber efficiencies permitted OLYMPUS to achieve a high degree of data/simulation agreement in final yields (up to the effect of TPE), providing evidence for the use of
the method encapsulated in Equation \ref{eq:rat}.
\subsubsection{Time Resolution}
The inherent time resolution of the drift chambers was dominated by the physics of ionization drift rather than any time scales
inherent to the capabilities of the TDCs used to measure times for each wire. Dispersion caused by random interactions of the drifting ions
with the drift gas worsened the resolution roughly linearly as a function of the distance between the point of the initial ionization and
the wire. The resolution was found to vary from approximately 20 ns in the vicinity of the wire to slightly more than 30 ns near the edges
of the cell. The resolution was measured by examining the width of the distribution of the difference between the distance reconstructed from
the globally reconstructed track involving the drift chamber time in question and the distance predicted by the time-to-distance (TTD) function for a hit,
which was then converted to a time width via inversion of the TTD function (see Reference \cite{schmidt}). In this way, the method captures at least a portion of the uncertainty in
the TTD function for a given cell in addition to resolution widening caused by the drift gas.
To apply these resolutions to simulation, simulated hits were first produced in the form of distances from drift chamber wires and then converted to times
using the inverse TTD function. This time was then smeared with a Gaussian whose width was appropriate to the hit's drift time (i.e., its distance from the wire).
These ``smeared'' time hits were then saved as the experimental data analogs of the experimentally measured drift times and passed to the track reconstruction
algorithm in an identical fashion to data.
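A minimal sketch of this smearing step is given below, assuming callables for the inverse TTD function and the measured resolution curve; all names are illustrative and not drawn from the actual simulation code.
\begin{verbatim}
import numpy as np

def smear_drift_time(distance_mm, inverse_ttd, resolution_ns,
                     rng=np.random.default_rng()):
    """Convert a simulated drift distance to a 'measured' drift time (sketch).

    inverse_ttd   : callable mapping drift distance -> nominal drift time [ns]
    resolution_ns : callable mapping nominal drift time -> Gaussian sigma [ns]
                    (roughly 20 ns near the wire, >30 ns near the cell edge)
    """
    t_nominal = inverse_ttd(distance_mm)
    return rng.normal(t_nominal, resolution_ns(t_nominal))
\end{verbatim}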
\subsection{Time-of-Flight System}
Due to the importance of the ToF system in the analysis (as the main component of the trigger), it was critical to the results of the experiment to properly
calibrate and simulate the system. Timing offsets, ADC calibrations, etc. for each bar were determined using a detailed data-driven approach,
which is discussed in detail in Reference \cite{russell}. The efficiency of the ToF bars was not simulated using a mapping as in other systems, but rather was
modeled by measuring, in data, the response of each bar and the attenuation length of scintillation light as a function of position along the bar, producing a model
of the scintillator response and efficiency for implementation in the simulation. Such a model was necessary because, while special triggers were included
at a prescaled rate in the dataset to allow for data-driven direct ToF efficiency measurements, it was found that these triggers were generally swamped by forward event
noise and provided very little useful data for the majority of ToF bars.
Quantities such as timing offsets were determined using an iterative approach
of matching ToF hit positions reconstructed from the PMT timing difference and positions projected from reconstructed trajectories in the drift chambers (excluding
the ToF hit information from the trajectory fit). An example of the success of this methodology is shown in Figure \ref{fig:tofphi}, which shows the excellent agreement
between the reconstructed track $\phi$ and the associated ToF hit $\phi$ position.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/tofphi.pdf}}
\caption[Difference between ToF hit $\phi$ and reconstructed track $\phi$ by bar]{Difference between ToF hit $\phi$ and reconstructed track $\phi$ by bar for {$e^- p$} data, after event
pair selection but prior to background subtraction. Note that the logarithm of the counts is shown by the color scale of the histogram so as to make the mid-angle bars with fewer
counts visible in the figure. In general, the left ToF bars (indices 0-17) had poorer resolutions than the right bars (indices 18-35), but all bars were properly calibrated to match
tracks on average. Such resolution differences were implemented in the simulation so as to make simulated events match data events in reconstruction and analysis.}
\label{fig:tofphi}
\end{figure}
\subsection{Reconstruction}
\label{sec:reconrev}
In general, the reconstruction algorithm based on the elastic-arms algorithm (EAA) (References \cite{OHLSSON1,OHLSSON2}) used for OLYMPUS (Section \ref{sec:track}) performed extremely well. This was achieved
by a careful tuning of the EAA parameters to best model the OLYMPUS tracking environment. This process is described in detail in Reference \cite{russell}. The extensions to EAA implemented to
improve its performance with regard to correctly resolving the wire side ambiguities of the drift chamber hits are described in Reference \cite{schmidt}. To test the efficiency of the
tracker, and to provide a basis for the tuning of the EAA parameters, the capability of the tracker was tested using simulation tracks for which the true trajectories are known, but that have been
converted to the data format with time smearing and the introduction of the wire side ambiguity of the drift chamber hits. Through the tuning of the parameters, tracking efficiency for such tracks
(proper identification of the correct wire side for all drift chamber hits and reconstruction parameters within acceptable bounds of the known true values) was in excess of 99\% for the tracking
of all three relevant particle types throughout the entirety of the acceptance. The difference in efficiencies between the different particle types was within the statistical uncertainty of
the tests \cite{russell2}. While there was no effective method for testing the tracking of data events in a similar fashion, the high efficiency for simulation tracking and the similarity
of efficiency for all particle types provided high confidence in the robustness of the tracker. This was further buoyed by the comparison of data to simulation event selection, which indicated
very good agreement (Section \ref{sec:datasim}).
Resolutions with respect to various reconstruction parameters are discussed in Section \ref{sec:pairsel} in the context of the various kinematic parameters used for the
elastic event selection, and the figures of Appendix \ref{chap:kincuts} illustrate the resolutions on a number of kinematic parameters over the full detector acceptance
for both {$e^- p$} and {$e^+ p$} data.
\section{Method of the Analysis}
\label{sec:mainana}
This section describes the methodology of the elastic {$e^\pm p$} event selection analysis conducted by the author and used
to produce the majority of the results presented in this work. The analysis presented here is one of several $R_{2\gamma}$
analyses conducted for OLYMPUS, each of which used different choices of particle identification, kinematic cuts, and background
subtraction models. The variation in these analyses provides a useful measure of certain elements of the systematic uncertainty
in the final result and is discussed later in this chapter. Details on the alternate analyses may be found in any of the other
OLYMPUS PhD theses (References \cite{schmidt,russell,oconnor}), although other analyses were conducted that have not yet been published.
As previously noted, this analysis method was applied to both the experimental data and the events generated from simulation, which were
constructed in such a way so as to precisely mimic the format of the experimental data. This allows completely equal treatment of data
and simulation throughout the entire analysis (track reconstruction, particle identification, kinematic cuts, etc.). From this point forward,
any use of the simulated events makes use of the default event weight (see Section \ref{sec:gen}), which was the exponentiated radiative corrections
model based on the prescription of Maximon and Tjon \cite{MaximonPhysRevC.62.054320}, unless explicitly noted otherwise.
\subsection{Particle Identification and ToF Hit Association}
\label{sec:partid}
In this analysis, the first step was to conduct particle identification, i.e., the proper association of particle trajectories reconstructed
by the tracking algorithm (Section \ref{sec:track}) with the true particle type associated with the track. The reconstruction algorithms
used for OLYMPUS attempted to fit a lepton or proton to any matched pattern in the data, and thus the reconstructed data contained tracks
of different particle types associated with the same drift chamber hits. First, the list of all pairs of leptons corresponding to the beam species
and protons in opposite sectors as identified by the tracking algorithm was constructed. This process eliminated all leptons of the opposite
charge.
For each such pair, the validity of the tracker's particle type assignment was assessed using information from the ToF scintillators. For each
track, the trajectory was projected to the ToF panels and the expected bar to be hit in association with the track was calculated. Then, any actual
registered ToF hits within a tolerance of the two bars surrounding the projected hit bar were considered to be hits that could be associated with the
track.
For each ToF hit, the likelihood that the hit corresponded to the particle type identified for the track in question was assessed using the meantime of the hit,
i.e., the mean of the times recorded by the upper and lower PMTs of the ToF bar, which corresponds to the time
elapsed between the beam bunch crossing the target and the scattered particle striking the ToF (up to a correction for the travel time of the beam between
the bunch timing location and the scattering vertex). The use of ToF ADC information was considered, but was not ultimately used for several reasons:
\begin{enumerate}
\item protons could deposit a range of energies in the scintillator bars (including small amounts of energies similar to lepton depositions),
\item it was difficult to assess the total energy deposition of particles that possibly passed through the edges of two bars, and
\item generally the calibration of the ToF ADCs was not as well constrained as that of the TDCs for timing information.
\end{enumerate}
The distributions of the meantimes for tracks associated with each particle type for events with each beam species are shown in Figures
\ref{fig:pemt}, \ref{fig:emt}, \ref{fig:ppmt}, and \ref{fig:pmt}. The electron candidate plot (Figure \ref{fig:emt}) most clearly shows the band
of meantimes corresponding to elastic events (since actual proton tracks reconstructed under a lepton hypothesis would have positive curvature, resulting
in a positron identification). Each of the other three plots shows the ambiguity introduced by tracking all events with both lepton and proton
assumptions that must be deconvolved by the particle identification portion of the analysis. As suggested by the clean separation of the elastic
electron meantime band (and the visible separation of bands corresponding to leptons and protons in the other plots),
a simple bar-by-bar cut on meantime would achieve much of this goal. This approach worked very well in the backward regions of the detector
where the meantime separation is large (as in the 12{$^\circ$} {$e^\pm p$} analysis (Section \ref{sec:12ana})), but struggled in the intermediate and
forward regions where the meantime bands blend together.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/pemt.pdf}}
\caption[Meantime distribution by ToF bar for proton candidates in $e^-$ beam data]{Distribution of ToF meantimes by bar for tracks identified
by the reconstruction algorithm as protons in $e^-$ beam data.}
\label{fig:pemt}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/emt.pdf}}
\caption[Meantime distribution by ToF bar for electron candidates in $e^-$ beam data]{Distribution of ToF meantimes by bar for tracks identified
by the reconstruction algorithm as electrons in $e^-$ beam data.}
\label{fig:emt}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/ppmt.pdf}}
\caption[Meantime distribution by ToF bar for proton candidates in $e^+$ beam data]{Distribution of ToF meantimes by bar for tracks identified
by the reconstruction algorithm as protons in $e^+$ beam data.}
\label{fig:ppmt}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/pmt.pdf}}
\caption[Meantime distribution by ToF bar for positron candidates in $e^+$ beam data]{Distribution of ToF meantimes by bar for tracks identified
by the reconstruction algorithm as positrons in $e^+$ beam data.}
\label{fig:pmt}
\end{figure}
To achieve better particle identification in the forward regions, a two-dimensional cut was developed using the reconstructed momentum of the track
in question to predict the meantime according to the speed of the particle's travel from the vertex to the ToF bar \footnote{The author wishes to acknowledge R. Russell \cite{russell} for the
essential method of this particle identification cut.}. Since essentially all leptons in the data sample
had $\beta = \frac{v}{c} \approx 1$, a simple maximum meantime cut, tuned bar-by-bar, was sufficient for lepton identification.
More interesting were the protons, whose large mass causes significant variation of $\beta$ as a function of their momentum.
For each track, the path length of the particle's trajectory from the scattering vertex to the ToF panels $l$ was calculated by the tracking algorithm. Then, the value of $\beta$ for
each track was calculated using the momentum as reconstructed by the tracking algorithm $\left|\mathbf{p}\right|$, but assuming the particle had proton mass. That is:
\begin{equation}
\beta_p = \frac{\left|\mathbf{p}\right|}{\sqrt{\left|\mathbf{p}\right|^2 + m_p^2}}.
\end{equation}
This was used to predict a meantime under the assumption of proton mass for the particle:
\begin{equation}
\overline{t}_p = \frac{l}{\beta_p c}.
\label{eq:mttp}
\end{equation}
Track pairs were then histogrammed by the measured ToF meantime and the quantity $\overline{t}_p$. An example of such a histogram for a relatively forward ToF bar in $e^+$ beam data (where
the positron-proton disambiguation is most difficult) is shown in Figure \ref{fig:mt2d}. As can be seen, this histogram produces a clear separation between correctly and wrongly identified proton
candidates. For each ToF bar, a linear cut between these two bands was optimized so as to cleanly separate the bands, and thus properly identify tracks that corresponded
to real protons.
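A minimal sketch of the quantities entering this cut is shown below; the physical constants are standard values, while the cut parameters (slope and intercept) and the sign convention are hypothetical placeholders for the bar-by-bar tuning described in the text.
\begin{verbatim}
import numpy as np

M_P_MEV = 938.272      # proton mass [MeV]
C_MM_PER_NS = 299.792  # speed of light [mm/ns]

def predicted_proton_meantime(p_mev, path_length_mm):
    """Meantime predicted for a track of momentum p, assuming proton mass."""
    beta_p = p_mev / np.sqrt(p_mev**2 + M_P_MEV**2)
    return path_length_mm / (beta_p * C_MM_PER_NS)

def proton_cut_discriminant(meantime_ns, p_mev, path_length_mm, slope, intercept_ns):
    """Signed distance of a track from a bar-by-bar linear cut in the
    (predicted meantime, measured meantime) plane (illustrative sketch)."""
    t_pred = predicted_proton_meantime(p_mev, path_length_mm)
    return meantime_ns - (slope * t_pred + intercept_ns)
\end{verbatim}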
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/mt2d.pdf}}
\caption[ToF meantime vs. meantime assuming proton mass for Bar 7 in $e^+$ beam data]{Histogram of track pairs by the associated ToF meantime and the meantime assuming the particle
to have proton mass $\overline{t}_p$ (Equation \ref{eq:mttp}) for Bar 7 (a mid-to-forward bar on the left) in $e^+$ beam data.
Note that the band at $\overline{t}_p=0$ corresponding to true protons is well-separated
from the wrongly identified particles in the roughly vertical band. The curvature of the vertical band bends towards the ToF meantime gap, illustrating why a simple meantime cut is difficult:
small meantimes for real protons were obfuscated by wrongly identified tracks in the upper portion of the vertical band.}
\label{fig:mt2d}
\end{figure}
All lepton-proton pairs were analyzed in this fashion, attempting to find a ToF hit that could be properly associated with the particle type as identified by the tracking algorithm.
Any {$e^\pm p$} pair without a valid ToF hit combination was rejected. In the event that a single pair was found to possibly validly correspond to multiple ToF hit combinations
(an effect that occurred $<1\%$ of the time and was typically due to a particle striking near the edge of a bar and depositing energy in the adjacent bar as well), the best
combination for ToF assignment was determined as the pair of ToF hits with the best correlation of vertex times computed from the ToF meantimes and track path lengths
(i.e., predicting the most similar scatter time for the two particles).
\subsection{Elastic {$e^\pm p$} Pair Selection}
\label{sec:pairsel}
All track pairs with valid ToF hits were then tested by a set of kinematic and fiducial cuts designed to identify pairs with approximately elastic kinematics. The general
philosophy in placing these cuts was to keep them wide (so as to avoid effects from varying resolutions) and to apply fiducial cuts only to the proton (since the acceptance of
the detector for protons was identical for {$e^- p$} and {$e^+ p$} events, while the lepton acceptance was different due to the magnetic field). For the same reason, the reconstructed
parameters of the proton were used to derive quantities such as $Q^2$ and $\epsilon$ so that comparisons of {$e^- p$} and {$e^+ p$} data in these parameters were on the most equal footing
possible.
First, fiducial cuts were applied to the proton in each sector, requiring $\left|\phi\right|<11.5^\circ$ from the horizontal (which avoided the acceptance edges due to the drift chamber frames throughout
the acceptance) and a reconstructed proton vertex within $\left|z\right|< 350$ mm of the center of the target. While other analyses considered fiducial cuts that vary as a function of
the scattering angle, which allowed for the recovery of more events in regions excluded by these cuts, this approach was chosen for this analysis to limit the sensitivity
to errors in the simulated acceptance of the target and to provide a cross-check on such methods.
Then each pair was tested against the following elastic kinematic cuts:
\begin{enumerate}
\item Vertex time from ToF meantime and track path length correlation (corrected for vertex position): $\left|\Delta t\right| = \left|t_p-t_{e^\pm}\right| < 5$ ns (Figures \ref{fig:cut1e} and \ref{fig:cut1p})
\item Vertex $z$ correlation: $\left|\Delta z\right| = \left|z_p-z_{e^\pm}\right| < 175$ mm (Figures \ref{fig:cut2e} and \ref{fig:cut2p})
\item Electron-proton elastic angle correlation: $ \left|\theta_p - \theta_{p,\text{elas}}(\theta_{e^\pm})\right| < 7^\circ$ (Figures \ref{fig:cut3e} and \ref{fig:cut3p})
\item Beam energy reconstructed from the track momenta: $\left| E_{\text{beam},p}-E_\text{beam}\right| < 1000$ MeV (Figures \ref{fig:cut4e} and \ref{fig:cut4p})
\item Beam energy reconstructed from the $\theta$ of both tracks assuming elastic kinematics: $ \left|E_{\text{beam},\theta}-E_\text{beam}\right| < 350$ MeV (Figures \ref{fig:cut5e} and \ref{fig:cut5p})
\item Single-arm missing energy of the lepton over the square of the energy as computed by the expected
elastic energy from the reconstructed $\theta_{e^\pm}$: $\Delta E'_\theta/E'^2 < 0.0048$ MeV$^{-1}$ (Figures \ref{fig:cut6e} and \ref{fig:cut6p})
\item Longitudinal (beam direction) momentum balance: $p_{z,p} + p_{z,e^\pm} - p_\text{beam} > -500 $ MeV (Figures \ref{fig:cut7e} and \ref{fig:cut7p})
\item Coplanarity: $\left|\Delta\phi-180^\circ\right| = \left|\phi_\text{right}-\phi_\text{left}-180^\circ\right|<7.5^\circ$ (Figures \ref{fig:cut8e} and \ref{fig:cut8p})
\end{enumerate}
Appendix \ref{chap:kincuts} presents 2D histograms of the data events by each of these cut parameters and the lepton scattering angle $\theta_{e^\pm}$ after application of all selection cuts (but
before background subtraction) for each beam species, as noted for each listed cut. As can be seen in these figures, each individual cut was chosen to be quite permissive, allowing the combination
of the multiple exclusive kinematic cuts to produce a relatively clean elastic sample. The remaining background was predominantly at high lepton $\theta$ and low total momentum, and was very similar between
the two modes of operation, making it straightforward to separate and subtract in the final steps of the analysis.
Note that some of these cuts are heavily correlated (such as the elastic angle correlation and the beam energy from angles), but both cuts were applied so as to
make an effective cut that was not axis-parallel in either parameter. Each of these cut values (with the exception of coplanarity) was chosen as approximately five times the RMS width of the distribution
of the parameter (with the other, non-correlated cuts applied)
at its widest across the acceptance, so as to reduce sensitivity to the detector resolutions by erring on the side of including more background.
The elastic {$e^\pm p$} candidate sample passed to the background subtraction methods, however, still contained less than 30\% background throughout the whole acceptance
due to the strength of the multiple cuts that were applied using the full exclusive reconstruction of the events.
Other analyses deliberately chose tighter cuts fit to the distributions \cite{schmidt,russell,oconnor}.
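For orientation, the structure of the cut evaluation for a single lepton-proton pair can be sketched as follows, using the cut values listed above; the field names of the pair object are hypothetical placeholders for the reconstructed quantities described in the text, and the sketch is not drawn from the actual analysis code.
\begin{verbatim}
def passes_elastic_cuts(pair, beam_energy_mev):
    """Evaluate the kinematic cuts listed above for one lepton-proton pair (sketch)."""
    cuts = {
        "vertex_time":   abs(pair.t_p - pair.t_lep) < 5.0,                 # ns
        "vertex_z":      abs(pair.z_p - pair.z_lep) < 175.0,               # mm
        "elastic_angle": abs(pair.theta_p - pair.theta_p_elastic) < 7.0,   # deg
        "beam_E_mom":    abs(pair.E_beam_from_momenta - beam_energy_mev) < 1000.0,
        "beam_E_theta":  abs(pair.E_beam_from_angles - beam_energy_mev) < 350.0,
        "missing_E":     pair.dEprime_over_Eprime_sq < 0.0048,             # 1/MeV
        "pz_balance":    pair.pz_p + pair.pz_lep - pair.p_beam > -500.0,   # MeV
        "coplanarity":   abs(pair.delta_phi - 180.0) < 7.5,                # deg
    }
    return all(cuts.values()), cuts
\end{verbatim}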
If no pairs were found with good ToF hit assignments passing all cuts, remaining pairs were assessed for inclusion in the sample if they passed at least five of the seven cuts, with
strict limits still enforced for the lower bounds of the reconstructed beam energy from both angles and momenta and the longitudinal momentum balance. If multiple pairs met these criteria,
the one with the most cuts passed and lowest weighted sum of cut parameters was preferred. Most commonly, this allowed an event to be selected with one or both tracks at higher than expected
momentum, caused by the difficulty of resolving momentum from the bending in the magnetic field as trajectories become straighter at higher momenta.
If after this reduction in cut requirements no pair remained, the event was discarded as a non-elastic {$e^\pm p$} event.
In approximately two-thirds of events with a pair passing these conditions, a single pair was found. Since the likelihood for simultaneous
multiple elastic events in the detector was much less than 1\%, in the remaining third
the pair with the minimum sum of the cut parameters weighted by the width of each cut was chosen as the elastic pair. The selected pair was referred to as an ``initial pair''
and was considered as part of the elastic candidate sample for background subtraction.
\subsection{Background Subtraction}
\label{sec:backsub}
As noted, the final kinematic cut of the analysis (coplanarity of the lepton and proton tracks) was deliberately left open to allow the application
of a background subtraction. Several other background models were considered, including models applied to the $z$ vertex correlation of the track pairs and
to the elastic $\theta$ correlation of the pairs. The former approach was discarded both because the shape of a random background (the convolution of the target
distribution for each track) was an irregular shape to fit and because the background (as determined from the sidebands of the coplanarity distributions)
was found to have a strong $z$ vertex correlation, as shown in Figure \ref{fig:zbackcor}. Regarding the latter model, the asymmetry of the distribution complicated the model, which
introduced uncertainties not present in modeling the coplanarity background. Various combinations of 2D models involving pairs of the three considered distributions were also considered,
although none were found to be as robust as the coplanarity method. Notably, the different OLYMPUS analyses included several different approaches to both the method of background subtraction
and the strictness of cuts prior to background subtraction (i.e., the amount of background in the sample before background subtraction and the final cut).
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/zcorback.pdf}}
\caption[Vertex $z$ correlation of the background sample]{Correlation of the reconstructed $z$ of track pairs (that of the right track minus that of the left track) for track
pairs selected from the sidebands of the coplanarity distribution ($\left|\Delta\phi - 180^\circ\right|>4^\circ$) for {$e^- p$} event selection (left) and {$e^+ p$} event selection (right).
Note that the background had strong vertex correlation throughout the acceptance, indicating that the background was dominated by various scattering events from the target region
and that this distribution was not a useful quantity on which to perform background subtraction.}
\label{fig:zbackcor}
\end{figure}
Since cuts involving the reconstructed momentum and elastic event $\theta$ correlation of the particles heavily
suppressed backgrounds that would be expected to also prefer $\Delta\phi\approx 180^\circ$ (such as pion electro-production \cite{PhysRev.129.1834} and $e^+e^-$ scattering),
it was expected that the background in $\Delta\phi$ would be dominated by random track pairs. For a given small bin in $Q^2$ (or $\theta$), the corresponding $\phi$ distribution
was essentially uniform, constrained by the frames of the drift chambers. Thus, the expected random pair background was the convolution of two uniform distributions, i.e., a triangular
distribution. The initial {$e^\pm p$} pair selection was binned in a 2D histogram of $\Delta\phi$ and $Q^2$ (as determined by the proton $\theta$ assuming the beam energy and elastic kinematics),
with a bin width of 0.05 GeV$^2$ in $Q^2$ (corresponding to approximately 35 bins across the full acceptance). The background fraction was computed independently for each $Q^2$ bin, lepton sector (left or right),
event type ({$e^- p$} or {$e^+ p$}), and for both data and simulation, according to the following procedure (a schematic sketch of the fitting steps follows the list):
\begin{enumerate}
\item A single $Q^2$ bin was projected as a 1D histogram of $\Delta\phi$, as shown in Figure \ref{fig:ebacks}.
\item A Gaussian plus constant model was fit to the central region of the histogram ($\left|\Delta\phi - 180^\circ\right|<6^\circ$) to find the peak of the coplanarity distribution. For all bins
within the acceptance, this peak was found to be within 0.15{$^\circ$} of 180{$^\circ$} , indicating excellent $\phi$ reconstruction.
\item The sidebands of the distribution (outside of $4\sigma$ of the Gaussian fit of the previous step) were fit to a triangular distribution model ($a-b\left|x-\mu\right|$, where $\mu$ is the mean
of the Gaussian fit in the previous step).
\item This background distribution was passed, along with the data/simulation coplanarity distribution, to the final step of the analysis, which applied the final cut on coplanarity and determined
the fraction of counts within that cut corresponding to background that should be subtracted. Additionally, the 95\% confidence bounds of the background model fit were retained to estimate the
uncertainty in the background model.
\end{enumerate}
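The following sketch illustrates steps 2 and 3 of this procedure for a single $Q^2$ bin using standard SciPy fitting routines; it is illustrative only and is not taken from the actual analysis code.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_const(x, amp, mu, sigma, const):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + const

def triangle(x, a, b, mu):
    return a - b * np.abs(x - mu)

def fit_coplanarity_bin(dphi_centers, counts):
    """Fit the coplanarity peak and the triangular sideband background (sketch)."""
    # Step 2: Gaussian + constant fit to the central region.
    central = np.abs(dphi_centers - 180.0) < 6.0
    p_gauss, _ = curve_fit(gauss_plus_const, dphi_centers[central], counts[central],
                           p0=[counts.max(), 180.0, 1.0, 0.0])
    mu, sigma = p_gauss[1], abs(p_gauss[2])
    # Step 3: triangular fit to the sidebands (outside 4 sigma of the peak).
    sideband = np.abs(dphi_centers - mu) > 4.0 * sigma
    p_tri, _ = curve_fit(lambda x, a, b: triangle(x, a, b, mu),
                         dphi_centers[sideband], counts[sideband],
                         p0=[counts[sideband].mean(), 0.0])
    return mu, sigma, p_tri  # peak position, width, background parameters
\end{verbatim}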
The resulting background fractions as a function of $Q^2$ (for the final selection described in the next section) are shown in Figures \ref{fig:bfleft} and \ref{fig:bfright} for leptons going left and right,
respectively, for both lepton species in both data and simulation. In general, it was found that {$e^- p$} data exhibited slightly higher background levels than positron data, consistent with the beam conditions
for each species. As expected, the background levels in simulation were minimal ($\mathcal{O}(1\%)$) and may be attributed to radiative events, in which a radiated photon causes a large deviation in $\phi$ for one of the tracks,
and to occasional mis-reconstructed events. In the experimental data, the initial pair selection resulted in a maximum background fraction
of $\sim$30\% within the acceptance of the detector ($Q^2\lesssim2.2$ GeV$^2$). The different OLYMPUS analyses used different levels of strictness in their initial elastic selections, resulting in higher
or lower background fractions than this analysis. This difference in modeling provides a valuable test of the robustness of the background subtraction models via comparison of the results produced
by each analysis.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/eleftbackground.pdf}}
\caption[Example of the background subtraction method for a single $Q^2$ bin]{Example of the background model fit for left-going electrons for the bin centered at $Q^2=1.525$ GeV$^2$ for the full
{$e^- p$} dataset used for the analysis. The blue points represent the data (with statistical uncertainties), the green line the initial Gaussian+constant fit to the central region
($\left|\Delta\phi - 180^\circ\right|<6^\circ$), and the red line the triangular background fit to the regions beyond $4\sigma$ of the Gaussian peak. For all bins within the acceptance, the model
fit the data extremely well, producing positively sloped triangles as expected without explicitly requiring this condition.}
\label{fig:ebacks}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/bfleft.pdf}}
\caption[Background fraction after initial event selection as a function of $Q^2$ for leptons going left]{Background fraction remaining in the final elastic event selection for leptons detected in the left sector for {$e^- p$}
and {$e^+ p$} events in both data and simulation. The error bars represent the 95\% confidence bounds on the integrals of the background distributions within the final coplanarity cut.}
\label{fig:bfleft}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/bfright.pdf}}
\caption[Background fraction after initial event selection as a function of $Q^2$ for leptons going right]{Background fraction remaining in the final elastic event selection for leptons detected in the right sector for {$e^- p$}
and {$e^+ p$} events in both data and simulation. The error bars represent the 95\% confidence bounds on the integrals of the background distributions within the final coplanarity cut.}
\label{fig:bfright}
\end{figure}
\subsection{Final Processing}
\label{sec:fproc}
With the background model in place, the final event selection was made by placing the final available cut on $\Delta\phi$ and integrating the number of counts remaining to produce
the data and simulation total elastic event yields. This $\Delta\phi$ cut was placed at the $3\sigma$ width of the Gaussian distribution fitted to each $Q^2$ bin in the background modeling
process described in the previous section. These distribution widths are shown as functions of $Q^2$ for leptons going left and right in Figures \ref{fig:copwleft} and \ref{fig:copwright} respectively.
Note that, since the simulation resolution in $\Delta\phi$ was slightly better than the data resolution, the fitted cut width from the data sample
for each lepton species was applied to both the data and simulation of that species so as to maintain the consistent treatment of data and simulation as in the initial cuts. For each $Q^2$ bin
of width 0.05 GeV$^2$, the region within the $3\sigma$ data width was integrated for both the coplanarity histogram and the background model. The integral of the background model was subtracted from
the integral of the coplanarity histogram to produce the final yields of elastic {$e^\pm p$} events that were used to generate the final results of the experiment.
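This final step can be sketched as follows, where the background function returns the expected background counts per bin as obtained from the triangular fit described above; the interface is illustrative and not taken from the actual analysis code.
\begin{verbatim}
import numpy as np

def final_yield(dphi_centers, counts, background_per_bin, mu, sigma):
    """Integrate the coplanarity histogram within the 3-sigma window and subtract
    the integral of the fitted background model (illustrative sketch).

    background_per_bin : callable returning expected background counts per bin
    mu, sigma          : fitted peak position and width of the coplanarity peak
    """
    window = np.abs(dphi_centers - mu) < 3.0 * sigma
    signal_plus_background = counts[window].sum()
    background = np.sum(background_per_bin(dphi_centers[window]))
    return signal_plus_background - background
\end{verbatim}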
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/copwleft.pdf}}
\caption[Width of the $\Delta\phi$ distribution as a function of $Q^2$ for leptons going left]{The $1\sigma$ width of the Gaussian fit to the coplanarity distribution as a function
of $Q^2$ for leptons in the left sector. The loss of resolution in data near $Q^2 = 1.4$ GeV$^2$ corresponds to the lepton passing through the inefficient region of the left-inner drift
chamber (see Figure \ref{fig:badwc}), which caused the lepton tracks to be comprised of fewer hits. The purity of the simulation data sample resulted in a maintenance of resolution in this region
despite the loss of hits due to the efficiency map implementation.}
\label{fig:copwleft}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/copwright.pdf}}
\caption[Width of the $\Delta\phi$ distribution as a function of $Q^2$ for leptons going right]{The $1\sigma$ width of the Gaussian fit to the coplanarity distribution as a function
of $Q^2$ for leptons in the right sector. The loss of resolution in data near $Q^2 = 1.2$ GeV$^2$ corresponds to the left-going proton passing through the inefficient region of the left-inner drift
chamber (see Figure \ref{fig:badwc}), which caused the proton tracks to be comprised of fewer hits. The purity of the simulation data sample resulted in a maintenance of resolution in this region
despite the loss of hits due to the efficiency map implementation.}
\label{fig:copwright}
\end{figure}
\section{Comparison of Data and Simulation}
\label{sec:datasim}
With the full analysis method applied, comparisons between the data and simulation elastic event yields after background subtraction
provide insight into the quality of the data and analysis as well as a preliminary measure of the absolute elastic {$e^- p$} and {$e^+ p$} cross-sections.
Figure \ref{fig:datamckelly} presents the ratio of the background-subtracted data and simulation yields for {$e^- p$} and {$e^+ p$} events, separated by
whether the lepton was detected in the left or right sector of the spectrometer. For this figure, the simulation was conducted using the Kelly
parametrization of the elastic proton form factor \cite{PhysRevC.66.065203} and the Maximon and Tjon radiative corrections prescription \cite{MaximonPhysRevC.62.054320}.
Encouragingly, the data and simulation show agreement on the level of a few percent across the entire acceptance, which is consistent with the expectations in deviations
due to uncertainties in the knowledge of the form factors. Differences in the ratios computed for leptons going left and right are indicative of the approximate systematic
uncertainty in the determination of the absolute elastic {$e^\pm p$} cross sections that are possible with the OLYMPUS data. Additionally, since the simulation events used for
Figure \ref{fig:datamckelly} were generated using the slow control luminosity, the fact that the data/simulation ratio is several percent above unity across the entire
acceptance is very consistent with the measurement of the absolute luminosity from the 12{$^\circ$} system ($\mathcal{L}_{e^\pm}/\mathcal{L}_\text{SC}\approx 1.046\pm0.024$, see
Section \ref{sec:12res}). Thus, careful analyses of the OLYMPUS data may be able to provide information on the absolute {$e^- p$} and {$e^+ p$} cross sections, and also information
regarding the elastic form factors, with uncertainties on the order of $\sim$5\%.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/dataovermc_mark2.pdf}}
\caption[Ratio of data yields over simulation yields by lepton species and detector side with the Kelly form factor model]{The ratio of data yields over simulation
yields (background-subtracted) by lepton species and detector side with the Kelly form factor model \cite{PhysRevC.66.065203}. Given that uncertainties in the determination of the elastic
proton form factors are of the order of at least several percent, especially at the high end of the $Q^2$ range, the overall agreement between the data and simulation yields
is strong. Note that these results
are normalized to the slow control luminosity, and are thus subject to a shared systematic uncertainty in the absolute value of all points of several percent.}
\label{fig:datamckelly}
\end{figure}
Figure \ref{fig:datamcsum} shows the ratio of the total {$e^- p$} and {$e^+ p$} elastic event yields in data and simulation for three different form factor parametrizations:
the Kelly model \cite{PhysRevC.66.065203}, the Bernauer model \cite{BerFFPhysRevC.90.015206}, and the dipole form factor (Equation \ref{eq:dipff}). The sum of the {$e^- p$} and {$e^+ p$} yields
is a particularly useful quantity for examination of the behavior of the different form factor models, since any effects from TPE or lepton charge-odd radiative corrections that
are present in the data but not the simulation are canceled in this sum, providing a means of determining the absolute cross section in the absence of such effects. As expected, the models
agree at low $Q^2$ where existing data constrains the models well, but considerable deviations are apparent as $Q^2$ increases. Since the luminosity normalization is shared between
the models, which effectively removes uncertainty contributions from the absolute normalization when comparing the relative values of the ratios in different models, the large deviations
between the form factor models at higher $Q^2$ indicate that the OLYMPUS data can provide constraints on future form factor models. The possibility of these additional physics results
will be considered in future OLYMPUS publications.
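The cancellation at work in the summed yields can be made explicit with a toy model in which the measured cross sections are written as $\sigma^{\pm} = \sigma_{0}\,(1 \pm \delta_{\mathrm{odd}} + \delta_{\mathrm{even}})$, where $\delta_{\mathrm{odd}}$ collects the TPE and lepton-charge-odd radiative terms. The numerical values in the Python sketch below are purely illustrative.
\begin{verbatim}
sigma_0    = 1.00     # Born cross section (arbitrary units)
delta_odd  = 0.010    # charge-odd piece (TPE + odd radiative terms), illustrative
delta_even = -0.15    # charge-even radiative correction, illustrative

sigma_plus  = sigma_0 * (1.0 + delta_odd + delta_even)   # e+ p
sigma_minus = sigma_0 * (1.0 - delta_odd + delta_even)   # e- p

# The ratio is sensitive to the charge-odd piece,
# approximately 1 + 2*delta_odd/(1 + delta_even) ...
print("sigma+/sigma-   =", round(sigma_plus / sigma_minus, 4))
# ... while the sum depends only on the charge-even part:
# 2*sigma_0*(1 + delta_even), i.e., the odd term cancels exactly.
print("sigma+ + sigma- =", round(sigma_plus + sigma_minus, 4))
\end{verbatim}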
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/formfactors_mark2.pdf}}
\caption[Background-subtracted lepton-summed data yields over simulation for several form factor models]{Ratio of the summed elastic {$e^- p$} and {$e^+ p$} yields of each species in
data and simulation for several form factor models \cite{PhysRevC.66.065203,BerFFPhysRevC.90.015206}. In the summed yield of {$e^- p$} and {$e^+ p$} , effects from TPE and $\mathcal{O}(\alpha^3)$
radiative corrections from lepton vertices cancel. Given the power of the OLYMPUS data to discriminate between models, there exists the potential in the OLYMPUS data to inform future
models of the proton elastic form factors. Note that these results are normalized to the slow control luminosity, and are thus subject to a shared systematic uncertainty
in the absolute value of all points of several percent.}
\label{fig:datamcsum}
\end{figure}
\section{Systematic Uncertainties}
\label{sec:mainsys}
While the final analyses of the systematic uncertainties associated with the OLYMPUS $R_{2\gamma}$ result were still underway at the time of writing,
this section provides a preliminary assessment of the systematic uncertainties associated with the results presented in Chapter \ref{Chap7}, with a particular
focus on identifying the remaining dominant uncertainties for consideration in the later stages of the analysis. Note that the systematic uncertainties from
the MIE luminosity determination (Section \ref{sec:miesys}) and the 12{$^\circ$} system {$e^\pm p$} measurement, used either as a luminosity determination or as an additional kinematic point
(Section \ref{ss:12sys}), are independent from the effects discussed here. In particular, any uncertainty from the luminosity determination applies as a constant shift shared by
all points in the determination of $R_{2\gamma}$, rather than as a point-to-point uncertainty.
The estimates presented here should be considered preliminary, and a final analysis (including additional detail) will accompany the published OLYMPUS results.
\subsection{Detector Acceptance and Efficiency}
The complexities of the wire chamber acceptance, efficiency, and its interaction with the magnetic field (both in terms of particle trajectories and the time-to-distance
calibration) make a simulation-based approach to estimating the systematic uncertainties associated with such effects infeasible. Due to issues with the drift chamber
efficiency, uncertainties in the geometric survey of the detector, and the difficulty of the drift chamber time-to-distance parametrization, these sources of systematic uncertainty
are the dominant contributions to the overall uncertainty, and thus it is critical to properly assess them. The redundancy of the left and right detector
systems in OLYMPUS provides a means of estimating the systematic uncertainties from these effects based on examination of the data.
Ideally, the elastic event yields separated by
the lepton going left or right in the detector would be identical, or at least any deviations between the left and right yields would be completely accounted for
in simulation. Any unexplained left/right deviations provide a first-order estimate of systematic uncertainties due to detector acceptance, efficiency, and magnetic field. Figure
\ref{fig:leftright} shows the lepton-left/right ratio of background-subtracted yields for {$e^- p$} and {$e^+ p$} events in data and simulation, and Figure \ref{fig:ratleftright} shows the ratio
of the {$R_{2\gamma}$} results for the separate lepton-left and lepton-right samples. While the simulation captures some of the structure in the ratio of left/right yields in data, the structure
is not fully accounted for by the inefficiencies implemented in simulation. In particular, the ``peak-valley'' structure between $Q^2=1.1$ GeV$^2$ and $Q^2=1.6$ GeV$^2$ is known to
be related to the highly inefficient region of the left drift chamber (see Figure \ref{fig:badwc}), which reduces the tracking resolution significantly and affects event selection. Most
likely, the applied resolutions in the simulation did not degrade the hit quality sufficiently to precisely match the conditions of data tracking, leading to a smaller left/right difference
in simulation than in data.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/leftoverright_mark2.pdf}}
\caption[Ratio of lepton left/right background-subtracted yields in data and simulation]{Ratio of the lepton left/right background-subtracted elastic {$e^\pm p$} yields in data and simulation,
separated by lepton species. Structures in this ratio that differ between data and simulation are indicative of effects such as differences in acceptance, detector efficiency, and magnetic
field between the left and right sectors that are not fully accounted for in the simulation. These effects contribute to the total systematic uncertainty estimate. The dominant
``peak-valley'' structure between $Q^2=1.1$ GeV$^2$ and $Q^2=1.6$ GeV$^2$ is known to be related to the highly inefficient region of the left
drift chamber (see Figure \ref{fig:badwc}), which reduces the tracking resolution significantly and affects event selection.}
\label{fig:leftright}
\end{figure}
Deviations between the data and simulation in Figure \ref{fig:leftright} provide indications of the systematic uncertainty on a possible absolute cross section extraction, while deviations
from unity in Figure \ref{fig:ratleftright} provide an indication of how such uncertainties affect the {$e^- p$} and {$e^+ p$} yields differently and thus provide an estimate of the uncertainty of the
{$R_{2\gamma}$} result. While the lepton-left and lepton-right data samples are statistically independent, there remain deviations in the value of {$R_{2\gamma}$} of $\sim$1\% that cannot be attributed to statistical
variation between the samples, and thus this is taken as an estimate of the systematic uncertainty due to detector acceptance and efficiency effects (which are additionally heavily convolved
with the effects of the magnetic field uncertainty). At this stage, this is the dominant systematic uncertainty for the {$R_{2\gamma}$} result, and work is ongoing to reduce it if possible.
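One simple way to quantify how much of the left/right difference is not statistical is to compare the scatter of the lepton-left to lepton-right {$R_{2\gamma}$} ratio about unity with the scatter expected from the statistical errors alone. The Python sketch below illustrates this excess-variance idea with placeholder numbers; it is a schematic of the concept rather than the procedure actually used to arrive at the $\sim$1\% estimate quoted above.
\begin{verbatim}
import numpy as np

# Hypothetical per-bin values of R(lepton-left)/R(lepton-right) and their
# statistical uncertainties (placeholders, not OLYMPUS numbers).
ratio_lr = np.array([1.004, 0.992, 1.011, 0.988, 1.013, 0.994])
stat_err = np.array([0.004, 0.005, 0.006, 0.007, 0.009, 0.012])

# Chi^2 of the ratio against unity using statistical errors only.
chi2 = np.sum(((ratio_lr - 1.0) / stat_err) ** 2)

# Excess spread: subtract the mean statistical variance from the observed
# scatter about unity; what remains is attributed to systematic effects.
observed_var = np.mean((ratio_lr - 1.0) ** 2)
stat_var     = np.mean(stat_err ** 2)
excess = np.sqrt(max(observed_var - stat_var, 0.0))

print(f"chi^2 / ndf = {chi2:.1f} / {len(ratio_lr)}")
print(f"excess left/right spread ~ {excess:.3%}")
\end{verbatim}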
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/ratleftright_mark.pdf}}
\caption[Ratio of the $R_{2\gamma}$ results for leptons going left and right in the detector]{Ratio of the $R_{2\gamma}$ results for events in which the lepton was detected in the left and right
sectors of the detector. The error bars are statistical, and it should be noted that the lepton-left and lepton-right datasets are statistically independent.
Deviations from unity of this ratio that cannot be attributed to statistics provide a means of estimating systematic uncertainties due to effects such as acceptance, magnetic field,
and detector efficiency.}
\label{fig:ratleftright}
\end{figure}
\subsection{Elastic and Fiducial Cuts}
To control uncertainties due to the choices of the elastic and fiducial event cuts used in the {$R_{2\gamma}$} analysis, these cuts were varied and the results recomputed to examine their
effect on the value of {$R_{2\gamma}$} . Additionally, a potentially stronger analysis of the effects of choices in the {$e^\pm p$} event selection is provided by the comparison of the independent
analyses of the OLYMPUS data \cite{schmidt,russell} discussed in Section \ref{sec:indana}. It remains useful, however, to consider the effects of the cut choices within a single
analysis.
Each of the eight fiducial, pair selection, and elastic cuts applied in the analysis (listed in Section \ref{sec:mainana}) was varied individually by $\pm10\%$ in range, and
the resulting change in the value of {$R_{2\gamma}$} between these variations was computed bin-by-bin. The quadrature sum of the variations between the values of {$R_{2\gamma}$} at $\pm10\%$ cut ranges for
the eight cuts is shown in Figure \ref{fig:cutsys}. For most $Q^2$ bins, the sum of the variations is less than 0.2\%, with the notable exception of the bin at $Q^2=1.725$ GeV$^2$. To an
extent, this lack of effect is expected due to the strategy used for this analysis, which placed loose cuts and allowed the exclusively reconstructed kinematics to determine event selection
rather than placing tight elastic cuts that would be more sensitive to detector resolutions. The cause of the larger uncertainty in the $Q^2=1.725$ GeV$^2$ bin is under investigation, but
is dominated by the uncertainty associated with the fiducial cut on the reconstructed $z$ vertex of event pairs.
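Schematically, the cut-variation study amounts to recomputing $R_{2\gamma}$ with each cut range widened and narrowed by 10\% and adding the resulting shifts in quadrature, bin by bin. The Python sketch below captures that bookkeeping; the function \texttt{compute\_R} is a hypothetical stand-in for the full analysis chain (event selection, background subtraction, and normalization) and simply returns dummy values.
\begin{verbatim}
import numpy as np

N_BINS = 5   # number of Q^2 bins (illustrative)
N_CUTS = 8   # fiducial, pair-selection, and elastic cuts

def compute_R(cut_scales):
    # Hypothetical stand-in for the full analysis: returns R_2gamma per bin
    # with a smooth, fake dependence on the cut-range scale factors.
    return 1.0 + 0.002 * (np.sum(cut_scales) - N_CUTS) * np.linspace(0.5, 1.5, N_BINS)

sys_sq = np.zeros(N_BINS)
for i in range(N_CUTS):
    up = np.ones(N_CUTS); up[i] = 1.1   # widen this cut's range by 10%
    dn = np.ones(N_CUTS); dn[i] = 0.9   # narrow it by 10%
    delta = compute_R(up) - compute_R(dn)
    sys_sq += delta ** 2                # quadrature sum over the eight cuts

print("per-bin cut systematic on R_2gamma:", np.round(np.sqrt(sys_sq), 5))
\end{verbatim}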
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/cutsystot.pdf}}
\caption[Estimate of the total systematic uncertainty due to event selection cuts]{Estimate of the total systematic uncertainty due to event selection cuts, computed as the quadrature sum
of the variations in {$R_{2\gamma}$} caused by varying the boundaries of each of the cuts listed in Sections \ref{sec:pairsel} and \ref{sec:fproc} by $\pm10\%$. For most $Q^2$ bins, this uncertainty
is on the order of 0.2\% or smaller, with the exception of the bin at $Q^2=1.725$ GeV$^2$.}
\label{fig:cutsys}
\end{figure}
Additionally, effects from the choices made in particle identification (i.e., the cuts made in the measured vs. expected ToF meantime discussed in Section \ref{sec:partid}) were also examined
by a similar procedure of varying cut boundaries. In most ToF bars, the separation between leptons and protons was extremely clear and thus variations in the cut had essentially no effect.
In the regions where pair particle identification is most ambiguous (the ToF bars in the central region of each detector side), differences between the independent analyses (Section \ref{sec:indana}),
which each treated particle identification in a different manner, are likely a more robust measure of the systematic uncertainties due to such analysis choices.
\subsection{Background Subtraction}
Due to the qualitative nature of the various choices involved in constructing a background subtraction model (the kinematic quantities on which subtraction is performed,
the models used for the elastic peak and background in those quantities, etc.), the final assessment of the systematic uncertainty due to background subtraction will involve
a comparison of the independent {$R_{2\gamma}$} analyses using the OLYMPUS data, each of which applied different approaches to this problem. It is useful, however, to examine the uncertainties
inherent to the background subtraction method used in the analysis presented in this work (Section \ref{sec:backsub}).
As shown in Figures \ref{fig:bfleft} and \ref{fig:bfright}, an uncertainty in the background subtraction arises from the uncertainty in the fit of the background model to the
data. This uncertainty increases for data as a function of $Q^2$ as the statistics of the data decrease, but remains relatively constant and comparably small for simulation. In the
highest $Q^2$ bins used for the analysis, the 95\% confidence interval for the background fit to the absolute {$e^- p$} and {$e^+ p$} yields in data reaches approximately 2\%. Much of this variation
is statistical, however, and thus captured somewhat in the final statistical uncertainties associated with those bins. An additional cross check on the background subtraction was conducted
by varying the sideband region of the coplanarity histogram to which the background model was fit, from the region outside $3\sigma$ of the peak to the region outside $5\sigma$. These alterations
to the procedure did not change the background fraction by more than the confidence intervals associated with the background fit. Thus, conservatively, a systematic uncertainty of $\sim$0.5\% may
be ascribed to the background subtraction in the highest $Q^2$ bins, with much smaller contributions at lower $Q^2$.
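The cross check described above can be illustrated with a toy version of the procedure: fit a background shape to the coplanarity sidebands, integrate it under the elastic peak to obtain a background fraction, and repeat with the sideband boundary moved from $3\sigma$ to $5\sigma$. The Python sketch below uses a Gaussian-plus-linear toy spectrum and a SciPy least-squares fit; the shapes, widths, and rates are invented for illustration and do not represent the OLYMPUS background model.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Toy coplanarity spectrum: Gaussian elastic peak plus a slowly varying background.
x = np.linspace(-30.0, 30.0, 121)      # coplanarity (degrees from back-to-back)
sigma_peak = 2.0
truth = 5000.0 * np.exp(-0.5 * (x / sigma_peak) ** 2) + (60.0 + 0.5 * np.abs(x))
counts = rng.poisson(truth)

def bkg_model(x, a, b):
    return a + b * np.abs(x)

def background_fraction(n_sigma):
    side = np.abs(x) > n_sigma * sigma_peak    # sideband used for the fit
    popt, _ = curve_fit(bkg_model, x[side], counts[side], p0=(50.0, 0.0))
    peak = np.abs(x) < 3.0 * sigma_peak        # signal window
    return bkg_model(x[peak], *popt).sum() / counts[peak].sum()

for n in (3, 4, 5):
    print(f"sideband beyond {n} sigma: "
          f"background fraction = {background_fraction(n):.3%}")
\end{verbatim}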
\subsection{Radiative Corrections}
While a concerted effort was made to properly account for radiative corrections in the analysis of the OLYMPUS data via the full-simulation scheme
described in Section \ref{sec:radgen}, the choice of radiative corrections prescription that is used affects the final {$R_{2\gamma}$} result due to the different
approximations and assumptions made by different models. The OLYMPUS radiative generator was designed to permit the OLYMPUS results to be presented
using a variety of different radiative corrections schemes so as to facilitate comparison with other data. This uncertainty may be eliminated by matching radiative corrections schemes when
comparing different experiment results or experimental data to theoretical or phenomenological models, but it is worthwhile
to assess the overall effect of different models on the {$R_{2\gamma}$} results.
Figures \ref{fig:ratexpo} and \ref{fig:ratmaxmo} present the ratios of the {$R_{2\gamma}$} results that arise from making key choices in the application of radiative corrections.
The former shows the ratio of the result when the Maximon and Tjon radiative corrections scheme \cite{MaximonPhysRevC.62.054320} is applied using exponentiated and
non-exponentiated methods, while the latter shows the ratio of the results using the Maximon and Tjon scheme and the Mo and Tsai scheme \cite{MoRevModPhys.41.205} (the two
most commonly applied radiative {$e^\pm p$} prescriptions). Each of these ratios shows deviation from unity that increases with $Q^2$, reaching 0.5\%-1.0\% at the upper end of the
OLYMPUS acceptance. This illustrates the importance of matching radiative corrections schemes when comparing datasets and predictions and the importance of understanding
the exact methods used to apply radiative corrections in different datasets.
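The distinction between the two treatments can be seen schematically: at first order the radiative correction enters as a multiplicative factor $(1+\delta)$, while exponentiation promotes the correction to $e^{\delta}$, resumming higher-order soft-photon emission. The Python sketch below simply compares the two weights for a range of illustrative $\delta$ values; it is a toy picture of the general idea, not the Maximon--Tjon or Mo--Tsai implementation used in the OLYMPUS generator.
\begin{verbatim}
import numpy as np

# Illustrative first-order corrections delta (placeholder values).
delta = np.array([-0.05, -0.10, -0.15, -0.20, -0.25])

first_order   = 1.0 + delta       # non-exponentiated event weight
exponentiated = np.exp(delta)     # exponentiated (soft-photon resummed) weight

for d, f, e in zip(delta, first_order, exponentiated):
    print(f"delta = {d:+.2f}:  1+delta = {f:.4f},  exp(delta) = {e:.4f},"
          f"  exp/(1+delta) = {e / f:.4f}")
\end{verbatim}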
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/exponetiation.pdf}}
\caption[Ratio of the $R_{2\gamma}$ results for simulation using exponentiated and non-exponentiated radiative corrections]{Ratio of the $R_{2\gamma}$ results for simulation using
exponentiated and non-exponentiated radiative corrections under the prescription of Maximon and Tjon \cite{MaximonPhysRevC.62.054320}.
The statistical uncertainties for each data point are suppressed since they are completely correlated in each radiative corrections model.}
\label{fig:ratexpo}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/maxmo.pdf}}
\caption[Ratio of the $R_{2\gamma}$ results for simulation using the Maximon/Tjon and Mo/Tsai radiative corrections prescriptions]{Ratio of the $R_{2\gamma}$ results for simulation using
the Maximon/Tjon \cite{MaximonPhysRevC.62.054320} and Mo/Tsai \cite{MoRevModPhys.41.205} radiative corrections prescriptions. For each simulation, the radiative corrections were
applied with exponentiation. The statistical uncertainties for each data point are suppressed since they
are completely correlated in each radiative corrections model.}
\label{fig:ratmaxmo}
\end{figure}
\subsection{Form Factors}
While the choice of proton elastic form factor model significantly affects the extracted absolute {$e^- p$} and {$e^+ p$} cross sections (Section \ref{sec:datasim}), it is expected in an analysis
of {$\sigma_{e^+p}/\sigma_{e^-p}$} that effects from the form factor model used should be extremely small in the absence of any errors in the analysis that introduce a bias in the way the kinematics (in particular the value
of $Q^2$) for {$e^- p$} and {$e^+ p$} events are computed. Figure \ref{fig:ratkb} presents the ratio of the {$R_{2\gamma}$} results computed using the Kelly \cite{PhysRevC.66.065203} and Bernauer \cite{BerFFPhysRevC.90.015206}
form factor models. Across the full acceptance, the deviation between the two is much less than 0.01\%, indicating that this is a minimal effect in the {$R_{2\gamma}$} result and providing a sanity check
on the basic calculations of the reconstructed kinematics. For reference, a similar ratio between the {$R_{2\gamma}$} results using these form factors, the dipole form factor, or even treating the proton
as a point particle gives extremely similar results, providing confidence in the implementation of the form factors in the radiative event generator.
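For reference, the dipole parametrization mentioned above (Equation \ref{eq:dipff}) has the familiar form $G_D(Q^2) = (1 + Q^2/0.71~\mathrm{GeV}^2)^{-2}$, with $G_E \approx G_D$ and $G_M \approx \mu_p G_D$. The Python sketch below evaluates it at a few points roughly spanning the OLYMPUS $Q^2$ range, together with the reduced cross section combination $\epsilon G_E^2 + \tau G_M^2$ in one common convention; it is shown only to make the comparison concrete and is not a substitute for the Kelly or Bernauer parametrizations.
\begin{verbatim}
M_P  = 0.9383    # proton mass in GeV
MU_P = 2.793     # proton magnetic moment in nuclear magnetons

def dipole(Q2):
    # Standard dipole form factor, Q2 in GeV^2.
    return (1.0 + Q2 / 0.71) ** -2

def reduced_cross_section(Q2, eps):
    # eps*GE^2 + tau*GM^2 in one common convention, with GE = G_D, GM = mu_p*G_D.
    tau = Q2 / (4.0 * M_P ** 2)
    return eps * dipole(Q2) ** 2 + tau * (MU_P * dipole(Q2)) ** 2

# Approximate (Q^2, epsilon) pairs spanning the OLYMPUS acceptance.
for Q2, eps in [(0.6, 0.90), (1.0, 0.81), (1.5, 0.66), (2.0, 0.47)]:
    print(f"Q^2 = {Q2:.1f} GeV^2: G_D = {dipole(Q2):.3f}, "
          f"reduced cross section = {reduced_cross_section(Q2, eps):.4f}")
\end{verbatim}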
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.12\textwidth]{figures/diffformfactors.pdf}}
\caption[Ratio of the $R_{2\gamma}$ results for simulation with the Kelly and Bernauer form factor models]{Ratio of the $R_{2\gamma}$ results for simulations conducted
with the Kelly \cite{PhysRevC.66.065203} and Bernauer \cite{BerFFPhysRevC.90.015206} form factor models. As expected, the total effect from the choice of form factor model is extremely
small (much less than 0.01\% across the full acceptance). The statistical uncertainties for each data point are suppressed since they are completely correlated in each form factor model.}
\label{fig:ratkb}
\end{figure}
\subsection{Discussion of Current Total Systematic Uncertainty Estimate}
Due to the dominant effects of the detector acceptance and efficiency that contribute to the current systematic uncertainty estimate for this analysis, analogous effects that were
shown to be small in the 12{$^\circ$} systematic uncertainty analysis (Section \ref{ss:12sys}), such as beam energy and position, are not considered here, but will be considered as part of the final OLYMPUS results.
References \cite{schmidt} and \cite{russell} include discussion of additional systematic effects such as those from the ToF detector and track reconstruction efficiencies, but these effects were found
to be small relative to the aforementioned dominant effects. For the preliminary results shown in Chapter \ref{Chap7}, a conservative 1.5\% bin-to-bin total systematic uncertainty
is quoted for the {$R_{2\gamma}$} results in light of the fact that final analysis of the {$R_{2\gamma}$} results and the systematic uncertainties was ongoing at the time of writing.
This, however, is likely a considerable overestimate in most bins (especially at low $Q^2$) and thus should only be considered in the preliminary context of these
results. Systematic uncertainties due to the luminosity normalization and for the independent bin constructed using the results from the 12{$^\circ$} system are separate from
this estimate and are discussed in detail in Sections \ref{sec:miesys} and \ref{ss:12sys}, respectively.
\chapter{Results and Discussion}
\label{Chap7}
With the methodology of the analysis in place, and in combination with the relative luminosity analyses presented in Chapter \ref{Chap5}, the OLYMPUS
result for $R_{2\gamma}$ (Equation \ref{eq:rat}) may be constructed. This chapter presents the results of the analysis described in Chapter \ref{Chap6},
the result for the value of {$\sigma_{e^+p}/\sigma_{e^-p}$} as measured in the 12{$^\circ$} system using the MIE luminosity normalization, as well as a preliminary comparison of the different
$R_{2\gamma}$ analyses conducted using the OLYMPUS data. As mentioned in Section \ref{sec:datasim}, further physics results pertaining to the absolute elastic {$e^\pm p$} cross section
and proton form factors are not part of this work, but are expected to be included in subsequent OLYMPUS publications.
The following results represent $\sim$3.1 fb$^{-1}$ of data, approximately evenly split between the two lepton species. This corresponded to a total yield (after background subtraction)
of approximately $4\cdot 10^6$ accepted elastic {$e^\pm p$} events for each lepton species. The simulation dataset, generated according
to the principles described in Section \ref{sec:sim}, included approximately $2\cdot 10^9$ radiative elastic events, which ensured that the statistical uncertainty of
the simulated dataset was negligible in comparison to both the statistical and systematic uncertainties on the data in all regions of the OLYMPUS acceptance. The strong statistical
power of the OLYMPUS dataset allows examination of the ratio in continuous fashion across the entire OLYMPUS acceptance.
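To put these statistics in perspective, the statistical uncertainty on a per-bin yield ratio built from $N^{+}$ and $N^{-}$ counts is approximately $R\sqrt{1/N^{+} + 1/N^{-}}$, neglecting correlations introduced by the shared simulation and luminosity normalization. The Python sketch below applies this to hypothetical per-bin yields of roughly the magnitude described above; the numbers are placeholders rather than the actual bin contents.
\begin{verbatim}
import numpy as np

# Hypothetical per-bin elastic yields for each lepton species (placeholders).
n_pos = np.array([1.2e6, 6.0e5, 1.5e5, 4.0e4, 1.0e4])
n_neg = np.array([1.2e6, 6.1e5, 1.5e5, 3.9e4, 1.0e4])

R = n_pos / n_neg
stat_err = R * np.sqrt(1.0 / n_pos + 1.0 / n_neg)

for r, e in zip(R, stat_err):
    print(f"R = {r:.4f} +/- {e:.4f} (stat)")
\end{verbatim}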
The results that follow should be considered preliminary and subject to further investigations of the systematic uncertainties in the analysis. It is expected, however, that these results
will accurately represent the effective trends in $R_{2\gamma}$ that will be presented in subsequent OLYMPUS publications, as indicated by the constraints on the systematic uncertainties discussed
in Section \ref{sec:mainsys} and the consistency of the independent analyses of the OLYMPUS data presented in Section \ref{sec:indana}.
\section{Note on the Choice of the Luminosity Normalization}
As discussed in Section \ref{sec:alllumi}, the 12{$^\circ$} (Table \ref{tab:12results}) and MIE (Equation \ref{eq:mie}) species-relative luminosity measurements were found to be extremely consistent,
both as a function of run number (matching the systematic variations relative to slow control, indicated by the lack of time-varying structure in Figure \ref{fig:12mie}) and in value (since the
two determinations are well within the estimated uncertainties of each other). Since the result for $R_{2\gamma}$ requires only a relative luminosity measurement, rather than an absolute
integrated luminosity for each species, the result of the MIE method was chosen as the species-relative luminosity normalization for the results shown, which permits the use of the 12{$^\circ$} system
result as an additional kinematic point for the measurement of $R_{2\gamma}$. Since the 12{$^\circ$} elastic {$e^\pm p$} measurement exhibits systematic uncertainties that are largely independent from the
reconstruction of elastic events in the main spectrometer, this point provides a valuable cross check on the measurement $R_{2\gamma}$ in the rest of the acceptance as well as a precise and valuable
check on the expectation that TPE effects go to zero at high-$\epsilon$. The value of $R_{2\gamma}$ at $\theta\approx 12^\circ$ is discussed in Section \ref{sec:12TPE}.
For possible future analyses of the elastic proton form factors and {$e^\pm p$} cross section, it will likely be necessary to use the measurement of the absolute luminosities for each species from the 12{$^\circ$}
system, since the MIE absolute luminosity analysis is subject to large uncertainties due to event selection cuts. The strong consistency in the relative measurement between the two methods, as well
as the agreement between the 12{$^\circ$} absolute estimate and the simulation at forward angles (Section \ref{sec:datasim}), provides a high level of confidence in the absolute estimate for future analyses
of the OLYMPUS data.
\section{$R_{2\gamma}$ Results as a Function of $\epsilon$ and $Q^2$}
\label{sec:thegoddamnresults}
Figures \ref{fig:ratq2}, \ref{fig:ratq2bb}, \ref{fig:rateps}, and \ref{fig:ratepsbb} present the results for {$R_{2\gamma}$} as found by the analysis
described in Chapter \ref{Chap6} as a function of $Q^2$ and $\epsilon$ in two different binnings for each. In each figure, the point at $\theta=12^\circ$
($Q^2=0.165$ GeV$^2$, $\epsilon=0.98$) is shown with its full systematic plus statistical uncertainty estimate (Section \ref{sec:12TPE}) while the
error bars on the other points are statistical only. As discussed in Section \ref{sec:mainsys}, the total systematic uncertainty for each point for these
results may be considered on the order of 1.5\%, although for many bins this is likely an overestimate. The magnitude of the total uncertainty from the MIE
luminosity determination, which would apply as a common shift to all data points simultaneously, is represented by the gray box above the horizontal axis of each figure.
Additionally, each figure presents the phenomenological \cite{BerFFPhysRevC.90.015206,Chen:2007ac, Guttmann:2010au} and theoretical
\cite{Blunden:2003sp,Chen:2004tw,Afanasev:2005mp,Blunden:2005ew, Kondratyuk:2005kk, Borisyuk:2006fh,TomasiGustafsson:2009pw}
models initially presented with the OLYMPUS projections in Figure \ref{fig:projections}. The results and models were computed using the Maximon and Tjon prescription for
radiative corrections \cite{MaximonPhysRevC.62.054320} and the methodology described in Section \ref{sec:radgen}. Tables \ref{tab:finebins} and \ref{tab:widebins} list
the values presented by the plots in each binning, as well as the average $Q^2$ and $\epsilon$ for each bin.
As previously noted, these results are preliminary and are subject to further analysis, particularly relating to the final estimate of the systematic
uncertainty in each bin. As discussed in Section \ref{sec:indana}, it is highly likely that the final OLYMPUS results will strongly resemble the results
presented here, but at the time of writing the finalization of the OLYMPUS results for publication was still underway.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/ratq2_withmieunc_mark2.pdf}}
\caption[Result for $R_{2\gamma}$ as a function of $Q^2$ (fine bins)]{Preliminary results for {$R_{2\gamma}$} from the analysis presented in Chapter \ref{Chap6}, binned finely
as a function of $Q^2$. The error bars on the points represent the statistical uncertainty of the analysis, with the exception of the point at $Q^2=0.165$ GeV$^2$
where the error bar represents the total statistical plus systematic uncertainty. The gray box represents the total (statistical plus systematic) uncertainty
from the MIE luminosity analysis, which applies as a constant normalization shift to all data points. The data points in this plot are summarized in Table \ref{tab:finebins}.
(Phenomenological models: \cite{BerFFPhysRevC.90.015206,Chen:2007ac, Guttmann:2010au}, theoretical models:
\cite{Blunden:2003sp,Chen:2004tw,Afanasev:2005mp,Blunden:2005ew, Kondratyuk:2005kk, Borisyuk:2006fh,TomasiGustafsson:2009pw})}
\label{fig:ratq2}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/ratq2_withmieunc_tdrbins_mark2.pdf}}
\caption[Result for $R_{2\gamma}$ as a function of $Q^2$ (wide bins)]{Preliminary results for {$R_{2\gamma}$} from the analysis presented in Chapter \ref{Chap6}, binned coarsely
as a function of $Q^2$, approximately matching the bins represented by the data points in the projections of Figure \ref{fig:projections}.
The error bars on the points represent the statistical uncertainty of the analysis, with the exception of the point at $Q^2=0.165$ GeV$^2$
where the error bar represents the total statistical plus systematic uncertainty. The gray box represents the total (statistical plus systematic) uncertainty
from the MIE luminosity analysis, which applies as a constant normalization shift to all data points. The data points in this plot are summarized in Table \ref{tab:widebins}.
(Phenomenological models: \cite{BerFFPhysRevC.90.015206,Chen:2007ac, Guttmann:2010au}, theoretical models:
\cite{Blunden:2003sp,Chen:2004tw,Afanasev:2005mp,Blunden:2005ew, Kondratyuk:2005kk, Borisyuk:2006fh,TomasiGustafsson:2009pw})}
\label{fig:ratq2bb}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/rateps_withmieunc_mark2.pdf}}
\caption[Result for $R_{2\gamma}$ as a function of $\epsilon$ (fine bins)]{Preliminary results for {$R_{2\gamma}$} from the analysis presented in Chapter \ref{Chap6}, binned finely
as a function of $\epsilon$. The error bars on the points represent the statistical uncertainty of the analysis, with the exception of the point at $\epsilon=0.98$
where the error bar represents the total statistical plus systematic uncertainty. The gray box represents the total (statistical plus systematic) uncertainty
from the MIE luminosity analysis, which applies as a constant normalization shift to all data points. The data points in this plot are summarized in Table \ref{tab:finebins}.
(Phenomenological models: \cite{BerFFPhysRevC.90.015206,Chen:2007ac, Guttmann:2010au}, theoretical models:
\cite{Blunden:2003sp,Chen:2004tw,Afanasev:2005mp,Blunden:2005ew, Kondratyuk:2005kk, Borisyuk:2006fh,TomasiGustafsson:2009pw})}
\label{fig:rateps}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/rateps_withmieunc_tdrbins_mark2.pdf}}
\caption[Result for $R_{2\gamma}$ as a function of $\epsilon$ (wide bins)]{Preliminary results for {$R_{2\gamma}$} from the analysis presented in Chapter \ref{Chap6}, binned coarsely
as a function of $\epsilon$, approximately matching the bins represented by the data points in the projections of Figure \ref{fig:projections}.
The error bars on the points represent the statistical uncertainty of the analysis, with the exception of the point at $\epsilon=0.98$
where the error bar represents the total statistical plus systematic uncertainty. The gray box represents the total (statistical plus systematic) uncertainty
from the MIE luminosity analysis, which applies as a constant normalization shift to all data points. The data points in this plot are summarized in Table \ref{tab:widebins}.
(Phenomenological models: \cite{BerFFPhysRevC.90.015206,Chen:2007ac, Guttmann:2010au}, theoretical models:
\cite{Blunden:2003sp,Chen:2004tw,Afanasev:2005mp,Blunden:2005ew, Kondratyuk:2005kk, Borisyuk:2006fh,TomasiGustafsson:2009pw})}
\label{fig:ratepsbb}
\end{figure}
\begin{table}[thb!]
\begin{center}
\begin{tabular}{ccccc}
\hline
$\left<Q^2\right>$ (GeV$^2$) & $\left<\epsilon\right>$ & {$R_{2\gamma}$} & Stat. Uncertainty & Syst. Uncertainty \\
\hline\hline
0.165 & 0.978 & 0.9975 & $\pm0.0003$ & $\pm0.0046$ \\
\hline
0.624 & 0.898 & 0.9932 & $\pm0.0018$ & $\sim\pm0.0150$ \\
\hline
0.674 & 0.887 & 0.9962 & $\pm0.0020$ & $\sim\pm0.0150$ \\
\hline
0.724 & 0.876 & 0.9970 & $\pm0.0023$ & $\sim\pm0.0150$ \\
\hline
0.774 & 0.865 & 0.9946 & $\pm0.0025$ & $\sim\pm0.0150$ \\
\hline
0.824 & 0.853 & 0.9977 & $\pm0.0028$ & $\sim\pm0.0150$ \\
\hline
0.874 & 0.841 & 1.0016 & $\pm0.0032$ & $\sim\pm0.0150$ \\
\hline
0.924 & 0.829 & 1.0007 & $\pm0.0036$ & $\sim\pm0.0150$ \\
\hline
0.974 & 0.816 & 1.0052 & $\pm0.0040$ & $\sim\pm0.0150$ \\
\hline
1.024 & 0.803 & 1.0044 & $\pm0.0044$ & $\sim\pm0.0150$ \\
\hline
1.074 & 0.789 & 1.0029 & $\pm0.0049$ & $\sim\pm0.0150$ \\
\hline
1.124 & 0.775 & 1.0035 & $\pm0.0055$ & $\sim\pm0.0150$ \\
\hline
1.174 & 0.761 & 1.0120 & $\pm0.0061$ & $\sim\pm0.0150$ \\
\hline
1.246 & 0.739 & 1.0098 & $\pm0.0050$ & $\sim\pm0.0150$ \\
\hline
1.347 & 0.708 & 1.0081 & $\pm0.0062$ & $\sim\pm0.0150$ \\
\hline
1.447 & 0.676 & 1.0031 & $\pm0.0074$ & $\sim\pm0.0150$ \\
\hline
1.568 & 0.635 & 1.0209 & $\pm0.0079$ & $\sim\pm0.0150$ \\
\hline
1.718 & 0.581 & 1.0254 & $\pm0.0106$ & $\sim\pm0.0150$ \\
\hline
1.868 & 0.524 & 1.0103 & $\pm0.0135$ & $\sim\pm0.0150$ \\
\hline
2.038 & 0.456 & 1.0203 & $\pm0.0177$ & $\sim\pm0.0150$ \\
\hline
\end{tabular}
\end{center}
\caption[{$R_{2\gamma}$} results in the finer binning (Figures \ref{fig:ratq2} and \ref{fig:rateps})]{Values of the preliminary {$R_{2\gamma}$} results for the finer binning
presented in Figures \ref{fig:ratq2} and \ref{fig:rateps}, including the mean $Q^2$ and $\epsilon$ of all events in each bin.}
\label{tab:finebins}
\end{table}
\begin{table}[thb!]
\begin{center}
\begin{tabular}{ccccc}
\hline
$\left<Q^2\right>$ (GeV$^2$) & $\left<\epsilon\right>$ & {$R_{2\gamma}$} & Stat. Uncertainty & Syst. Uncertainty \\
\hline\hline
0.165 & 0.978 & 0.9975 & $\pm0.0003$ & $\pm0.0046$ \\
\hline
0.666 & 0.889 & 0.9951 & $\pm0.0012$ & $\sim\pm0.0150$ \\
\hline
0.879 & 0.840 & 0.9997 & $\pm0.0013$ & $\sim\pm0.0150$ \\
\hline
1.220 & 0.747 & 1.0083 & $\pm0.0028$ & $\sim\pm0.0150$ \\
\hline
1.534 & 0.647 & 1.0135 & $\pm0.0052$ & $\sim\pm0.0150$ \\
\hline
1.809 & 0.547 & 1.0179 & $\pm0.0097$ & $\sim\pm0.0150$ \\
\hline
2.039 & 0.456 & 1.0203 & $\pm0.0177$ & $\sim\pm0.0150$ \\
\hline
2.238 & 0.372 & 0.9813 & $\pm0.0184$ & $\sim\pm0.0150$ \\
\hline
\end{tabular}
\end{center}
\caption[{$R_{2\gamma}$} results in the wider binning (Figures \ref{fig:ratq2bb} and \ref{fig:ratepsbb})]{Values of the preliminary {$R_{2\gamma}$} results for the wider binning
presented in Figures \ref{fig:ratq2bb} and \ref{fig:ratepsbb}, including the mean $Q^2$ and $\epsilon$ of all events in each bin.}
\label{tab:widebins}
\end{table}
\subsection{The {$\sigma_{e^+p}/\sigma_{e^-p}$} Ratio at $\epsilon\approx 0.98$ ($\theta\approx 12^\circ$)}
\label{sec:12TPE}
While the 12{$^\circ$} measurement of {$\sigma_{e^+p}/\sigma_{e^-p}$} may be taken as a normalization point for the relative luminosity of electron and positron
data collected by the experiment or used in conjunction with the MIE measurement to provide a combined high-precision estimate,
it may also be taken as an additional measurement of $R_{2\gamma}$ at $\epsilon \approx 0.98$, $Q^2 \approx 0.165$ GeV$^2$ if the MIE
is used as an independent measure of the relative luminosity. This is especially valuable due to the fact that the MIE measurement, while
depending in part on elastic {$e^\pm p$} scattering, was at a much more forward angle ($\epsilon \approx 0.99975$, $Q^2 \approx 0.002$ GeV$^2$)
where the expectation of $R_{2\gamma}=1$ is extremely strong (physics considerations demand that $R_{2\gamma}=1$ at $\epsilon=1$). Taking this
approach and utilizing the results of Sections \ref{sec:12res} and \ref{sec:mieres}, the result is:
\begin{equation}
R_{2\gamma}\left(\epsilon = 0.98,Q^2 = 0.165\:\text{GeV}^2 \right) = 0.9975 \pm 0.0010\:(\text{stat.}) \pm 0.0053\:(\text{syst.}),
\end{equation}
where the uncertainties are the result of adding the uncertainties of the individual measurements in quadrature, since the uncertainties of
the two measurements are essentially completely independent. The measurement is quite valuable for several reasons:
\begin{enumerate}
\item it provides a consistency check on the overall scale of $R_{2\gamma}$ as measured in the main spectrometer by providing
a measurement from an independent system with largely independent systematic uncertainties,
\item it constrains the behavior of $R_{2\gamma}$ at high $\epsilon$,
\item it provides some discriminating power between different models for $R_{2\gamma}$ in the high-$\epsilon$ region, and
\item it offers insight into the offset that should be applied to the luminosity normalization points used for the VEPP-3 {$R_{2\gamma}$} results so
as to allow direct comparison of those data to models and other experiments.
\end{enumerate}
This measurement constitutes a new benchmark for {$\sigma_{e^+p}/\sigma_{e^-p}$} in the forward scattering ($\theta\approx 12^\circ$) region, providing a previously
unachieved level of total uncertainty of $\sim$0.54\%.
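The quoted total follows directly from combining the statistical and systematic components of the result in quadrature. As a simple arithmetic check in Python, using only the numbers quoted in the equation above:
\begin{verbatim}
import math

stat, syst, central = 0.0010, 0.0053, 0.9975

total = math.sqrt(stat ** 2 + syst ** 2)
print(f"total uncertainty = {total:.4f}  (~{100.0 * total / central:.2f}% of R)")
\end{verbatim}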
\section{Comparison of Independent Analyses}
\label{sec:indana}
At the time of writing, three independent $R_{2\gamma}$ analyses of the OLYMPUS data were available: the two analyses described in References \cite{schmidt} and
\cite{russell} and the analysis described in this work. Each analysis made use of the same
sets of tracked and simulated data, but applied different approaches to the analysis. Differences in the analyses included the approaches to particle identification,
application of cuts in different kinematic variables, varying stringency in shared kinematic variable cuts, different background subtraction methods and models, as well
as different orderings of analysis components.
Figures \ref{fig:comp} and \ref{fig:compbb} show the results for {$R_{2\gamma}$} from the three analyses plotted together as a function of $Q^2$ in the same binning schemes as Figures
\ref{fig:ratq2} and \ref{fig:ratq2bb} respectively. The three analyses agree extremely well in terms of the general trend as a function of $Q^2$, in that it rises monotonically from
a few tenths of a percent below unity at $Q^2=0.6$ GeV$^2$ to 2-3\% above one at the upper end of the $Q^2$ acceptance. The point-to-point variations are typically on the order of a
few tenths of a percent, which is consistent with the systematic uncertainty associated with choices of kinematic cuts and background subtraction models discussed in Section \ref{sec:mainsys}.
Certain deviations between the analyses that are larger (such as the difference between the Henderson analysis and the other two in the $Q^2=1.725$ GeV$^2$ bin) are likely associated with
localized increases in systematic uncertainties arising from analysis choices, such as those shown for the analysis of this work in Figure \ref{fig:cutsys}. At the time of writing, several
additional analyses were underway to provide further cross-checks of the final results prior to publication, but in general the analyses show strong consistency and provide a reasonable
level of confidence in the general features of the OLYMPUS {$R_{2\gamma}$} result.
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/mit3_R2gamma_mark2.pdf}}
\caption[Comparison of three $R_{2\gamma}$ analyses (fine bins)]{Results for $R_{2\gamma}$ as a function of $Q^2$ in the binning presented in Figure \ref{fig:ratq2} as found by three different
analyses of the OLYMPUS dataset: Schmidt \cite{schmidt}, Russell \cite{russell}, and Henderson (this work). Each analysis used the same data and simulation but
different approaches to particle identification, elastic event selection, background subtraction, etc. In general, the statistical uncertainties shown for the separate analyses are highly, but
not entirely, correlated due to differences in the total number of events accepted by each analysis.}
\label{fig:comp}
\end{figure}
\begin{figure}[thb!]
\centerline{\includegraphics[width=1.15\textwidth]{figures/mit3_R2gamma_bigbins_mark2.pdf}}
\caption[Comparison of three $R_{2\gamma}$ analyses (wide bins)]{Results for $R_{2\gamma}$ as a function of $Q^2$ in the binning presented in Figure \ref{fig:ratq2bb} as found by three different
analyses of the OLYMPUS dataset: Schmidt \cite{schmidt}, Russell \cite{russell}, and Henderson (this work). Each analysis used the same data and simulation but
different approaches to particle identification, elastic event selection, background subtraction, etc. In general, the statistical uncertainties shown for the separate analyses are highly, but
not entirely, correlated due to differences in the total number of events accepted by each analysis.}
\label{fig:compbb}
\end{figure}
\section{Implications for Two-Photon Exchange and \\ the Proton Form Factors}
The results of the three TPE experiments (OLYMPUS, VEPP-3 \cite{vepp3PhysRevLett.114.062005}, and CLAS \cite{PhysRevLett.114.062003,ass}) consistently indicate the presence of a contribution
from TPE to elastic {$e^\pm p$} scattering, as each shows a generally increasing trend in the value of {$\sigma_{e^+p}/\sigma_{e^-p}$} with increasing $Q^2$/decreasing $\epsilon$. While the experiments differ in kinematic coverage,
each provides results that are consistent with an effect of $\lesssim$3\% at $\epsilon\approx 0.4$. The general magnitude of this effect is consistent with a number of the theoretical
and phenomenological models (as shown in the OLYMPUS results figures), while models predicting larger effects (such as the Yang phenomenological model \cite{Chen:2007ac}) are well excluded. The addition of the
high-statistics OLYMPUS data begins to place stronger constraints on the more similar models, as it provides the first indications of the shape of the evolution of {$R_{2\gamma}$} as a function
of $Q^2$ or $\epsilon$. In particular, the OLYMPUS data strongly suggests that {$R_{2\gamma}$} indeed passes below unity at $\epsilon\gtrsim 0.85$ and $Q^2\lesssim 0.6$ GeV$^2$ as initially suggested by the CLAS
results (see Figure \ref{fig:crap}).
While the OLYMPUS data appear to agree well with the Bernauer phenomenological model \cite{BerFFPhysRevC.90.015206}, which would suggest that the form factor discrepancy may be explained by
TPE contributions to elastic {$e^\pm p$} scattering, the assumptions underlying the model as well as the relative consistency of the combined experimental data with the general trends and magnitudes
of other models remain a question that must be examined carefully by the hadronic physics community prior to drawing any definitive conclusions. Since most theoretical calculations do not predict
$R_{2\gamma}<1$ at high $\epsilon$, as indicated by the experimental data, this behavior in particular will need to be considered as models and calculations for the proton form factors are
assessed. As noted, however, the three experiments
provide a clear indication of the presence of TPE in {$e^\pm p$} scattering, and the strength of the OLYMPUS data should provide strong guidance in discriminating between proton form factor and TPE models
going forward.
\section{Outlook}
OLYMPUS has measured the {$\sigma_{e^+p}/\sigma_{e^-p}$} ratio to high statistical precision over the kinematic range $(0.4 \leq \epsilon \leq 0.9)$, $(0.6 \leq Q^2 \leq 2.2)$ GeV$^2$, with systematic
uncertainties on the order of 1-2\%. The OLYMPUS data indicates that {$\sigma_{e^+p}/\sigma_{e^-p}$} is below unity at high-$\epsilon$ and then rises monotonically by several percent across the range of the data.
The systematic uncertainties are currently dominated by knowledge of the drift chamber acceptance and efficiency, and will likely be reduced as further studies are completed prior
to the publication of the final {$R_{2\gamma}$} results. Additionally, the OLYMPUS 12{$^\circ$} telescope tracking system, in combination with the multi-interaction event luminosity determination, has provided
an extremely precise measurement (0.56\% total uncertainty) of {$R_{2\gamma}$} at high $\epsilon$/small $\theta$. This result provides a valuable normalization point for the VEPP-3 TPE
experiment \cite{vepp3PhysRevLett.114.062005}, offering guidance on the absolute scale of their luminosity normalization point at similar kinematics.
While the three TPE experiments indicate that there is a significant contribution from TPE to elastic {$e^\pm p$} scattering, whether the effect is large enough to fully account for the
$\mu_pG_E/G_M$ ratio discrepancy will likely be a topic of considerable debate in the hadronic physics community. In particular, the validity of different models in the kinematic
ranges relevant to the experimental data will need to be carefully considered before drawing definitive conclusions. While the measurement of a very large TPE effect ($\sim$6\% at
$\epsilon=0.4$) would have likely provided a complete and convincing resolution of the discrepancy, there may be value in an additional measurement of {$R_{2\gamma}$} at higher $Q^2$ (where the form factor
ratio discrepancy is more significant) to provide final confirmation of the trends suggested by the TPE experiments below $Q^2=2$ GeV$^2$. Such an experiment, however, would be extremely challenging
due to the reduced elastic cross section at higher $Q^2$ in combination with increased cross sections for background processes (which differ between {$e^- p$} and {$e^+ p$} scattering).
Future OLYMPUS publications will include the {$R_{2\gamma}$} result (after final analyses of the systematic uncertainties are conducted) and likely also limited results on the absolute cross section and
proton elastic form factors, as discussed in Section \ref{sec:datasim}. The consistency between the analyses presented in Reference \cite{schmidt}, Reference \cite{russell}, and this work
suggests that the final result from OLYMPUS will strongly resemble the preliminary results shown, but further analyses will be conducted to provide greater confidence in the OLYMPUS results.
The OLYMPUS results provide the most statistically significant measurement of {$R_{2\gamma}$} over a wide kinematic range, including extending measurements to higher $Q^2$ values where the form factor
discrepancy is more significant. The data will be valuable in distinguishing between different phenomenological and theoretical TPE models, providing insight into the structure of the proton
and the possible causes of the form factor discrepancy.
\section*{Acknowledgments}
\begin{singlespacing}
Foremost, OLYMPUS was a decade-long project (from conception to the final analyses) involving a collaboration of 60+ people from 6 countries who made
contributions to the various phases of the experiment that are far too numerous to list here. In particular, of course, I am extremely grateful to my
advisor Professor Richard Milner. It has been a true privilege to learn from Richard's expertise over the past six years, both in terms of physics and in management of an experiment.
Thank you also to my other committee members Professors Robert Redwine and Robert Jaffe, who both provided excellent guidance in the construction of this thesis.
The staff, technicians, and engineers of the MIT-Bates Research and Engineering Center were not only instrumental to the design and construction of OLYMPUS,
but are also great people and were an absolute pleasure to work with from my first day at MIT. While I risk
missing somebody by listing names, I learned much of what I know about putting together an experiment from Jim Kelsey, Chris Vidal, Peter Binns, Brian O'Rourke,
and Joe Dodge, who all made several treks to Hamburg to lend their expertise to the construction of the experiment.
Numerous technicians and engineers at DESY worked on the construction of the experiment as well, and while I never worked with any of them closely,
their important contributions must be acknowledged.
Special recognition goes to the DESY accelerator team, led by Frank Brinker, who delivered consistent and well-maintained electron and positron beams to the experiment
throughout our data-taking periods (and would greet the off-going OLYMPUS night shift crew with a hearty ``moin moin'' and handshake at 7 AM as they arrived for the day).
I would also like to thank the managers of the MIT Engaging computing cluster, who provided a, quite frankly, miraculous resource for the analysis of this experiment.
Although I have never met them in person, they were always quick to resolve any issues we had in using the cluster and patient in putting up with my excessive disk usage.
Looking further into the past, I would like to thank two teachers, in particular, from my junior high and high school years, Christine Moore and Terry Crane,
who encouraged me to think seriously about physics and helped open the door to a much larger academic world.
Among members of the OLYMPUS collaboration, I would like to thank several in particular. First, thanks to Michael Kohl, Uwe Schneekloth, and Douglas Hasell
for their roles in making OLYMPUS happen and keeping things on track (and especially to the latter for helping take care of us MIT students as we traveled back and forth
from Hamburg and pushed through the analysis).
While not on the author list, Joanne Gregory was an immense help in all aspects of coordinating travel to and from Germany, finding
places to stay while there, and generally navigating LNS and MIT. Her expertise in managing the hadronic physics group will most certainly be missed.
Several of the OLYMPUS post-docs were monumentally critical in making OLYMPUS happen: Jan Bernauer, Alexander Winnebeck, and J\"{u}rgen Diefenbach. Each of them
performed apparent miracles in finding last-minute solutions to hardware and software problems, and given another six years in grad school I don't think I could learn everything
they would have had to teach me about experimental physics.
A true place of honor among my acknowledgments belongs to the other OLYMPUS graduate students: Axel Schmidt, Colton O'Connor, Rebecca Russell, and Lauren Ice.
Each of them played a critical role in constructing the OLYMPUS results, taking on responsibilities above a grad student's pay-grade and finding creative, excellent
solutions to many of the challenges we faced.
In addition to their enormous contributions to the experiment, they were great friends who made the whole experience of living in Germany and the grind of building,
operating, and conducting the analysis for the experiment much better.
My parents, Patricia and Scott, are certainly responsible for much of this accomplishment, as they have always been unwaveringly supportive of my education
and my goals, and instilled in me a strong desire to learn and fight for answers. Thank you, Mom and Dad.
More than anybody else, my wife Hilary has helped me through grad school (even marrying me in the thick of it). Through the time spent apart in Hamburg, the frustration
when the analysis was stalled, and the crush of work that accompanied the last several months of the project, she has always been loving and supportive, all while going
through law school and the bar exam herself. Thank you, Hil.
\end{singlespacing}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,575 |
VIC Vaxx Mandates To Be Scrapped Within Weeks
Facebook Post By Jamie McIntyre
So that's the reason, is it – not because discrimination is illegal and unconstitutional, and the fraud of Covid is only pushed by those such as yourselves who take money to lie (and fail to disclose your conflict of interest) that we have a pandemic, and to pressure innocents to be jabbed by an experimental drug disguised as a quackzinne with the side effect of death.
Australian National Review
Independent media that investigates Covid fraudsters
Www.anrnews.com
Victorian Government Considering Removing Vaxxination Mandate Three Months Earlier Due to 'High-Profile' Incidents
The state government is looking to scrap vaccine mandates due to threats and abuse towards workers in the retail industry, including a staff member who was allegedly pushed down an escalator after asking an unmasked man for his vaccination certificate.
The controversial coronavirus vaccination mandates in Victoria could be scrapped in the New Year as concerns rise over abuse towards retail workers who have to enforce the rules on shoppers.
The state government is looking at removing the directive in all but high-risk settings as early as January, according to the Herald Sun.
In October, Premier Daniel Andrews suggested the mandate could be in place till April or even longer due to a spike in COVID-19 cases at the time.
Government officials said a series of incidents of workers being abused for enforcing the jab mandate has led them to reconsider their approach.
"The high rate of vaccinations, the passing of the pandemic bills which sidelines the chief health officers, and the high-profile incidents of violence have led to a change of thinking," a source told the publication.
"There will still be mandates in place in high-risk settings even after any change."
Vaccination mandates in Victoria could be scrapped by January after a number of incidents against workers in the retail industry.
In one case, a Melbourne bookstore has been forced to hire private security guards, at a cost of $4,000 a week, over attacks on staff members.
Victoria Police are investigating an incident where a Dymocks employee was allegedly pushed down an escalator after he reportedly asked a customer to prove his vaccination status and to check-in to the store.
CCTV footage showed the staff member trying to stop the man, wearing a distinctive rainbow hoodie, from going down the escalator to the book store before he was pushed, his head hitting a step.
The worker is seen clutching his head as he lay motionless on the moving escalator.
A bystander pushed the emergency stop button as other Good Samaritans rushed to his aid. It's believed he was treated for a mild concussion.
The man who pushed the staff member is seen going back up the escalator seconds later.
A man pushed a Dymocks worker in Melbourne after refusing to check in to the store and show his vaccination status.
The staff member is believed to have suffered a mild concussion after the assault.
The owner of the store said it was the third incident linked to COVID-19 requirements at the store last Friday.
Other incidents included an elderly woman slapping a female employee and a middle-aged businessman shoving past a male staff member at the QR check-in area.
"We've owned this book shop for 20 years and we've never had a physical assault against any staff member, but we had three in one day," Melissa Traderso told ABC Radio Melbourne.
"Just have some respect for these poor, innocent people who are just doing their job."
Unvaccinated shoppers were banned from entering non-essential venues on November 26 despite previously being allowed.
There are concerns from health officials that some Victorians are reluctant to take the booster shot.
The government will reportedly seek advice from the Australian Technical Advisory Group on Immunisation regarding the third jab.
Mr Andrews said this week he was hoping to remove mandates "as quickly as possible" with the state already achieving high vaccination rates.
"But they're based on health advice and we will continue to follow health advice," he said during a press conference.
"We want to get back to the settings as they had been foreshadowed and the changes that had been foreshadowed as quickly as possible."
Victoria recorded nine deaths related to coronavirus with 1,365 cases in the 24 hours to midnight on Saturday.
Just over 93 per cent of Victorians have received one dose of a coronavirus vaccine and 91.1 per cent are fully vaxxinated.
Here's what others had to say:
John Harris
when I can walk into a supermarket, then a crowed Australia Post, then leave and enter a chemist full of people, then walk out of there into a shop next door like a JB HIFI, Supercheap auto etc and be stopped at the door and asked for my 'papers please' and told I can't come in you know this has nothing to do with the virus, it is all about punishing people, I have reported a few businesses to the 'The Global Economic Forum' hopefully the email they were sent actually had some effect.
Sharon Michael
We do not have loud enough voices in Victoria to fight for us. Everyone seems to be in Queensland! Come on Victorians!
Chris Hanks
What did the government expect when they make the people do their illegal dirty work
Bronte Banks
To be honest this is what I feel like doing to anyone who believes all this crap.
More of leftist Zuckerberg's censoring, he finds the term, 'gate to tyranny' hate speech / false information or what ever, had to put a line through this so it doesn't trigger his automatic 'hate speech sensors…….Or was it that I said Each way Albo will be worse?
Brian Gordon
If they r trying to kill everyone why would this bother them?
Vagg Bakalakis
We gotta be cruel to be kind.
Dani Harris
This is the exact reason why anarchy exists. And mass non compliance.
Original Source: https://www.facebook.com/100001092086344/posts/4683012675078422/
Bitcoin 2.0 and Our World Coin Merger On Target Says, Key Leaders In Both Projects, And Some Think It May Led A Surge In Price
Bitcoin Dips Below $40K As 'Death Cross' Looms on Price Charts | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 500 |
Brachaspis nivalis is an alpine short-horned grasshopper endemic to the South Island of New Zealand. Brachaspis nivalis is brachypterous and flightless, and therefore travels by hopping. It basks during the day, so it needs open habitat.
Taxonomy/history
The species was first reported and described by Captain Frederick Wollaston Hutton from the Mount Cook area and Marlborough in 1897, but was placed in the genus Pezotettix, with some synonyms, at that time. In 1898, Hutton proposed a new genus, "Brachaspis" (named for its short and broad sternal shield), and included some additional collections in B. nivalis. In 1967, Bigelow revised Brachaspis and redefined three species. The genus is monophyletic, nested within the New Zealand alpine grasshopper group.
Brachaspis nivalis includes populations from Canterbury and Kaikoura and is morphologically distinguished from the other species (Brachaspis collinus & Brachaspis robustus). Although hybridization between B. nivalis and B. collinus is suggested by ITS DNA sequence data in Mount Lyford where these species are sympatric, they are still considered different phylogenetic species on the basis of morphology, mtDNA (COI) sequence and geographical distribution. Furthermore, the genetic sequence data (COI & ITS) suggests that B. nivalis comprises northern (St Arnaud (SA)) and southern (Fox Peak (FP)) subgroups. Both mtDNA and nuclear markers of southern B. nivalis and B. robustus are very similar with evidence of hybridization between these two species.
Type information
Pezotettix nivalis; unspecified primary type of species Brachaspis nivalis (Hutton, 1898). Hutton, F.W. (1897). The Grasshoppers and Locusts of New Zealand and the Kermadec Islands. Proceedings and Transactions of the New Zealand Institute 30: 135–150.
Locality: New Zealand, New Zealand South I: Canterbury, Mt. Cook region; .
Type specimen: Female; G. E. Mannering; the type is deposited in the Canterbury Museum, Christchurch.
Habitat and distribution
Brachaspis nivalis is common in rocky montane areas with scattered plants (unlike B. collinus which are mostly found in tussock grass in the Nelson region). The elevational range of B. nivalis is between 600 and 2000 metres above mean sea level. The distribution of B. nivalis is widespread at high elevation in South Island New Zealand from Marlborough, Canterbury and north Otago.
Morphology
Brachaspis nivalis is polymorphic and has cryptic coloration resembling the surrounding rocky habitat. The color morphs can be either grey or grey mottled brown. Individuals with scarlet, purple or indigo-black flash-display of the hind legs have been collected at various sites (unlike the reddish brown hind legs of B. collinus). Males are usually smaller than females. The known body length of males ranges between 15 and 24 mm, and that of females between 16 and 40 mm. The hind femur of males is 8.5-12.5 mm long, and that of females 11.5-17 mm. Generally, the body size of B. nivalis is smaller than that of B. robustus. Body size of adult B. nivalis increases with elevation: the largest individuals are found above 1200 m asl and the smallest on stream edges at low elevation.
Diet
Brachaspis nivalis shows multiple and opportunistic feeding behavior. It is a herbivore and prefers to eat the floral parts of plants. It was observed to feed on plants such as Hebe spp., Epilobium spp., Celmisia spp., Poa spp., Wahlenbergia albomarginata, Anisotome aromatica, Chionochloa pallens, Coprosma pumila, Pittosporum crassicaule, ferns such as Austroblechnum penna-marina, mosses such as Polytrichum juniperinum and unidentified lichens. In addition, ingestion of arthropods was found in adult B. nivalis, but it may be upon opportune contact.
Life history
Males undergo six instars and females undergo seven instars to become adults. First and second instars are suggested to be abundant during January and February. The mating activity of B. nivalis extends throughout the life span of adults. The maximum longevity of male adults is 21.8 months, and that of females 26.1 months.
Reproduction
Brachaspis nivalis shows multiple mating with a different non-bonded mate on each occasion. When a male tries to mate, it often aggressively mounts a resisting female. The male firmly grabs the female to prevent detachment by sudden disturbance. Mating pairs have been observed from spring to autumn (September - April). Gravid females were also observed from September to May except April, and numbers were highest in January and February. Females show multiple oviposition. Each egg pod may contain 20-30 eggs and first instars are observed in late December or early January.
Conservation
Brachaspis nivalis was assessed as Not Threatened (NT) under the NZTCS in 2022. This status has not changed from prior assessments in 2014 and 2010. However, if further population genetic research suggests that the small, low-elevation forms are distinct from their montane relatives, the conservation status of the low-elevation forms would have to be reconsidered, as they are threatened by flooding events, land development, weed invasion and introduced predators. The population abundance of B. nivalis correlates with soil temperature. Therefore, the increase in mean temperature due to global warming may cause B. nivalis to lose suitable habitat in the future.
References
Acrididae of New Zealand
Endemic fauna of New Zealand
Acrididae
Endemic insects of New Zealand | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,443 |
Q: Custom naming strategy with MapStruct SPI I'm trying to get MapStruct working on beans with a different setter naming convention. Some of the beans in the project have setters named like withValue(string val).
Based on the documentation, it is possible to accomplish this by implementing AccessorNamingStrategy in the project and configuring the SPI.
Based on this I created my own NamingStrategy and created the following file in my project:
/META-INF/services/org.mapstruct.ap.spi.AccessorNamingStrategy
with the fully qualified name of my custom implementation in it.
But I couldn't seem to get the SPI working for my custom naming strategy.
Digging into Options.java, I found that the property "mapstruct.alwaysGenerateServicesFile" needs to be set to true.
But the annotation processor in IntelliJ seems to reject this property as not recognized by any of the annotation processors.
I also see that this property "mapstruct.alwaysGenerateServicesFile" is not documented in http://mapstruct.org/documentation/1.1/reference/html/index.html#configuration-options
Is this feature still supported in MapStruct? Did anybody get a custom naming strategy working in their project?
A: I actually got it working. You need to package /META-INF/services/org.mapstruct.ap.spi.AccessorNamingStrategy and the CustomNamingStrategy in a separate jar and include it in the main project.
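For reference, the strategy class itself can look roughly like the sketch below. The class and package names are made up, and it is an untested starting point rather than a drop-in solution; it extends DefaultAccessorNamingStrategy from the mapstruct-processor artifact so that withXxx(...) methods are treated as setters for the property "xxx":

package com.example.mapstruct.spi;

import javax.lang.model.element.ExecutableElement;
import org.mapstruct.ap.spi.DefaultAccessorNamingStrategy;

// Illustrative sketch: treats withXxx(...) methods as setters for property "xxx".
public class WithPrefixAccessorNamingStrategy extends DefaultAccessorNamingStrategy {

    @Override
    public boolean isSetterMethod(ExecutableElement method) {
        String name = method.getSimpleName().toString();
        // accept the usual setXxx(...) setters plus the withXxx(...) convention
        return super.isSetterMethod(method)
                || (name.startsWith("with") && name.length() > 4);
    }

    @Override
    public String getPropertyName(ExecutableElement getterOrSetterMethod) {
        String name = getterOrSetterMethod.getSimpleName().toString();
        if (name.startsWith("with") && name.length() > 4) {
            // withValue -> value
            return Character.toLowerCase(name.charAt(4)) + name.substring(5);
        }
        return super.getPropertyName(getterOrSetterMethod);
    }
}

The jar containing this class together with the /META-INF/services/org.mapstruct.ap.spi.AccessorNamingStrategy file (whose single line is the fully qualified class name above) must then be on the annotation processor path of the main project.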
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,708 |
Cred closes $120m round as valuation jumps to $450m
Freecharge founder Kunal Shah, who launched Cred as his second venture, has positioned the company as a platform for all forms of credit card bill payments.
Bengaluru: Fintech startup Cred has closed a $120 million funding round, led by existing backers Ribbit Capital, Gemini Investments, a personal investment vehicle of Yuri Milner's DST Global, and Sequoia Capital.
While regulatory filings show an infusion of $100 million, the company said it has racked up an additional $20 million from investors. This will be one of the largest funding rounds for a less-than-a-year-old Indian company. New investors joining the financing round, which values the startup at $450 million, include Tiger Global, Hillhouse Capital, General Catalyst, Greenoaks Capital and Dragoneer. The Bengaluru-based Cred was valued at $75 million when it first raised capital last year.
Freecharge founder Kunal Shah, who launched Cred as his second venture, has positioned the company as a platform for all forms of credit card bill payments. Consumers are offered gift vouchers and discount coupons at retail outlets across the country for paying their bills on Cred.
Confirming the new funding round to ET, Shah said the fresh capital will be used to expand into international markets and strengthen its merchant ecosystem. He said the company was in talks with various banks to strike partnerships and offer the Cred platform to help them disburse personal loans through credit cards.
Shah said the company did not intend to compete with banks on the credit front, but will instead be their front-end interface as they aim to push credit card usage in the country.
ET had reported that Cred was in talks to raise a $100 million series B round in its March 19 edition. It also later reported that Hillhouse Capital and General Catalyst were likely to back the firm.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,882 |
The labour of rising from the ground, said the artist, will be great, as we see it in the heavier domestic fowls; but, as we mount higher, the earth's attraction, and the body's gravity, will be gradually diminished, 'til we shall arrive at a region where the man will float in the air without any tendency to fall: no care will then be necessary, but to move forwards, which the gentlest impulse will effect.
Gravity is the theme for a new exhibition at the Crawford Gallery, Cork which touches on the idea of physics, gravitational forces and even deep space.
The exhibition contains a variety of works from over 50 artists, including Dorothy Cross' new work Whale. Cross' piece is a unique interpretation of gravity, with the skeleton of a whale hung from the fabric of the gallery itself. Located in the Crawford's historic sculpture galleries, it works perfectly with the marbles and plaster-works that surround it.
The exhibition was opened by Minister Jimmy Deenihan on July 15th and runs until 29th October.
The exhibition features a variety of pieces from the collection of the 3rd Earl of Rosse, William Parsons.
Parsons built the 'Leviathan of Parsonstown' on his estate in County Offaly in the 1840s. The largest telescope of the nineteenth century, the Leviathan was considered a marvelous technical and architectural achievement. He used it to catalogue a number of galaxies including the famous 'Whirlpool Galaxy'.
With spectacular off site installations by Cross and Johan Lorbeer, the exhibition is well worth a visit. It's great to see science and art combining once again in the Crawford - a building financed by WH Crawford, a man who himself was intrigued by both. | {
"redpajama_set_name": "RedPajamaC4"
} | 439 |
Thinking Mathematically (6th Edition): In order to perform set operations such as $(A \cup B) \cap (A \cup C)$, begin by performing any set operations inside parentheses. This is a universal rule in mathematics, and it ensures we don't do operations in the wrong order and get the wrong result. | null | null |
Yum Yum !
**_tl;dr_** : The project is deployed [here](https://meny-demo.netlify.com), but beware that it is a work in progress so expect a lot of unfinished stuff, and also note that all text is in French.
### What is this repo about ?
The goal of this project is to provide a way for me and my wife to store recipes that we have tried and tweaked to our liking, share them, and be able to edit / delete them as needed.
Ultimately anybody could come to the site and use the app to browse the recipes or even add their own recipes.
Nothing fancy or new, but it was a good idea for me to build a decent-sized app and try some new technologies / libs in the process.
### Is it finished ?
Well, no, not at all :D
The basic functionality is there, so one can come to the site and browse / add / edit recipes, but some key features are not yet implemented:
- Auth : the idea is to add an actual authentication process with firebase (since I am using firebase for the data), and some user profile page / options. For now there is a simple dummy menu in the nav to simulate different users in the client.
- Images : having customized images for each recipe that one can upload / edit / tweak; for now most of the recipe-specific images display a pug wrapped in a scarf.
- Animations / Transitions / Polish : I haven't had a chance to pay too much attention to this so far so the UI is a bit rough (especially on small devices), but improvement is definitely planned.
- Production optimizations / PWA features : offline support, app manifest, bundle size reduction, all of this and more is on the list but this will come after the other features.
The list will be updated as I make progress on these.
### Can I see where we are now ?
Sure, right now the repo is hooked up to a dummy firebase project with a few recipes for external testing, and thanks to the awesome guys at netlify, the app is deployed and updated on each git push.
Here is the url : [https://meny-demo.netlify.com](https://meny-demo.netlify.com)
You can choose a dummy user and add recipes if you want. You can also edit the recipes that you have added and browse the other ones.
Note that the users will all be French, so it is all in French for now.
### Anything else ?
You can contact me if you have any questions.
If you want to know more about what I learned when working on this project, see [this file](/THOUGHTS.md).
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,021 |
Wendy Dyes
Wendy Dyes Nashville Elegance
Nash Elegance
Luxury Home Sales
Brentwood is an affluent suburb of Nashville located in Williamson County, Tennessee. The population was 37,163. With great schools, parks, shopping and more, you will live among country music stars such as: Kix Brooks, Luke Bryan, Jeremy Camp, Christopher Cross, Skeeter Davis, Little Jimmy Dickens, Melinda Doolittle, Ronnie Dunn, Karen Fairchild (Little Big Town), Margo Smith, Carrie Underwood, Jack White (The White Stripes), Trisha Yearwood, Nathan Followill (Kings of Leon), Ke$ha, Dolly Parton, Joe Don Rooney (Rascal Flatts), Kimberly Schlapman (Little Big Town), John Schlitt, and Hillary Scott (Lady Antebellum).
Brentwood, TN Properties
Find a Home in Brentwood, TN
New Listings (12)
Single Family Homes for Sale (171)
Hermitage TN
Old Hickory TN
East Nashville- Edgehill
Franklin TN
Green Hills/Forest Hills / Belle Meade
Hendersonille TN
Leipers Fork, TN
Lenox Village
Music Row
Listings courtesy of RealTracs as distributed by MLS GRID. Based on information submitted to the MLS GRID as of Jan 27, 2023 4:39:am. All data is obtained from various sources and may not have been verified by broker or MLS GRID. Supplied Open House Information is subject to change without notice. All information should be independently reviewed and verified for accuracy. Properties may or may not be listed by the office/agent presenting the information.
Wendy Dyes Nashville Elegance 5552 Franklin Pike, STE 202, Nashville , TN 37220 O: 615-807-0340
Wendy Dyes is a successful marketing and sales professional with over two decades of experience serving clients. She has extended her passions outside of the office by working with such organizations as the Boys and Girls Club of America, is a member of the Junior League of Nashville and Big Brothers Big Sisters of Middle Tennessee, and serves on many projects dealing with women and children in Nashville, TN.
Today Wendy is focused on what she does best: working with real estate clients in Middle Tennessee and helping her friends navigate the difficult real estate market to reach their dreams. Buying a home is more than a transaction, it's buying your dreams, and Wendy Dyes serves all of Middle Tennessee and Nashville, TN, to help you purchase yours.
I know you will love Nashville, TN as much as I do. Here's a list of a few of my favorite places to visit: Ryman auditorium, Music Row, Schermerhorn Symphony Center, The Parthenon in Nashville, Elvis Presley Studio B Nashville, Ernest Tubbs Nashville, The Hermitage-Home of President Andrew Jackson, The Country Music Hall of Fame and Museum
© 2023 Wendy Dyes Nashville Elegance Terms of Use Privacy Policy Fair Housing Accessibility Statement Site Map Admin Login | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,035 |
\section{Introduction}
The volume of illicit network traffic continues to grow dramatically, with the number of high-profile attacks including \gls{DDoS}, botnet, and ransomware rising by over 45\% annually \cite{botstat}, and the losses incurred expected to exceed 6 trillion US dollars in 2021~\cite{lossstat}. Effective countermeasures to thwart ever-evolving cyber threats are therefore urgently needed. Traditional \glsfirst{NIDS} largely apply finite rules, preset by human experts, to detect anomalies. This approach lacks flexibility and is often prone to subversion~\cite{beforeyouknew}. \glsfirst{ML} is increasingly used to detect cyber intrusions, due to its ability to discover complex statistical patterns hidden in data streams, which can aid in discriminating anomalies based on feature differences~\cite{buczak2015survey}.
\gls{ML} is a powerful tool, yet adopting it meaningfully for security purposes is not straightforward. \gls{ML} techniques used in areas including imaging and natural language processing have been directly applied to \gls{NID} (e.g., \cite{lin2018idsgan}), without adequate analysis of their suitability for this task. For instance, reconstruction-based algorithms like autoencoders were originally designed to learn to recreate benign samples that contain similar patterns, e.g., the same object type in images \cite{ xia2015learning}. However, when deployed for intrusion detection, whether an autoencoder is able to reconstruct heterogeneous benign traffic originating from various applications is rarely discussed~\cite{mirsky2018kitsune}.
Secondly, widely-used evaluation methodologies involve training and testing \gls{NID} models on a single dataset, collected in the same controlled environment.
This makes it difficult to assess if the trained models can truly generalize to previously unseen traffic mixes~\cite{sommer2010}. Moreover, detecting high-volume attacks promptly, before a target system becomes overloaded and unable to thwart malicious traffic with potential to cause severe damage following early system compromise, is difficult. This capability is however critical to the availability and revenue of online businesses~\cite{ba20}.
In this paper, we address the above challenges and propose \textbf{NetSentry\xspace}, a novel \gls{DL}-based \gls{NIDS} that reliably detects a range of malicious traffics with similar patterns, indicative of incipient high-impact network attacks.
As such, we make the following \textbf{key contributions}:
\begin{enumerate}[itemsep=0pt, label=(\arabic*), topsep=0pt]
\item We scrutinize several attack chains and identify key temporal inter-relations between illicit traffic occurring in the wild; based on this analysis, we design Bidirectional Asymmetric LSTM (Bi-ALSTM), an original ensemble of sequential neural models that effectively captures the temporal dynamics of malicious traffic and classifies specific threats, including \gls{DoS}, Port Scanning, and Brute Forcing;
\item Since not every attack type can be distinguished accurately with limited information available at the network layer, we introduce a novel training technique that relies on feature augmentation and abstract labeling. The feature augmentation scheme improves the heterogeneity of cyber attacks that were collected in a controlled environment, which helps NN models learn a more robust decision boundary. Abstract labeling, on the other hand, prevents overfitting by grouping similar types of attacks into one class;
\item We train our Bi-ALSTM on a large dataset published by the Canadian Institute for Cybersecurity, we cross-evaluate our approach with a previously unseen dataset collected in a different network topology, and we compare its performance against that of state-of-the-art benchmarks. Results demonstrate Bi-ALSTM outperforms existing approaches by at least 33\% in terms of F1 score;
\item We discuss practical aspects of deploying NetSentry\xspace in real-life, including computational overhead and
robustness to a range of evasion attacks.
\end{enumerate}
To our knowledge, NetSentry\xspace is perhaps the first principled \gls{DL}-based \gls{NIDS} that tackles cyberthreats
by focusing on the early stages that are essential to the success of large-scale and high-impact attacks such as botnet and ransomware.
\section{Threat Model \& Anatomy of Attacks}
\label{sec:threat_model}
We start from the key observation that in practice traffic flows shall not be considered in isolation, either as benign or malicious. There exist important temporal correlations among different cyber attacks, especially those with high-impact, which rarely occur independently. For instance, assume that an adversary has zero knowledge of a potentially vulnerable target. Conducting a successful webshell injection attack has at least two pre-requisites:
\emph{(i)} port scanning against the target, so as to uncover that it runs a web serve; and \emph{(ii)} web API enumerating, to verify if file upload is allowed.
That is, the attacker must follow a certain sequence of actions (each an attack itself), which would create distinct network traces at various stages. We remark that essential correlations among different kinds of network attacks have not been explored before, but are potentially useful to design a reliable \gls{NIDS}.
Hence, we decompose network attacks from the perspective of an active adversary, and summarize them into different attack chains. These typically start with gathering information about a target and conclude when a specific technical goal is achieved. We consider three key attack chains, namely botnet, web intrusion, and ransomware, revealing that they are supported by a similar methodology.
While our attack~chain view may appear on the surface related to earlier Botnet infection modeling, where attack stages are fingerprinted~\cite{gu2007bothunter}, our modeling approach and subsequent NIDS design are fundamentally different. This is because \textit{the attack chains we consider aim to reveal the common stages shared by different large-scale cyber attacks}, so as to impede a specific range of cyber intrusions by interrupting any of these early stages.
\subsection{Attack Chain Analysis}
We particularly focus on two network attack goals, which bring severe damage to target systems. The first is to obtain system privileges temporarily or permanently, by exploiting various security flaws. The second is to overload the system by occupying all its resources. Instead of looking at each attack type individually, we investigate what processes, i.e., attack chains, an adversary must follow to achieve any of these goals, when having zero knowledge about a target. We consider three unique attack chains that are specific to botnet, web intrusion, and respectively ransomware, as shown
in~Figure~\ref{fig:attack_chain}.\footnote{We note that certain sophisticated attacks may have longer attack chains that expand to application level~\cite{ifflander2019hands}. However, by detecting their early stages, their later exploitation actions can be prevented.}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/attack-chains.pdf}
\caption{Attack chains employed by large-scale high-impact threats, e.g., botnets, web intrusion, and ransomware. Observe that similar steps are repeated by all, which \textbf{NetSentry\xspace} exploits for detection.}
\label{fig:attack_chain}
\vspace*{-1em}
\end{figure}
\textbf{Botnet/Mirai.} Botnets are collections of Internet-accessible devices hijacked by an attacker, and are usually employed to carry out large-scale, high-impact \gls{DDoS} attacks. Mirai is one of the most notorious instances in recent years, with $\sim$600,000 devices infected at its peak
\cite{antonakakis2017understanding}. Subsequent Mirai variants expand the attack surface to SSH, HTTPS, FTP, etc., but inherit the methodology of the canonical version.
Mirai follows a chain-like methodology that entails \textit{information gathering} $\rightarrow$ \textit{vulnerability scanning} $\rightarrow$ \textit{privilege escalation} $\rightarrow$ \textit{\gls{DDoS}}. Specifically, (1) TCP SYN packets are sent towards the entire IPv4 address space, on ports 23 and 2323 (Telnet); (2) after identifying potential victims, Mirai attempts to bruteforce the victims' credentials
using a dictionary -- this process is deemed as vulnerability scanning; (3) upon successful login, a victim's IP and
credentials are forwarded to a report server that infects the victim with malicious code; (4) newly infected devices become members of the botnet, and either participate in victim discovery or \gls{DDoS} attacks \cite{antonakakis2017understanding}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.96\linewidth]{images/architecture-rev.pdf}
\caption{NetSentry\xspace architecture. Feature Augmentor only applied during training. $\oplus$ is fusion operation detailed in Eq.~\ref{eqn:combine}.}
\label{fig:arch}
\vspace*{-1em}
\end{figure*}
\textbf{Web intrusion.} Web applications
integrate a technology stack that includes storage, web engines, \glspl{OS}, and communications. Hence, various vulnerabilities are often exploited.
Web intrusion can be jointly modeled with the Intrusion Kill Chain \cite{hutchins2011intelligence} and the OWASP Web Penetration Guideline \cite{owasp}, where the former outlines a general intrusion process, while the latter provides specific attack vectors.
The attack process entails \textit{information gathering} $\rightarrow$ \textit{vulnerability scanning} $\rightarrow$ \textit{attacking privileged targets} $\rightarrow$ \textit{exploitation}; or \textit{information gathering} $\rightarrow$ \textit{vulnerability scanning} $\rightarrow$ \textit{privilege escalation}.
Web intrusion resembles botnet and ransomware in terms of the vulnerability discovery approach, but differs at the later stages, since exploitation is always target-specific, e.g. a \gls{XSS} attack targets web app users, while SQL injection targets Web APIs.
Our focus is on the early stage
when the attacker tries to breach a trusted boundary,
since later steps occur at system level and are invisible to a \gls{NIDS}.
\textbf{Ransomware/WannaCry.}
Ransomware is a relatively new type of threat that blocks user access to their private data until a ransom is paid to the attacker.
For instance, by exploiting EternalBlue~\cite{eternalblue}, WannaCry gains system access via the \gls{SMB} protocol on Windows systems, encrypts user data, and spreads itself to other hosts~\cite{kao2018dynamic}. The attack chain of WannaCry follows a loop consisting of: \textit{information gathering} $\rightarrow$ \textit{vulnerability scanning} $\rightarrow$ \textit{privilege escalation \& exploitation} $\rightarrow$ \textit{information gathering}.
We focus on the procedure ransomware uses to discover and control new victims (instead of the file encryption applied), as the attack is conducted via the network. Wanna\-Cry employs repeated TCP scanning on port 445 (serving the \gls{SMB} protocol)~\cite{kao2018dynamic}. Targets are further fingerprinted and remote access is achieved by injecting code via a crafted packet, which would be mishandled by SMBv1. WannaCry then encrypts the data on the victim machine and discovers other vulnerable targets.
\section{NetSentry\xspace Design}
\label{sec:system_design}
In what follows we present \textbf{NetSentry\xspace}, an original \gls{NIDS} design that harnesses the unique feature extraction capabilities of recurrent neural models, to detect large-scale, high-impact attacks. NetSentry\xspace builds on our observation of correlations between malicious traffics, and handles \gls{NID} as a time-sensitive task, leveraging an ensembling structure to capture richer contexts and detect intrusions with high efficacy.
\subsection{Attack Detection Strategy}
\label{sec:defensive_scheme}
Our attack chains analysis revealed that information gathering, vulnerability scanning and \gls{DoS} are applied across various types of attacks and share the same semantic. Latent network attacks, such as malware downloading, code injection, \gls{CSRF}, and other zero-day attacks, which can obtain system privileges and proceed to exploitation, always follow massive vulnerability scanning, since this is the most efficient way to discover weak entry points. A common argument is that zero-day attacks are heterogeneous, which poses difficulties to any detection logic.
However, we argue that \emph{as long as automated activities can be recognized in time, the subsequent zero-day attacks can be blocked,} in order to minimize the chances that attackers may uncover weaknesses and compromise a system. As such, we keep the scope of our detection targets narrow, yet well-directed, as suggested in \cite{sommer2010}. With this in mind, we design NetSentry\xspace, a \gls{NIDS} that effectively tackles cyber intrusion by detecting risks at an \emph{early stage}. This also applies to tactics that deviate from the standard attack chains described, as long as they incorporate any common stages to achieve the same end goal.
We maintain that \gls{NID}, especially of automated attacks, should be treated as a time-sensitive task. Here we consider `time-sensitive' those network intrusions exhibiting temporal correlations among \emph{consecutive traffic flows}, which could potentially exert substantial impact on the decision-making process. This is because a single traffic flow, whose features are extracted as a datum, may not fully reflect the intention of the communications. A straightforward example is a TCP flow encapsulating a complete HTTP request. Assume the flow is terminated quickly after the server responds. Without looking at previous and subsequent traffic, it is impossible to assert whether the flow was initiated by a legitimate user or by \gls{DoS} tools. Conversely, if we observe a series of statistically similar communications between a pair of hosts, the confidence of classifying them as malicious becomes higher. Network attacks generated with the same tool usually serve the same purpose. Although they would encapsulate different payloads in consecutive flows for obfuscation or fuzzing purposes, this difference is invisible to a \gls{NIDS} that only has access to protocol headers and timing information. Thus, we leverage sequential neural models in NetSentry\xspace to learn such similarity of successive flows generated with automated tools.
\subsection{System Architecture}
NetSentry\xspace is a \gls{NIDS} that examines the statistical features of network flows and detects illicit traffic via an ensemble of sequential neural models. A traffic flow is built by grouping packets according to a five-tuple (Src IP, Dst IP, Src Port, Dst Port, Protocol).
Recall
that automated tools tend to initiate multiple almost identical flows towards targets during a short period of time. This means that the statistical features of malicious flows share a large degree of similarity. Thus, monitoring the similarities and discrepancies of consecutive flows between pairs of hosts plays an essential role in recognizing anomalies. To learn relevant temporal correlations of the traffic flows and to differentiate malicious patterns, NetSentry\xspace incorporates 4 key building blocks, as shown in Figure~\ref{fig:arch}, namely:
\begin{itemize}[itemsep=0pt,topsep=0pt]
\item Flow Aggregator \& Feature Extractor: groups packets into flows and extracts associated statistical features;
\item Sequence Generator: groups flows originating from the same pair of hosts into fixed-length sequences, to be fed as inputs to anomaly detection logic;
\item Feature Augmentor: increases the variability of a fraction of malicious traffic features that are non-essential in anomaly detection, but if left unchanged may increase the risk of the model becoming trapped in local optima;
\item Anomaly Detector (Bi-ALSTM): an ensemble of two asymmetric \gls{LSTM}-based neural networks operated bidirectionally, taking flow sequences as input to detect malicious traffic.
\end{itemize}
Next, we explain the inner workings of each component, then detail our bidirectional sequential neural model in \S\ref{sec:ensemble}.
\begin{table*}[]
\centering
\small
\bgroup
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|p{10cm}}
\Xhline{2.3\arrayrulewidth}
Feature Type & \multicolumn{1}{l|}{Direction} & Name \\ \hline
\multirow{3}{*}{timing-based} & forward & Flow \gls{IAT}\textsuperscript{*}, packets/sec \\ \cline{2-3}
& backward & Flow IAT\textsuperscript{*}, packets/sec \\ \cline{2-3}
& bi-direction & duration, Flow \gls{IAT}\textsuperscript{*}, packets/sec, bytes/sec, active time\textsuperscript{*}, idle time\textsuperscript{*} \\ \hline
\multirow{3}{*}{protocol-based} & forward & \# packets, packet length\textsuperscript{*}, PSH counts, URG counts, header length, initial TCP window size, avg segment size, subflow\textsuperscript{\dag} \\ \cline{2-3}
& backward & \# packets, packet length\textsuperscript{*}, PSH counts, URG counts, header length, initial TCP window size, avg segment size, subflow\textsuperscript{\dag}, \\ \cline{2-3}
& bi-direction & packet length\textsuperscript{*}, flag counts\textsuperscript{\S}, down/up ratio, protocol \\ \hline
ID-based & None & flow ID, src IP, dst IP, src port, dst port, timestamp \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\egroup
\caption{Features used in NetSentry\xspace. \textbf{*} means (min, max, avg, std) are computed for a given property. \textbf{\dag} indicates where (avg packets, avg bytes) are computed. \textbf{\S} indicates (FIN, SYN, RST, PSH, ACK, URG, CWE, ECE) are counted in flows.}
\label{table:features}
\end{table*}
\subsubsection{Feature Extraction}
\label{sec:feature_extraction}
NetSentry\xspace employs a two-step process to extract numerical or categorical information (features) of the traffic observed, i.e., packet grouping and statistics computation. The former involves aggregating into flows packets generated between same pairs of applications, which can be achieved by monitoring origin, destination, and protocol fields.
Since NetSentry\xspace operates at the network layer and is not guaranteed to have access to packet payloads, we confine consideration to features that encompass timing statistics and protocol information. We find that employing popular open-sourced tools for feature extraction, such as CICFlowMeter \cite{flowmeter} (which should be able to extract 80+ statistical features), is problematic.
Indeed, CICFlowMeter uses a faulty mechanism to identify the end of TCP flows, which results in benign traffic often being mislabeled as malicious, and vice versa.
With the CICFlowMeter feature extractor, if a new incoming TCP packet has a FIN flag set, the packet is immediately deemed to be the last packet in that \textit{flow}.
Obviously, this does not strictly follow the four-way handshake of TCP connection teardown. We show in Figure~\ref{fig:flowgrouping} that the premature assessment of termination leads to mislabeling, which is especially relevant to automated attacks such as \gls{DoS}.
Assume that A is performing simple HTTP DoS attacks targeting B, quickly reusing the same port 8888. Also assume each time it is B who decides to terminate the TCP connections. Then, applying the mechanism described above on two consecutive flows from (A, 8888) to (B, 80) would generate 4 complete flows and an incomplete one.
In this case, any flows from (A, 8888) to (B, 80) should be labeled as DoS. However, Flow \#2 and \#4 only consist of two packets (ACK and FIN) which cannot reflect any malicious purpose, while Flow \#3 that should be labeled as malicious, is marked benign because of its wrongly perceived direction. We confirm that this type of mislabeling occurs for DoS-Hulk attacks in the publicly available CSE-CIC-IDS-2018 dataset~\cite{ids2018} (which we use after correct relabeling), but further instances may exist.
\begin{figure}[]
\centering
\includegraphics[width=0.83\columnwidth]{images/flowgrouping.pdf}
\caption{Incorrect flow labeling due to wrong TCP termination rules. Blue curly braces indicate the flow is labeled correctly; red curly braces indicate mislabeling.}
\label{fig:flowgrouping}
\end{figure}
We fix this logic error (along with other programming bugs encountered) in CICFlowMeter, by following a complete four-way handshake to terminate TCP flows. The timeout mechanism is preserved for stateless protocols. We also note that the original tool further extracts partial features that are not well-defined. Hence we revise the code and only output 69 features per flow, as summarized in Table \ref{table:features}
\footnote{We will release our feature extractor's source code upon publication.}
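For clarity, the corrected termination rule can be summarized by the simplified sketch below (an illustrative Python rendering of the logic; class and variable names are ours and do not correspond to the released extractor code). A flow is only closed once a FIN has been observed in each direction and each FIN has been acknowledged by the opposite endpoint; the timeout mechanism for stateless protocols is unchanged.

\begin{verbatim}
# Illustrative sketch of the corrected TCP termination rule (not the released code).
class TcpTerminationState:
    def __init__(self):
        self.fin_seen = {"fwd": False, "bwd": False}   # FIN observed per direction
        self.fin_acked = {"fwd": False, "bwd": False}  # that FIN acknowledged

    def update(self, direction, flags, acks_peer_fin):
        """direction: 'fwd' or 'bwd'; flags: set of TCP flags in the packet;
        acks_peer_fin: True if this ACK covers the peer's FIN sequence number."""
        peer = "bwd" if direction == "fwd" else "fwd"
        if "FIN" in flags:
            self.fin_seen[direction] = True
        if "ACK" in flags and self.fin_seen[peer] and acks_peer_fin:
            self.fin_acked[peer] = True

    def flow_terminated(self):
        # close the flow only after the complete four-way handshake
        return all(self.fin_seen.values()) and all(self.fin_acked.values())
\end{verbatim}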
\subsubsection{Sequence Generation}
Network anomalies are often nuanced: a single network flow may be benign on its own, but observing multiple similar instances active at the same time may strongly suggest an automated attack in progress. Therefore,
it is necessary to observe consecutive flows between hosts (applications) to further confirm malicious activity.
As such, a \textbf{sequence} in NetSentry\xspace is defined as successive flows using the same protocol between a pair of hosts, that is aggregated by (Src IP, Dst IP, Protocol). This is because with automated attacks, many-to-one (DoS, brute forcing) and many-to-many (port scanning) port attacks between a pair of hosts are common. Therefore, we regard a sequence through grouping not by the tuple used for flow aggregation, but by the origin and destination addresses, along with protocol type.
\setlength{\textfloatsep}{2pt}
\begin{algorithm}[t]
\small
\caption{NetSentry\xspace's sequence generation algorithm incorporating sliding windows and timeout thresholding.}
\label{seq_gen}
\begin{algorithmic}[1]
\Inputs{Timeout value $\tau$, and window size $\alpha$}
\Initialize{\textit{conn\_table}: A connection table storing the incoming flows from the feature extractor. \\
\textit{seq\_list}: A FIFO list buffering the inputs for the feature augmentor or the anomaly detector.}
\While {True}
\State {$start\_time$ $\gets$ $now()$}
\For{$flow$ from the feature extractor}
\State {$conn\_table[flow.id].add(flow)$}
\If {$len(conn\_table[flow.id]) > \alpha$}
\State {$seq\_list.add(conn\_table.remove(flow.id))$}
\EndIf
\If { $now() - start\_time > \tau$}
\State {$seq\_list.addAll(conn\_table.removeAll())$ }
\State {$start\_time$ $\gets$ $now()$}
\EndIf
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
We adopt a flexible approach to generating sequences, which is a combination of sliding window and timeout thresholding techniques, as described by Algorithm~\ref{seq_gen}. NetSentry\xspace allows two user-defined parameters for this purpose, namely window size \(\alpha\) and timeout value \(\tau\), and maintains a connection table with two columns: \textit{ID} and \textit{seq\_list}. The \textit{ID} of each flow is a 3-tuple (Src IP, Dst IP, Protocol) and for each \textit{ID}, a FIFO \textit{seq\_list} is maintained, storing the flows with the same \textit{ID}. A newly generated flow is added to the \textit{seq\_list} determined by its \textit{ID}. Once the length of any \textit{seq\_list} is larger than \(\alpha\), the elements in the list are regarded as a sequence to be passed to the neural model.
Meanwhile, after every \(\tau\) seconds, the entire connection table is emptied regardless of the length of the \textit{seq\_lists}. Any list whose length is less than \(\alpha\) is padded to \(\alpha\) for the purpose of alignment.
This design is customizable: the larger \(\alpha\) is, the more comprehensive~the context that the ensemble model obtains, but the higher the memory requirements; the smaller \(\tau\) is, the more timely~clas\-sification can be achieved, at the cost of more compute resources.
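A minimal Python counterpart of Algorithm~\ref{seq_gen} is sketched below for illustration; the generator-based interface and the padding value are implementation choices of ours, not a prescription.

\begin{verbatim}
import time
from collections import defaultdict

def generate_sequences(flow_source, alpha=10, tau=30, pad_flow=None):
    """Yield sequences of flows grouped by (Src IP, Dst IP, Protocol).
    flow_source yields (flow_id, features) pairs from the feature extractor."""
    conn_table = defaultdict(list)
    start_time = time.time()
    for flow_id, features in flow_source:
        conn_table[flow_id].append(features)
        if len(conn_table[flow_id]) > alpha:             # sliding-window trigger
            yield conn_table.pop(flow_id)
        if time.time() - start_time > tau:               # timeout trigger
            for seq in conn_table.values():
                yield seq + [pad_flow] * (alpha - len(seq))  # pad for alignment
            conn_table.clear()
            start_time = time.time()
\end{verbatim}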
\subsubsection{Feature Augmentation}
\label{sec:aug}
Since data used for \gls{ML} training are largely collected in controlled environments, synthetically generated attacks may not offer an accurate view of network threats occurring in the real world~\cite{sommer2010}, which prevents the model from learning a reliable decision boundary. For example, a victim HTTP server was set up to produce the CSE-CIC-IDS2018 dataset \cite{ids2018}; during HTTP DoS generation, all flows encapsulated the same backward payloads from victim to attacker, resulting in little variability in payload-related features (see Figure~\ref{fig:violin}, left). In reality, it is hard to predict how the victim would respond, and we show in \S\ref{sec:evaluation} that such artificially low variability leads to poor generalization abilities for a range of supervised models.
\begin{figure}[t]
\vspace*{-0.5em}
\centering
\includegraphics[width=0.92\columnwidth]{images/violin.pdf}
\vspace*{-1em}
\caption{Violin plots of 4 features before and after data augmentation. All values are normalized between 0 and 1.}
\label{fig:violin}
\end{figure}
We mitigate this problem by augmenting a collection of payload-related features to emulate a more realistic network environment. Specifically, we set up an HTTP victim server and an attacking client. The client only makes single requests with the \texttt{Keep-Alive} header over one TCP connection, to emulate HTTP \gls{DoS} attacks. The size of each request and response is sampled from two discrete uniform distributions:
\setlength{\belowdisplayskip}{2pt} \setlength{\belowdisplayshortskip}{2pt}
\setlength{\abovedisplayskip}{2pt} \setlength{\abovedisplayshortskip}{2pt}
\begin{align*}
http\;request\;size &\sim \mathcal{U} (100, 400),\\
http\;response\;size &\sim \mathcal{U} (100, 15000).
\end{align*}
In total, we generate 2,000 flows.
A graphical illustration of the augmentation process is depicted in Figure \ref{fig:aug_data}. For each sequence that contains single-request HTTP DoS attacks (excluding Slowloris), (1) a flow is randomly sampled from the \texttt{AugBase} set and expanded to sequence length \(\alpha\); (2) random noise \( \sigma \sim \mathcal{N}(0, 5)\) is added to each payload-related cells to mimic minor differences among flows in a sequence; (3) finally, payload-related features in the original sequence are replaced by the new features generated at Step 2. By applying such augmentation, the new payload features of different flows in the same sequence would not differ much, but the features among different sequences would look considerably different. The distributions and the means of a subset of payload features in the augmented set are shown in Figure~\ref{fig:violin} (right). We use augmented data only for training.
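The three augmentation steps translate into the short NumPy sketch below (illustrative only; \texttt{aug\_base} holds the payload-related columns of the \texttt{AugBase} flows and \texttt{payload\_idx} the indices of payload-related features in our feature vector).

\begin{verbatim}
import numpy as np

def augment_sequence(seq, aug_base, payload_idx, noise_std=5.0):
    """seq: (alpha, n_features) array for one single-request HTTP DoS sequence."""
    alpha = seq.shape[0]
    # (1) sample one AugBase flow and expand its payload features to length alpha
    base = aug_base[np.random.randint(len(aug_base))]
    tiled = np.tile(base, (alpha, 1))
    # (2) add Gaussian noise ~ N(0, 5) to mimic minor per-flow differences
    tiled += np.random.normal(0.0, noise_std, size=tiled.shape)
    # (3) overwrite the payload-related columns of the original sequence
    out = seq.copy()
    out[:, payload_idx] = tiled
    return out
\end{verbatim}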
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/aug_data_new2.pdf}
\caption{Illustration of the data augmentation process. Yellow cells denote payload features in original training data; green cells represents payload features from \texttt{AugBase}. $\Sigma$ is a noise matrix with each element sampled from $\sigma \sim \mathcal{N}(0, 5)$.}
\label{fig:aug_data}
\end{figure}
It is also worth noting that the augmented data need not originate from any real traffic, since we only change parts of the features. However, what the model can learn from the augmented set is that \emph{(i)} payload features possess high variability within some attacks (DoS in our case) whose logic does not rely on specific payloads, thus payload features should not be utilized for decision; and \emph{(ii)} the rest of features (payload-irrelevant) are more valuable in distinguishing augmented attacks. We \emph{choose not to remove payload-related features altogether because they may be important for the model to differentiate other types of attacks}, such as Slowloris, which only sends a small amount of payload during a long span. In \S\ref{sec:evaluation}, we demonstrate that augmentation boosts the performance of several supervised models.
\subsubsection{Ensemble Network}
The sequential ensemble neural network is the critical component of NetSentry\xspace and is responsible for detecting malicious traffic based on the inputs provided by the sequence generator or feature augmentor. As explained in \S\ref{sec:defensive_scheme}, detecting automated cyber attacks is a time-sensitive task and hinges on temporal correlations between network flows. Let \(X:= \{\bm{x}_{1}, \bm{x}_{2}, ..., \bm{x}_{\alpha}\} \) and \( Y:= \{y_{1}, y_{2}, ..., y_{\alpha}\}\) denote a sequence of inputs and respectively their corresponding correct prediction. Then time-sensitive intrusion detection can be formalized as
\begin{align*}
\widetilde{\mathbf{\theta}} = \mathop{\arg\max}\limits_{\mathbf{\theta}}P_{\mathbf{\theta}}\left(y_{1}, y_{2}, ...,y_{\alpha} | \bm{x}_{1}, \bm{x}_{2}, ..., \bm{x}_{\alpha}\right),
\end{align*}
where the sequential model is parameterized by $\mathbf{\theta}$.
NetSentry\xspace leverages two \gls{LSTM}-based models to approximate the probability function above. In the next section, we first introduce the LSTM models we employ, and show both conceptually and empirically that an ensemble of such sequential models, preferably with different architectures, is key to improving the overall \gls{NID} performance.
\section{A Sequential Ensemble for NID}
\label{sec:ensemble}
We overview the different blocks that lay foundations for our Bi-ALSTMs, then explain the ensembling and why our approach is essential for high-performance classification.
\subsection{\glsfirst{LSTM}}
As a variant of \gls{RNN}, \gls{LSTM} \cite{hochreiter1997long} incorporates gating functions to simulate the update of memory units along time and has shown excellent ability to model long-term dependencies in sequential data \cite{sutskever2014sequence, kumar2022land}. An LSTM maintains two states: a cell state \( c_{t}\) and a hidden state \( h_{t}\), which is computed based on inputs up to timestamp \(t\), i.e., \( \mathbf{x}_{1}, ..., \mathbf{x}_{t} \;(\mathbf{x}_{i} \in \mathbb{R}^{d})\). To maintain long-term dependencies, the \gls{LSTM} cell has \textit{input}~(\( i_{t}\)), \textit{forget} (\( f_{t}\)), and \textit{output} (\( o_{t}\)) gates controlling the information flowing through at different timestamps. Gates are modeled by single-layer neural networks with parameters \(W_{x i}, W_{h i}, W_{x f}, W_{h f}, W_{x o}, W_{h o} \in \mathbb{R}^{h \times d} \) and associated biases, i.e.,
\begin{flalign}
&i_{t} =\sigma\left(W_{x i} x_{t}+W_{h i} h_{t-1}+b_{i}\right),&\label{eq:lstm-i}\\
&f_{t} =\sigma\left(W_{x f} x_{t}+W_{h f} h_{t-1}+b_{f}\right),\\
&c_{t} =f_{t} \circ c_{t-1}+i_{t} \circ \tanh \left(W_{x c} x_{t}+W_{h c} h_{t-1}+b_{c}\right), \label{eq:new_cell} \\
&o_{t} =\sigma\left(W_{x o} x_{t}+W_{h o} h_{t-1}+b_{o}\right),\\
&h_{t} =o_{t} \circ \tanh \left(c_{t}\right),\label{eq:lstm-o}
\end{flalign}
where \( \sigma\) denotes the sigmoid function and \( \circ\) represents element-wise product. When a new input \( x_{t}\) is given to the \gls{LSTM}, the cell state \( c_{t} \) is updated with information from \( x_{t} \) and the previous cell state \( c_{t-1}\). The proportion of \( x_{t}\) and \( c_{t-1}\) in the new cell state is determined by \( i_{t} \) and \( f_{t} \), as in Eq.~\ref{eq:new_cell}. \( h_{t}\) can be perceived as a non-linear transformations on \( c_{t} \), and are always used for downstream tasks, such as classification. In NetSentry\xspace, an \gls{MLP} \(g_{\phi}(\cdot)\) is used to approximate the probability of \(y_{t}\) given \(h_{t}\).
When it comes to intrusion detection against automated network attacks, the order of attack flows in a sequence is less important.
Hence, at any timestamp \( t\), both previous inputs and subsequent inputs may be equally highly correlated with \( x_{t}\). \textit{The traditional \gls{LSTM} can only model temporal information unidirectionally as time evolves.} In other words, the hidden representation \(h_{t}\) only comprises the context before or at timestamp \( t\). To generate more comprehensive hidden representations for anomaly detection, a \gls{Bi-LSTM} which runs two \glspl{LSTM} separately (one forward,~one backward), and whose hidden states are concatenated before given to \(g_{\phi}(\cdot)\), can be used. The objective of \gls{Bi-LSTM} is thus:
\[
\begin{aligned}
&\max_{W, b, \bm{\phi}} p(y_{1}, y_{2}, ...,y_{\alpha} | \bm{x}_{1}, \bm{x}_{2}, ..., \bm{x}_{\alpha}, W, b, \bm{\phi}) \\
&= \max_{W, b, \bm{\phi}} \prod_{t=1}^{\alpha} p(y_{t}| \bm{x}_{1}, ..., \bm{x}_{\alpha}, W, b, \bm{\phi})
= \max_{W, b, \bm{\phi}} \prod_{t=1}^{\alpha} g_{\phi}(h_{t}),
\end{aligned}
\]
where \( W\) and \( b\) are the weights and biases in \gls{LSTM} cells and \(\bm{\phi}\) the parameters of the \gls{MLP}. \gls{Bi-LSTM} is one of the benchmarks we consider in evaluating our work.
\subsection{ConvLSTM}
\gls{ConvLSTM} \cite{xingjian2015convolutional} was first proposed to model spatiotemporal data, such as radar echo maps, whose spatial correlations cannot be extracted by fully connected layers in \gls{LSTM}. Conv\-LSTM tailors the convolution operation into \gls{LSTM} by replacing matrix multiplication operations in (\ref{eq:lstm-i})--(\ref{eq:lstm-o}) with convolution, as follows:
\begin{flalign}
&i_{t} =\sigma\left(W_{x i} * \mathcal{X}_{t}+W_{h i} * \mathcal{H}_{t-1}+b_{i}\right),&\\
&f_{t} =\sigma\left(W_{x f} * \mathcal{X}_{t}+W_{h f} * \mathcal{H}_{t-1}+b_{f}\right),\\
&\mathcal{C}_{t} =f_{t} \circ \mathcal{C}_{t-1}+i_{t} \circ \tanh \left(W_{x c} * \mathcal{X}_{t}+W_{h c} * \mathcal{H}_{t-1}+b_{c}\right) \hspace*{-1em}\\
&o_{t} =\sigma\left(W_{x o} * \mathcal{X}_{t}+W_{h o} * \mathcal{H}_{t-1}+b_{o}\right),\\
&\mathcal{H}_{t} =o_{t} \circ \tanh \left(\mathcal{C}_{t}\right),
\end{flalign}
in which \( *\) denotes the convolution operator, $ \mathcal{X}_{t}\in \mathbb{R}^{d \times 1}$ is the input, and $W_{x i}, W_{h i}, W_{x f}, W_{h f}, W_{x o}, W_{h o} \in \mathbb{R}^{k \times c}$ are the convolution kernels. The above applies to a single Conv\-LSTM unit, which can be further extend to multiple layers as traditional \gls{LSTM} does.
A pooling layer can be added between each two \gls{ConvLSTM} layers to reduce computation.
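For concreteness, a 1-D \gls{ConvLSTM} cell implementing the gating equations above can be sketched in PyTorch as follows (illustrative sketch; computing all four gates with a single convolution and using ``same'' padding are our implementation choices).

\begin{verbatim}
import torch
import torch.nn as nn

class ConvLSTM1DCell(nn.Module):
    """Minimal 1-D ConvLSTM cell (illustrative sketch)."""
    def __init__(self, in_channels, hidden_channels, kernel_size):
        super().__init__()
        self.hidden_channels = hidden_channels
        # one convolution computes the pre-activations of all four gates at once
        self.conv = nn.Conv1d(in_channels + hidden_channels,
                              4 * hidden_channels,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        # x: (batch, in_channels, length); state: (h, c), each (batch, hidden, length)
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        c_next = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_next = torch.sigmoid(o) * torch.tanh(c_next)
        return h_next, c_next
\end{verbatim}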
An immediate concern is whether applying convolution-embedded models on network traffic data without obvious spatial information would be effective. In fact, \gls{CNN} not only gained success in computer vision \cite{he2016deep, yang2021msta}, but also in areas including web traffic fingerprinting \cite{deepfp} and mobile traffic forecasting~\cite{zhang:2019}. As the sequential data we deal with are one-dimensional, we implement a 1-D \gls{ConvLSTM}, which takes inputs with channel and length dimensions, where channel by default equals 1.
\textbf{Differences from CNN-LSTM.} The CNN-LSTM, or \gls{LRCN}, is a combination of CNN and LSTM, first proposed for visual recognition. It differs from \gls{ConvLSTM} in that in the former a separate \gls{CNN} handles spatial information before providing input to \gls{LSTM}. In contrast, the latter has a compact form. While previous studies apply CNN-LSTM to \gls{NID} \cite{jiang2020network}, our work is the first to leverage \gls{ConvLSTM} and study the differences between the two structures. Our results in \S \ref{sec:evaluation} reveal that \gls{ConvLSTM} consistently outperforms CNN-LSTM.
\subsection{Bidirectional Asymmetric LSTM}
\gls{Bi-LSTM} and Bi-\gls{ConvLSTM} can be perceived as ensemble models with two separate \gls{LSTM} units. The hidden states from two units are concatenated so that future information is accessible at the current timestamp \( t\), which can potentially benefit the downstream classification task. Since our targets are sequences with \textit{similar} malicious flows, it is the hidden states at the two ends of the structures that can acquire the most information. The hidden representations in the middle of a sequence from two (Conv-)\gls{LSTM} units yield a certain amount of redundant information when the same architecture is used.
\setlength{\textfloatsep}{5pt}
\begin{algorithm}[!b]
\small
\caption{The training algorithm for \gls{Bi-ALSTM}}
\label{bialstm_training}
\setstretch{1.2}
\begin{algorithmic}[1]
\Inputs{\mbox{$\mathcal{D} := \{S_{1},...,S_{n}\}; S_{i} := \{ (\mathbf{x}_{i1}, y_{i1}),...,(\mathbf{x}_{i\alpha}, y_{i\alpha})\},$} \\
$\lambda_{1} := L_{2}\; weight\;decay, \; \lambda_{2} := learning\;rate.$ }
\Initialize{ Denote $h_{W_{fc}}(\cdot)$ and $h_{W_{conv}}(\cdot)$ as the \gls{LSTM} and \gls{ConvLSTM} unit parameterized by $W_{fc}$ and $W_{conv}$, and predictor $g_{\phi}(\cdot)$ parameterized by $\phi$. $W_{fc}$, $W_{conv}$, $\phi$ set via Xavier initialization~\cite{glorot2010understanding}. }
\While {model has not converged}
\For{$S_{i}$ sampled from $\mathcal{D}$}
\State {$\overrightarrow{X_{i}} \gets (\mathbf{x}_{i1}, ..., \mathbf{x}_{i\alpha} ), \overrightarrow{\mathbf{y}_{i}} \gets (y_{i1}, ..., y_{i\alpha}) $}
\State {$\overleftarrow{X_{i}} \gets reverse(\overrightarrow{X_{i}})$}
\State {$\overrightarrow{H}_{i, fc} \gets h_{W_{fc}}(\overrightarrow{X}_{i})$}
\State {$ \overrightarrow{H}_{i, conv} \gets reverse(h_{W_{conv}}(\overleftarrow{X}_{i})) $}
\State {$ \overrightarrow{H}_{i} \gets \overrightarrow{H}_{i, fc} \oplus \overrightarrow{H}_{i, conv} $} \Comment{\parbox[t]{.3\linewidth}{Eq. \ref{eqn:combine}}}
\State {$\mathcal{L} \gets NLL(softmax(g_{\phi}(\overrightarrow{H}_{i})), \overrightarrow{\mathbf{y}_{i}}) + $
\Statex \qquad \qquad $\lambda_{1}(||W_{conv}||_{2}^{2} + ||W_{fc}||_{2}^{2} + ||\phi||_{2}^{2})$}
\State {$
\phi, W_{conv}, W_{fc} \gets Adam(\mathcal{L}, \phi, W_{conv}, W_{fc}; \lambda_{2})$}
\EndFor
\EndWhile
\State {\Return {$W_{fc}, W_{conv}, \phi$}}
\end{algorithmic}
\end{algorithm}
Given the fact that different architectures are likely to exploit different facets of input features for classification \cite{raghu2021vision}, we propose \textbf{\glsfirst{Bi-ALSTM}}, which consists of two different, asymmetric \gls{LSTM} units, one for forward and the other for backward processing, to generate intermediate representations that incorporate more comprehensive temporal contexts. The hidden states from two different units are first linearly combined, then fed through an activation function. Precisely, denote \(\mathbf{h}_{fc}^{t} \in \mathbb{R}^{N_{1}} \) as the hidden state generated by \gls{LSTM} at timestamp \(t\), and \( \mathbf{h}_{conv}^{t} \in \mathbb{R}^{N_{2}} \) generated by \gls{ConvLSTM} (operated backwards). The final hidden states are formed through the following fusion operation:
\begin{equation}
\label{eqn:combine}
\mathbf{h}^{t} = \tanh\left( \frac{\mathbf{U}_{conv}\mathbf{h}_{conv}^{t}}{||\mathbf{U}_{conv}\mathbf{h}_{conv}^{t}||_{2} } + \frac{\mathbf{U}_{fc}\mathbf{h}_{fc}^{t}}{||\mathbf{U}_{fc}\mathbf{h}_{fc}^{t}||_{2} }\right),
\end{equation}
where \( \mathbf{U}_{conv} \in \mathbb{R}^{N \times N_{2}}\) and \( \mathbf{U}_{fc} \in \mathbb{R}^{N \times N_{1}}\) are learnable parameters. \(\mathbf{h}_{fc}^{t}\) and \(\mathbf{h}_{conv}^{t}\) are projected to the same subspace and \(L_{2}\)-normalized, so that in the learning process, any single unit would not easily dominate the final results. This design allows two asymmetric \glspl{LSTM} to produce hidden states with different dimensions, meaning that two
different \gls{LSTM} structures can be tuned separately and flexibly before building the \gls{Bi-ALSTM}.
Finally, a single FC layer \(g_{\phi}(\cdot): \mathbb{R}^{N} \rightarrow \mathbb{R}^{C}\) with softmax function is used to approximate the probability of the samples belonging to each class.
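The fusion of Eq.~(\ref{eqn:combine}) maps onto a few lines of PyTorch (illustrative sketch; the module and variable names are ours and follow the notation above).

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class HiddenStateFusion(nn.Module):
    """Projects both hidden states to a common subspace, L2-normalizes
    each projection, sums them and applies tanh (illustrative sketch)."""
    def __init__(self, n_conv, n_fc, n_out):
        super().__init__()
        self.U_conv = nn.Linear(n_conv, n_out, bias=False)
        self.U_fc = nn.Linear(n_fc, n_out, bias=False)

    def forward(self, h_conv, h_fc):
        p_conv = F.normalize(self.U_conv(h_conv), p=2, dim=-1)
        p_fc = F.normalize(self.U_fc(h_fc), p=2, dim=-1)
        return torch.tanh(p_conv + p_fc)
\end{verbatim}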
\textbf{Computational complexity}: We first derive the time complexity of a single-step forward pass of LSTM. Given that the time complexity of $W_{x}x_{t}$ is $O(hd)$, it is easy to know that the time complexity inside each nonlinearity $\sigma$ is $O(hd+h^{2}) = O(h(d + h))$. Assuming $\sigma$ and $\tanh$ have a constant time complexity, applying $\sigma$ or $\tanh$ element-wise yields the complexity of $h$, which can be omitted due to the existence of $O(h^{2})$. Therefore, the time complexity of a single-step forward pass equals $O(h(d+h))$. Similarly, given that $W_{h} * \mathcal{X}_{t}$ has time complexity $O(dkc)$, the time complexity of a single-step forward pass of ConvLSTM is $O(dkc + dkc^{2}) = O(dkc^{2})$. Assume the input sequence to Bi-ALSTM has the length $n$. The computational complexity of a single-layer Bi-ALSTM results in $O(n(h(d+h)+dkc^{2}))$.
We use negative log likelihood with \(L_{2}\) regularization as the loss function. \gls{Bi-ALSTM} is stochastically optimized via back propagation through time by the Adam algorithm. The complete training process follows Algorithm \ref{bialstm_training}.
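To make the optimization step concrete, a compact PyTorch-style sketch of a single update consistent with Algorithm \ref{bialstm_training} is given below; names are illustrative, and the explicit $L_{2}$ penalty could equivalently be realized through the optimizer's weight-decay option.
\begin{verbatim}
import torch
import torch.nn.functional as F

def training_step(model, optimizer, x_seq, y_seq, l2_lambda=0.5):
    """One Bi-ALSTM update: NLL of the softmax outputs plus an L2 penalty."""
    logits = model(x_seq)                          # (batch, seq_len, num_classes)
    nll = F.cross_entropy(logits.flatten(0, 1), y_seq.flatten())
    l2 = sum((p ** 2).sum() for p in model.parameters())
    loss = nll + l2_lambda * l2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                               # e.g. torch.optim.Adam
    return loss.item()
\end{verbatim}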
\subsection{Why an All-range Multi-class Classifier is Unfeasible}
Applying supervised methods for \gls{NID} commonly involves training a multi-class classifier that seeks to detect as many types of malicious attacks as possible. However, we argue that this approach would lead to an overfitted model because the \textit{ambiguity of the true attack labels} is widely overlooked. To understand this, consider a classifier trained to differentiate two types of network attacks: SQLmap fuzzing vs. Web password Bruteforcing,
both of which repetitively initiate HTTP requests to a given API. Although the two types of attacks would incorporate different payloads (SQL scripts and user/password pairs respectively), this discrepancy appears negligible on a statistical level, because the contents of payloads would not be extracted. Given that both trigger database I/O operation, the extracted timing information would be hardly differentiable.
Thus, if a trained model can distinguish between such attacks, it has to be overfitted, learning a decision boundary that~is unique to the dataset rather than truly capturing the differences. Attack labels usually indicate the purposes and techniques behind the attacks, but at the network layer the attack realizations, i.e., traffic flows, are not clearly dissimilar. In this regard, using sequential models to distinguish as many types of attacks as possible is unrealistic.
\subsection{Abstract Labeling}
Given that it is hard to correctly predict every type of automated cyber attack with a network-based algorithm, reverting to a binary detector seems sensible. However, since we augment DoS attacks, we decide to use \textbf{abstract labeling} in order to evaluate the augmentation technique and to avoid the aforementioned overfitting issue.
Specifically, we assign a number of abstract, generic labels, including \textit{benign}, \textit{DoS}, \textit{portscanning} and \textit{bruteforcing \& fuzzing}, as ground truth during training.
On one hand, this approach can clearly illustrate the influence of our feature augmentation technique.
On the other hand, the model would not put effort in distinguishing the subtle differences between, e.g., DoS HOIC and DoS LOIC, which may not be separable by a network-based algorithm.
\section{Experiments}
\label{sec:evaluation}
We implement NetSentry\xspace in PyTorch and train the model on a GeForce Titan X GPU. To build the \gls{Bi-ALSTM}, we use an \gls{LSTM} unit for the forward pass and a {ConvLSTM} unit for the backward pass. The \gls{LSTM} unit has the following structure: $dropout(0.5) \rightarrow lstm(65, 48) \rightarrow lstm(48, 48)$, where the arguments in $lstm(\cdot, \cdot)$ denote the input size and the hidden size. The \gls{ConvLSTM} unit encompasses $convlstm1D(1, 3, 3) \rightarrow convlstm1D(3, 6, 3) \rightarrow maxpool()$, in which the arguments in $convlstm1D(\cdot, \cdot, \cdot)$ represent the input channel size, output channel size, and kernel size. The fused hidden states are passed through: $dropout(0.3) \rightarrow MLP(32, 5)$. The $L_{2}$ penalty and learning rate are set to 0.5 and 0.001 respectively.
For the Flow Aggregator/Sequence Generator, apart from our TCP termination fix (see \S\ref{sec:feature_extraction}), we set flow timeout to 30s and subflow duration 5s in CICFLowMeter. The Sequence Generator uses timeout $\tau = 30$s and window size $\alpha=10$.
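The following simplified sketch illustrates how the Sequence Generator logic described above could be realized; the field names (\texttt{src}, \texttt{dst}, \texttt{start\_time}) are hypothetical and the actual tool may differ in detail.
\begin{verbatim}
from collections import defaultdict

def generate_sequences(flows, alpha=10, timeout=30.0):
    """Group time-ordered flows by host pair into sequences of length alpha."""
    buffers, last_seen, sequences = defaultdict(list), {}, []
    for flow in flows:
        key = (flow["src"], flow["dst"])
        # Reset the buffer if this host pair was idle longer than the timeout.
        if key in last_seen and flow["start_time"] - last_seen[key] > timeout:
            buffers[key].clear()
        buffers[key].append(flow)
        last_seen[key] = flow["start_time"]
        if len(buffers[key]) == alpha:
            sequences.append(list(buffers[key]))
            buffers[key].clear()
    return sequences
\end{verbatim}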
\subsection{Datasets}
We experiment with two datasets published by the \gls{CIC}, as described below.
\textbf{CIC-IDS-2017} \cite{sharafaldin2018toward} contains most common cyber attacks, including bruteforcing, heartbleed, botnet, (D)DoS, Infiltration, and Web attacks. Traces were collected in a LAN, with benign traffic generated by profiling normal online behaviors of 25 users on different \glspl{OS}, including Win Vista, Win 7, Win 8, Win 10, Mac OS, and Ubuntu~12.
Attacks are produced by 4 different machines, one running Kali Linux and three running Windows 8.1. The traffic
collection spans 5 working days, and in total 51.1 GB of pcap files are open-sourced.
The feature sets of the corresponding {\tt pcap} files were published, but as we reveal in Section \ref{sec:feature_extraction}, these were extracted incorrectly. Hence, we only use the raw capture files in our experiments.
\textbf{CSE-CIC-IDS2018} \cite{ids2018} was generated
in a much larger environment, where an organizational LAN with five subnets simulates five different departments.
450 machines act as normal users and 50 as attackers. The dataset contains a wider range of benign traffic, including HTTPS, HTTP, SMTP, POP3, IMAP, SSH, and FTP, and contains a larger attack collection (17 types). The dataset spans 10 days.
\textbf{Self-collected traffic} -- since \textit{FTP-Bruteforce} and \textit{DoS-SlowHTTP\-Test} attacks were erroneously collected in CSE-CIC-IDS2018,
the generated traffic merely contains SYN and RST packets. To mitigate this, we collected FTP-brute\-forcing traffic ourselves, generating 4,050 flows. We decide not to collect \textit{DoS-SlowHTTPTest} traffic, since a similar type of attack, i.e., \textit{DoS-Slowloris}, exists in CSE-CIC-IDS2018.
We remove the mislabeled traffic and merge the self-collected traffic with the CSE-CIC-IDS2018 dataset. In the rest of our paper, we use CSE-CIC-IDS2018 to refer to the merged dataset. We employ our revised version of CICFlowMeter to generate network flows based on the {\tt pcap} files. The statistics of both datasets are shown in Table \ref{table:dataset}.
\begin{table}[h]
\small
\centering
\def\arraystretch{1.2}
\begin{tabular}{p{1.35cm}p{1.3cm}p{1.2cm}p{1.2cm}p{1.5cm}}
\Xhline{2\arrayrulewidth}
& \# \mbox{Features} & \# Instances & Anomaly ratio & Automated attack ratio \\ \hline
IDS-2017 & 69 & 2,607,289 & 0.2 & 0.189 \\
IDS-2018 & 69 & 8,786,169 & 0.1806 & 0.1802 \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\caption{Statistics of datasets used for experimentation. ID-based features in Table \ref{table:features} are not included in the training; `protocol' is one-hot encoded. Thus, only 65 features are used in total.}
\label{table:dataset}
\end{table}
\subsection{Cross-Evaluation}
We adopt a rigorous evaluation methodology based on \textbf{cross-evaluation}, aiming to show the true generalization ability of our design. Normally, a dataset is split into training and testing subsets, and the results on the test set are compared across different algorithms. However, network environments are heterogeneous and data collected in one environment may not accurately reflect the diversity seen in practice. To test if an algorithm can truly distinguish the same type of malicious traffic in a different network topology, we also evaluate it on a second, unseen dataset. CSE-CIC-IDS2018 is split into training (70\%) and test (30\%) sets. To maintain time consistency, training data is selected from events that took place before the test data.
CIC-IDS-2017 is purely used for cross-evaluation, after the model was trained on the former.
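As an illustration, a time-consistent split can be obtained by ordering flows chronologically before partitioning; the sketch below assumes a pandas DataFrame with a \texttt{timestamp} column, which is an assumption rather than a description of our exact tooling.
\begin{verbatim}
import pandas as pd

def temporal_split(df, train_frac=0.7):
    """Split so that every training event precedes every test event."""
    df = df.sort_values("timestamp")
    cut = int(len(df) * train_frac)
    return df.iloc[:cut], df.iloc[cut:]
\end{verbatim}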
\begin{table*}[t]
\footnotesize
\centering
\bgroup
\def\arraystretch{1.2}
\begin{tabular}{c|ccc|ccc}
\Xhline{2\arrayrulewidth}
\multirow{2}{*}{Algorithm} & \multicolumn{3}{c|}{CSE-CIC-IDS2018} & \multicolumn{3}{c}{CIC-IDS-2017 (X-eval)} \\
& precision & recall & F1 & precision & recall & F1 \\ \hline
RIPPER & 0.9983 & 0.0981 & 0.1786 & 0.0873 & 0.0106 & 0.0190 \\
Decision Tree & 0.9989 & 0.9990 & 0.9990 & 0.5385 & 0.3717 & 0.4398 \\
MLP & 0.9989 & 0.9962 & 0.9976 & 0.6736 & 0.4631 & 0.5435 \\
CNN & 0.9947 & 0.9951 & 0.9949 & 0.7705 & 0.6344 & 0.6958 \\
Autoencoder & 0.7783 & 0.7500 & 0.7639 & 0.4362 & 0.4197 & 0.4278 \\
OC-NN & 0.9722 & 0.5310 & 0.6868 & 0.7844 & 0.5136 & 0.6208 \\
Kitsune & 0.6310 & 0.6081 & 0.6193 & 0.4086 & 0.3932 & 0.4007 \\
DAGMM & 0.8666 & 0.8253 & 0.8454 & 0.4159 & 0.3116 & 0.3576 \\
Bi-LSTM & 0.9990 & 0.9979 & 0.9985 & 0.7258 & 0.4209 & 0.5317 \\
CNN-Bi-LSTM& \textbf{0.9996} & 0.9982 & 0.9989 & 0.8813 & 0.3750 & 0.5261 \\
Bi-ConvLSTM & 0.9984 & 0.9971 & 0.9977 & 0.8721 & \textbf{0.9693} & 0.9178 \\
Bi-ALSTM & 0.9994 & \textbf{0.9990} & \textbf{0.9992} & \textbf{0.9116} & 0.9446 & \textbf{0.9275} \\ \hline
\multicolumn{1}{p{1.75cm}|}{Algorithm + } & \multicolumn{3}{c|}{CSE-CIC-IDS2018} & \multicolumn{3}{c}{CIC-IDS-2017 (X-eval)} \\
\multicolumn{1}{p{1.75cm}|}{augmentation} &
\multicolumn{1}{p{0.8cm}}{precision} & recall & \multicolumn{1}{p{0.65cm}|}{F1} & precision & recall & F1 \\ \hline
RIPPER & 0.9980 & 0.0934 & 0.1709 & 0.4998 & 0.1837 & 0.3687 \\
Decision Tree & 0.9989 & \textbf{0.9993} & \textbf{0.9991} & 0.5897 & 0.8556 & 0.6914 \\
MLP & 0.9989 & 0.9963 & 0.9976 & 0.7540 & 0.8690 & 0.8071 \\
CNN & 0.9925 & 0.9847 & 0.9886 & 0.7453 & 0.8687 & 0.8021 \\
Bi-LSTM & 0.9991 & 0.9956 & 0.9973 & 0.8555 & 0.9777 & 0.9125 \\
CNN-Bi-LSTM & 0.9996 & 0.9966 & 0.9981 & 0.8479 & 0.9683 & 0.9041 \\
Bi-ConvLSTM & 0.9996 & 0.9975 & 0.9985 & 0.8728 & 0.9780 & 0.9222 \\
Bi-ALSTM & \textbf{0.9997} & 0.9976 & 0.9987 & \textbf{0.9190} & \textbf{0.9800} & \textbf{0.9485} \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\egroup
\caption{Precision, recall, and F1 score for Bi-ALSTM and benchmarks. NB: only supervised algorithms can be trained with augmented data and only HTTP (D)DoS attacks are augmented; semi-supervised methods only require benign traffic for training.}
\label{table:overall_results}
\end{table*}
\subsection{Benchmarks}
For comparison, we implement a range of benchmarks, including basic ML/DL structures (\gls{MLP}, \gls{CNN}, autoencoder, RIPPER \cite{lee1998data}, Decision Tree); state-of-the-art anomaly/ intrusion detectors, i.e., \gls{OC-NN}~\cite{ruff2018deep}, Kitsune/KitNET \cite{mirsky2018kitsune} and \gls{DAGMM} \cite{zong2018deep}; and three \gls{Bi-LSTM} \mbox{variants~\cite{jiang2020network}.}
\gls{OC-NN} \cite{ruff2018deep} and \gls{DAGMM} \cite{zong2018deep} are offline semi-supervised algorithms for general anomaly detection.
\gls{OC-NN} aims to learn a mapping for the benign samples to a kernel space where the majority of them can be enclosed by a hypersphere.
During the testing phase, the distances from the samples to the center of the hypersphere represent the anomaly score of the data.
Different from \gls{OC-NN}, \gls{DAGMM} models the benign data from a probabilistic perspective with a mixture of Gaussian distributions.
The negative probability of the data being sampled from the \gls{PDF} represents the anomaly score.
Kitsune \cite{mirsky2018kitsune} is an online semi-supervised \gls{NIDS}. It uses an ensemble of shallow autoencoders to learn the features of benign data in different subspaces; a final autoencoder fits the correlations of the reconstruction errors from the shallow autoencoders.
The neural architecture is named KitNET.
During testing, the reconstruction errors are computed to represent the degree of abnormality. For a fair comparison, KitNET is trained in an offline manner with more than one epoch.
For semi-supervised algorithms (\gls{OC-NN}, \gls{DAGMM}, KitNET and Autoencoder), an anomaly ratio \( \alpha \) needs to be preset, indicating the proportion of anomalous samples, and during the testing phase, the data with the top \( \alpha \times 100\% \)
of the anomaly scores are classified as anomalous. The anomaly ratio is set to 0.189 and 0.1802 on CIC-IDS-2017 and CSE-CIC-IDS2018 respectively, which is the same percentage of automated attacks in the two datasets.
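As a sketch, this thresholding rule amounts to flagging the scores above the $(1-\alpha)$ quantile:
\begin{verbatim}
import numpy as np

def flag_top_alpha(scores, alpha):
    """Label the alpha*100% highest anomaly scores as anomalous."""
    threshold = np.quantile(scores, 1.0 - alpha)
    return scores > threshold
\end{verbatim}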
RIPPER \cite{lee1998data} and Decision Tree are two basic machine learning models, where the first one aims to generate a simple ruleset for classifications while the second embeds rules in a tree by recursively finding the best splits.
The structures of \gls{Bi-LSTM} and Bi-\gls{ConvLSTM} resemble the units in \gls{Bi-ALSTM}. For CNN-Bi-LSTM, an extra \gls{CNN} block, with the structure:
$Conv1D(1, 3, 3)$ $\rightarrow$ $MaxPooling$ $\rightarrow$ $Conv1D(3, 6, 3)$ $\rightarrow$ $MaxPooling$, is implemented. The arguments in $Conv1D(\cdot, \cdot, \cdot)$ represent input channels, output channels, and kernel sizes.
Prior to testing, we retrain all benchmarks with all the features in the CSE-CIC-IDS2018 dataset, which is richer than the datasets used for training in the original papers.
\subsection{Evaluation Metrics}
The average precision, recall and F1 score are commonly used to evaluate the performance of anomaly detection algorithms. These metrics are computed from the \gls{TP}, \gls{FP}, \gls{TN} and \gls{FN} counts; precision and recall are given by
$precision$ $=$ $TP/(TP+FP)$, $recall$ $=$ $TP/(TP+FN)$.
The precision indicates how likely the alarms raised by the algorithm are to be true, and the recall measures how sensitive the algorithm is towards anomalies. There exists a trade-off between precision and recall, and to obtain an overall performance measure, their harmonic average is computed, i.e., the F1 score:
$F1 = 2 \times \frac{precision \;\times\; recall}{precision \;+\; recall}$.
We do not measure accuracy, i.e., the percentage of correctly classified samples, as it is unlikely to reveal the algorithms' true NID performance: consider a dataset with 80\% benign and 20\% malicious instances; a model that classifies everything as benign has the same accuracy as a model that correctly recognizes all but 20\% of the benign~traffic.
For the ML-based algorithms that output an anomaly score for each test instance rather than just the predicted class, system administrators may choose a threshold higher than 0.5, which guarantees that the classifier has a lower \gls{FPR}. We plot the \gls{ROC} curve for sequential models, to evaluate their performance when the anomaly threshold is varied in $[0, 1]$. The \gls{ROC} curve is obtained by plotting the \gls{TPR} against \gls{FPR}. The closer to 1 the \gls{AUC} is, the better the classifier performs.
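These quantities can be computed, for instance, with scikit-learn, as sketched below; \texttt{y\_true}, \texttt{y\_pred}, \texttt{y\_true\_binary} and \texttt{y\_score} are placeholders for ground-truth labels, predicted labels, binary benign/malicious labels and anomaly scores.
\begin{verbatim}
from sklearn.metrics import precision_recall_fscore_support, roc_curve, auc

def summarize(y_true, y_pred, y_true_binary, y_score):
    """Macro-averaged precision/recall/F1 plus ROC-AUC for the binary view."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    fpr, tpr, _ = roc_curve(y_true_binary, y_score)
    return precision, recall, f1, auc(fpr, tpr)
\end{verbatim}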
Given that our datasets consist of multiple types of cyber attacks, we further plot the \gls{ECDF} of the anomaly score with respect to each type of traffic on the crossed evaluated dataset, to illustrate the confidence of each sequential model.
Beyond the metrics for classification performance, we also care about the computational overhead of our design, and therefore report \gls{MACs}, the number of parameters and the concurrent processing capacity of each model on an edge GPU. \gls{MACs} and the parameter counts reveal the complexity of different algorithms at a micro level, while the concurrent processing capacity on GPU can reflect the computational bottlenecks.
\subsection{Performance without Augmented Data}
We summarize our comparison in terms of threat detection performance between our Bi-ConvLSTM/-ALSTM models and the benchmarks considered, in Table~\ref{table:overall_results}. In the upper half, the different algorithms are trained on non-augmented data. The performance of the benchmark algorithms and those adopted by our NetSentry\xspace is similar on the CSE-CIC-IDS2018 dataset, most of them attaining average metrics above 0.99. CNN-Bi-LSTM slightly outperforms other algorithms in terms of precision, while \gls{Bi-ALSTM} yields the highest recall and F1 score. An interesting finding is that the semi-supervised algorithms for general anomaly detection may not be suitable for network intrusion detection. Autoencoder, OC-NN and DAGMM cannot compete with basic supervised ML algorithms. One of the core assumptions for semi-supervised anomaly detection is that the algorithm can learn the characteristics of benign data, by estimating the probability, reconstructing the benign samples or finding an appropriate hyper-boundary enclosing them. However, network traffic is highly heterogeneous, serving various applications, such as email, web browsing, and streaming, over different protocols.
It remains questionable whether the aforementioned assumption holds over such a broad range of `benign data'. Besides, detecting malicious traffic, especially automated attacks (which is our objective), is a time-sensitive task, so observing a single instance may be insufficient to make reliable decisions. Existing anomaly detection algorithms tend to ignore this, which leads to modest results.
\begin{figure}[b]
\centering
\includegraphics[width=0.95\columnwidth]{images/roc-auc.pdf}
\caption{\gls{ROC} curves of \gls{LSTM}-based algorithms. \gls{AUC} behind~labels. Models trained w/o (left) and w/ augmented data (right).}
\label{fig:roc_auc}
\end{figure}
\emph{The advantage of Bi-ConvLSTM and \gls{Bi-ALSTM} can be clearly seen on cross-evaluation results, where our models maintain consistently competitive performance.} Both attain F1 scores above 90\%, while other supervised algorithms, including \gls{Bi-LSTM} and CNN-Bi-LSTM exhibit a significant performance drop (F1 score around 50\%). We notice that though both CNN-LSTM and \gls{ConvLSTM} are proposed to handle spatiotemporal data, there is an obvious difference in performances, both in terms of F1 score (Table~\ref{table:overall_results}) and \gls{ROC} (Figure~\ref{fig:roc_auc}) on intrusion detection. As shown in Figs~\ref{fig:confuxion} (a), (b), \gls{Bi-LSTM} and CNN-Bi-LSTM do not learn a reliable decision boundary between benign traffic and \gls{DoS} attacks without the augmented data. Bi-ConvLSTM clearly outperforms them, yet still exhibits a high probability of classifying \gls{DoS} as port scanning attacks, as illustrated in Figure~\ref{fig:confuxion}(c), whereas \gls{Bi-ALSTM} is the most reliable (Figure~\ref{fig:confuxion}(d)).
\begin{figure}[t!]
\centering
\includegraphics[trim=10 10 10 10, clip, height=0.9\columnwidth, angle=270]{images/all_cms-r.pdf}
\caption{Normalized confusion matrices (row values add to 1) for LSTM-based models cross-evaluated on CIC-IDS-2017. Models trained without (left) and with (right) the augmented dataset. B represents \textbf{B}enign, D \textbf{D}oS, BF \textbf{B}ruteforcing and \textbf{F}uzzing, and PS \textbf{P}ort\textbf{S}canning. Numbers on diagonals are recalls.
}
\label{fig:confuxion}
\end{figure}
\subsection{Performance with Augmented Data}
The data augmentation procedure we propose is highly effective in helping the models generalize well. \emph{When the supervised models are trained with the augmented dataset, a remarkable performance gain can be observed in the cross-evaluation results} (Table~\ref{table:overall_results}, bottom half). The F1 scores of the benchmarks increase by at least 16\%, and \gls{Bi-LSTM} and CNN-Bi-LSTM even jump to 90\%. The improvements of Bi-ConvLSTM and \gls{Bi-ALSTM} are less noticeable since outstanding results can be achieved even without augmented data, but still, the former reaches the highest recall and \gls{Bi-ALSTM} is the most robust in terms of overall performance.
Observing confusion matrices in the second column in Figure~\ref{fig:confuxion}, all models reveal roughly the same pattern, as opposed to the corresponding results on the first. This confirms that \emph{augmentation encourages models to learn associating timing info rather than payload features} in the classification task.
\subsection{Impact of Feature Arrangement}
We investigate the influence of the feature arrangement and kernel size on the performance of \gls{ConvLSTM}. For this, we experiment with 3 different kernel sizes, namely (3, 5, 7), and two sets of feature arrangements. Specifically, the 1D feature vectors are logically ordered and randomly shuffled. Note that most of the features listed in Table~\ref{table:features} in the Appendix are computed for forward traffic only, backward traffic only, and bidirectionally. Logically ordered features means that they follow an alternating order of forward, backward, and bi-direction. In each experiment, we train both unidirectional \gls{ConvLSTM} and Bi-\gls{ConvLSTM} with the augmented dataset for 10 epochs and repeat the process 5 times. The mean and error bars of the F1 score on the cross-evaluation dataset (CIC-IDS-2017) are illustrated in Figure~\ref{fig:conv}.
Intuitively, one might expect \gls{ConvLSTM} would only work with sequential data possessing clear spatial information, such as videos. However, we find that \gls{ConvLSTM} is robust to 1D traffic features regardless of their arrangement. Indeed, the results in Figure~\ref{fig:conv} demonstrate that there is no significant gap between the model trained with logically ordered features or randomly shuffled ones. For most cases, the mean in the former case is slightly higher than in the latter, while the error bars show a large degree of overlap.
\label{sec:featurearrange}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{images/conv_kernel.pdf}
\vspace*{-1em}
\caption{The F1 scores attained by (Bi-)\gls{ConvLSTM} with different kernel sizes and different order of features.}
\label{fig:conv}
\end{figure}
We also find that although F1 scores tend to rise slightly with larger kernel sizes when the \gls{ConvLSTM} is trained with the ordered feature arrangement, this increase is not significant. Considering the growth in computation overhead when using a larger kernel size, we argue that training both Bi-\gls{ConvLSTM} and \gls{Bi-ALSTM} with a kernel size equal to 3 (which was also the case for the results presented in Table \ref{table:overall_results}) is sufficient.
\subsection{Performance Gains of \gls{Bi-ALSTM}}
\gls{Bi-ALSTM} not only yields the highest overall detection rate (recall), but also reliably detects each type of cyber attack, as illustrated in Figure \ref{fig:recall_by_type}. Both \gls{Bi-LSTM} and Bi-ConvLSTM have difficulty recognizing web bruteforcing, \gls{XSS}, and Slowloris attacks, whereas \emph{\gls{Bi-ALSTM} attains up to 3$\times$ higher detection rates}. The only exception is SQL injection, which none of the algorithms can detect. This is because there are only 53 instances of this attack, merely accounting for 0.0006\% of the entire dataset, which is insufficient for the model to learn a reliable decision boundary for classification.
\begin{figure}[h!]
\centering
\includegraphics[trim=0 0 0 25, clip, width=\linewidth]{images/recall_by_type.pdf}
\vspace*{-1em}
\caption{Detection rate (recall) of each type of traffic evaluated on CSE-CIC-IDS2018 (top) and CIC-IDS-2017 (bottom).}
\label{fig:recall_by_type}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{images/ecdf.pdf}
\vspace*{-1em}
\caption{ECDF of the anomaly scores with respect to each type of traffic in CIC-IDS-2017 given by \gls{Bi-LSTM}, Bi-ConvLSTM and \gls{Bi-ALSTM}. All models trained with augmented dataset.}
\label{fig:ecdf}
\end{figure}
We evaluate the quality of the anomaly scores approximated by Bi-(Conv)LSTM and \gls{Bi-ALSTM}. The anomaly score is the value output by the model. Since the activation function of the last layer is softmax, the output is squeezed between $[0, 1]$ and the higher the value, the more anomalous a flow is regarded. System administrators routinely customize an anomaly threshold to lower the \gls{FPR}. Figure \ref{fig:ecdf} plots the \gls{ECDF} of each type of traffic in CIC-IDS-2017 given by the three algorithms, in which the blue line corresponds to benign traffic. The black dashed line is the threshold that sets the \gls{FPR} to 1.5\%, and the area under the other lines to the left of the threshold line represents the proportion of attacks that would be misclassified. We find that \gls{Bi-ALSTM} delivers the lowest \gls{FNR} (2.63\%) compared with Bi-LSTM (10.17\%) and Bi-ConvLSTM (5.87\%).
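The threshold and the corresponding miss rate can be derived directly from the empirical score distributions, as in the sketch below (\texttt{benign\_scores} and \texttt{attack\_scores} are placeholder arrays of per-flow anomaly scores).
\begin{verbatim}
import numpy as np

def fnr_at_fpr(benign_scores, attack_scores, target_fpr=0.015):
    """Pick the threshold that keeps the FPR at target_fpr; report the FNR."""
    threshold = np.quantile(benign_scores, 1.0 - target_fpr)
    return float(np.mean(attack_scores <= threshold))
\end{verbatim}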
\subsection{Computational Overhead}
While NetSentry\xspace is primarily designed as an offline \gls{NIDS}, employing it for online NID is also feasible. Table~\ref{table:computeMAC} details the \gls{MACs} the benchmark models and our Bi-ConvLSTM/-ALSTM structures require for a single traffic flow inference, as well as their number of parameters. (CNN-)\gls{Bi-LSTM} are the most computationally expensive, given that multiple fully-connected layers are embedded in the LSTM unit. In contrast, \emph{Bi-ConvLSTM/-ALSTM are relatively lightweight, both involving fewer computations and parameters}. Deploying NetSentry\xspace as an online system next to routers or organizational gateways equipped with a GPU or TPU should thus be straightforward.
Given that edge AI platforms are now available, e.g., Nvidia Jetson Nano \cite{jetson}, running NetSentry\xspace on constrained small-business/ home routers is within reach. Results in Table \ref{table:computeMAC} reveal that Bi-ConvLSTM/-ALSTM can handle 4.5/3.5 Mflows per second, which confirms our practicality assessment.
\begin{table}[t]
\small
\centering
\bgroup
\def\arraystretch{1.2}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{c|ccc}
\Xhline{2\arrayrulewidth}
Model & \gls{MACs}(k) & Parameters(k) & \begin{tabular}[c]{@{}l@{}}Edge GPU\\(Mflow/s)\end{tabular}\\ \hline
MLP & 5.7 & 5.8 & 41.4\\
CNN & 3.2 & 1.5 & 73.7\\
Autoencoder & 10.3 & 10.6 & 22.9\\
OC-NN & 5.2 & 5.2 & 45.4\\
Kitsune & 0.7 & 0.8 & 337.1\\
DAGMM & 5.3 & 5.4 & 44.5\\
Bi-LSTM & 102.2 & 100.7 & 2.3\\
CNN-Bi-LSTM & 116.8 & 112.7 & 2.0\\
Bi-ConvLSTM & 51.8 & 2.7 & 4.5\\
Bi-ALSTM & 66.8 & 41.4 & 3.5\\
\Xhline{2\arrayrulewidth}
\end{tabular}
\egroup
\caption{Computation overhead in terms of \gls{MACs} per inference instance and number of parameters for each model. The last column presents the number of flows (in millions) an edge GPU (Jetson Nano) can process per second.}
\label{table:computeMAC}
\end{table}
Another key merit of NetSentry\xspace is that the system inherently analyses consecutive traffic between pairs of hosts, which is easy to integrate into an \gls{IPS}, without the need for collecting statistics of potentially malicious hosts until reaching full confidence about decisions to enforce. Recall that our system directly gives prediction results about the traffic flows generated between two hosts during a short interval, offering comprehensive contexts to the \gls{IPS} with low \gls{FP} risks. Dynamic firewall rules can also be updated effortlessly, since the atomic processing input of NetSentry\xspace originates from the same pair of hosts.
\section{Discussion}
\label{sec:evasion}
Lastly, we discuss the robustness of our system against different evasion attacks.
\textbf{IP Spoofing:}
IP addresses can be spoofed with little effort, which is also a common approach to generating \gls{DDoS} attacks. Flow-based \gls{NIDS} may be ineffective in preventing such traffic because `identities' are changed frequently. However, it is worth noting that IP spoofing can only be used to initiate stateless \gls{DDoS}, given that any response from the victim is not guaranteed to be routed back to the attacker. Existing countermeasures, such as TCP half-open connection limits and ICMP thresholds, are capable of mitigating those issues. For application-layer \gls{DDoS}, attackers must use real IP addresses to maintain connection state, in which case NetSentry\xspace cannot be fooled.
\textbf{{Traffic Encryption:}}
Traffic encryption was proposed for evasion attacks \cite{stinson2008towards}, whereby a malicious payload is hidden in an encrypted channel. This is however only effective against \gls{NIDS} that examine the syntax of network communications, such as BotHunter \cite{gu2007bothunter}. NetSentry\xspace is designed to extract and analyze timing- and protocol-based statistics. Hence, manipulation of payload contents cannot bypass our design.
\textbf{{Adversarial Perturbations:}}
Adding small perturbations to input data may lead to misclassification by \gls{ML} models \cite{co2019}.
Nevertheless, existing adversarial attacks often require access to model gradients, structures, or numerous queries for weight approximation. In reality, a \gls{ML}-based \gls{NIDS} would neither disclose the details of its neural model nor tolerate countless queries.
Zhang et al. demonstrate the possibility of attacking \gls{ML}-based intrusion detection algorithms with heuristic-based methods without knowing the model's internals \cite{zhang:2020}, but it would still take 100$\sim$11,000 queries to generate one adversarial sample. Note that NetSentry\xspace is intended for continuous and repetitive network attacks, meaning that similar queries would trigger alarms much earlier than discovering a valid adversarial sample.
Adversarial perturbations are likely to modify every individual feature to create malicious samples, which is not always practical in the networking domain, since the modified flows are not guaranteed to stem from any real traffic. Consider instead a more pragmatic attack scenario where the adversary slows the attack speed by increasing the time between the packets sent. To evaluate the potential impact of this adaptive attack on NetSentry\xspace, we first pre-process both CSE-CIC-IDS2018 and CIC-IDS-2017 datasets as follows: (1) we group the packets belonging to malicious activity in the {\ttfamily pcap} traces into flows; (2) in each flow, we alter the timestamps of the attacker's packets by expanding the time gap between the previously received/sent packet and the current one, by a fixed multiplier $ m \in \{1, 2, 4, 8\}$ (packets are not delayed if $m=1$); and (3) we alter the timestamps of the victim's packets to ensure the time gaps between these and the attacker's packets still match those in the original flows.
PortScan attacks are excluded from both datasets because the majority only consists of 1--2 packets, and applying the logic above will not change their timestamps at all.
As such, we obtain three altered versions of the CSE-CIC-IDS2018 and CIC-IDS-2017 datasets. Each variant of CSE-CIC-IDS2018 is split into a training set (70\% of samples) and a test set (30\%), and we augment all the training sets as detailed in \S\ref{sec:aug}, then retrain the Bi-ALSTM. The altered CIC-IDS-2017 datasets are used for cross-evaluation.
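To make the gap-expansion step (2) concrete, the sketch below rescales the inter-arrival gaps of the attacker's packets by a factor $m$; it is a simplified illustration, and the actual pre-processing additionally realigns the victim's packets as in step (3).
\begin{verbatim}
def slow_down(attacker_timestamps, m):
    """Expand the gaps between consecutive attacker packets by a factor m."""
    if not attacker_timestamps:
        return []
    out = [attacker_timestamps[0]]
    for prev, curr in zip(attacker_timestamps, attacker_timestamps[1:]):
        out.append(out[-1] + m * (curr - prev))
    return out
\end{verbatim}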
We measure the Percentage Error (PE) with respect to the F1 score, to understand to what extent the model would degrade when facing malicious traffic that is purposely slowed down by different factors, to attempt evasion. Formally, PE wrt. F1 score is defined as:
\[
PE^{F1}_{i, j} = \frac{F1_{i, j} - F1_{i, i}}{F1_{i, i}} \times 100\%,
\]
where the first subscript denotes the multiplier $m=i$ applied in the dataset used for model training, and the second subscript the slow-down factor $m=j$ applied in the set used for testing. As shown in Table \ref{table:slowdown}, we find that the maximum PE on CSE-CIC-IDS2018 is never above 0.35\% and the maximum PE on CIC-IDS-2017 is below 0.58\%. This demonstrates that \emph{manipulating the attack timing has no effective impact on the detection performance of the proposed NetSentry\xspace.}
\begin{table}[t]
\small
\centering
\bgroup
\def\arraystretch{1.0}
\begin{tabular}{c|c|cccc}
\Xhline{2\arrayrulewidth}
& & \multicolumn{4}{c}{Test (CSE-CIC-IDS-2018)} \\ \cline{2-6}
& m & 1 & 2 & 4 & 8 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Train\\ (IDS-2018)\end{tabular}} & 1 & 0 & -0.08\% & -0.02\% & -0.02\% \\
& 2 & -0.32\% & 0 & -0.16\% & -0.24\% \\
& 4 & -0.35\% & 0 & 0 & -0.01\% \\
& 8 & -0.09\% & +0.2\% & 0 & 0 \\ \Xhline{2\arrayrulewidth}
& & \multicolumn{4}{c}{Cross Test (CIC-IDS-2017)} \\ \cline{2-6}
& m & 1 & 2 & 4 & 8 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Train\\ (IDS-2018)\end{tabular}} & 1 & 0 & -0.05\% & -0.11\% & -0.58\% \\
& 2 & +0.1\% & 0 & +0.2\% & -0.39\% \\
& 4 & 0 & -0.01\% & 0 & -0.35\% \\
& 8 & -0.49\% & -0.56\% & -0.11\% & 0 \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\egroup
\caption{Percentage Error wrt. F1 score. Attacker's packets slowed down by factors $\{1, 2, 4, 8 \}$. $m=1$ for original timing. }
\label{table:slowdown}
\end{table}
\section{Related Work}
Network intrusion detection has been the focus of extensive research in the security community. In what follows, we briefly discuss the most relevant work related to ours, highlighting limitations of prior approaches and similarities with the proposed NetSentry\xspace, where appropriate.
\textbf{Defenses through Offensive Footprint Profiling.} Modeling the unique malicious nature of network anomalies is effective for detection. BotHunter \cite{gu2007bothunter} builds infection dialogues to describe the dynamic process of Botnet infection, then employs modularized detection engines to identify the footprint of each stage in an attack. BotSniffer \cite{gu2008botsniffer} identifies bot activity by highlighting the spatiotemporal correlations of Command and Control (C\&C) traffic originating from pre-programmed behaviors.
Profiling malicious code execution paths plays an important role in detecting malware \cite{kolbitsch2009effective, naderi2019malmax}. Likewise, stealth DDoS amplification can be fingerprinted by
its unique two-stage behavior (i.e., scan and attack) \cite{krupp2016identifying}. These contributions demonstrate that modeling the potential links between different attack phases has merit in practice.
Unlike previous works, here we reveal how \emph{different stages are common between different large-scale attacks} and why breaking their sequence is essential to thwarting intrusions.
\textbf{Time-invariant \gls{ML} for \gls{NID}.}
\gls{DL}-based \glspl{NIDS} learn illicit traffic patterns through a spectrum of algorithms, replacing the explicit attack modeling methodology introduced previously. In detecting anomalies, such algorithms largely perform analysis on a per-sample basis, i.e., they use the statistical features of a traffic flow to determine its nature, rather than exploring any potential correlations in network traffic.
\textit{Supervised Learning} approaches, including RIPPER \cite{lee1998data}, \gls{SVM} \cite{yi2011incremental},
and Random Forest \cite{sangkatsanee2011practical},
treat anomaly detection as a classification problem, seeking a decision boundary between benign and malicious traffic. \textit{Semi-supervised Learning} methods
discard anomalous samples during training, and only learn patterns of benign traffic. Kitsune \cite{mirsky2018kitsune} learns to reconstruct benign data via encoder ($\mathcal{E}$) and decoder ($\mathcal{D}$) networks. Samples with high reconstruction errors, i.e., $||x - \mathcal{D}(\mathcal{E}(x)) ||_{2}$, are deemed malicious. One-Class Deep SVDD \cite{ruff2018deep} assumes that benign samples can be enclosed by a hyper-sphere, whereas anomalous ones lie far from its center. Thus, Deep SVDD learns a non-linear transformation that maps innocuous samples into a feature space where the majority of them can be surrounded by a small hyper-sphere. Statistical approaches assume that benign data are, by nature, densely distributed in the feature space, while anomalies (outliers) are scattered. Dense areas can be approximated by Deep Gaussian Mixture models \cite{zong2018deep} or Generative Adversarial Networks \cite{li2019mad}.
\textbf{Time-sensitive \gls{ML} for \gls{NID}} relies on temporal context along with a sample, to detect any intrusion.
\gls{NIDS} that employ this approach are scarce.
More commonly, it is \gls{HIDS} \cite{du2019lifelong, shen2018tiresias, su2019robust, liu2020deep} that utilize time-sensitive models, such as \gls{LSTM} and \gls{RNN}, because the target data (system calls, logs and security events) present obvious semantic meaning and potential temporal dependencies. Attention-based Graph Neural Networks \cite{deng2021graph} can also be used to model high-dimensional time-series data and spot anomalies. Alternatively, USAD \cite{audibert2020usad} handles time-sensitive tasks by segmenting time-series data into fixed-size windows, and uses adversely trained autoencoders to detect intrusions or anomalies.
Recent studies attempt to model temporal correlations within network attacks and propose a range of RNN-based algorithms \cite{diro2018leveraging, li2018semantic}. However, an appropriate threat model detailing what temporal information is relevant to \gls{NID} is missing.
Moreover, the training inputs are often randomly sampled, which suppresses relevant temporal information and makes \gls{NID} effectiveness questionable. Our NetSentry\xspace design sets to address this particular issue and \emph{takes a dynamic view to cyber attacks, so as to identify possible temporal relationships that exist among different types of attacks}, thereby building a well-directed defensive approach.
\section{Future Work}
As we discuss in Section \ref{sec:evasion}, existing adversarial attacks on \gls{NIDS} add perturbations to the statistical features of traffic flows. There is no guarantee that perturbed features can be mapped back to a sequence of packets to be transmitted in practice. It remains unclear whether conducting adversarial attacks by directly shaping consecutive packet sizes and inter-arrival times can deceive ML-based NIDS. We deem this topic as important, since it could further shed light on the robustness and reliability of our method.
On the other hand, \gls{Bi-ALSTM} is a supervised algorithm that demands a significant amount of data for training, but acquiring up-to-date datasets is not always feasible, given the stealth nature of cyber attacks. Unfortunately, existing semi-supervised algorithms still focus on per-flow classification and neglect temporal context, resulting in the undesirable performance seen in Table \ref{table:overall_results} (Autoencoder, OC-NN, Kitsune, DAGMM). Instead of approximating the distribution of benign flows, estimating the stochastic process of consecutive benign traffic may provide higher reliability, which is also an interesting topic for future study.
\section{Conclusions}
In this paper, we show that large-scale, potentially high-impact network threats can be tackled in their early stages, provided that the unique temporal dependencies of malicious flows are correctly recognized, and we propose NetSentry\xspace to effectively detect such incipient attacks. NetSentry\xspace incorporates a novel data augmentation technique to enhance the generalization ability of supervised algorithms, and we design an ensemble \gls{Bi-ALSTM} as the core intrusion detection logic. Extensive results demonstrate that our ensemble structure outperforms a wide range of benchmarks, attaining up to 3$\times$ higher detection rates, under different network environments. Finally, we discuss computation overhead and robustness to evasion attacks, making the case for the feasibility of deploying NetSentry\xspace alongside threat prevention logic in real-world settings.
\section*{Acknowledgments}
This material is based upon work supported
by Arm Ltd and Scotland's Innovation Centre for sensing, imaging and Internet of Things technologies (CENSIS).
\bibliographystyle{ieeetr}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,813 |
exports.greaterThanZero = require("./cga/greater-than-zero-1d.js");
exports.isEqual = require("./cga/is-equal-1d.js");
exports.isZero = require("./cga/is-zero-1d.js");
//2D Functions
exports.area2 = require("./cga/area2-2d.js");
exports.convexHull2 = require("./cga/convex-hull-2d.js");
exports.cross2 = require("./cga/cross-2d.js");
exports.expandPolygon2 = require("./cga/expand-polygon-2d.js");
exports.inCone2 = require("./cga/in-cone-2d.js");
exports.intersection2 = require("./cga/intersection-2d.js");
exports.intersects2 = require("./cga/intersects-2d.js");
exports.intersectsProper2 = require("./cga/intersects-proper-2d.js");
exports.isBetween2 = require("./cga/is-between-2d.js");
exports.isColinear2 = require("./cga/is-colinear-2d.js");
exports.isDiagonal2 = require("./cga/is-diagonal-2d.js");
exports.isDiagonalie2 = require("./cga/is-diagonalie-2d.js");
exports.isEqual2 = require("./cga/is-equal-2d.js");
exports.isLeft2 = require("./cga/is-left-2d.js");
exports.isLeftOn2 = require("./cga/is-left-on-2d.js");
exports.triangulatePolygon2 = require("./cga/triangulate-polygon-2d.js");
//3D Functions
exports.isColinear3 = require("./cga/is-colinear-3d.js");
| {
"redpajama_set_name": "RedPajamaGithub"
} | 6,072 |
SVHC previews next Medical Matters Weekly
Vermont Business Magazine Southwestern Vermont Health Care's (SVHC) Medical Matters Weekly with Dr. Trey Dobson, a weekly interactive, multiplatform medical-themed talk show, will feature Family Medicine Physician Jennifer Baker-Porazinski, MD, of Twin Rivers Medical, P.C. in Hoosick Falls, NY, as a guest on its March 17 show. The two will discuss Dr. Baker-Porazinski's personal experience with COVID-19 and her interest in integrative medicine.
The show is produced with cooperation from Catamount Access Television (CAT-TV) and airs live at 2 p.m. on Wednesdays. Viewers can see Medical Matters Weekly live on Facebook at facebook.com/svmedicalcenter and facebook.com/CATTVBennington. Those viewing on Facebook will be able to contribute questions through the chat function.
Dr. Baker-Porazinski was valedictorian of her medical school class at Poznan University of Medical Sciences in Poland. She received her bachelor's degree in biology from Alfred University in New York State. She completed a residency with St. Clare's Family Practice Residency Program in Schenectady, New York and is certified by the American Board of Family Medicine. Dr. Baker-Porazinski completed a fellowship in integrative medicine at the University of Arizona and is a board certified acupuncturist.
The program's host, Trey Dobson, MD, is an Emergency Medicine physician with Dartmouth-Hitchcock Health and serves as Chief Medical Officer for Southwestern Vermont Medical Center in Bennington, Vermont. He is an Instructor of Emergency Medicine at Dartmouth Geisel School of Medicine and a member of the Board of Trustees of Dartmouth-Hitchcock. He is past president of the Vermont Medical Society and currently sits on the Governance Council and performs medical practice peer review for the Vermont Program for Quality in Health Care. He obtained a Masters in Geology from the University of Wyoming and his Medical Degree at The University of Tennessee. Dr. Dobson completed his residency in Emergency Medicine at the University of Virginia.
After the program, the video will be available on area public access television stations. On CAT-TV, viewers will find the show on channel 1075 at 7:30 p.m. Sunday, 1:30 p.m. Monday, 8:30 a.m. Tuesday, 7:30 a.m. Wednesday, 7:30 p.m. Thursday, 7:30 a.m. Friday, and 7 p.m. Saturday. Videos and podcasts are on svhealthcare.org/MedicalMatters, as well as Youtube and on many podcast-hosting platforms, respectively.
Upcoming episodes will feature the following guests:
March 24: Kevin Curtis, MD, medical director of Connected Care and the Center for Telehealth at Dartmouth-Hitchcock Health and an associate professor of Emergency Medicine at the Geisel School of Medicine at Dartmouth.
March 31: Stephen Leffler, MD, an emergency medicine physician and president and chief operating officer at the University of Vermont Medical Center.
April 7: David Veltre, MD, a hand and upper extremity specialist at SVMC Orthopedics in Bennington.
To contribute questions in advance of each week's show, please e-mail [email protected] or post to Facebook with #SVHCMedicalMattersWeekly.
About SVHC Medical Matters Weekly:
Medical Matters Weekly is an interactive, multiplatform guest-driven talk show hosted by Dr. Trey Dobson. It provides a behind-the-scenes perspective on healthcare, including topics like behavioral health, food insecurity, equitable care, and the opioid crisis. The show is produced in partnership with Catamount Access Television (CAT-TV) and is broadcast on CAT-TV, Greater Northshire Access Television, Facebook Live, YouTube, and podcast platforms.
About SVHC:
Southwestern Vermont Health Care (SVHC) is a comprehensive, preeminent, health care system providing exceptional, convenient, and affordable care to the communities of Bennington and Windham Counties of Vermont, eastern Rensselaer and Washington Counties of New York, and northern Berkshire County in Massachusetts. SVHC includes Southwestern Vermont Medical Center (SVMC), Southwestern Vermont Regional Cancer Center, the Centers for Living and Rehabilitation, and the SVHC Foundation. SVMC includes 25 primary and specialty care practices.
SVMC has earned several prominent distinctions. Most recently, SVMC received the American Hospital Association's Rural Healthcare Leadership Award for transformational change in efforts toward healthcare reform and its fifth consecutive designation within the American Nurses Credentialing Center's (ANCC) Magnet Recognition Program®. It ranked fourth in the nation for healthcare value by the Lown Institute Hospitals Index in 2020 and is one of Vermont's Best Places to Work. SVMC earned an 'A' for hospital safety from the Leapfrog Group for two years in a row. During the pandemic, SVMC and both its skilled nursing facilities, the Centers for Living and Rehabilitation in Bennington, and the Center for Nursing and Rehabilitation at Hoosick Falls, earned perfect scores on a Centers for Medicare and Medicaid Services evaluation meant to determine the ability to prevent transmission of COVID-19 and other infections.
Source: BENNINGTON—March 10, 2021—Southwestern Vermont Medical Center | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,077 |
static bool startup_check(const SubProcess &p)
{
using namespace std::chrono_literals;
std::this_thread::sleep_for(1s);
if (p.IsRunning())
{
// process hasn't exited, good.
return true;
}
LOGA("Electrum: startup check failed, server exited within 1 second");
return false;
}
static void log_args(const std::string &path, const std::vector<std::string> &args)
{
if (!Logging::LogAcceptCategory(ELECTRUM))
{
return;
}
std::stringstream ss;
ss << path;
for (auto &a : args)
{
ss << " " << a;
}
LOGA("Electrum: spawning %s", ss.str());
}
namespace electrum
{
ElectrumServer::ElectrumServer() : started(false), stop_requested(false) {}
ElectrumServer::~ElectrumServer()
{
if (started)
Stop();
if (process_thread.joinable())
{
process_thread.join();
}
}
//! called when electrs produces a line in stdout/stderr
static void callb_logger(const std::string &line) { LOGA("Electrum: %s", line); }
bool ElectrumServer::Start(int rpcport, const std::string &network)
{
if (!GetBoolArg("-electrum", false))
{
LOGA("Electrum: Disabled. Not starting server.");
return true;
}
return Start(electrs_path(), electrs_args(rpcport, network));
}
bool ElectrumServer::Start(const std::string &path, const std::vector<std::string> &args)
{
stop_requested = false;
DbgAssert(!started, return false);
log_args(path, args);
std::unique_lock<std::mutex> lock(process_cs);
process.reset(new SubProcess(path, args, callb_logger, callb_logger));
process_thread = std::thread([this]() {
LOGA("Electrum: Starting server");
try
{
this->process->Run();
}
catch (const subprocess_error &e)
{
LOGA("Electrum: Server not running: %s, exit status %d, termination signal %d", e.what(), e.exit_status,
e.termination_signal);
}
catch (...)
{
LOGA("Electrum: Unknown error running server");
}
this->started = false;
if (!stop_requested && GetBoolArg("-electrum.shutdownonerror", false))
{
// The electrum server exit was not initiated by us, so it
// must have stopped due to some error.
LOGA("Electrum: Bitcoin Unlimited is configured to exit when "
"electrum exits on error. Initiating shutdown.");
StartShutdown();
}
});
started = startup_check(*process);
return started;
}
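// Ask the electrs child process to exit cleanly; if it is still running after a
// 60 second grace period, fall back to forcefully terminating it.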
static void stop_server(SubProcess &p)
{
if (!p.IsRunning())
{
return;
}
LOGA("Electrum: Stopping server");
try
{
p.Interrupt();
}
catch (const subprocess_error &e)
{
LOGA("Electrum: %s", e.what());
p.Terminate();
return;
}
using namespace std::chrono_literals;
using namespace std::chrono;
auto timeout = 60s;
auto start = system_clock::now();
while (p.IsRunning())
{
if ((system_clock::now() - start) < timeout)
{
std::this_thread::sleep_for(1s);
continue;
}
LOGA("Electrum: Timed out waiting for clean shutdown (%s seconds)", timeout.count());
p.Terminate();
return;
}
}
void ElectrumServer::Stop()
{
stop_requested = true;
if (!started)
{
return;
}
try
{
std::unique_lock<std::mutex> lock(process_cs);
stop_server(*process);
}
catch (const std::exception &e)
{
LOGA("Electrum: Error stopping server %s", e.what());
}
process_thread.join();
started = false;
}
bool ElectrumServer::IsRunning() const
{
std::unique_lock<std::mutex> lock(process_cs);
if (!bool(process))
{
return false;
}
return process->IsRunning();
}
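// Notify the running electrs process that a new block is available (via SIGUSR1).
// Signal delivery is only implemented on Linux.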
void ElectrumServer::NotifyNewBlock()
{
std::unique_lock<std::mutex> lock(process_cs);
if (!bool(process))
{
return;
}
#if BOOST_OS_LINUX
process->SendSignal(SIGUSR1);
#endif
}
ElectrumServer &ElectrumServer::Instance()
{
static ElectrumServer instance;
return instance;
}
} // ns electrum
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,903 |
\section{Prony's method}\label{sec:prony}\label{sec:multivarprony}
The following is a multivariate generalization of \emph{Prony's method}
that, in its univariate form, goes back to \cite{prony1795}.
We wish to transfer its essence to the more general setting of algebraic varieties of any dimension.
The variant we cite here is useful for this,
but there are many alternative formulations that accentuate different points of view.
For instance, it has been considered in terms of exponential sums with a focus on signal processing
in \cite{kunis2016:pronymultiv,vdohe2017,sauer2017:pronymultiv,mourrain17:polyexp}.
Another variation of Prony's method
is \emph{Sylvester's algorithm} \cite{sylvester1886}.
It is also related to \emph{Macaulay inverse systems} (see e.\,g.\ {}\cite[Chapter~21.2]{eisenbud})
and \emph{apolarity theory}
(cf.~\cite[Lemma~1.15, algorithm in Chapter~5.4]{iarrobinokanev99},
\cite[Chapter~19]{schmuedgen2017}),
which put more emphasis on algebraic and geometric aspects.
\begin{proposition}[{\cite{prony1795}, \cite{kunis2016:pronymultiv}, \cite[Remark~2.8, Corollary~2.19]{vdohe2017}}]
\label{thm:pronyohe:0}
\label{lem:vandermondefactorization}\crefalias{enumi}{proposition}\crefalias{enumii}{proposition}
Let $\mathbbm{k}$ be a field and let $R = \mathbbm{k}[x_1,…,x_n]$ be the polynomial ring in $n$ variables.
Let $σ = \sum_{j=1}^r λ_j \ev_{ξ_j}$ for $λ_j ∈ \mathbbm{k}$
and $ξ_j∈\mathbbm{k}^n$, $1≤j≤r$.
Let $d,d' ∈ ℕ$ and define $H_{d',d} \coloneqq \lr{σ(x^{α+β})}_{\totaldeg{α}≤d', \totaldeg{β}≤d}$.
Then the following properties hold:
\begin{thm-enumerate}
\item\label{lem:vandermondefactorization:1}
$H_{d',d} = V_{≤d'}\transp Λ V_{≤d}$,
where $Λ \coloneqq \diag\lr{λ_1,…,λ_r}$ and
$V_{≤d} \coloneqq \lr{ξ_j^α}_{1≤j≤r,\totaldeg{α}≤d}$.
\item\label{lem:vandermondefactorization:2ab}
If $λ_1,…,λ_r≠0$ and $\ev_{≤d'}\colon R_{≤d'} \to \mathbbm{k}^r$, $x^α\mapsto (ξ_j^α)_{1≤j≤r}$, is surjective,
then:
\begin{thm-enumerateB}
\item\label{lem:vandermondefactorization:2}
$\kernel H_{d',d} = \kernel V_{≤d} = \Id{\{ξ_1,…,ξ_r\}} \cap R_{≤d}$.
\item\label{thm:pronyohe}
$\Z{\kernel H_{d',d}} = \{ξ_1,…,ξ_r\}$ if $d-1≥d'$.
\end{thm-enumerateB}
\end{thm-enumerate}
\end{proposition}
\begin{proof}
%
The factorization $H_{d',d} = V_{≤d'}\transp Λ V_{≤d}$ follows by direct computation.
Furthermore, if $λ_1,…,λ_r≠0$
and $\ev_{≤d'}$ is surjective,
then $V_{≤d'}\transp Λ$ represents an injective map,
so the kernels of $V_{≤d}$ and $H_{d',d}$ are the same
and agree with the truncated vanishing ideal $\Id{\{ξ_1,…,ξ_r\}} \cap R_{≤d}$,
which shows \localref{lem:vandermondefactorization:2}.
Then part~\localref{thm:pronyohe} follows from the observation
that the surjectivity of $\ev_{≤d-1}$ implies
$\V{\kernel V_{≤d}} = \{ξ_1,…,ξ_r\}$;
see \cite[Theorem~2.15]{vdohe2017}.
\end{proof}
\par
Note that, if the points $ξ_1,…,ξ_r$ are not distinct,
then the map $\ev_{≤d'}\colon R_{≤d'} \to \mathbbm{k}^r$
in \localref{lem:vandermondefactorization:2} can never be surjective,
so the surjectivity assumption implies in particular that the points are distinct.
Further, note that the matrix $H_{d',d}$ in \ref{thm:pronyohe:0}
represents the $\mathbbm{k}$-linear map
into the dual space of the vector space $R_{≤d'}$ given by
\[
R_{≤d} \longrightarrow \kdual{R_{≤d'}},\qquad p \longmapsto (q \mapsto σ(p q)),
\]
as well as the $\mathbbm{k}$-bilinear mapping
\[
R_{≤d'} × R_{≤d} \longrightarrow \mathbbm{k},\qquad (q, p) \longmapsto σ(p q).
\]
A map of the form $σ = \sum_{j=1}^r λ_j \ev_{ξ_j}$,
where $\ev_{ξ_j}$ denotes the evaluation homomorphism associated to the point $ξ_j$,
can also be viewed as \emph{exponential sum}.
It satisfies $σ(x^α) = \sum_{j=1}^r λ_j ξ_j^α$ for all $α∈ℕ^n$,
so can be interpreted as a map $ℕ^n \to \mathbbm{k}$,
by composing it with $α \mapsto x^α$.
Also note that $σ$ is the \emph{moment functional} of the finitely-supported measure
$μ\coloneqq \sum_{j=1}^r λ_j \dirac{ξ_j}$,
where $\dirac{ξ_j}$ denotes the Dirac measure supported at the point $ξ_j ∈ \mathbbm{k}^n$ for $1≤j≤r$.
For this interpretation, we usually assume that $\mathbbm{k}$ is $ℝ$ or $ℂ$.
If $\mathbbm{k}=ℂ$ and the weights $λ_1,…,λ_r∈ℂ$ are complex,
then $μ$ is a \emph{signed} (complex) measure,
which is explicitly allowed in this setting.
The signed measure $μ$ satisfies
$\int_{\mathbbm{k}^n} x^α \d μ(x) = \sum_{j=1}^r λ_j ξ_j^α = σ(x^α)$,
so $σ(x^α)$ agrees with the \emph{$α$-th moment} of $μ$.
On top of that, the moments $σ(x^α)$ uniquely determine the map $σ$.
From this point of view,
the statement of \ref{thm:pronyohe} is that
the support of the finitely-supported signed measure $μ$
is already determined by \emph{finitely many} of its moments,
namely the ones that are required to construct the matrix $H_{d-1,d}$.
In fact, in this case, the weights $λ_1,…,λ_r$ can be recovered as well,
by subsequently solving a linear system of equations (cf.~\cite[Algorithm~2.1]{vdohe2017}),
so the measure $μ$ is fully determined by these moments.
The condition that $\ev_{≤d-1}$ is surjective holds if $d$ is sufficiently large,
a trivial bound being $d≥r$,
as can be seen by constructing Lagrange polynomials of degree $r-1$ for the points $ξ_1,…,ξ_r$; cf. \cite[Corollary~2.20]{vdohe2017}.
The ideal
\[
\bigcap_{j=1}^r \idealspan{x - ξ_j}
= \prod_{j=1}^r \idealspan{x - ξ_j}
= \prod_{j=1}^r \idealspan{x_1 - ξ_{j1},…, x_n - ξ_{j n}}
\]
is clearly generated by polynomials of degree at most $r$,
but in the multivariate setting with $n≥2$,
unless the points $ξ_1,…,ξ_r$ are contained in a one-dimensional subspace of $\mathbbm{k}^n$,
this bound can be much larger than necessary.
A more practical sufficient criterion for the evaluation map $\ev_{≤d-1}$ being surjective
is obtained by checking the rank of the matrix $H_{d-1,d}$.
As this rank is at most $r$,
it follows from the Vandermonde factorization in \ref{lem:vandermondefactorization:1}
that $\ev_{≤d-1}$ is surjective if and only if $\rk H_{d-1,d} = r$.
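\begin{example}
To illustrate \ref{thm:pronyohe} in the simplest univariate case, let $n=1$ and $σ = \ev_{1} + \ev_{2}$, i.e.\ $λ_{1} = λ_{2} = 1$, $ξ_{1} = 1$, $ξ_{2} = 2$ and $r = 2$, so that $σ(x^{k}) = 1 + 2^{k}$ for all $k∈ℕ$. For $d' = 1$ and $d = 2$ one obtains
\[
H_{1,2} = \begin{pmatrix} 2 & 3 & 5 \\ 3 & 5 & 9 \end{pmatrix},
\]
whose kernel is spanned by the coefficient vector of $p = 2 - 3x + x^{2} = (x-1)(x-2)$. Hence $\Z{\kernel H_{1,2}} = \{1, 2\} = \{ξ_{1}, ξ_{2}\}$, in accordance with \ref{thm:pronyohe}.
\end{example}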
\begin{remark}
%
%
%
%
%
A variation of Prony's method works with Toeplitz matrices
of the form
\[
\lr{\sum_{j=1}^r λ_j ξ_j^{-α+β}}_{α,β∈ℕ^n,\,\maxdeg{α}≤d',\maxdeg{β}≤d}
\]
instead of Hankel matrices,
where the moments are usually bounded in max-degree.
For this to be defined,
the points $ξ_1,…,ξ_r$ must have non-zero coordinates,
so they are contained in the algebraic torus $\lr{ℂ^*}^n$.
This is especially common when working in a trigonometric setting,
with points on the \emph{complex torus}
\[
\T^n \coloneqq \{z ∈ ℂ^n \mid \abs{z_1}=\cdots =\abs{z_n} = 1\}.
\]
%
%
%
%
Moreover, one can work with much more general filtrations of the polynomial ring;
see the statements in \cite[Chapter~2]{vdohe2017}.
See also \cite{ohe2020:pronystructures}
%
for an approach relating Toeplitz and Hankel matrices in this context.
\end{remark}
\section{Sesquilinearity and filtrations}\label{sec:sesquilinearity}
In this section, we set up a framework that allows us
to treat in a unified way the two different settings of
moment problems we are primarily interested in,
namely moment problems on affine space and on the torus.
See \cite[Chapter~2]{schmuedgen2017} for a similar approach
to these concepts.
\begin{definition}
Let $R$ be a ring with a map
$\invol{\blank}\colon R\to R$ satisfying
\begin{align}
\Invol{x + y} = \invol{x} + \invol{y}, &&
\Invol{x y} = \invol{y} \invol{x}, &&
\invol{1} = 1, &&
\Invol{\invol{x}} = x
\end{align}
for all $x,y∈R$.
Then the map $\invol{\blank}$ is called \emph{involution}
and $R$ is an \emph{involutive ring} (also called \emph{${}^*$-ring}).
%
An involutive ring $A$ with involution $\invol[A]{\blank}$
that is also an (associative) algebra over a commutative involutive ring $R$
is an \emph{involutive algebra} (also called \emph{${}^*$-algebra}),
if the involution satisfies
$\Invol[A]{r a} = \invol{r} \invol[A]{a}$
for all $r∈R$ and $a∈A$.
As this property means that there is no ambiguity,
we denote the involution on $A$
by $\invol{\blank}$ as well.
A map $f\colon A \to A$ is \emph{$\invol{}$-semilinear} if
$f(a+b) = f(a) + f(b)$
and $f(r a) = \invol{r} f(a)$ holds for all $r∈R$ and $a,b∈A$.
%
\end{definition}
Common examples of involutive rings include the field of complex numbers $ℂ$
with complex conjugation
as well as square complex matrices with conjugate transposition as involution.
Another important example for our discussion
is given in \ref{ex:involution:conjugation} below.
Also note that any commutative ring (algebra) is an involutive ring (algebra)
with respect to the trivial involution
which leaves every element unchanged.
\begin{definition}\label{def:filtration}
Let $\mathbbm{k}$ be a field
and $A$ an (associative) algebra over $\mathbbm{k}$.
If $F_d \subseteq A$, $d∈ℕ$, is a family of
$\mathbbm{k}$-vector subspaces satisfying
\begin{itemize}
\begin{minipage}{0.525\linewidth}
\item $F_d \subseteq F_e$ for $d,e∈ℕ$ with $d≤e$,
\item $A = \bigcup_{d∈ℕ} F_d$,
\end{minipage}
\begin{minipage}{0.4\linewidth}
\item $1 ∈ F_0$,
\item $F_d \cdot F_e \subseteq F_{d+e}$ for $d,e∈ℕ$,
\end{minipage}
\end{itemize}
then $A$ is a \emph{filtered algebra} over $\mathbbm{k}$
and the family $\{F_d\}_{d∈ℕ}$ is called a \emph{filtration} of $A$.
In particular, the filtrations we consider are exhaustive.
For simplicity of notation, we often denote
the filtered components of the filtration by
$A_{≤d} \coloneqq F_d$.
\end{definition}
\begin{example}\label{ex:filtrations}
Let $\mathbbm{k}$ be a field and $R = \mathbbm{k}[x_1,…,x_n]$
be the polynomial ring in $n$~variables over $\mathbbm{k}$, for some $n∈ℕ$.
Then the total degree of polynomials gives rise to a filtration of $R$ where
\[\label{eq:totaldegfiltration}
R_{≤d} = \{p ∈ R \mid \deg(p) ≤ d\}
\]
for $d∈ℕ$.
Similarly, we can define a filtration $\{F_d\}_{d∈ℕ}$ on $R$
in terms of max-degree by
\[\label{eq:maxdegfiltration}
F_d = \bigoplus_{\tsubstack{α∈ℕ^n\\\maxdeg{α}≤d}} \mathbbm{k} x^α.
\]
Note that all the filtered components of these two filtrations
happen to be $\mathbbm{k}$-vector spaces of finite dimension,
which is a useful property when it comes to computations.
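For instance, for $n=2$ and $d=2$, the total-degree component in \myeqref{eq:totaldegfiltration} has dimension $\binom{n+d}{n} = 6$, spanned by $1, x_1, x_2, x_1^2, x_1 x_2, x_2^2$, whereas the max-degree component in \myeqref{eq:maxdegfiltration} has dimension $(d+1)^n = 9$, as it additionally contains $x_1^2 x_2$, $x_1 x_2^2$ and $x_1^2 x_2^2$.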
Now let $\mathfrak{a}\subseteq R$ be an ideal with $1\notin\mathfrak{a}$
and define $S = R / \mathfrak{a}$.
If $\{F_d\}_{d∈ℕ}$ is any filtration of $R$,
then
$G_d \coloneqq F_d / \lr{\mathfrak{a} \cap F_d}$
defines a filtration of the quotient ring $S$.
For this, observe that $G_d$ can be embedded in $G_{d+1}$
via the injective map
$p + \mathfrak{a} \cap F_d \mapsto p + \mathfrak{a} \cap F_{d+1}$,
for all $p∈F_d$, $d∈ℕ$.
\end{example}
For the remainder of this \lcnamecref{sec:sesquilinearity},
we assume, for simplicity, that $\mathbbm{k}$ is a field of characteristic $0$
together with an involution $\invol{\blank}$
that endows $\mathbbm{k}$ with the structure of an involutive ring.
Moreover, we denote by $R = \mathbbm{k}[x_1,…,x_n]$
the polynomial ring in finitely many variables
and fix a filtration $\{R_{≤d}\}_{d∈ℕ}$
that turns $R$ into a filtered algebra over $\mathbbm{k}$
and has the property that $R_{≤d}$ is a finite-dimensional $\mathbbm{k}$-vector space for every $d∈ℕ$.
Additionally, we assume that $R\subseteq L$
is a $\mathbbm{k}$-subalgebra of an involutive commutative algebra $L$ over $\mathbbm{k}$.
The involution on $L$ is denoted by $\invol{\blank}$ as well.
Typical examples are the following:
\par
\begin{example}\label{ex:involution:trivial}
%
If $\mathbbm{k}$ is any field, let $L = R$
and define the involutions on $\mathbbm{k}$ and $L$ to act trivially.
The filtration $\{R_{≤d}\}_{d∈ℕ}$ on $R$ is defined by total degree as
in \myeqref{eq:totaldegfiltration}.
Of particular interest is the case when
$\mathbbm{k}$ is the field of real numbers $ℝ$ (or a subfield thereof).
\end{example}
\begin{example}\label{ex:involution:conjugation}
If $\mathbbm{k}$ is any field with an involution $\invol{\blank}$,
let $L = \mathbbm{k}[x_1^{±1},…,x_n^{±1}]$ be the ring of Laurent polynomials
and define the involution on $L$ by
\[
\Invol{\sum_{α} p_α x^α} \coloneqq \sum_{α} \invol{p_α} x^{-α},
\]
where $p_α∈\mathbbm{k}$, $α∈ℤ^n$,
which turns $L$ into an involutive algebra.
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
For the filtration on $R$,
in this situation we usually pick the one that is induced by max-degree
as in \myeqref{eq:maxdegfiltration},
%
%
%
%
since $L$ is the coordinate ring of the algebraic torus,
and denote it by $\{R_{≤d}\}_{d∈ℕ}$ again.
Of particular interest is the case $\mathbbm{k} = ℂ$ of complex numbers
with complex conjugation as involution.
In this case, an observation that can be significant in some applications is the following:
If we restrict a Laurent polynomial $p∈L$ to the complex torus $\T^n$,
then the involution $\invol{p}$ is the complex conjugate of $p$ as a function on $\T^n$,
so we have
\[
\invol{p}(ξ) = \conj{p(ξ)}
\]
for all $ξ∈\T^n$, since $ξ^{-α} = \conj{ξ}^α$ for all $α∈ℤ^n$.
In particular, the Laurent polynomial
$p$ is a real function on $\T^n$
if and only if $\invol{p} = p$,
i.\,e.\ $p_α = \conj{p_{-α}}$ for all $α$.
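For instance, for $n=1$, the Laurent polynomial
\[
p = \mathrm{i}\, x^{-1} + 2 - \mathrm{i}\, x
\]
satisfies $\invol{p} = p$ and indeed restricts to the real-valued function $2 + 2\sin(θ)$ on $\T^1$, writing $ξ = \mathrm{e}^{\mathrm{i}θ}$.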
%
%
%
Furthermore, note that,
if $\mathfrak{a} \subseteq L$ is a vanishing ideal of a set contained in $\T^n$,
then it follows that $\invol{\mathfrak{a}} = \mathfrak{a}$.
%
\end{example}
\begin{definition}\label{def:sesquilinearform}
Let $σ\colon L \to \mathbbm{k}$ be a $\mathbbm{k}$-linear map.
Then we define the $\mathbbm{k}$-sesquilinear form
\[
\sform[σ]{\blank,\blank}\colon L × L \longrightarrow \mathbbm{k},\quad (q, p) \longmapsto σ(\invol{q} p),
\]
%
which is $\invol{}$-semilinear in the first and linear in the second argument.
Defining sesquilinear forms to be semilinear in the first
rather than in the second argument is an arbitrary choice.
We choose this convention as it simplifies our notation later on.
%
By restriction, we can also view this as a sesquilinear form on $R$
as well as on the finite-dimensional vector spaces $R_{≤d}$, $d∈ℕ$.
Note that this is a symmetric bilinear form if the involution is trivial.
A form $\sform{\blank,\blank}$ on a $\mathbbm{k}$-vector space $U$
is \emph{Hermitian} if
$\sform{q,p} = \invol{\sform{p,q}}$ for all $p,q∈U$.
%
If the involution is trivial, as in \ref{ex:involution:trivial},
then this always holds for $\sform[σ]{\blank,\blank}$,
as the form is symmetric in that case.
%
When $\mathbbm{k}$ is (a subfield of) the complex numbers $ℂ$,
then a Hermitian form $\sform[σ]{\blank,\blank}$ on $U$
is \emph{positive-semidefinite} if, additionally,
$\sform[σ]{p,p} ≥ 0$ for all $p∈U$.
%
Note that this never holds if
$\mathbbm{k} \nsubseteq ℝ$ and
the involution is linear,
rather than $\invol{}$-semilinear,
unless the form is trivial.
%
%
%
%
%
%
%
\end{definition}
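A guiding example, which will be taken up again in \ref{sec:support}, is the following: if $σ$ is given by integration against a non-negative measure $μ$ with finite moments on the complex torus $\T^n$ (in the setting of \ref{ex:involution:conjugation}) or on $ℝ^n$ (in the setting of \ref{ex:involution:trivial}), then
\[
\sform[σ]{q,p} = σ(\invol{q}\, p) = \int \invol{q}\, p \,\d μ
\]
is the usual inner product of $L^2(μ)$ restricted to (Laurent) polynomials; in particular, it is Hermitian and positive-semidefinite.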
\begin{remark}\label{rem:monomialbasis:hankel:toeplitz}
%
Assume that a family of monomials $\{x^α\}_{α∈J} \subseteq R_{≤d}$
for a suitable index set $J\subseteq ℕ^n$
forms a basis of the finite-dimensional vector space $R_{≤d}$
and that the involution $\invol{\blank}$ is trivial.
Then the Gramian matrix of $\sform[σ]{\blank,\blank}$
with respect to this basis is of the form
\[
\lr{\sform[σ]{x^α,x^β}}_{α,β∈J}
%
= \lr{σ\lr{x^{α+β}}}_{α,β∈J},
\]
which is a (generalized) Hankel matrix.
Likewise, if $\{x^α\}_{α∈J} \subseteq R_{≤d}$ is a basis of $R_{≤d}$,
but $L$ is the ring of Laurent polynomials
with involution $\invol{\blank}\colon L\to L$
defined as in \ref{ex:involution:conjugation},
then the Gramian matrix with respect to this basis
is of the form
\[
\lr{\sform[σ]{x^α,x^β}}_{α,β∈J}
%
= \lr{σ\lr{x^{-α+β}}}_{α,β∈J},
\]
which is a (generalized) Toeplitz matrix.
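For instance, for $n=1$ and $J = \{0,1,2\}$, these two Gramian matrices take the familiar forms
\[
\begin{pmatrix}
σ(1) & σ(x) & σ(x^2) \\
σ(x) & σ(x^2) & σ(x^3) \\
σ(x^2) & σ(x^3) & σ(x^4)
\end{pmatrix}
\qquad\text{and}\qquad
\begin{pmatrix}
σ(1) & σ(x) & σ(x^2) \\
σ(x^{-1}) & σ(1) & σ(x) \\
σ(x^{-2}) & σ(x^{-1}) & σ(1)
\end{pmatrix},
\]
respectively.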
\end{remark}
\begin{lemma}\label{lem:inducedformonquotient}
Assume that $σ\colon L \to \mathbbm{k}$ is a $\mathbbm{k}$-linear map,
$\mathfrak{a}\subseteq L$ is an ideal such that $\mathfrak{a},\invol{\mathfrak{a}} \subseteq \kernel σ$.
Let $W\subseteq L$ be a $\mathbbm{k}$-vector subspace.
Then the sesquilinear form $\sform[σ]{\blank,\blank}$ on $L$
induces a sesquilinear form
\[
\submodulequotient{W}{\mathfrak{a}} × \submodulequotient{W}{\mathfrak{a}} \longrightarrow \mathbbm{k},\qquad
(\resid{q}, \resid{p}) \longmapsto \sform[σ]{q, p} = σ(\invol{q} p).
\]
\end{lemma}
Here, $\resid{q},\resid{p}$ denotes the residue class of
polynomials $q,p∈W$ modulo $\mathfrak{a}\cap W$.
We denote the induced sesquilinear form on $\submodulequotient{W}{\mathfrak{a}}$
by $\sform[σ]{\blank,\blank}$ again.
Also note that
the requirements $\mathfrak{a}\subseteq \kernel σ$ and $\invol{\mathfrak{a}}\subseteq \kernel σ$
are equivalent when the sesquilinear form $\sform[σ]{\blank,\blank}$ on $L$ is Hermitian.
\begin{proof}
Let $p,q∈W$.
If $p ∈ \mathfrak{a} \cap W$, then $\invol{q}p$ is contained in
$\mathfrak{a}\subseteq \kernel σ$, so $σ(\invol{q}p) = 0$.
Likewise, if $q ∈ \mathfrak{a} \cap W$, then $\invol{q} p ∈ \invol{\mathfrak{a}} \subseteq \kernel σ$,
so the sesquilinear form on $\submodulequotient{W}{\mathfrak{a}}$ is well-defined.
\end{proof}
\begin{remark}\label{def:inducedmaponquotient}
If $σ\colon L\to \mathbbm{k}$ is $\mathbbm{k}$-linear and $\mathfrak{a}\subseteq L$ is an ideal
such that $\mathfrak{a}\subseteq\kernel σ$,
then the sesquilinear form $\sform[σ]{\blank,\blank}$ on $L$
does \emph{not} in general induce a sesquilinear form on the quotient spaces
$\submodulequotient{W}{\mathfrak{a}}$.
(Observe that this would need $\invol{\mathfrak{a}}\subseteq \kernel σ$
or require the form to be Hermitian, as in \ref{lem:inducedformonquotient}.)
Many of our arguments here can be transferred to this setting
by working with a sesquilinear \emph{map} instead of a sesquilinear form;
for details we refer to \cite[Definition~3.1.12]{wageringel2021}.
\end{remark}
\section{Factorization properties}\label{sec:factorization}
The Vandermonde factorization of \ref{lem:vandermondefactorization:1}
is an essential aspect of Prony's method.
Here, we analyze how to transfer it from measures on zero-dimensional
to measures on positive-dimensional algebraic varieties.
The statements here are also motivated
by the study of finite-rank Hankel operators as in e.\,g.\ {}\cite{mourrain17:polyexp}.
In the positive-dimensional setting, such operators are not of finite rank anymore,
but some properties are still valid.
Let $\mathbbm{k}, R, L$ be as in \ref{sec:sesquilinearity},
so $\mathbbm{k}$ is a field of characteristic~$0$, $R = \mathbbm{k}[x_1,…,x_n]$ is the polynomial ring in $n$ variables
endowed with a filtration $\{R_{≤d}\}_{d∈ℕ}$
and $L$ is an involutive commutative $\mathbbm{k}$-algebra such that $R\subseteq L$.
We wish to examine more closely the following situation.
Let $\mathfrak{a}\subseteq L$ be an ideal and
let $σ\colon L\to\mathbbm{k}$ be a $\mathbbm{k}$-linear map
with the property that $\mathfrak{a} \subseteq \kernel σ$.
This means that the map $σ$ factors as
$σ = \resid{σ} \mathbin{\circ} \proj{\mathfrak{a}}$,
where
\[
\proj{\mathfrak{a}}\colon L \longrightarrow L/\mathfrak{a},\qquad
p \longmapsto \resid{p} \coloneqq p + \mathfrak{a},
\]
denotes the quotient homomorphism
and $\resid{σ}\colon L/\mathfrak{a} \to \mathbbm{k}$ is a $\mathbbm{k}$-linear map.
\begin{example}\label{eg:reduced0dim}
Assume that $L$ is the polynomial ring $R$ and $ξ∈\mathbbm{k}^n$
(or that $L$ is the Laurent polynomial ring in $n$ variables and $ξ∈\lr{\mathbbm{k}^*}^n$).
Then,
for the maximal ideal $\maxideal{ξ} = \idealspan{x-ξ} \subseteq L$,
this gives the evaluation homomorphism at the point $ξ$,
\[
\proj{\maxideal{ξ}}\colon L \longrightarrow L/\maxideal{ξ} \cong \mathbbm{k},\qquad
x^α \longmapsto \resid{x}^α = ξ^α,
\]
for $α∈ℕ^n$ (or $α∈ℤ^n$),
so $\proj{\maxideal{ξ}}(p) = p(ξ)$ for $p∈L$.
Note further that, for any $\mathbbm{k}$-linear map $σ\colon L\to \mathbbm{k}$ with $\maxideal{ξ}\subseteq\kernel σ$,
the linear map $\resid{σ}\colon L/\maxideal{ξ}\cong \mathbbm{k} \to \mathbbm{k}$
is determined by a single scalar $λ∈\mathbbm{k}$, with respect to a suitable basis.
Thus, $σ = λ\proj{\maxideal{ξ}} = λ\ev_{ξ} ∈ \kdual L$,
which we can interpret as an exponential sum of rank~$1$
if $λ≠0$ (cf.~\ref{sec:multivarprony}).
More generally, consider the zero-dimensional ideal
$\mathfrak{a} = \bigcap_{j=1}^r \maxideal{ξ_j}$,
for distinct points $ξ_1,…,ξ_r$.
Then it follows from the Chinese Remainder Theorem (cf.~\cite[Chapter~2.1.2, Proposition~5]{bourbaki:comalg}) that
\[
L/\mathfrak{a}
\cong \bigoplus_{j=1}^r L/\maxideal{ξ_j}
\cong \mathbbm{k}^r,
\]
where $\proj{\mathfrak{a}}(p)$ is identified with $\lr{p(ξ_1),…,p(ξ_r)}$ for $p∈L$.
As a $\mathbbm{k}$-linear map with respect to the monomial basis of $L$,
we can view $\proj{\mathfrak{a}}$ as being described by an infinite Vandermonde matrix
associated to the points $ξ_1,…,ξ_r$.
If $σ\colon L\to \mathbbm{k}$ is a $\mathbbm{k}$-linear map with $\mathfrak{a}\subseteq \kernel σ$,
then it is of the form $σ = \sum_{j=1}^r λ_j \ev_{ξ_j}$
with suitable parameters $λ_1,…,λ_r∈\mathbbm{k}$,
which corresponds to an exponential sum of rank~$r$ if $λ_1,…,λ_r≠0$.
\end{example}
The ideal $\mathfrak{a}$ does not need to be radical in this setup.
An explicit example is given in \cite[Example~3.2.2]{wageringel2021};
more generally, polynomial-exponential series as studied in \cite{mourrain17:polyexp}
correspond to non-radical ideals.
Later on, however, we will focus on the case in which $\mathfrak{a}$ is a vanishing ideal.
As $R$ is endowed with a filtration $\{R_{≤d}\}_{d∈ℕ}$
for which each component $R_{≤d}$ is finite-dimensional
and since $R\subseteq L$,
we can restrict the map $\proj{\mathfrak{a}}\colon L\to L/\mathfrak{a}$
to a map on finite-dimensional vector subspaces
$R_{≤d} \to R_{≤d}/\lr{\mathfrak{a}\cap R_{≤d}}$,
which we denote by $\evd{\mathfrak{a}}{d}$,
as explained in \ref{ex:filtrations}.
An important ingredient of Prony's method
is that we can extract information about the vanishing ideal
from the kernel of the moment matrix, if the moment matrix is sufficiently large;
see \ref{lem:vandermondefactorization:2}.
In the following, we examine what is required to transfer this property
to the setting of ideals of possibly positive dimension.
This is answered by the following \lcnamecref{thm:hankelopfactorization}
as well as \ref{lem:hankelkernelinjectivity} below.
\begin{theorem}\label{thm:hankelopfactorization}
Let $\mathfrak{a}\subseteq L$ be an ideal and
let $σ\colon L\to\mathbbm{k}$ be a $\mathbbm{k}$-linear map with $\mathfrak{a}\subseteq \kernel σ$.
Then the $\mathbbm{k}$-linear map
\[
H\colon R \longrightarrow \semikdual R,\qquad
p \longmapsto \lr{q \mapsto \sform[σ]{q, p}},
\]
factors as
\begin{equation}
\label{eq:hankelopfactors}
\begin{tikzcd}[row sep=tiny,ampersand replacement=\&]
R \arrow[r, "\proj{\mathfrak{a}}"] \&
\submodulequotient{R}{\mathfrak{a}} \arrow[r] \&
\semikdual{\submodulequotient{R}{\invol{\mathfrak{a}}}} \arrow[r, "\Transp{\proj{\invol{\mathfrak{a}}}}"] \&
\semikdual R,\\
\& p + \mathfrak{a}\cap R \arrow[r, mapsto] \&
\lr{q + \invol{\mathfrak{a}}\cap R \mapsto \sform[σ]{q, p}},
\end{tikzcd}
\end{equation}
where $\Transp{\proj{\invol{\mathfrak{a}}}}(φ) = φ \mathbin{\circ} \proj{\invol{\mathfrak{a}}}$
for $φ ∈ \semikdual{\submodulequotient{R}{\invol{\mathfrak{a}}}}$.
Moreover, the truncated map between finite-dimensional vector subspaces given by
\[
H_{d+δ,d}\colon
R_{≤d} \longrightarrow \semikdual{R_{≤d+δ}},\qquad
p \longmapsto \lr{q \mapsto \sform[σ]{q, p}},
\]
for $d,δ∈ℕ$,
factors as
\begin{equation}
\label{eq:hankelmatfactors}
\begin{tikzcd}[row sep=tiny,ampersand replacement=\&]
R_{≤d} \arrow[d, "\evd{\mathfrak{a}}{d}"] \arrow[r, "H_{d+δ,d}"] \&
\semikdual{R_{≤d+δ}} \\[5ex]
\submodulequotient{R_{≤d}}{\mathfrak{a}} \arrow[r, "\resid{H_{d+δ,d}}"] \&
\semikdual{\submodulequotient{R_{≤d+δ}}{\invol{\mathfrak{a}}}} \arrow[u, "\Transp{\evd{\invol{\mathfrak{a}}}{d+δ}}"],\\
p + \mathfrak{a}\cap R_{≤d} \arrow[r, mapsto] \&
\lr{q + \invol{\mathfrak{a}}\cap R_{≤d+δ} \mapsto \sform[σ]{q, p}}.
\end{tikzcd}
\end{equation}
\end{theorem}
\begin{proof}
Due to the inclusion $\mathfrak{a}\subseteq \kernel σ$,
we have that
\[
σ\lr{\Invol{q + \invol{\mathfrak{a}} \cap R} \lr{p + \mathfrak{a} \cap R}}
= σ\lr{\lr{\invol{q} + \mathfrak{a} \cap \invol{R}} \lr{p + \mathfrak{a} \cap R}}
= σ\lr{\invol{q} p}
= \sform[σ]{q, p},
\]
for all $q,p∈R$,
which shows the first factorization property.
The other one follows analogously.
\end{proof}
The truncated map $H_{d+δ,d}$ is of importance for us,
since we are interested in recovery from finitely many moments.
By \ref{thm:hankelopfactorization},
it holds that
\[
\mathfrak{a}\cap R_{≤d} \subseteq \kernel H_{d+δ,d}
\]
and we ask when this is an equality.
This leads to the following \lcnamecref{lem:hankelkernelinjectivity}.
\par
\begin{corollary}\label{lem:hankelkernelinjectivity}
If the map $\resid{H_{d+δ,d}}\colon
\submodulequotient{R_{≤d}}{\mathfrak{a}} \to \semikdual{\submodulequotient{R_{≤d+δ}}{\invol{\mathfrak{a}}}}$
is injective, then
\[
\kernel\lr{H_{d+δ,d}} =
\kernel\lr{\evd{\mathfrak{a}}{d}} =
\mathfrak{a} \cap R_{≤d}.
\]
\end{corollary}
\begin{proof}
Due to the factorization \myeqref{eq:hankelmatfactors}
and since the map
$\Transp{\evd{\invol{\mathfrak{a}}}{d+δ}}$ is injective,
the equality holds if and only if the map
$\resid{H_{d+δ,d}}$ is injective.
\end{proof}
As the vector space dimension of the codomain of $\resid{H_{d+δ,d}}$
is finite and at least as large as the dimension of the domain,
saying that $\resid{H_{d+δ,d}}$ is injective
is the same as saying that the map $\resid{H_{d+δ,d}}$ has full rank.
As such, this can be regarded as a variant of
the statement about the Vandermonde factorization in
\ref{lem:vandermondefactorization}.
\begin{remark}
In this formalism, $\evd{\mathfrak{a}}{d}$ is always surjective,
which is an important difference from the Vandermonde factorization
considered in \ref{lem:vandermondefactorization:1},
as the Vandermonde matrix considered there can be non-surjective for small $d$.
This is explained further in
\cref{ex:hankelmatfactors:points} below.
There, for an ideal of the form
$\mathfrak{a} = \bigcap_{j=1}^r \maxideal{ξ_j}$,
the dimension of
$\image\lr{\evd{\mathfrak{a}}{d}} = \submodulequotient{R_{≤d}}{\mathfrak{a}}$ as vector space is at most $r$,
but can be smaller.
Equality holds if and only if the corresponding Vandermonde matrix has rank $r$,
which only holds if $d$ is sufficiently large.
\end{remark}
Moreover, we remark that the map $\resid{H_{d+δ,d}}$ is injective in particular when
$σ$ is a moment functional of a measure and
$\mathfrak{a}$ is the vanishing ideal of its support,
as will be shown in \ref{thm:idealequalkernel}.
\begin{example}\label{ex:hankelmatfactors:points}
Let us revisit \ref{eg:reduced0dim},
so let $\mathfrak{a} \coloneqq \bigcap_{j=1}^r \maxideal{ξ_j} \subseteq L$
for distinct points $ξ_1,…,ξ_r ∈ \mathbbm{k}^n$,
where now we assume that $L = R = \mathbbm{k}[x_1,…,x_n]$
is endowed with the trivial involution and the filtration induced by total degree.
If $d$ is sufficiently large,
$\evd{\mathfrak{a}}{d}$ has rank $r$
and we have
$\submodulequotient{R_{≤d}}{\mathfrak{a}} \cong \bigoplus_{j=1}^r R / \maxideal{ξ_j} \cong \mathbbm{k}^r$.
Hence, we also have
$\submodulequotient{R_{≤d+δ}}{\mathfrak{a}} \cong \mathbbm{k}^r$ for all $δ∈ℕ$.
If $σ\colon R\to \mathbbm{k}$ is a $\mathbbm{k}$-linear map with $\mathfrak{a}\subseteq \kernel σ$,
then, by \ref{eg:reduced0dim},
it is of the form $σ = \sum_{j=1}^r λ_j \ev_{ξ_j}$
for some $λ_1,…,λ_r ∈ \mathbbm{k}$.
Thus, the map $\resid{H_{d+δ,d}}$ corresponds to the diagonal matrix
$\diag\lr{λ_1,…,λ_r}$ with respect to the natural bases.
Clearly, it is injective if and only if $λ_1,…,λ_r≠0$,
which illustrates the connection of \ref{lem:hankelkernelinjectivity}
to \ref{lem:vandermondefactorization:2}.
\end{example}
\par
Although for zero-dimensional ideals as in the preceding example
it is enough to consider the case $δ=0$ to infer that
$\kernel H_{d+δ,d} = \mathfrak{a} \cap R_{≤d}$
if $d$ is sufficiently large,
this does not hold in general (cf.~\ref{ex:pointskernelunequaltruncatedideal}).
We will see a non-trivial example in
\ref{ex:signedmeasurewrongkernel},
which involves an ideal of positive dimension.
In connection to that, \ref{thm:idealequalkernelextended}
will show that it can be useful to consider $δ$ larger than $0$.
\section{Recovery of the support from moments}\label{sec:support}
In this \lcnamecref{sec:support},
we explore how to recover the underlying algebraic variety
that a measure is supported on, by using finitely many of its moments.
We consider a non-negative or signed measure $μ$ whose support
lives in the affine space $ℝ^n$ or the complex torus $\T^n$
and wish to find the smallest variety that contains the support.
Following the notation of \ref{sec:sesquilinearity},
we consider the following two cases,
to which we also refer as \emph{affine} and \emph{trigonometric} cases,
respectively:
\begin{enumerate}
\item
$Ω = ℝ^n$,
$\mathbbm{k} = ℝ, L = R = ℝ[x_1,…,x_n]$ with trivial involutions
(cf.~\ref{ex:involution:trivial});
\item
$Ω = \T^n$,
$\mathbbm{k} = ℂ, R = ℂ[x_1,…,x_n], L = ℂ[x_1^{±1},…,x_n^{±1}]$
with complex conjugation and
involution $\invol{\blank}$ on $L$
defined as in \ref{ex:involution:conjugation}.
\end{enumerate}
Additionally, we fix a filtration $\{R_{≤d}\}_{d∈ℕ}$ of $R$
consisting of finite-dimensional vector spaces.
Recall that the support of a non-negative or signed measure
is defined as follows.
\begin{definition}[{cf.~\cite[Chapter~1.3]{schwartz1973}}]
Let $μ$ be a signed measure on $Ω$.
Then
\[
\supp μ \coloneqq
\braced*{ξ∈Ω \mid μ\restrict{U} ≠ 0\text{\ for all open neighborhoods $U\subseteq Ω$, $ξ∈U$}}
\]
%
is called \emph{support} of $μ$,
where $μ\restrict{U}$ denotes the restriction of $μ$ to $U$.
\end{definition}
By convention, we consider the support in terms of the standard topology on $Ω$.
The complement of $\supp μ$ in $Ω$ is the union of all open sets
on which $μ$ vanishes identically; in particular, this complement is open,
so $\supp μ$ is a closed set.
When we consider the support in terms of the Zariski topology,
we denote it by $\zariski{\supp μ}$ (as a subset of $Ω$ or $(ℂ^*)^n$).
It is the smallest Zariski-closed set containing $\supp μ$.
This topic has been studied in various forms,
usually in the real affine case with non-negative measures
(e.\,g.\ {}\cite{lasserre2015:algebraicexponential,lasserre2021:empiricalmomentschristoffel})
and an emphasis on finitely-supported measures;
see for instance \cite{laurentrostalski2012}.
The case of plane algebraic curves has also been investigated in
\cite{vetterli2016}, with a focus on the presence of noise.
The case of plane trigonometric curves on the torus has been considered in
\cite{ongie15:piecewisesmooth,ongie2016:piecewiseconstant}.
We unify the different noise-free settings in \ref{thm:idealequalkernel} and
expand the existing results by \ref{thm:idealequalkernelextended},
a statement for compactly-supported signed measures,
as well as \ref{cor:idealequalkernelpolynomial} and \ref{thm:idealequalkernel:mixture}.
Moreover, we give examples that highlight the differences
between signed and non-negative measures.
\subsection{Signed measures}\label{sec:support:signedmeasures}
Here, we consider a signed measure $μ$ on $Ω$.
If $\mathbbm{k} = ℂ$, as in the trigonometric case, then $μ$ is a complex measure.
As a consequence of the Riesz representation theorem
(see e.\,g.\ {}\cite[Theorem~6.19]{rudin1987}),
these measures can be defined as elements
in the continuous dual space of
the space $\contincompact{0}(Ω)$ of compactly-supported continuous functions from $Ω$ to $\mathbbm{k}$.
We refer to \cite[Chapter~1.2]{schwartz1973} for an extensive treatment of this topic.
In the trigonometric case, all the moments of $μ$ are defined,
as the torus $\T^n$ is compact.
In order to speak of moments
$\int_Ω x^α \d μ$, $α∈ℕ^n$, in the affine case,
we need to make additional assumptions on the measure $μ$,
since the monomials $x^α$ are not compactly-supported functions on $ℝ^n$.
Certainly, the moments are defined when the measure $μ$ itself is compactly supported.
More generally, all the moments are defined
for signed measures with a sufficiently rapid decay toward infinity,
such as those that can be written as a product $μ = g μ_0$
of a Schwartz function $g$ and a tempered distribution $μ_0$
(see e.\,g.\ {}\cite[Chapter~2]{grafakos2014} or \cite[Chapter~7]{schwartz1973}),
which in particular includes Gaussians and mixtures thereof.
In this \lcnamecref{sec:support:signedmeasures},
we focus on signed measures with compact support only,
as these are determined by their moments.
First, let us take note of the following elementary properties of
the support of the product between a measure and a continuous function.
\par
\begin{lemma}\crefalias{enumi}{lemma}
Let $μ$ be a signed measure on $Ω$ and let $f,g ∈ \contin{0}(Ω)$ be continuous functions.
Then:
\begin{thm-enumerate}
\item\label{lem:nonvanishingsupportinclusion}
$\nonV{f} \cap \supp μ \subseteq \supp\lr{f μ}$,
where $\nonV{f} \subseteq Ω$ denotes the set of points
in which $f$ does not vanish.
\item\label{lem:zeroonsupport}
The measure $f μ$ is zero if and only if $f$ vanishes on $\supp μ$.
\item\label{lem:samevanishingonsupport}
If $\nonV{f} \cap \supp μ = \nonV{g} \cap \supp μ$,
then $\supp\lr{f μ} = \supp\lr{g μ}$.
\end{thm-enumerate}
\end{lemma}
\begin{proof}
For \localref{lem:nonvanishingsupportinclusion},
let $ξ ∈ \supp μ$ be any point such that $f(ξ) ≠ 0$
and let $U\subseteq Ω$ be an arbitrary open neighborhood of~$ξ$.
We need to show that $f μ\restrict{U} ≠ 0$.
For this, let $U_0 \subseteq U$ be an open neighborhood of~$ξ$
in which $f$ does not have any roots.
Since $ξ$ is a support point of $μ$,
there exists a compactly-supported continuous function $φ ∈ \contincompact{0}(U_0)$
such that $\int_{U_0} φ\,\d μ ≠ 0$.
Then
%
$ψ \coloneqq \frac{φ}{f} ∈ \contincompact{0}(U_0)$
can be extended by zero to a compactly-supported function $ψ ∈ \contincompact{0}(U)$
and we have $\int_U ψ\,\d (f μ) = \int_{U_0} \frac{φ}{f}\,\d (f μ) = \int_{U_0} φ\,\d μ ≠ 0$
and thus $f μ\restrict{U} ≠ 0$,
which proves the statement.
For part \localref{lem:zeroonsupport}, assume that $f μ$ is zero.
Then $\supp\lr{f μ} = ∅$,
so $f$ vanishes on $\supp μ$ by \localref{lem:nonvanishingsupportinclusion}.
The converse holds by \cite[Chapter~3, Theorem~33, addendum]{schwartz1973}.
Finally, for part \localref{lem:samevanishingonsupport},
observe that the complement of $\supp\lr{f μ}$
consists of the union of all open sets $U\subseteq Ω$ satisfying $f μ\restrict{U} = 0$.
By \localref{lem:zeroonsupport}, this is equivalent to $f$ vanishing on $\supp\lr{μ\restrict{U}}$.
By hypothesis, this is the case if and only if $g$ vanishes on $\supp\lr{μ\restrict{U}}$,
which in turn is equivalent to $g μ\restrict{U} = 0$
and thus completes the proof.
\end{proof}
For the remainder of this \lcnamecref{sec:support:signedmeasures},
we fix a filtration $\{L_{≤d}\}_{d∈ℕ}$ of $L$
for which all the components are finite-dimensional vector spaces.
In the affine case, we may choose $L_{≤d} = R_{≤d}$.
Additionally, we denote by $B_d^L$ and $B_d^R$
any bases of the filtered components $L_{≤d}$ and $R_{≤d}$, respectively.
With this notation, we arrive at the following \lcnamecref{thm:idealequalkernelextended}.
\par
\begin{theorem}\label{thm:idealequalkernelextended}
Let $μ$ be a compactly-supported signed measure on $Ω$,
denote by $\mathfrak{a} \coloneqq \Id{\supp μ} \subseteq L$
the vanishing ideal of (the Zariski closure of) its support
and let $σ\colon L\to\mathbbm{k}$ be its moment functional.
Let $d∈ℕ$. Then
\[
\mathfrak{a} \cap R_{≤d} = \kernel H_{d',d}
\]
holds for all sufficiently large $d'∈ℕ$,
where
$H_{d',d} \coloneqq \lr{\sform[σ]{w,v}}_{w ∈ B_{d'}^L,\,v ∈ B_d^R}$.
\end{theorem}
It then follows from Hilbert's basis theorem that
$\mathfrak{a}$ is generated by $\kernel H_{d',d}$
if $d∈ℕ$ is sufficiently large.
\begin{proof}
%
As the measure $μ$ is compactly supported, all its moments exist.
Let $d∈ℕ$ be arbitrary
and observe that
\[\label{eq:idealsubsetkernel}
\mathfrak{a}\cap R_{≤d} \subseteq \kernel H_{d',d}
= \braced*{p∈R_{≤d}\mid \sform[σ]{q,p} = 0\text{\ for all $q∈L_{≤d'}$}},
\]
for all $d'∈ℕ$.
Indeed, if $p∈\mathfrak{a}$, then $p$ vanishes on the support of $μ$,
so $\sform[σ]{q,p} = \int_Ω \invol{q} p\,\d μ = 0$
for all $q∈L$, by \ref{lem:zeroonsupport}.
More specifically, we have a descending chain
\[
R_{≤d} \supseteq \kernel H_{0,d} \supseteq \kernel H_{1,d} \supseteq \cdots \supseteq \mathfrak{a}\cap R_{≤d}
\]
which must stabilize,
so we can fix a $d'∈ℕ$ such that
\[\label{eq:kernelstabilizes}
\kernel H_{d',d} = \kernel H_{d'+δ,d}
\]
holds for all $δ∈ℕ$.
Assume that $\kernel H_{d',d} \nsubseteq \mathfrak{a}\cap R_{≤d}$.
Then we can choose a polynomial $p ∈ \kernel H_{d',d}$
with $p\notin\mathfrak{a}$,
so $p$ does not vanish everywhere on $\supp μ$.
Hence, by \ref{lem:zeroonsupport},
the signed measure $ν \coloneqq p μ$ is non-zero,
so there exists a compactly-supported continuous function
$φ ∈ \contincompact{0}(Ω)$ such that
$\int_Ω φ\,\d ν ≠ 0$.
By the Weierstrass approximation theorem
(see \cite[Chapter~5, Theorem~8.1]{conway1990} for the affine real\footnote{%
This argument would not hold if, in the affine case,
we were to work over the field of complex numbers,
as the algebra of polynomials on $ℂ^n$ is not closed under conjugation.}
and \cite[Corollary~3.2.2]{grafakos2014}
for the trigonometric version),
the function $φ$ can be uniformly approximated by polynomials in $L$
on a compact set containing the support of the measure $ν$,
which implies that not all moments of $ν$ can be zero.
Hence, there exists a polynomial $q∈L$ such that
$\int_Ω q\,\d ν = \int_Ω q p\,\d μ = \sform[σ]{\invol{q},p} ≠ 0$.
As $\invol{q}∈L_{≤d'+δ}$ for some $δ∈ℕ$,
this implies that $p \notin \kernel H_{d'+δ,d}$,
which is a contradiction to \myeqref{eq:kernelstabilizes},
by the choice of the polynomial $p$.
\end{proof}
\begin{remark}\label{rem:compactsupportdiscussion}
%
%
%
%
%
%
%
%
%
In the proof of \ref{thm:idealequalkernelextended},
the hypothesis that the support of the signed measure $μ$ is compact
not only guarantees that all its moments exist,
but, more importantly,
ensures that the signed measure $ν = p μ$ is determined by its moments,
so that $ν$ is already zero if all its moments vanish.
This does not in general hold for measures that are not compactly supported --
not even for rapidly decreasing functions.
For instance, let $g$ be a non-zero Schwartz function on $ℝ^n$
such that all its derivatives vanish at the origin,
i.\,e.\ $(\partial^α g)(0) = 0$ for all $α∈ℕ^n$.
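(A concrete choice is $g(x) = \mathrm{e}^{-x^2 - 1/x^2}$ for $x ≠ 0$ and $g(0) = 0$ in one variable, extended to $ℝ^n$ as a product of such factors.)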
%
%
Then its Fourier transform $\fourier{g}$ is a non-zero Schwartz function satisfying
%
%
\[
(-1)^n \lr{2π\mathrm{i}}^{\totaldeg{α}} \int_{ℝ^n} x^α \fourier{g}(x) \d x
= \lr{\partial^α g}(0)
= 0
\]
for all $α∈ℕ^n$ (cf.~\cite[Proposition~2.2.11\,(10)]{grafakos2014}),
so all the moments of $\fourier{g}$ are zero.
%
%
\end{remark}
\begin{remark}\label{rem:idealequalkernelextended}
%
%
%
%
%
In the affine case of \ref{thm:idealequalkernelextended},
we can choose the filtration of $L$ as $L_{≤d} = R_{≤d}$ for all $d∈ℕ$.
Let $H_{d',d}$ be
the rectangular moment matrix satisfying
the statement of the \lcnamecref{thm:idealequalkernelextended},
so $\mathfrak{a} \cap R_{≤d} = \kernel H_{d',d}$.
By \ref{lem:hankelkernelinjectivity},
this equality can only hold when the induced map
$\resid{H_{d',d}}$ on the quotient spaces,
as in \myeqref{eq:hankelmatfactors} of \ref{thm:hankelopfactorization},
is injective.
This implies $d' ≥ d$, for this choice of filtration in the affine case.
This is in contrast to the statement of \ref{thm:pronyohe}
in which the moment matrix $H_{d',d}$ had a different shape.
\end{remark}
By \ref{thm:idealequalkernelextended},
we can recover the vanishing ideal of the support
from finitely many moments.
In particular, this means that the kernel of the non-truncated moment map
also yields the vanishing ideal,
as the following statement shows.
\par
\begin{corollary}\label{cor:idealequalkernelnontruncated}
Under the assumptions of \ref{thm:idealequalkernelextended}, we have
\[
\mathfrak{a} \cap R = \kernel H,
\]
where $H$ denotes the map
$H\colon R \to \semikdual{L}$, $p\mapsto \lr{q\mapsto \sform[σ]{q,p}}$.
\end{corollary}
\begin{proof}
To see this, first observe that we always have
the inclusion $\mathfrak{a} \cap R \subseteq \kernel H$,
by \ref{lem:zeroonsupport}.
On the other hand, if $p∈\kernel H$, then $p∈R_{≤d}$ for some $d∈ℕ$.
In particular, this implies $\sform[σ]{q,p} = 0$ for all $q∈L_{≤d'}\subseteq L$
and arbitrary $d'∈ℕ$.
Choosing $d'$ as in \ref{thm:idealequalkernelextended},
we therefore obtain
$p∈\kernel H_{d',d} = \mathfrak{a}\cap R_{≤d}$,
so the statement follows.
\end{proof}
\begin{remark}\label{rem:idealequalkernelextended:nobound}
\ref{thm:idealequalkernelextended} does not quantify
what it means for $d'∈ℕ$ to be large enough for the statement to hold.
In general, the choice of $d'$ cannot be made purely based on knowledge
of the support or its vanishing ideal,
but it must inherently depend on the signed measure itself.
Indeed, for arbitrarily large $d,d'∈ℕ$,
one can construct a signed measure with the following properties:
its support is compact and Zariski-dense, so its vanishing ideal is zero,
and all the low-order moments vanish, so that the matrix
$H_{d',d} = \lr{\sform[σ]{w,v}}_{w ∈ B_{d'}^L,\,v ∈ B_d^R}$
is zero.
%
%
Hence, the kernel of $H_{d',d}$ is non-zero
and thus is not a generating set of the zero ideal,
the vanishing ideal of the support.
In other words, $d'$ is not large enough
for the statement of the \lcnamecref{thm:idealequalkernelextended} to hold.
However, for particular signed measures,
a bound on $d'$ is given in \ref{cor:idealequalkernelpolynomial}.
\end{remark}
\begin{remark}\label{rem:idealequalkernelextended:bases}
In the trigonometric case,
we could also state \ref{thm:idealequalkernelextended}
in a more symmetric fashion
in terms of a matrix for which both rows and columns are indexed
by $B_d^L$, a basis of the filtered component $L_{≤d}$.
We prefer to index the columns by $B_d^R$ instead
because it allows for a finer filtration,
i.\,e.\ the filtered components $R_{≤d}$ can be chosen to be of smaller dimension
than the components $L_{≤d}$,
and every ideal in $L$ can be generated by elements in $R$.
%
Indexing the rows of the matrix by $B_{d'}^L$
is needed in the proof of \ref{thm:idealequalkernelextended}
due to the use of the Weierstrass approximation theorem.
This raises the question of whether a statement
similar to \ref{thm:idealequalkernelextended} is possible
in which rows and columns are indexed by $B_{d'}^R$, $B_d^R$,
i.\,e.\ bases of components of the filtration on $R$
instead of $L$.
In general,
this is answered negatively by the following example,
but a positive answer is possible for non-negative measures,
as will be shown in \ref{sec:support:nonnegativemeasures}.
\end{remark}
\par
\begin{example}\label{ex:signedmeasurewrongkernel}
%
%
We consider the two-dimensional trigonometric case,
so let $n=2$.
Let $v_1 \coloneqq \lr{2,1}, v_2 \coloneqq \lr{1,2} ∈ ℤ^2$ and
define the functionals
\[
σ_j\colon L\longrightarrow ℂ,\qquad
x^α \longmapsto
\begin{cases}
1 &\text{if $\scalarprod{α}{v_j} = 0$,}\\
0 &\text{otherwise,}
\end{cases}
\]
for $α∈ℤ^2$ and $j=1,2$.
These are moment functionals of uniform measures supported on
the one-dimensional varieties in $\T^2$ that are defined by the polynomials
$x_1 - x_2^2$ and $x_1^2 - x_2$, respectively,
and are depicted in \ref{fig:trigonometriclines}.
Thus, the functional $σ \coloneqq σ_1 - σ_2$
is a moment functional of a signed measure.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\begin{axis}[
xmin=0, xmax=1,
ymin=0, ymax=1,
xtick distance = 1,
ytick distance = 1,
minor x tick num = 3,
minor y tick num = 3,
tick style={draw=none},
width = 4.2cm,
height = 4.2cm,
]
\fill[opacity=.3,gray] (2/3,0) -- (5/6,1/6) -- (2/3,1/3) -- (1/3,0) -- cycle;
\fill[opacity=.3,gray] (1/3,1) -- (1/6,5/6) -- (1/3,2/3) -- (2/3,1) -- cycle;
\fill[opacity=.3,gray] (0,0) -- (1/2,1/2) -- (1/3,2/3) -- (0,1/3) -- cycle;
\fill[opacity=.3,gray] (1,1) -- (1/2,1/2) -- (2/3,1/3) -- (1,2/3) -- cycle;
\fill[opacity=.3,gray] (0,2/3) -- (1/6,5/6) -- (0,1) -- cycle;
\fill[opacity=.3,gray] (1,1/3) -- (5/6,1/6) -- (1,0) -- cycle;
\draw[thick] (0.0,0.0) -- (1.0,0.5);
\draw[thick] (0.0,0.5) -- (1.0,1.0);
\draw[thick,dashed] (0.0,0.0) -- (0.5,1.0);
\draw[thick,dashed] (0.5,0.0) -- (1.0,1.0);
\end{axis}
\end{tikzpicture}
\caption{The varieties $\V{x_1 - x_2^2}$ (solid) and $\V{x_1^2 - x_2}$ (dashed)
on the torus $\T^2$ parametrized by $\linterval{0,1}^2$.
The shaded region designates where
the polynomial $g$ from \ref{rem:lagrangecondition} is negative.}
\label{fig:trigonometriclines}
\end{figure}
Observe that $\sform[σ]{x^α, 1} = σ\lr{x^{-α}} = 0$
holds for all $α∈ℕ^2$.
This implies that $\sform[σ]{q,1} = 0$ for all $q∈R$.
Hence, for every choice of $d,d'$,
the polynomial $p\coloneqq 1$ is contained in the kernel of the moment matrix
$H_{d',d} \coloneqq
\lr{\sform[σ]{w,v}}_{w ∈ B_{d'}^R,\,v ∈ B_d^R}$,
where $B_d^R$ denotes a basis of $R_{≤d}$,
with respect to any filtration of $R$.
As $p = 1$ does not vanish on any non-empty variety,
this shows that the statement of \ref{thm:idealequalkernelextended}
does not hold for this matrix $H_{d',d}$
with rows indexed by $B_d^R$ rather than $B_d^L$.
Additionally, this shows that the kernel of the non-truncated map
$R \to \semikdual{R}$, $p\mapsto \lr{q\mapsto \sform[σ]{q,p}}$,
is not in general an ideal in $R$, in the trigonometric case.
For instance, we have $\sform[σ]{x_2^2, x_1} = σ\lr{x^{\lr{1,-2}}} ≠ 0$,
so $x_1 \notin \kernel H$, even though $1 ∈ \kernel H$.
\end{example}
\subsection{Non-negative measures}\label{sec:support:nonnegativemeasures}
In this \lcnamecref{sec:support:nonnegativemeasures},
we consider non-negative measures as well as
statements about signed measures that involve non-negative measures.
The non-negativity is an essential property
that allows us to state
the following \lcnamecref{thm:idealequalkernel}
which is a stronger version of \ref{thm:idealequalkernelextended}.
If $W = R_{≤d}$ is a component of the total degree filtration,
then in the affine case
this statement can also be obtained with a different proof by combining
\cite[Theorem~2.10]{laurentrostalski2012}
and \cite[Lemma~5]{lasserre2021:empiricalmomentschristoffel}.
\par
\begin{proposition}\label{thm:idealequalkernel}
Let $μ$ be a non-negative measure on $Ω$ with finite moments,
let $\mathfrak{a} \coloneqq \Id{\supp μ} \subseteq L$
be the vanishing ideal of (the Zariski closure of) its support
and let $σ\colon L\to\mathbbm{k}$ be its moment functional.
Let $W\subseteq L$ be a $\mathbbm{k}$-vector subspace.
Then $\sform[σ]{\blank,\blank}$ induces a
positive-definite form on $\submodulequotient{W}{\mathfrak{a}}$.
In particular, if $W$ is finite-dimensional and $B$ is a basis of $W$,
let $H \coloneqq \lr{\sform[σ]{w,v}}_{w,v∈B}$.
Then
\[
\mathfrak{a} \cap W = \kernel H.
\]
Furthermore, $H$ is non-singular if and only if the elements of $B$
are linearly independent modulo $\mathfrak{a}\cap W$.
\end{proposition}
For the statement, only finiteness of the moments that occur in $H$ is needed,
so $σ$ only needs to be defined on the subspace $\invol{W}\cdot W \subseteq L$.
\begin{proof}
%
%
First observe that
$\sform[σ]{\blank,\blank}$ is positive-semidefinite,
as $\sform[σ]{p,p} = \int_{Ω} \abs{p(x)}^2 \d μ(x) ≥ 0$ for all $p∈L$.
By \ref{lem:inducedformonquotient},
$\sform[σ]{\blank,\blank}$ induces a form on $\submodulequotient{W}{\mathfrak{a}}$
and we need to show that it is non-degenerate.
%
Assume that $p ∈ W$ is a polynomial such that $\sform[σ]{p,p} = 0$.
Since $\abs{p}^2 ≥ 0$ on $Ω$, it follows from
\cite[Proposition~1.23]{schmuedgen2017} that
$\abs{p}^2$ vanishes on $\supp μ$
and thus $p ∈ \mathfrak{a}$.
Hence, the induced form is non-degenerate
and we have $\kernel H \subseteq \mathfrak{a} \cap W$.
Conversely, if $p ∈ \mathfrak{a} \cap W$,
then $(\invol{q} p)(ξ) = \invol{q}(ξ) p(ξ) = 0$ for all $ξ ∈ \supp μ$
and all $q ∈ L$.
Thus, in particular, we have
$σ(\invol{q} p) = \int_{Ω} \invol{q}(x) p(x) \d μ(x) = 0$
for all $q ∈ W$, so $p∈\kernel H$.
From this, the addendum readily follows.
If $H$ is non-singular, we have $\mathfrak{a}\cap W = \kernel H = 0$,
so the elements of $B$ are linearly independent modulo $\mathfrak{a}\cap W$.
If $H$ is singular, we find a non-trivial linear combination
$q = \sum_{w∈B} q_w w ≠ 0$, $q_w∈\mathbbm{k}$,
with $q ∈ \kernel H = \mathfrak{a}\cap W$,
so $q \equiv 0 \pmod{\mathfrak{a}\cap W}$.
\end{proof}
In particular, \ref{thm:idealequalkernel} holds with $W = R_{≤d}$ for any $d∈ℕ$,
so that
\[
\mathfrak{a} \cap R_{≤d} = \kernel H.
\]
Again, by Hilbert's basis theorem,
the ideal $\mathfrak{a}$ is generated by $\mathfrak{a} \cap R_{≤d}$
if $d$ is sufficiently large.
Hence, for such a number $d$,
the kernel of $H$ generates the ideal $\mathfrak{a}$,
which is the statement of \cite[Theorem~2.10]{laurentrostalski2012},
so we can fully recover the ideal $\mathfrak{a}$ from finitely many moments.
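As a small numerical illustration (with arbitrarily chosen data), consider the uniform probability measure on the unit circle in $ℝ^2$ and the total-degree component $W = R_{≤2}$: the kernel of the corresponding moment matrix is one-dimensional and spanned by the coefficient vector of $x_1^2 + x_2^2 - 1$, in accordance with \ref{thm:idealequalkernel}. In Python, for example:
\begin{verbatim}
# Moment matrix of the uniform measure on the unit circle, monomials of
# total degree <= 2; its kernel recovers x^2 + y^2 - 1 (illustrative only).
import numpy as np
from itertools import product

exps = [(a, b) for a, b in product(range(3), repeat=2) if a + b <= 2]

def moment(a, b, nodes=2000):
    t = 2 * np.pi * np.arange(nodes) / nodes
    return np.mean(np.cos(t)**a * np.sin(t)**b)

H = np.array([[moment(a1 + a2, b1 + b2) for (a2, b2) in exps]
              for (a1, b1) in exps])

kernel_vec = np.linalg.svd(H)[2][-1]   # spans ker H numerically
# up to scaling, the coefficients of x^2 + y^2 - 1 in the order of exps
print(exps)
print(np.round(kernel_vec / kernel_vec[exps.index((2, 0))], 4))
\end{verbatim}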
\begin{lemma}\label{lem:idealequalkernelnonneg}
Let $\{F_d\}_{d∈ℕ}$ be a filtration of $R$ or $L$.
Let $μ$ be a signed measure on $Ω$
and $g ∈ F_δ$ for some $δ∈ℕ$
such that $μ_+ = \invol{g} μ$
is a non-negative measure with finite moments
satisfying $\supp μ = \supp μ_+$.
Then
\[
\Id{\supp μ} \cap F_d = \kernel H_{d+δ,d},
\]
for every $d∈ℕ$
with $H_{d+δ,d} \coloneqq \lr{\sform[σ]{w,v}}_{w∈B_{d+δ},\,v∈B_d}$,
where $σ\colon L\to\mathbbm{k}$ denotes the moment functional of $μ$
and $B_d,B_{d+δ}$ denote finite bases of $F_d,F_{d+δ}$, respectively.
\end{lemma}
\begin{proof}
Observe that
\begin{align}
\Id{\supp μ} \cap F_d
\subseteq \kernel H_{d+δ, d}
&=
\braced*{p ∈ F_d \;\middle|\; \int_{Ω} \invol{q} p \,\d μ = 0\text{\ for all $q ∈ F_{d+δ}$}}\\
&\subseteq\label{eq:idealequalkernelpolynomial:inclusions}
\braced*{p ∈ F_d \;\middle|\; \int_{Ω} \Invol{g q} p \,\d μ = 0\text{\ for all $q ∈ F_d$}},
\end{align}
where the last inclusion holds due to
$g F_d \subseteq F_{d+δ}$.
As $\invol{g} μ = μ_+$ is a non-negative measure on $Ω$,
it follows from \ref{thm:idealequalkernel} that
the set \myeqref{eq:idealequalkernelpolynomial:inclusions} is equal to
$\Id{\supp μ_+} \cap F_d$.
Then the statement follows from $\supp μ_+ = \supp μ$.
\end{proof}
For signed measures that are a product of a polynomial and a non-negative measure,
we then obtain the following result,
which in contrast to \ref{thm:idealequalkernelextended}
comes with an explicit bound on the size of the moment matrix
and does not require compactness of the support.
\par
\begin{corollary}\label{cor:idealequalkernelpolynomial}
Let $\{F_d\}_{d∈ℕ}$ be a filtration of $R$ or $L$.
Let $μ = g μ_+$ be a signed measure, where
$μ_+$ denotes a non-negative measure on $Ω$ with finite moments
and $g ∈ F_δ$ a polynomial for some $δ∈ℕ$.
Then
\[
\Id{\supp μ} \cap F_d = \kernel H_{d+δ,d},
\]
for every $d∈ℕ$
with $H_{d+δ,d} \coloneqq \lr{\sform[σ]{w,v}}_{w∈B_{d+δ},\,v∈B_d}$,
where $σ\colon L\to\mathbbm{k}$ denotes the moment functional of $μ$
and $B_d,B_{d+δ}$ denote finite bases of $F_d,F_{d+δ}$, respectively.
\end{corollary}
\begin{proof}
As $\invol{g} g$ and $g$ have the same vanishing set on $Ω$,
it follows from \ref{lem:samevanishingonsupport}
that $\supp\lr{\invol{g} g μ_+} = \supp\lr{g μ_+} = \supp μ$.
As $\invol{g} g μ_+ = \invol{g} μ$ is a non-negative measure on $Ω$,
the result follows from \ref{lem:idealequalkernelnonneg}.
\end{proof}
\begin{remark}\label{rem:pronyposdim}
Under the assumptions that $μ_+$ is the uniform measure on some unknown variety $V\subseteq Ω$,
that $g$ is non-zero on a Zariski-dense subset of $V$
and that sufficiently large integers $d,δ∈ℕ$ are known
such that $g∈F_δ$ and the vanishing ideal of $V$ is generated by polynomials in $F_d$,
\ref{cor:idealequalkernelpolynomial} gives rise to
a scheme for recovering all the defining data of $μ$ from finitely many of its moments.
In particular, this includes finitely-supported measures as a special case,
for which the variety $V$ is zero-dimensional.
Hence, this may be regarded as an extension of Prony's method
to more general measures.
In this setting, we have $V = \supp μ = \supp μ_+$.
Thus, we obtain the variety from
$V = \V{\mathfrak{a} \cap F_d} = \V{\kernel H_{d+δ,d}}$,
where $\mathfrak{a}\coloneqq\Id{\supp μ}\subseteq L$ denotes the vanishing ideal.
Knowing $V$, one can compute the moments of the uniform measure $μ_+$ on $V$.
Finally, finding $g$ is a linear problem involving only the moments of $μ$ and $μ_+$.
Indeed,
if $B_δ \subseteq F_δ$ represents a basis of $\submodulequotient{F_δ}{\mathfrak{a}}$
and $H \coloneqq \lr{\int_Ω \invol{w} v \,\d μ_+}_{w,v∈B_δ}$ is the corresponding moment matrix,
we have
\[
H \resid{g} = \lr{\int_Ω \invol{w} \resid{g} \,\d μ_+}_{w∈B_δ} = \lr{\int_Ω \invol{w} \d μ}_{w∈B_δ},
\]
where $\resid{g} = \sum_{v∈B_δ} g_v v$ is the reduction of $g$ modulo $\mathfrak{a}\cap F_δ$.
As $H$ is a positive-definite matrix by \ref{thm:idealequalkernel},
this linear system has a unique solution,
so the polynomial $g$ is unique modulo $\mathfrak{a}\cap F_δ$.
We remark, though, that computing the moments of the uniform measure $μ_+$
can itself be a difficult problem
if the variety $V$ is not zero-dimensional.
An approach that proved successful for us is
to find a parametrization of the variety $V$
and then compute the moments numerically with respect to this parametrization.
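To illustrate the scheme on a simple instance (a sketch with arbitrarily chosen data), take $μ_+$ to be the uniform probability measure on the unit circle in $ℝ^2$, $g = x_1$ and $δ = 1$: the kernel of the rectangular moment matrix recovers $x_1^2 + x_2^2 - 1$, and the final linear system returns the coefficients of $g$ with respect to the basis $\{1, x_1, x_2\}$ of $\submodulequotient{F_δ}{\mathfrak{a}}$. Here, the intermediate step of computing the moments of $μ_+$ is carried out directly via the parametrization of the circle.
\begin{verbatim}
# Recovery scheme for mu = x1 * mu_+, mu_+ uniform on the unit circle
# (illustrative sketch; the variety is a plane curve, delta = 1).
import numpy as np
from itertools import product

t = 2 * np.pi * np.arange(4000) / 4000
x1, x2 = np.cos(t), np.sin(t)

def mom_mu(a, b):        # moments of the signed measure mu = x1 * mu_+
    return np.mean(x1**(a + 1) * x2**b)

def mom_plus(a, b):      # moments of the uniform measure mu_+
    return np.mean(x1**a * x2**b)

def exps(d):
    return [(a, b) for a, b in product(range(d + 1), repeat=2) if a + b <= d]

d, delta = 2, 1
rows, cols = exps(d + delta), exps(d)

# Step 1: ker H_{d+delta,d} yields the defining equation x1^2 + x2^2 - 1.
H_rect = np.array([[mom_mu(a1 + a2, b1 + b2) for (a2, b2) in cols]
                   for (a1, b1) in rows])
print(np.linalg.svd(H_rect)[2][-1])      # multiple of coeffs of x1^2+x2^2-1

# Final step: with the variety known, g solves a positive-definite
# linear system involving the moments of mu_+ and of mu.
B = [(0, 0), (1, 0), (0, 1)]             # basis of F_delta modulo the ideal
H_plus = np.array([[mom_plus(a1 + a2, b1 + b2) for (a2, b2) in B]
                   for (a1, b1) in B])
rhs = np.array([mom_mu(a, b) for (a, b) in B])
print(np.linalg.solve(H_plus, rhs))      # approximately (0, 1, 0), i.e. g = x1
\end{verbatim}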
\end{remark}
We give a few examples of signed measures that illustrate
that the assumption of non-negativity is crucial
for \ref{thm:idealequalkernel}.
\par
\begin{example}\label{eg:negdensity}
%
Let $μ$ be a signed measure supported on the real interval $[-1,1]\subseteq ℝ$ with density $g(x)\coloneqq x$
and denote its moment functional by $σ\colon R \coloneqq ℝ[x] \to ℝ$,
so that $σ(p) = \int_{-1}^1 p(x) g(x) \d x$ for $p∈R$.
In particular, this means that $\sform[σ]{\blank,\blank}$ is not positive-semidefinite.
One checks that, due to symmetry, the even moments $σ(x^{2α})$ vanish for $α∈ℕ$
and thus $\det\lr{σ\lr{x^{2α+2β}}}_{0≤α,β≤d} = 0$ for all $d∈ℕ$.
Then it follows that $\det\lr{σ\lr{x^{α+β}}}_{0≤α,β≤d} = 0$ if $d$ is even,
for example using the Leibniz formula
or by a suitable permutation of rows and columns.
This means that, for every even $d$,
we find some non-zero polynomial in $R_{≤d}$
that lies in the kernel of the square moment matrix $\lr{σ\lr{x^{α+β}}}_{0≤α,β≤d}$,
even though the Zariski closure of the support of the signed measure $μ$
is the entire line $ℝ$, whose vanishing ideal in $R$ is the zero-ideal,
and even though the monomials are linearly independent modulo the zero-ideal.
Hence, the statement of \ref{thm:idealequalkernel} cannot hold.
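Concretely, for $d = 2$, the moment matrix is
\[
\lr{σ\lr{x^{α+β}}}_{0≤α,β≤2}
=
\begin{pmatrix}
0 & \tfrac{2}{3} & 0 \\
\tfrac{2}{3} & 0 & \tfrac{2}{5} \\
0 & \tfrac{2}{5} & 0
\end{pmatrix},
\]
whose kernel is spanned by the coefficient vector of $5x^2 - 3$ with respect to the monomial basis $1, x, x^2$, a polynomial that does not vanish identically on $ℝ$.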
Note, however, that in this example the non-truncated Hankel operator
is nevertheless injective,
as guaranteed by \ref{cor:idealequalkernelnontruncated}.
Moreover, as $g$ is a polynomial of degree $1$,
it follows from \ref{cor:idealequalkernelpolynomial}
that the kernel of the rectangular matrix
$\lr{σ\lr{x^{α+β}}}_{0≤α≤d+1,0≤β≤d}$ is zero, for every $d∈ℕ$.
%
%
%
%
\end{example}
In the affine setting with $L_{≤d} = R_{≤d}$, $d∈ℕ$,
and for a finitely-supported signed measure,
it follows from \ref{lem:vandermondefactorization}
that the statement of
\ref{thm:idealequalkernelextended} holds with $d'\coloneqq d$,
as long as $d∈ℕ$ is sufficiently large.
The following example shows that this can fail for small $d$.
\par
\begin{example}\label{ex:pointskernelunequaltruncatedideal}
Let $R = \mathbbm{k}[x]$ be the univariate polynomial ring
and let $\mathfrak{a} = \maxideal{ξ_1}\cap\maxideal{ξ_2}$
with two distinct points $ξ_1,ξ_2∈\mathbbm{k}$.
We consider the map $σ = \ev_{ξ_1} - \ev_{ξ_2}$.
Denote by $H_{d',d}$ the corresponding Hankel matrix, for $d,d'∈ℕ$.
By \myeqref{eq:idealsubsetkernel},
we have $\mathfrak{a}\cap R_{≤d} \subseteq \kernel H_{d',d}$,
but equality does not hold for small $d$.
For instance, if $d'=d=0$,
we have
\[
\mathfrak{a}\cap R_{≤d} = 0 \subsetneq \kernel H_{0,0} = \kernel \lr{0}.
\]
However, if $d$ is sufficiently large, namely $d ≥ 2$, we have
$\mathfrak{a}\cap R_{≤d} = \kernel H_{d',d}$
by \ref{lem:vandermondefactorization:2}
for every choice of $d'≥d$.
\end{example}
\par
In contrast, we have seen in \ref{ex:signedmeasurewrongkernel}
that a similar statement is not possible
for infinitely-supported signed measures.
More precisely, it is an example in which one has
$\mathfrak{a}\cap R_{≤d} ≠ \kernel H_{d,d}$ for all $d∈ℕ$,
since $1∈\kernel H_{d,d}$, but $1 \notin \mathfrak{a}$.
For a non-negative measure, this would not be possible due to \ref{thm:idealequalkernel}.
For signed measures that are a complex linear combination of non-negative measures,
we obtain the following statement,
which in contrast to \ref{thm:idealequalkernelextended}
bounds the size of the moment matrix
and does not require compactness of the support.
\par
\begin{theorem}\label{thm:idealequalkernel:mixture}
Let $μ = \sum_{j=1}^r λ_j μ_j$, where $λ_j∈ℂ^*$ and
$μ_j$ are non-negative measures on $Ω$ with finite moments.
Let $δ∈ℕ$ be such that there exist elements
$h_j ∈ L_{≤δ}$, $1≤j≤r$, with
$h_j ≥ 0$ on $Ω$
and
\[\label{eq:lagrangelikecondition}
\supp\lr{h_j μ_k} = \begin{cases}
\supp μ_k & \text{if $k = j$},\\
∅ & \text{otherwise}.
\end{cases}
\]
Then
\[
\Id{\supp μ} \cap L_{≤d} = \kernel H_{d+δ,d}
\]
holds for all $d∈ℕ$
with $H_{d+δ,d} \coloneqq \lr{\sform[σ]{w,v}}_{w∈B_{d+δ}, v∈B_d}$,
where $σ\colon L\to\mathbbm{k}$ denotes the moment functional of $μ$
and $B_d,B_{d+δ}$ denote bases of $L_{≤d},L_{≤d+δ}$, respectively.
\end{theorem}
\begin{proof}
%
Since $h_j μ_k = 0$ for all $k≠j$,
we have
\[\label{eq:unweightedmeasure}
h_j μ = h_j λ_j μ_j = h_j λ_j μ_+,
\]
where we define $μ_+ \coloneqq \sum_{k=1}^r μ_k$.
Letting $g \coloneqq \sum_{j=1}^r λ_j h_j ∈ L_{≤δ}$,
we thus have
$\invol{g} μ = \sum_{j=1}^r \abs{λ_j}^2 h_j μ_+$,
which is a non-negative measure.
Its support satisfies
\[
\supp\lr{\sum_{j=1}^r \abs{λ_j}^2 h_j μ_+}
%
= \bigcup_{j=1}^r \supp\lr{h_j μ}
= \supp μ,
\]
where the first equality holds due to \myeqref{eq:unweightedmeasure}
%
%
and the second due to $\supp μ_j = \supp\lr{h_j μ_j}$.
Hence, the statement follows from \ref{lem:idealequalkernelnonneg}.
\end{proof}
\begin{remark}\label{rem:lagrangecondition}
Note that elements $h_j∈L_{≤δ}$ satisfying \myeqref{eq:lagrangelikecondition} exist,
as long as $δ∈ℕ$ is large enough and
the Zariski closures of $\supp μ_j$, $1≤j≤r$,
are varieties no two of which share a common irreducible component.
This allows for elements $f_j∈L$
such that $f_j$ vanishes on $\supp μ_k$ for all $1≤k≤r$ with $k≠j$
and $f_j$ is non-zero on a dense subset of $\supp μ_j$,
so we can choose $h_j \coloneqq \invol{f_j} f_j$, for $1≤j≤r$.
In particular, we can apply \ref{thm:idealequalkernel:mixture}
to \ref{ex:signedmeasurewrongkernel}
with $f_1 \coloneqq x_1^2 - x_2$, $f_2 \coloneqq x_1 - x_2^2$.
With $δ\coloneqq 2$, we then have $h_1,h_2 ∈ L_{≤δ}$
in terms of the max-degree filtration
and the hypotheses of the \lcnamecref{thm:idealequalkernel:mixture} are satisfied.
The Laurent polynomial $g = h_1 - h_2$ constructed in the proof
of the \lcnamecref{thm:idealequalkernel:mixture}
is non-negative on one of the components
and non-positive on the other, as depicted in \ref{fig:trigonometriclines},
so that $\invol{g} μ$ is a non-negative measure.
\end{remark}
\minisec{Acknowledgments}
The author thanks Stefan Kunis for helpful comments and for discussions and support.
Parts of this manuscript are incorporated in the thesis \cite{wageringel2021}.
\setlength{\emergencystretch}{1em}%
\setcounter{biburlnumpenalty}{5000} %
\printbibliography{}
\end{document}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 3,898 |
\section{Introduction}
\label{intro}
In recent years, NASA missions such as the Time History of Events and Macroscale Interactions during Substorms (THEMIS), the Van Allen Probes (RBSP) and the Magnetospheric Multiscale (MMS) mission have advanced our understanding of the complex interconnections of the geospace environment thanks to the availability of in-situ data. Some of these in-situ data serve as boundary conditions and parametric input to many space environment models and are critical for accurate nowcasts and forecasts. However, a trusted operational system relies on continuous and long-running measurements of these quantities. Ground-based measurements of key input parameters can provide a solution to this need. The PLASMON project (PLASmasphere MONitoring, an FP7-SPACE-2010-1 Collaborative Project) is an outstanding example of efforts to produce important key parameters, such as plasmasphere densities, using ground-based whistler measurements [\cite{lichtenberger2013plasmasphere}]. As part of PLASMON, the global AWDANet network [\cite{lichtenberger2008automatic}; \cite{lichtenberger2009new}], consisting of 28 VLF receiver stations, can be extended with the capability of recording whistler mode chorus emissions at stations with magnetic footprint $L > 4 (3)$. In particular, we will show in this paper how rising tone chorus emissions can be used as a proxy to estimate the in-situ thermal plasma conditions, which form the low-energy boundary condition of many of our current state-of-the-art radiation belt and ring current models.
Coherent chorus emissions are typically observed as rising/falling tones in the frequency range of $0.1 f_{ce} < f < 0.8 f_{ce}$ with a discontinuity at $0.5 f_{ce}$, where $f_{ce}$ is the electron gyrofrequency [\cite{JGR:JGR7608}; \cite{KOONS19901335}; \cite{santolik2003spatio}; \cite{SAZHIN1992681}]. These emissions are typically excited during geomagnetic storms close to the magnetic equator in low-density plasmas just outside the plasmapause. Chorus emissions are known to be generated via wave-particle interactions with an anisotropic distribution of energetic electrons (a few keV to 100 keV) injected from the plasmasheet [\cite{kennel1966limit}; \cite{anderson1977vlf}; \cite{ledocq1998chorus}; \cite{meredith2001substorm}; \cite{omura2009nonlinear}; \cite{santolik2010wave‐particle}; \cite{li2013characteristics}; \cite{JGRA:JGRA51364}]. Anisotropic angular distributions of substorm-injected energetic electrons (also called the source population [\cite{JGRA:JGRA51985}]) are able to provide free energy for chorus wave excitation [\cite{thorne2013rapid}, and references therein] and cause an isotropic pitch angle distribution (PAD) in the energy range of the interacting particles. The attention of radiation belt modelers has recently turned to whistler mode chorus waves due to their role both in accelerating electrons to MeV energies in the Earth's outer radiation belt [\cite{horne1998potential}; \cite{summers1998relativistic}; \cite{summers2002model}; \cite{reeves2013electron}; \cite{thorne2013rapid}; \cite{li2014radiation}] and in pitch angle scattering of electrons into the atmospheric loss cone [\cite{lorentzen2001observations}; \cite{o2004quantification}; \cite{thorne2005timescale}; \cite{hikishima2010microburst}]. The generation of chorus emissions is known to be driven by electron cyclotron resonance [\cite{kennel1966limit}; \cite{kennel1967unstable}; \cite{tsurutani1974postmidnight}; \cite{nunn1997numerical}; \cite{chum2007chorus}; \cite{katoh2007computer}, \cite{katoh2007relativistic}; \cite{omura2008theory}].\\
\citet{omura2008theory} and \citet{omura2011triggering} proposed a nonlinear wave growth theory for chorus wave generation. They assumed that a linear instability excites a coherent whistler mode wave which triggers the non-linear process. They found a relationship between measurable characteristics (frequency sweep rate $\partial \omega/\partial t$, optimum wave amplitude $\Omega_{w0}$, threshold amplitude $\Omega_{th}$) of rising-tone emissions and the distribution function of energetic electrons (number density $N_h$, parallel and perpendicular thermal velocity, $V_{t||}$ and $V_{t\perp}$, respectively) participating in the wave-particle interaction. Their theory reveals the amplitude dependence of the frequency sweep rate of chorus emissions in the generation region close to the magnetic equator. During quasi-parallel propagation away from the magnetic equator, the wave amplitude of chorus emissions undergoes convective growth due to the gradient of the magnetic field, but $\partial \omega/\partial t$ is affected only by cold plasma dispersion. During their slightly oblique propagation away from the equator, the gap at $0.5 f_{ce}$ is formed by nonlinear wave damping via Landau resonance \cite{hsieh2018}.\\
The above-mentioned features of the theory led the AWDANet Team to start developing a method to derive the density and thermal velocities of energetic electrons (source population) from chorus emissions recorded on the ground, after they have been projected from the ground to the equatorial generation region by a propagation model. When we developed our chorus-inversion method to monitor the equatorial source population, we took into account that the following data are available at AWDANet stations: 1) electromagnetic wave recordings (fs = 20 kHz), 2) equatorial electron plasma number density from PLASMON and 3) electron gyrofrequency obtained from a chosen geomagnetic field model via the station's L value. Points 2) and 3) assume that chorus emissions propagate quasi-parallel to the magnetic field.\\
The main objective of this study is to apply and validate the chorus-inversion method. The theoretical background of the chorus-inversion is described in Section \ref{theo_disc}. In the third section, we present the results of our method for 16 chorus emissions selected from EMFISIS data of Van Allen Probes spacecraft A. Then, we validate the results with simultaneously measured HOPE data from the same spacecraft and analyze the theoretical amplitudes and growth rates. To support the validation process, we also analyze the changes of the total electron flux and thermal anisotropies from the HOPE and Magnetic Electron Ion Spectrometer (MagEIS) instruments. Section \ref{sum_con} gives a summary and conclusions.
\section{Determination of thermal velocity and density of energetic electrons}\label{theo_disc}
The inversion method consists of two phases (Figure \ref{process}). First we estimate the parallel thermal velocity and the minimum perpendicular thermal velocity of the source population using the relativistic solution of the electromagnetic R-mode wave instability of \citet{doi:10.1063/1.872932} (first-phase blue box in Fig.~\ref{process}). Using these thermal velocities, a direct estimate of $N_h/N_c$ is obtained from the frequency sweep rate of a chorus emission using the nonlinear wave growth theory (second-phase blue box). For this study, the inputs are the gyrofrequency $\Omega_e$, the plasma frequency $\omega_{pe}$, the frequency sweep rate of an individual chorus emission $\partial \omega / \partial t$, and the mean frequency of the assumed band of linear growth $\omega_{rm}$, all from EMFISIS measurements (red boxes in Fig.~\ref{process}). More about the assumptions (green boxes in Fig.~\ref{process}) can be found in the descriptions of these theories below.
\begin{figure}[h]
\centering
\includegraphics[width=35pc]{process.png}
\caption{Chorus-inversion method: Inputs are from EMFISIS wave measurements (red boxes) only. As the first step, thermal momentum $U_{t||}$ and average perpendicular velocity $V_{\perp 0}$ are calculated assuming that linear wave growth is the initial phase of chorus generation. The second phase is governed by non-linear wave growth. Here, we replace the wave amplitude $\Omega_w$ with the optimum amplitude $\Omega_{opt}$ in order to obtain $N_h$. For the calculation of $N_h$, we use the output of the first phase, $U_{t||}$ and $V_{\perp 0}$. At the end of the process, we obtain the bi-Maxwellian function parameters of energetic electrons responsible for chorus emission generation. In the green boxes we note some important assumptions.}
\label{process}
\end{figure}
\subsection*{Relativistic linear growth-rate of R-mode plasma waves}
A band of whistler-mode waves is usually present at or below the starting frequency of chorus emissions and acts as a triggering wave for the nonlinear wave growth mechanism. This band is assumed to be generated by the relativistic whistler-mode instability that is driven by the temperature anisotropy of the source population, $A^M = T_{\perp}/T_{||} -1 = V_{t\perp}^2/V_{t||}^2 -1$ in the case of a bi-Maxwellian distribution function. The instability of electromagnetic R-mode waves in a relativistic plasma was studied by \citet{doi:10.1063/1.872932}. They expressed the linear growth rate as:\\
\begin{equation}\label{lin_inst_growth}
\omega_i = \frac{\pi \omega_{pe}^2 \eta_{rel}}{[2\omega_r + \omega_{pe}^2 |\Omega_e|/(\omega_r - |\Omega_e|)^2]} \{A_{rel} - A_c\},
\end{equation}
where $\eta_{rel}$ is the fraction of the relativistic particle distribution near resonance, which is proportional to the ratio of hot and cold electron density, $N_h/N_c \ll 1$.
$A_{rel}$ is the relativistic pitch-angle anisotropy of the resonant particles, which in the non-relativistic limit is equal to $A^M$. The critical anisotropy is
\begin{equation}\label{AC}
A_c = \frac{1}{\Omega_e/\omega_r -1}.
\end{equation}
In their paper, \citet{doi:10.1063/1.872932} evaluated the linear wave growth rate as a function of frequency $\omega_r$ by numerical integration along the resonance ellipse for different distribution functions, and studied the effects of changes in the key parameters. In the case of a bi-Maxwellian distribution, they found that the variation of $N_h/N_c$ only affects the magnitude of the growth rate. Increasing $A_{rel}$ likewise increases the growth rate and, in addition, slightly widens the frequency range of the instability. Another important key parameter is the ratio of the electron plasma and gyrofrequency, $\omega_{pe}/\Omega_e$: decreasing $\omega_{pe}/\Omega_e$ shifts the maximum growth rate to higher frequencies. Likewise, decreasing the hot electron temperature ($U_{t||}$) increases the frequency of the maximum growth rate and narrows the unstable frequency range.\\
We assume that the linear growth rate takes its maximum value at the mean frequency of the whistler-instability wave band, $\omega_{rm}$; this frequency is determined only by $\omega_{pe}/\Omega_e$ and $U_{t||}$. In the chorus-inversion $\omega_{pe}/\Omega_e$ and $N_c$ are known; therefore the $U_{t||}$ that places the maximum linear growth rate of the whistler-mode instability at $\omega_{rm}$ serves as the estimate of the initial parallel thermal momentum of the source population. Moreover, the minimum resonant anisotropy required for instability, $A_c$, provides the minimum value of $V_{t\perp}$. At this stage of the chorus-inversion, we use an arbitrary $N_h$, because it does not affect the frequency at which the growth rate is maximal. $N_h$ is calculated in the second step of the chorus-inversion method employing the nonlinear wave growth theory.
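The first phase can be summarized compactly in code. The Python sketch below is an illustration only: \texttt{linear\_growth\_rate} stands for any user-supplied routine evaluating the growth rate of the adopted model (e.g., the relativistic rate of \citet{doi:10.1063/1.872932}), the simple grid search is our own choice, and the conversion from $A_c$ to the minimum average perpendicular velocity assumes the bi-Maxwellian relation $V_{\perp 0}=\sqrt{\pi/2}\,V_{t\perp}$.
\begin{verbatim}
import numpy as np

def critical_anisotropy(omega_rm, Omega_e):
    """Minimum resonant anisotropy A_c = 1/(Omega_e/omega_rm - 1)."""
    return 1.0 / (Omega_e / omega_rm - 1.0)

def fit_Utpar(omega_rm, omega, linear_growth_rate, Ut_grid):
    """Pick the parallel thermal momentum that places the maximum of the
    linear growth rate at omega_rm.  `linear_growth_rate(omega, Ut)` is a
    user-supplied routine; an arbitrary N_h can be used inside it, since
    N_h only scales the magnitude of the growth rate."""
    return min(Ut_grid,
               key=lambda Ut: abs(omega[np.argmax(linear_growth_rate(omega, Ut))]
                                  - omega_rm))

def vperp0_min(Ut_par, A_c, gamma_R=1.0):
    """Minimum average perpendicular velocity from the critical anisotropy,
    assuming a bi-Maxwellian (V_perp0 = sqrt(pi/2) V_tperp, V_tpar = U_tpar/gamma_R)."""
    return np.sqrt(np.pi / 2.0) * (Ut_par / gamma_R) * np.sqrt(1.0 + A_c)
\end{verbatim}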
\subsection*{Nonlinear wave growth theory}
Linear wave growth provides the initial amplitudes of the emissions and is followed by nonlinear wave growth (\citet{omura2008theory} and \citet{omura2011triggering}), which is responsible for the growing amplitude and rising frequency of chorus emissions, assuming parallel propagation in the generation region. \citet{omura2009nonlinear} proposed that the formation of the gap between the upper and lower band is due to the nonlinear damping mechanism caused by slightly oblique propagation away from the equator.
The frequency sweep rate of chorus emission is obtained from the inhomogeneity ratio of the relativistic second-order resonance condition at the magnetic equator,
\begin{equation}\label{freqsw}
\frac{\partial \tilde{\omega}}{\partial t} = \frac{0.4 s_0 \omega}{s_1} \tilde{\Omega_{w}},
\end{equation}
where $s_0=\tilde{V}_{\perp 0}\chi/\xi $, $s_1=\gamma(1-\tilde{V}_R/\tilde{V}_g)^2$, $\tilde{\Omega}_{w} = eB_{w}/(m_0 \Omega_{e0})$ and $\tilde{\omega} = \omega/\Omega_{e0}$ is the normalized frequency. $B_w$ is the wave magnetic field, $ \chi^2=(1+\xi^2)^{-1}$ and $ \xi^2=\omega(\Omega_e-\omega)/\omega_{pe}^2$, $\tilde{V}_g$ is the group velocity normalized by the speed of light $c$. $\tilde{V}_{\perp 0}$ is the averaged perpendicular velocity of the source population. The first order cyclotron resonance condition provides the resonance velocity,
\begin{equation}\label{Vres}
\tilde{V}_R = \chi\zeta(\omega - \Omega_{e}/\gamma) = \frac{\tilde{\omega}^2 - \sqrt{\tilde{\omega}^4 + (\tilde{\omega}^2+ \tilde{V}_p^2)(1 - \tilde{\omega}^2 - \tilde{V}_{\perp 0}^2)}}{\tilde{\omega}^2+ \tilde{V}_p^2}\tilde{V}_p.
\end{equation}
$\tilde{V}_R$ is dependent upon $V_{\perp0}$, because we expressed the Lorentz-factor as $\gamma = [1 - (V_R^2 + V_{\perp0}^2)/c^2]^{-1/2}$. $\tilde{V}_p = V_p/c$ is the phase velocity.
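For reference, the auxiliary quantities $\xi$, $\chi$, and $\tilde{V}_p$ and the resonance velocity of Eq.~(\ref{Vres}) can be evaluated directly. The Python sketch below is an illustration only, written in normalized units (frequencies in units of $\Omega_{e0}$, velocities in units of $c$); it obtains $\tilde{V}_p=\xi\chi$ from the parallel cold-plasma whistler dispersion relation.
\begin{verbatim}
import numpy as np

def resonance_velocity(w, Omega_e, w_pe, V_perp0):
    """Normalized resonance velocity V_R/c for a parallel whistler.
    w, Omega_e, w_pe are normalized to Omega_e0; V_perp0 to c."""
    xi2 = w * (Omega_e - w) / w_pe**2
    chi = 1.0 / np.sqrt(1.0 + xi2)
    Vp = np.sqrt(xi2) * chi              # phase velocity from cold-plasma dispersion
    disc = w**4 + (w**2 + Vp**2) * (1.0 - w**2 - V_perp0**2)
    return (w**2 - np.sqrt(disc)) / (w**2 + Vp**2) * Vp
\end{verbatim}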
\citet{omura2011triggering} found that the frequency change of a rising-tone chorus is due to the nonlinear term $\mu_0c^2k J_B/B_w$ in the cold plasma dispersion relation. This gradual deviation in frequency can exist when the triggering wave amplitude is close to the optimum wave amplitude:
\begin{equation}\label{optampl}
\widetilde{\Omega}_{w0}=0.81 \pi^{-5/2} \frac{Q}{\tau} \frac{s_1 \widetilde{V}_g}{s_0 \widetilde{\omega} \widetilde{U}_{t\parallel}} \left(\frac{\widetilde{\omega}_{ph} \widetilde{V}_{\perp 0} \chi}{\gamma} \right)^2 \exp\left(- \frac{\gamma^2 \widetilde{V}_{R}^2}{2 \widetilde{U}_{t\parallel}^2}\right),
\end{equation}
where $Q$ represents the depth of the electron hole, with a typical value of 0.5. $\tau = T_N / T_{tr}$ is the ratio of the nonlinear transition time and the nonlinear trapping period, where $T_N$ represents the time required for the formation of the nonlinear current. The typical range $\tau = 0.25-1$ is concluded from theory \cite{omura2011triggering}, simulation \cite{hikishima2012} and observation \cite{kurita2012}. $\tilde{U}_{t\parallel}= U_{t\parallel}/c$ is the normalized parallel thermal momentum of the source population. \\
The threshold amplitude for the amplification of a chorus element is derived from the consideration that the temporal growth rate should be positive at the equator \cite{omura2009nonlinear}. Waves can only grow when the optimum amplitude is higher than the threshold amplitude and the triggering wave amplitude exceeds the threshold amplitude,
\begin{equation}\label{treampl}
\tilde \Omega_{th}=\frac{100\pi^3\gamma^3\xi}{\tilde\omega\tilde\omega^4_{ph}\tilde V_{\perp0}^5\chi^5} \left( \frac{\tilde a s_2 \widetilde{U}_{t\parallel}}{Q}\right)^2\exp\left(\frac{\gamma^2 \tilde V_R^2}{\widetilde{U}_{t\parallel}^2} \right),
\end{equation}
where $s_2=\frac{1}{2\xi \chi} \left\lbrace \frac{\gamma \omega}{\Omega_e} \left( \frac{V_{\perp 0}}{c} \right)^2 - \left[ 2+\Lambda \frac{\chi^2(\Omega_e-\gamma \omega)}{\Omega_e-\omega} \right] \frac{V_R V_p}{c^2} \right\rbrace $ is the coefficient related to the gradient of the magnetic field in the inhomogeneity ratio (Eq. (10) of \citet{omura2009nonlinear}), and $\tilde a = ac^2/\Omega_{e0}^2=4.5c^2/(LR_E\Omega_{e0})^2$ is the normalized coefficient of the parabolic variation of the dipole magnetic field near the equator. $\Lambda = \omega/\Omega_e$ for the inhomogeneous electron density model ($\Lambda$=1 for the constant electron density model).\\
The nonlinear wave growth rate is:
\begin{equation}
\Gamma_N = \frac{Q \omega_{ph}^2}{2} \left(\frac{\zeta}{\Omega_w \omega}\right)^{1/2} \frac{V_g}{U_{t||}} \left( \frac{V_{\perp 0} \chi}{c\pi\gamma} \right) \exp \left( -\frac{\gamma^2 V_R^2}{2 U_{t||}^2}\right).
\end{equation}
To estimate the energetic electron density, we replace the wave amplitude in Eq. (\ref{freqsw}) with the optimum wave amplitude (\ref{optampl}):
\begin{equation}\label{hotden}
\tilde{\omega}_{ph} = \omega_{pe} \left(\frac{N_h}{N_c}\right)^{1/2} = \sqrt{\frac{\partial\omega}{\partial t} \frac{\pi^{5/2} \tau}{0.324 Q} \frac{\tilde{U}_{t||}}{\tilde{V}_g}\exp\left( \frac{\gamma^2 \tilde{V}_{R}^2}{2 \tilde{U}_{t\parallel}^2}\right)} \frac{\gamma}{\tilde{V}_{\perp 0} \chi},
\end{equation}
giving an upper-bound of $N_h$.
In the case of known thermal velocities, the number density of the source population $N_h$ can thus be derived from $\partial\omega/\partial t$. The relativistic linear growth-rate theory provides the estimate of $\tilde{U}_{t\parallel}$ and of the average perpendicular velocity $\tilde{V}_{\perp 0} = \sqrt{\pi/2}\,V_{t\perp}/c$, where we assume a bi-Maxwellian distribution.
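The second phase then follows by evaluating Eq.~(\ref{hotden}). The sketch below is again an illustration rather than the operational inversion code: it works in normalized units (frequencies in units of $\Omega_{e0}$, velocities in units of $c$, time in units of $\Omega_{e0}^{-1}$), takes the group velocity from the cold-plasma dispersion relation, reuses the resonance-velocity helper defined above, and uses $Q=0.5$ and $\tau=0.5$ as default typical values.
\begin{verbatim}
import numpy as np

def hot_density_ratio(dw_dt, U_tpar, V_perp0, w, Omega_e, w_pe,
                      Q=0.5, tau=0.5):
    """N_h/N_c from the normalized frequency sweep rate dw_dt."""
    xi2 = w * (Omega_e - w) / w_pe**2
    xi, chi = np.sqrt(xi2), 1.0 / np.sqrt(1.0 + xi2)
    V_R = resonance_velocity(w, Omega_e, w_pe, V_perp0)
    gamma = 1.0 / np.sqrt(1.0 - V_R**2 - V_perp0**2)
    ck = w / (xi * chi)                      # normalized wavenumber (times c)
    V_g = 2.0 * ck / (2.0 * w + w_pe**2 * Omega_e / (Omega_e - w)**2)
    w_ph = np.sqrt(dw_dt * np.pi**2.5 * tau / (0.324 * Q)
                   * (U_tpar / V_g)
                   * np.exp(gamma**2 * V_R**2 / (2.0 * U_tpar**2))) \
           * gamma / (V_perp0 * chi)
    return (w_ph / w_pe)**2                  # upper estimate of N_h/N_c
\end{verbatim}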
\section{Discussion}\label{disc}
\subsection*{Case studies from EMFISIS data}
On 14 November 2012, the impact of a geomagnetic storm with a minimum Dst of $\sim -108$ nT was observable in the Van Allen Probes A measurements. Chorus emissions were measured by the Van Allen Probes A EMFISIS instrument from 10 to 16 UT, see Figure \ref{ANI_flux}a.
We have selected 16 full, strong chorus emissions from EMFISIS continuous burst mode wave data ($28.6\,\mathrm{\mu s}$ time resolution and $\sim 12\,\mathrm{kHz}$ maximum observable frequency \cite{Kletzing2013}) between 11 and 12 UT. The Van Allen Probes spacecraft A was close to the plasmapause (L = 5.42 - 5.87) in the dawn sector (MLT = 4.92 - 5.61) and crossed the magnetic equator (mlat = 0.755$^\circ$ to $-0.649^\circ$). \\
At that time, the gap at half the gyrofrequency had not formed clearly, and a relatively small number of emissions existed. Therefore we concluded that a) the satellite was in the generation region and b) the wave-particle interactions corresponding to this small number of emissions did not significantly affect the particle distribution of the source population.
In Figure \ref{fig1}a, three series of rising-tone emissions with large amplitudes of $\sim 0.1-0.5\,\mathrm{nT}$ are shown. The multicomponent wave measurement allows us to estimate the angle $\theta$ between the direction of propagation and the background magnetic field, as well as the ellipticity and planarity of these emissions, by the singular value decomposition (SVD) method \citep{doi:10.1029/2000RS002523}. The waves exhibit quasi-parallel propagation (Fig. \ref{fig1}b), high coherence (Fig. \ref{fig1}c) and right-hand polarization (Fig. \ref{fig1}d). We present our method through the analysis of the three events in Figure \ref{inv_res}.
\begin{figure}[h]
\centering
\includegraphics[width=30pc]{elso_kep_v2.png}
\caption{Van Allen Probes EMFISIS-A burst data recorded on 14 November 2012: 11:01:16.67 UT (first column), 11:14:22.67 UT (second column), 11:15:58.67 UT (third column). a) Spectrogram of the single-axis (BuBu) magnetic field. White dashed lines contour the assumed band of linear wave growth. b) Poynting vector angle $\theta$ with respect to the background geomagnetic field $B_0$, c) Planarity, and d) Ellipticity (shown where the magnetic PSD is greater than $\sim 10^{-7}\,\mathrm{nT^2/Hz}$).}
\label{fig1}
\end{figure}
To estimate the parallel and the minimum perpendicular thermal velocities, we first identify the band of whistler-mode waves corresponding to the linear wave growth. The lower and upper limits of these bands are 790-1200 Hz, 790-1265 Hz and 820-1130 Hz, respectively, indicated by white dashed lines in Figure \ref{fig1}a. From the EMFISIS measurement \cite{kurth2015} we obtain $\omega_{pe}/\Omega_{e} \sim$ 5.17, 5.46 and 5.41, respectively. Assuming an arbitrary $N_h$, we search for the $U_{t||}$ value that produces the maximum linear growth rate at the mean frequency of the linear wave growth band, $\omega_{rm}$. Knowing $U_{t||}$, a minimum estimate for $V_{\perp0}$ can be calculated from (\ref{AC}) and
\begin{equation}
A_c =\frac{V_{t\perp}^2}{V_{t||}^2}-1 =\frac{(V_{\perp0}/\sqrt{\frac{\pi}{2}})^2}{(U_{t||}/\gamma_R)^2} - 1
\end{equation}
In Figure \ref{inv_res}, we present the results for the three emissions selected from Figure \ref{fig1}, plotting frequency sweep rates, amplitudes and growth rates. The top row of plots presents the spectrogram, the instantaneous frequency (blue lines) and the fitted curve (dashed white lines) of the rising-tone emissions.
The relation between the measured (yellow solid lines) and theoretical amplitudes is shown in the middle panels of Figure \ref{inv_res}: the optimum wave amplitudes (blue lines) are of the same order as the measured amplitudes and are higher than the threshold amplitudes (red solid and dashed lines). Moreover, the observed amplitudes start to grow when they exceed the threshold amplitude. Although we used $\tau$ = 0.25 and 0.5 for the chorus-inversion, the optimum amplitude is the same, because a constant value of $\partial\omega/\partial t$ determines the product of $\tau$ and $N_h/N_c$. The threshold amplitude is a function of $N_h/N_c$, which changes with $\tau$. Therefore using $\tau = $ 0.5 and 0.25 yields a lower (red solid line) and an upper (red dashed line) estimate of the threshold amplitude. The optimum amplitude of the 2012-11-14UT11:14:24.570 event slightly differs from the measured one at higher frequencies: this can be due to an overlapping, separate upper-band chorus emission, or to convective growth.
In the bottom row, yellow dashed lines represent $\omega_{rm}$; they cross the linear growth rate curves (dashed red lines) at their maxima. The nonlinear growth rate (blue solid ($\tau=0.5$) and dashed ($\tau=0.25$) lines) is higher than the linear growth rate, as proposed by \citet{summers2012a}. The frequency range of the linear instability is confined to $\sim 500-1500$ Hz. (We used $\tau$ = 0.25-0.5 instead of 0.25-1 in these plots, because the nonlinear growth rates corresponding to $\tau = 1$ are almost two orders of magnitude higher than the linear growth rates, which makes it difficult to show the changes of the latter in the same plot.)\\
The method of chorus-inversion is sensitive to the value of $\omega_{rm}$. To obtain the standard deviation of the thermal velocities, the frequencies of the lower and upper edges of the band are used. In Figure \ref{h_inv_res}, $V_{t||}$ (middle panel) and $V_{t\perp}$ (bottom panel) of the 16 chorus elements are marked with the squares in the middle of the red bars, and the vertical extent of the bars represents the standard deviation of $V_{t||}$ and $V_{t\perp}$. The magnitude of the standard deviation depends on the width of the linear growth rate band.
In the second step, we obtain the instantaneous frequencies of the chorus emissions at the zero crossings of the wave magnetic field's perpendicular component with respect to the background magnetic field. Assuming that the frequency of the main part of the chorus emission is a linear function of time, we can approximate the frequency sweep rate $\partial\omega/\partial t$ with a constant value. \\
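A minimal Python illustration of this step is given below: the instantaneous frequency is estimated from interpolated zero-crossing times of one perpendicular field component, and a straight-line fit yields the constant $\partial\omega/\partial t$. The variable names and the simple interpolation are our own choices.
\begin{verbatim}
import numpy as np

def sweep_rate(t, b_perp):
    """Estimate d(omega)/dt [rad s^-2] of a rising-tone element from the
    zero crossings of one perpendicular wave magnetic-field component."""
    s = np.sign(b_perp)
    i = np.where(s[:-1] * s[1:] < 0)[0]          # samples just before a sign change
    t_zc = t[i] - b_perp[i] * (t[i+1] - t[i]) / (b_perp[i+1] - b_perp[i])
    f_inst = 1.0 / (2.0 * np.diff(t_zc))         # half a wave period between crossings
    t_mid = 0.5 * (t_zc[1:] + t_zc[:-1])
    slope, _ = np.polyfit(t_mid, f_inst, 1)      # linear fit of f(t)
    return 2.0 * np.pi * slope
\end{verbatim}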
Substituting the derived values of $U_{t||}$, $V_{\perp0}$, and $\partial\omega/\partial t$ into (\ref{hotden}), $N_h/N_c$ can be calculated directly. Note that the replacement of the wave amplitude with the optimum amplitude leads to an upper estimate of $N_h/N_c$. As we already mentioned, the ratio of the nonlinear transition time and the nonlinear trapping period $\tau$ can be between 0.25 and 1; this provides an interval for $N_h/N_c$. In the top panel of Figure \ref{h_inv_res}, $N_h/N_c$, corresponding to $\tau = $ 0.25-1, is shown with red error bars and is typically between 0.002 and 0.012. The red square in the middle of the error bars corresponds to $\tau=0.73$, which is the best fit to the HOPE data (blue error bars).
\begin{figure}[h]
\centering
\includegraphics[width=35pc]{masodik_kep_525.png}
\caption{Chorus emissions from 2012-11-14UT11:01:17.986 (left column), 2012-11-14UT11:14:24.570 (middle column) and 2012-11-14UT11:15:59.202 (right column). In the upper row the spectrogram, instantaneous frequency (blue lines) and the linear approximation (dashed white lines) of the emissions are shown. The optimum amplitudes (middle panels, blue lines) are of the same order as the measured amplitude (yellow lines), and are not affected by the change of $\tau$. Threshold amplitudes of $\tau $= 0.5 and 0.25 are plotted by solid and dotted red lines, respectively. The threshold amplitude does not depend on $\tau$ directly. However, changes of $\tau$ modify $N_h/N_c$, which affects the threshold amplitude. Bottom panels: linear growth rate (dashed red line), $w_{rm}$ (yellow dashed lines). Nonlinear wave growth rates are plotted by blue solid ($\tau=0.5$) and dotted ($\tau=0.25$) lines. }
\label{inv_res}
\end{figure}
\subsection{Comparison of results of the inversion and in-situ measurements (HOPE data)}
The Helium, Oxygen, Proton, and Electron (HOPE) Mass Spectrometer \cite{Funsten2013} measures the fluxes of electrons and dominant ion species in the energy range of 1 eV - 50 keV, in 36 logarithmically spaced steps (before September 2013), later modified to 72 log-spaced steps, at an energy resolution $\Delta E_{FWHM} /E \approx 15\%$. The $4\pi$ sr field of view is attained by 5 polar pixels (consisting of individual detectors) and the spin of the spacecraft; however, HOPE data sampling is not spin synchronized. As a result, electron flux data are available as a function of energy and pitch angle.
In this section, we compare the output of the inversion $[N_h, U_{t||}, V_{\perp0}]$, with those derived from HOPE measurements, based on the following equations (\citet{wu2013lininst} and \citet{goldstein2014VAPplas}):
\begin{linenomath*}
\begin{equation}
N_h^* = 2\pi \int_0^{\pi} \int_{v_{min}}^{v_{max}} f(v,\alpha) v^2 dv \sin \alpha d\alpha
= 2\pi \sum_j \sum_i J_{ij} \left(\frac{2E_i}{m_e}\right)^{-1/2} \sin \alpha_j dE_i d\alpha_j,
\end{equation}
\end{linenomath*}
\begin{align}
V_{t||}^* &= \frac{2\pi}{3N_h} \int_0^{\pi} \int_{v_{min}}^{v_{max}} v^2(\cos\alpha)^2 f(v,\alpha) v^2 dv \sin \alpha d\alpha \nonumber\\
&= \sqrt{\frac{2\pi^2}{m_e}} \frac{1}{3N_h} \sum_j \sum_i J_{ij} (E_i)^{1/2} \sin \alpha_j \cos^2 \alpha_j dE_i d\alpha_j,
\end{align}
\begin{align}
V_{t\perp}^* &= \frac{\pi}{3N_h} \int_0^{\pi} \int_{v_{min}}^{v_{max}} v^2(\sin\alpha)^2 f(v,\alpha) v^2 dv \sin \alpha d\alpha \nonumber \\ &= \sqrt{\frac{\pi^2}{2m_e}} \frac{1}{3N_h} \sum_j \sum_i J_{ij} (E_i)^{1/2} \sin \alpha_j \sin^2 \alpha_j dE_i d\alpha_j,
\end{align}
where $N_h^*, V_{t||}^*, V_{t\perp}^*$ are the hot electron density and the parallel and perpendicular velocities from the HOPE measurements. $f(v,\alpha)$ is the hot electron distribution function in velocity $v$ and pitch angle $\alpha$ space. These theoretical expressions are rewritten in terms of measurable quantities, namely the flux $J$, the mean energy $E_i$, and the energy width $dE_i$ of each energy channel. Indices $i$ and $j$ denote the energy channel and the pitch angle bin, respectively.\\
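For illustration, the discretized moment sums above can be evaluated as follows (Python). The array layout and unit conventions are our assumptions, and the flux, energies and energy widths must be supplied in mutually consistent units.
\begin{verbatim}
import numpy as np

def hope_moments(J, E, dE, alpha, dalpha, m_e=9.109e-31):
    """Discretized moment sums: J[i, j] is the differential number flux in
    energy channel i and pitch-angle bin j; E, dE in joules; alpha, dalpha
    in radians.  Returns (N_h*, V_tpar*, V_tperp*)."""
    v = np.sqrt(2.0 * E / m_e)                        # channel speeds
    w = np.outer(dE, np.sin(alpha) * dalpha)          # dE_i sin(alpha_j) dalpha_j
    Nh = 2.0 * np.pi * np.sum(J / v[:, None] * w)
    Vpar = np.sqrt(2.0 * np.pi**2 / m_e) / (3.0 * Nh) * \
        np.sum(J * np.sqrt(E)[:, None] * np.cos(alpha)[None, :]**2 * w)
    Vperp = np.sqrt(np.pi**2 / (2.0 * m_e)) / (3.0 * Nh) * \
        np.sum(J * np.sqrt(E)[:, None] * np.sin(alpha)[None, :]**2 * w)
    return Nh, Vpar, Vperp
\end{verbatim}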
To identify the lowest and highest energy channels of the instrument corresponding to the relativistic resonance energies of a given chorus emission, i.e., the limits $[v_{min},v_{max}]$, we employ the expression for the Lorentz factor from \citet{doi:10.1063/1.872932}:
\begin{equation}\label{lorentz}
\gamma_R = \frac{ -1 + (ck/\omega_r)\left[\{(ck/\omega_r)^2 -1\}(1+u_{\perp}^2/c^2)(\omega_r/\Omega_e)^2 +1\right]^{1/2} }{\{(ck/\omega_r)^2 -1\}(\omega_r/\Omega_e)},
\end{equation}
where $k$ is the wavenumber and $u_{\perp}$ is the perpendicular momentum.
Here, we substitute the lowest frequency and the half-gyrofrequency value of each chorus emission for $\omega_{r}$, and replace $u_{\perp}$ with the average perpendicular momentum derived from the critical anisotropy and the parallel thermal momentum. (Note that the use of Eq.~\ref{Vres} gives an almost identical result.) The energy range of the comparison is based on the lower band of the selected chorus emissions, because their upper band overlaps with the upper bands of other chorus emissions that may have already modulated the lower-energy part of the hot electron distribution.
To determine the standard deviations of $N_h^*, V_{t||}^*$, and $V_{t\perp}^*$, we consider the energy channels neighboring the lowest and highest channels, altogether six channels, and use all nine combinations to pick the minimum, maximum, and mean values.
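The mapping from wave frequency to resonance energy used in this comparison can be sketched as follows (Python, illustration only); the parallel cold-plasma dispersion relation supplies $ck/\omega_r$, and the numerical values in the example are merely indicative.
\begin{verbatim}
import numpy as np

def resonance_energy_keV(omega, Omega_e, omega_pe, u_perp_over_c=0.0):
    """Kinetic energy of cyclotron-resonant electrons for a parallel
    whistler of angular frequency omega (all frequencies in rad/s)."""
    n2 = 1.0 + omega_pe**2 / (omega * (Omega_e - omega))   # (ck/omega)^2
    a = n2 - 1.0
    w = omega / Omega_e
    gamma_R = (-1.0 + np.sqrt(n2) *
               np.sqrt(a * (1.0 + u_perp_over_c**2) * w**2 + 1.0)) / (a * w)
    return (gamma_R - 1.0) * 510.999          # electron rest energy in keV

# the lowest frequency of an element gives the upper energy limit and
# 0.5 f_ce the lower one; e.g. for an assumed f_ce = 4.3 kHz, f_pe/f_ce = 5.4:
fce = 2 * np.pi * 4300.0
print(resonance_energy_keV(0.2 * fce, fce, 5.4 * fce),
      resonance_energy_keV(0.5 * fce, fce, 5.4 * fce))
\end{verbatim}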
\begin{figure}[h]
\centering
\includegraphics[width=35pc]{harmadik_kepb73.png}
\caption{Results of the chorus-inversion of the selected 16 chorus emissions (referred by their date). $N_h/N_c$ obtained from HOPE measurements are shown with blue error bars. The result of the inversion for $N_h/N_c$ is a range (red errorbars), the minimum and maximum value of this interval corresponds to $\tau$ = 0.25 and 1, respectively. Top panel also shows the best fit of $\tau=$ 0.73 of the inversion to HOPE measurements with red squares. Middle and bottom panel: parallel and perpendicular thermal velocities from the inversion (red) and HOPE (blue) with error bars.}
\label{h_inv_res}
\end{figure}\\
The results for ${N_h/N_c}^*, V_{t||}^*$, and $V_{t\perp}^*$ derived from the HOPE measurements are plotted with light blue squares in Fig. \ref{h_inv_res}; the error bars show the standard deviation. The ${N_h/N_c}^*$ values derived from the HOPE measurements (blue) are in the range of $N_h/N_c$ (red error bars) corresponding to $\tau$ = 0.25 and 1. The normalized root-mean-square deviations between the HOPE (${N_h/N_c}^*, V_{t||}^*$, and $V_{t\perp}^*$) and the theoretical ($N_h/N_c, V_{t||}$, and $V_{t\perp}$) values are ${N_h/N_c}_{NRMS}\sim13\%, {V_{t||}}_{NRMS} \sim6\%$, and ${V_{t\perp}}_{NRMS}\sim10\%$, respectively.
\begin{figure}
\subfloat{\includegraphics[width = \textwidth]{negyedik_kep.png}}\\
\subfloat{\includegraphics[width = \textwidth]{otodik_kep.png}}
\caption{a. Spectrogram of the single-axis (BuBu) magnetic field measured by EMFISIS onboard Van Allen Probes spacecraft A between 10-13 UT on 14 November 2012. b. Pitch-angle anisotropy map derived from HOPE-A and MagEIS-A measurements, with the resonance energy ranges of the analyzed cases (white lines). c. Total electron fluxes measured by HOPE-A and MagEIS-A. d. AE index.}
\label{ANI_flux}
\end{figure}
To affirm our results, we further analyzed the anisotropy (Fig.\ref{ANI_flux}b), the omni-directional flux (Fig.\ref{ANI_flux}c), the AE index (Fig.\ref{ANI_flux}d) and the wave magnetic data (Fig.\ref{ANI_flux}a) on the longer timescale of 10-13 UT. During this time interval, the spacecraft was flying away from the Earth to higher L shells (4-6) and was moving from the nightside to the morning sector (MLT = 4-6). The pitch-angle anisotropy map in the second panel is calculated by the method of \citet{JGRA:JGRA14601} using both HOPE-A (a few eV - 50 keV) and MagEIS-A (15-224 keV) particle flux measurements. The anisotropy map shows two strong anisotropic bands: one starts at 100 keV and ends at $\sim 35$ keV at 10:55 UT, and the other starts below 10 keV and runs parallel with the previous one. In our interpretation, these anisotropic bands are the result of an injection from the plasma sheet, with the plasma accelerated during convective transport from the nightside to the dayside. The more isotropic region between the two bands is presumably due to wave-particle interaction between electrons in this energy range and chorus emissions (see top panel). This explanation agrees with the resonance energy ranges of the analyzed chorus emissions: the upper white line corresponds to the starting frequency of these emissions, the lower one to half the gyrofrequency.
\section{Summary and conclusion}\label{sum_con}
A new method is presented to derive $N_h, V_{t||}$, and $V_{t\perp}$ from the EMFISIS wave measurement \emph{only}. To extract these parameters from the wave data, we assumed that a) the frequency sweep rate of the chorus elements is proportional to the optimum wave amplitude, b) the optimum wave amplitude is proportional to the density of energetic electrons, and c) the nonlinear wave growth is preceded by linear wave growth, which is always present in the dynamic spectra as a band of whistler-mode waves close to the starting frequency of the chorus emissions.
Sixteen strong chorus emissions close to the generation region (magnetic equator) were analyzed. The output data of the chorus-inversion, $N_h, V_{t||}$, and $V_{t\perp}$, were compared with the same quantities derived from the HOPE measurements in the energy range of the relativistic resonance of the selected chorus emissions, showing a good agreement (${N_h/N_c}_{NRMS}\sim13\%, {V_{t||}}_{NRMS} \sim6\%$, and ${V_{t\perp}}_{NRMS}\sim10\%$). The measured amplitudes are consistent with the optimum and threshold amplitudes of the nonlinear wave growth theory, and the nonlinear growth rate has positive values in the entire frequency range of the chorus emissions, contrary to the prediction of the linear growth rate theory.
In a next step, the method presented here will be extended to chorus emissions recorded on the ground, replacing the in-situ wave measurements. This extension requires a suitable chorus propagation model. In this way, the density of the energetic electrons can be estimated from ground data, forming a new complement to, or a stand-alone source of, these important data (energetic electron density and parallel and perpendicular thermal velocities: $N_h, V_{t||}$, and $V_{t\perp}$).
\acknowledgments
The research leading to these results received funding from the Hungarian National Research, Development and Innovation Office under grant agreements NN116408 and NN116446.
This work was also supported by JSPS KAKENHI grants 15H05815 and 17H06140.
This research was supported by the Los Alamos Space Weather Summer School, funded by the Center for Space and Earth Sciences at Los Alamos National Laboratory.
Processing and analysis of the HOPE data was supported by Energetic Particle, Composition, and Thermal Plasma (RBSP-ECT) investigation funded under NASA's Prime contract no. NAS5-01072. All RBSP-ECT data are publicly available at the Web site http://www.RBSP-ect.lanl.gov/
All RBSP-EMFISIS data used in this paper are available from http://emfisis.physics.uiowa.edu/. The research at University of Iowa was supported under NASA prime contract NAS5-01072.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 5,173 |
{"url":"http:\/\/wiki.math.toronto.edu\/DispersiveWiki\/index.php?title=Duhamel's_formula&direction=prev&oldid=5237","text":"# Duhamel's formula\n\nDuhamel's formula expresses the solution to a general inhomogeneous linear equation as a superposition of free solutions arising from both the initial data and the forcing term. For instance, the solution to the inhomogeneous initial value problem\n\n$u_t - Lu = F; \\quad u(0) = u_0$\n\nfor some spatial operator L, is given by\n\n$u(t) = e^{tL} u_0 + \\int_0^t e^{(t-t')L} F(t')\\ dt',$\n\nprovided that L has enough of a functional calculus, and u, u0, F have enough regularity, to justify all computations. (If L is constant coefficient, then the Fourier transform can usually be used to justify everything so long as one works in the category of tempered distributions.) Note that the case L=0 is simply the fundamental theorem of calculus, indeed one can view Duhamel's formula as the fundamental theorem of calculus twisted (conjugated) by the free propagator etL.\n\nFor equations which are second order in time, the formula is slightly more complicated. For instance, the solution to the inhomogeneous initial value problem\n\n$u_{tt} - Lu = F; \\quad u(0) = u_0; \\quad u_t(0) = u_1$\n\nis given (formally, at least) by\n\n$u(t) = \\cos(t\\sqrt{L}) u_0 + \\frac{\\sin(t\\sqrt{L})}{\\sqrt{L}} u_1 + \\int_0^t \\frac{\\sin((t-t')\\sqrt{L})}{\\sqrt{L}} F(t')\\ dt'.$\n\nAnyhow, we note that in this case the solution can be cast in the standard form. So, let us introduce the vectors\n\n${\\underline y}=\\left[\\begin{matrix} u \\\\ w \\end{matrix}\\right]$\n\nand\n\n${\\underline \\Phi}=\\left[\\begin{matrix} 0 \\\\ F(t) \\end{matrix}\\right]$\n\nwith the matrix\n\n${\\hat M}=\\begin{bmatrix} 0 & 1 \\\\ L & 0 \\end{bmatrix}$\n.\n\nWe can write the second order equation as\n\n${\\underline y}_{tt}-{\\hat M}{\\underline y}={\\underline \\Phi(t)}$\n\nand write down the solution as expected in the original Duhamel's formula, that is\n\n${\\underline y}(t) = e^{t\\hat M} {\\underline y}_0 + \\int_0^t e^{(t-t')\\hat M} {\\underline \\Phi}(t')\\ dt',$\n.\n\nUseful applications of this approach can be found for systems having a Hamiltonian flow.","date":"2013-05-23 21:03:09","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 9, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9562811255455017, \"perplexity\": 265.2288983122545}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2013-20\/segments\/1368703788336\/warc\/CC-MAIN-20130516112948-00036-ip-10-60-113-184.ec2.internal.warc.gz\"}"} | null | null |
SAVE on our great combo offers!
The Vermatik RapidZAP 16W Electric Fly Zapper Is ideal for controlling flying insects in the home, office or commercial premises.
Our super polar PVC strip curtain is ideal for use in cold rooms, chillers and walk in fridges, where it is important to control temperature and minimise the loss of conditioned air. They can also withstand temperatures from -55c - +25c. You will receive the correct number of strips, along with our unique stainless steel hanging system for added durability and ease of installation.
You can keep control of flying insects all year round with this mains powered Vermatik Insect Killer. It is ideal for use in the home, office and other commercial outlets. The ultraviolet lamps attract the flying insects into the electrically charged grid and fall into the removable easy to clean tray. The Zapper runs on low energy bulbs so great for flying insect control at very low operating costs. Crafted using high-quality aluminium alloy, the Vermatik Electric Bug Zapper offers maximum protection from all kinds of flying insects, including mosquitos, wasps, flies, and more!
The Vermatik ProZAP Professional 30W Electric Fly Zapper Is ideal for controlling flying insects in the home, office or commercial premises.
A high quality PVC strip curtain, manufactured using SteriTouch® antimicrobial technology. Our antimicrobial PVC strip curtain has been developed for use in any environment where hygiene control is important. The PVC strip contains additives that will actively kill bacteria, such as E.Coli, MRSA and Salmonella. It also prevents the growth of biofilm, mould and fungi. It has also been made using the same PVC material as our super polar grade, so that it can withstand temperatures from -55c - +25c. You will receive the correct number of strips, along with our unique stainless steel hanging system for added durability and ease of installation.
The Vermatik RapidZAP 20W Electric Fly Zapper Is ideal for controlling flying insects in the home, office or commercial premises.
The Vermatik ProZAP Professional 40W Electric Fly Zapper Is ideal for controlling flying insects in the home, office or commercial premises.
© 2019 Strip Curtains Direct. | {
"redpajama_set_name": "RedPajamaC4"
} | 8,343 |
{"url":"https:\/\/wordassociations.net\/en\/words-associated-with\/Converse","text":"# Associations to the word \u00abConverse\u00bb\n\n## Wiktionary\n\nCONVERSE, verb. (formal) (intransitive) To talk; to engage in conversation.\nCONVERSE, verb. To keep company; to hold intimate intercourse; to commune; followed by with.\nCONVERSE, verb. (obsolete) To have knowledge of (a thing), from long intercourse or study.\nCONVERSE, noun. \u200b(now literary) Familiar discourse; free interchange of thoughts or views; conversation; chat.\nCONVERSE, adjective. Opposite; reversed in order or relation; reciprocal.\nCONVERSE, noun. The opposite or reverse.\nCONVERSE, noun. (logic) Of a proposition or theorem of the form: given that \"If A is true, then B is true\", then \"If B is true, then A is true.\" equivalently: given that \"All Xs are Ys\", then \"All Ys are Xs\".\n\n## Dictionary definition\n\nCONVERSE, noun. A proposition obtained by conversion.\nCONVERSE, verb. Carry on a conversation.\nCONVERSE, adjective. Of words so related that one reverses the relation denoted by the other; \"parental' and filial' are converse terms\".\nCONVERSE, adjective. Turned about in order or relation; \"transposed letters\".\n\n## Wise words\n\nWe should have a great fewer disputes in the world if words were taken for what they are, the signs of our ideas only, and not for things themselves.\nJohn Locke","date":"2020-08-15 13:04:20","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5672707557678223, \"perplexity\": 14268.799046900986}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-34\/segments\/1596439740848.39\/warc\/CC-MAIN-20200815124541-20200815154541-00115.warc.gz\"}"} | null | null |
\section{Introduction}\label{Introduction}
Light plays an essential role in quantum communication and is
indispensable in most practical applications, for example quantum
cryptography. Photons are attractive carriers of quantum
information because the interactions of light with the
surroundings are normally weak, but for the same reason it is
generally difficult to prepare, manipulate, and measure quantum
states of light in a nondestructive way. Repeated interactions
provide a method to increase the effective coupling strength
between light and matter, and the backreflection of light in a
cavity thus constitutes an interesting tool, in particular,
because experiments are currently moving into the strong coupling
regime
\cite{strongcoupling1,strongcoupling2,strongcoupling3,strongcoupling4},
where coherent dynamics takes place on a faster time scale than
dissipative dynamics.
In this paper we propose a versatile setup consisting of an array
of cavities and passive optical elements (beam splitters and phase
shifters), which allows for quantum state engineering, quantum
state purification, and non-destructive number resolving photon
detection. The setup builds on two basic ingredients: The Hong-Ou-Mandel
interference effect \cite{HongOuMandel} generalized to input
pulses containing an arbitrary number of photons and the
possibility of projection onto the subspace of even or the
subspace of odd photon-number states by use of cavity quantum
electrodynamics in the strong coupling regime.
Regarding quantum state engineering, the basic setup provides a
possibility to conditionally generate photon-number
correlated states. More specifically, the setup allows us to project an arbitrary photonic two-mode input state
onto the subspace spanned by the state vectors
$|n\rangle|n\rangle$ with $n=0,1,2,\ldots$. We denote this
subspace by $S$. The scheme is probabilistic as
it is conditioned on a specific measurement outcome. The success
probability equals the norm of the projection of the input state
onto $S$ and is thus unity if the input state already lies in $S$.
In other words, the setup may be viewed as a filter
\cite{EntanglementFilter}, which removes all undesired components
of the quantum state but leaves the desired components unchanged.
We may, for example, use two independent coherent states as
input and obtain a photon-number correlated state as output.
Photon-number correlated states, for example
Einstein-Podolsky-Rosen (EPR) entangled states \cite{EPR}, are an
important resource for quantum teleportation \cite{Teleportation1,
Teleportation2, Teleportation3, Teleportation4, Teleportation5},
entanglement swapping \cite{Swapping1, Swapping2, Swapping3},
quantum key distribution \cite{QKD1, QKD2}, and Bell tests
\cite{Bell1, Bell2}. In practice, however, the applicability of
these states is hampered by noise effects such as photon losses.
Real-world applications require therefore entanglement
purification. The proposed setup is very attractive for
detection of losses and can in particular be used to purify
photon-number entangled states on site. If a photon-number
correlated state, for example an EPR state, is used as input, the
desired state passes the setup with a certificate, while states
which suffered from photon losses are detected and can be
rejected.
Photon losses are an especially serious problem in quantum
communication over long distances. It is not only a very common
source of decoherence which is hard to avoid, but also typically
hard to overcome. The on-site purification protocol mentioned
above can easily be adopted to a communication scenario such that
it allows for the purification of a photon-number correlated state
after transmission to two distant parties.
Purification of two mode entangled states has been shown
experimentally for qubits \cite{Purification1,Purification2} and
in the continuous variable (CV) regime \cite{Purification3,
Purification4}. (CV-entanglement purification is especially
challenging \cite{GaussianImp1,GaussianImp2,GaussianImp3}. Nevertheless, several proposals have been made to accomplish this task \cite{PurificationProp1,PurificationProp2,PurificationProp3, PurificationProp4,PurificationProp5,PurificationProp6}, and very recently Takahashi {\it et al.}\ succeeded in an experimental demonstration \cite{jonas}.) A special advantage of our scheme lies in the fact that it does not only allow for detection of arbitrary photon losses, but is
also applicable to many modes such that entanglement can be
distributed and purified in a network.
With a small modification, the basic setup can be used for
number resolved photon detection. The ability to detect photons in
a number resolved fashion is highly desirable in the fields of
quantum computing and quantum communication. For example, linear optics quantum computation relies crucially on
photon number resolving detectors \cite{KLM,Kok07,Obrian07}.
Moreover, the possibility to distinguish different photon-number
states allows for conditional state preparation of nonclassical
quantum states \cite{Prep1,Prep2,Prep3}, and plays a role in Bell
experiments \cite{Zukowski93} and the security in quantum
cryptographic schemes \cite{Crypto1,Crypto2}. Other applications
include interferometry \cite{Interferometry} and the
characterization of quantum light sources
\cite{LightSources1,LightSources2}.
Existing technologies for photon counting
\cite{APD,Silberhorn04,Banzek03,Cryo1,Cryo2,Cryo3,Others1,Others2,Others3,QuantumDots1,QuantumDots2,QuantumJumps,
QND1, QND2} such as avalanche photodiodes, cryogenic devices, and
quantum dots typically have scalability problems and cannot
reliably distinguish high photon numbers, destroy the quantum
state of light in the detection process, or do not work for
optical photons. Here, we present a non-destructive number
resolving photon detection scheme in the optical regime. This
quantum-non-demolition measurement of the photon number allows for
subsequent use of the measured quantum state of light. An
advantage of the counting device put forward in this work compared
to other theoretical proposals for QND measurements of photon
numbers
\cite{QNDprop1,QNDprop2,QNDprop3,QNDprop4,QNDprop5,QNDprop6} is
the ability to detect arbitrarily high photon numbers with
arbitrary resolution. The scheme is based on successively testing all possible prime factors and powers of primes, and the resources needed therefore scale moderately with the width and mean of the photon number distribution. In particular, a very precise photon number measurement can be made even for very high photon numbers by testing only a few factors if the approximate photon number is known.
The paper is structured as follows. We start with a brief
overview of the main results in Sec.~\ref{Overview}. In
Sec.~\ref{Filter}, we explain how the conditional projection onto
$S$ can be achieved and discuss some properties of the proposed
setup in the ideal limit, where the atom-cavity coupling is infinitely strong and the input pulses are infinitely long. In Sec.~\ref{Detector}, we show that a modified version of the setup can act as a non-destructive photon number resolving detector, and in Sec.~\ref{Purification}, we investigate the
possibility to use the setup to detect, and thereby filter out, losses. In Sec.~\ref{Nonideal}, we consider the significance of finite input pulse length and finite coupling strength, and we obtain a simple analytical expression for the optimal choice of input mode function for coherent state input fields. Section~\ref{Conclusion} concludes the paper.
\section{Overview and main results}\label{Overview}
The most important ingredient of the proposed setup is the possibility to use
the internal state of a single atom to control whether the phase
of a light field is changed by $\pi$ or not \cite{interaction}. The basic mechanism, which is explained in Fig.~\ref{cavity}, has
several possible applications, including preparation of
superpositions of coherent states \cite{cat}, continuous two-qubit
parity measurements in a cavity quantum electrodynamics network
\cite{parity}, and low energy switches \cite{switch}. Concerning the
experimental realization, basic ingredients of the scheme such as
trapping of a single atom in a strongly coupled cavity and preparation of
the initial atomic state have been demonstrated experimentally in \cite{reichelreadout}, where the decrease in cavity field intensity for an atom in the state $|{\uparrow}\rangle$ compared to the case of an atom in the state $|{\downarrow}\rangle$ is used to subsequently measure the state of the atom. State preparation and readout for a single atom in a cavity have also been demonstrated in \cite{rempereadout}. Another promising candidate for an experimental realization is circuit quantum electrodynamics. See, for instance, \cite{circuitQED} for a review.
\begin{figure}
\includegraphics[width=\columnwidth]{figure1}
\caption{\label{cavity}(Color online) A single atom with level
structure as shown in (a) is placed in a cavity in the strong
coupling regime. The light field, which is on resonance with the
cavity, couples the ground state level $|{\uparrow}\rangle$
resonantly to the excited state $|e\rangle$, and the state
$|e\rangle$ decays to the state $|{\uparrow}\rangle$ through
spontaneous emission at a rate $\Gamma$. (b) If the atom is in the
state $|{\downarrow}\rangle$, the incident field is not affected
by the presence of the atom, and for a sufficiently slowly varying
input pulse, the interaction with the resonant cavity changes the
phase of the light field by $\pi$. (c) If the atom is initially in
the state $|{\uparrow}\rangle$, the possibility of spontaneous
emission prevents the light field from building up inside the
cavity (provided the photon flux of the input beam is not too
high), and the incoming field is reflected from the input mirror
without acquiring a phase shift. This transformation is insensitive to the precise values of $g$, $\Gamma$, and the cavity decay rate as long as the system is in the strong coupling and weak driving regime.}
\end{figure}
The generation of quantum superposition states can be achieved as
follows. The atom is initially prepared in the state
$(|{\uparrow}\rangle+|{\downarrow}\rangle)/\sqrt{2}$, and the
input field is chosen to be a coherent state $|\alpha\rangle$.
After the interaction, the combined state of the atom and the
light field is proportional to
$|\alpha\rangle|{\uparrow}\rangle+|-\alpha\rangle|{\downarrow}\rangle\propto
(|\alpha\rangle+|-\alpha\rangle)(|{\uparrow}\rangle+|{\downarrow}\rangle)
+(|\alpha\rangle-|-\alpha\rangle)(|{\uparrow}\rangle-|{\downarrow}\rangle)$,
and a measurement of the atomic state in the basis
$\ket{\pm}=(|{\uparrow}\rangle\pm|{\downarrow}\rangle)/\sqrt{2}$
projects the state of the light field onto the even
$|\alpha\rangle+|-\alpha\rangle$ or the odd
$|\alpha\rangle-|-\alpha\rangle$ superposition state. More
generally, the input state $\sum_nc_n|n\rangle$, where $|n\rangle$
is an $n$-photon Fock state, is transformed into the output state
$\sum_n(1/2\pm(-1)^n/2)c_n|n\rangle$, i.e., the input state is
conditionally projected onto either the subspace spanned by all
even photon-number states or the subspace spanned by all odd
photon-number states without destroying the state.
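In the Fock basis this conditional projection is straightforward to simulate. The short Python sketch below is our own illustration (not part of the proposal itself): it applies the even or odd projection to a coherent state and returns the normalized conditional state together with the probability of the corresponding measurement outcome.
\begin{verbatim}
import numpy as np

def project_parity(c, parity):
    """Project Fock-basis amplitudes c[n] onto even (parity=+1) or odd
    (parity=-1) photon numbers, i.e. the state conditioned on finding the
    atom in |+> or |->.  Returns the normalized state and the outcome
    probability."""
    n = np.arange(len(c))
    out = 0.5 * (1.0 + parity * (-1.0)**n) * np.asarray(c, dtype=complex)
    p = np.vdot(out, out).real
    return out / np.sqrt(p), p

# example: coherent state with alpha = 2 -> even 'cat' component
alpha, dim = 2.0, 40
c = np.zeros(dim, dtype=complex); c[0] = 1.0
for k in range(1, dim):
    c[k] = c[k-1] * alpha / np.sqrt(k)       # builds alpha^n / sqrt(n!)
c *= np.exp(-abs(alpha)**2 / 2)
even_state, p_even = project_parity(c, +1)   # p_even is close to 1/2 for |alpha| >> 1
\end{verbatim}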
With this tool at hand, we can project an arbitrary two-mode input
state onto the subspace $S=\textrm{span}(\ket{n}\ket{n})$,
$n=0,1,2,\ldots$. If two modes interfere at a 50:50 beam splitter, a state of the form $|n\rangle|n\rangle$ is transformed into a superposition of products of even photon-number states. If we apply a 50:50 beam splitter operation to the input state, project both of the resulting modes conditionally onto the subspace of even photon-number states, and apply a second 50:50 beam splitter
operation, the input state is thus unchanged if it already lies in
$S$, but most other states will not pass the measurement test. To
remove the final unwanted components, we apply opposite phase
shifts to the two modes (which again leaves $\ket{n}\ket{n}$
unchanged) and repeat the procedure (as shown in Fig.~\ref{setup}). For an appropriate choice of phase shifts, the
desired state is obtained after infinitely many repetitions. In
practice, however, a quite small number is typically sufficient.
If, for instance, the input state is a product of two coherent states $|\alpha\rangle|\alpha\rangle$ with $|\alpha|^2=4$, the fidelity of the projection is $0.573$ for one unit, $0.962$ for two units, and $0.999998$ for three units. The scheme is easily generalized to an $M$ mode input
state. In this case, we first project modes 1 and 2 on $S$, modes
3 and 4 on $S$, etc, and then project modes 2 and 3 on $S$, modes
4 and 5 on $S$, etc.
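The conditional projection onto $S$ and the fidelities quoted above can be checked numerically in a truncated Fock space. The Python sketch below is our own illustration, with the truncation dimension and the number of units chosen for convenience; it builds the 50:50 beam splitter, the even--even projector and the phase shifts defined in Sec.~\ref{Filter} and evaluates the fidelity for a product of two coherent states.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from math import factorial

d = 25                                            # Fock truncation per mode
a = np.diag(np.sqrt(np.arange(1, d)), 1)          # annihilation operator
A, B = np.kron(a, np.eye(d)), np.kron(np.eye(d), a)
U = expm(np.pi / 4 * (A.T @ B - A @ B.T))         # 50:50 beam splitter (real matrix)
n = np.arange(d)
even = (n % 2 == 0).astype(float)
unit = U.T @ np.diag(np.kron(even, even)) @ U     # one conditional unit U^+ P U

alpha = 2.0                                       # |alpha|^2 = 4 as in the text
c = np.array([np.exp(-abs(alpha)**2 / 2) * alpha**k / np.sqrt(factorial(k))
              for k in n])
psi_in = np.kron(c, c)
proj_S = np.array([1.0 if i // d == i % d else 0.0 for i in range(d * d)])
dn = np.kron(n, np.ones(d)) - np.kron(np.ones(d), n)       # n_a - n_b

psi = unit @ psi_in                               # first unit
for j in (1, 2):                                  # two more units, phi_j = 2^-j * pi/4
    psi = unit @ (np.exp(1j * np.pi / 4 / 2**j * dn) * psi)

F = np.vdot(proj_S * psi_in, proj_S * psi_in).real / np.vdot(psi, psi).real
print(F)                                          # fidelity after three units
\end{verbatim}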
The setup can also be used as a device for photon number resolving
measurements if the phases applied between the light-cavity
interactions are chosen according to the new task. Each photon-number state $|n\rangle$ sent through the array leads to a
characteristic pattern of atomic states. As explained in section
\ref{DestructiveScheme}, one can determine the photon number of an
unknown state by testing the prime factors and powers of primes in
the range of interest in subsequent parts of the array. The scheme thereby scales moderately in the required resources. Three cavity pairs suffice, for example, for detecting any photon-number state which is not a multiple of three with a probability of $93.75\%$. However, in
this basic version of the counting scheme, the tested photon state
may leave each port of the last beam splitter with equal
probability. Deterministic emission of the unchanged quantum state
of light into a single spatial mode is rendered possible if
we allow atoms in different cavities to be entangled before the interaction with the field (see section \ref{QNDScheme}). More generally, the proposed scheme allows one to determine the difference in photon numbers of two input beams without changing the photonic state.
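The classical bookkeeping behind such a factor-by-factor test can be illustrated as follows. Assuming that each tested modulus $m$ (a prime or a power of a prime) reveals the photon number modulo $m$, the individual results can be combined by the standard Chinese remainder construction; the Python sketch below shows only this number-theoretic post-processing and should not be read as part of the optical scheme of Sec.~\ref{Detector}.
\begin{verbatim}
from math import prod

def combine_residues(residues, moduli):
    """Chinese-remainder reconstruction of n mod prod(moduli) from the
    residues n mod m_i obtained for pairwise co-prime moduli m_i."""
    M = prod(moduli)
    n = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        n += r * Mi * pow(Mi, -1, m)      # modular inverse of Mi mod m
    return n % M

# e.g. the residues of n = 11 with respect to 4, 3 and 5 fix n below 60
print(combine_residues([11 % 4, 11 % 3, 11 % 5], [4, 3, 5]))   # -> 11
\end{verbatim}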
The correlations in photon number between the two modes of states
in $S$ facilitate an interesting possibility to detect photon
losses. To this end the state is projected onto $S$ a second time. If photon loss has occurred, the state is most likely orthogonal to $S$, in which case we obtain a measurement outcome, which is not the one we require in order to accept the projection as successful. On the other hand, if photon loss has
not occurred, we are sure to get the desired measurement outcome.
We note that loss of a single photon can always be detected by
this method, and the state can thus be conditionally recovered
with almost perfect fidelity if the loss is sufficiently small. We
can improve the robustness even further, if we use an $M$-mode
state. It is then possible to detect all losses of up to $M-1$
photons, and even though it is $M$ times more likely to lose one
photon, the probability to lose one photon from each mode is
approximately $(Mp)^M$, where $p$ is the probability to lose one
photon from one mode and we assume $Mp\ll1$. In a situation where
many photon losses are to be expected, this procedure allows one
to obtain photon-number correlated states with high fidelity,
although with small probability.
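For illustration, with an assumed single-mode loss probability of $p=1\%$ and $M=4$ modes, one photon is lost somewhere with probability $Mp=4\%$, whereas a loss that escapes detection, i.e., one photon lost from each mode, occurs only with probability $(Mp)^M\approx3\times10^{-6}$.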
We can also distribute the modes of a photon-number correlated
state to distant parties, while still checking for loss, provided
we send at least two modes to each party. As the proposed scheme
can be used as a filter prior to the actual protocol it has an
important advantage compared to postselective schemes. If the
tested entangled state is for example intended to be used for
teleportation, the state to be teleported is not destroyed in the
course of testing the photon-number correlated resource state.
The dynamics in Fig.~\ref{cavity} requires strong coupling, a
sufficiently slowly varying mode function of the input field, and
a sufficiently low flux of photons. To quantify these
requirements, we provide a full multi-mode description of the
interaction of the light with the cavity for the case of a
coherent state input field in the last part of the paper. We find
that the single atom cooperativity parameter should be much larger
than unity, the mode function of the input field should be long
compared to the inverse of the decay rate of the cavity, and the
flux of photons in the input beam should not significantly exceed
the rate of spontaneous emission events from an atom having an average
probability of one half to be in the excited state. We also derive
the optimal shape of the mode function of the input field
(Eq.~\eqref{mode}), when the mode function is only allowed to be
nonzero in a finite time interval.
\section{Nondestructive projection onto photon-number correlated states}\label{Filter}
\begin{figure*}
\includegraphics[width=\textwidth]{figure2}
\caption{\label{setup}(Color online) The first three units of the proposed setup to conditionally project an arbitrary two-mode input state onto the subspace spanned by the state vectors $|n\rangle|n\rangle$, $n=0,1,2,\ldots$. All atoms are prepared in the state $(|{\uparrow}\rangle+|{\downarrow}\rangle)/\sqrt{2}$ before the interaction with the field, and the desired projection occurs in the limit of infinitely many units conditioned on all atoms being in the state $(|{\uparrow}\rangle+|{\downarrow}\rangle)/\sqrt{2}$ after the interaction. As explained in the text, a small number of units will typically suffice in practice. For later reference we label the beam splitters as BS$_i$ and the cavities as C$_i$.}
\end{figure*}
The proposed setup for projection of an arbitrary two-mode input
state onto $S$ is sketched in Fig.~\ref{setup}. We denote the
field annihilation operators of the two input modes by $\hat{a}$
and $\hat{b}$, respectively. The total transformation corresponding to one of the units consisting of a beam splitter, a set of cavities, and a second beam splitter, conditioned on both atoms being measured in the state $|+\rangle$ after the interaction, is given by the operator $U^\dag PU$, where
\begin{equation}\label{UBS}
U=\exp\left[\frac{\pi}{4}\left(\hat{a}^\dag\hat{b}-\hat{a}\hat{b}^\dag\right)\right]
\end{equation}
and
\begin{equation}
P=\sum_{n=0}^\infty\sum_{m=0}^\infty|2n\rangle\langle2n|\otimes|2m\rangle\langle2m|.
\end{equation}
As explained above, the Hong-Ou-Mandel effect ensures that $U^\dag PU|n\rangle|n\rangle=|n\rangle|n\rangle$, while most other possible components of the input state are removed through the conditioning, for instance all components $|n\rangle|m\rangle$ with $n+m$ odd. There are, however, a few exceptions, since all states of the form $U^\dag|2n\rangle|2m\rangle$, $n=0,1,2,\ldots$, $m=0,1,2,\ldots$, are accepted. The phase shifts between the $U^\dag PU$ units are represented by the operator
\begin{equation}\label{Uphi}
U_\phi=\exp\left[i\phi\left(\hat{a}^\dag\hat{a}-\hat{b}^\dag\hat{b}\right)\right],
\end{equation}
which leaves states of the form $|n\rangle|n\rangle$ unchanged, while states of the form $|n\rangle|m\rangle$ with $n\neq m$ acquire a phase shift.
For a setup containing $N+1$ units, the complete conditional transformation is thus represented by the operator
\begin{eqnarray}
\hat{O}_N&=&U^\dag PUU_{\phi_N}U^\dag PU\cdots U_{\phi_2}U^\dag PU
U_{\phi_1}U^\dag PU\\
&=&U^\dag PU\prod_{i=1}^N\cos[\phi_i(\hat{a}^\dag\hat{a}-\hat{b}^\dag\hat{b})]\label{ON},
\end{eqnarray}
where $U^\dag PU$ in the last line commutes with the product of
cosines. For $N \rightarrow \infty$, the product of cosines
vanishes for all components of the input state with different
numbers of photons in the two modes if, for instance, all the
$\phi_i$'s are chosen as an irrational number times $\pi$. We note
that even though we here apply the two-mode operators one after
the other to the input state corresponding to successive
interactions of the light with the different components of the
setup, the result is exactly the same if the input pulses are
longer than the distance between the components such that
different parts of the pulses interact with different components
at the same time. The only important point is that the state of
the atoms is not measured before the interaction with the light
field is completed. The setup using an array of cavities as in Fig.~\ref{setup} may thus be very compact even though the pulses are required
to be long. (Note that it would also be possible to use a single pair of
cavities and atoms repeatedly in a fold-on type of experiment; however
in that case the compactness would be lost due to the need for long
delay lines necessary to measure and re-prepare the atoms before they
are reused.)
A natural question is how one should optimally choose the angles $\phi_i$ to approximately achieve the projection with a small number of units. To this end we define the fidelity of the projection
\begin{equation}\label{FN}
F_N=\frac{|\langle\psi_N|\psi_\infty\rangle|^2}
{\langle\psi_N|\psi_N\rangle\langle\psi_\infty|\psi_\infty\rangle}
=\frac{\langle\psi_\infty|\psi_\infty\rangle}
{\langle\psi_N|\psi_N\rangle}
\end{equation}
as the overlap between the unnormalized output state $|\psi_N\rangle=\hat{O}_N|\psi_{\textrm{in}}\rangle$ after $N+1$ units and the projection $|\psi_\infty\rangle$ of the input state $|\psi_{\textrm{in}}\rangle$ onto the subspace $S$. The last equality follows from the fact that $|\psi_N\rangle=|\psi_\infty\rangle+|\psi_\bot\rangle$, where $|\psi_\bot\rangle$ lies in the orthogonal complement of $S$. Maximizing $F_N$ for a given $|\psi_{\textrm{in}}\rangle=\sum_n\sum_mc_{nm}|n\rangle|m\rangle$ thus corresponds to minimizing
\begin{multline}
\langle\psi_N|\psi_N\rangle=\sum_{n=0}^\infty\sum_{m=0}^\infty c_{nm}
\prod_{i=1}^N\cos^2[\phi_i(n-m)]\\
\times\langle\psi_{\textrm{in}}|U^\dag PU|n\rangle|m\rangle,
\end{multline}
i.e., we would like to find the optimal solution of
\begin{multline}
\frac{\partial\langle\psi_N|\psi_N\rangle}{\partial\phi_j}=
-\sum_{n=0}^\infty\sum_{m=0}^\infty c_{nm}\sin[2\phi_j(n-m)](n-m)\\
\times\prod_{\substack{i=1\\i\neq j}}^N\cos^2[\phi_i(n-m)]
\langle\psi_{\textrm{in}}|U^\dag PU|n\rangle|m\rangle=0.
\end{multline}
A set of solutions valid for any input state can be obtained by requiring $\sin[2\phi_j(n-m)]\prod_{i\neq j}\cos^2[\phi_i(n-m)]=0$ for all even values of $n-m$ (note that $U^\dag PU|n\rangle|m\rangle=0$ for $n+m$ odd). Within this set the optimal solution is $\phi_j=2^{-j}\times\pi/2$. It is interesting to note that choosing one of the angles to be $2^{-j}\times\pi/2$, $j\in\{1,2,\ldots,N\}$, all terms with $n-m=2^j\times(\pm1,\pm3,\pm5,\ldots)$ are removed from the input state according to \eqref{ON}. When all angles are chosen according to $\phi_j=2^{-j}\times\pi/2$, it follows that $|\psi_N\rangle$ only contains terms with $n-m=q2^{(N+1)}$, $q=0,\pm1,\pm2,\ldots$, which may be a useful property in practical applications of the scheme. Even though this is not necessarily optimal with respect to maximizing $F$ for a particular choice of input state, we thus use the angles $\phi_j=2^{-j}\times\pi/2$ in the following, except for one important point: If the input state satisfies the symmetry relations $c_{nm}=c_{mn}$, it turns out that the operator $U^\dag PU$ by itself removes all terms with $n-m=\pm2,\pm6,\pm10,\ldots$, i.e., we can choose the angles as $\phi_j=2^{-j}\times\pi/4$, and $|\psi_N\rangle$ only contains terms with $n-m=q2^{(N+2)}$, $q=0,\pm1,\pm2,\ldots$. For $N=2$, for instance, only terms with $n-m=0,\pm16,\pm32,\ldots$ contribute.
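The effect of this choice of angles is easily checked numerically. The following minimal Python sketch (purely illustrative; NumPy and the scanned range of differences are our own choices) evaluates the cosine suppression factor appearing in \eqref{ON} for the choice $\phi_j=2^{-j}\times\pi/2$ and $N=2$:
\begin{verbatim}
import numpy as np

N = 2
phis = [np.pi / 2 * 2.0 ** (-j) for j in range(1, N + 1)]   # phi_j = 2^(-j) * pi/2

def suppression(d):
    # Product of cos^2(phi_i * d) acting on a component |n><m| with d = n - m.
    return np.prod([np.cos(phi * d) ** 2 for phi in phis])

for d in range(0, 33, 2):   # odd differences are already removed by U^dag P U
    print(d, round(suppression(d), 6))
# Only d = q * 2^(N+1) (here 0, 8, 16, 24, 32) survive with factor 1; all other
# even differences are completely suppressed, in agreement with the text.
\end{verbatim}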
In Fig.~\ref{fidelity}, we have chosen the input state to be a product of two coherent states with amplitude $\alpha$ and plotted the fidelity \eqref{FN} as a function of $|\alpha|^2$ for different numbers of units of the setup. Even for $|\alpha|^2$ as large as 10, the fidelity is still as high as 0.9961 for $N=2$, and the required number of units is thus quite small in practice. The figure also shows the success probability
\begin{equation}
P_N=\langle\psi_N|\psi_N\rangle
\end{equation}
for $N\rightarrow\infty$. For $|\alpha|^2=10$, for instance, one should repeat the experiment about 11 times on average before the desired measurement outcome is observed.
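For this input state the success probability in the limit of infinitely many units can also be written in closed form, $P_\infty=\sum_n|c_{nn}|^2=e^{-2|\alpha|^2}I_0(2|\alpha|^2)$, with $I_0$ the modified Bessel function of the first kind. The following Python sketch (purely illustrative; SciPy and the summation cutoff are our own choices) reproduces the value quoted above for $|\alpha|^2=10$:
\begin{verbatim}
import numpy as np
from scipy.special import ive, gammaln

nbar = 10.0   # mean photon number |alpha|^2 in each input mode
# Direct sum P_infinity = e^(-2 nbar) * sum_n nbar^(2n) / (n!)^2, evaluated in log space
p_direct = sum(np.exp(2 * n * np.log(nbar) - 2 * nbar - 2 * gammaln(n + 1))
               for n in range(200))
# Equivalent closed form using the exponentially scaled Bessel function I_0
p_bessel = ive(0, 2 * nbar)
print(p_direct, p_bessel, 1 / p_bessel)   # ~0.090 and hence ~11 repetitions on average
\end{verbatim}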
\begin{figure}
\includegraphics[width=\columnwidth]{figure3}
\caption{\label{fidelity}(Color online) Fidelity (Eq.~\eqref{FN}) as a function of the expectation value of the number of photons in one of the input modes for $|\psi_{\textrm{in}}\rangle=|\alpha\rangle|\alpha\rangle$ and setups with one, two, three, and infinitely many units. The angles are chosen as $\phi_j=2^{-j}\times\pi/4$. The dotted line labeled $P_{\infty}$ is the probability in the limit of infinitely many units to actually obtain the required measurement outcome, i.e., all atoms in $(|{\uparrow}\rangle+|{\downarrow}\rangle)/\sqrt{2}$ after the interaction with the field.}
\end{figure}
\section{Photon number resolving measurement}\label{Detector}
In this section, we show how a photon number measurement can be
implemented using a modified version of the setup introduced in
the previous section. The key idea is explained in subsection
\ref{DestructiveScheme} where we describe the basic photo-counting
scheme. In subsection \ref{QNDScheme}, this protocol is extended
to allow for a QND measurement of photon numbers.
\subsection{Number resolving detection scheme}\label{DestructiveScheme}
In the following, we analyze the setup shown in Fig.~\ref{setup}
when the input is a product of an $n$-photon Fock state in the lower input beam and a vacuum state in the upper input beam. Since the setup
contains a series of beam splitters, it will be useful to define
$\hat{A}=(\hat{a}^\dag-\hat{b}^\dag)/\sqrt{2}$ and
$\hat{B}=(\hat{a}^\dag+\hat{b}^\dag)/\sqrt{2}$, such that
$\hat{a}^\dag\ket{0}\rightarrow\hat{A}\ket{0}$ and
$\hat{b}^\dag\ket{0}\rightarrow\hat{B}\ket{0}$ at beam splitters
BS$_1$, BS$_3$, BS$_5$, $\ldots$, and
$\hat{A}\ket{0}\rightarrow\hat{a}^\dag\ket{0}$ and
$\hat{B}\ket{0}\rightarrow\hat{b}^\dag\ket{0}$ at beam splitters
BS$_2$, BS$_4$, BS$_6$, $\ldots$.
As before, all atoms are initially prepared in the state
$\ket{+}$ and will after the interaction with the field be measured in the
$\ket{\pm}$ basis. When we start with an $n$-photon state, there are only two possible outcomes of the measurement of the atoms in the cavities labeled C$_1$ and C$_2$ in Fig.~\ref{setup}, depending on whether $n$ is even or odd. In the even case, the two atoms can only be in $\ket{++}$ or $\ket{--}$, and in the odd case in $\ket{-+}$ or $\ket{+-}$. To handle the odd and even case at the same
time we denote $\ket{++},\ket{-+}$ as $\ket{B_+}$ and
$\ket{--},\ket{+-}$ as $\ket{B_-}$. A measurement of $\ket{B_+}$
indicates an even number of photons in the $\hat{b}$-beam, resp.,
$\ket{B_-}$ an odd number of photons in the $\hat{b}$-beam.
We start with the state $\ket{n}\ket{0}=\frac{1}{\sqrt{n!}}(\hat{a}^\dag)^n\ket{0}\ket{0}$ as input. After the beam splitter BS$_1$, the state has changed into $\frac{1}{\sqrt{n!}}\left(\frac{\hat{a}^\dag-\hat{b}^\dag}{\sqrt{2}}\right)^n\ket{0}\ket{0}$
and interacts with the two atoms in the cavities C$_1$ and C$_2$. By measuring the atoms in the $\ket{\pm}$ basis, the state is projected into the subspace of an even or odd number of photons in the $\hat{b}$ path. The photon state after the measurement can be written as
$\ket{b_\pm}:=\frac{1}{\sqrt{2\
n!}}\left[\left(\frac{\hat{a}^\dag-\hat{b}^\dag}
{\sqrt{2}}\right)^n \pm\left(\frac{\hat{a}^\dag+\hat{b}^\dag}
{\sqrt{2}}\right)^n\right]\ket{0}\ket{0}=\frac{1}{\sqrt{2\
n!}}(\hat{A}^n \pm\hat{B}^n)\ket{0}\ket{0}$, where
$\ket{b_+}$($\ket{b_-}$) is the state with an even (odd) number of
$\hat{b}$ photons and corresponds to the measurement result
$\ket{B_+}$ ($\ket{B_-}$). Note that this first measurement result is
completely random.
After BS$_2$ the state simplifies to
$\frac{1}{\sqrt{2\ n!}}[(\hat{a}^\dag)^n
\pm(\hat{b}^\dag)^n]\ket{0}\ket{0}$. Now due to the phase shifters the two modes pick up a relative phase of $2 \phi_1 n$ so that the state is given by $\frac{1}{\sqrt{2\ n!}}[e^{i \phi_1 n}(\hat{a}^\dag)^n \pm e^{-i \phi_1 n}(\hat{b}^\dag)^n]\ket{0}\ket{0}$. Finally, after BS$_3$ we have the state $\frac{1}{\sqrt{2\ n!}} (e^{i\phi_1 n}\hat{A}^n \pm e^{-i\phi_1 n}\hat{B}^n)\ket{0}\ket{0}$, which is equal to
$\frac{1}{2\sqrt{2\ n!}}(e^{i\phi_1 n} \pm e^{-i\phi_1 n})(\hat{A}^n +
\hat{B}^n)\ket{0}\ket{0}+\frac{1}{2\sqrt{2\ n!}}(e^{i\phi_1
n} \mp e^{-i \phi_1 n})(\hat{A}^n - \hat{B}^n)\ket{0}\ket{0}$. This can also be rewritten as $(e^{i\phi_1 n}\pm e^{-i\phi_1 n})/2\ket{b_+}+(e^{i \phi_1
n} \mp e^{-i \phi_1 n})/2\ket{b_-}$. So the result of measuring the state of the atoms in cavities C$_3$ and C$_4$ will be $\ket{B_\pm}$
with probability $p_+=\cos(\phi_1n)^2$
and $\ket{B_\mp}$ with probability $p_-=\sin(\phi_1n)^2$. Since the state is again projected into one of the two states $\ket{b_\pm}$ we can repeat exactly the same calculations for all following steps.
Whereas the first measurement result was completely random, all
following measurement results depend on $n$ and the previous
measurement outcome, i.e., with probability $p_i=\cos(\phi_i
n)^2$ the $(i+1)$-th measurement result is the same as the $i$-th
result, and with probability $\sin(\phi_i n)^2$ the measurement result
changes and the state changes from $\ket{b_\pm}$ to $\ket{b_\mp}$.
If the number of units is infinite (or sufficiently large) and we have
chosen all phases equal to $\phi$, then we can estimate, from the
relative frequency with which the measurement result has switched
between $\ket{B_+}$ and $\ket{B_-}$, the number of photons with
arbitrary precision for all photon numbers $n<\frac{\pi}{\phi}$, namely $n\approx \arccos(\sqrt{f})/\phi$, where $f=N_\textrm{same}/(N_\textrm{same}+N_\textrm{different})$ and $N_\textrm{same}$ ($N_\textrm{different}$) is the number of cases where the measurement outcome is the same as (different from) the previous measurement outcome.
Measuring this relative frequency with a fixed small phase is not
the optimal way to get the photon number. We propose instead the
following. Let us use a setup with a total of $N+1$ units and
choose the phases to be $\phi_i=2^{i-1} \pi/n_0$,
$i=1,2,\ldots,N$, for an arbitrarily chosen value of $n_0 \in
\mathbb{N}$ and let us calculate the probability $p(n)$ that the
measurement results are all the same,
\begin{equation}\label{pn}
p(n)=\prod_{i=1}^N p_i=\prod_{i=0}^{N-1} \cos\left(\frac{2^i\pi}{n_0}n\right)^2.
\end{equation}
This probability is equal to one for all photon numbers that are a multiple of $n_0$ and goes to zero otherwise in the limit of infinitely many units of the setup. This way we can measure whether the photon number is a multiple of $n_0$.
For example, for $n_0=3$ and $N+1=3$ we detect any state whose photon number is not a
multiple of three with a probability of at least $q=93.75\%$, resp., $q=99.61\%$ for $N+1=5$, where
\begin{equation}\label{q}
q:=1-\max_{n\neq0,n_0,2n_0,\ldots}p(n).
\end{equation}
For $n_0=4$, already $N+1=3$ is sufficient to achieve $q=100\%$. For $n_0=5$ and $N+1=3$ we have $q=93.75\%$, which increases to $q=99.61\%$ for $N+1=5$. In Fig.~\ref{20photo} and \ref{100photo} we have shown $p(n)$ for $n_0=20$ and
$n_0=100$. The number $N$ needed to get a good result typically
scales logarithmically with $n_0$; e.g., for $n_0=1000$ already
$N+1=11$ is enough to reach $q=99.95\%$.
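These numbers are easily reproduced from \eqref{pn} and \eqref{q}. The following minimal Python sketch (purely illustrative; NumPy is our own choice) exploits the fact that $p(n)$ is periodic in $n$ with period $n_0$, so only $n=1,\ldots,n_0-1$ need to be scanned:
\begin{verbatim}
import numpy as np

def p(n, n0, N):
    # Probability that all measurement results agree, with phases phi_i = 2^(i-1) pi / n0.
    i = np.arange(N)
    return np.prod(np.cos(2.0 ** i * np.pi * n / n0) ** 2)

def q(n0, N):
    # Worst-case rejection probability over photon numbers that are not multiples of n0;
    # p(n) is periodic with period n0, so scanning n = 1, ..., n0 - 1 is sufficient.
    return 1.0 - max(p(n, n0, N) for n in range(1, n0))

print(q(3, 2), q(3, 4))   # ~0.9375 and ~0.9961 for n0 = 3 with N+1 = 3 and N+1 = 5 units
print(q(1000, 10))        # ~0.9995 for n0 = 1000 with N+1 = 11 units
\end{verbatim}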
\begin{figure}
\includegraphics[width=\columnwidth]{figure4}
\caption{\label{20photo}(Color online) Probabilities $p(n)$ (Eq.~\eqref{pn}) for $n_0=20$ and $N+1=5$ resulting in $q=94.49\%$ (Eq.~\eqref{q}). For
$N+1=7$, $q=99.66\%$.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{figure5}
\caption{\label{100photo}(Color online) Probabilities $p(n)$ for $n_0=100$ and
$N+1=8$ resulting in $q=96.33\%$. For $N+1=10$, $q=99.95\%$.}
\end{figure}
Given an unknown state we can test all possible prime factors and powers of primes to identify the exact photon number. If, e.g., we have a state consisting of $0$ to $10$ photons, the following factors have to be tested: $n_0=2,3,4,5,7,8,9$ (where 2 does not need to be checked separately). 24 measurement results are sufficient to test all factors with a probability over $99\%$.
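The set of moduli to be tested for a given photon-number range is easily generated; the sketch below (Python, purely illustrative, using a naive trial-division primality test of our own) reproduces the list of factors quoted above for $n_\textrm{max}=10$:
\begin{verbatim}
def prime_powers(n_max):
    # All prime powers p^k <= n_max; these are the moduli n0 that need to be tested.
    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    powers = []
    for p in (m for m in range(2, n_max + 1) if is_prime(m)):
        pk = p
        while pk <= n_max:
            powers.append(pk)
            pk *= p
    return sorted(powers)

print(prime_powers(10))   # [2, 3, 4, 5, 7, 8, 9]
\end{verbatim}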
If we require reliable photon number counting for $n$ ranging from
$0$ to $n_\textrm{max}$, for large $n_\textrm{max}$, all primes
and powers of primes that are smaller than $n_\textrm{max}$ need to
be tested. This number can be bounded from above by
$n_\textrm{max}$. All $n_\textrm{max}$ tests are required to work
with high probability. To this end each single test needs to
succeed with a probability better than $q\geq
1-1/n_\textrm{max}$. It can be checked numerically that this is
the case if $N=2 \log(n_\textrm{max})$, leading to a photon
counting device with reliable photon detection up to
$n_\textrm{max}$ using an array consisting of less than $2
n_\textrm{max} \log(n_\textrm{max})$ basic units.
Note that this setup does not destroy the photonic input state but
changes $\ket{n}\ket{0}$ randomly into $\frac{1}{\sqrt{2}}[\ket{n}\ket{0}\pm\ket{0}\ket{n}]$, i.e., the photons
leave the setup in a superposition of all photons taking either
the $\hat{a}$-beam or the $\hat{b}$-beam. This cannot be changed
back into $\ket{n}\ket{0}$ by means of passive optical elements.
The output state - a so-called $N00N$ state - is, however, a very
valuable resource for applications in quantum information
protocols and quantum metrology \cite{NOON}.
\subsection{Non-destructive number resolving detection scheme}\label{QNDScheme}
For a non-demolition version of the photon number measurement we
use the basic building block depicted in Fig.~\ref{block}. The atoms in the two upper cavities are initially prepared in an entangled state $\ket{\phi_+}=(\ket{{\uparrow}{\uparrow}}+\ket{{\downarrow}{\downarrow}})/\sqrt{2}$, and the atoms in the lower cavities are also prepared in the state $\ket{\phi_+}$. This can, for instance, be achieved via the parity measurement scheme suggested in \cite{parity}. During the interaction the upper and the lower atoms will stay in the subspace spanned by $\ket{\phi_\pm}=(\ket{{\uparrow}{\uparrow}} \pm \ket{{\downarrow}{\downarrow}})/\sqrt{2}$. The state changes between $\ket{\phi_+}$ and $\ket{\phi_-}$ each time one of the two entangled cavities interacts with an odd number of photons. As in the
previous subsection we define $\ket{B_\pm}$ to handle the even
and the odd case at the same time. For $n$ even,
$\ket{B_+}=\ket{\phi_+}\ket{\phi_+}$ and
$\ket{B_-}=\ket{\phi_-}\ket{\phi_-}$, while in the odd case we
define $\ket{B_+}=\ket{\phi_-}\ket{\phi_+}$ and
$\ket{B_-}=\ket{\phi_+}\ket{\phi_-}$.
\begin{figure}
\includegraphics[width=\columnwidth]{figure6}
\caption{(Color online) Basic building block of the non-demolition photon number resolving detection scheme. The dashed circles and the wavy lines indicate the entanglement between the atoms in cavities C$_1$ and C$_3$ and between the atoms in cavities C$_2$ and C$_4$. The gray box is either a cavity or a mirror. (In the latter case, we may as well send the light directly from the cavities C$_3$ and C$_4$ to the first two cavities of the next block.)} \label{block}
\end{figure}
Using the same notation as before, the state is transformed as follows, when we go through the setup from left to right. The initial state is
\begin{equation*}
\ket{\psi_1}=\frac{1}{\sqrt{n!}} (\hat{a}^\dag)^n\ket{0}\ket{0}\ket{\phi_+}\ket{\phi_+},
\end{equation*}
and after the first beam splitter we have the state
\begin{equation*}
\ket{\psi_2}=\frac{1}{\sqrt{n!}}\hat{A}^n\ket{0}\ket{0}\ket{\phi_+}\ket{\phi_+}
=\frac{1}{\sqrt{2}}(\ket{b_+}+\ket{b_-})\ket{\phi_+}\ket{\phi_+}.
\end{equation*}
The interaction with the first two cavities leads to the state
\begin{multline*}
\ket{\psi_3}=\frac{1}{\sqrt{2}}\ket{b_+}\ket{B_+}+\frac{1}{\sqrt{2}}\ket{b_-}\ket{B_-}\\
=\frac{1}{2\sqrt{n!}}(\hat{A}^n+\hat{B}^n)\ket{0}\ket{0}\ket{B_+}\\
+\frac{1}{2\sqrt{n!}}(\hat{A}^n-\hat{B}^n)\ket{0}\ket{0}\ket{B_-}.
\end{multline*}
The second beam splitter transforms the state into
\begin{multline*}
\ket{\psi_4}=\frac{1}{2\sqrt{n!}}\big\{[(\hat{a}^\dag)^n+(\hat{b}^\dag)^n]\ket{0}\ket{0}\ket{B_+}\\
+[(\hat{a}^\dag)^n-(\hat{b}^\dag)^n]\ket{0}\ket{0}\ket{B_-}\big\}.
\end{multline*}
The two modes pick up a relative phase shift of $2 \phi_i n$ at the phase shifters so that
\begin{multline*}
\ket{\psi_5}=\frac{1}{2\sqrt{n!}}
\big\{[e^{i\phi_i n}(\hat{a}^\dag)^n
+e^{-i\phi_in}(\hat{b}^\dag)^n]\ket{0}\ket{0}\ket{B_+}\\
+[e^{i\phi_i n}(\hat{a}^\dag)^n
-e^{-i\phi_i n}(\hat{b}^\dag)^n]\ket{0}\ket{0}\ket{B_-}\big\}.
\end{multline*}
After the third beam splitter we get
\begin{multline*}
\ket{\psi_6}=\frac{1}{2\sqrt{n!}}\big[(e^{i\phi_i n}\hat{A}^n+e^{-i\phi_i n}\hat{B}^n)\ket{0}\ket{0}\ket{B_+}\\
+(e^{i\phi_i n}\hat{A}^n-e^{-i\phi_i n}\hat{B}^n)\ket{0}\ket{0}\ket{B_-}\big]\\
=\frac{1}{\sqrt{2}}\cos(\phi_i n)\ket{b_+}\ket{B_+}
+\frac{i}{\sqrt{2}}\sin(\phi_i n)\ket{b_-}\ket{B_+}\\
+\frac{i}{\sqrt{2}}\sin(\phi_i n)\ket{b_+}\ket{B_-}
+\frac{1}{\sqrt{2}}\cos(\phi_i n)\ket{b_-}\ket{B_-}.
\end{multline*}
Note now that independent of whether $n$ is even or odd, the interaction with the last two cavities turns the states $\ket{b_\pm}\ket{B_\pm}$ into $\ket{b_\pm}\ket{\phi_+}\ket{\phi_+}$ and the states $\ket{b_\pm}\ket{B_\mp}$ into $\ket{b_\pm}\ket{\phi_-}\ket{\phi_-}$. The state is thus changed to
\begin{multline*}
\ket{\psi_7}=
\frac{1}{\sqrt{2}}\cos(\phi_i n)\left(\ket{b_+}+\ket{b_-}\right)\ket{\phi_+}\ket{\phi_+}\\
+\frac{i}{\sqrt{2}}\sin(\phi_i n)\left(\ket{b_+}+\ket{b_-}\right)\ket{\phi_-}\ket{\phi_-}\\
=\frac{1}{\sqrt{n!}}\hat{A}^n\ket{0}\ket{0}
\left[\cos(\phi_i n)\ket{\phi_+}\ket{\phi_+}+i\sin(\phi_i n)\ket{\phi_-}\ket{\phi_-}\right],
\end{multline*}
and after the last beam splitter we have
\begin{multline*}
\ket{\psi_8}=\frac{1}{\sqrt{n!}}(\hat{a}^\dag)^n\ket{0}\ket{0}\\
\otimes\left[\cos(\phi_i n)\ket{\phi_+}\ket{\phi_+}
+i\sin(\phi_i n)\ket{\phi_-}\ket{\phi_-}\right].
\end{multline*}
Note that the photonic modes are now decoupled from the state of the atoms. The photons will continue after the final beam splitter unchanged in $\ket{n}\ket{0}$ while the atoms still contain some information about the photon number.
Note that $\ket{\phi_+}=(\ket{++}+\ket{--})/\sqrt{2}$ and
$\ket{\phi_-}=(\ket{+-}+\ket{-+})/\sqrt{2}$. By measuring all
atoms in the $\ket{\pm}$ basis we can easily distinguish between
$\ket{\phi_\pm}$ by the parity of the measurements. The probability to obtain $\ket{\phi_+}\ket{\phi_+}$ is $\cos(\phi n)^2$, and the probability to get $\ket{\phi_-}\ket{\phi_-}$ is $\sin(\phi n)^2$. After the measurement, all photons are found in the $\hat{a}$ beam for both outcomes. The probability for measuring $\ket{\phi_+}\ket{\phi_+}$ is the same as the changing probability in the previous setup such that we can do the same with a chain of the demolition free block. If we prefer also to get the parity information in each step, we can add an additional cavity at the end of each block as shown in Fig.~\ref{block}.
More generally, the demolition free element leaves photonic input
states $\ket{\psi}=\frac{1}{\sqrt{n!q!} }
(\hat{a}^\dag)^n(\hat{b}^\dag)^q \ket{0}\ket{0}$, where $n$ photons
enter through the lower and $q$ photons enter through the upper
port, unchanged. A calculation analogous to the previous one shows
that one obtains the atomic states $\ket{\phi_+}\ket{\phi_+}$ and
$\ket{\phi_-}\ket{\phi_-}$ with probabilities $ \cos(\phi_i
(n-q))^2$ and $\sin(\phi_i (n-q))^2$ respectively. This way, we
can test for photon number differences $n-q$ in two input states
in the same fashion as for photon numbers in a single input beam
described above. Similarly, one can project two coherent input states
$\ket{\alpha}\ket{\alpha}$ onto generalized photon-number
correlated states $\sum_n c_n\ket{n}\ket{n-d}$ with fixed photon
number difference $d=0,1,2,\ldots$.
In a realistic scenario we may be faced with photon losses. Both
setups have a built-in possibility to detect loss of one photon. In the first case we get the parity of the total number of photons in every single measurement of a pair of atoms. If this parity changes at some place in the chain then we know that we lost at least one photon. In the demolition free setup the valid measurement results are restricted to $\ket{\phi_+}\ket{\phi_+}$ and $\ket{\phi_-}\ket{\phi_-}$. If we measure
$\ket{\phi_-}\ket{\phi_+}$ or $\ket{\phi_+}\ket{\phi_-}$ we know that we lost a photon in between the two pairs of cavities. In addition, the optional cavity at the end of each block provides an extra check for photon loss.
\section{Filtering out losses}\label{Purification}
We next investigate the possibility to use a comparison of the number of photons in the two modes to detect a loss. As in Sec.~\ref{Filter}, we start with the input state $|\psi_{\textrm{in}}\rangle=|\alpha\rangle|\alpha\rangle$ and use the proposed setup to project it onto the subspace $S$. We then use two beam splitters with reflectivity $R$ to model a fractional loss in both modes. After tracing out the reflected field, we finally use the proposed setup once more to project the state onto $S$. In Fig.~\ref{recover}, we plot the fidelity between the state obtained after the first projection onto $S$ and the state obtained after the second projection onto $S$, the probability that the second projection is successful given that the first is successful, and the purity
of the state after the second projection. The second projection is seen to recover the state obtained after the first projection with a fidelity close to unity even for losses of a few percent. This is the case because a loss of only one photon will always lead to a failure of the second conditional projection. The main contribution to the fidelity decrease for small $R$ is thus a simultaneous loss of one photon from both modes. It is also interesting to note that the final state is actually pure for all values of $R$, which is a consequence of the particular choice of input state. Finally, we note that a single unit is sufficient to detect loss of a single photon, and for small $R$ we thus only need to use one unit for the second projection in practice.
\begin{figure}
\includegraphics[width=\columnwidth]{figure7}
\caption{\label{recover}(Color online) Projecting the input state $|\psi_{\textrm{in}}\rangle=|\alpha\rangle|\alpha\rangle$ onto the subspace $S$ followed by a fractional loss $R$ in both modes and a second projection onto $S$, the figure shows the fidelity between the states obtained after the first and the second projection onto $S$, the probability that the second projection is successful given that the first is successful, and the purity of the state after the second projection for two different values of $|\alpha|^2$.}
\end{figure}
Let us also consider a four mode input state $|\psi_{\textrm{in}}\rangle=|\alpha\rangle|\alpha\rangle|\alpha\rangle|\alpha\rangle$. As before we use the setup to project this state onto the subspace spanned by the state vectors $|n\rangle|n\rangle|n\rangle|n\rangle$, $n=0,1,2,\ldots$. We then imagine a fractional loss of $R$ to take place in all modes. If two of the modes are on their way to Alice and the two other modes are on their way to Bob, we can only try to recover the original projection by projecting the former two modes onto $S$ and the latter two modes onto $S$. The results are shown in Fig.~\ref{loss}, and again the curves showing the fidelity and the purity are seen to be very flat and close to unity for small losses. This scheme allows one to distribute entanglement with high fidelity, but reduced success probability.
\begin{figure}
\includegraphics[width=\columnwidth]{figure8}
\caption{\label{loss}(Color online) Projecting the input state $|\psi_{\textrm{in}}\rangle=|\alpha\rangle|\alpha\rangle|\alpha\rangle|\alpha\rangle$ onto the subspace spanned by the vectors $|n\rangle|n\rangle|n\rangle|n\rangle$, $n=0,1,2,\ldots$, followed by a fractional loss $R$ in all four modes and a projection of modes 1 and 2 onto $S$ and of modes 3 and 4 onto $S$, the figure shows the fidelity between the states obtained after the first and the second set of projections, the probability that the projections onto $S$ are successful given that the first projection is successful, and the purity of the state after the projections onto $S$ for two different values of $|\alpha|^2$.}
\end{figure}
\section{Deviations from ideal behavior}\label{Nonideal}
So far we have considered the ideal limit of infinitely long pulses and an infinitely strong coupling. In this section, we use a more detailed model of the interaction of the light field with an atom in a cavity to investigate how long the input pulses need to be and how large the single atom cooperativity parameter should be to approximately achieve this limit.
\subsection{Optimal input mode function}
The backreflection of light in a cavity leads to a state dependent distortion of the shape of the mode function of the input field, and to study this effect in more detail we concentrate on a single cavity as shown in Fig.~\ref{cavity} in the following. For simplicity, we assume the input field to be a continuous coherent state \cite{blow}
\begin{equation}\label{cont}
|\{\alpha_{\textrm{in}}(t)\}\rangle=\exp\left[\int\alpha_{\textrm{in}}(t)\hat{a}^\dag(t)dt
-\int\alpha_{\textrm{in}}^*(t)\hat{a}(t)dt\right]|0\rangle
\end{equation}
with mode function $f_{\textrm{in}}(t)=\alpha_{\textrm{in}}(t)/\alpha$, where $|\alpha|^2=\int|\alpha_{\textrm{in}}(t)|^2dt$ is the expectation value of the total number of photons in the input beam. The light-atom interaction is governed by the Hamiltonian
\begin{equation}
H=\hbar g(\hat{c}^\dag\sigma+\sigma^\dag\hat{c}),
\quad\sigma:=|{\uparrow}\rangle\langle e|
\end{equation}
and the decay term
\begin{equation}
\mathcal{L}\rho=\frac{\Gamma}{2}(2\sigma\rho\sigma^\dag-\sigma^\dag\sigma\rho-\rho\sigma^\dag\sigma),
\end{equation}
where $g$ is the light-atom coupling strength, $\hat{c}$ is the annihilation operator of the cavity field, $\Gamma$ is the decay rate of the excited state of the atom due to spontaneous emission, and $\rho$ is the density operator representing the state of the atom and the cavity field. For $g^2\textrm{Tr}(\hat{c}^\dag\hat{c}\langle{\uparrow}|\rho|{\uparrow}\rangle)\ll(\Gamma/2)^2$, where $\textrm{Tr}$ denotes the trace, the population in the excited state of the atom is very small, and we may eliminate this state adiabatically. This reduces the effective light-atom interaction dynamics to a single decay term
\begin{equation}
\frac{2g^2}{\Gamma}\left(2\hat{c}|{\uparrow}\rangle
\langle{\uparrow}|\rho|{\uparrow}\rangle\langle{\uparrow}|\hat{c}^\dag
-\hat{c}^\dag\hat{c}|{\uparrow}\rangle\langle{\uparrow}|\rho
-\rho|{\uparrow}\rangle\langle{\uparrow}|\hat{c}^\dag\hat{c}\right),
\end{equation}
i.e., the atom is equivalent to a beam splitter which reflects photons out of the cavity at the rate $4g^2/\Gamma$ if the atom is in the state $|{\uparrow}\rangle$ and does not affect the light field if the atom is in the state $|{\downarrow}\rangle$.
Assume the atom to be in the state $|j\rangle$, $j\in\{{\downarrow},{\uparrow}\}$. Since the input mirror of the cavity may be regarded as a beam splitter with high reflectivity, all components of the setup transform field operators linearly. For a coherent state input field, the cavity field and the output field are hence also coherent states. We may divide the time axis into small segments of width $\tau$ and approximate the integrals in \eqref{cont} by sums. The input state is then a direct product of single mode coherent states with amplitudes $\alpha_{\textrm{in}}(t_k)\sqrt{\tau}$ and annihilation operators $\hat{a}(t_k)\sqrt{\tau}$, where $t_k=k\tau$, $k=0,\pm1,\pm2,\ldots$. In the following, we choose $\tau$ to be equal to the round trip time of light in the cavity, which requires $\alpha_{\textrm{in}}(t)$ to vary slowly on that time scale. Denoting the coherent state amplitude of the cavity field at time $t$ by $\gamma_j(t)$, we use the beam splitter transformation
\begin{eqnarray}\label{bst}
\bigg[\textrm{out}\bigg]=\bigg[\begin{array}{cc}t_c&-r_c\\r_c&t_c\end{array}\bigg]
\bigg[\textrm{in}\bigg]
\end{eqnarray}
for the input mirror of the cavity to derive
\begin{eqnarray}
\gamma_j(t)&=&t_c\sqrt{\tau}\alpha_{\textrm{in}}(t)+r_ct_j\gamma_j(t-\tau),\label{gamma}\\
\sqrt{\tau}\alpha^{(j)}_{\textrm{out}}(t)&=&r_c\sqrt{\tau}
\alpha_{\textrm{in}}(t)-t_ct_j\gamma_j(t-\tau)\label{out},
\end{eqnarray}
where $r_c^2=1-t_c^2$ is the reflectivity of the input mirror of the cavity, $t_j^2$ is the transmissivity of the beam splitter, which models the loss due to spontaneous emission, i.e., $t_j^2=1-4g^2\tau\delta_{j{\uparrow}}/\Gamma$, and $\alpha^{(j)}_{\textrm{out}}(t)$ denotes the output field from the cavity. We have here included an additional phase shift of $\pi$ per round trip in the cavity to ensure that the input field is on resonance with the cavity, i.e., the second element of the input vector in \eqref{bst} is $-t_j\gamma_j(t-\tau)$. Taking the limit $\tau\rightarrow0$ and $t_c^2\rightarrow0$ for fixed cavity decay rate $\kappa:=t_c^2/\tau$, \eqref{gamma} and \eqref{out} reduce to
\begin{eqnarray}
\frac{d\gamma_j(t)}{dt}&=&-\left(1+2C\delta_{j{\uparrow}}\right)\frac{\kappa}{2}\gamma_j(t)
+\sqrt{\kappa}\alpha_{\textrm{in}}(t),\label{gammaeq}\\
\alpha^{(j)}_{\textrm{out}}(t)&=&\alpha_{\textrm{in}}(t)-\sqrt{\kappa}\gamma_j(t)\label{inout},
\end{eqnarray}
where $C:=2g^2/(\kappa\Gamma)$ is the single atom cooperativity parameter. According to the steady state solution of \eqref{gammaeq}
\begin{equation}\label{ss}
\gamma_j(t)=\frac{1}{1+2C\delta_{j{\uparrow}}}\frac{2\alpha_{\textrm{in}}(t)}{\sqrt{\kappa}},
\end{equation}
we need $C\gg1$ to efficiently expel the light field from the cavity for $j={\uparrow}$. We should also remember the criterion for the validity of the adiabatic elimination, which by use of \eqref{ss} takes the form
\begin{equation}
1\gg\frac{4C}{(1+2C)^2}\frac{2|\alpha_{\textrm{in}}(t)|^2}{\Gamma}
\approx\frac{\kappa|\alpha_{\textrm{in}}(t)|^2}{g^2},
\end{equation}
i.e., for $C\gg1$ the average flux of photons in the input beam should not significantly exceed the average number of photons emitted spontaneously per unit time from an atom, which has an average probability of one half to be in the excited state.
Solving \eqref{gammaeq},
\begin{equation}\label{gammat}
\gamma_j(t)=\sqrt{\kappa}\int_{-\infty}^te^{-(1+2C\delta_{j{\uparrow}})
\kappa(t-t')/2}\alpha_{\textrm{in}}(t')dt',
\end{equation}
we obtain the output field $\alpha^{(j)}_{\textrm{out}}(t)=\alpha f^{(j)}_{\textrm{out}}(t)$ with
\begin{equation}\label{fout}
f^{(j)}_{\textrm{out}}(t)=f_{\textrm{in}}(t)-
\kappa\int_{-\infty}^te^{-(1+2C\delta_{j{\uparrow}})\kappa(t-t')/2}f_{\textrm{in}}(t')dt'.
\end{equation}
Note that $f^{(j)}_{\textrm{out}}(t)$ is not necessarily normalized due to the possibility of spontaneous emission. The ideal situation is $f^{(\uparrow)}_{\textrm{out}}(t)=f_{\textrm{in}}(t)$ and $f^{(\downarrow)}_{\textrm{out}}(t)=-f_{\textrm{in}}(t)$, and we would thus like the norm of
\begin{equation}
E_j:=1+(-1)^{\delta_{j{\uparrow}}}\int_{-\infty}^{\infty}f^*_{\textrm{in}}(t)
f^{(j)}_{\textrm{out}}(t)dt
\end{equation}
to be as small as possible.
For $j={\uparrow}$ and $C\gg1$, the exponential function in \eqref{fout} is practically zero unless $\kappa(t-t')\ll1$, and as long as $f_{\textrm{in}}(t)$ changes slowly on the time scale $(C\kappa)^{-1}$, we may take $f_{\textrm{in}}(t)$ outside the integral to obtain $f^{(\uparrow)}_{\textrm{out}}(t)=(1-2/(1+2C))f_{\textrm{in}}(t)$ and $E_{\uparrow}=2/(1+2C)\approx C^{-1}$. As this result is independent of $f_{\textrm{in}}(t)$, a natural criterion for the optimal choice of input mode function is to minimize
\begin{equation}
|E_{\downarrow}|=\left|2-\kappa\int_{-\infty}^{\infty}\int_{-\infty}^t
e^{-\kappa(t-t')/2}f^*_{\textrm{in}}(t)f_{\textrm{in}}(t')dt'dt\right|
\end{equation}
under the constraint $\int_{-\infty}^{\infty}f^*_{\textrm{in}}(t)f_{\textrm{in}}(t)dt=1$. We also restrict $f_{\textrm{in}}(t)$ to be zero everywhere outside the time interval $[-T/2,T/2]$. Since we would like the double integral to be as close to 2 as possible, we should choose $f^*_{\textrm{in}}(t)=f_{\textrm{in}}(t)$. A variational calculation then provides the optimal solution
\begin{equation}\label{mode}
f_{\textrm{in}}(t)=\left\{\begin{array}{cl}
A\cos(\omega_0t)&\textrm{for }t\in[-T/2,T/2]\\
0&\textrm{otherwise}
\end{array}
\right.,
\end{equation}
where
\begin{equation}
A=\left(\frac{2}{T+\sin(\omega_0T)/\omega_0}\right)^{1/2},
\end{equation}
\begin{equation}
\frac{2\omega_0}{\kappa}\tan\left(\frac{\omega_0T}{2}\right)=1,
\end{equation}
and $\omega_0T\in[0,\pi[$.
\begin{figure}
\includegraphics[width=\columnwidth]{figure9}
\caption{\label{optmode}Deviation $E_j=1+(-1)^{\delta_{j\uparrow}}\int f^*_{\textrm{in}}(t)f^{(j)}_{\textrm{out}}(t)dt$ of the overlap between the output mode function and the input mode function from the ideal value when the atom is in the state $|j\rangle$. Note that $f^{(j)}_{\textrm{out}}(t)$ is defined such that the norm is less than unity if a loss occurs during the interaction. The single atom cooperativity parameter is assumed to be $C=10^3$. Solid lines are for the optimal input mode function given in \eqref{mode}, and dashed lines are for an input mode function, which is constant in the interval $t\in[-T/2,T/2]$ and zero otherwise. The inset illustrates these functions for $\kappa T=300$. For $j=\uparrow$, the shape of the output mode function is almost the same as the shape of the input mode function as long as $\kappa T\gg C^{-1}$, but the norm is slightly decreased due to spontaneous emission from the atom such that $E_{\uparrow}\approx C^{-1}$. This result is independent of the actual shape of the input mode function, and the solid and dashed lines for $E_{\uparrow}$ are hence indistinguishable in the figure. The results for $E_{\downarrow}$ are independent of $C$, because the light field does not interact with the atom in this case. Nonzero values of $E_{\downarrow}$ only occur due to distortion of the shape of the mode function. For $\kappa T\gg1$, $E_{\downarrow}\approx8\pi^2/(\kappa T)^2$ for the optimal input mode function and $E_{\downarrow}\approx4/(\kappa T)$ for the constant mode function.}
\end{figure}
This solution gives
\begin{equation}
E_{\downarrow}=\frac{2x^2}{1+x^2}=2\cos^2\left(\frac{\omega_0T}{2}\right),\quad x:=\frac{2\omega_0}{\kappa}.
\end{equation}
For fixed $\kappa$, $\omega_0$ is a decreasing function of $T$, and for $\kappa T\gg2\pi$, $\omega_0\approx\pi/T$ and $E_{\downarrow}\approx8\pi^2/(\kappa T)^2$. Distortion of the input mode function can thus be avoided by choosing a sufficiently long input pulse. This may be understood by considering the Fourier transform of the input mode function. For a very short pulse the frequency distribution is very broad and only a small part of the field is actually on resonance with the cavity, i.e., most of the field is reflected without entering into the cavity. For a very long pulse, on the other hand, the frequency distribution is very narrow and centered at the resonance frequency of the cavity. $E_j$ is plotted in Fig.~\ref{optmode} as a function of $T$ both for the optimal input mode function and an input mode function, which is constant in the interval $[-T/2,T/2]$. The optimal input mode function is seen to provide a significant improvement. In fact, for the constant input mode function $E_{\downarrow}=4[1-\exp(-\kappa T/2)]/(\kappa T)$, and hence $E_{\downarrow}$ scales only as $(\kappa T)^{-1}$ for large $\kappa T$. Finally, we note that $\tau=t_c^2/\kappa<\kappa^{-1}$, and $\kappa T\gg1$ thus also ensures $\tau/T\ll1$, which justifies the approximation in Eqs.~\ref{gammaeq} and \ref{inout}.
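The size of $E_\downarrow$ for a given pulse length is also easy to evaluate numerically. The short Python sketch below (purely illustrative; SciPy's root finder and the choice $\kappa=1$ are our own) solves the transcendental equation for $\omega_0$ and compares the optimal mode function with the constant one:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def E_down_optimal(kappa_T):
    # Solve 2*omega_0/kappa * tan(omega_0*T/2) = 1 with kappa = 1 and omega_0*T in [0, pi).
    T = kappa_T
    w0 = brentq(lambda w: 2 * w * np.tan(w * T / 2) - 1, 1e-12, np.pi / T - 1e-12)
    return 2 * np.cos(w0 * T / 2) ** 2

def E_down_constant(kappa_T):
    # Flat mode function on [-T/2, T/2].
    return 4 * (1 - np.exp(-kappa_T / 2)) / kappa_T

for kT in (30.0, 300.0):
    print(kT, E_down_optimal(kT), 8 * np.pi ** 2 / kT ** 2, E_down_constant(kT))
# The optimal mode function approaches 8*pi^2/(kappa*T)^2, the constant one only 4/(kappa*T).
\end{verbatim}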
\subsection{Single atom cooperativity parameter}\label{cooperativity}
We would like to determine how a single unit of the setup transforms the state of the field, when we use the full multi-mode description. For this purpose it is simpler to work in frequency space, and we thus use the definition of the Fourier transform $f(\omega)=\int f(t)\exp(-i\omega t)dt/\sqrt{2\pi}$ on Eqs.~\eqref{gammat} and \eqref{fout} to obtain
\begin{eqnarray}
\gamma_j(\omega)&=&\frac{\sqrt{\kappa}}{(1+2C\delta_{j{\uparrow}})\kappa/2+i\omega}
\alpha_{\textrm{in}}(\omega),\label{gammao}\\
\alpha^{(j)}_{\textrm{out}}(\omega)&=&K_j(\omega)\alpha_{\textrm{in}}(\omega),\label{tfield}
\end{eqnarray}
where
\begin{equation}\label{Kj}
K_j(\omega):=-\frac{(1-2C\delta_{j{\uparrow}})\kappa/2-i\omega}
{(1+2C\delta_{j{\uparrow}})\kappa/2+i\omega}.
\end{equation}
Assume now that the density operator of the two-beam input field to the unit may be written in the form
\begin{multline}\label{rhoin}
\rho_{\textrm{in}}=\sum_n\sum_mc_{nm}|\{\alpha_n(\omega)\}\rangle\langle\{\alpha_m(\omega)\}|\\
\otimes|\{\beta_n(\omega)\}\rangle\langle\{\beta_m(\omega)\}|,
\end{multline}
where $n$ and $m$ are summed over the same set of numbers and $|\{\alpha_n(\omega)\}\rangle$ and $|\{\beta_n(\omega)\}\rangle$ are continuous coherent states in frequency space, i.e.,
\begin{multline}\label{ccs}
|\{\alpha_n(\omega)\}\rangle=\exp\bigg[\int\alpha_n(\omega)\hat{a}^\dag(\omega)d\omega\\
-\int\alpha_n^*(\omega)\hat{a}(\omega)d\omega\bigg]|0\rangle
\end{multline}
and similarly for $|\{\beta_n(\omega)\}\rangle$. Note that \eqref{ccs} and \eqref{cont} are consistent when $\alpha_n(\omega)$ and $\alpha_n(t)$ as well as $\hat{a}(\omega)$ and $\hat{a}(t)$ are related through a Fourier transform \cite{blow}. After the first 50:50 beam splitter, the input state is transformed into
\begin{multline}
\rho'=\sum_n\sum_mc_{nm}
\left|\left\{(\alpha_n(\omega)+\beta_n(\omega))/\sqrt{2}\right\}\right\rangle\\
\left\langle\left\{(\alpha_m(\omega)+\beta_m(\omega))/\sqrt{2}\right\}\right|\\
\otimes\left|\left\{(\beta_n(\omega)-\alpha_n(\omega))/\sqrt{2}\right\}\right\rangle\\
\left\langle\left\{(\beta_m(\omega)-\alpha_m(\omega))/\sqrt{2}\right\}\right|,
\end{multline}
and this is the input state to the two cavities. The initial state of the two atoms is
\begin{equation}
\rho_{\textrm{at}}=\frac{1}{4}\sum_{i\in\{\downarrow,\uparrow\}}
\sum_{j\in\{\downarrow,\uparrow\}}\sum_{p\in\{\downarrow,\uparrow\}}
\sum_{q\in\{\downarrow,\uparrow\}}
|i\rangle\langle p|\otimes|j\rangle\langle q|.
\end{equation}
We thus need to know how one of the cavities transforms a term like $|\alpha_{\textrm{in},n}(\omega)\rangle\langle\alpha_{\textrm{in},m}(\omega)|\otimes|i\rangle\langle p|$.
We saw in the last subsection that a cavity is equivalent to an infinite number of beam splitter operations applied to the cavity mode and the input field modes. To take the possibility of spontaneous emission into account, we also apply a beam splitter operation to the cavity mode and a vacuum mode in each time step and subsequently trace out the vacuum mode. As the beam splitters are unitary operators acting from the left and the right on the density operator, the field amplitudes are transformed according to \eqref{tfield} as before, but when the ket and the bra are different, the trace operations lead to a scalar factor. Since the reflectivity of the beam splitter modeling the loss is $2C\kappa\tau\delta_{j\uparrow}$ for an atom in the state $|j\rangle$, this factor is
\begin{multline}
d_{ip}[\alpha_{\textrm{in},n}(\omega),\alpha_{\textrm{in},m}(\omega)]\\
=\prod_{k=-\infty}^\infty\langle\sqrt{2C\kappa\tau}\gamma_{m,p}(k\tau)\delta_{p\uparrow}|
\sqrt{2C\kappa\tau}\gamma_{n,i}(k\tau)\delta_{i\uparrow}\rangle\\
=\exp\bigg\{-\int_{-\infty}^{\infty}\frac{C\kappa^2}{(1+2C)^2(\kappa/2)^2+\omega^2}
\big[|\alpha_{\textrm{in},n}(\omega)|^2\delta_{i\uparrow}\\
+|\alpha_{\textrm{in},m}(\omega)|^2\delta_{p\uparrow}
-2\alpha_{\textrm{in},n}(\omega)\alpha^*_{\textrm{in},m}
(\omega)\delta_{i\uparrow}\delta_{p\uparrow}\big]d\omega\bigg\},
\end{multline}
where $\gamma_{n,i}(t)$ is the amplitude of the cavity field corresponding to the input field $\alpha_{\textrm{in},n}(t)$ and the atomic state $|i\rangle$ as given in Eq.~\eqref{gammat}. Altogether, $|\alpha_{\textrm{in},n}(\omega)\rangle\langle\alpha_{\textrm{in},m}(\omega)|
\otimes|i\rangle\langle p|$ is thus transformed into
$d_{ip}[\alpha_{\textrm{in},n}(\omega),\alpha_{\textrm{in},m}(\omega)]
|K_i(\omega)\alpha_{\textrm{in},n}(\omega)\rangle
\langle K_p(\omega)\alpha_{\textrm{in},m}(\omega)|
\otimes|i\rangle\langle p|$. Projecting both atoms onto $(|{\uparrow}\rangle+|{\downarrow}\rangle)/\sqrt{2}$ and taking the final 50:50 beam splitter into account, we obtain the output state after one unit
\begin{widetext}
\begin{multline}\label{rhoout}
\rho_{\textrm{out}}=\sum_n\sum_m\sum_{i\in\{\downarrow,\uparrow\}}\sum_{j\in\{\downarrow,\uparrow\}}
\sum_{p\in\{\downarrow,\uparrow\}}\sum_{q\in\{\downarrow,\uparrow\}}\frac{1}{16}c_{nm}
d_{ip}[(\alpha_n(\omega)+\beta_n(\omega))/\sqrt{2},(\alpha_m(\omega)+\beta_m(\omega))/\sqrt{2}]\\
\times d_{jq}[(\beta_n(\omega)-\alpha_n(\omega))/\sqrt{2},(\beta_m(\omega)-\alpha_m(\omega))/\sqrt{2}]\\
\times\left|\left\{\frac{1}{2}(K_i+K_j)\alpha_n(\omega)
+\frac{1}{2}(K_i-K_j)\beta_n(\omega)\right\}\right\rangle
\left\langle\left\{\frac{1}{2}(K_p+K_q)\alpha_m(\omega)
+\frac{1}{2}(K_p-K_q)\beta_m(\omega)\right\}\right|\\
\otimes\left|\left\{\frac{1}{2}(K_i+K_j)\beta_n(\omega)
+\frac{1}{2}(K_i-K_j)\alpha_n(\omega)\right\}\right\rangle
\left\langle\left\{\frac{1}{2}(K_p+K_q)\beta_m(\omega)
+\frac{1}{2}(K_p-K_q)\alpha_m(\omega)\right\}\right|,
\end{multline}
\end{widetext}
where we have written $K_i=K_i(\omega)$ for brevity. A subsequent phase shifter is easily taken into account by multiplying the coherent state amplitudes by the appropriate phase factors. We note that the result has the same form as the input state \eqref{rhoin} if we collect $n$, $i$, and $j$ into one index and $m$, $p$, and $q$ into another index. We can thus apply the transformation repeatedly to obtain the output state after $N+1$ units of the setup.
\begin{figure}[b]
\hspace{0.15\textwidth}
\includegraphics[width=\columnwidth]{figure10}
\caption{\label{finiteC}(Color online) Fidelity as defined in \eqref{FN} as a function of the single atom cooperativity parameter $C$ for different values of the expectation value of the number of photons in one of the input modes and different numbers of units of the setup, assuming $\kappa T\gg1$ and $\rho_{\textrm{in}}=|\alpha\rangle\langle\alpha|\otimes|\alpha\rangle\langle\alpha|$. The dotted lines are the asymptotes for $C\rightarrow\infty$.}
\end{figure}
If we assume $\alpha_n(\omega)$ and $\beta_m(\omega)$ to have the same shape for all $n$ and $m$, the output field simplifies to a two-mode state for $\kappa T\gg1$ as expected. This may be seen as follows. When $T$ is much larger than $\kappa^{-1}$, the width of the distribution $\alpha_n(\omega)$ in frequency space is much smaller than $\kappa$. For the relevant frequencies we thus have $\omega\ll\kappa/2$, and in this case the right hand side of \eqref{Kj} reduces to $-(1-2C\delta_{j{\uparrow}})/(1+2C\delta_{j{\uparrow}})$, i.e., $K_j$ is independent of frequency, and the amplitudes of all the continuous coherent states of the output density operator are proportional to $\alpha_n(\omega)$.
In Fig.~\ref{finiteC}, we have used \eqref{rhoout} in the limit $\kappa T\gg1$ to compute the fidelity \eqref{FN} as a function of the single atom cooperativity parameter for $\rho_{\textrm{in}}=|\alpha\rangle\langle\alpha|\otimes|\alpha\rangle\langle\alpha|$. The ideal values for $C\rightarrow\infty$ are also shown, and the deviations are seen to be very small for $C$ larger than about $10^3$. Current experiments have demonstrated single atom cooperativity parameters on the order of $10^2$ \cite{strongcoupling1,strongcoupling2,strongcoupling3}, and high fidelities can also be achieved for this value. When $C$ is finite, there is a relative photon loss of $1-\int|f^{({\uparrow})}_\textrm{out}(t)|^2dt\approx2/C$ due to spontaneous emission each time the field interacts with a cavity containing an atom in the state $|{\uparrow}\rangle$. We note, however, that the setup is partially robust against such losses because the next unit of the setup removes all components of the state for which a single photon has been lost.
\section{Conclusion}\label{Conclusion}
The setup put forward and studied in this article acts in many respects like a photon number filter and has several attractive applications for quantum technologies. Based on the ability to distinguish even and odd photon numbers using the interaction of the light field with a high-finesse optical cavity, photonic two-mode input states can be projected onto photon-number correlated states. Naturally, this protocol is very well suited to detect losses and can in particular be adapted to purify photon-number entangled states in quantum communication. We studied deviations from ideal behavior such as the finite length of the input pulses and limited coupling to estimate for which parameters the idealized description is valid. The setup can be modified such that it is capable of performing a quantum-non-demolition measurement of photon numbers in the optical regime. The non-destructive photon counting device completes the versatile toolbox provided by the proposed scheme.
\begin{acknowledgments}
We acknowledge valuable discussions with Ignacio Cirac and Stefan Kuhr and support from the Danish Ministry of Science, Technology, and Innovation, the Elite Network of Bavaria (ENB) project QCCC, the EU projects SCALA, QAP, COMPAS, the DFG-Forschungsgruppe 635, and the DFG Excellence Cluster MAP.
\end{acknowledgments}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,715 |
Q: Use ng-model with a primitive value inside ng-transclude

In a legacy project, I want to create a new directive that uses transclude.
A trimmed down version of the directive code is:
app.directive('controlWrap', function() {
return {
restrict: 'E',
transclude: true,
scope: { label: "@" },
templateUrl: "control-wrap-template.html"
}
})
And the template is:
<div>
<label>{{label}}</label>
<div>
<ng-transclude></ng-transclude>
</div>
</div>
This directive is used like this
<control-wrap label="Just a example">
<input type="text" ng-model="input" />
</control-wrap>
Test: {{input}}
I know that the workaround is to use an object in the scope instead of a primitive value (ng-model inside ng-transclude). But that is not an option for me. It is ugly, poorly coded legacy code that relies on those attributes directly on the scope.
Is there something I can do in the directive to make that HTML work without changes?
A: You can manually transclude (instead of using ng-transclude) and apply whatever scope (which is, in your case, scope.$parent) you need to the transcluded content:
transclude: true,
scope: { label: "@" },
template: '<div>\
<label>{{label}}</label>\
<placeholder></placeholder>\
</div>',
link: function(scope, element, attrs, ctrls, transclude){
transclude(scope.$parent, function(clone){
element.find("placeholder").replaceWith(clone);
});
}
Demo
A: The cleanest solution is to do some refactoring and passing an object instead of a primitive value, but if for some reason you cannot do that, you're not out of the options.
However, I wouldn't recommend any of these options:
1) Bind input from the parent scope, which prevents creating a new value on the child scope upon write - but keep in mind that accessing the parent scope hurts the reusability of your directive.
Angular 1.2:
<input type="text" ng-model="$parent.input" />
Angular 1.3:
<input type="text" ng-model="$parent.$parent.input" />
(The difference is because the parent of the transcluded scope is the directive scope from 1.3)
2) Create some kind of wrapper object and pass that instead of the primitive value
$scope.inputWrapper = {};
Object.defineProperty($scope.inputWrapper, 'input', {
get: function() { return $scope.input },
set: function(newValue) { $scope.input = newValue; }
})
and pass this to the directive. But again, I would do some refactoring instead.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,618 |
\section{Introduction}
Fine-tuning is widely used as a procedure to employ the knowledge learned during pre-training of language models for specific tasks \cite{howard-ruder-2018-universal,peters-etal-2019-tune,merchant-etal-2020-happens,zhou-srikumar-2022-closer}. However, fine-tuning can be a computationally expensive process, given that it usually involves updating all the parameters in transformer-based models which are often massive in size. Parameter-efficient fine-tuning methods try to ameliorate this by reducing the number of updatable parameters during fine-tuning.
Adapters \cite{pmlr-v97-houlsby19a,pfeiffer-etal-2020-adapterhub,wang-etal-2021-k,ruckle-etal-2021-adapterdrop,hu2021lora} try to circumvent this issue by inserting light-weight modules in the transformer blocks, tuning of which usually results in comparable performance to the full fine-tuning (while the number of updatable parameters is significantly lower). Nevertheless, introducing new parameters to an already-large model can be considered a drawback. Another category of parameter-efficient fine-tuning methods is based on the \textit{Lottery Ticket Hypothesis} \cite{prasanna-etal-2020-bert}, where the goal is to find a small subset of parameters that can compete with the full fine-tuning setting. Various subsets of network parameters have been suggested as the \textit{winning ticket}, including the connections with high magnitudes \cite{NIPS2015_ae0eb3ee}, identity mappings \cite{lin-etal-2020-pruning}, and dominant dimensions \cite{guo-etal-2021-parameter}.
In this paper, we study the ability of different modules of a transformer block in knowledge transfer. Our experiments provide a more comprehensive analysis than the existing work, which usually suggests specific modules as the winning ticket, such as the bias terms \cite{ben-zaken-etal-2022-bitfit}. Through module-wise fine-tuning, we check if the winning ticket is a property that can be associated only with some specific modules in the transformer block. Our results suggest that \textit{all} individual modules possess this property to some extent. Among these, \layernorms\ prove to be highly reliable for knowledge transfer: fine-tuning only $37k$ \layernorm\ weights (out of $110M$ parameters in BERT-base) is often on par with full fine-tuning on various downstream tasks. Extending this analysis, we show that tuning even only one \layernorm\ can yield comparable performance and that the middle layers are the best in terms of transferability. We also investigate the reasons behind the effectiveness of \layernorm\ tuning. Our experiments suggest that this could be due to the relatively high-magnitude weights in these modules. In fact, we show that tuning just a tiny fraction of high-magnitude dimensions (usually referred to as \textit{outliers}) can lead to competitive performance on various tasks.
\section{Winning Modules}
According to the Lottery Ticket Hypothesis, there are small sub-networks whose performance is comparable to the over-parameterized model on different tasks \cite{frankle2018the}. Several studies have been carried out to identify sub-networks across the model that can provide the best transferability \cite{Gale2019mag,rigl,lee2021layeradaptive,guo-etal-2021-parameter,hu2021lora}. Nonetheless, finding the winning sub-network usually requires extra computation, which is costly in terms of time and memory. In this section, we take another look at the transformer block of BERT and focus on the ability of its different modules to transfer knowledge to various downstream tasks. More specifically, we aim to find the winning module among the different modules in the transformer-based architecture of the pre-trained BERT.
\input{./tables/glue}
\subsection{Experimental Setup}
\paragraph{Datasets.}
We fine-tune our models on the \glue\ benchmark \cite{wang-etal-2018-glue}. We leave out the {\wnli} (the Winograd Schema Challenge) task \cite{levesque2012winograd}, given that BERT's performance on this benchmark is not much better than a random classifier. Instead, we test the models on the Corpus of Linguistic Acceptability \cite[\cola]{warstadt-etal-2019-neural}, the Stanford Sentiment Treebank \cite[\sst]{socher-etal-2013-recursive}, the Microsoft Research Paraphrase Corpus \cite[\mrpc]{dolan-brockett-2005-automatically}, the Semantic Textual Similarity \cite[\sts]{cer-etal-2017-semeval}, the Quora Question Pairs \cite[\qqp]{wang-etal-2018-glue}, the Multi-Genre Natural Language Inference Corpus \cite[\mnli]{williams-etal-2018-broad}, the Stanford Question Answering Dataset \cite[\qnli]{rajpurkar-etal-2016-squad}, and the Recognizing Textual Entailment \cite[\rte]{dagan2005pascal}. All the reported results are obtained on the corresponding development sets.
\paragraph{Models.}
We opt for bert-base-uncased, implemented by the HuggingFace library in TensorFlow \cite{wolf-etal-2020-transformers,abadi2015tensorflow}. The maximum sequence length is set to 128. Except for the fully fine-tuned model (\fullfinetune), where we train the models for five epochs, the number of epochs is chosen based on the size of the tasks: 10 epochs for \sst, \qqp, \mnli, and \qnli\ and 20 epochs otherwise. We use the Adam optimizer with an epsilon set to 1e-6, a warmup ratio of 10\%, and a batch size of 16. The only hyperparameter tuning we do is on choosing the learning rate from \{1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2\} to draw a fair comparison with previous work. We report the average and standard deviation of the results of three models trained with different random seeds. All the models are trained on four NVIDIA Tesla V100S-32G GPUs.
\paragraph{Module Settings.}
To find out the potential of transformer modules in transfer learning, we pick similar modules across all layers and fine-tune them while keeping the rest of the network frozen. The aim of this setup is to broaden our insights on the distribution of knowledge across the model and the adaptability of different modules to target tasks.
In every transformer block, we check for the role played by their \textit{\multihead} attention, \textit{\feedforward} layer, and \textit{\layernorms} in knowledge transfer. Since every transformer block has two \layernorms\ (attention and feedforward), we also consider fine-tuning them separately (\textit{\layernormsA} and \textit{\layernormsF}). We also compare against the replicated results of \bitfit\ \cite{ben-zaken-etal-2022-bitfit}, in which consistent bias terms across the transformer blocks are employed for fine-tuning. To verify if consistency in selecting parameters matters, we also show the results of fine-tuning only a small randomly selected subset of all the parameters with the same size as of the \textit{\layernorms} (\textit{\rand}). In the experiments, the full fine-tuning (\textit{\fullfinetune}) and \textit{\freeze} modes are considered as the upper and lower bounds, respectively.
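In practice, selecting a module for fine-tuning amounts to freezing the gradients of all remaining weights. The sketch below illustrates this selection for the \layernorms\ (plus the classification head); it is written with the PyTorch classes of the HuggingFace library purely for brevity (our experiments use the TensorFlow implementation described above), and the number of labels, the optimizer, and the learning rate in the last line are placeholders rather than the exact training configuration.
\begin{verbatim}
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Keep only the two LayerNorms of every transformer block and the classifier trainable.
for name, param in model.named_parameters():
    is_block_layernorm = "encoder" in name and "LayerNorm" in name
    param.requires_grad = is_block_layernorm or name.startswith("classifier")

n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
n_total = sum(p.numel() for p in model.parameters())
print(f"trainable: {n_trainable} / {n_total}")   # roughly 0.03-0.04% of all parameters

# Training then proceeds as usual, updating only the unfrozen parameters.
optimizer = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-3)
\end{verbatim}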
\subsection{Results}
Table \ref{tab:main} shows our experimental results on eight tasks from the \glue\ benchmark.\footnote{As for \bitfit, we were unable to carry out full hyperparameter tuning on three tasks due to the large dataset size and computational constraints. Instead, we report the results as given in the original paper, which is around 5\% better than the best results we obtained for these settings.} For each setting, we also report the corresponding ratio of updatable parameters (compared to the full fine-tuning). As can be observed, individual modules of BERT can be considered as \textit{winning tickets} because they can achieve comparable performance to the \textit{\fullfinetune} setting, despite involving significantly smaller numbers of trainable parameters. In particular, \layernorms\ prove to have a high potential in transferability and adaptability to various downstream tasks with a very limited set of trainable parameters ($0.034\%$). The performance is mostly preserved even when only one of the two \layernorms\ is set to be trainable, reducing the number of effective parameters to $0.017\%$ of that in the full fine-tuning. Moreover, our results also reveal that selecting consistent weights (similar modules across layers) has a key role in fine-tuning quality, given that the random subset of a comparable number of parameters does not lead to the same performance levels.
\subsection{Token-level Classification}
In addition to the sentence-level tasks of the \glue\ benchmark, we also conduct experiments on two different token-level datasets to broaden our insights on the capacity of individual modules: \penn\ Part-of-speech tagging \cite{marcus-etal-1993-building} and \conl\ Named Entity Recognition \cite{tjong-kim-sang-de-meulder-2003-introduction}. For part-of-speech tagging, we use the subset of the Wall Street Journal (WSJ) portion of PTB which is freely available in the Natural Language Toolkit \cite[NLTK]{bird2009natural}. In this experiment, we adhere to the convention of using the cased version of BERT, given the case-sensitive nature of these token-level tasks.
Table \ref{tab:token} summarizes the results. Similarly to what is observed on the sentence-level tasks, LayerNorms can attain competitive performance on both token-level tasks, despite involving just a small fraction of all the model parameters. Moreover, in comparison with an equal number of randomly selected weights, they demonstrate remarkably better performance.
\input{./tables/token_level}
\input{./tables/layerwise}
\subsection{Single Norms Tuning}\label{sec:sing}
Previous studies have reported that different layers do not contribute equally to the ultimate performance in transfer learning \cite{zhou-srikumar-2021-directprobe,rogers-etal-2020-primer,kovaleva-etal-2019-revealing}. We are interested in studying the extent to which individual \layernorms\ in different transformer blocks are adaptable to downstream tasks. To this end, we perform a layer-wise analysis in which the only trainable parameters are the two \layernorms\ in each block and the final classifier. Therefore, the total number of fine-tuning parameters is less than 5K (3,072 and 1,538 for the \layernorms\ and classifier, respectively)\footnote{{For \sts, the number of classifier parameters is 769.}}, which is about $0.003\%$ of all the parameters. Due to our limited computational resources, we restrict our experiments to \cola, \mrpc, \sts, and \rte.
Table \ref{tab:layer} presents the results for the layer-wise analysis. According to the fine-tuning results, tuning only a single \layernorm\ may be sufficient to achieve performance comparable to fine-tuning all \layernorms. Furthermore, the middle-layer \layernorms\ exhibit the best results across all layers, which can be attributed to the high transferability of the middle layers in BERT, corroborating previous findings on the concentration of task-specific features in these subsets of the network \cite{liu-etal-2019-linguistic}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=157mm, height=63mm]{./figs/analysis-RTE,STS-B.png}
\caption{The empirical distribution of a fixed random subset of weights in different modules of BERT. For better visualization, we have discarded the outliers. The weights of \layernorms\ appear to have a bimodal distribution with significantly higher overall averages and standard deviations.}
\label{fig:analysis}
\end{figure*}
\section{Analysis}
In the previous section, we have shown that different modules of a transformer block can act as winning tickets, since they all have the potential to transfer knowledge to the selected downstream tasks. Among the different modules, \layernorms\ have proven to be the most reliable in fine-tuning. In this section, we search for the reasons behind the effectiveness of these modules. To this end, we focus on the magnitude of individual weights and how they change during full fine-tuning across all layers.
As a first step, in Figure \ref{fig:analysis}, we visualize the distribution of weights for different BERT modules on \rte\ and \sts. In general, the distribution of weights is similar across \feedforward\ and \multihead\ modules. Nevertheless, \layernorms\ tend to have a bimodal distribution, with one of the modes having significantly higher magnitudes\footnote{Notice that in the pre-training process, weights ($\bm\gamma$) and biases ($\bm\beta$) of \layernorms\ were initialized with $\bm1$s and $\bm0$s, respectively, bringing about a distribution consisting of two modes with different averages and standard deviations.}. The pattern is consistent across \layernormsA\ and \layernormsF. We hypothesize that these high-magnitude weights are the reason behind the effectiveness of \layernorms\ and, in what follows, check our hypothesis by restricting our experiments to only high-magnitude dimensions of \layernorms.
\subsection{Outlier Tuning}\label{sec:outlier}
\input{./tables/outliers}
Outliers are high-magnitude weights in \layernorms\ appearing early in the pre-training process \cite{kovaleva-etal-2021-bert}. Transformer-based models perform significantly worse on downstream tasks when their outliers are disabled after the fine-tuning process \cite{kovaleva-etal-2021-bert}.
In this experiment, we choose outliers as the set of $n$ weights whose values are farthest from the mean. Except for the outliers, all the parameters are frozen during fine-tuning. It should be considered that the specific dimensions where the outliers appear may not necessarily be the same across different layers.
Table \ref{tab:outlier} presents the performance of the fine-tuned BERT in two different settings and for four different values of $n$: 4, 16, 64, and 256. We also report the results for corresponding sets of $n$ randomly selected weights. As can be observed, outlier tuning leads to competitive performance on most target tasks, despite using less than $0.0056\%$ of all the model parameters. Interestingly, tuning in the extremely constrained setting of $n=4$ still outperforms the frozen model, sometimes by significant margins (e.g., on \sts). Setting $n$ to higher values gives the model more capacity, bringing about higher performance.
Overall, we can conclude that the high-magnitude weights in \layernorms\ play an important role in the effectiveness of these modules in parameter-efficient fine-tuning.
\section{Conclusions}
In this work, we study the efficiency of different modules in the transformer block of BERT to transfer knowledge from the pre-trained model to various downstream tasks. Our experimental results demonstrate that, contrary to what was suggested by previous work, every module can be a \textit{winning ticket}, achieving comparable performance to the full fine-tuning scenario. Among all modules, \layernorms\ prove to be the most reliable for transferability with a limited number of trainable weights, such that tuning them in only one layer can be sufficient for attaining performance on a par with that of the full fine-tuning. We find that the weights in these modules have notably high magnitudes compared to other modules, which could be the reason for their effectiveness. We examine this hypothesis through Outlier Tuning (tuning only the $n$ weights in each \layernorm\ whose values are farthest from the mean), limiting the number of tunable parameters to a significantly small fraction.
Our results pave the way for better parameter-efficient fine-tuning of large language models without the need for costly algorithms to determine the optimum sub-network or introduce additional parameters for knowledge transfer.
\section{Acknowledgment}
We thank the anonymous reviewers for the constructive comments and suggestions that helped improve the paper. Sara Rajaee is funded in part by the Netherlands Organization for Scientific Research (NWO) under project number VI.C.192.080.
\section{Limitations}
We were subject to computational resource constraints; as a consequence, we reported results only for bert-base and chose the four smallest tasks of the \glue\ benchmark for single norm tuning ({Section \ref{sec:sing}}) as well as outlier tuning (Section \ref{sec:outlier}). Generally, the more trainable parameters a model has, the more accurate its results will be. Since our outlier tuning technique fine-tunes just a tiny portion of the parameters, less than $0.006\%$ of the model weights, there is an upper bound on its learning capability.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 8,983 |
Fine Art Prints of "The Ride Begins" are available in 3 sizes: 10″ x 10″ ($50), 12″ x 12″ ($60) & 20″ x 20″ ($100). Please go to the contact page to order a print. NOTE: I will donate 25% of sales to the Alberta Cancer Foundation. | {
"redpajama_set_name": "RedPajamaC4"
} | 274 |
**iOS 12 Programming for Beginners**
_Third Edition_
An introductory guide to iOS app development with Swift 4.2 and Xcode 10
Craig Clayton
**BIRMINGHAM - MUMBAI**
# iOS 12 Programming for Beginners Third Edition
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
**Commissioning Editor:** Amarabha Banerjee
**Acquisition Editor:** Larissa Pinto
**Content Development Editor:** Flavian Vaz
**Technical Editor:** Akhil Nair
**Copy Editor:** Safis Editing
**Project Coordinator:** Kinjal Bari
**Proofreader:** Safis Editing
**Indexer:** Rekha Nair
**Graphics:** Alishon Mendonsa
**Production Coordinator:** Jyoti Chauhan
First published: October 2016
Second edition: October 2017
Third edition: December 2018
Production reference: 1201218
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-78934-866-8
www.packtpub.com
mapt.io
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
# Why subscribe?
* Spend less time learning and more time coding with practical eBooks and videos from over 4,000 industry professionals
* Improve your learning with Skill Plans built especially for you
* Get a free eBook or video every month
* Mapt is fully searchable
* Copy and paste, print, and bookmark content
# Packt.com
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at `customercare@packtpub.com` for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
# Contributors
# About the author
**Craig Clayton** is a self-taught, senior iOS engineer at Adept Mobile, specializing in building mobile experiences for NBA and NFL teams. He also volunteered as the organizer of the Suncoast iOS meetup group in the Tampa/St. Petersburg area for three years, preparing presentations and hands-on talks for this group and other groups in the community. He has also launched Cocoa Academy online, which specializes in bringing a diverse list of iOS courses, ranging from building apps to games for all programming levels, to the market.
# About the reviewer
**Kevin Munc** (@muncman) is a programming veteran with 20+ years' experience in a variety of areas, ranging from mainframes to mobile, from web to blockchain, and from enterprise to startup. Along the way, he's reviewed books on Objective-C, watchOS, RFP, UIAutomation, SpriteKit, JavaFX, and Vim.
I would like to thank all of the people who have helped me sharpen my reviewing skills over the years. I'm also grateful for the ongoing support of my family as I continue to seek to grow as a developer.
# Packt is searching for authors like you
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
# Table of Contents
1. Title Page
2. Copyright and Credits
1. iOS 12 Programming for Beginners Third Edition
3. Packt Upsell
1. Why subscribe?
2. Packt.com
4. Contributors
1. About the author
2. About the reviewer
3. Packt is searching for authors like you
5. Preface
1. Who this book is for
2. What this book covers
3. To get the most out of this book
1. Download the example code files
2. Download the color images
3. Conventions used
4. Get in touch
1. Reviews
6. Getting Familiar with Xcode
1. Getting started
2. The Xcode interface
1. Navigator panel
2. Standard editor
3. Utilities panel
4. Debug panel
5. Toolbar
6. Generic iOS device
7. iOS device
8. Connecting wirelessly
9. Window pane controls
3. Summary
7. Building a Foundation with Swift
1. Playgrounds – an interactive coding environment
2. Data types – where it all starts
1. String
2. Integer data type
3. Floating-point numbers
4. Booleans
5. Variables and constants – where data is held
1. Creating a variable with a string
2. Creating a variable with an integer (int)
3. Debug and print() – detecting your bugs
4. Adding floating-point numbers
5. Creating a Boolean
6. Why constants versus variables?
6. Comments – leaving yourself notes or reminders
3. Type safety and type inference
1. Concatenating strings
2. String interpolation
4. Operations with our integers
1. Increment and decrement
2. Comparison operators
5. Summary
8. Building on the Swift Foundation
1. Creating a Playground project
2. The if statements – having fun with logic statements
3. Optionals and optional bindings
1. Why optionals?
4. Functions
5. Summary
9. Digging Deeper
1. Creating a Playground project
2. Ranges
1. Closed range
2. Half-closed range
3. Control flow
1. The for...in loop
2. One-sided range
3. The while loop
4. The repeat...while loop
4. Summary
10. Digging into Collections
1. Arrays
1. Creating an empty array
2. Creating an array with initial values
3. Creating a mutable array
4. Adding items to an array
5. Checking the number of elements in an array
6. Checking for an empty array
7. Retrieving a value from an array
8. Iterating over an array
9. Removing items from an array
2. Dictionaries
1. Creating a dictionary
2. Adding and updating dictionary elements
3. Accessing an item in a dictionary
4. Iterating over dictionary values
5. Iterating over dictionary keys
6. Iterating over dictionary keys and values
7. Checking the number of items in a dictionary
8. Removing items from a dictionary
3. Sets
1. Creating an empty set
2. Creating a set with an array literal
3. Creating a mutable set
4. Adding items to a set
5. Checking whether a set contains an item
6. Iterating over a set
7. Intersecting two sets
8. Joining two sets
9. Removing items from a set
4. Summary
11. Starting the UI Setup
1. Useful terms
1. View Controllers
2. Table View Controllers
3. Collection View Controllers
4. Navigation Controllers
5. Tab Bar Controllers
6. Storyboards
7. Segues
8. Stack Views
9. Auto Layout
10. Model View Controller (MVC)
2. App tour
1. The Explore tab
2. Locations
3. Restaurant listings
4. Restaurant detail
5. The Map tab
3. Project setup
1. Creating a new project
4. Summary
12. Setting Up the Basic Structure
1. Starting from scratch
2. Storyboard setup
1. Adding our app assets
2. Storyboards
1. Creating our launch screen
2. Adding a Navigation Controller
3. Summary
13. Building Our App Structure in Storyboard
1. Adding a Collection View Controller
2. Hooking up our outlets
3. Creating a custom color
4. Setting up our cell
5. Section header
6. Updating the grid
7. Adding a modal
1. Updating Bar Button Items
2. Unwinding our Cancel button
3. Adding our first Table View
8. Summary
14. Finishing Up Our App Structure in Storyboard
1. Adding our Restaurant List View
2. Hooking up our outlets
3. Setting up our cell
4. Adding the Reviews View
5. Viewing reviews
6. Map Kit View
7. Summary
15. Designing Cells
1. Setting up the Explore header
1. Adding Auto Layout to the Explore header
2. Setting up the Explore cell
3. Adding Auto Layout to the Explore cell
2. Setting up the Restaurant cell
1. Adding Auto Layout to the Restaurant cell
2. The Locations cell
3. Summary
16. Getting Started with the Grid
1. Understanding the Model View Controller architecture
1. Getting familiar with the setup
2. Classes and structures
3. Controllers and classes
1. Understanding Collection Views
2. Creating our controller
3. Understanding Collection View controllers and Collection View cells
4. Getting data into Collection View
5. Understanding the data source
4. Summary
17. Getting Data into Our Grid
1. Model
1. ExploreData.plist
2. ExploreItem.swift
3. ExploreDataManager.swift
2. Getting data
3. Connecting to our cell
4. Hooking up our UI with IBOutlets
5. Restaurant listing
6. Summary
18. Getting Started with the List
1. Understanding Table Views
2. Creating our Location View Controller class
3. Connecting our Table View with our Location View Controller
4. Digging into our Table View code
5. Adding the data source and delegate
6. Adding locations to our Table View
7. Creating our first property list (plist)
8. Adding data to our property list
9. Creating our location data manager
10. Working with our data manager
11. Summary
19. Where Are We?
1. Setting up map annotations
1. What is an MKAnnotation?
2. Creating a restaurant annotation
3. Creating our Map Data Manager
4. Creating a base class
5. Refactoring code
1. Refactoring ExploreDataManager
2. Creating and adding annotations
1. Creating our Map View Controller
2. Creating custom annotations
3. Map to restaurant detail
1. Creating a storyboard reference
2. Map to restaurant detail
1. Passing data to restaurant detail
4. Organizing your code
1. Refactoring ExploreViewController
1. Using the MARK comment
2. Refactoring RestaurantViewController
3. Refactoring MapViewController
5. Summary
20. Working with an API
1. Creating an API Manager
1. What is an API?
2. Understanding a JSON file
3. Exploring the API Manager file
2. Location list
1. Selecting a location
2. Adding a Header view
3. Passing a selected location back to Explore View
4. Unwinding our Done button
5. Getting the last selected location
6. Passing location and cuisine to the restaurant list
7. Creating our restaurant cell class
1. Setting up restaurant list cell outlets
2. Creating a restaurant data manager
3. Handling no data
3. Summary
21. Displaying Data in Restaurant Detail
1. Displaying data in our static Table View
2. Summary
22. Foodie Reviews
1. Getting started with reviews
2. Displaying ratings in our custom UIControl
3. Adding our touch events
4. Setting up the unwind segues
5. Creating our ReviewFormController
6. Summary
23. Working with Photo Filters
1. Understanding filters
2. Creating our filter scroller
1. Creating a filter cell
2. Creating our PhotoFilterViewController
3. Getting permission
4. Summary
24. Understanding Core Data
1. What is Core Data?
2. Creating a data model
1. Entity autogeneration
2. The RestaurantPhoto Entity
3. Review item
4. Core Data manager
3. Summary
25. Saving Reviews
1. Saving reviews
2. Saving photos
3. Adding an overall rating
4. Summary
26. Universal
1. Explore
2. Location listing
3. Restaurant listing
1. Updating the restaurant detail page
4. Summary
27. iMessages
1. Understanding iMessages
1. Creating our extension
2. Updating our assets
2. Creating a framework
1. Connecting your message cell
3. Showing restaurants
1. iMessage crashing
2. Sending reservations
4. Summary
28. Notifications
1. Starting with the basics
1. Getting permission
2. Setting up notifications
3. Showing notifications
2. Customizing our notifications
1. Deliver quietly (iOS 12 feature)
2. Embedding images (iOS 10 feature)
3. Adding buttons
4. Grouped notifications (iOS 11)
5. Summary and hidden text (iOS 12)
6. Custom UI in notifications
7. Custom Notification Settings (iOS 12)
3. Summary
29. SiriKit
1. Using Siri Shortcuts
1. Siri voice shortcut
2. Understanding SiriKit
3. Supported intents
4. Enabling Siri's capabilities
5. Creating users
6. Updating our intent handler
7. Testing Siri
2. Summary
30. Beta and Store Submission
1. Creating a bundle identifier
2. Creating a certificate signing request
3. Creating production and development certificates
4. Creating a production provisioning profile
5. Creating a development provisioning profile
6. Creating an App Store listing
7. Creating an archive build
8. Internal and external testing
1. Internal testing
2. External testing
9. Summary
31. Other Books You May Enjoy
1. Leave a review - let other readers know what you think
# Preface
In this book, we will build a restaurant reservation app called _Let's Eat_. We will start the book off by exploring Xcode, our programming environment, which is also known as the Integrated Development Environment (IDE). Next, you will start learning the foundations of Swift, the programming language used in iOS apps. Once we are comfortable with the basics of Swift, we will dig deeper to build a more solid foundation.
Once we have a solid foundation of using Swift, we will start creating the visual aspects of our _Let's Eat_ app. During this process, we will work with storyboards and connect our app's structure together using segues. With our UI complete, we will go over the different ways in which we can display data. To display our data in a grid, we will use `Collection Views`, and to display our data in a list, we will use `Table Views`.
We will also look at how to add basic and custom annotations on to a map. Finally, it's time to get real data; we will look at what an Application Programming Interface (API) is and how we can get actual restaurant data into our `Collection Views`, `Table Views`, and `Map`.
We now have a complete app, but how about adding some bells and whistles? The first place where we can add a feature will be the restaurant detail page, where we can add restaurant reviews. Here, users will be able to take or choose a picture and apply a filter to their picture. They will also be able to give the restaurant a rating as well as a review. When they are done, we will save this data using `Core Data`.
Since we built our app to work on both iPhone and iPad, we should add the ability to make our app support iPad multitasking. Doing this will allow our app to be open alongside another app at the same time.
If we want to be able to send our reservation to a friend, we can create a custom UI for iMessages, which will send them the details for the reservation along with the app it came from. The one thing missing from our app is the ability to notify the user with a custom notification to alert them when they have an upcoming reservation.
Finally, let's create quick access by using SiriKit and Siri to request money and send reservations. Now that we have added some bells and whistles, let's get this app to our friends using TestFlight, and finally get it into the App Store.
# Who this book is for
This book is for you if you are completely new to Swift, iOS, or programming and want to make iOS applications. However, you'll also find this book useful if you're an experienced programmer looking to explore the latest iOS 12 features.
# What this book covers
Chapter 1, _Getting Familiar with Xcode_ , takes you through a tour of Xcode and talks about all the different panels that we will use throughout the book.
Chapter 2, _Building a Foundation with Swift_ , deals with the basics of Swift.
Chapter 3, _Building on the Swift Foundation_ , teaches us to build on our Swift foundation and learn some further basics of Swift.
Chapter 4, _Digging Deeper into Swift_ , talks about ranges and control flow.
Chapter 5, _Digging into Swift Collections_ , talks about the different types of collections.
Chapter 6, _Starting the UI Setup_ , is about building the _Let's Eat_ app. We will focus on getting our structure set up using storyboards.
Chapter 7, _Setting Up the Basic Structure_ , deals with working on our _Let's Eat_ app in a storyboard.
Chapter 8, _Building Our App Structure in Storyboard_ , is about adding more to our app structure in a storyboard.
Chapter 9, _Finishing Up Our App Structure in Storyboard_ , concludes the discussion of our app structure in a storyboard.
Chapter 10, _Designing Cells_ , is about designing the table and collection view cells in a storyboard.
Chapter 11, _Getting Started with the Grid_ , concerns working with `Collection Views` and how we can use them to display a grid of items.
Chapter 12, _Getting Data into Our Grid_ , concerns the incorporation of data into our `Collection Views`.
Chapter 13, _Getting Started with the List_ , teaches us to work with `Table Views` and takes an in-depth look at dynamic `Table Views`.
Chapter 14, _Where Are We?_ , deals with working with MapKit and learning how to add annotations to a map. We will also create custom annotations for our map.
Chapter 15, _Working with an API_ , involves learning how to use a JSON API within our app.
Chapter 16, _Displaying Data in Restaurant Detail_ , teaches you how to pass data using segues.
Chapter 17, _Foodie Reviews_ , talks about working with the phone's camera and library.
Chapter 18, _Working with Photo Filters_ , takes a look at how to apply filters to our photos.
Chapter 19, _Understanding Core Data_ , teaches us the basics of using core data.
Chapter 20, _Saving Reviews_ , wraps up reviews by saving them using core data.
Chapter 21, _Universal_ , deals with multitasking on the iPad, and how we can get an update to be supported on all devices.
Chapter 22, _iMessages_ , is about building a custom message app UI. We will also create a framework to share data between both apps.
Chapter 23, _Notifications_ , provides instruction on how to build basic notifications. Then, we will look at embedding images into our notifications as well as building a custom UI.
Chapter 24, _SiriKit_ , teaches the reader how to use Siri to create money requests.
Chapter 25, _Beta and Store Submission_ , concerns how to submit apps for testing as well as submitting apps to the App Store.
# To get the most out of this book
You need to have Xcode 10 installed on your system. To download Xcode 10, visit <https://developer.apple.com/xcode/>.
# Download the example code files
You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
1. Log in or register at www.packt.com.
2. Select the SUPPORT tab.
3. Click on Code Downloads & Errata.
4. Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
* WinRAR/7-Zip for Windows
* Zipeg/iZip/UnRarX for Mac
* 7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at <https://github.com/PacktPublishing/iOS-12-Programming-for-Beginners-Third-Edition>. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at **<https://github.com/PacktPublishing/>**. Check them out!
# Download the color images
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: <https://www.packtpub.com/sites/default/files/downloads/9781789348668_ColorImages.pdf>.
# Conventions used
There are a number of text conventions used throughout this book.
`CodeInText`: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Mount the downloaded `WebStorm-10*.dmg` disk image file as another disk in your system."
A block of code is set as follows:
states.insert("Ohio", at: 1)
states.insert(contentsOf: ["North Carolina", "South Carolina", "Nevada"], at: 3)
**Bold** : Indicates a new term, an important word, or words that you see on screen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Hit Next and then Create."
Warnings or important notes appear like this.
Tips and tricks appear like this.
# Get in touch
Feedback from our readers is always welcome.
**General feedback** : If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at `customercare@packtpub.com`.
**Errata** : Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.
**Piracy** : If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at `copyright@packt.com` with a link to the material.
**If you are interested in becoming an author** : If there is a topic that you have expertise in, and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
# Reviews
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
# Getting Familiar with Xcode
So, you want to get into iOS development? I was in your shoes on January 27, 2010, when Apple first announced the iPad. As soon as the conference was over, I knew that I wanted to learn how to create apps for the iPad. I signed up for the Apple Developer website and paid my $99 annual fee. But then, I realized that I did not know where to begin. A large variety of instructional books or videos did not exist, especially since the iPad hadn't released. I had previous programming experience; however, I had no idea how to write Objective-C (the original programming language for iOS). Therefore, I had to teach myself the basics. In this book, we will learn what it takes to become an iOS developer together.
If you are new to programming, take your time. You should understand the lessons that are provided in one chapter before moving on to the next. These essential skills will set you up with a solid foundation in iOS development. If you have previous programming experience, you should still review the earlier chapters, as they will be a refresher for you.
Throughout this book, we will work in Xcode, specifically Xcode 10 (and Swift 4, which we will tackle later in this book). Xcode is known as an **Integrated Development Environment** ( **IDE** ). Using Xcode gives us everything we will need to build apps for iOS, tvOS, macOS (formerly, OS X), and watchOS. In this chapter, we will explore Xcode to help you get more comfortable using it. If you are not on Xcode 10, make sure to update Xcode, as the code in this book will not run correctly otherwise.
Our focus in this book will be on creating a universal iOS app (an app for both the iPhone and iPad). The best way to do this is to create a project to familiarize yourself with where everything is and how to find what you need. So, let's first download and install Xcode.
# Getting started
To download Xcode, launch the App Store on your Mac and then type `Xcode` into the search bar in the upper-right corner:
For enhanced image quality, download the graphics bundle from <https://www.packtpub.com/sites/default/files/downloads/9781789348668_ColorImages.pdf>.
Next, click on INSTALL:
Once installed, launch Xcode, and you should see the following Welcome to Xcode screen:
If this is the first time you have launched Xcode, then you will see No Recent Projects in the right-hand panel. If you have previously created projects, then you will see those listed to the right. To get started, we are going to click on Create a new Xcode project in the left-hand panel of the welcome screen. You will see the new project screen, as follows:
Across the top of this screen, you can select one of the following items: iOS, watchOS, tvOS, macOS, and Cross-platform. Since we are creating apps for iOS, make sure that you have iOS selected. Then, choose Single View App and click on Next. Now, you will see an options screen for a new project:
This option screen has the following seven items to complete or choose:
1. Product Name: The product name is your app. We are going to set ours as `ExploringXcode`.
2. Team: The team connects to your Apple account. We are going to ignore this for now, because we do not need the Team for this chapter. If you already have a team set up, leave it as is. We will cover this in greater detail later in this book.
3. Organization Name: You can set the organization name to your company name, or just use your name.
4. Organization Identifier: You will set the organization identifier to be your domain name in reverse. For example, my website URL is `cocoa.academy`, and therefore, my identifier is `academy.cocoa`. Since URLs are unique, it will ensure that no one else will have your identifier. If you do not have a domain, then use your first and last names for now. However, you will eventually have to purchase a domain if you want to submit your app to the App Store.
5. Bundle Identifier: When you create a new project, Apple will combine your Product Name with your Organization Identifier to create your unique bundle identifier. So, even if 10,000 people create this project, each person will have a different bundle identifier.
6. Language: Set language to Swift.
7. Checkboxes: You can uncheck Use Core Data, Include Unit Tests, and Include UI Tests, as these are things that we will not use in this chapter.
Now, select Next, and Xcode will prompt us to save our project. I have a dedicated folder for all my projects, but you can save it on your desktop for easy access.
# The Xcode interface
Your project is now open, and it is time for us to get familiar with all of the panels. If this is your first time in Xcode, then it will probably be a bit overwhelming for you. Therefore, we will break it down into six parts:
* **NAVIGATOR PANEL**
* **STANDARD EDITOR**
* **UTILITIES PANEL**
* **DEBUG PANEL**
* **TOOLBAR**
* **WINDOW PANE CONTROLS**
# Navigator panel
The primary use of the navigator panel is to add new files and select existing files. The other icons are used from time to time; we will cover them as we need them.
# Standard editor
The standard editor is a single panel view that's used to edit files. The standard editor area is the primary area in which you will work. In this area, we can view storyboard files, see our Swift files, or view our project settings.
# Utilities panel
The utilities panel can be a bit confusing when you first use Xcode because this menu changes based on what you have selected in the standard editor. When we start building an app, we will dig deeper into this. For now, know that the utilities panel is made up of the inspector pane at the top and the library pane at the bottom. The inspector pane allows you to change the attributes or properties of things you put in your storyboard; the library pane enables you to insert objects, image assets, and code snippets into your app.
# Debug panel
The debug panel will allow us to see log messages from our app. You will become very familiar with this panel by the time you finish this book. The debug panel is one of the most excellent tools for getting feedback on what your app is doing or not doing.
# Toolbar
Next, we look at the toolbar, which is demonstrated as follows:
First, we have a play button, which is how we launch our app (or use _command_ \+ _R_ ). Next, you will see a stop button, which will not be active until you run your app. This stop button (or _command_ \+ _._ ) is used to stop your app from running. To the right of the stop button, you will see your target (your project name), along with the current simulator that has been selected. If you click on your project name, you will see a screen similar to this:
This drop-down menu, which we will call the Device and iOS Simulators drop-down menu, allows you to change your simulator type. For our project, select iPhone 7 Plus as your simulator and then click on the play icon (or use _command_ \+ _R_ ) to run your app.
Now, let's return to Xcode and select the stop button (or use _command_ \+ _._ ).
If you use the keyboard shortcut, make sure Xcode is in focus; otherwise, this shortcut will not work. I work on a 15-inch MacBook Pro Retina. Therefore, when I am working on an app, I will use the iPhone X or iPad Air 2 simulator in landscape mode. They both fit nicely on my screen without me having to resize either.
In addition to the Simulator, there is a Build Only Device as well as a Device section, both of which can be found at the top of the Device and Simulator drop-down menu that was shown earlier in this chapter. Note that, for our purposes, you will only need a simulator while we are building the app; however, you can add an iOS device if you would like (see under iOS Device).
# Generic iOS device
The Generic iOS Device, under the Build Only Device section of the Device and Simulator drop-down menu, is used for when you need to archive your app, which means that you are preparing your app for submission to Apple (either to the App Store or Test Flight). If you try to select Generic iOS Device now and run the app, you will get the following message:
Therefore, change Generic iOS Device to an actual simulator, and then you will be able to continue.
# iOS device
If you do not have a device connected to the computer, you will see No devices connected under the Device section of the Device and Simulator drop-down menu.
As noted earlier, when we start building the _Let's Eat_ app, you will have the option of using the simulator or connecting a device to Xcode. Using a device is slower; however, the simulator will not perform in the same way as a device will.
In the past, you needed to have a paid account to build your app on a device. Nowadays, you do not need a paid developer account to run the app on your device. Note that, if you decide to connect your device instead of using a simulator, you will need iOS 12 installed on it. Xcode 10 introduced the capability of connecting your phone wirelessly. We will look at the traditional way first and then we will go over how you can connect your phone wirelessly.
The following steps are only intended for those of you who do not want to pay for the Apple Developer Program at this time:
1. Connect your iOS device via USB.
2. In the drop-down menu, select your device (here, I have chosen Xclusive iPhone 6 Plus):
3. Wait for Xcode 10 to finish indexing and processing. The indexing and processing may take a bit of time. Once complete, the status will say Ready.
4. Run the project by hitting the Play button (or use _command_ \+ _R_ ).
You will get two errors that state the following:
* Signing for `ExploringXcode` requires a development team. Select a development team in the project editor.
    * Code signing requires a product type application in SDK iOS 12.0.
Ignore the specifics of these errors as they indicate that we need to create an account and add our device to that account.
5. Now, in the standard editor, you will see under Signing that you need to add an account:
6. Click on Add Account. If a Sign into Xcode with your Apple ID dialog box does not pop up, inside the Accounts screen on the bottom left, click on the + and select Apple ID:
7. Then, when you click on Create Apple ID, you will be asked to enter your birth date, name, email, and password, along with a number of security questions. Make sure that you verify your email before you answer the security questions, otherwise you will have to come back to this screen and add your Apple ID again.
8. Once you have finished all of the steps, you will see your account, as follows:
If you already have an account, then instead of seeing Add Account, you will see a drop-down menu with your account listed. If your device is not connected to this account, you might see a message asking if you would like to add your device to your account. Adding your device to an account is for testing purposes only.
# Connecting wirelessly
Now that you have your phone and account connected, you can quickly get your phone set up to run wirelessly. With your device already connected via USB, go to Window | Devices, and then Simulators. Click on the checkbox marked Connect via network:
Make sure that your phone and your computer are connected to the same Wi-Fi network.
When I first connected to my device, I saw a globe icon in Xcode that lets you know that your device is connected via the network, as demonstrated in the following screenshot:
After a short time, the globe went away. Even if you do not see the icon, you can disconnect the USB, and your device should still be connected to Xcode (as long as it is connected to the same Wi-Fi network).
You will not need to use a device for this book, but it is always good to run your app in an actual device before you submit it to the store.
Before we get to the right-hand side of the toolbar, select the `Main.storyboard` file in your navigator panel. This file is used to display all of your visual setup for your entire app. We will cover this in detail later in this book. After you select the file, you should see the following:
# Window pane controls
The following screenshot shows the window pane controls:
Moving on to the window pane controls, you will see two groups of icons. The first group is called the Editor Mode, and the second group is called the View. Let's look at the functions of the Editor Mode icons:
Editor Mode icons | Function
---|---
| This icon controls the standard editor (which is the center panel in the earlier screenshot of the `Main.storyboard` file in the navigator panel).
| This icon splits the Standard editor into two panels, where you will see the `ViewController.swift` file on the right. We will use this split screen throughout this book.
| This icon is the Version editor. We will not address the Version editor in this book since it is a more advanced feature.
At this point, you might be thinking that there are way too many panels open, and I would agree with you! The last panel is where the previous group of View icons in the toolbar comes in handy.
Let's look at these icons and their functions in the following table:
View Mode icons | Function
---|---
| This icon will toggle (hide or show) the navigator panel (or use _command_ \+ _0_ ).
| This icon will toggle (hide or show) the debug panel (or use _command_ \+ _shift_ \+ _Y_ ).
| This icon will toggle (hide or show) the utilities panel (or use _command_ \+ _alt_ \+ _0_ ).
# Summary
Congratulations! You have finished exploring the basics of Xcode. When we start building our app, we will cover the more essential parts of Xcode in depth. In the next few chapters, we will begin learning about the Swift programming language. We will use the latest Swift version, and this will be a basic intro into the programming language. If you are familiar with Swift, feel free to skip ahead to Chapter 6, _Starting the UI Setup_. Even if you are familiar with Swift, it is always good to go back through the basics as a refresher. So, let's get started!
# Building a Foundation with Swift
Now that we have had a short tour of Xcode, it is time to start learning about Swift. Remember, if you are new to programming, things will be very different for you, so take your time. The essential skills that you learn here will set you up with a solid foundation in iOS development. If you have previous programming experience, you should still review this chapter, as it can only enhance your programming skills and act as a refresher for you.
On June 2, 2014, Apple changed the game for iOS development, because this was the day they announced Swift to the world. With this announcement, everybody was put on an even playing field, because they had to learn a new programming language. Swift has brought a more modern approach to developing apps and has seen a massive influx of new developers of all ages wanting to build iOS apps. However, enough about history! Let's dig in and look at what you are going to learn about.
The following topics will be covered in this chapter:
* Playgrounds
* Data types
* Variables and constants
* Debug and `print()`
* Comments
# Playgrounds – an interactive coding environment
Before we jump into building the app that we will be creating in later chapters, called _Let's Eat_ , we need to understand the basics of Swift. An easy way to experiment with Swift is to use **Playgrounds**. It is an interactive coding environment that evaluates your code and displays the results. Using Playgrounds gives us the ability to work with Swift without needing to create a project. It is great for prototyping a particular part of your app. So, whether you are learning or experimenting, Playgrounds are an invaluable tool. To create a Playground, we need to launch Xcode and click on Get started with a playground:
The Playground template screen appears. Make sure that you select iOS, and then choose Blank and hit Next:
You will be asked to give your project a name and a location to save the file; name your new project `Playground iOS11-Programming-for-Beginners-Ch2`. You can save the file anywhere that you like. Now, with the project saved, we can explore Playgrounds in a little more detail.
When you launch the app, you will see five distinct areas:
Let's break down each area in Playgrounds:
* **Playground Editor** : This area is where you write all of your code.
* **Results Panel** : The Results panel is a feature that's only found in Playgrounds and provides immediate feedback.
* **Window Pane Controls** : The Window Pane Controls have two groups of icons:
As we discussed earlier, the first group is called the **Editor Mode** , and the second group is called the **View**. Refer to the detailed description of these icons in the previous chapter for information about what each one does.
* **Debug Toggle** : This button allows you to show or hide (toggle) the Debug panel.
* **Play/Stop** : This button is used to make Playgrounds execute code or to stop Playgrounds from running. Typically, Playgrounds runs on its own, but sometimes you need to manually toggle this feature on when Playgrounds does not execute your code for you.
Now that we have the Xcode panels set up, delete all of the code in this file. Your Playground should have three open panels: your Playground Editor, the Results Panel, and the Debug Panel. Let's start digging into some code.
# Data types – where it all starts
Swift offers a collection of built-in data types. Its data types are a string, an integer, floating-point numbers, and Booleans. These data types are found in most programming languages. Therefore, if you are not new to programming, you can skip this section and start at the _Variables and constants – where data is held_ section later.
Let's walk through each data type for those of you who are new to programming or would like a refresher.
# String
The first data type we will cover is a string. A series of characters represent a string. Strings are used to display text in an app. A string wrapped in quotes is known as a string literal. In programming, we cannot just add text to Playgrounds. So, to write a string, we must wrap our string inside quotes.
Let's add our name into Playgrounds, wrapped in quotes:
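For example, typing the author's name wrapped in quotes (substitute your own name) would look like this:

"Craig Clayton"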
In Playgrounds, your values appear inside of your Results Panel. So, we now know that in order to create a string, we need to use quotes.
# Integer data type
**Integers** ( **Ints** ) are whole numbers, such as 32 and −100. Integers are useful when you need to perform calculations (that is, addition, subtraction, multiplication, and so on). Let's add some numbers to Playgrounds. On the next line, under your name, type `32`, and then, on the following line, `−100`, as demonstrated in the following screenshot:
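In code, the playground now simply contains the string followed by the two integers:

"Craig Clayton"
32
-100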
Again, you see both `32` and `−100` in the Results Panel under your name.
# Floating-point numbers
Floating-point numbers are numbers with a fractional component, such as `4.993`, `0.5`, and `−234.99`. Let's add these values to Playgrounds as well:
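For example:

4.993
0.5
-234.99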
# Booleans
**Booleans** ( **bools** ) are referred to as logical because they can either be true or false. Use Booleans when you need to determine whether some logic is `true` or `false.` For example, did the user log in? This statement would either be true—yes they did, or false—no they did not. So, in Playgrounds, add `true` and `false`:
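In the playground, these are just the two Boolean literals:

true
false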
Now, we have covered all of the primary data types in Swift. Right now, we have no way to use these data types. Using the data is where variables and constants come into play.
# Variables and constants – where data is held
Variables and constants are like containers that hold data. When you want to declare a variable, you have to use the `var` keyword. Let's declare each of the data types we used earlier, but, this time, using variables and constants instead.
# Creating a variable with a string
First, delete what you have entered in Playgrounds already, and let's declare our first variable, named `fullName`, and set it to your name:
var fullName = "Craig Clayton"
The preceding code says that we have a variable named `fullName` and that it is holding a string value of `Craig Clayton`. Your Results Panel shows your actual name as its data:
# Creating a variable with an integer (int)
Now, let's create a variable with an int called `age` and set it to our age (or whatever you want your age to be) by adding the following:
var age = 40
Our program now knows that age is an `int`. You should see both your name and age in the Results Panel, just like you did previously:
# Debug and print() – detecting your bugs
We can use the Debug panel (at the bottom of the following screenshot) using `print()`. So, let's see how `print()` works by printing both our name and age. We can do this by adding the following:
print(fullName)
print(age)
It should appear on your screen as follows:
You should now see the output in both the **Results** and **Debug Panels**. Using `print()` allows us to see things in our Debug Panel and therefore verify expected results. Using `print()` is a handy debugging tool.
# Adding floating-point numbers
Now let's add floating-point numbers, using the `let` constant, in Playground:
let gradeAvg = 2.9
let version:Float = 1.1
This is demonstrated in the following screenshot:
You will notice that a couple of things are different. First, we are using the `let` keyword. Using `let` tells our program that this is a constant. Constants are variables that cannot change once they are set (as opposed to a non-constant variable, which can change after being set).
The other thing you might have noticed is that we explicitly set our `version` to `Float`. When dealing with a floating-point number, it can be a `Double` or a `Float`. Without getting too technical, a `Double` is much more precise than a `Float`. The best way to explain this is to use pi as an example. Pi is a number in which the digits go on forever. Well, we cannot use a number that goes on forever; however, a `Double` and `Float` handle how precise that number is. Let's look at the following diagram to see what I mean by precise:
So, in the preceding example, you can see that `Float` only displays `3.14`, whereas `Double` gives you a much more accurate number. In Swift, a `Double` is preferred. Therefore, if you do not explicitly set the floating-point number to a `Float`, Swift defaults to a `Double`. To set `version` to a `Float`, you must purposely set it that way.
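If you want to see this difference for yourself, here is a quick sketch you can try in a playground (the constant names are made up for illustration, and the exact digits shown depend on how the values are printed):

let piAsFloat: Float = 3.14159265358979
let piAsDouble: Double = 3.14159265358979
print(piAsFloat) // roughly 7 digits of precision, for example 3.1415927
print(piAsDouble) // many more digits, for example 3.14159265358979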
# Creating a Boolean
Now, it is time to create a `Bool`. Let's make it a constant. Enter the following code:
let isConstant:Bool = true
This is demonstrated in the following screenshot:
Since `isConstant` is declared as a constant, let's try to change it to `false` by adding this:
isConstant = false
On the same line as what you just entered, you will now see a red circle with a white dot in the middle. The red circle means that there is an error. The white circle inside of it indicates that Xcode can fix the error for you (most of the time):
You will also notice an error in your Debug Panel, which is just a more detailed version of the error. This error is telling us that we are trying to change the value of a constant when we cannot do so.
If you tap on the circle, you will see that Playgrounds suggests that you change the `let` to a `var` since you cannot assign a value to a constant:
Since we want it to remain a constant, let's delete the line `isConstant = false`. We have covered basic data types, but there are some other programming basics we should discuss as well.
# Why constants versus variables?
You might be asking yourself "Why would you ever want to make something constant?". Since constants cannot change after you run your app, they keep you from accidentally breaking a value that should not change. Another excellent use for constants is for base URLs, as you would not want these to change. When you are getting data, you do not want to change the value midway through your app accidentally. Apple recommends that you use `let` whenever possible. Typically, I use `let` until Xcode warns me that a `var` is preferable. If I change the value from `let` to `var`, then I am verifying that this is the behavior I want.
# Comments – leaving yourself notes or reminders
Comments are a great way to create notes or reminders to yourself. When you comment out code, it will not execute when your code runs. There are two types of comment: `//` and `/* */`. `//` is used for a one-line comment, and `/* */` is used for a multi-line block of text.
Let's see what both of these look like:
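A quick sketch (the comment text and the variable here are made up for illustration):

// This is a single-line comment: a quick note to yourself
var launchCount = 1 // comments can also sit at the end of a line of code

/* This is a block comment.
It can span multiple lines, which is useful
for longer notes or for temporarily disabling code. */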
# Type safety and type inference
Swift is a type-safe language, which means that you are encouraged to be clear about the value types with which your code works. Type inference means that Swift can work out a value's type from the value you assign, so you do not always have to write the type explicitly. Type checking means that, before your code runs, Swift quickly checks to ensure that you did not set anything to a different type. If you do, Xcode gives you an error. Why is this good? Let's say that you have an app in the store and that you set one of your variables as a `String` in one part of your code, but then accidentally set the same variable as an `Int` in another part of your code. This error may cause some bad behavior in your app that could cause it to crash. Finding these kinds of errors is like finding a needle in a haystack. Therefore, type checking helps you write safer code by helping you to avoid errors when working with different types.
We have now looked at data types and know that strings are for textual data, `Int` is for integer, `Bool` is for Boolean, and `Double` and `Float` are for floating-point numbers. Let's look a bit deeper into data types and see how we can do more than assign them to variables.
# Concatenating strings
String concatenation is the result of combining multiple string literals to form an expression. So, let's create one by first entering two string literals:
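As a sketch, the two string literals could look like this (the values are assumptions based on the name used earlier):

let firstName = "Craig"
let lastName = "Clayton"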
Combining these two gives us a string concatenation. We can combine strings by using the `+` operator; add the following:
let full = firstName + lastName
When you look in the Results Panel, you will notice that there is no space between our first and last names.
Also, if we just put the variables in quotes, they will revert to simple string literals and will no longer be variables.
# String interpolation
To correct this, we can put these variables inside quotes, using a backslash and parentheses around each of our variables; this is known as string interpolation. Let's update our `full` variable to the following, and you will see the space in the name in the Results Panel:
let full = "\(firstName) \(lastName)"
After adding the preceding line, our code should look something like this:
Now that we know about using variables inside quotes, we can do the same inside `print()`. Let's put the `firstName` and `lastName` variables inside `print()`, as follows:
print("\(firstName) \(lastName)")
The `print` statements are great for checking to see whether you are getting the value you want:
Bam! Now, we have a way to view multiple variables inside of `print()` and to create string interpolation by combining multiple strings. We can do much more with `Strings`, and we will cover that later in this book.
# Operations with our integers
In our Playground, we know that `age` is an `Int`. With `Int` values, we can also write arithmetic expressions using numbers, variables/constants, operators, and parentheses. Let's start with addition, subtraction, multiplication, and division. Add the following into Xcode:
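A sketch consistent with the results described next (only `total` appears in the later snippet; the exact expressions and the other constant names are assumptions based on the values mentioned):

let sum = age + 3 // 43, since age was set to 40 earlier
let result = 32 - sum // -11
let total = result * 5 // -55
let divide = total / 10 // -5, because integer division discards the fraction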
So, `sum` added two integers (`+` operator) together, totaling `43` in our preceding example. Then, we subtracted (`-` operator) `sum` from `32` to create a result (`−11`, in our example). After that, we took the result and multiplied (`*` operator) it by `5` (see `-55` in the Results Panel). All of this is pretty basic math; however, you may have noticed something different with our division equation (`/` operator). When you divide two integers, the result is also an integer, with the fractional part discarded. So, instead of `-55` divided by `10` equaling `-5.5`, our result was `-5`. To get the correct floating-point value of `-5.5`, we need to make our division value a `Double`. Therefore, let's add the following:
let divide2 = Double(total) / 10
After adding the preceding line of code, your code should look something like this:
All of these operations might look familiar to you, but there is one with which you might not be familiar, and that is the remainder operator. The remainder operator returns the remainder when a number is divided by another.
So, for example, `7` divided by `3` is roughly `2.33`. When we apply the remainder operator, we get back `1`, because `3` goes into `7` twice with `1` left over. Add the following to Playgrounds:
let mod = 7 % 3
Now, your code should look something like this:
# Increment and decrement
There are times when you need to increment (increase) or decrement (decrease) a value. There are two ways you can accomplish this. Add the following into Playgrounds:
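// The variable name and values are just examples
var score = 10
// Option #1
score = score + 1
// Option #2 (shorthand)
score += 1
// The same applies to decrementing: score = score - 1 or score -= 1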
Both of these options do the same thing, but option `#2` is the shorthand form. The preferred way is to use option `#2`, that is, `+=` (the addition assignment operator) and `-=` (the subtraction assignment operator), but the choice is yours.
# Comparison operators
We can also compare different numerical variables. These might be familiar to you from math class. Let's enter these in to Playgrounds:
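// The constant names are just examples; the values match the 1 and 2 mentioned below
let smaller = 1
let larger = 2
smaller < larger     // true
smaller > larger     // false
smaller <= larger    // true
smaller >= larger    // false
smaller == larger    // false
smaller != larger    // true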
As you can see in the Results panel, these comparison entries result in true or false based on the values that you enter (here, these are `1` and `2`).
# Summary
We have hit the basics and, from this point, if you are new to programming, it is a good idea to make sure that you understand each topic we cover. As the chapters progress, we will cover more and more, so take your time and make sure that you are comfortable with all of the topics covered in this chapter.
# Building on the Swift Foundation
In the last chapter, we went through the basics of understanding data types and how to create variables and constants. Now that we are comfortable with those topics, let's look at adding more building blocks. This chapter will build on what we learned in the previous chapter and get us a bit closer to understanding Swift better.
The following topics will be covered in this chapter:
* Type safety and type inference
* Operations with integers
* `if` statements
* Optionals and optional bindings
* Functions
Data types are good, but we will need to add some logic to our app. For example, we want to be able to control whether someone should see a login screen when they launch the app, or whether they should go right into the app. You will use logic a lot, so let's look at what an `if` statement is and how to use it.
# Creating a Playground project
As you learned earlier, launch Xcode and click on Get started with a playground:
The Playground template screen will appear. Make sure that you select iOS and then choose Blank and hit Next. You will be asked to give your project a name and a location to save the file; name your new Playground `iOS11-Programming-for-Beginners-Ch3`. You can save the file anywhere you like. Now, with the project saved, we can explore Playgrounds a bit.
# The if statements – having fun with logic statements
Let's add our first piece of logic using an `if` statement. An `if` statement simply checks whether a condition is `true`. Input the following into Xcode:
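let isPictureVisible = true
if isPictureVisible {
    print("Picture is visible")
}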
In the first line of the preceding code, we created a constant named `isPictureVisible`, and we set it to `true`. The next line starts our `if` statement and reads as follows: if `isPictureVisible` is `true`, then print `Picture is visible`. When we write `if` statements, we must use the curly braces to enclose our logic. It is a good practice to put the opening curly brace (`{`) on the same line as the `if` statement and the closing curly brace (`}`) on the line immediately after your logic.
When writing `if` statements using a `Bool`, you are always checking for `true`; however, if you want to check for `false`, you would do the following:
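// The message here is just an example
if isPictureVisible == false {
    print("Picture is not visible")
}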
Bools work great with `if` statements, but we also can use them with other data types. Let's try an `if` statement with an `Int` next. Write the following in Playgrounds:
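let drinkingAgeLimit = 19
if drinkingAgeLimit < 21 {
    print("Since we cannot offer you an adult beverage - would you like water or soda to drink?")
}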
In the preceding example, we first created another constant, this time an `Int` set to `19`. The next line says: if `drinkingAgeLimit` is less than `21`, then print `Since we cannot offer you an adult beverage - would you like water or soda to drink?` When you are using an `Int` within `if` statements, you will use the comparison operators (`<`, `>`, `<=`, `>=`, `==`, or `!=`). However, our last `if` statement feels incomplete because we are not doing anything for someone who is `21` or older. When you need to cover the opposite case, you use an `if...else` statement. You enter an `if...else` statement exactly as you do an `if` statement but, at the end, you add the keyword `else`.
You can add else to both of the `if` statements we have inputted so far, but, for now, add it to the end of our last `if` statement:
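if drinkingAgeLimit < 21 {
    print("Since we cannot offer you an adult beverage - would you like water or soda to drink?")
} else {
    print("What type of beverage would you like? We have adult beverages along with water or soda to drink")
}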
With else added to the end of our `if` statement, it turns into an `if...else` statement, which now reads: if the `drinkingAgeLimit` is less than `21`, then print `Since we cannot offer you an adult beverage - would you like water or soda to drink?` Otherwise (or `else`), print `What type of beverage would you like? We have adult beverages along with water or soda to drink`.
Now, our `if...else` statement can handle both conditions. Based on the value `19` for our `drinkingAgeLimit`, we can see the following in the Debug Panel: `Since we cannot offer you an adult beverage - would you like water or soda to drink?` If we change `drinkingAgeLimit` to `30`, our Debug Panel says `What type of beverage would you like? We have adult beverages along with water or soda to drink`. Go ahead and change `19` to `30` in Playgrounds:
Note that we got the behavior we wanted in the Debug Panel.
So far, we have covered using an `if` statement with a `bool` and an `Int`. Let's take a look at one more example using a string. Add the following bit of code to Playgrounds:
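// A sketch of this example; the restaurant name and messages reuse the ones discussed below
let restaurantName = "La Bamba"
if restaurantName == "La Bamba" {
    print("I've only been to La Bamba II!")
} else {
    print("Oh! I've never heard of that restaurant")
}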
In programming, we use equals (`=`) when setting data to variables. However, to compare two data types, we must use the double equals (`==`) sign. Therefore, when we write an `if` statement that compares two strings, we must use double equals (`==`) instead of a single equal (`=`) to determine equality.
An `if...else` statement only lets us handle two outcomes, depending on whether a single condition is `true` or `false`. If we wanted to check more conditions, we would not be able to use just an `if...else` statement. To accomplish this, we use what is called an `if...else if...else` statement. This statement gives us the ability to add any number of `else if` checks inside our `if...else` statement. We will not go overboard, so let's add just one. Update your last `if...else` statement to the following:
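if restaurantName == "La Bamba" {
    print("I've only been to La Bamba II!")
} else if restaurantName == "La Bamba II" {
    print("This restaurant is excellent!")
} else {
    print("Oh! I've never heard of that restaurant")
}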
In this example of an `if...else if...else` statement, we check whether `restaurantName` equals `La Bamba` and, if so, print `I've only been to La Bamba II!`; else, if `restaurantName` equals `La Bamba II`, we print `This restaurant is excellent!`; otherwise, we print `Oh! I've never heard of that restaurant`.
Using `if`, `if...else`, and `if...else if...else` statements helps you to create simple or complex logic for your app. Being able to use them with strings, Booleans, integers, and floating-point numbers gives you even more flexibility.
# Optionals and optional bindings
Use optionals when a value may or may not be set. Think of optionals as containers that can hold either a value or nil. Optionals give us the ability to check whether the value is nil or not. To create an optional, you provide it with a data type followed by a question mark (`?`). Before we do that, let's create a string that is not an optional. Add the following to Playgrounds:
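var notAnOptional: String = "This is not an optional"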
Now, let's add an optional to Playgrounds:
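var optional: String?    // currently nil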
In this example, we created a string optional, and, if you notice in the Results Panel, it is nil. But for our `notAnOptional`, we see `"This is not an optional"`. Now, on the next line, let's set `optional` equal to `"This is an optional"`:
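optional = "This is an optional"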
In our Results Panel, we see `"This is an optional"`. Now let's print both `notAnOptional` and `optional`, as you will see a difference between the two:
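print(notAnOptional)
print(optional)    // prints Optional("This is an optional")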
Note that our `notAnOptional` variable looks fine, but `optional` has `Optional(...)` wrapped around the `String` in the output. This wrapper means that, in order for us to access the value, we must unwrap the optional. One way we could do this is by force-unwrapping the optional using an exclamation mark (`!`). Let's update our `print` statement and change it to the following:
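print(optional!)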
We force-unwrapped our optional, but this method is not recommended. If you force-unwrap an optional and there is no value, your app will crash, so avoid this. Instead, we should use what is called **optional binding**, which is the safe way to access the value using an `if...let` statement. Remove the (`!`) from the `print` statement and instead write the following optional binding:
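if let value = optional {
    print(value)
}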
This `if...let` statement is saying that, if the optional is not nil, set its contents to `value`; if the optional is nil, ignore it and do nothing. Now, we do not have to worry about a missing value causing our app to crash.
# Why optionals?
So, now, you are probably asking yourself: "Why do you have to do this?". Trust me, when I first learned about optionals, I felt the same way. Using optionals helps to protect your code. For now, understand that when you see a data type followed by a question mark, this variable is an optional. As we work with optionals more and more throughout this book, it will become more evident to you.
# Functions
Now, it is time to get into an enjoyable part of programming and learn how to write functions. Functions are self-contained chunks of code that perform a specific task. In Swift 3, Apple made a change to how you should write functions. All of the functions we will write in this chapter will perform an action (think of verbs). Let's create a simple function called `greet()`:
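func greet() {
    print("Hello")
}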
This example is a basic function with a `print` statement in it. In programming, functions do not run until you call them. We call a function by writing its name followed by parentheses. So, let's call `greet()`, as follows:
greet()
Once we add this to the code, this is what we'll see on screen:
That's it! We just created our first function and called it. However, functions can do so much more. We can add what is called a parameter to a function. A parameter allows us to accept data types inside our parentheses. Doing this will enable us to build more reusable chunks of code. So, let's update our `greet()` function to accept a parameter called `name`:
func greet(name:String) {
print("Hello")
}
After you update the function, you will get the following error:
We received this error because we updated our function, but we did not update the line where we called it. Let's update where we call `greet()` to the following:
greet(name: "Joshua")
When you are done you should see the following:
The preceding code looks good; however, the Debug Panel shows us that we are not using the name in our greeting. Earlier, you learned how to create a string interpolation, so we need to include our `name` parameter inside our `print` statement, as follows:
print("Hello \(name)")
This is how your code will now look:
Functions can take multiple parameters, so let's create another `greet()` function that takes two parameters, a first name, and a last name:
func greet(first:String, last:String) {
print("Hello \(first) \(last)")
}
Now, your code and its output should look as shown in the following screenshot:
We also need to update where we called `greet()` in order to accept multiple parameters as well:
greet(first: "Craig", last: "Clayton")
Now, your code and output screen should look something like this:
We now have a function that takes multiple parameters.
Functions can perform an action, and they can also return some type of data once that action is done. Whenever we want our function to return something, we use a noun to describe what the function produces. We just created a `greet()` function that takes a first and last name and prints a greeting.
Now, let's create another function called `greeting()`, which will return a full name with a greeting. Let's see what this looks like:
func greeting(first:String, last:String) -> String {
return "Hello \(first) \(last)"
}
The following is how your code and output screen should appear:
This function is almost the same as the previous one, but with a couple of new things. First, `-> String` tells the function that we want to return a string. Inside our function, we return `"Hello \(first) \(last)"`. Since we said that we want to return something after our parentheses, we have to do just that. Now, let's look at how we can use this returned value. Enter the following code:
print(greeting(first:"Teena", last:"Harris"))
Now, this is how your code and output screen should look:
As you may have noticed, in the Debug Panel, we now have our full name with `Hello` added to the beginning. As you start to build on functions, you start to see the power.
These are just the basics of functions. We will cover more advanced functions throughout our _Let's Eat_ app. The main thing novice programmers forget is that functions should be small. Your function should do one thing and one thing only. If your function is too long, then you need to break it up into smaller chunks. Sometimes, longer functions are unavoidable, but you should always be mindful of keeping them as small as possible. Nice work!
# Let's work
We covered a lot in this chapter, and now it is time to put everything we covered into practice. Here are two challenges. If you are comfortable with them, then work on them on your own. Otherwise, go back into this chapter, where you can follow along with me and see how you can do each one:
* **Challenge 1** : Write a function that accepts and returns a custom greeting (other than `Hello`, which we addressed earlier in this chapter), along with your first and last name
* **Challenge 2** : Write a function that will take two numbers and add, subtract, multiply, or divide those two numbers
# Summary
In this chapter, we learned about operations with integers, as well as working our way through `if` statements. Finally, we discussed the power of optionals and learned about what functions are and how to use them.
In the next chapter, we will move on to some more Swift basics by digging into Swift collections.
# Digging Deeper
When I first started programming, I was in my mid-twenties. I started a lot older than most, but I will say that grasping the basics took me a bit longer than most, too. I remember when I bought my first programming book, and I read and re-read chapters over and over again until the concepts made sense to me. I found that a lot of books talked to me like I had majored in computer science. As you progress through this book, take your time and, if you need to go back, it is okay to do so. No one is going to care that it took you an extra day to understand a concept. It is more important that you fully understand that concept.
One tip I would give you is to not copy and paste the code. No matter where you find the code and no matter how long it takes, it benefits you to type it out. Doing this benefited me, as I eventually started to remember the code, and it became second nature to me.
In the last chapter, we went over the basics of Swift to get you warmed up. Now, we will dig deeper and learn about some more programming concepts. These concepts will build on what you have already learned. In this chapter, we will cover the following topics:
* Ranges
* Control flow
Let's begin by creating a new Playground project.
# Creating a Playground project
As you learned earlier, launch Xcode and click on Get started with a playground:
The Playground template screen will appear. Make sure that you select iOS and then choose Blank and hit Next. You will be asked to give your project a name and a location to save the file; name your new Playground `iOS11-Programming-for-Beginners-Ch4`. You can save the file anywhere you like. Now, with the project saved, we can explore Playgrounds a bit.
Next, delete everything inside your file and toggle on the Debug Panel using the toggle button (_command_ + _shift_ + _Y_). You should now have a blank screen with the Results Panel on the right, and the Debug Panel opened at the bottom.
We focused on the basics earlier, and now we will build upon those skills. Ranges are one such data type that we should learn about. These are very useful and can come in handy for a variety of reasons. Let's take a look at what ranges are and then start to understand the difference between a _closed range_, a _half-closed range_, and a _one-sided range_.
# Ranges
Ranges are generic data types that represent a sequence of numbers. Let's look at the following diagram to understand this:
# Closed range
Notice that, in the preceding diagram, we have numbers ranging from **10** to **20**. Rather than having to write each value, we can use ranges to represent all of these numbers in shorthand form. To do this, let's remove all of the numbers in the diagram except for **10** and **20** :
Now that we have removed those numbers, we need a way to tell Swift that we want to include all of the numbers that we just deleted. This is where the range operator (`...`) comes into play. Therefore, in Playgrounds, let's create a constant called `range` and set it equal to `10...20`:
let range = 10...20
You should see the following:
The range that we just entered says that we want the numbers between `10` and `20`, as well as both `10` and `20` themselves. This type of range is known as a closed range.
Inside Playground, in the result, you will see a Show Result icon:
If you hover over the result, you will also see quick look:
Select the Show Result icon so that you can see the result:
We also have what is called a half-closed range.
# Half-closed range
Let's make another constant for what is known as a half-closed range and set it equal to `10..<20`. Add the following to Playgrounds:
let halfClosedRange = 10..<20
Your code should now look like this:
A half-closed range is the same as a closed range, except that the end value is not included. In this example, this means that 10 through 19 are included, and 20 will be excluded.
At this point, you will notice that your Results Panel shows you `CountableClosedRange(10...20)` and `CountableRange(10..<20)`. We cannot see all of the numbers within the range. To see all of the numbers, we need to use a loop.
# Control flow
In programming, control flow is the order in which your code executes. When working with Swift, we can use a variety of control statements. Loops, in particular, are useful for when you want to repeat a task multiple times. Let's take a look at a few different types of loop.
# The for...in loop
One of the most common control statements is a `for...in` loop. It allows you to iterate over each element in a sequence. Let's see what a `for...in` loop looks like:
for <value> in <sequence> {
// Code here
}
We start the `for...in` loop with `for`, which is followed by `<value>`. `<value>` is a local constant (only the `for...in` loop can access it) and can be any name you like. Typically, you will want to give this value an expressive name. Next, we have `in`, which is followed by `<sequence>`. `<sequence>` is where we provide our sequence of numbers. Let's write the following into Playgrounds:
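for value in range {
    print(value)
}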
Notice that, in our Debug Panel, we can see all of the numbers we wanted in our range.
Let's do the same for our `halfClosedRange` variable by adding the following:
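for index in halfClosedRange {
    print(index)
}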
In our Debug Panel, we can see that we get the numbers 10 through 19. One thing to note is that these two `for...in` loops have different variables. In the first loop, we used the `value` variable, and in the second one, we used the `index` variable. You can make these variables whatever you choose them to be.
Also, in the two preceding examples, we used constants, but we could just use the ranges within the loop. As a next step, you need to add the following:
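// The loop variable name is just an example
for value in 0...3 {
    print(value)
}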
Now, you will see 0 to 3 print inside the Debug Panel.
What if you wanted the numbers to go in reverse order? Let's input the following `for...in` loop:
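// One way to do this is with the reversed() method
for value in (0...3).reversed() {
    print(value)    // 3, 2, 1, 0
}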
# One-sided range
A one-sided range operator allows you to use ranges that continue as far as possible in one direction. If you want the range to continue, then this is what you would use. Let's look at a one-sided range by adding the following:
let names = ["Craig", "Teena", "Jason", "Joshua", "Myah", "Tiffany", "Kim", "Veronica", "Mikki(KK)", "Milan", "Shelby", "Kaysey"]
for name in names[2...] {
print(name)
}
You will see that the names from index `2` onward print in the console:
As a next step, let's add the following:
for name in names[...6] {
print(name)
}
// Craig
// Teena
// Jason
// Joshua
// Myah
// Tiffany
// Kim
You should now see in the console how this update changes what is printed:
Another useful loop is the `while` loop. Let's take a look at how the `while` loop is used.
# The while loop
A `while` loop evaluates a Boolean expression at the start of each pass through the loop, and its set of statements runs until the condition becomes `false`. It is important to note that `while` loops can execute zero or more times. Here is the basic syntax of a `while` loop:
while <condition> {
// statement
}
Let's write a `while` loop in Playgrounds and see how it works. You have to add the following:
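var y = 0
while y < 50 {
    y += 5
    print("y: \(y)")
}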
So, this `while` loop starts with a variable that begins at zero. Before the `while` loop executes, it checks to see whether `y` is less than `50`, and, if so, it continues into the loop. Using the `+=` operator, which we covered earlier, we increment `y` by five each time. Our `while` loop will continue to do this until `y` is no longer less than `50`. Now, let's add the same `while` loop after the one we created and see what happens:
while y < 50 {
y += 5
print("y: \(y)")
}
When you are done, you should see the following:
You will notice that the second `while` loop never runs. This may not seem like it is essential until we look at our next type of loop.
# The repeat...while loop
The `repeat...while` loop is pretty similar to a `while` loop in that it continues to execute its set of statements until a condition becomes `false`. The main difference is that the `repeat...while` loop does not evaluate its Boolean condition until the end of the loop. Here is the basic syntax of a `repeat...while` loop:
repeat {
// statement
} while <condition>
Let's write a `repeat...while` loop in Playgrounds and see how it works. Add the following to Playgrounds:
var x = 0
repeat {
x += 5
print("x: \(x)")
} while x < 100
print("repeat completed x: \(x)")
You will notice that our `repeat...while` loop executes its statements first, incrementing `x` by `5`, and only afterwards, unlike a `while` loop, does it check whether `x` is less than `100`. This means that our `repeat...while` loop will continue until `x` reaches `100`. Here is where it gets interesting.
Let's add another `repeat...while` loop after the one we just created:
repeat {
x += 5
print("x: \(x)")
} while x < 100
Now, you can see that our second `repeat...while` loop incremented `x` to `105`, rather than stopping at `100` like the previous `repeat...while` loop. This happens because the Boolean expression does not get evaluated until after `x` has already been incremented by `5`. Knowing this behavior will help you to pick the correct loop for your situation.
# Summary
So far, we have looked at three loops: the `for...in` loop, the `while` loop, and the `repeat-while` loop. We will use the `for...in` loop again, but first, we need to talk about collections.
In the next chapter, we will focus on what collections are and how to use them when working with data. Make sure that you fully understand loops, because we will build on them in the next chapter and throughout this book. Therefore, review as much as you need so that you feel confident that you are proficient in the topics contained in this chapter.
# Digging into Collections
In the last couple of chapters, we reviewed the basics of Swift to get you warmed up. Before we start building our app, we need to look at one more programming concept—collections. In Swift, we have three primary collection types, which we will cover in this chapter:
* Arrays
* Dictionaries
* Sets
We will dig deeper into each one, but we will start with the most common collection type—arrays.
# Arrays
Arrays are ordered collections of values and can hold any number of items, for example, a list of strings, ints, and floating-point values. Arrays are stored in an ordered list, starting at `0`. Let's look at a diagram:
Starting from left to right in the preceding examples, we first have an array that holds a collection of strings. In the second example, we have another array that holds a collection of ints. In our third example, we have an array that holds a collection of mixed data values.
Now, let's review the following diagram, which is a mixed array:
Since this example contains mixed data types, such as strings, ints, and bools, we would have to declare this as an array of type `Any`. This means that we can have mixed data types inside our array. Until you are genuinely comfortable with arrays, I would not recommend using mixed data arrays. Try to stick to arrays with the same data type, because then you will know the exact data type of each element.
An array can hold any data type, but making the array strongly typed means that every element in it must be of the same type.
# Creating an empty array
Now let's create a few arrays in Playgrounds.
Sometimes, you may want to remove your prior entries from your Playground so that it is easier for you to see each new `print` statement. Do that now and input the following:
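// Two ways of creating an empty, typed array
var integers: [Int] = []
var strings = [String]()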
We just created our first two arrays. The reason for the two different syntaxes is that you can create an array in two different ways. The first example is the style you will see me use to create arrays throughout this book.
The data type within each set of brackets tells Swift what type of array we want to create. The first array (`integers`) we created has a data type of ints, and our second array (`strings`) has a data type of strings.
# Creating an array with initial values
Arrays can have initial values when they are created. Let's see how this would look by entering the following in Playgrounds:
let integers2 = [54, 29]
Your code will now look like this:
The array that we just entered uses type inference to declare the data type of the array using its initial values. In this case, Swift understands that it is an array of ints because the values we entered are integers. In addition, when we use a constant (`let`) on an array, we are telling Swift that the array is an immutable array, which means that the contents or size cannot change once it is instantiated.
# Creating a mutable array
It is a best practice to make all arrays (and, for that matter, collections) immutable, but there are some cases where you will need to create an array that is mutable. Let's have some fun and create a mutable array:
var states:[String] = []
As an aside, when creating a mutable array (or any variable), note that each variable name must be unique within the same scope.
One use of a mutable array is so that we can add to the array. Let's look at some ways in which we can do this.
# Adding items to an array
Let's add some data to our array. There are a few different convenience methods for adding data to an array.
A convenience method is, just as its name implies, a method that makes things convenient. A method is a function that lives inside a class. We will discuss classes later in this book. If this is starting to get overwhelming, that's understandable. You do not need to worry about every single detail at this time. We will cover this again, and things will slowly start to click at some point. Everyone learns differently, so there is no reason to worry if someone else understands something more quickly. Just go at your own pace.
The first convenience method we will look at is the `append()` method:
states.append(23)
Your code and the output window should now look like this:
Houston, we have a problem! You will see that we are getting an error. I did this on purpose, because getting errors is normal and common. Most people who start out coding are afraid of making a mistake or get scared about seeing errors. Trust me, I have been coding for years, and I make mistakes all the time. The error is telling us that we tried to add an int to an array that can only hold strings.
For every developer, whether they are a beginner or a veteran, there will come a time when they encounter an error that they cannot figure out. This error might get you frustrated to the point where you want to throw the computer across the room (I have been there a few times). The best advice my boss ever gave me was to take a walk for 10-15 minutes or do something to take your mind off it. Sometimes, this helps, and you will come up with an idea after you walk away. Even if you come back and it still takes you hours to figure out what is wrong, this is still part of the process. The best errors are the ones where you overlooked the simplest thing and had to spend hours trying to figure it out. You might have lost time, but you will have learned a great lesson. Lessons like these will stay with you forever, and you will never forget the error the next time you encounter it. So, if your coding results in an error, even in this book, embrace the challenge, because there is no greater feeling than figuring out a challenging error.
So, let's correct what we just did by revising the array to show the following:
states.append("Florida")
The following is how your code should now look:
In the Results Panel, you will see the contents of our corrected array.
Since an array can hold any number of items, let's add some more. Earlier, I mentioned that we have a variety of ways to add items to an array. The `append()` method allows us to add only one item at a time. To add multiple items, we can use the convenience method called `append(contentsOf:)`.
Add the following to Playgrounds:
states.append(contentsOf:["California", "New York"])
Now, your code should look like this:
We added two more items to our array, but, so far, every example we have utilized has added items at the end of our array. We have two convenience methods that allow us to add items at any index position that is available in the array.
The first method we can utilize to do this is called `insert(at:)`, which allows us to add a single item at a specific index position. We also have `insert(contentsOf:at:)`, which enables us to add multiple items into an array at a certain index position. Let's use them both and add `Ohio` after Florida and then `North Carolina`, `South Carolina`, and `Nevada` in front of New York:
states.insert("Ohio", at:1)
states.insert(contentsOf:["North Carolina", "South Carolina", "Nevada"],at:3)
Now, your code should look like this:
Earlier, we added multiple items to our array using `append(contentsOf:)`; there is also a shorthand version of this, which uses the `+=` operator. As a next step, let's add the following:
states += ["Texas", "Colorado"]
Now, your code should look like this:
This technique for adding items is much more concise and is my preferred way of inserting items into an array. Writing less code is not always better but, in this case, using the `+=` operator is my go-to method.
# Checking the number of elements in an array
If you are keeping track, we now have nine items in our array. Luckily, we do not have to keep track of how many items are in our array because we have a property called `count`. This property will keep track of the current item count and give us the total count of our array when we want to check. Let's look at the count for states:
states.count
Your code will now look like this:
# Checking for an empty array
The `count` property is not the only property we can use to find out how many items are in an array. The most commonly used property for an array is called `isEmpty`. This property checks whether the count is equal to `0` and returns `true` or `false` depending on whether there are any items within our array. Since you learned that `if...else` statements work well with Booleans, let's use this `isEmpty` property in an `if...else` statement.
Add the following to Playgrounds:
if states.isEmpty {
print("There are no items in the array")
}
else {
print("There are currently \(states.count) total items in our array")
}
Now, your code and output should look like this:
Now, our Debug Panel prints the following: `There are currently 9 total items in our array`.
One thing to remember in programming is that occasionally, there are multiple ways of writing a piece of code. It is not shocking to meet someone who will approach the same problem differently to you. To me, this is why programming is so amazing. Ultimately, all that matters is that it works as expected, especially when you are new to programming.
All programming languages have what is known as a style guide, which is a preferred way of writing code, and it is no different in Swift. A style guide is only a set of suggestions, and you will notice that different guides disagree on certain points. For now, you do not need to worry about different style guides, other than to know that they exist. In this book, we will follow a style that I have incorporated into my code.
Once you get comfortable, I recommend that you start to look at style guides and adapt them into your code. Knowing different styles helps you to know your options as well as to understand what others are doing with their code, even if you do not agree with how they write something. If you write your code with a defined structure or style throughout a project, it will make it easier for you to come back to your code if you, for instance, had to take a break for some reason, such as starting another project, or just taking some time off.
# Retrieving a value from an array
We discussed creating arrays as well as adding items to an array. Now, let's turn to retrieving a value from an array. Since arrays are stored by their index, we can use their index to retrieve values. By way of an example, let's retrieve California:
let state = states[3]
Now, your code should look like this:
The Results Panel shows North Carolina and not California. Remember, arrays start at `0`, not `1`. Therefore, for us to get California, we would actually need to use the index position of `2`. Let's make that update in Playgrounds as follows:
let state = states[2]
When you are done, you should see that we get `"California"`:
There we go!
We now have this great list of states, but someone told you that Arizona is also amazing. Instead of just adding Arizona to our list, you decide that you'd actually prefer to replace South Carolina with Arizona. We could simply look at our array and see in which index South Carolina is located. This would not be helpful, however, if it were to change, or if the state for which you were searching did not exist. So, the safe way to code this is to check the array for an item, and, if that item is found, then Swift will give us its current index position. The `index(of:)` method is what we will use to get the index position of South Carolina:
if let index = states.index(of:"South Carolina") {
print("Current index position is \(index)")
}
This is how our code and output should now appear:
Now that we have the position, we can replace South Carolina with Arizona, as follows:
if let index = states.index(of:"South Carolina") {
states[index] = "Arizona"
}
This is how our code should now look:
# Iterating over an array
It would be nice if we could see a list of states in our array. Earlier, you learned that `for...in` loops work with sequences. Since our array is a sequence, we can use `for...in` loops to loop through each element. When working on a project that has arrays, it is helpful to use a `print` statement inside a `for...in` loop. This lets us print every item in our array to the Debug Panel. So, let's use a `for...in` loop to look at the contents of our array:
for state in states {
print(state)
}
This is how our code and output should now look:
# Removing items from an array
Now, it is time to start deleting items from our array. Let's delete the first item from our list. We have a convenience method for removing items from an array, called `removeFirst()`. This method will remove the first item from our array, which, in our case, is Florida. Let's remove Florida and add this line above our `for...in` loop:
let updatedStates = states.removeFirst()
for state in states {
print(state)
}
This is how our code and output should now look:
Since we removed Florida, all of our states' index positions move one position closer to the top of the array. But what if we wanted to remove an item that was not first? To do this, we can use the `remove(at:)` convenience method. So, let's remove North Carolina and New York, which are sitting at positions 2 and 4, respectively. We will add the following above our `for...in` loop:
states.remove(at:2)
states.remove(at:4)
This is how our code and output should now look:
Now, both North Carolina and New York have been removed. You will see that California and Ohio did not move, but Colorado and Nevada moved up closer to the top of the list. To remove the remaining six items, we could call `remove(at:)` for each one, but instead we will use the simpler `removeAll()` method. So, let's use `removeAll()` in Playgrounds:
states.removeAll()
Now, your code should look something like this:
Now, we are back to where we started with an empty array. We have only scratched the surface of arrays. We will do more with arrays later in this book, but we first need to look at the next collection type: dictionaries.
# Dictionaries
A dictionary is an unordered collection of values, with each one accessed through a unique key. Let's look at the following diagram:
In our diagram, we have a dictionary of pizzas (**keys**) with their prices (**values**). To find something inside a dictionary, we must look it up by its key. Let's look at the dictionary syntax:
Dictionary<Key, Value>
Now, that we understand what a dictionary is and its syntax let's look at how we can use it by creating our first dictionary.
# Creating a dictionary
The traditional way of creating a dictionary is to first declare it as a dictionary and then, inside angle brackets, declare a type for the key and value. Let's create our first dictionary inside Playgrounds:
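// The constant name here is illustrative
let dictFirstExample = Dictionary<String, String>()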
The immutable dictionary we just created has a string data type for both its key and value. We have multiple ways to create a dictionary. Let's look at another by adding the following to Playgrounds:
let dictSecondExample = [String: Int]()
Your code should now look like this:
In this latest example, we created another immutable dictionary, with its key having a string data type and its value having an int data type.
If we wanted to use our pizza diagram, the key would have a string data type and the value would have a double data type. Let's create this dictionary in Playgrounds, but, this time, we will make it a mutable dictionary and give it an initial value:
var dictThirdExample = Dictionary<String, Double>(dictionaryLiteral: ("veggie", 14.99), ("meat", 16.99))
Your code should now look like this:
The preceding example is just one way of creating a dictionary for our pizza diagram example. Let's look at a much more common method using type inference:
var dictPizzas = ["veggie": 14.99]
Once you add this to your code, your code should look something like this:
The preceding is a much simpler way of creating a dictionary with an initial value. When initializing a dictionary, it can have any number of items. In our case, we are starting off with just one.
Now, let's look at how we can add more pizzas to our dictionary.
# Adding and updating dictionary elements
Let's add another item to our `dictPizzas` dictionary:
dictPizzas["meat"] = 17.99
Once you add this line of code, your code snippet should look like this:
This is the shorthand method for adding an item to a dictionary. After the dictionary variable, we add the key inside the brackets. Since the key for this dictionary is strings, we must put this key in quotes. Next, we assign a double to our value. Now, our dictionary has two items. This syntax is also used to update a dictionary item. Let's change the price of meat pizza to `16.99`:
dictPizzas["meat"] = 16.99
Have a look at the code. It should look like this:
Instead of using the shorthand syntax, you can use the `updateValue(_:forKey:)` method. This method does almost the same thing as the shorthand syntax: if the value does not exist, it creates the item; if it does exist, it updates the item. The only difference is that `updateValue(_:forKey:)` returns the old value after performing the update. The return value is an optional, because it's possible that no value exists in the dictionary for that key. Now, let's change the value from `16.99` to `15.99`:
if let oldValue = dictPizzas.updateValue(15.99, forKey: "meat") {
print("old value \(oldValue)")
}
Your code should now look like this:
Since we do not need the old value, we will just use the shorthand syntax to add a couple more pizzas:
dictPizzas["specialty"] = 18.99
dictPizzas["chicken"] = 16.99
Your code and output should now look like this:
Now that we have some data inside our dictionary, let's see how we can access that data.
# Accessing an item in a dictionary
When trying to access an item inside a dictionary, you will always receive an optional value. The reason for this is that you could potentially receive `nil` if the value does not exist. So, you should always use an `if...let` statement to safeguard your code:
if let numChickenPrice = dictPizzas["chicken"] {
print(numChickenPrice)
}
Your code should now look like this:
# Iterating over dictionary values
Just like an array, we can iterate through our dictionary. However, there are a few differences. Since a dictionary is unordered, the values are not guaranteed to come back in the same order each time you loop through. With dictionaries, you can loop through both the values and the keys.
Let's iterate over a dictionary's values using a `for...in` loop. Add the following to Playgrounds:
for value in dictPizzas.values {
print(value)
}
Your code should now look like this:
# Iterating over dictionary keys
To iterate over a dictionary's keys using a `for...in` loop, add the following to Playgrounds:
for key in dictPizzas.keys {
print(key)
}
Your code and output should now look like this:
# Iterating over dictionary keys and values
When you need to iterate over both dictionary keys and values using a `for...in` loop, you use the following:
for (key, value) in dictPizzas {
print("\(key): \(value)")
}
Your code and output should now look like this:
We have successfully looked at how to loop through a dictionary.
# Checking the number of items in a dictionary
In addition to keys and values, we have other useful properties. We can see the number of items in a dictionary using the `count` property. Let's try that by adding the following:
print("There are \(dictPizzas.count) total pizzas.")
Now, your code and output should look like this:
Along with a count, we can check whether a dictionary is empty. Let's use this in an `if...else` statement by adding the following:
if dictPizzas.isEmpty {
print("there are no pizzas")
}
else {
print("There are \(dictPizzas.count) total pizzas.")
}
Now, your code and output should look like this:
This kind of logic is helpful when you want to display something back to the user or hide a UI.
# Removing items from a dictionary
Next, let's learn how to remove an item from a dictionary. When deleting items from a dictionary, we have two primary ways of doing this. The first uses `removeValue(forKey:)`. Let's add this right above our `if...else` statement that checks whether the dictionary is empty:
dictPizzas.removeValue(forKey: "chicken")
Your code should now look like this:
Let's look at the second way of removing dictionary items, which is by using the shorthand syntax. Add the following to Playgrounds, following on from the `removeValue(forKey:)` variable:
dictPizzas["meat"] = nil
Your code should now look like this:
Notice that, just like with `updateValue(_:forKey:)`, `removeValue(forKey:)` will return you the value before it is removed. If you do not need the value, the shorthand syntax is the preferred method.
So far, we have covered arrays and dictionaries, and now we will review one last collection: sets.
# Sets
A set stores unique values of the same type in a collection without a defined order. Let's look at the following diagram:
In the preceding diagram, we have two circles, both of which represent a set. On the left, we have Craig's favorite movies, and on the right, we have Gabe's favorite movies.
# Creating an empty set
Before we create these sets, let's create an empty set and see what that looks like:
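// The constant name here is illustrative
let stringSet = Set<String>()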
In this first set, after the equals sign, we create the set and give it a string data type. Then, we use the parentheses to initialize the set.
# Creating a set with an array literal
Our first set was an empty string set, but we can create a set using an array literal. Let's add the following to Playgrounds:
let numberSet = Set<Int>([])
Your code should now look like this:
The preceding immutable set has an `Int` data type but, inside the parentheses, we passed an empty array literal using the brackets.
# Creating a mutable set
Now that we are familiar with the way sets are created, let's create a mutable set for Craig's favorite movies and one for Gabe's favorite movies. Add the following to Playgrounds:
var craigsFavMovieSet = Set<String>([])
var gabesFavMovieSet = Set<String>(["Fight Club", "Matrix", "Evil Dead", "Big Trouble in Little China", "Aliens", "Winter Solider", "The Illusionist", "Predator"])
Be aware that if you copy and paste this code into Xcode, you might see a number of errors due to line breaks and book formatting.
Now, your code should look like this:
We now have two mutable sets. The first set is created with an empty array literal, and the second set is created with some initial values. Let's add some more items to both sets.
# Adding items to a set
To add an item to a set, we have to use the `insert()` method. Let's use that to add another movie to Gabe's favorite movies:
gabesFavMovieSet.insert("Terminator")
gabesFavMovieSet
Your code should now look like this:
Now, Gabe has nine films, and Craig still has none. We added the `gabesFavMovieSet` variable again on its own line so that we can see the contents update in the Results Panel. To add multiple items to a set, we can use an array literal.
Let's add ten films to Craig's list, as follows:
craigsFavMovieSet = ["The Pianist", "The Shawshank Redemption", "Dark Knight", "Black Swan", "Ip Man", "The Illusionist", "The Silence of the Lambs", "Winter Solider", "Green Mile", "Se7en"]
Your code should now look like this:
Craig's set now has ten films. Next, let's look at how we can work with sets.
# Checking whether a set contains an item
The first thing we can do with sets is check whether a set includes an item. Let's see whether Craig's movie list includes the movie `Green Mile`:
if craigsFavMovieSet.contains("Green Mile") {
print("Green Mile found")
}
Your code should now look like this:
In the preceding example, we used the `contains()` method to discover whether an item is in the set.
# Iterating over a set
If we want a list of all the movies in Gabe's set, we can use a `for...in` loop. Let's see how that works:
for movie in gabesFavMovieSet {
print("Gabe's movie - \(movie)")
}
Your code should now look like this:
Now that we have seen a `for...in` loop for all three collections, that is, arrays, dictionaries, and sets, you can see that there are a lot of similarities. Remember, since sets are unordered, every time we run our `for...in` loop, we may get the list in a different order. The way around this is to use the `sorted()` method. Using `sorted()` ensures that every time we loop through our list, it will always be in the same order. Let's do that with Craig's movie list:
for movie in craigsFavMovieSet.sorted() {
print("Craig's movie - \(movie)")
}
Your code should now look like this:
Now that we have our set sorted, let's look at the real power of using sets.
# Intersecting two sets
In the following diagram, we can see that, where both sets intersect, we should get a list of any movies they have in common:
We can do the same using the `intersection()` method in our code. Let's intersect both movie lists and see what happens:
craigsFavMovieSet.intersection(gabesFavMovieSet)
Your code and output should now look like this:
We can see that the only two movies that these sets have in common are _The Illusionist_ and _Winter Solider_. In addition to seeing which movies the two sets have in common, we can also join the lists to get one consolidated list of the movies from both sets.
# Joining two sets
If you look at the following diagram, you will see the two sets joined together:
Using the `union()` method, we get a consolidated list of items with no duplicates. Let's try this in Playgrounds:
craigsFavMovieSet.union(gabesFavMovieSet)
Your code should now look like this:
We have a combined list of movies that includes all of the movies that the two sets did not have in common, along with the two movies that were in common but listed only once. As you can see, sets are really powerful, and you can use them to manipulate data. Finally, we need to look at how we can remove items from a set.
# Removing items from a set
To remove an item from a set, we can use the `remove()` method. When we use this method, we input the item we want to remove in the parentheses. Let's remove _Winter Solider_ from Craig's movie list:
craigsFavMovieSet.remove("Winter Solider")
Your code should now look like this:
If you want to remove more than a single item from a set (for instance, all of the items), you can use the `removeAll()` method or assign the set an empty array literal:
craigsFavMovieSet.removeAll()
gabesFavMovieSet = []
Your code should now look like this:
Now, both sets are empty.
# Summary
We covered a lot in this chapter. We are now comfortable with using collections. Now that you are familiar with arrays, dictionaries, and sets, take the time to practice and work with them as much as you can. Collections are used a lot in programming, so getting comfortable is very important.
Even though we will touch on these things throughout the creation of the _Let's Eat_ app, it is best to make sure that you are comfortable with what we covered here. So, please review the material as often as is necessary to make sure you feel that you are proficient in the topics contained in this chapter.
In the next chapter, we will start building our _Let's Eat_ app. Over the next two chapters, we will work on getting our project set up, and then we will begin working on the visual aspects of our app.
# Starting the UI Setup
Now that you have learned Swift, which will help you to understand a lot of the boilerplate code you will see later, it's time to start building our _Let's Eat_ app. Let's begin by getting an overview of what we are going to build. We will review the finished product and then get into how to create this app. Before we start, there will be a lot of new terms and things with which you may or may not be familiar. Learn as much as you can, and do not let the finer details stop you from progressing.
We will cover the following topics in this chapter:
* Useful terms
* App tour
* Project setup
* Storyboards
* Creating a custom title view
# Useful terms
Before we dig in and start getting our UI set up, we need to take a few minutes to introduce (or re-introduce) you to some terms that you should understand while we build our app:
* **View Controller**
* **Table View Controller**
* **Collection View Controller**
* **Navigation Controller**
* **Tab Bar Controller**
* **Storyboard**
* **Segue**
* **Stack Views**
* **Auto layout**
* **Model View Controller** ( **MVC** )
# View Controllers
View Controllers (`UIViewControllers`) are blank scenes that you can use to hold other UI elements. They give you the ability to create a custom interface.
# Table View Controllers
A Table View Controller (`UITableViewController`), which inherits from `UIViewController`, is one of the most common UI elements and is used to display a list of items. For example, Apple's Settings screen uses a Table View Controller to display the list of settings a user can access and change:
# Collection View Controllers
Collection View Controllers (`UICollectionViewControllers`) are typically used when you want to display elements within a grid. They are highly customizable and, because of that, are becoming more popular in non-grid-based layouts.
The App Store, for example, currently uses `UICollectionViewControllers` for both its featured page and its app details page:
# Navigation Controllers
A Navigation Controller (`UINavigationController`) is a UI element that allows you to build a drill-down interface for hierarchical content. When you embed a View Controller, Table View Controller, or Collection View Controller into a Navigation Controller, it manages navigation from one controller to another.
# Tab Bar Controllers
The Tab Bar Controller (`UITabBarController`) manages an array of View Controllers. The _Let's Eat_ app will use a Tab Bar Controller. This controller will give us the ability to have navigation for our app with minimal setup.
Apple has a few apps with which you might be familiar that use the Tab Bar Controller:
`UITabBarController` can only have five tabs on the iPhone. If your `UITabBarController` has more than five tabs on the iPhone, the fifth tab, and any after that, move underneath a More button:
# Storyboards
A storyboard is a file that displays a visual representation of your app's UI. The following is what a storyboard looks like for an app:
Storyboards let you create your entire app visually using View Controllers, Table View Controllers, and Collection View Controllers as scenes. Along with building your app visually, you can connect scenes and set up transitions between scenes using segues.
# Segues
Segues are used to connect one controller to another. In the storyboard, segues are represented by an arrow with an icon:
Segues also give you the ability to specify a transition from one scene to another, with very little to no programming.
# Stack Views
Stack Views are a great way to stack different components either horizontally or vertically. We will cover Stack Views in this book because they are a great way to easily organize components with equal spacing.
# Auto Layout
Auto layout is an excellent tool that allows you to support different screen sizes and device-rotation. With auto layout, you can set different constraints on UI elements for it to adjust to changes in size and rotation. Using auto layout in your app allows you to use one storyboard for all devices.
# Model View Controller (MVC)
MVC is a standard software design pattern, which is a solution for commonly occurring problems within software design. Apple has built iOS apps on the MVC design pattern. This pattern divides our app into three camps, known as the Model, View, and Controller. We will cover this in detail later in this book.
# App tour
The _Let's Eat_ app that we are building is a restaurant reservation app that allows users to find restaurants in a specific area and create reservations from within the app (although our app does not book those reservations). I chose a restaurant reservation app for the lessons in this book because most of the new iOS 12 features work well together in such an app. The app covers a lot of different aspects, from maps to iMessage extensions. Let's take a look at the overall flow of the app, so that, as we build, you have a good idea of the direction we are heading in:
# The Explore tab
When the app launches, you will see the Explore tab. This tab allows users to search for a particular cuisine and to set their location. Let's break down each component in this view:
For this screen, we will work with an empty View Controller, which is where all of our UI components live. As you can see, this view in our app is designed to be a grid so that we will be using a Collection View Controller. We will be setting up this Collection View Controller ourselves.
When I build apps, I typically start with a blank Collection View or Table View, because it gives me more flexibility in my code as well as with my user interface.
# Locations
The Locations view is a list of cities accessed from the Explore tab. We load a list of cities from a local file and, when the user selects a city, the app loads all of the restaurants from that area:
For this Locations view, we will be working with a View Controller that uses a Table View.
# Restaurant listings
In Restaurant listings, we can see restaurants in the area by the selected cuisine:
We will be covering both `UICollectionViews` and `UITableViews` in this book, but, as an introduction, you should know that `UICollectionViews` are very powerful—this is because you can customize them to look how you want. For example, the App Store detail is a custom `UICollectionView`.
One great feature when using `UICollectionView` is that, when you are building a universal app such as this one, you can make your view look like a list for the iPhone, but appear as a grid on the iPad with minimal effort.
# Restaurant detail
Our Restaurant detail has more information about the restaurant. This view is built using a `UITableView` that uses static cells:
# The Map tab
Our Map tab is a View Controller with a map that has pins dropped on it from a specific location, denoting all of the restaurants in the area:
# Project setup
Now that we have gotten a tour of the app, we are going to build the _Let's Eat_ app. First, we need to create the app, then work on the UI and, lastly, design our app in a storyboard.
For the initial setup of the app, we will look at some basics of iOS, starting with creating a new project.
# Creating a new project
To create a new project, do the following:
1. Open Xcode and the Xcode welcome screen will appear. Click on Create a new Xcode project in the left panel of the welcome screen.
2. Select Single View App and click on Next:
3. In the options screen that will appear, there will be a number of items to complete or choose. Add the following into that options screen and then hit Next:
* **Product Name** : LetsEat
* **Team** : Your account, or leave blank
* **Organization Name** : Your name/company name
* **Organization Identifier** : Your domain name in reverse order
* **Language** : Swift
* **Use Core Data** : Unchecked
* **Include Unit Tests** : Unchecked
* **Include UI Tests** : Unchecked
Your screen should look like the following screenshot:
4. Choose where to save your project, and then hit Create:
5. You're presented with the following screen:
Your project is created, and we can start working on building our first iOS app.
# Summary
In this chapter, we covered useful terms that we will use throughout this book. We also looked at what we are going to build within the app, and now we have a good idea of what the app will look like when we are done.
Next, we'll start working inside the storyboard and getting the UI of the application set up. Once we have everything set up, we will focus on code throughout the rest of the book. If you are familiar with working with storyboards or do not want to learn the design aspect of iOS, please skip to Chapter 11, _Getting Started with the Grid._
# Setting Up the Basic Structure
Typically, before I write any code when working on a project, I like to set up my storyboard, which allows me to focus on coding without having to go back and forth from storyboard to code. In this book, we will do some of our layouts in code to show you how to do that. But first, as I mentioned earlier, my preference is to set up as much as I can inside of the storyboard.
The following will be covered in this chapter:
* Creating a Tab Bar Controller
* Tab Bar buttons
* Launch screens
* Navigation Controllers
In the last chapter, we created our project, and now we are going to continue with that by building a Tab Bar Controller from scratch. Although there is a Tab Bar Controller template that has everything you need, I find that starting from scratch is an excellent way to learn. I also find it easier to start clean than to fix or update the template. However, you may want to use the template to begin your projects in the future. Let's start setting up our app.
# Starting from scratch
We will be creating all of our files from scratch, so we will delete the existing files in our project and recreate them in the coming chapters. The reason for this is so you can become comfortable with a project and understand how it was set up.
To delete the `ViewController.swift` file, do the following:
1. Select the `ViewController.swift` file in the **NAVIGATOR PANEL** :
2. With the file selected, hit the _Delete_ or _Backspace_ key. You will get the following message:
3. Select Move to Trash.
Now, we can continue with the setup of the storyboard.
# Storyboard setup
Let's get familiar with the UI setup. To update your `Main.storyboard`, do the following:
1. Select the `Main.storyboard` file in the Navigator panel:
2. In this storyboard file, select View Controller scene in the **OUTLINE VIEW** :
3. With the scene selected, press the _Delete_ or _Backspace_ key, and now your `Main.storyboard` file will be empty.
4. In your **UTILITIES PANEL** , in the bottom pane, you will see the **LIBRARY SELECTOR BAR**. In the bar, select the object library:
5. Pull up on the **LIBRARY SELECTOR BAR** to view more of the object library:
6. Find the Tab Bar Controller:
7. Drag the Tab Bar Controller out onto the canvas:
We now have our Tab Bar Controller, which will only have two tabs.
Next, we will get our app assets set up so that we can give our tabs image icons.
# Adding our app assets
Let's add images into our project by performing the following:
1. Select the `Assets.xcassets` folder in the **NAVIGATOR PANEL** :
2. Hit the _Delete_ or _Backspace_ button, and you will get the following message:
3. Select Move to Trash.
4. Open the project's `assets` folder that you downloaded from Packt's website or GitHub. Open `Chapter_07`. Drag the `Assets.xcassets` folder into your project in the Navigator Panel:
5. When you drop the folder, you will get the following message:
6. Make sure that both Copy items if needed and Create groups are selected. Then, hit Finish.
If you open the `Assets.xcassets` folder, you will now see all the assets for your entire project:
When you explore the assets, you will notice that we will be using both PNGs and PDFs.
Using PDFs allows us to support multiple device resolutions with only one image, since Xcode can generate the assets for all resolutions from them.
7. Select `Main.storyboard` again, and, in the Outline view, select both disclosure arrows for `Item 1 Scene` and `Item 2 Scene`, to have them face downwards:
8. Select both disclosure arrows for Item 1 under `Item 1 Scene` and Item 2 under `Item 2 Scene`. Both should be downward-facing:
9. Select Item 1 with the blue star to the left of it, and then select the Attributes inspector in the **UTILITIES PANEL** :
10. In the panel, use the following values to update your first tab icons:
* * In the Tab Bar Item, enter the following details:
* Badge: Leave this field blank
* System Item: Select Custom from the drop-down list
* Selected Image: icon-explore-on
* Title Position: Select Default Position from the drop-down list
* * In the Bar Item, enter the following details:
* Title: Type `Explore` in this field
* Image: icon-explore-off
* Tag: Enter `0` in this field
* Enabled: This checkbox should be checked
11. Select Item 2 with the blue star to the left of it in the **OUTLINE VIEW** , and the Attributes inspector should already be open:
12. Add the following to the panel:
* * In the Tab Bar Item, enter the following details:
* Badge: Leave this field blank
* System Item: Select Custom from the drop-down list
* Selected Image: icon-map-on
* Title Position: Select Default Position from the drop-down list
* In the Bar Item, enter the following details:
* **Title** : Type `Map` in this field
* **Image** : icon-map-off
* **Tag** : Enter `0` in this field
* **Enabled** : This checkbox should be checked
13. Run the project by hitting the play button (or use _command_ \+ _R_ ) to see where we are:
As you may have noticed, this screen does not look like an app. Since we are building a Tab Bar Controller from scratch, we need to add an entry point. So, close the simulator, and continue with the steps.
14. Select `Main.storyboard` again in the **OUTLINE VIEW** , and make sure that the disclosure arrow is down for `Tab Bar Controller Scene`:
15. Select `Tab Bar Controller` under `Tab Bar Controller Scene`, and, in the **UTILITIES PANEL** , make sure that the Attributes inspector is selected:
16. Under the View Controller section, you will need to check the box for Is Initial View Controller:
17. Once you set the initial View Controller, there will be an arrow pointing to the Tab Bar Controller. This arrow signifies the entry point of our app:
18. Let's rerun the project by hitting the play button (or use _command_ \+ _R_ ):
Perfect! Now, with our basic structure established, we can start adding more specific elements to our views.
# Storyboards
Before we do that, let's update `LaunchScreen.storyboard`. This storyboard is used when our app first launches.
# Creating our launch screen
Launch screens can use images, but that would mean that you would have to create images for every device and device orientation. Using `LaunchScreen.storyboard` gives us the ability to create just one asset for all devices and orientations:
1. Select the `LaunchScreen.storyboard` file, and, in the Outline view, make sure that the disclosure arrows for `View Controller Scene` and `View Controller` are collapsed. Then, select `View` under `View Controller`:
2. In the **UTILITIES PANEL** , select the Attributes inspector, and click on the white Background bar:
3. A Colors panel will appear. Select the second tab, which is called the **Color Slider** :
4. Under RGB Sliders, Hex Color #, update the value from `FFFFFF` to `4A4A4A`. This should change your background color from white to a dark gray:
5. You might have to set the background color a second time. If so, select the Background bar in the Attributes inspector again (the Hex Color # may have reverted to `FFFFFF`) and change it to `4A4A4A` once more. You can now close the Colors panel, and you should see the background color update in your Standard Editor panel:
Next, we need to bring the app logo onto the screen:
1. While still in `LaunchScreen.storyboard`, launch the Media Library (_command_ \+ _Shift_ \+ _M_). You can also access the library if you long-press the object library, but the shortcut is much faster:
The Media Library allows us to access our image assets, and it will place them inside a `UIImageView` for us.
2. In the filter, at the bottom of the Library pane, type `detail-logo`. Once that appears, drag and drop the logo onto `LaunchScreen.storyboard`:
Xcode sometimes has a bug where the width and height are not set when you drag the logo out; if that happens, you will need to enter them manually.
3. If your logo does not drag out to size, do this step: with the logo selected, open the **SIZE INSPECTOR** in the **UTILITIES PANEL** , and set the width and height to the following:
* * **Width** : `220`
* **Height** : `112`
4. We want our _Let's Eat_ logo to appear in the center of the screen. For our logo to appear in the center for all devices, we need to apply auto layout. Select `detail-logo`, and then select the **ALIGN** icon, which is to the left of the **PIN** icon:
5. Check the following boxes that appear:
* * Horizontally in container
* Vertically in container
6. Click on Add 2 Constraints.
When you are done, you will see the following:
Our launch screen is now set up for all devices. If you rerun the project, you will now see the launch screen with the LET'S EAT logo and new background color.
Let's move on to adding some detail to our Explore tab, since this is the first thing a user will see after the app launches.
# Adding a Navigation Controller
We first need to add a Navigation Controller to our Explore tab. The Navigation Controller will allow us to do a few things, such as adding a button to the title bar of the navigation to present our cities list:
1. Select `Main.storyboard`, and, in the **OUTLINE VIEW** , select `Explore` with the blue star to the left of it, under `Item 1` in `Explore Scene`:
2. Navigate to Editor | Embed In | Navigation Controller:
3. Our View Controller has a Navigation Controller:
4. Run the project by hitting the Play button (or use _command_ \+ _R_ ):
Repeat steps 1 to 4 from the _Adding a Navigation Controller_ subsection for the Map tab. Now that we have added both Navigation Controllers, in the next chapter, we will continue to create other View Controllers.
# Summary
Storyboarding is one of the things I enjoy doing. It is quick and easy to set up your UI with storyboards. Being able to drag and drop what you need onto the canvas is such an efficient method of developing app storyboards. There are times when you will need to code, but being able to work on things without having to write any code is an excellent capability. My preference is to use storyboards as much as possible, but many developers prefer to work in code. If you come from another programming language, try to keep an open mind and learn storyboarding.
When you work on a project that uses storyboards, you can get a high-level overview of the project. When everything is written in code, it takes more time to get a basic idea of how the app is structured, and its overall flow. Again, some people love to code their UI, and we will do some of that in this book. My main point is that you have to find what works for you. This book leans more toward the storyboard side versus the coding side of setting up your UI.
In the next chapter, we will continue setting up our UI, and become familiar with more of the UI elements that you have seen in many iOS apps.
# Building Our App Structure in Storyboard
In the previous chapter, we created our Tab Bar Controller. In this chapter, we will be creating other View Controllers that we need in our app. Our goal for the end of this chapter is to be able to navigate through the app with the least code required.
The following will be covered in this chapter:
* Collection View
* Outlets
* Modals
Before we begin setting up our Collection View Controller, you will need to add two files, `ExploreViewController` and `RestaurantViewController`, which you'd have downloaded from Packt's website or GitHub. By combining these files and then a bit of code, we will be able to focus on the design of our app.
Later in the book, we will delete these files, and create them ourselves. But, for this chapter, let's add these two files into our project:
1. Open the `assets` project folder that you downloaded from Packt's website or GitHub. Open `Chapter_08` and drag the two files in the folder into your project in the Navigator panel:
2. When you drop the folder, you will get the following message:
3. Make sure you have both Copy items if needed and Create groups selected, then hit Finish.
4. Add code to these new files, which will allow us to dismiss the modals that we will create later in this chapter. A modal is a view that is presented on top of the app's current content and lets the user take an action without leaving the screen they are viewing.
Let's add the code to enable us to dismiss modals. Open the `ExploreViewController.swift` file and, under where it says `// Add Unwind here` at the bottom of the file, add the following code:
// Unwind action: storyboard segues can "unwind" back to this method, dismissing the modal.
@IBAction func unwindLocationCancel(segue: UIStoryboardSegue) {}
If we look at our app design, which we reviewed earlier in this book, in our first tab, the Explore tab, we show a grid of food cuisines as well as a list of locations. First, we will set up our grid.
# Adding a Collection View Controller
As we discussed earlier in the book, Collection View Controllers allow us to display elements within a grid. Let's set up our Collection View:
1. Select the `Main.storyboard` file, making sure that you are zoomed out and can see all of your scenes. In the Utilities Panel ( _command_ \+ _Shift_ \+ _L_ ), ensure that you have the object library tab selected.
2. In the filter field, type `collec`:
3. Click on and drag Collection View, and drop it onto the Explore View Controller:
4. You will see small boxes around the entire Collection View component. Select the Pin icon, and enter the following values:
All values under Add New Constraints are set to `0`.
5. Click on Add 4 Constraints.
We now have our Collection View component set up for our Explore tab.
# Hooking up our outlets
Let's now link our file, `ExploreViewController`, to our `UIViewController` in the storyboard:
1. While still in the `Main.storyboard` file, select the `UIViewController` with the Collection View that we just created, by clicking on the leftmost icon at the top of that controller:
2. In the Utilities Panel, select the Identity Inspector, which is the third icon from the left:
3. Under Custom Class, in the Class drop-down menu, choose `ExploreViewController` and hit the _Enter_ key.
4. Select the Connections Inspector, the last icon on the right, in the Utilities Panel.
5. Under Outlets, you will see collectionView and an empty circle:
An `IBOutlet` is a way to connect a UI element in the storyboard to a variable in code. We have a Collection View on our `UIViewController`; now, we are hooking it up to that variable. Later in the book, you will learn how to create these variables yourself.
6. Click on the collectionView circle, and click-drag from the circle in the Connections inspector to the Collection View that we just added inside of the `UIViewController`:
7. Release it and you will see the circle become filled:
We need to hook up the data source and delegate. Together, they let us supply data to our Collection View and respond when the user interacts with it.
The `dataSource` property is the object that supplies the data for our Collection View, so whatever data we have is handed over through it. The `delegate` property, on the other hand, supplies behavior: we do not have to provide it with anything, as it simply receives the interactions that happen within our Collection View. A simplified sketch of what these connections mean in code follows these steps.
8. In your scene, select your Collection View and then, in the Utilities Panel, select the Connections Inspector.
9. Under the Outlets section, you will see two empty circles, `dataSource` and `delegate`:
10. Click and drag from the empty circle of the `dataSource` property to the Explore View Controller in your Outline view, and then release:
11. Repeat for the `delegate` property:
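The following is a simplified, hedged sketch of what these connections mean on the code side; it is not the exact contents of the `ExploreViewController.swift` file you downloaded, and the item count of 20 is just a placeholder:

import UIKit

// A sketch of a controller that owns a collectionView outlet and acts as its data source and delegate.
class ExploreViewControllerSketch: UIViewController, UICollectionViewDataSource, UICollectionViewDelegate {

    // The outlet we connected in Interface Builder; it is set for us when the scene loads.
    @IBOutlet weak var collectionView: UICollectionView!

    // Data source: how many cells the Collection View should show.
    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        return 20   // placeholder count
    }

    // Data source: which cell to use for each index path.
    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        return collectionView.dequeueReusableCell(withReuseIdentifier: "exploreCell", for: indexPath)
    }
}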
Next, let's set up our Collection View prototype cell to have a color.
# Creating a custom color
Let's add colors to your `Assets.xcassets` folder. Adding colors here is great when you want to have all your colors in one location. Before we update our explore cell, let's create a new color:
1. Open the `Assets.xcassets` file.
2. Right-click inside of `Assets.xcassets`, where you will see folders, and create a new folder called colors:
3. Right-click the `colors` folder, and, this time, select New Color Set. You will see a new color added to your folder. Select the Attributes inspector in the Utilities panel:
4. Under Color set, update the name to `Demo Grey`.
5. Under Color, click the Input Method dropdown:
6. Select 8-bit Hexadecimal. Change the Hex # value to `#AAAAAA`. When you are done, you should see the following:
Now that we have a color, we will be able to find our new color in the Color dropdown, as you will see next.
# Setting up our cell
To set up our cell, we need to perform the following steps:
1. In `Main.storyboard`, select the Collection View prototype cell, which is the small box inside of your Collection View.
2. Open the Attributes inspector in the Utilities Panel:
3. Update the following:
* * Identifier: `exploreCell`
* Background: Demo Grey
4. To update the background, you will need to click on the drop-down arrow under Background. You will see that our Demo Grey is added:
You should now see the following:
Next, we need to add a section header.
# Section header
Our section header will include the page title, the selected location, and a button that we will use to see the locations:
1. Select the Collection View in the `Main.storyboard` outline.
2. In your Utilities Panel, select the Attributes inspector and, under Collection View Accessories, select the checkbox next to **section header** :
3. You will see a box appear above our Demo Grey cell, which is our new section header—select it:
4. In the Attributes inspector in the Utilities Panel, update Identifier to header:
Let's build and run the project by hitting the Play button (or use _command_ \+ _R_ ):
You will see that you now have a grid of boxes and some white space (the section header) near the top of the grid. Before we work on the section header, let's update our grid to match our design of two items per row with a particular cell size.
# Updating the grid
To update our grid, we need to take the following steps:
1. Use _command_ \+ _Shift_ \+ _O_ and, in the Open Quickly window, type `Main.storyboard`, and then hit the _Enter_ key.
2. Select the Collection View, and then, in the Utilities Panel, select Size Inspector.
3. Update the following values, based on the simulator that you are currently using. These values may need to be changed so that your grid has two columns of cells, so feel free to alter the values.
For iPhone 7, use the following values:
For iPhone 7 Plus, use the following values:
For iPhone 4/iPhone SE/iPhone 5/iPhone 5s, use the following values:
When you are done, this is what everything should look like:
**Challenge** : If you are using iPhone XR, or iPhone Xs, try to see whether you can figure out how to make the grid work. For now, as we just did, we will use storyboard settings to get our cells set up. Later in the book, we will make this dynamic so that our widths and heights adjust with code. Next, we will work on our section header.
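As a preview of that dynamic approach, here is a hedged sketch of how a two-column grid can be computed from the Collection View's width using a flow layout delegate; the spacing and aspect ratio are assumptions for illustration, not the values we will end up using:

import UIKit

// A sketch: size cells so that exactly two columns fit, whatever the device width.
class TwoColumnSizingSketch: NSObject, UICollectionViewDelegateFlowLayout {
    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        let columns: CGFloat = 2
        let spacing: CGFloat = 7                            // assumed spacing
        let totalSpacing = spacing * (columns + 1)          // leading, middle, and trailing gaps
        let width = (collectionView.bounds.width - totalSpacing) / columns
        return CGSize(width: width, height: width * 1.1)    // assumed aspect ratio
    }
}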
# Adding a modal
Let's review the design for the section header:
Note that we have a \+ Location button that will display our locations. Let's add that modal now:
1. While in the `Main.storyboard` file, select the object library and, in the filter field at the bottom of the Library pane, type `button`.
2. Drag and drop the `Button` component into the section header we created in our Explore View Controller:
Ignore the layout warning. We will format the button later regarding location and size, and that will get rid of the warning.
Next, we need to add another View Controller to our storyboard:
1. In the filter, type `viewcontroller`, and drag and drop the `ViewController` component above the Explore View Controller in `Main.storyboard`.
2. With the View Controller selected, navigate to Editor | Embed In | Navigation Controller.
3. _Control_ \+ drag from where it says Button in the View Controller, under the Explore tab, to the Navigation Controller that was just created (you can also do this within Outline view, by _Control_ \+ dragging from the button to the new Navigation Controller you just created):
4. When you let go, you will be presented with the following menu, and you should select Present Modally:
Now, let's run the project by hitting the Play button (or use _command_ _+_ _R_ ). You will see that our button now launches a modal. In the next chapter, we will make this button match our design:
Currently, as you can see in the preceding screenshot, we cannot dismiss this modal. Therefore, we need a cancel button and a done button to dismiss the view. Let's fix this:
1. Open `Main.storyboard` and then go to your View Controller (not the Navigation Controller) of your modal:
2. Open the object library ( _command_ \+ _Shift_ \+ _L_ ) and type `bar button` into the filter area of the objects library in the Utilities Panel.
3. Drag and drop a Bar Button Item into the right-hand side of the Navigation Bar of your `View Controller Scene`:
4. Drag another Bar Button Item into the left-hand side of the Navigation Bar.
5. You should have two Bar Button Items that both say Item:
# Updating Bar Button Items
Next, we need to update both of the Bar Button Items to say cancel and done:
1. Select the left-hand Bar Button Item, and, in the Utilities Panel, select the Attributes inspector.
2. Click on System Item and select Cancel in the drop-down menu.
3. Select the right-hand Bar Button Item, and, while still in the Attributes inspector in the Utilities panel, update System Item to Done.
Now, you should see Cancel on the left and Done on the right:
# Unwinding our Cancel button
Now that we have our buttons, we want to dismiss the modal when a user hits Cancel.
In the `Main.storyboard`, _Control_ \+ drag from the Cancel button to Exit:
You can also do this in the Outline view.
You will see a window popup that says Action Segue and unwindLocationCancelWithSegue. Select `unwindLocationCancelWithSegue`:
Let's build and run the project by hitting the Play button (or use _command_ _+_ _R_ ), and test our Cancel button. It should now dismiss the View. We will update the Done button when we add code later.
# Adding our first Table View
Now, let's add a `UITableView` into our `UIViewController`:
1. In the Utilities Panel of `Main.storyboard`, in the filter field, type `table`, then drag the Table View onto the scene:
2. Select the Pin icon and enter the following values:
* * Set all values under Add New Constraints to `0`.
* The Constrain to margins checkbox should be unchecked.
3. Click on Add 4 Constraints.
If you build and run the project, and then launch the modal, you will see an empty Table View. We will complete this Table View later.
# Summary
We are about halfway through the setup of our UI structure. In this chapter, we created our Collection View with a dummy cell. Dummy cells allow us to continue to work on the basic structure of our app and focus on the design of the app, getting all of the assets ready to go before we add code. We also added our first prototype header, as well as presenting a modal to the user.
In the next chapter, we will complete the rest of our basic structure, before concentrating on adapting our structure to match our design.
# Finishing Up Our App Structure in Storyboard
The more storyboard work we do, the easier it gets. I remember that, when I started with Xcode, it was a bit overwhelming because of all the panels, and it took me time to get comfortable. Any time I speak with someone looking to get into iOS, I always tell them to dedicate at least 10-15 minutes a day to it for the first six months. It seems like a lot, but it makes a difference when you are trying to learn. If you step away for a week and try to come back, it's like starting from square one; at least, it was for me. Things only started to click for me once I was in Xcode every day and was relentless about it.
In the previous chapter, we got both our **Explore** and **Location** views set up. In this chapter, we are still working on just the structure; in the next couple of chapters, we will work on the design.
We will cover the following in this chapter:
* Restaurant View Controller
* Restaurant Detail View Controller
* Reviews View Controller
* The Map tab
# Adding our Restaurant List View
Our restaurant list has the same basic setup as in the previous chapter. If you think you have a grasp of this, now is an excellent time to challenge yourself. If you think you still need more practice, keep reading and let's set up the restaurant list:
1. Select the `Main.storyboard` file, making sure that you are zoomed out and can see all of your scenes (depending on your screen resolution). Open the object library (_command_ \+ _Shift_ \+ _L_).
2. Drag out a View Controller—it should be the first item in the list—put it next to Explore View Controller.
3. Open the object library (_command_ \+ _Shift_ \+ _L_) and, in the filter field, type `collectionview`.
4. Click on and drag Collection View and drop it onto the new View Controller we just added, next to the Explore View Controller.
5. Select the Pin icon and enter the following values:
* * All values under Add New Constraints are set to `0`.
* Make sure to uncheck Constrain to margins.
6. Click on Add 4 Constraints.
We now have our Collection View component set up for our Restaurant list.
# Hooking up our outlets
Let's now link our file, `RestaurantViewController`, to our new `UIViewController` in the storyboard:
1. Select the `UIViewController` with the Collection View that we just created.
2. In the Utility panel, select the Identity inspector. Under Custom Class, in the Class drop-down menu, select `RestaurantViewController` and hit the _Enter_ key.
3. Select the Connections Inspector in the Utilities Panel.
4. Under Outlets, (just like we did earlier) click on the `collectionView` circle and drag from the circle to the Collection View that we just added inside of your `UIViewController`.
Now that we have our Collection View hooked up, we need to hook up the data source and delegate. The data source and delegate allow us to pass data to our Collection View as well as to know when our Collection View has some interaction. Let's do that now by doing the following:
1. In your scene, select your Collection View. In your Utilities Panel, select the Connections Inspector.
2. Click on and drag the `dataSource` property to the Restaurant View Controller in your Outline view.
3. Click on and drag the delegate property to the Restaurant View Controller in your Outline view.
Finally, let's set up our cell to have a color.
# Setting up our cell
In `Main.storyboard`, select the small box inside of your Collection View. The small box is your Collection View prototype cell:
1. Open the Attributes inspector in the Utilities Panel.
2. Update the following:
* * Identifier: `restaurantCell`
* Background: Demo Grey
3. _Control_ \+ drag from the explore cell to Restaurant View Controller:
4. When you let go, you are presented with the following menu—select Show:
Now, let's run the project by hitting the Play button (or using _command_ \+ _R_). You will now be able to tap on an explore cell and see the following:
Next, we want the user to be presented with the restaurant's details when they touch a restaurant. We will use a static Table View Controller to do our detail. Using a static Table View allows us to create content without code. We will still have to hook up our data but, in the upcoming chapters, you will see how static Table Views come in handy. Let's set up the restaurant details:
1. Select the `Main.storyboard` file, making sure that you are zoomed out and can see all of your scenes (depending on your screen resolution). In the Utilities Panel, ensure that you have the object library tab selected.
2. In the filter field, type `tableviewcontroller` (make sure it's the controller—it will have a yellow icon). Drag this Table View Controller and put it next to Restaurant View Controller:
3. _Control_ \+ drag from the restaurant cell to the Restaurant Detail Table View Controller. When you let go, you are presented with the following menu, and you should select Show:
4. Click on the Table View inside of the Outline:
5. Make sure that you have the Attributes inspector opened in the Utilities Panel, then change the Table View content from Dynamic Prototypes to Static Cells:
Now, let's run the project by hitting the Play button (or using _command_ \+ _R_). You will now be able to tap on a restaurant cell and see the following:
# Adding the Reviews View
We have our static Table View set up now; we need another view that allows us to view restaurant reviews. Let's add that now:
1. In the object library, in the filter field, type `button`. Drag this button and put it next to one of the `tableview` cells:
2. Select the `Main.storyboard` file, making sure that you are zoomed out and can see all of your scenes (depending on your screen resolution). In the Utilities panel, ensure that you have the object library tab selected.
3. In the filter field, type `viewcontroller`. Drag this View Controller and put it next to the Restaurant Detail View Controller.
4. In the filter field, type `label`.
5. Click on and drag Label and drop it onto the new View Controller we just added, next to the Restaurant Detail View Controller.
6. Double-click in the Label and add the `Reviews` text.
7. Select the Align icon that is to the left of the Pin icon and check the following boxes that appear:
* * Horizontally in container
* Vertically in container
8. Click on Add 2 Constraints.
When you are done, you will see the following:
# Viewing reviews
Now, we need to add a segue to be able to get to the Reviews View Controller:
1. _Control_ \+ drag from the button to the Review View Controller that we added earlier.
2. When you let go, you are presented with a menu, and you should select Show:
Let's run the project by hitting the Play button (or using _command_ \+ _R_). You will now be able to tap on the button in restaurant details and see the following:
# Map Kit View
The last thing we need to do is to set up our Map tab. Select the `Main.storyboard` file and find the View Controller connected to the Map tab:
Let's get started:
1. Open the Object Library (_command_ \+ _Shift_ \+ _L_), then type `map`.
2. Drag and drop Map Kit View onto the Map View Controller:
3. Select the Pin icon and enter the following values:
* All values under Add New Constraints are set to `0`.
* Uncheck the Constrain to margins checkbox.
* Click on Add 4 Constraints.
4. Your View Controller should look like the following when you are done:
5. Run the project by hitting the Play button (or using _command_ \+ _R_) and selecting the Map tab:
We now have both tabs set up, but, as we progress through the book, we will add more scenes to the storyboard. The following is what your `Main.storyboard` file should look like:
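Later in the book, we will drop the restaurant pins from code. As a preview, here is a hedged sketch of how a single annotation could be added to an `MKMapView`; the title and coordinate are placeholders, not data from our app:

import MapKit

// A minimal sketch: add one placeholder pin to a map view and center on it.
let mapView = MKMapView()

let annotation = MKPointAnnotation()
annotation.title = "Sample Restaurant"                                   // placeholder title
annotation.coordinate = CLLocationCoordinate2D(latitude: 40.7128,
                                               longitude: -74.0060)     // placeholder coordinate
mapView.addAnnotation(annotation)

let region = MKCoordinateRegion(center: annotation.coordinate,
                                latitudinalMeters: 1000,
                                longitudinalMeters: 1000)
mapView.setRegion(region, animated: false)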
# Summary
In this chapter, we finished the application structure. We hooked up our explore cell to a restaurant list. Then, we were able to connect a restaurant to a detail. Next, we added a button to our details, which allows us to see restaurant reviews. Lastly, we added a map to our Map tab.
At this point, a good challenge would be to see whether you can get back to this point. Try starting from when we created the project toward the end of Chapter 6, _Starting the UI Setup_ , in the _Creating a new project_ section. See whether you can get from that point to here without the book and without missing anything. This will help you, and it is something I like to do with those I mentor.
In the next chapter, we will start digging more into the design and getting our app to look like the design visually.
# Designing Cells
In this chapter and throughout this book, we will adjust our app to match the design we reviewed earlier. However, the specifics of the design, such as custom fonts, are there as examples; you should feel free to change things to match your taste. By experimenting while learning, you should get a better understanding of how things work and become more comfortable using Xcode. I would recommend that you first thoroughly understand the lessons before experimenting; however, I highly encourage you to have fun and make the app your own.
In this chapter, we will be working with the following:
* Table View Cells
* Collection View Cells
* Auto Layout
# Setting up the Explore header
Let's review the section header for the Explore tab:
In this header, we only have four elements: two `UILabels` (title and subtitle), a button, and a gray line underneath the title and button.
We already have the button in the prototype header (collection reusable view), which we created in Chapter 8, _Building Our App Structure in Storyboard_ , and now we need to add the two `UILabels` and then revise all three elements so that they match our design:
When working with multiple components in the same area, I like to put them into a view. The view acts as a container and allows me to keep my constraints down. Let's get started:
1. In `Main.storyboard`, select the prototype header and, in the Size inspector, update the following values:
* * Width: `0`
* Height: `100`
When you update the size, you might experience the following:
If you do, click on a different file and come back; the storyboard should fix itself.
2. In the filter field of the object library, type `view`.
3. Drag out a View into the prototype header.
Make sure you use the Outline view to move this view below all the other elements. If you do not, it will cover everything.
4. In the Size inspector, update the following values:
* * X: `0`
* Y: `0`
* Width: `375`
* Height: `90`
At this point, this view is covering up our button.
Let's fix this next.
5. Expand the Outline and you will see our newly added View and Button:
6. Select the Button in the Outline and drag it into the View. When you are done, it should look like the following:
7. Next, type `label` in the filter field of the object library.
8. Drag out two Labels into the View of the prototype header that we just added. When you are done, you should see the following:
9. We are going to add a new color to our `Assets.xcassets` file. Name the color `LetsEat Light Grey` and set the Hex Color # to `AFAFB2`.
10. Let's rename `Demo Grey` to `LetsEat Dark Grey`. Our cells will not pick up this change automatically, but it does not break anything; we will update these colors later in the chapter.
11. Now, select one of the labels, which will be our subtitle, and, in the Attributes inspector, update the following values:
* * Color: `LetsEat Light Grey`
* Font: `System Semibold 13`
12. In the Size inspector, update the following values:
* * X: `8`
* Y: `24`
* Width: `350`
* Height: `21`
13. Select the other label, which will be our title, and, in the Attributes inspector, update the value of Font to `System Heavy 40.0`:
14. In the Size inspector, update the following values:
* * X: `8`
* Y: `45`
* Width: `255`
* Height: `37`
15. Select the button (you might have to select it from the outline if the labels are covering it); then, in the Attributes inspector, update the following values:
* * Type: `Custom`
* Image: `btn-location`
* Remove the text button
16. In the Size inspector, update the button to the following values:
* * X: `271`
* Y: `50`
You should now have the following:
17. Type `view` in the filter field of the object library.
18. Drag a View into our view that we are using as a container.
19. Select the View (make sure you use the Outline view to move it below all the other elements) and, in the Size inspector, update the following values:
* * X: `8`
* Y: `89`
* Width: `359`
* Height: `1`
20. Now, with the View selected, in the Attributes inspector, update the value of Background to `LetsEat Light Grey`:
Now, with all of the elements placed into the prototype header, your cell should look as follows (inside your label, you should have the text Label):
To ensure that our cells adjust to the size of different devices, we must add Auto Layout constraints.
# Adding Auto Layout to the Explore header
Working with Auto Layout can be very frustrating. If it does not work correctly, I recommend that you clear all the constraints and start over.
Let's begin by adding Auto Layout to our Label subtitle, which is where we should show the currently selected location:
1. Select the View that we are using as a container, and then the Pin icon. Enter the following values:
2. Click Add 4 Constraints.
3. Select the Label subtitle and then the Pin icon. Enter the following values:
4. Click Add 4 Constraints. Your constraints should look like the following:
5. Next, select the Location button and then the Pin icon. Enter the following values:
* * Top: `5`
* Right: `8`
* Constrain to margins: Unchecked
* Width: `96` (should be checked)
* Height: `25` (should be checked)
6. Click Add 4 Constraints. You should now see the following:
7. Now, select the grey line (it might be easier to use the outline to select the view) and then the Pin icon. Enter the following values:
* Right: `8`
* Bottom: `0`
* Left: `8`
* Constrain to margins: Unchecked
* Height: `1` (should be checked)
8. Click Add 4 Constraints. Your constraints should look like the following:
9. Select the Label (title) and then the Pin icon. Enter the following values:
* * Top: `0`
* Right: `8`
* Left: `8`
* Height: `37` (should be checked)
* Constrain to margins: Unchecked
10. Click Add 4 Constraints. You should see the following when you are done:
We have completed adding Auto Layout to the Explore tab header. If you want to check every constraint, feel free to look at Chapter 11, _Getting Started with the Grid_ , starter project. Let's look at designing the Explore Cell next.
# Setting up the Explore cell
Next, let's work on the Explore Collection View cell:
1. Select the prototype cell, called `exploreCell`, in the Attributes inspector, and update the background color to white. Then, in the Size inspector, change the Size from Default to Custom.
2. Then update the following values:
* * Width: `176`
* Height: `195`
3. In the object library's filter field, type `view`.
4. Drag a View into the prototype cell.
5. Select the View and, in the Size inspector, update the following values:
* * X: `0`
* Y: `0`
* Width: `176`
* Height: `156`
6. Type `image` in the filter field.
7. Drag an Image view into the View we just added.
8. With the Image view still selected, update the following values in the Size inspector:
* * X: `0`
* Y: `0`
* Width: `176`
* Height: `156`
9. Type `label` in the filter field.
10. Drag a Label into the prototype cell (not the View).
11. With the Label selected, update the value of Font in the Attributes inspector to Avenir Next Condensed Demibold 20.
12. In the Size inspector, update the following values:
* * X: `8`
* Y: `165`
* Width: `160`
* Height: `21`
`exploreCell` is now complete. Your cell should now look like the following:
When setting up elements in storyboard, I like to get all my sizes set and then I use Auto Layout constraints to make sure it works for all devices. Let's add Auto Layout constraints before we move on to our Restaurant cell.
# Adding Auto Layout to the Explore cell
1. In the Outline view, select the container View that is holding the Image view and then the Pin icon. Enter the following values:
* * Top: `0`
* Right: `0`
* Left: `0`
* Constrain to margins: Unchecked
* Height: `156` (should be checked)
2. Click Add 4 Constraints.
3. Select the Image view and then the Pin icon. Enter the following values:
* * Top: `0`
* Right: `0`
* Bottom: `0`
* Left: `0`
* Constrain to margins: Unchecked
4. Click Add 4 Constraints.
5. Select the Label in this `exploreCell` and then the Pin icon. Enter the following values:
* * Top: `9`
* Right: `8`
* Left: `8`
* Constrain to margins: Unchecked
* Height: `21` (should be checked)
6. Click Add 4 Constraints.
The Explore cell now has all the necessary constraints, and we can now set up the Restaurant cell.
# Setting up the Restaurant cell
The Restaurant cell that we are setting up has many elements, so take your time. Make sure that you go to the Restaurant View Controller, and let's get started:
1. Select the prototype cell, called `restaurantCell`, in the Attributes inspector, and update the background color to white. Then, in the Size inspector, change the Size from Default to Custom:
2. Then, update the following values:
* Width: `375`
* Height: `312`
3. In the filter field of the object library, type `view`.
4. Drag a View into the prototype cell.
5. With the View selected, update the following values in the Size inspector:
* * X: `75.5`
* Y: `245`
* Width: `224`
* Height: `56`
6. Type `label` in the filter field.
7. Drag a Label into the View we just added.
8. With the Label selected, update the following values in the Size inspector:
* * X: `0`
* Y: `2`
* Width: `224`
* Height: `21`
9. In the Attributes inspector, update the following values:
* * Text: Add Available Times into the empty text field under the Text
* Color: `Black Color`
* Alignment: `Center`
* Font: Avenir Next Condensed Bold 17
When you are done, you should see the following:
10. Next, in the filter field of the object library, type `button`.
11. From the Object library, drag a Button into the View where we have the label.
12. With the button selected, update the following values in the Attributes inspector:
* * Type: `System`
* Title: Plain and then add 7:30 pm in the empty text field under the Title
* Font: Avenir Next Condensed Regular 17
* Text Color: `White Color`
* Background: `time-bg`
13. In the Size inspector, update the following values:
* * Width: `68`
* Height: `27`
14. Select the button in the Outline view and hit _command_ \+ _C_ to copy.
15. Hit _command_ \+ _V_ twice to paste. You should now have three buttons.
16. Using the Outline view, _command_ \+ click each button you created and click on the Embed In icon, which is the stack icon two icons to the left of the Pin icon:
17. Select Stack View in the dropdown:
18. Select the stack view in the Outline view, and update the following values in the Attributes inspector:
* * Axis: Horizontal
* Alignment: Fill
* Distribution: Equal Spacing
* Spacing: `10`
19. In the Size inspector, set X to `0` and **Y** to `29`.
When you are done, you should see the following:
20. In the filter field of the Object library, type `view`.
21. Drag a View into the prototype cell.
22. With the View selected, update the following values in the Size inspector:
* * X: `11`
* Y: `42`
* Width: `353`
* Height: `200`
23. Type `image` in the filter field.
24. Drag out an Image view into the View we just added.
25. Select the Image view in the Outline view and update the value of Image in the Attributes inspector with `american`. We are just using this image as a placeholder to see that our cells are set up correctly. Later, we will remove this and load the images using code.
26. With the Image view selected, update the following values in the Size inspector:
* * X: `0`
* Y: `0`
* Width: `353`
* Height: `200`
27. Type `label` in the filter field.
28. Drag two Labels into the prototype cell.
29. Select one of the Labels and update the value of Font in the Attributes inspector with Avenir Next Condensed Demi Bold 17.
30. In the Size inspector, update the following values:
* * X: `10`
* Y: `3`
* Width: `355`
* Height: `19`
31. Select the other Label and update the following values in the Attributes inspector:
* * Color: `LetsEat Dark Grey`
* Font: Avenir Next Condensed Regular 14
32. In the Size inspector, update the following values:
* * X: `10`
* Y: `22`
* Width: `355`
* Height: `16`
We have completed our Restaurant cell setup, and it now looks like the following:
Now we need to add Auto Layout to all of the elements.
# Adding Auto Layout to the Restaurant cell
Since we have many elements in the Restaurant cell, it means that there are more chances for errors with Auto Layout. Although you may get frustrated when using Auto Layout, if you are a visual person like me, hopefully, you will eventually appreciate using it:
1. Select the top label and then the Pin icon. Enter the following values:
* * Top: `3`
* Right: `10`
* Left: `10`
* Constrain to margins: Unchecked
* Height: `19` (should be checked)
2. Click Add 4 Constraints.
3. Select the Label right under the last label and then the Pin icon. Enter the following values:
* * Top: `0`
* Right: `10`
* Left: `10`
* Constrain to margins: Unchecked
* Height: `16` (should be checked)
4. Click Add 4 Constraints.
5. Select the Image container and then the Pin icon. Enter the following values:
* * Top: `4`
* Constrain to margins: Unchecked
* Width: `353` (should be checked)
* Height: `200` (should be checked)
6. Click Add 3 Constraints.
7. Click on the Align icon and enter the value of Horizontally in Container as `0` (this should be checked).
8. Click Add 1 Constraint.
9. Select the Image inside of the container and then the Pin icon. Enter the following values:
* * Top: `0`
* Right: `0`
* Bottom: `0`
* Left: `0`
* Constrain to margins: Unchecked
10. Click Add 4 Constraints.
11. Select the container that is holding the stack view and the available time's label and then select the Pin icon. Enter the following values:
* * Top: `3`
* Constrain to Margins: Unchecked
* Width: `224`
* Height: `56`
12. Click Add 3 Constraints.
13. Click on the Align icon and enter the Horizontally in Container value as `0` (this should be checked).
14. Click Add 1 Constraint.
15. Select the stack view inside of the container and then the Pin icon. Enter the following values:
* * Top: `6`
* Right: `0`
* Left: `0`
* Constrain to margins: Unchecked
* Height: `27`
16. Click Add 4 Constraints.
17. Select the Label that is above the three buttons and then the Pin icon. Enter the following values:
* * Top: `2`
* Left: `0`
* Right: `0`
* Constrain to margins: Unchecked
* Height: `21`
18. Click Add 4 Constraints.
19. Click on the Align icon and enter the value of **Horizontally in Container** as `0` (this should be checked).
20. Click Add 1 Constraint.
Now, all of the Auto Layout for the Restaurant cell is set up. Before you run the project, select the Collection View in the Outline. Then, go to the Size inspector and update the Cell Size Width to `375` and Height to `312`. These numbers are only for design purposes; we will make these values dynamic, depending on the device size, later.
Let's build and run our project and go to the restaurant cell. You should now see the following:
# The Locations cell
We now need to work on the Locations cell. Find the Table View that we will use for our locations; for this cell, we will use a predefined cell style that Apple provides. Let's update our Table View by doing the following:
1. Select the Table View, and update Prototype Cells to `1`.
2. Select the prototype cell and enter the following values:
* * Style: Basic
* Identifier: `locationCell`
That is all we need to do. Now, your cell should look as follows:
When you change the style from Custom to Basic, the word Title should appear in the cell. The word Title is just placeholder text. We have now finished designing our cell.
# Summary
In this chapter, we formatted our cells to match our design and added Auto Layout constraints. Auto Layout can be complicated; however, as with anything, the more you practice, the easier it gets. You can write Auto Layout with code, but it is not what I prefer in a storyboard. If you would like to do it in code, there are plenty of tutorials that can help you with this.
We are now finished with the storyboard and design setup, so we can focus on the code side since our UI is pretty much set up. You should have a good idea of how our app should work. If you are struggling, there is nothing wrong with that. This stuff takes time to click, and, as I have said before, if you are struggling with anything that we have done, please go back. If you keep going when you are not comfortable, learning will get harder and harder. We will be covering a lot of new topics, so adding on to them when you are not ready is not recommended.
In the next chapter, we will learn what the Model View Controller is and how to work with it.
# Getting Started with the Grid
I am a visual person; I prefer to start with the visuals and make sure that the app looks like the design. Starting with the UI helps me to identify the data structure and allows me to get familiar with the app, which means I can then focus my attention on the code.
In earlier chapters, we set up our app structure and developed a good understanding of the basics involved. In this chapter, you will learn about app architecture and how to create it for our _Let's Eat_ app. For this chapter, please use the Chapter 11 project files, as I have added more design elements that you will need throughout the book.
We will cover the following in this chapter:
* Understanding the Model View Controller architecture
* Classes and structures
* Controllers and classes
# Understanding the Model View Controller architecture
Apple built iOS apps to use what is known as the **Model View Controller** ( **MVC** ), which is an architectural pattern that describes a way to structure the code in your app. In layman's terms, this just means breaking up our app into three distinct camps: Model, View, and Controller.
Here is a diagram of MVC to help you understand it:
Let's discuss each camp:
* **Model** : The **Model** camp is responsible for an app's data and business logic. The Model's only job is to handle representations of data, data storage, and the operations performed on data.
* **View** : The **View** camp is responsible for all the things that you see on the screen. The View handles presenting and formatting data that results from the user's interactions.
* **Controller** : The **Controller** camp is the liaison or coordinator between the other two camps. The Controller handles a lot of setup and connections to the View. The Controller also interprets user interactions. Since the Controller is between both the View and the Model, the View and the Model should know nothing about each other.
In summary, the Controller takes user interactions and either responds back to the View or passes it onto the Model. When the Model completes a task, it passes it back to the Controller, and then the Controller talks with the View.
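To make the three camps concrete, here is a tiny hedged sketch; the type names are placeholders and not types from the _Let's Eat_ project:

import UIKit

// Model: holds data and knows nothing about the screen.
struct SampleRestaurant {
    let name: String
}

// Controller: asks the Model for data and hands it to the View.
class SampleListController: UIViewController {
    let restaurants = [SampleRestaurant(name: "Sample Bistro")]  // placeholder data
    let nameLabel = UILabel()                                    // View: only displays what it is given

    override func viewDidLoad() {
        super.viewDidLoad()
        nameLabel.text = restaurants.first?.name                 // the Controller mediates between Model and View
        view.addSubview(nameLabel)
    }
}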
# Getting familiar with the setup
For beginners, the MVC architecture can make you uncertain about where things should go. As we progress through the book, you will learn where to put things and why. So, you need not worry about where things should be placed as we work through this process together, step by step.
As your project grows, the MVC architecture places a lot of the responsibility on the Controller. Therefore, in this book, we tweak the MVC pattern to not put so much pressure on the Controller.
Before we continue with our coding, we need to discuss classes and structures.
# Classes and structures
Classes and structures (also known as structs) are types that contain properties and methods. You use these properties and methods to add functionality. You have been working with structs since Chapter 1, _Getting Familiar with Xcode_. Strings, Ints, Bools, Arrays, Dictionaries, and Sets are all structs.
Earlier in the book, we created functions. As noted in Chapter 6, _Starting the UI Setup_ , a method is a function that lives inside a class or struct.
Classes and structs are very similar; however, Swift handles each of them a bit differently. To get a better understanding of how classes and structs work, we will create a new Playground. Working in a Playground lets us learn how to create custom classes and structs and to understand the strengths and weaknesses of each.
Since we already have a project created, we can actually add a playground directly into our project. Right-click in the Project Navigator and create a new group called Playgrounds. When you are done, you should see the following:
Next, right-click on the Playgrounds folder, go to New File, and do the following:
1. Scroll to the bottom of the template screen, select a Blank playground, and hit Next.
2. In the options screen that appears, name your new Playground `FunctionsStructs`, and make sure that your Platform is set to iOS. Hit Next and then Create. Now, let's delete everything inside your new Playground and toggle on the Debug Panel, using either the toggle button or _command_ \+ _shift_ \+ _Y_.
In your empty Playground, add the following:
class Cat {
}
struct Dog {
}
We just created our first class and struct and defined two new custom data types (known as **Swift types** ), `Cat` and `Dog`. Since we have not yet given the class or struct a property (such as a name) or created an instance of either `Cat` or `Dog`, you see nothing in the Results or Debug Panels.
When you create classes and structs, it is best practice to start with a capital letter. Also, you must have different names for your class and your struct. Otherwise, you will get an error. Even though one is a class and the other is a struct, each of them needs a distinct name.
Now, we need to give names to our `Cat` class and our `Dog` struct. Therefore, let's give them both a property, called `name`:
class Cat {
var name:String?
}
struct Dog {
var name:String?
}
If you cannot set a property when it is created, it is recommended that you make that property an optional using the question mark (`?`). Using an optional protects your code from trying to access the name if you never set it. You can also declare your variable as an implicitly unwrapped optional. For example, you can do the following:
var name:String!
With both `Cat` and `Dog` now having a property called `name`, let's create an instance of each of them:
let yellowCat = Cat()
yellowCat.name = "Whiskers"
print(yellowCat.name as Any)
var yellowDog = Dog()
yellowDog.name = "Bruno"
print(yellowDog.name as Any)
So far, everything on the surface looks the same. We created both a `Cat` and a `Dog` and gave them each names. However, let's say `Whiskers` runs away and, a few weeks later, finds a home with a new family, who decide to change his name to `Smokey`. After `Whiskers` runs away, `Bruno` becomes lonely and decides to find him, but also gets lost. `Bruno` finds a new home as well, and this new family decides to name him `Max`.
In Playgrounds, we create a new constant called `yellowStrayCat` and set it equal to `yellowCat`, since it is still `Whiskers`. However, we change the name of `yellowStrayCat` to `Smokey`. We also create a new constant called `yellowStrayDog`, setting it equal to `yellowDog` and naming it `Max`:
let yellowStrayCat = yellowCat
yellowStrayCat.name = "Smokey"
print(yellowStrayCat.name)
var yellowStrayDog = yellowDog
yellowStrayDog.name = "Max"
print(yellowStrayDog.name)
Our Results Panel shows that the names of `yellowStrayCat` and `yellowStrayDog`, respectively, are now `Smokey` and `Max`. So, everything seems to be the same between our class and our struct, right? No, they are not the same. Let's print the name of `yellowCat` underneath the line where we have print (`yellowStrayCat.name`). In addition, let's do the same for the name of `yellowDog` underneath where we have print (`yellowStrayDog.name`). Your code should now look as follows:
let yellowStrayCat = yellowCat
yellowStrayCat.name = "Smokey"
print(yellowStrayCat.name)
print(yellowCat.name)
var yellowStrayDog = yellowDog
yellowStrayDog.name = "Max"
print(yellowStrayDog.name)
print(yellowDog.name)
In our Results Panel, you should notice an unexpected result. The `yellowCat`, `Whiskers`, now has the name `Smokey`, but the `yellowDog` is still `Bruno`. Without getting too technical, when you copy a class instance as we did, the copy refers back to the original instance, so changing one changes the other. This is known as a **reference type**. However, when structs get copied, a new instance is created and the original is not affected. This is known as a **value type**.
Before we move on, let's look at one more difference between the two. In programming, we have what is called **inheritance** , which means that we can create another object with default values and other objects can inherit from those default values. Let's create an `Animal` class that is the base class immediately below our `Cat` class:
class Animal {
var age:Int?
}
Now, let's update our `Cat` class to inherit from it, as shown in the following code:
class Cat:Animal {
...
}
Note that we are only updating what goes directly after `Cat`. The rest of the class in the curly brackets stays the same.
Since our class now inherits from `Animal`, we have a new property called `age`. Underneath the line where we set Whiskers' name, and above our `print` statement, enter the following:
yellowCat.age = 3
So, as expected, we were able to give `Whiskers` an `age`. Let's do the same for our `Dog` struct by adding `Animal` directly after `Dog`:
struct Dog:Animal {
var name:String?
}
Once you have entered the preceding code snippet, you will see the following:
A red error displays and informs you that `Non-class type 'Dog' cannot inherit from class 'Animal'`. Therefore, we need to create a struct called `AnimalB`, since two types cannot share the same name:
struct AnimalB {
var age:Int?
}
Update your `Dog` struct from `Animal` to `AnimalB`:
struct Dog:AnimalB {
var name:String?
}
Now, you should see an error, `Inheritance from non-protocol type 'AnimalB'`, which means that our struct cannot inherit from another struct:
Inheritance is something that you can do with classes, but not with structs; this is another difference between classes and structs. There are a couple of other advanced technical differences but, for our purposes, the two described here are sufficient.
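Although structs cannot inherit, they can adopt protocols, which is the usual way to share behavior between structs. The following is a small, hypothetical sketch (the `HasAge` protocol, the `Hamster` struct, and the name `Peanut` are made up for illustration and are not part of our project):

protocol HasAge {
    var age:Int? { get set }
}

struct Hamster: HasAge {
    var name:String?
    var age:Int?
}

var pet = Hamster(name: "Peanut", age: 2)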
# Controllers and classes
When working with `UIViewController`, `UICollectionViewController`, and `UITableViewController`, you need to create a class file for each of these elements. Each file handles all of the logic and interactions that the controller sends and receives. Along with interactions, the class file is responsible for receiving data. We can see what this looks like in Playground. Let's see how this works:
1. Right-click on the Playgrounds folder and go to New File.
2. Scroll to the bottom of the template screen, select a Blank playground, and hit Next.
3. In the options screen that appears, name your new Playground `CollectionViewBasics`, and make sure that your Platform is set to iOS. Hit Next and then Create. Delete the variable and leave the import statement, then toggle open/on the Debug Panel, using either the toggle button or _command_ \+ _shift_ \+ _Y_.
Now that we are set up, let's see how we can view the UI inside of Playgrounds. Using Playgrounds really lets us focus on learning, instead of having to worry about running our project every time we want to see changes.
# Understanding Collection Views
The first thing we want to do is get access to all of the UI and Playground components that we will need. At the very top of the playground, please add the following import statements:
import UIKit
import PlaygroundSupport
The first import statement imports all of the UI elements we will need. The second import gives us access to Playground support, which allows us to add our UI elements into Playgrounds. Now let's create our first `UIViewController`; this setup is pretty much the same structure I like to use for all of my classes that are controllers. Add a line break and then add the following code:
class CollectionViewExampleController:UIViewController {
}
This code looks pretty similar to what we discussed earlier in this chapter. We created a class named `CollectionViewExampleController` and we subclass `UIViewController`. `UIViewController` is a class that Apple provides us with, and it gives us access to a lot of things. Going into all of them would take a chapter in itself, but throughout the book, we will slowly introduce you to new things we can access. Next, we need to create a `UICollectionView`, so let's add the following code inside of our curly brackets:
var collectionView:UICollectionView?
After you add the variable, your complete code should look like the following:
class CollectionViewExampleController:UIViewController {
var collectionView:UICollectionView?
}
This is the variable we will use for our `UICollectionView`. The question mark signifies that we are making it an Optional value, just as we discussed earlier. The next thing we should do is adopt the `UICollectionViewDataSource` protocol. This protocol has many methods that we can use, but there are two that are required to use `UICollectionView`. This protocol allows us to tell `collectionView` how many items we have in each section of `collectionView`, as well as creating a cell for each item we want to display in our collection view. Update your class to conform to `UICollectionViewDataSource` by adding it after the subclassing of `UIViewController`. In order to add it after, we must add a comma after `UIViewController`. When you are done, you should have the following:
class CollectionViewExampleController:UIViewController, UICollectionViewDataSource {
var collectionView:UICollectionView?
}
We now understand what this means, but we have an error:
This error is telling us that we are missing the required methods for `UICollectionViewDataSource`. Click on the red dot with the white circle in it:
When we click on the error, it lets us see more details:
We can either add the functions ourselves or we can click on the Fix button and it will add them for us.
Let's do that and we will see that we now have the two required methods needed for `UICollectionView`. Your code should look like the following:
As you see, we have the new methods, but now we have more errors.
**NOTE** : Whenever you use the fix button, it adds the code to the top of your file. When coding, I like to have my variables at the top of my file and my functions after them.
Move the `collectionView` variable above the newly added functions. Inside each function, you should see the word code:
Inside the `numberOfItemsInSection` method, delete the word code and replace it with the following:

func collectionView(_ collectionView:UICollectionView, numberOfItemsInSection section:Int) -> Int {
return 1
}
The `return 1` that we added tells our `UICollectionView` that we want to display 1 item.
Next, we need to fix our last method, `cellForItemAt`. Again, this method is responsible for displaying cells in our `UICollectionView`. Let's update this method to display a red box for every item we have in our `collectionView`. Add the following code:
func collectionView(_ collectionView:UICollectionView, cellForItemAt indexPath:IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier:"BoxCell", for:indexPath)
cell.backgroundColor = .red
return cell
}
In the first line of the code we just added, we create a reusable cell with an identifier name of `"BoxCell"` and we pass the index path of the collection view to the reusable cell. Next, we set the background color to red, and then we return the cell we created to `UICollectionView`. This method is run for every item we need, and in our case, we are returning `1` cell because we set our `numberOfItemsInSection` to `1`.
When you are done with both of these methods, your entire file should look like the following:
import UIKit
import PlaygroundSupport
class CollectionViewExampleController:UIViewController, UICollectionViewDataSource {
var collectionView:UICollectionView?
func collectionView(_ collectionView:UICollectionView, numberOfItemsInSection section:Int) -> Int {
return 1
}
func collectionView(_ collectionView:UICollectionView, cellForItemAt indexPath:IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier:"BoxCell", for:indexPath)
cell.backgroundColor = .red
return cell
}
}
All of your errors are now gone, but right now, nothing will run because we need to add a couple more things. We have a `collectionView`, but it actually needs to be created before we can use it. Typically, when working, I like to create a method that will create or set up my `collectionView` depending on what I am doing. Let's add the following code after our `collectionView` variable:
func createCollectionView() {
self.collectionView = UICollectionView(frame:CGRect(x:0, y:0, width:self.view.frame.width, height:self.view.frame.height), collectionViewLayout:UICollectionViewFlowLayout())
self.collectionView?.dataSource = self
self.collectionView?.backgroundColor = .white
self.collectionView?.register(UICollectionViewCell.self, forCellWithReuseIdentifier:"BoxCell")
self.view.addSubview(self.collectionView!)
}
In the first line of this method, we create an instance of `UICollectionView` and we set the frame and layout. The frame sets its x and y positions, as well as the width and height. Our width and height will match the size of the view we are using. In the next line, we set the data source to self; we already added all of the methods that go with the data source when we set up our `UICollectionViewDataSource`. In the next line, we set the background color of our collection view to white. Then, we register our cell in the following line. In order for the collection view to know about our cell, we have to register it. Finally, we add the collection view to our view. Nothing too complicated here, but it is new and will take a bit of time to get used to. Now we just need to make sure that we call the `createCollectionView` method. Every `UIViewController` has an entry point, and the one that we will use is a method called `viewDidLoad()`. Add the following code above `createCollectionView`:
override func viewDidLoad() {
super.viewDidLoad()
createCollectionView()
}
Here, we are overriding the `viewDidLoad()` method so that we can use it to call the methods we need. Inside this method, we are calling the new method we just created: `createCollectionView()`. We still will not be able to see our `collectionView`, because when you are working inside of your app, this is all the code you would need. Since we are in Playgrounds, we have to do one more thing in order for it to be displayed inside of Playgrounds. After the very last curly bracket, add the following code:
// Present the view controller in the Live View window
PlaygroundPage.current.liveView = CollectionViewExampleController()
Your entire class should look like the following:
import UIKit
import PlaygroundSupport
class CollectionViewExampleController:UIViewController, UICollectionViewDataSource {
var collectionView:UICollectionView?
override func viewDidLoad(){
super.viewDidLoad()
createCollectionView()
}
func createCollectionView() {
self.collectionView = UICollectionView(frame:CGRect(x:0, y:0, width:self.view.frame.width,
height:self.view.frame.height), collectionViewLayout:UICollectionViewFlowLayout())
self.collectionView?.dataSource = self
self.collectionView?.backgroundColor = .white
self.collectionView?.register(UICollectionViewCell.self, forCellWithReuseIdentifier:"BoxCell")
self.view.addSubview(self.collectionView!)
}
func collectionView(_ collectionView:UICollectionView, numberOfItemsInSection section:Int) -> Int {
return 1
}
func collectionView(_ collectionView:UICollectionView, cellForItemAt indexPath:IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier:"BoxCell", for:indexPath)
cell.backgroundColor = .red
return cell
}
}
// Present the view controller in the Live View window
PlaygroundPage.current.liveView = CollectionViewExampleController()
Now, you are probably wondering why you still cannot see anything, and that is because we need to open up the Assistant Editor. Click the following icon to do so:
When you do this, you will see your collection view and 1 red box on the screen:
Note that you might have to click the play button at the bottom of your code to get it to show up:
Now that we have covered the basics, let's go back to our app and set up a Collection View using a storyboard. Using Playgrounds is a great way to work things out with your code before coding them in your app.
# Creating our controller
Now that we better understand how to create `UICollectionView`, let's add one to our project. Open your project and do the following steps:
1. Right-click inside of the `LetsEat` folder and select New File.
2. Inside of the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next:
3. You should see an options screen. Add the following:
New file:
* * Class: `ExploreViewController`
* Subclass: `UIViewController`
* Also create XIB: Unchecked
* Language: `Swift`
4. Upon hitting Next, you are asked to create this file. Select Create and then your file should look like mine.
Let's review this `ExploreViewController` class file and also do some maintenance inside of the file. We created this file to use with the `UIViewController` that we created when we initially set up our UI.
Note that there are three methods in this file: `viewDidLoad()`, `didReceiveMemoryWarning()`, and `prepare()` (which is commented out). Let's delete both `didReceiveMemoryWarning()` and `prepare()`, as we do not need them at this time:
What remains is `viewDidLoad()`; this method is called only once during the life of the View Controller. Let's see what this means by updating `viewDidLoad()` to the following:
override func viewDidLoad() {
super.viewDidLoad()
print("Hello Explore View Controller")
}
Now, run the project by hitting the Play button (or using _command_ \+ _R_ ). You should now only see `Hello Explore View Controller` inside of the Debug panel.
# Understanding Collection View controllers and Collection View cells
As noted earlier in the book, Collection View Controllers allow us to display our data in a grid. The individual items inside of Collection Views are called cells, and these cells are what show our data. This data can be anything from an image to text, or both an image and text. You have complete control over what your Collection View cell can display. Our Collection View Controller is responsible for making sure the correct number of cells is displayed.
Let's now connect our file, `ExploreViewController`, with our `UICollectionView` in the storyboard. To do this, we use the Assistant Editor (or split screen), which we access by doing the following:
1. Open `Explore.storyboard`.
2. Close the Navigator panel using the hide Navigator toggle or _command_ \+ _0._
3. Close the Utilities panel by hitting the Utilities toggle or use _command_ \+ _alt_ \+ _0._
4. Select the Assistant Editor or use _command_ \+ _alt_ \+ _enter._
You should now see `Explore.storyboard` on the left and `ExploreViewController.swift` on the right:
5. Add the following inside of your `ExploreViewController.swift` file on the line after the following code:
class ExploreViewController: UIViewController {
@IBOutlet weak var collectionView:UICollectionView!
`IBOutlet` is a way to connect to a UI element. We have a Collection View on our `UIViewController`; now, we are creating a variable that allows us to hook into it.
6. After you create the variable, you should see a small circle to the left of the variable:
7. When you hover over it, you should see a plus button appear inside of the circle:
Click on it and drag this to your Collection View inside of your `UIViewController`:
8. Once you release the mouse button, you should see the circle become filled:
9. Select the Standard Editor or use _command_ \+ _enter_.
In your scene, select your Collection View. Then, in your Utilities Panel, select the Connections inspector, which is the last icon on the right. Under the Outlets section, we now add back `dataSource` and `delegate`, the same ones we removed earlier:
The `dataSource` property is what is used to supply the data for our Collection View, so we need to pass whatever data we have to this property. On the other hand, the `delegate` property, which supplies the behavior, does not require us to supply anything as it receives interactions that happen within our Collection View.
We need to update our data source for our Collection View; let's add this now:
10. Click and drag the `dataSource` property to the Explore View Controller in your Outline view:
11. Click and drag the delegate property to the Explore View Controller in your Outline view:
# Getting data into Collection View
Having boxes is great, but having data with beautiful pictures is so much more appealing. Let's get some data displaying inside of our Collection View:
1. Use _command_ \+ _shift_ \+ _O_ , which opens a small window called Open Quickly. Inside the window, type `ExploreView` and hit _enter_ to select the `ExploreViewController.swift` file.
2. Update our class definition from the `ExploreViewController:UIViewController` class to the following:
class ExploreViewController:UIViewController, UICollectionViewDataSource
# Understanding the data source
Whenever we use a Collection View to get data, we must conform to a protocol. A protocol is a set of methods that a type promises to implement; its methods can be required or optional. For our Collection View, we need to implement three data source methods, plus one that supplies the section header. So, let's add the following four functions (each beginning with `func`) after the closing curly bracket of `viewDidLoad()`:
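Assembled from the breakdown that follows (Parts A through I), the block you add should look roughly like this sketch (the `@IBAction` at the end is the line mentioned in Part I):

func collectionView(_ collectionView: UICollectionView, viewForSupplementaryElementOfKind kind: String, at indexPath: IndexPath) -> UICollectionReusableView {
    // Dequeue the header view we set up in the storyboard
    let headerView = collectionView.dequeueReusableSupplementaryView(ofKind: kind, withReuseIdentifier: "header", for: indexPath)
    return headerView
}

func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    // Dequeue the cell we named exploreCell in the storyboard
    return collectionView.dequeueReusableCell(withReuseIdentifier: "exploreCell", for: indexPath)
}

func numberOfSections(in collectionView: UICollectionView) -> Int {
    return 1
}

func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    return 20
}

@IBAction func unwindLocationCancel(segue:UIStoryboardSegue) {}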
Let's break down the code to better understand what we are doing:
* **Part A** : This first method is what we need to add a header to our Collection View:
collectionView(_:viewForSupplementaryElementOfKind:at:)
* **Part B** : The identifier is what we added when we were designing in earlier chapters. This identifier helps Xcode know what view we are referring to:
let headerView = collectionView.dequeueReusableSupplementaryView(ofKind: kind, withReuseIdentifier: "header", for: indexPath)
return headerView
* **Part C** : Our next method gets called for every item we need. Therefore, in our case, it gets called 20 times:
collectionView(_:cellForItemAt:)
* **Part D** : Here, we are creating a cell every time `collectionView(_:cellForItemAt:)` is called. The identifier, `exploreCell`, is the name we gave it in the storyboard; so, this is the cell that is grabbed and used inside of our Collection View:
return collectionView.dequeueReusableCell(withReuseIdentifier: "exploreCell", for: indexPath)
* **Part E** : This method tells our Collection View how many different sections we want to display:
numberOfSections(in collectionView: UICollectionView)
* **Part F** : Here, we are telling our Collection View that we only want one section:
return 1
* **Part G** : Our next method tells our Collection View how many different items we are going to display inside of the section we set up:
collectionView(_:numberOfItemsInSection:)
* **Part H** : We are telling our Collection View that we want to display 20 items:
return 20
* **Part I** : Finally, we add this line back as it was removed. We use this function to dismiss our location modal when you hit the Cancel button:
@IBAction func unwindLocationCancel(segue:UIStoryboardSegue) {}
Let's build and run the project by hitting the Play button (or using _command_ \+ _R_ ). We are now finished.
# Summary
In this chapter, we covered quite a few new topics, as well as a lot of code. As long as you have a basic understanding of what we covered in this chapter, you will be OK to continue. A lot of these concepts and ideas will be covered again, as these are common design patterns in iOS.
We learned about the MVC architecture. Then, we covered classes and structures, along with their similarities and differences. Finally, we looked at Controllers and classes and how they work. We then created our Controller for our storyboard file.
In the next chapter, we will look at how to get local data into our app. We will also look at how to pass data from the Explore view to the restaurant list.
# Getting Data into Our Grid
Working with data is very important but, when teaching beginners, I like to do it in steps so that this process is a bit easier. In this chapter, we are going to work with data that is stored on the device. Later in this book, we will work with data that we get from a feed. Feed data means it is coming from a website URL, and using data from a feed means you can update the data without having to update the app.
We will cover the following in this chapter:
* What is a model?
* What is a plist?
* How do we create a plist?
* Working with the manager class
In the last chapter, we got the Explore listing up, but we have no data. We need to create a model object that will represent the information that our cell can use to display data.
# Model
Typically, when developing your model, the best way to start when you have a design is to look at the data associated with your view. Let's look at our app design again:
The items (`UICollectionViewCell`) displayed in the grid are each supported by some data. Looking at the design, we see that each item needs an image and a name (cuisine). Therefore, we need to create a model, called `ExploreItem`, with two properties, specifically `image` and `name`.
For the model, we will create three files: `ExploreData.plist`, `ExploreItem.swift`, and `ExploreDataManager.swift`.
# ExploreData.plist
The first file, `ExploreData.plist`, has already been created for you and can be found in your project inside of the `Explore` folder. This file contains all of the data we need for a list of cuisines. Create a new folder called `Model` and drag this file into it.
In the file, there is an array of dictionary items. Each item has a cuisine name and image for that particular cuisine. Let's take a look at the first few elements of this file:
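For reference, assuming the first cuisines in your copy of the file are All and American (the exact list may vary), the first entries look something like the following when written out as dictionaries:

["name": "All", "image": "all.png"]
["name": "American", "image": "american.png"]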
We will load this file into our Explore list, and this is what we use to filter restaurants by a specific cuisine.
# ExploreItem.swift
Next, we need to create a file to represent our data. Our Explore list displays an image and a name that match the corresponding image and name that we see in our `ExploreData.plist` file. Let's create this `ExploreItem` file now:
1. Right-click on the `Model` folder and select New File.
2. Inside the template screen, select iOS at the top and then Swift File, then hit Next.
3. Name the file `ExploreItem` and then hit Create.
The only thing in this file is an `import` statement.
The `import` statement allows us to import other libraries into our file, giving us the ability to see inside of these libraries and use properties from them. Foundation is one of Apple's core frameworks, and it has a bunch of tools that we can use while we program.
Since we do not need to use inheritance, we are going to make this file a `struct`. Add the following to your file:
struct ExploreItem {
}
Now that we have declared it a `struct`, let's add the two properties we need for this file: an image and a name. We are going to make both of these properties String data types. For the name, this makes sense, because it is text that we are displaying in our Collection View. However, for the image, using a String data type might not seem as obvious. The reason we are doing so is that we load the image by its file name. For example, `american.png` is the file name for the American cuisine image. Add the following to the inside of your curly brackets (`{ }`):
var name:String
var image:String
We have now added two properties, one for the image and one for the name. Both are non-optional Strings, so every `ExploreItem` must be given a value for each of them when it is created; we will do this in a custom initializer shortly.
Your file should look like the following:
struct ExploreItem {
var name:String
var image:String
}
We next need to add one more thing to this file.
We take the dictionary data we get from the plist and create an `ExploreItem` for each item. Each dictionary looks like the following:
["name": "All", "image": "all.png"]
We need to pass this dictionary object to our `ExploreItem`. To do this, we will create a custom initializer that takes a dictionary object as its parameter. Then, we can set each of our two properties, image and name, from the matching items in the dictionary.
When you create a struct, by default, you get an `init()` method that takes all of the properties you created as parameters.
For example, our `ExploreItem` will have a default initializer that looks like the following:
init(name:String, image:String)
Instead of using this initializer, we will create our own to pass a dictionary object into it.
To create a **custom initializer** , we are going to use what is called an **extension** , which gives us the ability to extend our code and add more functionality to it. Inside of your `ExploreItem` file, after the ending curly bracket, add the following:
extension ExploreItem {
}
Next, let's create our custom initializer, which takes a dictionary object into the parameters. Add the following between the curly brackets of the extension we just added:
init(dict:[String:AnyObject]) {
}
We have now created an `init()` method that accepts a dictionary object as its parameter. As stated in the preceding section, we know that our data looks like the following:
["name": "All", "image": "all.png"]
To pass each value, we need to use the following dictionary syntax:
dict["name"]
dict["image"]
Let's proceed by mapping the dictionary data to our two properties. Add the following inside of the `init()` method's curly brackets:
self.name = dict["name"] as! String
self.image = dict["image"] as! String
Since our dictionary value is `AnyObject`, we have to specify that our data is a String by using `as! String` at the end.
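As a side note, a force cast with `as!` will crash at runtime if a key is missing or holds the wrong type. A more defensive, hypothetical version of the same two lines could use optional casting with a default value instead:

self.name = dict["name"] as? String ?? ""
self.image = dict["image"] as? String ?? ""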
We now have our data item set up for our Explore view (cuisine list), and the extension you added should look like the following:
extension ExploreItem {
init(dict:[String:AnyObject]) {
self.name = dict["name"] as! String
self.image = dict["image"] as! String
}
}
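One nice side effect of placing the custom initializer in an extension, rather than inside the struct body, is that Swift keeps the automatically generated memberwise initializer as well. As a quick, hypothetical check, both of the following lines would compile:

let itemFromDict = ExploreItem(dict: ["name": "All" as AnyObject, "image": "all.png" as AnyObject])
let itemFromMembers = ExploreItem(name: "All", image: "all.png")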
Let's now focus on our data manager. We want our data manager to handle parsing the plist and giving us the data. Since our data will be coming from a plist, we need to have a method that will get the data from the plist first.
# ExploreDataManager.swift
In our app, the data manager is responsible for communicating with a service (for example, the Yelp API), as well as manipulating the data from the service. Once the data from the service is received, the data manager will create model objects that we can use for our app.
In some apps, these two jobs are handled by the controller. However, rather than putting that responsibility on our controller, we limit the controller to talking only to the manager, so that it never knows anything about the service.
As you get comfortable with programming, you will find that there are a few different types of architectures. We are sticking as closely as we can to MVC because it is what Apple uses to build iOS apps.
Let's create the `ExploreDataManager` file now:
1. Right-click on the `Model` folder and select New File.
2. Inside of the template screen, select iOS at the top and then Swift File, then hit Next.
3. Name this file `ExploreDataManager` and hit Create.
Since we need to define our class first, add the following under the `import` statement:
class ExploreDataManager {
}
Here, we used a `class` instead of a `struct`, because this is a file that we will inherit from later. You do not always necessarily know whether you are going to inherit from another class; therefore, you can default to a struct and then change to a class if you realize that you need to inherit from another class.
Now, we need to load data from the `ExploreData.plist` file. Add the following method to our `ExploreDataManager` class:
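Pieced together from the breakdown that follows (Parts A through F), the method looks roughly like this sketch:

fileprivate func loadData() -> [[String: AnyObject]] {
    // Both conditions must succeed; otherwise we return an array with an empty dictionary
    guard let path = Bundle.main.path(forResource: "ExploreData", ofType: "plist"),
        let items = NSArray(contentsOfFile: path)
        else { return [[:]] }
    return items as! [[String: AnyObject]]
}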
Let's break down this method:
* **Part A** : This function starts with the `fileprivate` keyword. Think of `fileprivate` as a way to give your methods an access level. If you do not use `fileprivate`, it defaults to internal, which means the method can be used anywhere inside your app's module, not just within this file:
fileprivate
* **Part B** : Our `loadData()` function is returning something back. `->` states that our function has a return value. The return value for this method is an array of dictionary objects. Our dictionary will have a key to a String and the value will be `AnyObject`:
[[String: AnyObject]]
`AnyObject` lets us take any data type that comes back. Therefore, we can have one item give us an Int, while another gives us back a String.
You can also use `Any`, which can represent an instance of any type at all, including function types and optional types (see the short illustration after this breakdown).
* **Part C** : Inside of the function, we are using what is known as a `guard` statement. A `guard` statement is designed to exit a method or function early when a given condition is not met. Our `guard` statement checks two conditions, and both need to succeed:
guard let path = Bundle.main.path(forResource: "ExploreData", ofType: "plist")
The first statement checks to see whether the `ExploreData.plist` file exists in our app bundle. If the file is found, the statement returns `true`, and the file path is set to the constant path. Our next statement, which is separated by a comma, is discussed in _Part D_ , as follows.
* **Part D** : In this statement, if the first statement returns `true`, we take the `path` constant, and then we check the contents inside of the file.
let items = NSArray(contentsOfFile: path)
Let's take a look at the data in our file again:
If you look at the root of this plist, you see that its type is an array. `NSArray` has a method that we can use to get the data out of our file and put it into an array with which we can work.
Typically, plists come in two types: an array or a dictionary. Currently, neither the standard Swift array nor dictionary gives us a method that allows us to get data out of a file, so we need to utilize `NSArray` (as we are here) or `NSDictionary`, respectively, to do that.
This statement now checks to verify that we are, indeed, working with an array, and then returns `true` if so. If both conditions return `true`, our array inside of our plist is given to us. The array is set to our constant `items`.
`NSArray` and `NSDictionary` come from Objective-C (Apple's main programming language for building iOS apps); they have some extra features. Just know that they are similar to their Swift counterparts without the `NS`.
* **Part E** : Here, if any of the conditions are `false`, we return an array with an empty dictionary:
else { return [[:]] }
Otherwise, we run the following `return`.
* **Part F** : This `return` gives back an array of dictionary items. Once we have our data loaded out of the plist, we can create our `ExploreItem`. Therefore, we need a method so that we can access all of our Explore items and return an array of items:
return items as! [[String : AnyObject]]
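As a small, hypothetical illustration of the difference between `Any` and `AnyObject` (this is not code we add to the project):

let anything:[Any] = [42, "text", true]                      // structs and other value types are fine in Any
let objectsOnly:[AnyObject] = [NSString(string: "Explore")]  // AnyObject can only hold class instances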
# Getting data
To get our data out of the plist, add the following method above `loadData()` inside of `ExploreDataManager`:
func fetch() {
for data in loadData() {
print(data)
}
}
Our `fetch()` method is going to loop through our dictionary data from the plist. Here is what your file should look like now:
Inside of your `ExploreViewController.swift` file, delete the previous `print` statement that was inside your `viewDidLoad()` and replace it with the following:
let manager = ExploreDataManager()
manager.fetch()
Let's build and run the project by hitting the Play button (or use _command_ \+ _R_ ). You will notice that, in the Debug Panel, every time our loop runs, it gives a dictionary object, such as the following:
This output is exactly what we want. Now, inside of `ExploreDataManager`, add the following above our `fetch` method:
fileprivate var items:[ExploreItem] = []
Next, inside of `fetch()`, update our `for...in` loop by replacing `print(data)` with the following:
items.append(ExploreItem(dict: data))
Your file should look like the following:
Let's build and run the project by hitting the Play button (or use _command_ \+ _R_ ). The project should build and run without errors, and our manager now holds an array of Explore items.
We currently have our data, and we have cells. However, we need to get our data to our cells so that we can see the image and name. Let's open up `Explore.storyboard` and update our `exploreCell`.
# Connecting to our cell
Now that we have our cell set up, we need to create a file so that we can connect to our cells:
1. Right-click on the `Explore` folder and create a new group called `View` in the Navigator panel. Then, right-click on `View` and select New File.
2. Inside of the template screen, select iOS at the top, and then Cocoa Touch Class, then hit Next.
3. You should now see an options screen. Add the following:
New file:
* * Class: `ExploreCell`
* Subclass: `UICollectionViewCell`
* Also create XIB: Unchecked
* Language: `Swift`
4. Once you hit Next, you are asked to create this file. Select Create and your file should look like mine:
import UIKit
class ExploreCell: UICollectionViewCell {
}
5. Open `Explore.storyboard` and select `exploreCell` in the Outline view.
6. In the Utilities Panel, select the Identity inspector and, under Custom Class, type `ExploreCell`, then hit _Enter_.
# Hooking up our UI with IBOutlets
To access our UI elements, we need to connect them to `IBOutlets`. To do so, perform the following steps:
1. Open the `ExploreCell.swift` file in the Navigator panel (or use _command_ \+ _Shift_ \+ _O_ , type `ExploreCell`, and then hit _Enter_ ).
2. Inside of the class declaration, add the following:
@IBOutlet var lblName:UILabel!
@IBOutlet var imgExplore:UIImageView!
3. Open `Explore.storyboard` and select your `exploreCell` again using the project Outline.
4. In the Utilities panel, select the Connection inspector. You should see both variables we just created, lblName and imgExplore, under Outlets:
5. Click-drag from imgExplore to the UIImageView we put in our cell:
6. Repeat this step for lblName by click-dragging from lblName to the UILabel in our cell:
Great! Now that we have our cell set up, let's pull data into it. In our `ExploreDataManager`, add these two methods above the `loadData()` method:
func numberOfItems() -> Int {
return items.count
}
func explore(at index:IndexPath) -> ExploreItem {
return items[index.item]
}
We use the first method, `numberOfItems()`, to update the total number of items in our Collection View. The second method, `explore(at index:IndexPath)`, is called for each item we create in our Collection View. Then, we use this to pass the data to our cell to display the name and the image.
Now that we have these two methods added, let's open up our `ExploreViewController` file. We currently have the following inside of our `viewDidLoad()`:
let manager = ExploreDataManager()
manager.fetch()
Let's move `let manager` underneath our Collection View outlet so that it is outside `viewDidLoad()`; this way, we can access it anywhere within the class as opposed to only within the function. You should now have this before `viewDidLoad()`:
@IBOutlet var collectionView: UICollectionView!
let manager = ExploreDataManager()
Inside of `viewDidLoad()`, only `manager.fetch()` remains. Next, we need to update `numberOfItemsInSection()` to say the following:
func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
return manager.numberOfItems()
}
Therefore, instead of returning 20, we are going to get the number of items from our plist.
Finally, inside of `cellForItemAt()`, revise the `let` statement in the third required method before the `return` cell by adding `as! ExploreCell`, as follows:
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "exploreCell", for: indexPath) as! ExploreCell
Then, add the following after the code snippet you just added and before the `return` cell:
let item = manager.explore(at: indexPath)
cell.lblName.text = item.name
cell.imgExplore.image = UIImage(named: item.image)
The preceding code gets an `ExploreItem` for each cell in our Collection View and passes the data to the cell. Finally, for your return, add the following:
return cell
Let's build and run the project by hitting the Play button (or use _command_ \+ _R_ ). You should now see your Collection View come to life with images and text:
The images are not perfect, but we will fix them later. Now that we have our cells displaying content, we need to make it so that, when you select a cell, it goes to our restaurant listing.
Let's build and run the project by hitting the Play button (or use _command_ \+ _R_ ). You should now be able to select your cell, and it goes to what will be your restaurant listing page. This page will be empty for now, so let's work on this next.
# Restaurant listing
Now that we have our Explore listing going to our restaurant listing, we need to get our Collection View connected to our `RestaurantListViewController`. To do so, perform the following steps:
1. Right-click inside of the `Restaurants` folder and select New File.
2. Inside of the template screen, select iOS at the top and then Cocoa Touch Class, then hit Next. You should now see an options screen. Add the following under New file:
* * Class: `RestaurantListViewController`
* Subclass: `UIViewController`
* Also create XIB: Unchecked
* Language: `Swift`
3. After hitting Next, you will be asked to create this file.
4. Select Create.
5. Let's delete both `didReceiveMemoryWarning()` and `prepare()` (which has been commented out), as we do not need them at this time.
6. Open `Restaurants.storyboard`.
7. Select the `UIViewController`, then in the Utilities Panel, select the Identity inspector, which is the third icon from the left.
8. Under Custom Class, in the Class drop-down menu, select `RestaurantListViewController` and hit _enter_.
Note that, when working with IBOutlets, it is easier to have the storyboard and View controller next to each other. In order to do this, we have to close a few windows:
* If your Navigator Panel is currently open, close it by clicking on the hide Navigator toggle or _command_ \+ _0_.
* If your Utilities Panel is currently open, close it by clicking on the Utilities toggle or use _command_ \+ _alt_ \+ _0_.
9. Select the Assistant editor or use _command_ \+ _alt_ \+ _enter_.
10. You should now see `Restaurants.storyboard` on the left side and `RestaurantListViewController.swift` on the right. Add the following after the class declaration:
@IBOutlet var collectionView:UICollectionView!
11. Once you create the variable, you'll see a small circle to the left of the variable.
12. When you hover over it, you'll see a plus button appear inside of the circle. Click on it and drag this to your Collection View inside of your `UIViewController`.
13. Once you release it, you'll see the circle become filled:
It's time to display something inside of our Collection View.
14. In your scene, select your Collection View. Then, in your Utilities panel, select the Connections inspector, which is the last icon on the right. Under the Outlets section, we now add back `dataSource` and `delegate`, which are the same ones we removed earlier:
15. Update the class definition inside of `RestaurantListViewController`. You currently have `RestaurantListViewController:UIViewController`; update it to the following:
class RestaurantListViewController:UIViewController, UICollectionViewDataSource
As you learned earlier with our Explore grid, we are required to implement `numberOfSections()`, `numberOfItemsInSection()`, and `cellForItemAt()` in order to use a Collection View. Therefore, add the following three methods inside of `RestaurantListViewController`:
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
return collectionView.dequeueReusableCell(withReuseIdentifier: "restaurantCell", for: indexPath)
}
func numberOfSections(in collectionView: UICollectionView) -> Int {
return 1
}
func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
return 10
}
Let's build and run the project by hitting the play button (or use _command_ \+ _R_ ) to see what happens:
Instead of a grid, as we had for Explore, our restaurant list displays a single column of cells. However, when the restaurant list displays on an iPad, it shows a grid instead. This flexibility is one of the benefits of using a Collection View. We will set up our restaurant list cells, along with displaying their data, later in this book.
# Summary
In this chapter, we learned how to create a model object and how to tie that data to a plist. We also looked at what a plist is. We learned how to create a plist as well as our first manager class, which takes care of the data. In our data manager, we covered getting data from a plist and how to represent that data as a model object.
In the next chapter, we will look at Table Views and how they are similar to—and yet different from—Collection Views.
# Getting Started with the List
When I started doing iOS development, I first worked with Table Views. At the time, Collection Views hadn't been introduced yet. As you progress in iOS development, you will work with a lot of Table and Collection Views. You'll begin with just the basics to allow you to use them, and then you'll slowly progress into more advanced Table and Collection Views.
The reason that I bring this up is that, by the end of this chapter, you may feel as though things are not clicking. This is perfectly normal. However, the more you go through the steps in these chapters, the more they will become second nature to you.
For those of you who've not done iOS development, Table Views are great for presenting a list of data. The iPhone's Mail app is an example of a Table View.
In this chapter, we are going to work with our first Table View. In our _Let's Eat_ app, users select a specific location to look for restaurants.
In this chapter, we will cover the following topics:
* Understanding Table Views
* Creating our first property list (plist)
* Creating our location data manager
* Cleaning up our file structure
# Understanding Table Views
The first thing we want to do is get access to all of the UI and Playground components that we will need. Create another blank Playground, as we did in the previous chapter. At the very top of the playground, please add the following two import statements:
import UIKit
import PlaygroundSupport
The first import statement imports all of the UI elements we will need. The second import gives us access to Playgrounds support; this will allow us to add our UI elements into Playgrounds. Now, let's create our first `UIViewController`. This setup is pretty much the same structure I like to use for all of my classes that are controllers. Add a line break and then add the following code:
class TableViewExampleController: UIViewController {
}
This code looks pretty similar to what we did in the last chapter. We created a class named `TableViewExampleController` and we subclassed `UIViewController`. Next, we need to create a `UITableView` and an array of data to display in our `tableView`. Let's do that next by adding the following inside of our curly braces:
var tableView:UITableView?
var names:[String] = ["Deanna","Corliss","Deyvn"]
After you add the variable, your complete code should look like the following:
class TableViewExampleController: UIViewController {
var tableView:UITableView?
var names:[String] = ["Deanna","Corliss","Deyvn"]
}
This is the variable we will use for our `UITableView`. Here, we also have an array with three names. The next thing we should do is adopt the `UITableViewDataSource` protocol. This protocol has many methods that we can use, but there are two that are required for us to be able to use a `UITableView`. This protocol allows us to tell the `tableView` how many items we have in each section of the `tableView`, as well as create a cell for each item we want to display in our `tableView`. Update your class to conform to `UITableViewDataSource` by adding it after the subclassing of `UIViewController`. To add it after, we must add a comma after `UIViewController`. When you are done, you should have the following:
class TableViewExampleController: UIViewController, UITableViewDataSource {
var tableView:UITableView?
var names:[String] = ["Deanna","Corliss","Deyvn"]
}
We now understand what this means, but we have an error:
This error is telling us that we are missing the required methods for `UITableViewDataSource`. Next, click on the red dot with the white circle in it:
When you click on the error, it will give us the option to see more details:
We can either add the functions ourselves, or we can click on the Fix button and it will add them for us.
Let's do that. You will see that we now have the two required methods that are needed for a `UITableView`. Your code should look like the following:
As you can see, we have the new methods, but now we have more errors.
Whenever you use the Fix button, it adds the code to the top of your file. When coding, I like to have my variables at the top of my file and my functions after them.
Move the `tableView` and `names` variables above the newly added functions. Inside each function, you should see the word code:
Inside the `numberOfRowsInSection` method, delete the word 'code' and replace it with the following:

func tableView(_ tableView:UITableView, numberOfRowsInSection section:Int) -> Int {
return names.count
}
The return value tells our `UITableView` how many rows we want to display and, since we have three names in our array, we should see three cells in our `tableView`.
Next, we need to fix our last method, `cellForRowAt`. Again, this method is responsible for displaying cells in our `UITableView`. Let's update this method so that it displays one of our names for every item we have in our `tableView`. Add the following code:
func tableView(_ tableView:UITableView, cellForRowAt indexPath:IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier:"Cell", for:indexPath) as UITableViewCell
let name = names[indexPath.row]
cell.textLabel?.text = name
return cell
}
In the first line of the code we just added, we create a reusable cell with an identifier name of `"Cell"`, and we pass the index path of the `tableView` to the reusable cell. Next, we grab the name for this row from our array and set the cell's text label to it. Finally, we return the cell we created to the `UITableView`.
This method gets run for every item we need. In our case, it runs three times and returns one cell per name, because our array has three items.
When you are done with both of these methods, your entire file should look like the following:
import UIKit
import PlaygroundSupport
class TableViewExampleController: UIViewController, UITableViewDataSource {
var tableView:UITableView?
var names:[String] = ["Deanna","Corliss","Deyvn"]
func tableView(_ tableView:UITableView, numberOfRowsInSection section:Int) -> Int {
return names.count
}
func tableView(_ tableView:UITableView, cellForRowAt indexPath:IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier:"Cell", for:indexPath) as UITableViewCell
let name = names[indexPath.row]
cell.textLabel?.text = name
return cell
}
}
All of your errors are now gone, but right now, nothing will run because we need to add a couple more things. We have a `tableView` variable, but it actually needs to be created before we can use it. Let's add the following code after our `tableView` variable:
func createTableView() {
self.tableView = UITableView(frame:CGRect(x:0, y:0, width:self.view.frame.width, height:self.view.frame.height))
self.tableView?.dataSource = self
self.tableView?.backgroundColor = .white
self.tableView?.register(UITableViewCell.self, forCellReuseIdentifier:"Cell")
self.view.addSubview(self.tableView!)
}
In the first line of this method, we create an instance of a `UITableView` and we set up the frame. The frame sets its x and y positions, as well as the width and height. Our width and height will match the size of the view we are using. In the next line, we set the data source to self. We already added all of the methods that go with the data source earlier when we set up our `UITableViewDataSource`. In the line after that, we set the background color of our `tableView` to white. Next, we register our cell. For the `tableView` to know about our cell, we have to register it. Finally, we add the `tableView` to our view. There's nothing too complicated here, but it is new and will take a bit of getting used to. Now, we just need to make sure that we call the `createTableView` method. Add the following code above `createTableView`:
override func viewDidLoad() {
super.viewDidLoad()
createTableView()
}
Here, we are overriding the `viewDidLoad()` method so that we can use it to call the methods we need. Inside of this method, we are calling the new method we just created: `createTableView()`. We will still not be able to see our `tableView` because, when you are working inside of your app, this is all the code you would need. However, since we are in Playgrounds, we have to do one more thing in order for it to be displayed inside of Playgrounds. After the very last curly brace, add the following code:
// Present the view controller in the Live View window
PlaygroundPage.current.liveView = TableViewExampleController()
Your entire class should look like the following:
import UIKit
import PlaygroundSupport
class TableViewExampleController:UIViewController, UITableViewDataSource {
var tableView:UITableView?
var names:[String] = ["Deanna","Corliss","Deyvn"]
override func viewDidLoad(){
super.viewDidLoad()
createTableView()
}
func createTableView(){
self.tableView = UITableView(frame:CGRect(x:0, y:0, width:self.view.frame.width,
height:self.view.frame.height))
self.tableView?.dataSource = self
self.tableView?.backgroundColor = .white
self.tableView?.register(UITableViewCell.self, forCellReuseIdentifier:"Cell")
self.view.addSubview(self.tableView!)
}
func tableView(_ tableView:UITableView, numberOfRowsInSection section:Int) -> Int {
return names.count
}
func tableView(_ tableView:UITableView, cellForRowAt indexPath:IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier:"Cell", for:indexPath) as UITableViewCell
let name = names[indexPath.row]
cell.textLabel?.text = name
return cell
}
}
PlaygroundPage.current.liveView = TableViewExampleController()
Now, you are probably wondering why you still cannot see anything, and that is because we need to open up the Assistant Editor. Click the following icon to do so:
By doing this, you will see your `tableView` and the names we added to our array on the screen:
Please note that you might have to click the Play button at the bottom of your code to get it to show up:
Now that we have covered the basics of Table View, let's go back to our app and set up one using a storyboard.
# Creating our Location View Controller class
Now that we understand Table View more, we want to get locations displaying inside our Table View:
Before we start, create three new folders inside the Location folder – Controller, View, and Model. As we have previously done, right-click on the `Location` folder and hit New Group to create a new folder.
Next, we need to create a Location View Controller class that we can use with our `UIViewController`:
1. Right-click on the `Controller` folder inside of the `Location` folder and select New File.
2. In the Choose a template for your new file screen, select iOS at the top and then Cocoa Touch Class. Then, hit Next.
3. In the Options screen that appears, add the following under New file:
* * Class: `LocationViewController`
* Subclass: `UIViewController`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click on Next and then Create.
Next, we need to connect our View Controller with our class:
1. Select `Locations.storyboard`.
2. Then, select the View Controller.
3. Now, in the Utilities Panel, select the Identity inspector.
4. Under Custom Class, in the Class drop-down menu, select `LocationViewController` and hit _Enter._
# Connecting our Table View with our Location View Controller
Currently, we have no way to communicate with our Table View and our Location View Controller. Let's see how we can connect these two:
1. Open the `LocationViewController.swift` file and add the following code after the class declaration:
@IBOutlet weak var tableView:UITableView!
2. Save the file by hitting _command_ \+ _S_. Your file should look like the following, with an empty circle next to the variable:
Before we get started, we are going to clean up our `LocationViewController.swift` file. Delete everything after `viewDidLoad()`:
Next, let's connect our table view to the file:
1. Open `Locations.storyboard` again and make sure that you have the Location View Controller selected in the Outline view.
2. Then, in the Utilities Panel, select Connections inspector. Under the Outlets section, you will see an empty circle, tableView:
Click and drag the empty circle to the Table View in the storyboard:
We have now connected our Table View to our Location View Controller.
# Digging into our Table View code
To get data into our Table View, we must conform to a protocol, as we did with the Collection View. In this case, we must conform to `UITableViewDataSource`:
1. First, we need to update our `class` declaration. We currently have the following:
class LocationViewController: UIViewController
2. We now need to add `UITableViewDataSource`, as follows:
class LocationViewController: UIViewController, UITableViewDataSource
# Adding the data source and delegate
As discussed in the previous chapter, we need to add a data source and delegate to our Table View. Our Table View uses **dynamic cells**, which we also need to set up:
1. Select Table View in the Outline view, and then Connections inspector in the Utilities Panel.
2. Click on and drag from `dataSource` to the Location View Controller in the Outline view:
3. Repeat with the `delegate` property:
4. Now, select the Table View and then in the Utilities Panel, select the Attributes inspector, if not already selected, and make sure you have the following values:
* * Style: `Basic`
* Identifier: `locationCell` (already named for you)
* Selection: `Gray`
* Accessory: `Disclosure indicator`
Next, for us to display anything in the Table View, we need to adopt the `UITableViewDataSource` protocol and implement the following three methods. Add the following after the closing curly brace of `viewDidLoad()`:
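Assembled from the breakdown that follows (Parts A through G), the block should look roughly like this sketch:

func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return 15
}

func numberOfSections(in tableView: UITableView) -> Int {
    return 1
}

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    // Dequeue the basic cell we named locationCell in the storyboard
    let cell = tableView.dequeueReusableCell(withIdentifier: "locationCell", for: indexPath) as UITableViewCell
    cell.textLabel?.text = "A cell"
    return cell
}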
Let's break down the code to understand what we are doing:
* **Part A** : This method tells our Table View how many rows we want to display:
tableView(_:numberOfRowsInSection:)
* **Part B** : Here, we tell our Table View that we want to display `15` rows:
return 15
* **Part C** : This method tells our Table View how many sections we want to display. Sections in Table Views are typically used as headers, but they can be used however you choose:
numberOfSections(in:)
* **Part D** : We tell our Table View that we only want one section:
return 1
* **Part E** : Our third and final method gets called for every item we need. Therefore, in our case, it gets called 15 times:
tableView(_:cellForRowAt:)
* **Part F** : Here, we create a cell every time _Part E_ is called, either by taking one from the queue, if available, or by creating a new cell. The identifier, `locationCell`, is the name given to the cell in the storyboard. Therefore, we are telling our Table View that we want to use this cell. If we had more than one cell prototype, we would use the identifier of the cell we want to display for that particular row and section:
let cell = tableView.dequeueReusableCell(withIdentifier: "locationCell", for: indexPath) as UITableViewCell
cell.textLabel?.text = "A cell"
Since we do not have any data yet, we set our label to `A cell`. The `textLabel` variable is the default label we got when we selected a basic cell.
* **Part G** : Finally, after each time we create a new cell, we give the cell back to the Table View to display that cell:
return cell
Let's build and run the project by hitting the Play button (or using _command_ \+ _R_ ) to see what happens. You should now see `A cell` repeating 15 times:
# Adding locations to our Table View
We now have our Table View displaying data, but we need it to display a list of actual locations. Let's update our Table View to show our list of locations:
1. Directly under the `tableView` variable, add the following:
let locations = ["Aspen", "Boston", "Charleston", "Chicago", "Houston", "Las Vegas", "Los Angeles", "Miami", "New Orleans", "New York", "Philadelphia", "Portland", "San Antonio", "San Francisco", "Washington District of Columbia"]
2. Your file should now look like mine:
3. Next, to update our cell to display the locations, we need to replace the `cell.textLabel?.text = "A cell"` line with the following:
cell.textLabel?.text = locations[indexPath.item]
Let's build and run the project by hitting the Play button (or using _command_ \+ _R_ ). You should see the following after clicking Select a location in your simulator:
However, there are a couple of problems. Because we are manually returning 15 rows, the row count and the array can get out of sync: if we remove a location from the array, the app crashes trying to read an index that no longer exists, and if we add one, it never appears. Also, we are just loading this list from an array we built in the app, so every change means editing code. Therefore, we should instead pull our locations from a plist, as we did in the last chapter. Plists provide a place where we can quickly add or remove a location from our list.
# Creating our first property list (plist)
In the last chapter, we used a provided plist to load our cuisine list. We will do the same in this chapter, but now that you are familiar with what a plist is, we will create one from scratch together.
I use plists all the time, from creating menus to having a file that holds app settings such as colors or social media URLs. I find them very useful, especially if I need to come back later and update or change things.
Let's learn how to create a plist from scratch. To create a plist in Xcode, do the following:
1. Right-click on the `Model` folder inside of the `Location` folder and select New File.
2. Under Choose a template for your new file, select iOS at the top, and then type Property in the filter field:
3. Select Property List and then hit Next.
4. Name the file Locations and hit Create.
You should now have a file that looks like mine:
# Adding data to our property list
As you learned in the previous chapter, our plist has a Root; when Xcode created this new file, it set the Root type to Dictionary. Since we are going to display a list of locations, we need our Root to be an Array:
1. Click on Dictionary in the plist and change it to Array:
2. You should see a plus next to Array (if the plus button is not displaying, hover your mouse over that line item, and it will appear):
3. Click on the plus button, and it will add a new item with a String type. Change the type to Dictionary:
4. Click on the plus button that appears when you hover over Item 0.
5. We now need to update the New Item. Update the Key property to say state, and update the Value property of the new item by entering CO:
6. Next, click on the plus button when you hover over the state.
7. Update the Key property to say city and update the Value property of the new item by entering Aspen:
8. Next, click on the disclosure arrow for Item 0 to close it:
9. Select Item 0 and then hit _command_ \+ _C_ to copy and then _command_ \+ _V_ to paste:
10. Next, open up Item 1 and update the city to Boston and the state to MA:
11. Continue with the same process by adding the following cities and states:
**Key** | **Type** | **City** | **State**
---|---|---|---
Item 2 | String | Charleston | NC
Item 3 | String | Chicago | IL
Item 4 | String | Houston | TX
Item 5 | String | Las Vegas | NV
Item 6 | String | Los Angeles | CA
Item 7 | String | Miami | FL
Item 8 | String | New Orleans | LA
Item 9 | String | New York | NY
Item 10 | String | Philadelphia | PA
Item 11 | String | Portland | OR
Item 12 | String | San Antonio | TX
Item 13 | String | San Francisco | CA
When you are done, your file should look like mine:
We just set up our data source. We now need to create a data manager similar to the one that we made in the previous chapter.
# Creating our location data manager
Let's create the `LocationDataManager` file:
1. Right-click on the `Model` folder in the `Location` folder and select New File.
2. Under Choose a template for your new file, select iOS at the top, and then Swift File. Then, hit Next.
3. Name this file `LocationDataManager`, and then hit Create.
4. We need to define our class definition now, so add the following under the `import` statement:
class LocationDataManager {
}
5. Inside the class declaration, add the following variable to keep our array private, as there is no reason to have to access this outside the class:
private var locations:[String] = []
6. Now, let's add the following methods after our variable:
init() {
fetch()
}
func fetch() {
// Clear out any previously loaded locations so that calling fetch() more than once does not duplicate entries
if !locations.isEmpty { locations.removeAll() }
for location in loadData() {
if let city = location["city"] as? String,
let state = location["state"] as? String {
locations.append("\(city), \(state)")
}
}
}
func numberOfItems() -> Int {
return locations.count
}
func locationItem(at index:IndexPath) -> String {
return locations[index.item]
}
private func loadData() -> [[String: AnyObject]] {
guard let path = Bundle.main.path(forResource: "Locations", ofType: "plist"), let items = NSArray(contentsOfFile: path) else {
return [[:]]
}
return items as! [[String : AnyObject]]
}
These methods are the same as we had in `ExploreDataManager`, except that we are getting back an array of dictionary objects from our plist.
# Working with our data manager
We now need to update our `LocationViewController`.
First, because we do not need it anymore, delete the following array that we created in the class:
let locations = ["Aspen", "Boston", "Charleston", "Chicago", "Houston", "Las Vegas", "Los Angeles", "Miami", "New Orleans", "New York", "Philadelphia", "Portland", "San Antonio", "San Francisco", "Washington District of Columbia"]
Next, since we need to create an instance of our data manager in this class, add the following above `viewDidLoad()`:
let manager = LocationDataManager()
Inside `viewDidLoad()`, we want to fetch the data for the Table View, so add the following under `super.viewDidLoad()`:
manager.fetch()
Now, your `viewDidLoad()` should look like the following:
override func viewDidLoad() {
super.viewDidLoad()
manager.fetch()
}
For the `numberOfRowsInSection()` method, instead of `15`, we will use the following:
manager.numberOfItems()
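With that change, the method should read roughly as follows:

func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return manager.numberOfItems()
}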
Lastly, we need to update our `cellForRowAt`. Replace `cell.textLabel?.text = locations[indexPath.item]` with the following:
cell.textLabel?.text = manager.locationItem(at:indexPath)
Your `cellForRowAt` should now look like this:
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: "locationCell", for: indexPath) as UITableViewCell
cell.textLabel?.text = manager.locationItem(at:indexPath)
return cell
}
Let's build and run the project by hitting the Play button (or using _command_ \+ _R_ ). We should still see our locations, but now they are coming from our plist.
# Summary
In this chapter, we worked with a Table View that has dynamic cells, which allows the Table View to change based on the data. We looked at plists once more, learning how to create them from scratch, as well as how to add data to them. Finally, we created our locations data manager, which is responsible for giving data to the View Controller.
In the next chapter, we will work with a Table View that has static cells to build out our restaurant detail. Static cells are excellent for forms or detail views. We could build out the restaurant detail using a Collection View; however, a static Table View will work well and will be less complicated.
At this point, before moving on to the next chapter, you may want to download the starter project for this chapter and try to do it again without using the book as your guide. Going back helps solidify your understanding of what you have learned.
# Where Are We?
We have all used a map at some point in our lives, be it an actual map or a map on our phone or other device. Apple Maps has come a long way from when it was first announced in 2012. Apple has made steady improvements to Apple Maps every year.
During this chapter, we will display our restaurant list using a map and custom pins. When users tap a pin on the map, they will be taken directly to the restaurant detail page that we created in the last chapter.
In this chapter, we will cover the following topics:
* What annotations are and how to add them to a map
* How to create custom annotations
* How to create a storyboard reference
* What extensions are and how to use them to clean up your code
# Setting up map annotations
In our map, we are going to drop pins down at each restaurant location. These pins are called annotations, or more specifically, `MKAnnotation`. MK stands for MapKit, which is part of the MapKit framework. Since we are going to create multiple annotations, we are going to create a class that conforms to the `MKAnnotation` protocol.
# What is an MKAnnotation?
`MKAnnotation` is a protocol that provides us with information related to a map view. Protocols provide a blueprint for methods, properties, and other required functionalities. `MKAnnotation` will contain information, such as the coordinates (latitude and longitude), title, and subtitle of the annotation.
To drop a pin onto a map, we need an object that conforms to `MKAnnotation`. When we first looked at classes versus structs, we saw that classes can subclass, or inherit from, other classes, which means that we get properties, methods, and additional requirements from the class we are subclassing. Adopting a protocol is similar in that the protocol dictates the properties and methods our class must provide. Let's create an annotation class that adopts `MKAnnotation` and see how this works.
# Creating a restaurant annotation
Before we jump into creating our file, we should first look at the data that we will be using. The data for the map view will be the same data that we use for our restaurant-listing page. Let's take a look at what the restaurant data will look like in plist format:
We need to create a file to represent this data for the map view, which will differ from the restaurant-listing page because we need to subclass `MKAnnotation`. Let's get started by creating this file now:
1. Right-click on the `Map` folder and create a new group called Model. Then, right-click this folder and select New File.
2. In the Choose a template for your new file screen, select iOS at the top and then Cocoa Touch Class. Then, hit Next.
3. In the Options screen that appears, add the following:
New file:
* * Class: `RestaurantItem`
* Subclass: `NSObject`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next and then Create.
5. In this new `RestaurantItem.swift` file, under `import UIKit`, add `import MapKit`. We need this `import` statement so that Xcode knows where the files are that we are going to use.
6. Next, we need to update our class declaration to make this class our annotation. Since this class adopts `MKAnnotation`, we need to change what we currently have (`class RestaurantItem: NSObject`) to the following:
class RestaurantItem: NSObject, MKAnnotation
You will see an error when you add the `MKAnnotation`. Just ignore it for now, as we will fix this error shortly.
Inside of the class declaration, add the following:
var name:String?
var cuisines:[String] = []
var lat:Double?
var long:Double?
var address:String?
var postalCode:String?
var state:String?
var imageURL:String?
When the user taps on the annotation, the name of the restaurant and the types of cuisine will appear, along with a detail icon. This detail icon will take the user to the restaurant detail page. Then, we will pass along all of this data and use it to populate the restaurant detail page we created in the last chapter.
We need to initialize all of the data that's been passed into the object. Therefore, let's create a custom `init()` method to which we can pass a dictionary object through its parameters:
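A sketch of this `init(dict:)` method follows; here I assume the dictionary keys match the property names, which may not be the case for your data, so adjust them as needed:

init(dict: [String: AnyObject]) {
    // Each if...let only sets the property when the key is present with the expected type
    // Note: the key names below are assumptions for illustration
    if let name = dict["name"] as? String { self.name = name }
    if let cuisines = dict["cuisines"] as? [String] { self.cuisines = cuisines }
    if let lat = dict["lat"] as? Double { self.lat = lat }
    if let long = dict["long"] as? Double { self.long = long }
    if let address = dict["address"] as? String { self.address = address }
    if let postalCode = dict["postalCode"] as? String { self.postalCode = postalCode }
    if let state = dict["state"] as? String { self.state = state }
    if let imageURL = dict["imageURL"] as? String { self.imageURL = imageURL }
    super.init()
}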
This method is large, but it is nothing you have not seen before. We are using the `if...let` statement to check for data in each element. If something is missing, it will not be set.
Let's address this error now. The reason we are getting it is that we are conforming to `MKAnnotation` and have not yet declared `coordinate`, which is a required property. We also have two other optional properties, `title` and `subtitle`, that we are using for our map and that we need to declare. What we want to be able to do is pass the data that we have over to these three properties so that we can use them on our map.
To get rid of the error, we need to add the coordinates first. We need to set up the latitude and longitude, so add the following after the `init()` method:
var coordinate: CLLocationCoordinate2D {
guard let lat = lat, let long = long else { return CLLocationCoordinate2D() }
return CLLocationCoordinate2D(latitude: lat, longitude: long )
}
`CLLocationCoordinate2D` is a structure that MapKit uses to set the exact location of a pin.
Note that we are using curly braces for this property. It is defined in `MKAnnotation`, and we are using the computed property to set the value. For the `coordinate` property, we will pass latitude and longitude to it using `CLLocationCoordinate2D`. In our `init()` method, we created the data that sets the latitude and longitude, and now, we are passing those coordinates over to the `coordinate` property.
Let's do the same with `subtitle` by adding the following above the variable coordinate:
var subtitle: String? {
if cuisines.isEmpty { return "" }
else if cuisines.count == 1 { return cuisines.first }
else { return cuisines.joined(separator: ", ") }
}
The `subtitle` variable is also a computed property, but this time we are using an `if...else if` statement. We first check whether the array is empty; if so, nothing displays. If we only have one item in the array, we return that item. Finally, if we have multiple elements in our array, we join them into one string, separating each element with a comma. For example, if your array had the items `["American", "Bistro", "Burgers"]`, then we would create the string _American, Bistro, Burgers_.
Finally, we need to add the title. Enter the following above the `subtitle` variable:
var title: String? {
return name
}
Your file should no longer have an error, and should now look as follows:
Next, we want to create a manager that will take our data and create annotations for our map.
# Creating our Map Data Manager
In the next chapter, we will deal with data, but for now, we can mock up some data to set up our structure. We will use a plist to load our data, just like we did in the last chapter.
Let's create the `MapDataManager` file now:
1. Right-click on the `Model` folder inside of the `Map` folder and select New File.
2. In the Choose a template for your new file screen, select iOS at the top and then Swift File. Then, hit Next.
3. Name this file `MapDataManager` and then hit Create.
4. Next, we need to define our class definition, so add the following under the `import` statement:
class MapDataManager {}
5. Inside of the class declaration, add the following variables:
fileprivate var items:[RestaurantItem] = []
var annotations:[RestaurantItem] {
return items
}
Note that we are keeping our array private since there is no reason to have to access this outside of the class.
6. Now, let's add the following methods inside of our class declaration, after our variables:
func fetch(completion:(_ annotations:[RestaurantItem]) -> ()) {
if items.count > 0 { items.removeAll() }
for data in loadData() {
items.append(RestaurantItem(dict: data))
}
completion(items)
}
fileprivate func loadData() -> [[String:AnyObject]] {
guard let path = Bundle.main.path(forResource: "MapLocations", ofType: "plist"),
let items = NSArray(contentsOfFile: path) else { return [[:]] }
return items as! [[String : AnyObject]]
}
Your file should now look as follows:
7. The `fetch()` and `loadData()` methods are the same as those that we had in the `ExploreDataManager` file. However, the `fetch()` method here has something new inside of its parameters, specifically the following:
completion:(_ annotations:[RestaurantItem]) -> ()
The preceding code is called a **closure block**, which lets us signal when the method has finished and hand back a result (here, an array of annotations). We will use these annotations to load pins onto our map. We loop through the data with the `for...in` loop; when we are done, we call `completion()`. When we get to our `MapViewController`, you will see how we can write this.
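As a quick preview of how a caller consumes this completion closure (we will write the real version in `MapViewController` later; the `print` here is just for illustration):

let manager = MapDataManager()
manager.fetch { (annotations) in
    // This closure runs once every RestaurantItem has been created and appended
    print("Loaded \(annotations.count) annotations")
}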
Now, let's take a look at our `MapLocations.plist` file:
This file has the same structure as our `ExploreData.plist` file. Our `Root` is an array, and each item inside of our `Root` is a dictionary item. There is a principle that many programmers follow, known as **don't repeat yourself** (**DRY**). Since both plist files contain an array of dictionary objects, we can update our code so that we can use the same method in multiple places.
# Creating a base class
To keep us from repeating ourselves, we are going to create a shared protocol that acts as our base. This `DataManager` protocol will have a new method, `load(file:)`, which takes the filename as a parameter. Let's create the `DataManager` file now under our `Common` folder:
1. Right-click on the `Misc` folder and select New File.
2. In the Choose a template for your new file screen, select iOS at the top and then Swift File. Then, hit Next.
3. Name this file `DataManager`, and then hit Create.
4. In this new file, we need to define our protocol; therefore, add the following under the `import` statement:
protocol DataManager {}
5. Inside of the protocol declaration, add the following method:
func load(file name:String) -> [[String:AnyObject]]
6. Now, create an extension under the protocol:
extension DataManager {}
7. Inside of the `extension` declaration, add the following:
func load(file name:String) -> [[String:AnyObject]] {
guard let path = Bundle.main.path(forResource: name, ofType: "plist"), let items = NSArray(contentsOfFile: path) else { return [[:]] }
return items as! [[String : AnyObject]]
}
8. When you are done, your file should look like mine:
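Assembled, the whole file is small; a sketch of it reads as follows:

import Foundation

protocol DataManager {
    func load(file name: String) -> [[String: AnyObject]]
}

extension DataManager {
    func load(file name: String) -> [[String: AnyObject]] {
        guard let path = Bundle.main.path(forResource: name, ofType: "plist"),
            let items = NSArray(contentsOfFile: path) else { return [[:]] }
        return items as! [[String: AnyObject]]
    }
}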
Other than adding a parameter for the file name, we created the same function as we have in our `Explore` and `Map` data managers. However, this function is no longer a `private` method, because we want it to be accessible to any type that adopts the protocol.
By creating a protocol, we are using what is known as protocol-oriented programming. We will not get too heavily into the detail of this, since there are plenty of books and videos on the topic. The central concept to understand is that we can adopt this protocol in any class we want and gain access to the `load(file:)` method.
The preceding method is all we need to do in this file.
# Refactoring code
Now that we have created this new protocol, we can access it from anywhere we need it. First, let's update our `MapDataManager` class to use our newly created protocol:
1. Delete the `loadData()` function, because we will not need it anymore. You will see an error after you delete it. This error happens because `fetch()` still calls `loadData()`, which no longer exists; we need it to call the protocol's `load(file:)` method with a filename instead. We will fix this shortly.
2. Next, we need to update our class declaration to say the following:
class MapDataManager: DataManager
3. We now have our `MapDataManager` class using our `DataManager` protocol, which means that we can use the `load(file:)` method from `DataManager` inside of our `MapDataManager`.
4. Now, let's fix the error by updating the `for data in loadData()` line in our `fetch()` method to the following:
for data in load(file: "MapLocations")
Your updated file should now look like the following:
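In particular, the updated `fetch()` should now read roughly like this (the rest of the class is unchanged):

func fetch(completion: (_ annotations: [RestaurantItem]) -> ()) {
    if items.count > 0 { items.removeAll() }
    for data in load(file: "MapLocations") {
        items.append(RestaurantItem(dict: data))
    }
    completion(items)
}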
We removed the error in our `MapDataManager`, but we need to do some refactoring of our `ExploreDataManager` file to do the same.
# Refactoring ExploreDataManager
Because our `loadData()` was written the same in both the `ExploreDataManager` and `MapDataManager` files, we need to update our `ExploreDataManager` in the same way that we just did for the `MapDataManager`. Open `ExploreDataManager` and do the following:
1. Delete the private `loadData()` function, because we will not need it anymore. Again, ignore the error, as we are going to fix this shortly.
2. Next, update our class declaration to now say the following:
class ExploreDataManager: DataManager
3. Now, let's fix the error by updating the `for data in loadData()` line in our `fetch()` method to the following:
for data in load(file: "ExploreData")
4. Your updated function should now look like the following:
func fetch() {
for data in load(file: "ExploreData") {
items.append(ExploreItem(dict: data))
}
}
We have completed our files, and we can now use the same method any time we need to load a plist that has an array of dictionary items.
Refactoring is something you will become more comfortable with the more you write code. Understanding when to refactor is a bit harder when you first start out because you are still learning. The most prominent indicator that you need to refactor is when you have written something more than once. However, refactoring does not always work for everything; at times, writing the same code more than once can be unavoidable. Just being aware of when refactoring may be useful is a good sign and half the battle to a greater understanding of this method. I have been coding for years; there will be times when I copy and paste something I wrote to see if it works and then never refactor. Then, months later, I will wonder why I did not write a method to handle it in both places.
# Creating and adding annotations
Now, we need to get our map hooked up and get the annotations displayed on the map. Then, we will customize our annotations to look like those in our design.
# Creating our Map View Controller
We need to create our `MapViewController` file and then connect it with our `UIViewController` and map view in the storyboard. First, let's create this file:
1. In the Navigator panel, right-click on the `Controller` folder in the `Map` folder and select New File.
2. In the Choose a template for your new file screen, select iOS at the top and then Cocoa Touch Class. Then, hit Next.
3. Add the following to the Options screen that appears:
New file:
* * Class: `MapViewController`
* Subclass: `UIViewController`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next and then Create.
5. Under the `import UIKit` statement, add `import MapKit`.
6. Update your class declaration to include the following protocol:
class MapViewController: UIViewController, MKMapViewDelegate
Now, let's connect this file with our `UIViewController` and our map view in the storyboard:
1. Add the following after the class declaration:
@IBOutlet var mapView: MKMapView!
2. Open your `Map.storyboard` file.
3. In the Outline view, select the View Controller that contains the map view.
4. Now, in the Utilities panel, select the Identity inspector.
5. Under Custom Class, in the Class drop-down menu, select `MapViewController` and hit _enter_ to connect the View Controller to the class.
6. Now, select the Connections inspector.
7. Under the Outlets section, you will see an empty circle next to `mapView`. Click and drag the outlet to the map view in the View Controller in the Outline view.
We are going to start working with our map, but first, we need to add some things to our `MapDataManager`:
1. Open the `MapDataManager.swift` file in the Navigator panel. Underneath the `import Foundation` statement, add `import MapKit`.
2. Next, add the following method to our `MapDataManager`:
func currentRegion(latDelta:CLLocationDegrees, longDelta:CLLocationDegrees) -> MKCoordinateRegion {
guard let item = items.first else { return MKCoordinateRegion() }
let span = MKCoordinateSpanMake(latDelta, longDelta)
return MKCoordinateRegion(center:item.coordinate,span:span)
}
Before we delve into the particular sections of this function, we need to understand what this function does. When you use a map and drop pins down onto it, you want the map to zoom in to a particular area. To zoom in on a map, you need latitude and longitude. What this method is doing is grabbing the first pin (or annotation) in the array and zooming in on the area:
Let's break down the code:
* **Part A** : Our method has two parameters, both of which are of type `CLLocationDegrees`. This is simply a type alias for `Double` that represents a latitude or longitude coordinate in degrees:
func currentRegion(latDelta:CLLocationDegrees, longDelta:CLLocationDegrees) -> MKCoordinateRegion {
* **Part B** : This `guard` statement obtains the first item in the array. If there are no items in the array, it will just return an empty coordinate region. If there are items in the array, it will return the coordinate region:
guard let item = items.first else { return MKCoordinateRegion() }
* **Part C** : Here, we are creating an `MKCoordinateSpan` from the latitude and longitude deltas that we passed into the function. `MKCoordinateSpan` defines how wide an area, in the latitude and longitude directions, to show on the map:
let span = MKCoordinateSpanMake(latDelta, longDelta)
* **Part D** : Lastly, we are setting the center and the span of our region and returning them so that when the pins drop, the map can zoom in on the area:
return MKCoordinateRegion(center: item.coordinate, span: span)
Now, let's set up our `MapViewController` to display annotations:
1. Open the `MapViewController.swift` file in the Navigator panel and delete both `didReceiveMemoryWarning()` and `prepare()` (which has been commented out), as we do not need them for our purposes.
2. Directly under our `IBOutlet` statement, add the following:
let manager = MapDataManager()
3. Then, inside of the class definition, add the following method after `viewDidLoad()`:
func addMap(_ annotations:[RestaurantItem]) {
mapView.setRegion(manager.currentRegion(latDelta: 0.5, longDelta: 0.5), animated: true)
mapView.addAnnotations(manager.annotations)
}
In this method, we are doing a couple of things. First, we pass annotations through the parameter. When we call `fetch()` and it is completed, it will return the array of annotations. We will pass that array over to `addMap(_ annotations:)` to use. Next, we set the region by obtaining it from our `MapDataManager`, thus setting the latitude and longitude delta. The delta will set our zoom and region for our map. Once we have that, we then pass all of our annotations for the map to display.
Therefore, we need to have our manager fetch the annotations. Add the following method above `addMap(_ annotations:)`:
func initialize() {
manager.fetch { (annotations) in
addMap(annotations)
}
}
Inside of the `initialize()` method, we ask the manager to fetch the data; the closure we pass hands the resulting annotations to `addMap(_ annotations:)`. A little later in this chapter, we will also set the map view's delegate to this class inside `initialize()`. In previous chapters, we set delegates using the storyboard; however, you can also do this with code. Setting the delegate allows us to be notified when the user taps on an annotation or taps the disclosure indicator in the annotation.
Earlier in this chapter, we created a `fetch()` method in the `MapDataManager`, wherein we used a closure block. This closure block requires that we wrap it in curly braces. Once the `completion()` block is called in the manager, everything inside of the curly braces will run. For our purposes in building this app, we are going to have a small number of pins or annotations; therefore, we do not need a completion block. However, if you have 100 or 500 annotations, for instance, a closure block would be more efficient. We will do more with this later so that you can get more practice with closure blocks.
Add `initialize()` inside of `viewDidLoad()` so that everything will run when the view loads.
Before you build, make sure that you add the `MapLocations.plist` file to the `Map` folder. This file is in this book's `assets` folder for this chapter.
Let's build and run the project by hitting the Play button (or use _command_ + _R_ ):
We now have pins on our map, but we need to update them so that they look more like the ones in our design. Let's learn how to customize the annotations on our map.
# Creating custom annotations
If you have ever owned an iPhone and used Apple Maps, you will be familiar with pins. When you have a map inside of your app, having custom pins (annotations) gives your app a bit more polish. Let's create our custom annotations.
Open up `MapViewController` in the Navigator Panel, then inside of the `initialize()` method, add the following:
mapView.delegate = self
Next, add the following directly under the `addMap(_ annotations:)` method:
func mapView(_ mapView:MKMapView, viewFor annotation:MKAnnotation) -> MKAnnotationView? {
let identifier = "custompin"
guard !annotation.isKind(of: MKUserLocation.self) else { return nil }
var annotationView: MKAnnotationView?
if let customAnnotationView = mapView.dequeueReusableAnnotationView(withIdentifier: identifier) {
annotationView = customAnnotationView
annotationView?.annotation = annotation
} else {
let av = MKAnnotationView(annotation: annotation, reuseIdentifier: identifier)
av.rightCalloutAccessoryView = UIButton(type: .detailDisclosure)
annotationView = av
}
if let annotationView = annotationView {
annotationView.canShowCallout = true
annotationView.image = UIImage(named: "custom-annotation")
}
return annotationView
}
Let's break down this code so we can understand what we are doing. We will break the function down into the following sections:
Let's start with A:
* **Part A** : This method will be called on the `mapView.delegate` we set up earlier when annotations need to be placed. We will use this method to grab the annotations before they are placed and replace the default pins with custom pins:
mapView(_:viewFor:)
* **Part B** : Here, we set an identifier, similar to those that we set when using Collection Views and Table Views:
let identifier = "custompin"
* **Part C** : This guard will ensure that our annotation is not the user location. If the annotation is the user location, the `guard` will return `nil`. Otherwise, it will move on through the method:
guard !annotation.isKind(of: MKUserLocation.self) else {
return nil
}
* **Part D** : `MKAnnotationView` is the class name for the pin; here, we create a variable that we can use to set our custom image:
var annotationView:MKAnnotationView?
* **Part E** : In this statement, we are checking to see whether there are any annotations that have already been created that we can reuse. If so, we point them to the variable we added previously. Otherwise, we create the annotation in the next `else` statement:
if let customAnnotationView = mapView.dequeueReusableAnnotationView(withIdentifier: identifier) {
annotationView = customAnnotationView
annotationView?.annotation = annotation
}
* **Part F** : If there are no annotations to reuse, we create a new `MKAnnotationView` and give it a callout with a button. A callout is a bubble that appears above the annotation when you tap it to display the title (restaurant name) and subtitle (cuisines) associated with that annotation. If the user selects this callout button, the user is taken to the restaurant detail view:
else {
let av = MKAnnotationView(annotation: annotation, reuseIdentifier: identifier)
av.rightCalloutAccessoryView = UIButton(type: .detailDisclosure)
annotationView = av
}
* **Part G** : Here is where we make sure that our custom annotation will show a callout. We can also set our custom image for our annotation:
if let annotationView = annotationView {
annotationView.canShowCallout = true
annotationView.image = UIImage(named: "custom-annotation")
}
* **Part H** : Once we are finished going through the method, we return our custom annotation to the map. This method is called for every annotation that appears on the map:
return annotationView
Let's build and run the project by hitting the Play button (or use _command_ \+ _R_ ):
We now have custom annotations displaying on our map. Each pin's callout shows the restaurant name as well as the cuisines for the restaurant associated with that particular pin. However, tapping the detail disclosure in the callout does not do anything yet. Let's set that up now.
# Map to restaurant detail
For us to go to the restaurant detail from the callout, we need to update our app so that our map can also open the restaurant detail. To do this, we must first create a storyboard reference. The project has a few storyboard references in the app already, but let's set up one together.
# Creating a storyboard reference
To link to the restaurant detail from the map, we need to create a storyboard reference:
1. Open the `Map.storyboard`, and in the object library ( _command_ \+ _shift_ \+ _L_ ), drag a Storyboard Reference into the `Map.storyboard` scene:
2. Next, select the Attributes inspector in the Utilities Panel, and update the storyboard under Storyboard Reference to say `RestaurantDetail`. Then, hit _enter_ :
3. Hold down _control_ and drag from the Map View controller to the storyboard reference we just created, and select Show on the screen that appears. Note that you can _control_ \+ drag from either the Map View controller in the Outline view or the Map View controller icon in the scene, as shown in the following screenshot:
4. Select the segue connecting the Map View controller to the storyboard reference:
5. In the Attributes inspector, update the Identifier under Storyboard Segue to say `showDetail`. Then, hit _enter_ :
This identifier is what we are going to call whenever the restaurant detail disclosure is tapped. Let's connect our segue next.
# Map to restaurant detail
Before we connect our segue, we should create an enumeration (an `enum`, for short) to keep track of our segues. An `enum` is a user-defined data type that consists of a set of related values:
1. Right-click on the `Misc` folder inside the `Common` folder and select New File.
2. In the Choose a template for your new file screen, select iOS at the top and then Swift File. Then, hit Next.
3. Name this file `Segue` and hit Create.
4. Under `import Foundation` in the new file, add the following:
enum Segue:String {
case showDetail
case showRating
case showReview
case showAllReviews
case restaurantList
case locationList
case showPhotoReview
case showPhotoFilter
}
We will eventually need all of these segues. Instead of coming back into this file, we will add them all now. Whenever we use a new one, I will refer back to this file. The next thing we need to know is when the user taps the detail disclosure of the callout.
In the `MapViewController.swift` file, add the following delegate implementation under the `addMap(_ annotations:)` method:
func mapView(_ mapView: MKMapView, annotationView view: MKAnnotationView, calloutAccessoryControlTapped control: UIControl) {
self.performSegue(withIdentifier: Segue.showDetail.rawValue, sender: self)
}
We are using `performSegue()` to call our custom segue. Now, when you tap the annotation and then the callout, you will go to the restaurant-detail view:
Let's build and run the project by hitting the Play button (or use _command_ \+ _R_ ). We can now get to the restaurant detail view from the map.
# Passing data to restaurant detail
In the next chapter, we are going to display the data in our restaurant detail. For now, we want to pass the data over to the detail view.
To make this work, we need to update both our `RestaurantDetailViewController` (which we have not created yet) and the `MapViewController`. Let's create the `RestaurantDetailViewController`:
1. Right-click on the `Restaurant Detail` folder and select New File.
2. In the Choose a template for your new file screen, select iOS at the top and then Cocoa Touch Class. Then, hit Next.
3. In the Options screen, add the following:
New file:
* * Class: `RestaurantDetailViewController`
* Subclass: `UITableViewController`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next and then Create.
5. Delete everything after the `viewDidLoad()` method, as we do not need all of the other code.
Your file should now look as follows:
6. Next, inside of the class declaration, add the following:
var selectedRestaurant:RestaurantItem?
7. Then, add the following code inside of `viewDidLoad()`:
dump(selectedRestaurant as Any)
8. Your file should now look like the following:
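At this point, the whole class is tiny; a sketch of it looks like this:

class RestaurantDetailViewController: UITableViewController {
    var selectedRestaurant: RestaurantItem?

    override func viewDidLoad() {
        super.viewDidLoad()
        dump(selectedRestaurant as Any)
    }
}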
9. Open your `RestaurantDetail.storyboard` file.
10. In the Outline view, select the Table View Controller.
11. In the Utilities panel, select the Identity inspector.
12. Under Custom Class, in the Class drop-down menu, select `RestaurantDetailViewController` and hit _enter_ to connect the View Controller to the class.
The preceding code is all we need to have in `RestaurantDetailViewController`. Next, we need to update our `MapViewController`.
13. Open the `MapViewController.swift` file.
14. Directly under where we declare our manager, add the following code:
var selectedRestaurant:RestaurantItem?
15. Then, add the following code into the `calloutAccessoryControlTapped()` method, above `performSegue`:
guard let annotation = mapView.selectedAnnotations.first else { return }
selectedRestaurant = annotation as? RestaurantItem
Your file should now look as follows:
16. Next, add the following code after `viewDidLoad()`:
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
switch segue.identifier! {
case Segue.showDetail.rawValue:
showRestaurantDetail(segue: segue)
default:
print("Segue not added")
}
}
You will see an error, but ignore it, as we are going to fix this in the next step.
Whenever we transition with a segue, this method gets called. First, we check for the `showDetail` identifier; if this identifier is called, we want to do something (in this case, get the selected restaurant and pass it to the detail view) before we transition.
Add the following code after the `addMap(_ annotations:)` method:
func showRestaurantDetail(segue:UIStoryboardSegue) {
if let viewController = segue.destination as? RestaurantDetailViewController, let restaurant = selectedRestaurant {
viewController.selectedRestaurant = restaurant
}
}
Here, we are checking to make sure that the segue destination is the `RestaurantDetailViewController`; if so, we make sure that we have a selected restaurant. When it is confirmed that the segue destination is the `RestaurantDetailViewController` and we have a selected restaurant, we use the `selectedRestaurant` variable that we created in `RestaurantDetailViewController` and set it to the selected restaurant in `MapViewController`.
Your file should now look like the following, with the two new methods we just added:
Let's build and run the project by hitting the Play button (or using _command_ \+ _R_ ) and test whether we can pass data to our `RestaurantDetailViewController`. You should see the following in your Debug Panel if everything worked:
We now have our `RestaurantDetailViewController`, which is capable of receiving data. In the next chapter, we will display that data. However, before we write any more code, we should organize our code a bit better.
# Organizing your code
Earlier, we wrote an extension for our `DataManager`; extensions are useful for adding functionality onto standard libraries, structs, or classes—such as arrays, ints, and strings—or onto your data types.
Here is an example. Let's say that you wanted to know the length of a string:
let name = "Craig"
name.characters.count
For us to access the count of the string, we would need to access the characters and then get a count.
Let's simplify this by creating an extension:
extension String {
var length: Int {
return self.characters.count
}
}
With this newly created `String` extension, we can now access the count by writing the following:
let name = "Craig"
name.length
As you can see, extensions are very powerful, enabling us to add extra functionality without having to change the main class or struct. You can already get a string's length through its existing properties, but I wanted to give you a simplified example of how powerful an extension is and how you can create your own.
Up until now, we have paid very little attention to file structure and more attention to understanding what we are writing. Organizing your code is also very important, which is why we are going to refactor our code. The refactoring will mostly consist of copying and pasting code that you have already written. Extensions can help us organize our code better and stay away from cluttering our View Controllers. Also, we can extend the functionality of View Controllers through extensions. We are going to update four classes: `ExploreViewController`, `RestaurantListViewController`, `LocationViewController`, and `MapViewController`.
# Refactoring ExploreViewController
We are going to divide our View Controller into distinct sections using what is known as a `MARK` comment. Let's start with our `ExploreViewController`:
1. In the `ExploreViewController` file, after the last curly brace, hit _enter_ a couple of times and add the following code (remember that this should be outside of the class, not inside):
// MARK: Private Extension
private extension ExploreViewController {
// code goes here
}
// MARK: UICollectionViewDataSource
extension ExploreViewController: UICollectionViewDataSource {
// code goes here
}
Here, we are creating two extensions. Our first one will be private and will be where we add any methods that we create that we need for this controller. Our second one is an extension that deals with our `collectionview` data source. Let's keep going for now.
2. We currently have an error because we are using `UICollectionViewDataSource` in two places. Delete `UICollectionViewDataSource` (including the comma) from the class definition at the top of the file:
3. Now, let's move all of our `CollectionViewDataSource` methods into our extension. You should be moving the following:
Your file, including the extension, should now look as follows:
Now, you are probably wondering why we created the `private` extension. Well, one thing that I try to do is keep `viewDidLoad()` as clean as possible. Instead of writing a ton of code inside of `viewDidLoad()`, I like to create an `initialize()` method and call that instead. This way, it's clear to anyone going into my code what I am doing. Let's add the following to our `private` extension:
func initialize() {
manager.fetch()
}
@IBAction func unwindLocationCancel(segue:UIStoryboardSegue){}
Now, we can call `initialize()` inside of `viewDidLoad()`. When you are done, you should see the following:
class ExploreViewController: UIViewController {
@IBOutlet weak var collectionView:UICollectionView!
let manager = ExploreDataManager()
override func viewDidLoad() {
super.viewDidLoad()
initialize()
}
}
// MARK: Private Extension
private extension ExploreViewController {
func initialize() {
manager.fetch()
}
@IBAction func unwindLocationCancel(segue:UIStoryboardSegue){}
}
Now, this might seem like we wrote extra code for nothing, but as your classes grow, you will see the benefit of doing this. Before we clean up the other files, let's look at what the `MARK` comment does.
# Using the MARK comment
Currently, our `MARK` comment may seem like a useless comment in our code, but it is more powerful than you think. Look at the bottom bar that is located to the right of the Play and Stop buttons in Xcode and look for the last arrow. Mine says `No Selection`, but if you have your cursor on a method, you might see the following instead:
Click on this last item, and you will see the following:
The preceding screenshots show all of your code, divided, just like our file. You can click on any method, and the file will jump right to that method. Even if your file is long and you are looking for a method, you can use this technique to get where you need to be. We are done cleaning up our `ExploreViewController`.
# Refactoring RestaurantListViewController
We now know our structure, so let's update our `RestaurantListViewController`. Even though we do not currently have anything to put in our `private` extension, we will add it anyway as good practice. As you get more comfortable, only add this when you actually need it:
1. Inside our `RestaurantListViewController`, after the last curly brace, hit _enter_ a couple of times and add the following code (remember, this should be outside of the class, not inside):
// MARK: Private Extension
private extension RestaurantListViewController {}
// MARK: UICollectionViewDataSource
extension RestaurantListViewController: UICollectionViewDataSource {}
2. Next, delete the `UICollectionViewDataSource` subclass from the main class.
3. Now, let's move all of our `CollectionViewDataSource` methods into our extension. You should be moving the following:
4. Your file, including the extension, should now look as follows:
We successfully updated our `RestaurantListViewController`.
Next, let's take a look at our `LocationViewController`:
1. Inside of our `LocationViewController`, after the last curly brace, hit _enter_ a couple of times and add the following code (remember, this should be outside of the class, not inside):
// MARK: Private Extension
private extension LocationViewController {}
// MARK: UITableViewDataSource
extension LocationViewController: UITableViewDataSource {}
2. Next, remove the `UITableViewDataSource` subclass from the main class.
3. Now, let's move all of our `TableViewDataSource` methods into our extension. You should be moving the following:
Your file, including the extension, should now look as follows:
4. Now, just like we did in our `ExploreViewController`, we want to create an `initialize()` method in our `private` extension and update `viewDidLoad()` to call `initialize()`. When you are done, your file should look like mine:
We will finish by cleaning up our `LocationViewController`. Finally, let's take a look at our `MapViewController`.
# Refactoring MapViewController
We are just about done refactoring our files. The last file we need to refactor is our `MapViewController`. Let's get started:
1. Inside of our `MapViewController`, after the last curly brace, hit _enter_ a couple of times and add the following code (remember, this should be outside of the class, not inside):
// MARK: Private Extension
private extension MapViewController {}
// MARK: MKMapViewDelegate
extension MapViewController: MKMapViewDelegate {}
2. Next, remove the `MKMapViewDelegate` subclass from the main class and move it into our extension.
3. Now, let's move all of our `MKMapViewDelegate` methods into the extension. You should be moving the following:
Your extension should now look as follows:
4. Next, let's update our `private` extension by moving the following:
When you are done, you should have the following:
I did not include the `MKMapViewDelegate` extension because the file is too long. The extension is under our `private` extension. Why did I not move the `prepare()` method? The `prepare()` and `viewDidLoad()` methods are overrides of `UIViewController` methods, so we want to keep them inside of our main class declaration. The more we do this, the clearer it will become.
We've finished cleaning up the four View Controllers. You might be wondering what the benefits of this are. In this project, it may not seem like these updates are very important, because we are not doing a lot in our View Controllers. However, as a project grows, there will be some cases where multiple protocols and delegates are adopted; thus, these updates will be beneficial.
Here is an example:
class NewsListingView: UIViewController, NewsListingViewProtocol, UICollectionViewDelegate, UICollectionViewDataSource, LiveGameNewsViewDelegate, UIGestureRecognizerDelegate
This class subclasses a View Controller and adopts one protocol, three delegates, and one data source. If each of these, plus the View Controller itself, required just two methods, you would have 12 functions sitting in a single class. Separating our code into extensions makes it easy to find where things are located.
# Summary
In this chapter, we discussed what `MKAnnotations` are and how to add and subclass them so that we can use them on our map. We also learned how to customize our annotations. Our app now takes us from tapping on an annotation to a restaurant detail page. We also learned that extensions help to organize code as well as add functionality without having to change the main class or struct with which we are working.
In the next chapter, we are going to display data on our restaurant list. We will also set up our restaurant detail page to display data.
# Working with an API
When building iOS apps, data can be the most critical part. Typically, the apps you make require that you get data from an online data source, known as an **Application Programming Interface** ( **API** ). In the previous chapters, we have only worked with a plist to supply our data. Using a plist bridges the gap to understanding how to work with an API, as you will see shortly. In this chapter, we will work with an API that is in **JavaScript Object Notation** ( **JSON** ) format. This format is typical, no matter which backend service was used to create the JSON file.
In this chapter, we will cover the following topics:
* What a JSON file is and the different components of this data feed
* Passing data using segues
# Creating an API Manager
In this chapter, we will be building an API Manager. This manager will be responsible for anything that has to do with getting data from the internet. When dealing with data online, you will typically get it in a particular format, which you then need to convert into something that your app can read.
# What is an API?
A RESTful API is a web service from which an app can receive data. Typically, APIs such as Yelp's tend to change often. For our purposes, we want to use static files so that we can work on this project without having to be concerned about changes to the API. Therefore, most of the data we are going to use comes from the <http://opentable.herokuapp.com/> site, which is not managed full-time and does not change often. The site's API, however, is missing some data that we need; therefore, I have updated these files (which are in the project files for this chapter) to include that missing data.
APIs are typically in JSON format, and working with them is similar to working with plists. The transition from one to the other should be pretty seamless. Let's get familiar with the JSON format.
# Understanding a JSON file
Before we write any code, we need to take a look at the structure of a simple JSON file. Let's create a new group inside the `Misc` folder in the Navigator panel called `JSON`. Then, drag and drop all of the JSON files found in the project files for this chapter into the new `JSON` folder, clicking on Finish in the screen that appears. Lastly, open up the `Charleston.json` file and let's review the first part of it, including the first restaurant listing:
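An abbreviated sketch of that structure is shown below; the field names and values inside each restaurant entry are illustrative rather than an exact copy of the file, so refer to the actual `Charleston.json` for the real keys:

{
  "total_entries": 67,
  "per_page": 25,
  "current_page": 1,
  "restaurants": [
    {
      "name": "Example Restaurant",
      "address": "123 Example St",
      "city": "Charleston",
      "cuisines": ["American", "Bistro"]
    }
  ]
}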
This file has four nodes inside it: `total_entries`, `per_page`, `current_page`, and `restaurants`. When you work with a feed, it will split items up into pages so that you are not trying to load all of the data at once. This feed tells us that there are 67 total entries, with 25 restaurants per page, and that we are currently on page one. We do not need the first three nodes in this book, since we are just going to load 25 restaurants.
The `restaurants` node, on the other hand, is essential for this book. It is an array of data, recognizable as such by the brackets (`[ ]`) used in the node. If you review the individual items inside it, you will notice that everything our app needs (name, address, city, and so on) is covered. This structure is the same as the one we saw in the plists earlier in this book. If you look at cuisines, you will notice that it is also wrapped inside brackets (`[ ]`); again, this is what we had in our plist data previously. Now that we have an idea of what a JSON file looks like, let's see how we can work with it.
# Exploring the API Manager file
Our JSON files are now in place, so let's create the API Manager file:
1. Right-click on the `Misc` folder and select New File.
2. On the Choose a template for your new file screen, select iOS at the top. Then, select Swift File. Then, hit Next.
3. Name this file `RestaurantAPIManager`, and hit Create.
We need to define our type first; therefore, add the following after the `import` statement. We will build the file up in parts:
* **Part A** : Here, we define the struct:
struct RestaurantAPIManager {
* **Part B** : The `loadJSON(file:)` method is known as a type method because it has the `static` keyword in front of it. Type methods are called on the type itself using dot syntax, and static functions cannot be overridden:
static func loadJSON(file name:String) -> [[String:AnyObject]] {
The remaining parts walk through the body of the `loadJSON(file:)` method.
* **Part C** : On this line, we declare the array of dictionary objects that we will fill and return. If this sounds familiar, it is because our plist loading code works with the same structure:
var items = [[String: AnyObject]]()
* **Part D** : Since we are not loading from the internet, we need to make sure that we pass the right filename. If the path is found and there is nothing wrong with the data, we move on; otherwise, we return an empty array with no dictionary objects:
guard let path = Bundle.main.path(forResource: name, ofType: "json"), let data = NSData(contentsOfFile: path) else {
return [[:]]
}
* **Part E** : Here, we are using a `do...catch`. As a reminder, a do-catch statement is used to handle errors by running a block of code, and we must use it together with what is known as a `try`. First, we try to serialize (convert) the data from the JSON file; if we are successful, we can then access the information inside that file. To obtain the restaurant items in the JSON file (all of which are located inside the `restaurants` node), we use `json["restaurants"]`.
Next, we cast this using `as?` as an array of dictionary objects. Also, since our data types are mixed, we use `AnyObject` to accept the dictionary of mixed data types. Finally, we set our data to the `items` array. We now have the same structure, an array of dictionary objects, that we had in the `Map` section:
do {
let json = try JSONSerialization.jsonObject(with: data as Data, options: .allowFragments) as AnyObject
if let restaurants = json["restaurants"] as? [[String: AnyObject]] {
items = restaurants as [[String : AnyObject]]
}
}
* **Part F** : This `catch` only runs if there is a problem serializing the data from the file. If there is a problem, we will return an empty array with no dictionary objects. Using a do-catch allows for our app to keep running without crashing:
catch {
print("error serializing JSON: (error)")
items = [[:]]
}
* **Part G** : Finally, if all goes well, we return the array of dictionary items back:
return items
This entire struct is built so that we can pass in any filename we want; it will return the data if it finds the file.
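For example, once a location JSON file such as `Boston.json` is in the project (Boston is the same hard-coded value we use later for the map), calling the manager is a one-liner; this is just a usage sketch:

let restaurants = RestaurantAPIManager.loadJSON(file: "Boston")
print("loaded \(restaurants.count) restaurants")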
# Location list
Let's review how our app will work. A user will select a cuisine and location. Then, the location is passed to the Explore view. The user will get restaurants from the selected location, which have been filtered by the selected cuisine.
If this were online, we would pass the location to the API, and the API would return the JSON data. As you can see, we are doing the same thing here. When you eventually deal with an API, the transition to working with online data will be seamless.
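We will not write the online version in this book, but as a rough sketch, the equivalent call to a live API might look something like the following. The `fetchRestaurants(for:completion:)` function and the URL are placeholders, not part of our project:

func fetchRestaurants(for city: String, completion: @escaping ([[String: AnyObject]]) -> Void) {
// a placeholder endpoint; a real API documents its own URL and parameters
guard let url = URL(string: "https://example.com/restaurants?city=\(city)") else { return }
URLSession.shared.dataTask(with: url) { data, _, _ in
var items: [[String: AnyObject]] = []
if let data = data,
let json = try? JSONSerialization.jsonObject(with: data, options: []),
let restaurants = json as? [[String: AnyObject]] {
items = restaurants
}
// hop back to the main queue before handing the results to the UI
DispatchQueue.main.async { completion(items) }
}.resume()
}

The result has the same shape, an array of dictionary objects, which is why switching from local files to a live feed later is a small change.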
# Selecting a location
As stated earlier, to get data, we need a location. To get the location, we need to get it from the `LocationViewController`. When a location is selected, we will show a checkmark. We will need this checkmark to update each time a new item is set. Finally, when the Done button is tapped, we need to pass this location to `ExploreViewController`.
We need to create a location item that will have both the city and state that we can use and pass around.
Right-click on the Model folder inside of Locations folder and select New File.
Inside the Choose a template for your new file screen, select iOS at the top. Then, select Swift file and name the file `LocationItem`. Hit Create and add the following:
struct LocationItem {
var state: String?
var city: String?
}
extension LocationItem {
init(dict: [String: AnyObject]) {
self.state = dict["state"] as? String
self.city = dict["city"] as? String
}
var full: String {
guard let city = self.city, let state = self.state else { return "" }
return "\(city), \(state)"
}
}
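Before we break this down, here is a tiny usage sketch; the city and state values are made up:

let sample = LocationItem(dict: ["city": "Charlotte" as AnyObject, "state": "NC" as AnyObject])
print(sample.full) // prints "Charlotte, NC"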
Here, we are creating a `LocationItem` that is a struct. This struct has two variables, state and city, that are optionals. So far, nothing too crazy. Next, we added an extension that contains a custom `init()` method that passes a dictionary into it. Finally, we created a full variable that will take our city and state and combine it into one string for display purposes. Now that we have our item, let's update our `LocationViewController` next. We need a variable to keep track of the selected location. Add the following inside the `LocationViewController.swift` file, under the constant manager:
var selectedCity:LocationItem?
Then, we need to create a new extension for `UITableViewDelegate`, as follows. Add the following after our `UITableViewDataSource` extension:
//MARK: UITableViewDelegate
extension LocationViewController: UITableViewDelegate {
}
As we discussed earlier in this book, delegates supply the behavior. Here, we want a behavior for when the user selects a Table View row, and another behavior for when the user deselects the row. First, let's update our `cellForRowAt` method in the `UITableViewDataSource` extension so that it displays the full location (city and state), by adding the following code:
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: "locationCell", for: indexPath) as UITableViewCell
cell.textLabel?.text = manager.locationItem(at:indexPath).full
return cell
}
Next, let's add the selection behavior in our new extension by adding the following code:
func tableView(_ tableView: UITableView, didSelectRowAt indexPath:IndexPath) {
if let cell = tableView.cellForRow(at: indexPath) {
cell.accessoryType = .checkmark
selectedCity = manager.locationItem(at:indexPath)
tableView.reloadData()
}
}
Here, we will get the cell of the selected row and set its `accessoryType` to `checkmark`. Then, we will get the location and set it to the `selectedCity` variable. To only see the `checkmark` in our Table View cell, we need to remove the disclosure arrow and gray cell selection. Let's update this by doing the following:
1. Open the `Locations.storyboard` file.
2. Select the `locationCell` Table View in the Location View Controller.
3. Select the Attributes inspector in the Utilities panel, and update the Selection field from Gray to None.
4. Next, update the Accessory field from Disclosure Indicator to None.
# Adding a Header view
Our Explore view has a header, and we need to pass data over to it. To do that, we need to create a header class for it:
1. Right-click on the View folder inside of Explore folder and select New File.
2. On the Choose a template for your new file screen, select iOS at the top. Then, select Cocoa Touch Class. Then, hit Next.
3. In the options screen that appears, add the following:
New file:
* * Class: `ExploreHeaderView`
* Subclass: `UICollectionReusableView`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next and then Create.
5. Add the following to this file:
import UIKit
class ExploreHeaderView: UICollectionReusableView {
@IBOutlet weak var lblLocation:UILabel!
}
6. Next, open the `Explore.storyboard` file, select the header view in the Explore View Controller scene, and, under the Identity inspector in the Utilities Panel, update the Class to `ExploreHeaderView`.
Now, let's work on passing data from a location to explore and display the selected location in our header.
# Passing a selected location back to Explore View
We need to be able to send the selected city back to our `ExploreViewController`. Therefore, inside `ExploreViewController`, we need a variable for the selected city, as well as an unwind method for the Done button. First, let's get our selected city to display in our Explore view:
1. Add the following variable under the constant manager in our `ExploreViewController.swift` file:
var selectedCity:LocationItem?
var headerView: ExploreHeaderView!
2. Next, open `Explore.storyboard` and select the Explore Header View in the Outline view:
3. Then, select the Connections inspector in the Utilities Panel, and click and drag `lblLocation` from the empty circle under Outlets to the label in the Explore View Controller Header scene:
Next, let's unwind our Done button in our Explore View Controller.
# Unwinding our Done button
Earlier in this book, we added an unwind for our Cancel button. Now, we need to make it so that our Done button can also dismiss the modal, but we also want to capture the selected location when the user is done. Let's add this code next:
1. Open the `ExploreViewController.swift` file again and, in the `private` extension under the `unwindLocationCancel()` function, add the following code:
@IBAction func unwindLocationDone(segue:UIStoryboardSegue) {
if let viewController = segue.source as? LocationViewController {
selectedCity = viewController.selectedCity
if let location = selectedCity {
headerView.lblLocation.text = location.full
}
}
}
The code we just added checks the source of the segue. If its source is a class of `LocationViewController`, then we want to grab the selected city and set the `selectedCity` variable inside `ExploreViewController` to that city.
We then use an `if...let` statement to make sure that `selectedCity` is not `nil`; if it isn't, then we set the label in the header to the currently selected city. Now, we need to hook up `IBAction`.
2. In your `UICollectionViewDataSource` extension, update `collectionView:viewForSupplementaryElementOfKind:atIndexPath:` with the following:
func collectionView(_ collectionView: UICollectionView, viewForSupplementaryElementOfKind kind: String, at indexPath: IndexPath) -> UICollectionReusableView {
let header = collectionView.dequeueReusableSupplementaryView(ofKind: kind, withReuseIdentifier: "header", for: indexPath)
headerView = header as? ExploreHeaderView
return headerView
}
3. Next, open `Locations.storyboard`.
4. Now, use _control_ and drag from the Done button in the Location View Controller to Exit in the Location View Controller scene:
5. When you let go, select `unwindLocationDoneWithSegue:` in the menu that appears:
Let's build and run the project by hitting the Play button (or use _command_ \+ _R_ ). You should now be able to select a location; when you hit Done, the Explore Header view should show you the selected location:
# Getting the last selected location
We have a couple of issues that we need to correct under Select a location. You will notice that when you click on Select a location, you can check multiple locations. We only want the user to be able to select one location. Also, the checkmark next to your selected location disappears if you click on Done in Location View and then click to choose a location again. We need to set the last selected location so that it is saved when you go back to your location list. We can address these issues at the same time:
1. Open `Explore.storyboard`.
2. Select the segue that is connected to the `LocationViewController`.
3. Then, select the Attributes inspector in the Utilities Panel and set Identifier under Storyboard Segue to locationList. Then, hit _Enter_ :
4. Now, select the segue that is connected to the `RestaurantListViewController` and set Identifier to `restaurantList`. Then, hit _enter_ :
5. Both of these identifiers were added to our `Segue.swift` file.
Right now, we are just displaying locations by city, but we also need to display the state. Our plist, `Locations.plist`, has both a city and a state value:
1. Next, open up the `LocationDataManager.swift` file and update the locations array to now be a `LocationItem`:
private var locations:[LocationItem] = []
2. Now, update the `fetch()` method to the following:
func fetch() {
for location in loadData() {
locations.append(LocationItem(dict:location))
}
}
3. Next, we need to update the `locationItem()` method. Currently, we are returning a `String`, but we want to return the object:
func locationItem(at index:IndexPath) -> LocationItem {
return locations[index.item]
}
4. Finally, let's add the following code before the last curly brace:
func findLocation(by name:String) -> (isFound:Bool, position:Int) {
guard let index = locations.index(where: { $0.city == name }) else {
return (isFound:false, position:0) }
return (isFound:true, position:index)
}
This method will allow us to find the location, and then obtain its index position within the array. We will return a tuple, which is a compound type in Swift, meaning that it can hold multiple values. Tuples allow you to combine different data types into one. This method will check the tuple to see whether or not we found the data. If we found the data, then we will use the index position; if not, we will not do anything.
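As a quick sketch, calling this method from the view controller (which already has a `manager` constant) looks like the following; the city name is just an example:

let result = manager.findLocation(by: "Charlotte")
if result.isFound {
print("found at position \(result.position)")
}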
5. Next, we need to check whether or not a previous location was set. Open up the `LocationViewController.swift` file and create the following method after the `viewDidLoad()` method:
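Assembled from the parts below, a sketch of the complete method looks like this; the `set(selected:at:)` signature is inferred from the call we add at the end of this list:

func set(selected cell:UITableViewCell, at indexPath:IndexPath) {
if let city = selectedCity?.city {
let data = manager.findLocation(by: city)
if data.isFound {
if indexPath.row == data.position {
cell.accessoryType = .checkmark
}
else { cell.accessoryType = .none }
}
else { cell.accessoryType = .none }
}
}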
Let's break this method down:
* **Part A** : In the parameters of this method, we take in a cell and an index path:
set(cell:at)
* **Part B** : Here, we check to make sure that the selected city is set:
if let city = selectedCity?.city
* **Part C** : Then, we call the method we created in `LocationDataManager`, passing the selected city into the manager, and getting back a tuple of data:
let data = manager.findLocation(by: city)
* **Part D** : Next, we check to see if data was found in the tuple; if so, we check to see if the selected row is the same as the position in the array. If the row and position are the same, we direct the cell to set its `accessoryType` to a checkmark; otherwise, `accessoryType` will be set to none:
if data.isFound {
if indexPath.row == data.position {
cell.accessoryType = .checkmark
}
else { cell.accessoryType = .none }
}
* **Part E** : Finally, if no data is found, we set `accessoryType` to none:
else { cell.accessoryType = .none }
Add the following inside `cellForRowAt()`, after we set the text for the cell:
set(selected: cell, at: indexPath)
Build and run the project by hitting the Play button (or use _command_ \+ _R_ ). You should see that you can only select one location now. However, after you select the location, if you click on Done in the Location view and then click to show the locations again, your last selected location will not have been saved. We still need to address this issue, which we will do next.
# Passing location and cuisine to the restaurant list
Open the `ExploreViewController.swift` file, and inside the `private` extension, add the following method above the `unwindLocationCancel()` method:
func showLocationList(segue:UIStoryboardSegue) {
guard let navController = segue.destination as? UINavigationController,
let viewController = navController.topViewController as? LocationViewController else {
return
}
guard let city = selectedCity else { return }
viewController.selectedCity = city
}
Our `showLocationList()` method will be called whenever our destination view has a Navigation Controller. Then, it checks to see if the `topViewController` is of the `LocationViewController` class. If either of these two statements are `false`, we do nothing. If both are `true`, we check the `selectedCity`; if it is `nil`, then we also do nothing. If the `selectedCity` has a location, we set the `selectedCity` variable inside the `LocationViewController` to the `selectedCity` in the `ExploreViewController`. Adding this will save the last selected location if we return to the locations list after we selected a location earlier.
We also need to pass the selected city over to the `RestaurantListViewController`. Therefore, add the following variables inside the `RestaurantListViewController.swift` file above your `@IBOutlet var collectionView`:
var selectedRestaurant:RestaurantItem?
var selectedCity:LocationItem?
var selectedType:String?
While still in the `RestaurantListViewController.swift` file, add the following code under the `viewDidLoad()` method:
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
print("selected city \(selectedCity as Any)")
print("selected type \(selectedType as Any)")
}
The `viewDidAppear()` method will get called every time we load the View Controller, whereas the `viewDidLoad()` method only gets called once. We can print the `selectedCity` variable to verify that we are passing the location over correctly.
Next, open the `ExploreViewController.swift` file again and, inside, add the following under the `showLocationList()` method:
func showRestaurantListing(segue:UIStoryboardSegue) {
if let viewController = segue.destination as? RestaurantListViewController, let city = selectedCity,
let index = collectionView.indexPathsForSelectedItems?.first {
viewController.selectedType = manager.explore(at: index).name
viewController.selectedCity = city
}
}
We will now check to see if the segue destination is `RestaurantListViewController`, and make sure that `selectedCity` is set in `ExploreViewController`. Next, we need to get the selected `indexPath` of the Collection view. Once we have that, we then get the item from the `ExploreDataManager` at the `index` position.
Finally, we get the name from the item. If we get all of those items back, then we pass the `selectedCity` and `selectedType` variables to the `RestaurantListViewController`. If we do not, then we will display an alert, letting the user know that they need to select a location first. Let's create the three methods that will display such an alert:
1. First, we will create the actual alert. While still in the `ExploreViewController`, add the following code before `unwindLocationCancel()`:
func showAlert() {
let alertController = UIAlertController(title: "Location Needed", message:"Please select a location.", preferredStyle: .alert)
let okAction = UIAlertAction(title: "OK", style: .default, handler: nil)
alertController.addAction(okAction)
present(alertController, animated: true, completion: nil)
}
2. Then, we need to check that we have a location; if not, we want to make sure that the user cannot go to the restaurant list. Inside the `ExploreViewController`, add the following method after the `viewDidLoad()` method:
override func shouldPerformSegue(withIdentifier identifier: String, sender: Any?) -> Bool {
if identifier == Segue.restaurantList.rawValue {
guard selectedCity != nil else {
showAlert()
return false
}
return true
}
return true
}
Here, we check whether the segue's identifier equals `restaurantList`. If it does, we check to see whether the `selectedCity` variable is set. If it is, we return `true`, the segue is performed, and we go to the restaurant list. If it is not, we display our alert, letting the user know that they need to select a location first, and return `false`.
3. Lastly, we need to show either the location list or restaurant list, depending on whether or not the user chose a location before trying to see the restaurant list. Add the following method after `viewDidLoad()`, and before the `shouldPerformSegue` method we just added:
override func prepare(for segue: UIStoryboardSegue, sender: Any?){
switch segue.identifier! {
case Segue.locationList.rawValue:
showLocationList(segue: segue)
case Segue.restaurantList.rawValue:
showRestaurantListing(segue: segue)
default:
print("Segue not added")
}
}
The `prepare()` method checks which identifier is called. If it is the location list, then we call the `showLocationList()` method; if it is the restaurant list, then we call the `showRestaurantListing()` method.
Now, build and run the project by hitting the Play button (or use _command_ \+ _R_ ). If you try to select a cuisine first, you should not be able to go to the restaurant list. Instead, you should receive an alert, stating that you need to select a location:
If you pick a location, hit Done, and then tap the locations list again, you should see that your location is still selected. Now, if you select a cuisine, you should be directed to the restaurant listing and see the selected location printing in the Debug Panel. If you do not see that panel, you can open it using the toggle or _command_ \+ _shift_ \+ Y:
Now that we have the location, we need to check our `RestaurantAPIManager` for data. Therefore, let's update our `print` statement inside the `RestaurantListViewController` by revising the `viewDidAppear()` method so that it does the following:
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
guard let location = selectedCity?.city, let type = selectedType else {
return
}
print("type \(type)")
print(RestaurantAPIManager.loadJSON(file: location))
}
You should now see the type selected, along with an array of dictionary objects, in the Debug Panel:
Now that we have our data, let's get that data to display in our `RestaurantListViewController`. To do this, we need to set up our cell, as well as a restaurant data manager. The restaurant data manager, rather than the `RestaurantListViewController`, will be the class responsible for loading our restaurant data.
# Creating our restaurant cell class
Now, we need to create a file so that we can connect to the cell:
1. Right-click on the `Restaurants` folder in the Navigator panel, and create a new group called `View`. Then, right-click the `View` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top and then Cocoa Touch Class. Then, hit Next.
3. In the options screen that appears, add the following:
New file:
* * Class: `RestaurantCell`
* Subclass: `UICollectionViewCell`
* Also create XIB file: Unchecked
* Language: `Swift`
4. Click Next and then Create. Your file should look like the following:
import UIKit
class RestaurantCell: UICollectionViewCell {
}
5. Inside the class declaration, add the following:
@IBOutlet weak var lblTitle:UILabel!
@IBOutlet weak var lblCuisine:UILabel!
@IBOutlet weak var imgRestaurant:UIImageView!
6. Save the file.
Now that our file is set up, let's work on getting our outlets connected.
# Setting up restaurant list cell outlets
We need to set up our `restaurantCell` outlets:
1. Open `Explore.storyboard` and select our `restaurantCell` again in the Outline view.
2. Now, in the Utilities Panel, select the Identity inspector.
3. Under Custom Class, in the Class drop-down menu, select RestaurantCell and hit _enter_ to connect the Cell to the class.
4. Now, select the Connections inspector.
5. Click on and drag `lblTitle` from the empty circle, which is under Outlets, to the top label in our `restaurantCell`:
6. Click on and drag `lblCuisine` from the empty circle, which is under Outlets, to the other label in our `restaurantCell`:
7. Click on and drag `imgRestaurant` from the empty circle, which is under Outlets, to the image in our `restaurantCell`:
Now that we have our `restaurantListCell` outlets set up, let's get some data into our cell. We previously created our `RestaurantItem.swift` file; we will use this in our restaurant list.
# Creating a restaurant data manager
We need to create a data manager for our restaurants, but before we do that, we need to update a few things. In Swift 4, we have a more natural way to parse JSON, using what is called **Decodable**. First, we need to update our `RestaurantItem`, but before we get into what this code looks like, our `RestaurantItem` needs to conform to Decodable. Open `RestaurantItem` and update it to include the following:
class RestaurantItem: NSObject, MKAnnotation, Decodable {
var name: String?
var cuisines:[String] = []
var latitude: Double?
var longitude:Double?
var address:String?
var postalCode:String?
var state:String?
var imageURL:String?
var title: String? {
return name
}
var subtitle: String? {
if cuisines.isEmpty { return "" }
else if cuisines.count == 1 { return cuisines.first }
else { return cuisines.joined(separator: ", ") }
}
var coordinate: CLLocationCoordinate2D {
guard let lat = latitude, let long = longitude else {
return CLLocationCoordinate2D() }
return CLLocationCoordinate2D(latitude: lat, longitude: long )
}
enum CodingKeys: String, CodingKey {
case name
case cuisines
case latitude = "lat"
case longitude = "long"
case address
case postalCode = "postal_code"
case state
case imageURL = "image_url"
}
}
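Before we move on, here is a quick sketch of what Decodable gives us. The JSON string below is made up, but its keys mirror the mapping in our CodingKeys enum:

let sampleJSON = """
[{"name": "Test Cafe", "cuisines": ["Cafe"], "lat": 42.35, "long": -71.06,
"address": "1 Main St", "postal_code": "02110", "state": "MA",
"image_url": "http://example.com/photo.jpg"}]
""".data(using: .utf8)!
// JSONDecoder builds RestaurantItem objects for us; no manual dictionary parsing needed
let decoded = try? JSONDecoder().decode([RestaurantItem].self, from: sampleJSON)
print(decoded?.first?.name ?? "no restaurant decoded")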
Our `RestaurantItem` now conforms to Decodable, which means it can be created directly from JSON data. The variable names in `RestaurantItem` match the keys inside the JSON files. If a variable name is different from its JSON key, you can assign it the key name inside quotes, as we do for `postalCode`, `imageURL`, `latitude`, and `longitude`. We need to create a manager that loads `RestaurantItem` from the location JSON files. Let's create the `RestaurantDataManager` file now:
1. Right-click on the `Restaurants` folder and create a new group called `Model`. Then, right-click the `Model` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top and then Swift File. Then, hit Next.
3. Name this file `RestaurantDataManager`, and hit Create.
We need to define our class definition first, so add the following under the `import` statement in this new file:
class RestaurantDataManager {
}
Inside the class declaration, add the following variable:
private var items:[RestaurantItem] = []
Here, we keep our array `private`, since there is no reason to have to access this outside of the class.
Now, let's add the following three methods:
func fetch(by location:String, with filter:String = "All", completionHandler:(_ items:[RestaurantItem]) -> Void) {
if let file = Bundle.main.url(forResource: location, withExtension: "json") {
do {
let data = try Data(contentsOf: file)
let restaurants = try JSONDecoder().decode([RestaurantItem].self, from: data)
if filter != "All" {
items = restaurants.filter({ $0.cuisines.contains(filter) })
}
else { items = restaurants }
}
catch {
print("there was an error \(error)")
}
}
completionHandler(items)
}
func numberOfItems() -> Int {
return items.count
}
func restaurantItem(at index:IndexPath) -> RestaurantItem {
return items[index.item]
}
The first method here differs from the one we looked at in `ExploreDataManager`, whereas the last two methods here are the same as those in `ExploreDataManager`:
Let's break these methods down so that we can understand what we are doing:
* **Part A** : This is a private array of `RestaurantItem`:
private var items:[RestaurantItem] = []
* **Part B** : This function is pretty long; however, we are simply fetching restaurants for a location, with an optional cuisine filter. It takes a closure (a completion handler), which lets us hand the results back once the work is complete:
fetch(by:with:completionHandler:)
* **Part C** : In this parameter, we are setting a default. If we do not pass anything into this parameter, it will use `All`; otherwise, it will use whatever we give it:
with filter:String = "All"
As you type your code, Xcode will provide code hints (choices) that it believes you might want. When you type this method, Xcode gives you two hints: one that includes the `with:` parameter (which takes a filter), and one that does not:
* **Part D** : Here, we use Decodable to parse the JSON file and create an array of `RestaurantItem`:
do {
let data = try Data(contentsOf:file)
let restaurants = try JSONDecoder().decode([RestaurantItem].self,from:data)
...
}
catch {
print("there was an error \(error)")
}
* **Part E** : Inside of the if-statement, we filter the restaurants by cuisine. Since our restaurants have multiple cuisines, we must check each cuisine, which is why we use contains:
do {
...
if filter != "All" {
items = restaurants.filter({ ($0.cuisines.contains(filter)) })
}
else { items = restaurants }
}
catch {
print("there was an error \(error)")
}
* **Part F** : This is used to tell our method that we are finished and pass back the restaurant items:
completionHandler(items)
* **Part G** : This method tells us how many restaurant items we have:
numberOfItems()
* **Part H** : This method allows us to get the restaurant at the index position at which it is located:
restaurantItem(at:)
Now we have a greater understanding of our restaurant data manager. We have written a lot of code, and some of it may not make sense to you yet, but as long as you have a basic understanding, you will be fine.
We now need to update `MapDataManager` to work with the JSON files. Open `MapDataManager` and update `fetch()` to the following:
func fetch(completion:(_ annotations:[RestaurantItem]) -> ()) {
let manager = RestaurantDataManager()
manager.fetch(by:"Boston") { (items) in
self.items = items
completion(items)
}
}
In this method, we create an instance of `RestaurantDataManager` and then tell it to fetch the restaurants for Boston. This is hard-coded for now, but you could make it dynamic by getting a value from the user instead. Now, we need to get the data displaying on our restaurant list. One of the most common issues when displaying data is handling a Table View or Collection View when there is no data. Some of the filtering we are doing may return no results, so we should handle both cases. We are going to do this next.
# Handling no data
It is common to want to create a custom view that you can reuse, but also have a visual representation of it as well. There are two common ways to do this: the first way we will demonstrate now, and the other we will do later in this book. You can create a `UIView` that comes with a **XIB** (pronounced zib or nib). XIBs were the common way to create elements before storyboards, and are still effective today. Let's create one now:
1. Right-click on the `Misc` folder and select New Group and call it `No Data`.
2. Then, right-click on the `No Data` folder and create a new file.
3. On the Choose a template for your new file screen, select iOS at the top. Then, select Cocoa Touch Class. Then, hit Next.
4. In the options screen that appears, add the following:
New file:
* * Class: `NoDataView`
* Subclass: `UIView`
* Language: `Swift`
5. Click Next and then Create.
6. Next, right-click on the `No Data` folder again and create a new file.
7. Inside the Choose a template for your new file screen, select **iOS** at the top. Then, select View under User Interface. Then, hit Next.
8. Name the file `NoDataView` and hit Create.
9. First, open the `NoDataView.swift` file and add the following into this file:
class NoDataView: UIView {
var view: UIView!
@IBOutlet var lblTitle: UILabel!
@IBOutlet var lblDesc: UILabel!
override init(frame: CGRect) {
super.init(frame: frame)
setupView()
}
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)!
setupView()
}
func loadViewFromNib() -> UIView {
let nib = UINib(nibName: "NoDataView", bundle: Bundle.main)
let view = nib.instantiate(withOwner: self, options: nil)[0] as! UIView
return view
}
func setupView() {
view = loadViewFromNib()
view.frame = bounds
view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
addSubview(view)
}
func set(title: String) {
lblTitle.text = title
}
func set(desc: String) {
lblDesc.text = desc
}
}
Our two `init` methods are required and simply call `setupView()`. The `loadViewFromNib()` method loads our XIB file. Our `setupView()` takes the loaded NIB view and adds it as a subview of this `UIView`. Finally, we have two methods that set the text of our two labels. The first four methods are boilerplate code that you will write every time you want to create a `UIView` backed by a NIB (XIB) file.
Next, let's get this set up:
1. Open `NoDataView.xib`.
2. Select Files Owner in the Outline. Then, open the Identity inspector, update Class to `NoDataView` and hit _enter._
3. Next, in the filter field of the object library, type `label`.
4. Then, drag out two labels into the view.
5. Select one of the labels. Then, in the Attributes inspector, update the following values:
* * Text: Add `TITLE GOES HERE` into the empty text field under the text
* Color: `Black`
* Alignment: `Center`
* Font: `Avenir Next Condensed Bold 26`
6. Then, in the Size inspector, update the following values:
* * Width: `355`
* Height: `36`
7. Select the other label. Then, in the Attributes inspector, update the following values:
* * Text: Add `DESCRIPTION GOES HERE` into the empty text field under the text
* Color: `Black`
* Alignment: `Center`
* Font: `Avenir Next Condensed Regular 17`
8. Then, in the Size inspector, update the following values:
* * Width: `355`
* Height: `21`
9. Select both labels and then click the Pin icon. Make sure Height is checked and its value is filled in.
10. Now, with both labels selected, hit the Stack View icon. Alternatively, you can go to Editor | Embed In | Stack View.
11. Select the Stack View in the Outline view, and then the Pin icon. Enter the following values:
* * Right: `10`
* Left: `10`
12. Then, select the Align icon. Select the following:
* * Horizontally in the container: (this should be checked)
* Vertically in the container: (this should be checked)
13. Select the Files Owner in the Outline view.
14. Then, open the Connections inspector and connect `lblTitle` to the label that says `TITLE GOES HERE`.
15. Connect `lblDesc` to the other label.
When you are done, you should see the following:
Finally, let's connect everything. Open the `RestaurantListViewController.swift` file:
1. Above the `selectedRestaurant` variable, add the following:
var manager = RestaurantDataManager()
2. Next, add the following method inside the `private` extension:
func createData() {
guard let location = selectedCity?.city, let filter = selectedType else { return }
manager.fetch(by: location, with: filter) { _ in
if manager.numberOfItems() > 0 {
collectionView.backgroundView = nil
}
else {
let view = NoDataView(frame: CGRect(x: 0, y: 0, width: collectionView.frame.width, height: collectionView.frame.height))
view.set(title: "Restaurants")
view.set(desc: "No restaurants found.")
collectionView.backgroundView = view
}
collectionView.reloadData()
}
}
This method checks to see if we have a selected location and a filter. Then, we need to run the fetch method we created earlier. If we have any items, we should make sure that our background view is `nil`. If not, we will create our `NoDataView` and set it to display `No restaurants found`. Finally, we need to reload the Collection View.
3. Next, let's update `collectionView:cellForItemAt:` by adding the following:
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "restaurantCell", for: indexPath) as! RestaurantCell
let item = manager.restaurantItem(at: indexPath)
if let name = item.name { cell.lblTitle.text = name }
if let cuisine = item.subtitle { cell.lblCuisine.text = cuisine }
if let image = item.imageURL {
if let url = URL(string: image) {
let data = try? Data(contentsOf: url)
if let imageData = data {
DispatchQueue.main.async {
cell.imgRestaurant.image = UIImage(data: imageData)
}
}
}
}
return cell
}
Here, we are just passing data into our cell. We are displaying the title, cuisine, and the image.
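One thing to be aware of: `Data(contentsOf:)` blocks the thread it runs on while the image downloads, and `cellForItemAt` runs on the main thread. An optional refinement, not required for this chapter, is to move the download onto a background queue:

if let image = item.imageURL, let url = URL(string: image) {
// download off the main thread, then hop back to update the UI
DispatchQueue.global().async {
if let imageData = try? Data(contentsOf: url) {
DispatchQueue.main.async {
cell.imgRestaurant.image = UIImage(data: imageData)
}
}
}
}

Keep in mind that reused cells can briefly show a stale image with this simple approach; a production app would also confirm the cell is still showing the same restaurant before assigning the image.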
4. Finally, update `-collectionView:numberOfItemsInSection:` to the following:
func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
return manager.numberOfItems()
}
5. Build and run the project. You should now see the following, either with data or without:
Before we wrap this up, let's add one more thing: when a location is selected, we will display it on this view using the new iOS 11 large titles. Add the following into the private extension, under `createData()`, inside `RestaurantListViewController`:
func setupTitle() {
navigationController?.setNavigationBarHidden(false, animated: false)
if let city = selectedCity?.city, let state = selectedCity?.state {
title = "\(city.uppercased()), \(state.uppercased())"
}
navigationController?.navigationBar.prefersLargeTitles = true
}
Then, update the `viewDidAppear()` method so that it calls `createData()` and then `setupTitle()` (see the sketch that follows). Now, if you build and rerun the project, you should see the selected city. When you scroll, the large title will appear in the title view:
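As a sketch, assuming we swap the earlier `print` statements for our two new helpers, the updated method looks like this:

override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
createData()
setupTitle()
}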
We are done with this chapter; good work! We did a lot, but you should be starting to see this app coming to life.
# Summary
Well, we finally have data working on our app. We are not using a service, but if we wanted to, it wouldn't be hard to add it. Working with local JSON files is pretty close to working with an API feed. You should feel confident doing either. One thing I love to do is this: when I know what the feed is like, but I do not want to write that portion yet, I will create static JSON files of the feed and work with those. Using static JSON files allows me to focus on getting the app to where it needs to be, and not get stopped because of the API data layer.
In this chapter, we learned what JSON is and how to use that JSON feed to make data for our app. We also looked at how to pass data using segues.
In the next chapter, we will look at how to display even more data.
# Displaying Data in Restaurant Detail
Our app is coming together nicely, but we have one more section to do before we can start adding features. We have data in all of our views, except for in our restaurant detail view. In the last chapter, we passed data using segues, and we are going to do this again in this chapter. We have a few other things in this view that we need to set up before we move on to some of the features of the app.
In this chapter, we will cover the following topics:
* Passing data using segues
* Connecting `IBOutlet` to display data
* Displaying one annotation in a map view
Let's set up our `RestaurantDetailViewController` by adding the following:
1. Add the following variables after the class declaration and before the `selectedRestaurant` variable:
// Nav Bar
@IBOutlet weak var btnHeart:UIBarButtonItem!
// Cell One
@IBOutlet weak var lblName:UILabel!
@IBOutlet weak var lblCuisine:UILabel!
@IBOutlet weak var lblHeaderAddress:UILabel!
// Cell Two
@IBOutlet weak var lblTableDetails:UILabel!
// Cell Three
@IBOutlet weak var lblOverallRating:UILabel!
// Cell Eight
@IBOutlet weak var lblAddress:UILabel!
@IBOutlet weak var imgMap:UIImageView!
2. Make sure you save the file.
Now that we've created our `IBOutlet` variables, we need to connect them:
1. Open the `RestaurantDetail.storyboard`, select the Restaurant Detail View Controller in the Outline view, and then open the Connections inspector in the Utilities Panel.
2. Now, from the empty circle, click on and drag each of the following variables we just added under Outlets to their respective elements in either the scene or Outline view.
3. The following is an empty circle for `imgMap` to the map view in the Outline view:
4. The following is an empty circle for `lblAddress` to the address Label above the map:
5. The following is an empty circle for `lblOverallRating` to the Label inside the Reviews cell:
6. The following is an empty circle for `lblTableDetails` to the Label under the header in the scene:
7. The following is an empty circle for `lblName` to the Label under the logo in the scene:
8. The following is an empty circle for `lblCuisine` to the Label under `lblName` in the scene:
9. The following is an empty circle for `lblHeaderAddress` to the Label under `lblCuisine` in the scene:
10. Finally, the following is an empty circle for `btnHeart` to our heart button:
Now that we have everything connected, we can jump into coding and get our detail page displaying data.
# Displaying data in our static Table View
Next, we need to create a method that will display all of our data in our labels.
Open the `RestaurantDetailViewController.swift` file and add the private extension after the last curly brace:
private extension RestaurantDetailViewController {
func setupLabels() {
guard let restaurant = selectedRestaurant else { return }
if let name = restaurant.name {
lblName.text = name
title = name
}
if let cuisine = restaurant.subtitle { lblCuisine.text = cuisine }
if let address = restaurant.address {
lblAddress.text = address
lblHeaderAddress.text = address
}
lblTableDetails.text = "Table for 7, tonight at 10:00 PM"
}
}
This method will now get the data and display it inside our labels. Next, we want to display a map of the restaurant's location at the bottom of our Detail view. Now, you might be wondering why we are using an image and not a map. A live map view uses a lot more resources, whereas an image keeps things much smoother. When you go with this approach, you can always add a button that opens an actual map, but a snapshot is a good way to show the location without loading a map for every detail view.
Let's arrange for an image of a map to display, and also show our custom annotation in the image. Add the following method under the `setupLabels()` method and before the last curly brace:
func createMap() {
guard let annotation = selectedRestaurant, let long = annotation.longitude, let lat = annotation.latitude else { return }
let location = CLLocationCoordinate2D(
latitude: lat,
longitude: long
)
takeSnapShot(with: location)
}
In this method, we get the longitude and latitude and enter the values into a `CLLocationCoordinate2D` object. We then pass the location to a method called `takeSnapShot(with:)`. We get two errors after we add this. The first one is for `CLLocationCoordinate2D`, and to get rid of it, we need to import `CoreLocation` at the top of the file. To get rid of the last one, simply add the following method under the `createMap()` method:
func takeSnapShot(with location: CLLocationCoordinate2D) {
let mapSnapshotOptions = MKMapSnapshotter.Options()
var loc = location
let polyLine = MKPolyline(coordinates: &loc, count: 1)
let region = MKCoordinateRegion(polyLine.boundingMapRect)
mapSnapshotOptions.region = region
mapSnapshotOptions.scale = UIScreen.main.scale
mapSnapshotOptions.size = CGSize(width: 340, height: 208)
mapSnapshotOptions.showsBuildings = true
mapSnapshotOptions.showsPointsOfInterest = true
let snapShotter = MKMapSnapshotter(options: mapSnapshotOptions)
snapShotter.start() { snapshot, error in
guard let snapshot = snapshot else {
return
}
UIGraphicsBeginImageContextWithOptions(mapSnapshotOptions.size, true, 0)
snapshot.image.draw(at: .zero)
let identifier = "custompin"
let annotation = MKPointAnnotation()
annotation.coordinate = location
let pinView = MKPinAnnotationView(annotation: annotation, reuseIdentifier: identifier)
pinView.image = UIImage(named: "custom-annotation")!
let pinImage = pinView.image
var point = snapshot.point(for: location)
let rect = self.imgMap.bounds
if rect.contains(point) {
let pinCenterOffset = pinView.centerOffset
point.x -= pinView.bounds.size.width / 2
point.y -= pinView.bounds.size.height / 2
point.x += pinCenterOffset.x
point.y += pinCenterOffset.y
pinImage?.draw(at: point)
}
if let image = UIGraphicsGetImageFromCurrentImageContext() {
UIGraphicsEndImageContext()
DispatchQueue.main.async {
self.imgMap.image = image
}
}
}
}
This method is long, but it allows us to create a map image at the size we need. We pass all of our settings to the snapshotter to create a picture. Once we have created the image, we can then draw our custom annotation onto it. Although this requires a lot of code, it is the best way to understand it in its entirety. We recommend changing the values line by line to see how each one affects the image. We still have a couple of errors, and these are because we need to import `MapKit`.
Now that we have created our functions, we need to call them as follows:
Add the following after the `viewDidLoad()` method in the `RestaurantDetailViewController.swift` file:
func initialize() {
setupLabels()
createMap()
}
This method needs to be called inside your `viewDidLoad()` method. Replace the `dump` statement in the `viewDidLoad()` method with the following:
initialize()
Now, we have finished with our Restaurant Detail View Controller, but we need to make sure that the selected restaurant is passed over from the restaurant list view. Open `RestaurantListViewController` and add the following code under `viewDidLoad()`:
override func prepare(for segue:UIStoryboardSegue, sender:Any?) {
if let identifier = segue.identifier {
switch identifier {
case Segue.showDetail.rawValue:
showRestaurantDetail(segue:segue)
default: print("Segue not added")
}
}
}
Here, we are looking for the `showDetail` segue. When it's called, we call the `showRestaurantDetail()` method. Let's add that method next:
func showRestaurantDetail(segue:UIStoryboardSegue) {
if let viewController = segue.destination as? RestaurantDetailViewController, let index = collectionView.indexPathsForSelectedItems?.first {
selectedRestaurant = manager.restaurantItem(at:index)
viewController.selectedRestaurant = selectedRestaurant
}
}
Let's build and run the project by hitting the Play button (or using _command_ \+ _R_ ). When you select a restaurant, you should see all of the restaurant's information on the details page.
Also, you should see that a pin has been dropped at the restaurant's location on the map, which is actually an image:
We are done with the restaurant detail for now, but we still need to be able to show ratings, reviews, and photos. We will work on all of these features in upcoming chapters.
# Summary
We now have data displaying in our restaurant detail view, including a map snapshot with our custom annotation. Our app is now looking more and more like it should be available on the App Store. In the following chapters, we will turn our attention to adding features that you might want to use in your app. These features will enhance the user's experience, and therefore learning them will be invaluable. Even if the features don't seem like something you want or need, it will be beneficial in the long run to understand what they are and how they work.
In the next chapter, you will work with the camera, and learn how to apply filters and save images to the Camera Roll.
# Foodie Reviews
We are all familiar with reviews, from food reviews to App Store reviews. Seeing reviews for websites and apps is commonplace. In this chapter, we will create a review form that has a custom five-star rating component, which we will then add to it. We will learn about `UIControls` and how powerful they are. We will also look at literals and how to use them in our code.
In this chapter, we will cover the following topics:
* Creating a form that users can use to write a review
* Creating a custom star rating
* Image and color literals
# Getting started with reviews
Our review form UI is set up, but we need to make a slight change to it. Right now, we have an image displayed for ratings. We are going to build a custom rating component that we will use in both restaurant details and our Review form.
We will add it to our restaurant details first, and then, when finished, we will add it to the Review form. We want our ratings view to be able to show ratings from zero stars to five stars. We also want the user to be able to select half stars when rating, so it will also need to show half stars.
The first thing we will do is start creating our custom `UIControl`. `UIButton` and `UISwitch` are subclasses of `UIControl`, and, without getting super technical, we are going to create our own control:
1. Right-click the `Reviews Form` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next.
3. In the options screen that appears, add the following:
New file:
* * Class: `RatingsView`
* Subclass: `UIControl`
* Language: `Swift`
4. Click Next, and then Create.
Now that we have created our file, we want to be able to hook it up to a `UIView` in the storyboard. Let's do the following:
1. Open up `RestaurantDetail.storyboard`.
2. You will see an empty UIView next to the 0.0 rating label:
3. Next, select the view. Then, in the Identity inspector, update the Custom Class to `RatingsView` and hit _Enter_.
Now, we can get started. Open up the `RatingsView.swift` file and let's get started.
# Displaying ratings in our custom UIControl
Inside the `RatingsView.swift` file, we first need to create all of the variables we will need. Add the following under the class declaration:
let imgFilledStar = #imageLiteral(resourceName: "filled-star")
let imgHalfStar = #imageLiteral(resourceName: "half-star")
let imgEmptyStar = #imageLiteral(resourceName: "empty-star")
let shouldBecomeFirstResponder = true
var rating:CGFloat = 0.0
var totalStars = 5
If you copy and paste this code, you will have to select each image to see the actual image. If you are having trouble using an image literal, you can use `UIImage(named:)` instead.
We are doing something new in this file. We are using image literals as our variables. If you type `Image Literal` in your file and hit return, you will see a small icon:
Double-click this icon and a modal will appear, which will allow you to select an image:
You can look and find the three images using this window, or you can type everything you see here, and the image will appear. When done, you should see the following:
The first three variables are used for drawing our ratings view. The next variable, `shouldBecomeFirstResponder`, is a variable that lets us respond and handle events as they happen. Next, the rating variable is used for keeping track of our current rating. Finally, we have a variable to keep track of the total number of stars.
Now, let's add our `init` methods:
override init(frame: CGRect) {
super.init(frame: frame)
}
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
}
This is just boilerplate code that you need when creating views. There's nothing to explain here other than that you need it. Next, we need to create a few methods that will handle creating our stars. We need three of them for each type of star (full, half, and empty). Add the following after the last curly bracket:
private extension RatingsView {
func drawStar(with frame:CGRect, highlighted:Bool) {
let image = highlighted ? imgFilledStar : imgEmptyStar
draw(with: image, and: frame)
}
func drawHalfStar(with frame:CGRect) {
draw(with: imgHalfStar, and: frame)
}
func draw(with image:UIImage, and frame:CGRect) {
image.draw(in: frame)
}
}
These methods create a full, half, or empty star. We now need to be able to draw these stars. `UIView` has a draw method that we can use to draw stars. Before the `drawStar(frame:highlighted:)` method, add the following method inside the class:
override func draw(_ rect: CGRect) {
let context = UIGraphicsGetCurrentContext()
context!.setFillColor(#colorLiteral(red: 1, green: 1, blue: 1, alpha: 0).cgColor)
context!.fill(rect)
let availWidth = rect.size.width
let cellWidth = availWidth / CGFloat(totalStars)
let starSide = (cellWidth <= rect.size.height) ? cellWidth : rect.size.height
for index in 0..<totalStars {
let value = cellWidth*CGFloat(index) + cellWidth/2
let center = CGPoint(x: value+1, y: rect.size.height/2)
let frame = CGRect(x: center.x - starSide/2, y: center.y - starSide/2, width: starSide, height: starSide)
let highlighted = (Float(index+1) <= ceilf(Float(self.rating)))
if highlighted && (CGFloat(index+1) > CGFloat(self.rating)) {
drawHalfStar(with: frame)
} else {
drawStar(with: frame, highlighted: highlighted)
}
}
}
This is all of the code we'll need to create our stars. Let's break down the code and see what is happening. First, we get a graphics context, and we set its fill color to be clear. We are using a Color Literal this time, and this allows us to create colors and see those colors directly in our Swift file.
You can either type `Color Literal` and hit _Enter_ or use `UIColor` instead. You will see that a white box has been created for you, and if you double-click this box, you can edit the color, just like you would in the storyboard.
Next, we create three variables: `availWidth`, `cellWidth`, and `starSide`. Since we are using `UIView` in the storyboard, we need to check the size of this container. We then determine the size of each star based on the width and the number of stars. Finally, we calculate the height of the star.
Then, we loop through each star and create them based on the rating value. Our rating can be from 0-5, with increments of `0.5`. We also need to set up the positioning of each star using the center point. Finally, we determine, based on the value, whether the star should be an empty star, a half star, or a full star. This is our setup method. You do not have to get what is going on entirely—you only need to have a basic understanding. The more you code, the more it will start to make sense.
Before we build the project, open `RestaurantDetailViewController`, and add the following `IBOutlet` after `imgMap`:
@IBOutlet weak var ratingView: RatingsView!
Next, add the following method into the private extension:
func createRating() {
ratingView.rating = 3.5
}
Then, call the method after `createMap()` in the `initialize()` method.
Next, open `RestaurantDetail.storyboard` and select the Restaurant Detail View Controller. Then, in the Connections inspector, click and drag `ratingView` to the `UIView`. Let's build and run the project by hitting the Play button (or use _command_ \+ R). When you get to the restaurant details, you will see that we now have 3.5 stars:
This is precisely what we want, but we also need our control to be able to handle touch events.
# Adding our touch events
We will add touch events so that the user can change the rating to whatever they want. Open `RatingsView.swift`. Let's add the methods we need to get our control to accept touch events. Start by adding the following inside the main class:
override var canBecomeFirstResponder: Bool {
return shouldBecomeFirstResponder
}
override func beginTracking(_ touch: UITouch, with event: UIEvent?) -> Bool {
if self.isEnabled {
super.beginTracking(touch, with: event)
if (shouldBecomeFirstResponder && !self.isFirstResponder) {
becomeFirstResponder()
}
handle(with: touch)
return true
} else {
return false
}
}
Then, add the following in the private extension:
func handle(with touch: UITouch) {
let cellWidth = self.bounds.size.width / CGFloat(totalStars)
let location = touch.location(in: self)
var value = location.x / cellWidth
if (value + 0.5 < CGFloat(ceilf(Float(value)))) {
value = floor(value) + 0.5
} else {
value = CGFloat(ceilf(Float(value)))
}
updateRating(with: value)
}
// Update Rating
func updateRating(with value:CGFloat) {
if (self.rating != value && value >= 0 && value <= CGFloat(totalStars)) {
self.rating = value
setNeedsDisplay()
}
}
The preceding code is used to handle touches. First, we override the `canBecomeFirstResponder` property. Next, we have `beginTracking(_:with:)`. In this method, we decide whether our control accepts touch events. If the control is enabled, then we allow touches; we call the `handle(with:)` method and pass it the `UITouch`. Let's discuss the `handle(with:)` method.
In our handle method, we start with three variables. First, we get the width of the entire rating view. Next, we get the value of the touch location, and then, finally, we take the `x` value of the location and divide it by the width. We then check the value, figure out whether it is less than `0.5` or greater than `0.5`, and round appropriately. Last, we update the rating with the value we calculate.
In the `updateRating(with:)` method, we check to make sure that the new value is not equal to the current rating and that it is between zero and the total number of stars, inclusive. If these conditions pass, then we set the rating to the new value and call the `setNeedsDisplay()` method. This method makes sure that our control is redrawn.
Open `RestaurantDetailViewController`. In the `createRating()` method, add the following:
ratingView.isEnabled = true
We now have a rating, and by setting it to 3.5, we should see 3.5 stars. We also set the `isEnabled` value to `true`, which means that we can touch the view and change the rating. If we set it to `false`, then the value cannot change. In the restaurant details, we want touch turned off, but in the `ReviewFormViewController`, we want it enabled. You can play with this and, when you are done, set the `isEnabled` value back to `false` and remove the rating.
We will set the rating later in this book when we start saving reviews:
You can now change the rating from 3.5 to 4.5 by tapping on the view. Now that we have this set up, let's get our review form set up.
# Setting up the unwind segues
Currently, if you tap the Add Review button, you will see our review form modal. However, you cannot dismiss this view yet. As we have done before, we need to add code that allows us to unwind (dismiss) a View Controller:
1. Open the `RestaurantDetailViewController.swift` file and add the following into the private extension:
@IBAction func unwindReviewCancel(segue:UIStoryboardSegue) {}
2. Save the file and open the `ReviewForm.storyboard`.
3. Use _control_ and drag the Cancel button to the exit icon inside of the same View Controller:
4. In the screen that appears, under Action Segue, select `unwindReviewCancelWithSegue`.
If you build and run the project by hitting the Play button (or use _command_ \+ _R_ ), you should now be able to dismiss the Review Form.
# Creating our ReviewFormController
1. Right-click the `Review Form` folder again and select New File.
2. Inside of the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next.
3. In the options screen that appears, add the following:
New file:
* * Class: `ReviewFormViewController`
* Subclass: `UITableViewController`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next, and then Create.
Delete everything after the `viewDidLoad()` method, as we do not need all of the other code. Next, let's set up our `ReviewFormViewController` by adding the following after the class declaration:
@IBOutlet weak var ratingView: RatingsView!
@IBOutlet weak var tfTitle: UITextField!
@IBOutlet weak var tfName: UITextField!
@IBOutlet weak var tvReview: UITextView!
We also need to add a method when our save button is tapped. We can do this by adding the following code:
@IBAction func onSaveTapped(_ sender: Any) {
print(ratingView.rating)
print(tfTitle.text as Any)
print(tfName.text as Any)
print(tvReview.text)
dismiss(animated: true, completion: nil)
}
Now, let's connect this file to the review form's View Controller in the storyboard:
1. Open `ReviewForm.storyboard`, select the review form's View Controller in the Outline view, and then, in the Utilities panel, select the Identity inspector.
2. Under Custom Class, in the Class drop-down menu, type/select `ReviewFormViewController` and hit _Enter_ to connect the View Controller to the class.
3. Now, select the Connections inspector in the Utilities panel.
4. Now, from the empty circle, click and drag each of the variables we just added under Outlets to their respective elements in either the scene or Outline view.
5. Click and drag from the empty circle for `ratingView` to the `UIView` in the storyboard:
6. Click and drag from the empty circle for `tfTitle` to the `Textfield` in the storyboard:
7. Click and drag from the empty circle for `tfName` to the `Textfield` in the storyboard:
8. Click and drag from the empty circle for `tvReview` to the `Text View` in the storyboard:
9. Finally, click and drag from the empty circle for `onSaveTapped` to the Save button in the Navigation controller:
Now that we have our outlets connected to our form, let's build and run the project by hitting the Play button (or use _command_ \+ _R_ ). Go to your form, enter some information, and hit Save; you will see that information printed in the Debug panel. Our reviews are now ready to go.
# Summary
In this chapter, we created a Review Form using a static table view. We worked with Text View and Text Fields for the first time. We also set up our first custom `UIControl` with our star rating, and we got to use color and Image Literals. Literals are a great way to see your image or the color you are working with visually.
In the next chapter, we will work on creating a way to add a photo to a restaurant. We will also learn how to add filters to our photos.
# Working with Photo Filters
In this chapter, we will focus on creating photos for a restaurant and learn how to use the camera and camera roll. We will give the user the ability to take a picture and apply a filter to that picture. In the next chapter, we will tie the last chapter and this chapter together by completing the work on the Review Form and enabling users to save their reviews. We will also learn how to save photos.
In this chapter, we will cover the following topics:
* How to use the camera roll to get pictures
* How to use the camera to take pictures and bring them into our app
* How to apply filters to our pictures and get them ready to save to the device
# Understanding filters
Based on our design, we know that we are going to need to apply filters to a photo. Instead of just creating an array of filters, we are going to use a plist to load in a set of filters that we want. You can find the `FilterData.plist` file inside this chapter's `asset` folder. Drag and drop this file into the `Model` folder that is inside the `Review` folder. Make sure that Copy items if needed is checked, and then hit Finish.
Let's take a look at the plist and see what it contains:
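If the contents are easier to read in code, each entry in the plist is just a small dictionary with a display name and a Core Image filter name. A minimal sketch of a few such entries, once loaded into memory, might look like the following (the display names and the specific filters shown here are only illustrative; your plist may contain different ones):

let sampleFilters: [[String: AnyObject]] = [
["name": "None" as AnyObject, "filter": "None" as AnyObject],
["name": "Sepia" as AnyObject, "filter": "CISepiaTone" as AnyObject],
["name": "Noir" as AnyObject, "filter": "CIPhotoEffectNoir" as AnyObject]
]
// Each dictionary maps straight onto the FilterItem model we are about to create
let items = sampleFilters.map { FilterItem(dict: $0) }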
This list only has 10 of over 170 filters and effects that you can use. If you would like to see a full list of filters, you can find the list at <http://tinyurl.com/coreimage-ios>. Feel free to add, remove, or update any filters. Now that we have seen what our plist looks like, we need to create a model that represents this data. We also need to create a `Manager` class to manage our items. Let's create the model first:
1. Right-click the `Review` folder and create a new group called Model. Then right-click the `Model` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Swift File. Then, hit Next.
3. Name this file `FilterItem` and hit Create.
4. Next, we need to define our class; therefore, add the following under the `import` statement:
class FilterItem: NSObject {
let filter:String
let name:String
init(dict:[String:AnyObject]) {
name = dict["name"] as! String
filter = dict["filter"] as! String
}
}
The `filter` property holds the name of the Core Image filter that we pass in when applying the filter, and the `name` property is what we display to the user.
Let's create our `FilterManager` file next:
1. Right-click the `Photo Filter` folder and select New File.
2. Inside of the Choose a template for your new file screen, select iOS at the top, and then Swift File. Then, hit Next.
3. Name this file `FilterManager` and hit Create.
4. Next, we need to define our class definition; therefore, add the following under the `import` statement:
class FilterManager: DataManager {
func fetch(completionHandler:(_ items:[FilterItem]) -> Swift.Void) {
var items:[FilterItem] = []
for data in load(file: "FilterData") {
items.append(FilterItem(dict: data))
}
completionHandler(items)
}
}
This file uses our `DataManager` base class, which converts our plist data into an array of dictionary objects. Once that is complete, we create `FilterItems` from that.
Next, we need to create a file that takes a `FilterItem` and applies a filter to an image. Since we are going to do this in numerous places, it is best to have all of this code in one place. Therefore, we are going to create a file that handles all of this processing for us. Let's create our `ImageFiltering` file:
1. Right-click the `Photo Filter` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Swift File. Then, hit Next.
3. Name this file `ImageFiltering`, and hit Create.
4. Update your file to the following:
import UIKit
import CoreImage
protocol ImageFiltering {
func apply(filter:String, originalImage:UIImage) -> UIImage
}
protocol ImageFilteringDelegate:class {
func filterSelected(item:FilterItem)
}
extension ImageFiltering {
func apply(filter:String, originalImage:UIImage) -> UIImage {
let initialCIImage = CIImage(image: originalImage, options: nil)
let originalOrientation = originalImage.imageOrientation
guard let ciFilter = CIFilter(name: filter) else {
print("filter not found")
return UIImage()
}
ciFilter.setValue(initialCIImage, forKey: kCIInputImageKey)
let context = CIContext()
let filteredCIImage = (ciFilter.outputImage)!
let filteredCGImage = context.createCGImage(filteredCIImage, from: filteredCIImage.extent)
return UIImage(cgImage: filteredCGImage!, scale: 1.0, orientation: originalOrientation)
}
}
Let's break down each section so that we can understand what we are doing with this code:
import UIKit
import CoreImage
`CoreImage` gives us access to the image processing we need for filtering:
protocol ImageFiltering {
func apply(filter:String, originalImage:UIImage) -> UIImage
}
Creating this protocol allows us to have other classes conform to it, therefore giving us access to the method and allowing us to use it wherever we want:
protocol ImageFilteringDelegate:class {
func filterSelected(item:FilterItem)
}
We use this protocol when a filter is selected, so that the selected `FilterItem` can be passed from one View or View Controller to another. Finally, the protocol extension contains the `apply(filter:originalImage:)` method, which holds all of the code that we are going to use for applying filters to images.
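As a quick, hypothetical illustration of why the protocol-plus-extension approach is handy, any type can opt in to filtering just by declaring conformance; the `ThumbnailGenerator` type below is made up purely for this example:

struct ThumbnailGenerator: ImageFiltering {
// apply(filter:originalImage:) comes for free from the protocol extension
func sepiaThumbnail(from image: UIImage) -> UIImage {
return apply(filter: "CISepiaTone", originalImage: image)
}
}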
# Creating our filter scroller
After a user selects a photo to use, we present the user with a screen, which contains that image. In the following screenshot, we have a scroller, also known as a `UIScrollView`, which allows us to create content that scrolls either horizontally or vertically. The `UIScrollView` displays an image (thumbnail) with the filter applied to it as well as the name of the filter. This image and name represent our filters visually to our users.
When the user taps on the image, the user will see the selected filter change the primary image. Let's look at an example:
We are now going to create the elements inside the `UIScrollView`. Since we have created a lot inside our storyboard, let's create the `PhotoItem` entirely in code:
1. Right-click the `Model` folder inside of the `Review Form` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Swift File. Then, hit Next.
3. Name this file `PhotoItem` and hit Create.
4. Update your file to the following:
import UIKit
class PhotoItem: UIView, ImageFiltering {
}
5. Next, add your variables inside of the class declaration:
var imgThumb:UIImageView?
var lblTitle:UILabel?
var data:FilterItem?
weak var delegate: ImageFilteringDelegate?
Here, we are creating a delegate, which is used to let any class know when something happens. We use this delegate when someone taps on the object itself, which allows us to pass the `FilterItem` data to a delegate class.
You have used this pattern already plenty of times. Table Views and Collection Views both have delegates to which you conform.
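If delegation still feels abstract, here is a minimal, self-contained sketch of the same idea outside of our app; the names here are made up purely for illustration:

protocol DoorDelegate: class {
func doorDidOpen()
}
class Door {
weak var delegate: DoorDelegate?
func open() {
// Tell whoever is listening that something just happened
delegate?.doorDidOpen()
}
}
class House: DoorDelegate {
let frontDoor = Door()
init() { frontDoor.delegate = self }
func doorDidOpen() { print("Someone came in") }
}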
6. Now, we need to add our `init` methods. Add the following after your variables:
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
init(frame:CGRect, image:UIImage, item:FilterItem) {
super.init(frame: frame)
setDefaults(item: item)
createThumbnail(image: image, item: item)
createLabel(item: item)
}
Whenever you subclass `UIView` and add your own initializer, you are required to add the `required init?(coder:)` method. If you do not, Xcode gives you an error.
This custom `init()` method allows us to pass data (here, the frame, image, and filter items) when the item gets created. We have a few errors because we have not created the methods that we added to our `init()` method.
7. Next, let's create an extension and add the following methods:
private extension PhotoItem {
func setDefaults(item:FilterItem) {
data = item
let tap = UITapGestureRecognizer(target: self,
action:#selector(thumbTapped))
self.addGestureRecognizer(tap)
self.backgroundColor = .clear
}
func createThumbnail(image:UIImage, item:FilterItem) {
if item.filter != "None" {
let filteredImg = apply(filter: item.filter, originalImage: image)
imgThumb = UIImageView(image: filteredImg)
}
else { imgThumb = UIImageView(image: image) }
guard let thumb = imgThumb else {
return
}
thumb.contentMode = .scaleAspectFill
thumb.frame = CGRect(x: 0, y: 22, width: 102, height: 102)
thumb.clipsToBounds = true
addSubview(thumb)
}
func createLabel(item:FilterItem) {
lblTitle = UILabel(frame: CGRect(x: 0, y: 0, width: 102, height: 22))
guard let label = lblTitle else {
return
}
label.text = item.name
label.font = UIFont.systemFont(ofSize: 12.0)
label.textAlignment = .center
label.backgroundColor = .clear
addSubview(label)
}
}
Our `setDefaults()` method is used to create a tap gesture. When the item gets tapped, we call the `thumbTapped` method. We also set the data and the background color of the view.
`createThumbnail(image: item:)` is used to create an image view and apply the selected filter to its image. We then set its frame and add it to the View.
With our final method, `createLabel(item:)`, we create a label and pass in the name of the filter. We then set its frame and add the label to the View. We have two more methods that we need to add to our extension.
8. Add the following after the `createLabel(item:)` method:
@objc func thumbTapped() {
if let data = self.data {
filterSelected(item: data)
}
}
func filterSelected(item:FilterItem) {
delegate?.filterSelected(item: item)
}
The `thumbTapped()` method is used to detect taps. When the user taps the item, it calls `filterSelected`.
The `filterSelected(item:)` method simply forwards the item by calling the delegate's `filterSelected(item:)` method, which is part of the protocol we created earlier. We will see what happens when the selected filter gets called next.
Our `PhotoItem` is complete; now, we need to work on our cell for our `Filter` collection view.
# Creating a filter cell
We already created the cell that we need in the storyboard. However, before we create our View Controller, we need to create a filter cell. This cell is used to display all of the available filters:
1. Right-click the `Photo Filter` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next.
3. In the options screen that appears, add the following:
New file:
* * Class: `FilterCell`
* Subclass: `UICollectionViewCell`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next, and then Create.
5. Update your file with the following:
class FilterCell: UICollectionViewCell {
@IBOutlet var lblName:UILabel!
@IBOutlet var imgThumb: UIImageView!
}
extension FilterCell: ImageFiltering {
func set(image:UIImage, item:FilterItem) {
if item.filter != "None" {
let filteredImg = apply(filter: item.filter, originalImage: image)
imgThumb.image = filteredImg
}
else { imgThumb.image = image }
lblName.text = item.name
roundedCorners()
}
func roundedCorners() {
imgThumb.layer.cornerRadius = 9
imgThumb.layer.masksToBounds = true
}
}
Our cell is pretty basic: we are setting an image and giving it rounded corners.
6. Open `PhotoFilter.storyboard`.
7. In the Outline view, select the Collection View cell. Then, in the Utilities panel, under the Identity inspector, set the Custom Class to FilterCell.
8. In the Attributes inspector, set the Identifier to filterCell.
9. Next, connect your outlets for both `lblName` and `imgThumb`.
10. We need to make sure that we can dismiss our modal when we click the Add Photo button. We already added the method we needed, but we need to add this to the storyboard. Use _Control_ and drag from Cancel to the Exit icon:
11. In the popup, select `unwindReviewCancelWithSegue`:
We are done with setting up the cell and storyboard. Let's move on to creating our View Controller.
# Creating our PhotoFilterViewController
Now, we need to create our `PhotoFilterViewController`:
1. Right-click the `Photo Filter` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next.
3. In the options screen that appears, add the following:
New file:
* * Class: `PhotoFilterViewController`
* Subclass: `UIViewController`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next, and then Create.
When the file opens, delete everything after the `viewDidLoad()` method.
5. Then, add the following:
class PhotoFilterViewController: UIViewController {
var image: UIImage?
var thumbnail: UIImage?
let manager = FilterManager()
var selectedRestaurantID:Int?
var data:[FilterItem] = []
@IBOutlet var collectionView: UICollectionView!
@IBOutlet weak var imgExample: UIImageView!
override func viewDidLoad() {
super.viewDidLoad()
initialize()
}
}
Here, we are setting up our variables and our `initialize()` method. You can ignore the error, as we will fix this next by creating an extension after our class definition.
6. Add the following extension:
// MARK: - Private Extension
private extension PhotoFilterViewController {
func initialize() {
requestAccess()
setupCollectionView()
checkSource()
}
}
We are creating some basic functions that we need. Our first function is our `initialize()` method, which calls three new methods. Let's create those three methods next.
7. Add the following methods after the `initialize()` method:
func requestAccess() {
AVCaptureDevice.requestAccess(for: AVMediaType.video) { granted in
if granted {}
}
}
func setupCollectionView() {
let layout = UICollectionViewFlowLayout()
layout.scrollDirection = .horizontal
layout.sectionInset = UIEdgeInsets(top: 7, left: 7, bottom: 7, right: 7)
layout.minimumInteritemSpacing = 0
layout.minimumLineSpacing = 7
collectionView?.collectionViewLayout = layout
collectionView?.delegate = self
collectionView?.dataSource = self
}
func checkSource() {
let cameraMediaType = AVMediaType.video
let cameraAuthorizationStatus = AVCaptureDevice.authorizationStatus(for: cameraMediaType)
switch cameraAuthorizationStatus {
case .authorized:
showCameraUserInterface()
case .restricted, .denied:
break
case .notDetermined:
AVCaptureDevice.requestAccess(for: cameraMediaType) { granted in
if granted {
self.showCameraUserInterface()
}
}
}
}
The first method, `requestAccess()`, asks the user for permission to use the camera. The `setupCollectionView()` method is the basic setup for our collection view. We are doing something different with `delegate` and `dataSource`: in the previous chapters, we set these up using the Connections inspector, but this time, I am setting them up in code. Either can be done; there is no right or wrong way, but pick one way and stick with it throughout the entire app. I did both for demonstration purposes.
The `checkSource()` method checks the camera authorization status. If access has already been authorized, it calls `showCameraUserInterface()`; if the status is not determined yet, it requests access under the `.notDetermined:` case and, once access is granted, calls `showCameraUserInterface()`. As we will see shortly, `showCameraUserInterface()` automatically uses the photo library when you run the app in the simulator, since there is no camera, while on a device the user gets the camera. Now, we need to add two more helper methods. Let's add them first and then discuss them.
8. Add the following methods:
func showApplyFilter() {
manager.fetch { (items) in
if data.count > 0 { data.removeAll() }
data = items
if let image = self.image {
imgExample.image = image
collectionView.reloadData()
}
}
}
func filterItem(at indexPath: IndexPath) -> FilterItem{
return data[indexPath.item]
}
@IBAction func onPhotoTapped(_ sender: Any) {
checkSource()
}
The first method, `showApplyFilter()`, fetches the filter items, sets the selected image as the large example image, and reloads the collection view. `filterItem(at:)` is used when the user selects a filter item: we pass in the index position from the Collection View and get back the filter item at that position. This item is used to display the currently selected filter in the larger image above our Collection View.
Let's work on getting items displayed in our Collection View. As we have done in the past, we have a few methods that are required for our Collection View to display cells. Add the following extension under our private extension:
extension PhotoFilterViewController: UICollectionViewDataSource {
func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
return data.count
}
func numberOfSections(in collectionView: UICollectionView) -> Int {
return 1
}
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "filterCell", for: indexPath) as! FilterCell
let item = self.data[indexPath.row]
if let img = self.thumbnail {
cell.set(image: img, item: item)
}
return cell
}
}
We have done this before, but let's go over the methods again. `-collectionView:numberOfItemsInSection:` is responsible for the number of items in each section. For this collection view, it means the number of filter items we are going to display. Next, we have `-numberOfSectionsInCollectionView:`, which tells our Collection View how many sections we have; in our case, we only have one. Finally, we have `collectionView:cellForItemAtIndexPath:`. This is the method that gets run for every cell we need to create. In this method, we are creating a filter cell.
Now that we have our basic collection view set up, we need to make sure that our Collection View is laid out correctly. Let's add another extension in this file that is responsible for the layout of items for our Collection View. Add the following extension and method after the last extension we just added:
extension PhotoFilterViewController: UICollectionViewDelegateFlowLayout {
func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize {
let screenRect = collectionView.frame.size.height
let screenHt = screenRect - 14
return CGSize(width: 150, height: screenHt)
}
}
This extension sets up our cell size and spacing. Save the file. Next, let's hook up our two `IBOutlets`:
1. Open the `PhotoFilter.storyboard`.
2. Select the View Controller in the Outline view, and then the Identity inspector in the Utilities panel.
3. Under Custom Class, in the Class drop-down menu, select or type `PhotoFilterViewController` and hit _Enter_.
4. Select the Connections inspector in the Utilities panel.
5. Under Outlets (and, for `onPhotoTapped:`, under Received Actions), click and drag from the empty circle of each of the components, `imgExample`, `collectionView`, and `onPhotoTapped:`, to the `Image View`, the `Collection View`, and the `Camera Icon (inside the Navigation Bar at the top)`, respectively, in the scene. Now, open the `PhotoFilterViewController.swift` file again. Let's add some more code.
Our Collection View is set up, but we need to add some more code before we can get everything else working. Next, we need to add two more extensions: one that handles the camera and photo library, and a second one for the custom protocol we created earlier. We will need to use `AVFoundation` and `MobileCoreServices` in our app. `AVFoundation` is a framework that gives us access to the camera, and `MobileCoreServices` gives us access to the media type constants (such as `kUTTypeImage`) that the image picker uses. At the top of the file, under `import UIKit`, add the following:
import AVFoundation
import MobileCoreServices
6. Now, let's add the first extension so that we can access the camera and photo library:
extension PhotoFilterViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
picker.dismiss(animated: true, completion: nil)
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
let image = info[UIImagePickerController.InfoKey.editedImage] as? UIImage
if let img = image {
self.thumbnail = generate(image: img, ratio: CGFloat(102))
self.image = generate(image: img, ratio: CGFloat(752))
}
picker.dismiss(animated: true, completion: {
self.showApplyFilter()
})
}
func showCameraUserInterface() {
let imagePicker = UIImagePickerController()
imagePicker.delegate = self
#if targetEnvironment(simulator)
imagePicker.sourceType = UIImagePickerController.SourceType.photoLibrary
#else
imagePicker.sourceType = UIImagePickerController.SourceType.camera
imagePicker.showsCameraControls = true
#endif
imagePicker.mediaTypes = [kUTTypeImage as String]
imagePicker.allowsEditing = true
self.present(imagePicker, animated: true, completion: nil)
}
func generate(image:UIImage, ratio:CGFloat) -> UIImage {
let size = image.size
var croppedSize:CGSize?
var offsetX:CGFloat = 0.0
var offsetY:CGFloat = 0.0
if size.width > size.height {
offsetX = (size.height - size.width) / 2
croppedSize = CGSize(width: size.height, height: size.height)
}
else {
offsetY = (size.width - size.height) / 2
croppedSize = CGSize(width: size.width, height: size.width)
}
guard let cropped = croppedSize, let cgImage = image.cgImage else {
return UIImage()
}
let clippedRect = CGRect(x: offsetX * -1, y: offsetY * -1, width: cropped.width, height: cropped.height)
let imgRef = cgImage.cropping(to: clippedRect)
let rect = CGRect(x: 0.0, y: 0.0, width: ratio, height: ratio)
UIGraphicsBeginImageContext(rect.size)
if let ref = imgRef {
UIImage(cgImage: ref).draw(in: rect)
}
let thumbnail = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
guard let thumb = thumbnail else { return UIImage() }
return thumb
}
}
The extension that we created for `UIImagePickerControllerDelegate` and `UINavigationControllerDelegate` has two delegate methods that we need to implement, along with some custom helper methods that we can use. The `imagePickerControllerDidCancel:` method is called when the user hits the Cancel button; therefore, we dismiss the `Controller` and do nothing.
The `imagePickerController:didFinishPickingMediaWithInfo:` method is used when we get the image from the `Picker` once it is dismissed. We set our thumbnail and image values here; then, we apply the `generate()` method to get them in a smaller size. Finally, we dismiss the `Controller` and then call `showApplyFilter()` to add our selected image to our filter view.
`showCameraUserInterface()` is used to show the camera interface, along with the camera controls. As I mentioned earlier, the code first checks whether you are running the simulator and, if so, shows the photo library. If you are running on a device, you see the camera interface.
The `generate(image:ratio:)` method is what we use to take the images and crop them to the size we need and return an image as a smaller size. The photo library and camera images are quite large. Therefore, if we did not use this method, it would take a long time for UI to go through and do everything we need.
We have one more extension to add, and that is for the custom protocols we created earlier. Add the following extension at the bottom of your `PhotoFilterViewController`:
extension PhotoFilterViewController: ImageFiltering, ImageFilteringDelegate {
func filterSelected(item: FilterItem) {
let filteredImg = image
if let img = filteredImg {
if item.filter != "None" {
imgExample.image = self.apply(filter: item.filter, originalImage: img)
} else {
imgExample.image = img
}
}
}
}
`filterSelected(item:)` gets the selected filter item and applies the filter to `imgExample`. We have an `if` statement that checks whether the user selected `None` and, if so, shows the image without any filters. We finally need to add one last extension for selecting a filter item. Add the following extension at the bottom of your `PhotoFilterViewController`:
extension PhotoFilterViewController: UICollectionViewDelegate {
func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) {
let item = self.data[indexPath.row]
filterSelected(item: item)
}
}
Here, we are simply getting the selected filter item and passing it into the newly created `filterSelected(item:)` method. Before we can run the app, we need to get the user's permission to use the camera or access their photo library.
# Getting permission
Apple requires that, if we use the camera or access the camera roll, we must let the user know that we are doing so and why. If you fail to do this, your code regarding the camera will not work, and your app will be rejected when you submit it. Let's take care of this now.
Open the `Info.plist` file and add the following two keys by hovering over any key and hitting the plus icon for the first key. We will then repeat this for the second key:
* `NSPhotoLibraryUsageDescription`
* `NSCameraUsageDescription`
For each key's value, enter anything you want as an alert that the user will see. In the following example, the value is set as `The app uses your camera to take pictures`:
Please make sure that, if you are submitting this to the store, you put in appropriate wording, as both the user and Apple will see it. Let's build and run the project by hitting the Play button (or using _command_ \+ _R_ ).
You should now be able to get a photo from the photo library or use the camera:
Once you have a photo, the window is dismissed, and you can apply a filter and save it:
We are not saving the photo yet. We will do this in the next chapter.
# Summary
In this chapter, we covered a lot of new things. You learned how to use the camera and how to integrate the camera roll when a camera is not available. We used a `UICollectionView` horizontally for the first time so that we could put in a row of images. This chapter had a lot of code, and there may be some parts that were confusing. Review these parts and make sure that you fully understand them. There are numerous things in this chapter that you can reuse in many other apps.
In the next chapter, we will be able to save photos and reviews to restaurants.
# Understanding Core Data
Our app is coming along nicely, and we are close to wrapping it up. In the previous chapter, we created a restaurant review form, the Create Review form, which allows us to take pictures or use photos from our library. We can apply filters to photos and even add more filters quickly by updating our plist file.
In this chapter, we will finish up working on the Create Review form. We will get the form fully working so that we can save the data that's entered into the form to what is known as Core Data. Core Data is a framework that handles persistent data, using what is known as **Object-Relational Mapping** ( **ORM** ). We will go much deeper into what Core Data is and how to use it in this chapter.
In this chapter, we will cover the following topics:
* What is Core Data?
* What are `NSManagedObjectModel`, `NSManagedObjectContext`, and `NSPersistentStoreCoordinator`?
* Creating our first Core Data model
# What is Core Data?
Let's start by taking a quote directly from Apple:
""Core Data is a framework for managing and persisting an object graph.""
Apple does not call Core Data a database, even though, behind the scenes, it saves data to an SQLite file in iOS. Core Data is very hard to explain to someone new to programming or to someone who has come from a different programming language. However, in iOS 10, Core Data has been dramatically simplified. Having a general understanding of what Core Data does and how it works is sufficient for the purposes of this book.
When using the Core Data framework, you should be familiar with the **managed object model**, the **managed object context**, and the **persistent store coordinator**. Let's look at a diagram to get a better understanding of how they interact with each other:
* `NSManagedObjectModel`: The managed object model represents the data model of your Core Data application. It interacts with all of the data models (also known as entities) that you create within your app, and it knows about any relationships that your data may have. It interacts with your data model, as well as with the persistent store coordinator.
Entities are just objects that represent your data. In our app, since we are going to be saving customer reviews of restaurants, we need to create a review entity.
* `NSManagedObjectContext`: The managed object context manages a collection of model objects, which it receives from the persistent store coordinator. The context is responsible for creating, reading, updating, and deleting models, and it is the part you interact with the most.
* `NSPersistentStoreCoordinator`: The persistent store coordinator has a reference to the managed object model, as well as the managed object context. It communicates with the persistent object store. The persistent store coordinator also interacts with an object graph. This graph is where you create your entities and set up relationships within your app.
Core Data is not an easy topic, so you do not need to worry about the finer details. The more you work with Core Data, the easier it becomes to understand it. In this chapter, focus on obtaining a high-level understanding, and the rest will come.
Before iOS 10, you had to create an instance of each of the following: the managed object model, the managed object context, and the persistent store coordinator. Now, in iOS 10, these have been consolidated into what is called `NSPersistentContainer`. We will cover this shortly but, first, we need to create our data model.
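Before we do, here is a minimal sketch of what that consolidated stack looks like; we will wrap this same code inside our own `CoreDataManager` later in this chapter (the model name below is the one we are about to create):

import CoreData
// One container now owns the managed object model, the managed object context, and the persistent store coordinator
let container = NSPersistentContainer(name: "LetsEatModel")
container.loadPersistentStores { _, error in
if let error = error { print(error.localizedDescription) }
}
// The managed object context we interact with day to day
let viewContext = container.viewContext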
# Creating a data model
The data model is where you create your app's model objects and their properties. For our project, we only need to create one model object, called **Review**. Let's create a managed object model now:
1. In the Navigator panel, right-click on the `Misc` folder and create a new group, called `Core Data`.
2. Next, right-click this new `Core Data` folder and click New File.
3. Inside the Choose a template for your new file screen, select iOS at the top and then scroll down to the Core Data section. From there, select Data Model. Then, hit Next:
4. Name the file `LetsEatModel` and click Create.
5. Click Add Entity in the screen that appears:
Then, in the bottom-right corner of the new screen, change the Editor Style to Graph Style:
In the Graph Style, double-click on Entity in the box in the middle of the graph to change your entity's name:
6. Update the text to say Review and then hit _Enter_.
7. Now that we have our first entity created, let's add our first attribute. Select Review Entity and click Add Attribute in the bottom-right corner of the screen. The word attribute appears under Attributes in the box in the middle of the screen:
8. You will see that Xcode shows an error. The reason for this error is that we created an attribute without giving it a type. Let's do that now.
9. Select the word attribute and open your Utilities panel. You will only see three icons: the File inspector, the Quick Help inspector, and the Data Model inspector.
10. Select the last icon, the Data Model inspector and, under Attribute, click on the dropdown for Attribute Type and change it from Undefined to String. The error should now disappear.
11. Next, under Attribute in the Data Model inspector, change the Name from attribute to name and hit _Enter_.
Your first attribute should now look as follows:
We have created our first attribute in the Graph Style and now need to set up the rest of our attributes, which we will do in the Table Style:
1. Switch the Editor Style to Table Style and then click Add Attribute.
2. Update the attribute to date and set its data type to Date. You do not have to do anything in the Data Model inspector for this attribute.
3. Next, select the + button in the Attributes section of the Table Style screen under the two attributes we just added.
4. Update this third attribute to `customerReview` and set its data type to String.
5. Next, add a fourth attribute, named rating, with a data type of Float.
6. Now, add a fifth attribute, named `restaurantID`, with a data type of Integer 32. When we save reviews, we save them with their `restaurantID`. Whenever we go to a restaurant detail page, we get all of the reviews just for that specific restaurant and then display them. If we do not have any reviews, then we display a default message.
7. Now, add a sixth attribute, named `title`, with a data type of String.
8. Lastly, add a seventh attribute, named `uuid`, with a data type of String and, under Attribute in the Data Model inspector, uncheck the Optional checkbox. This attribute is our unique ID for each review.
Your Attributes table should now look like the following:
Now that we have our attributes set, we need to do a few more things before we start working on some code.
# Entity autogeneration
We could have Xcode create a file for our Review Entity; however, if we then wanted to add more attributes, we would have to regenerate that code ourselves. Instead, Core Data offers the ability to autogenerate our code for us. To take advantage of this feature, follow these steps:
1. In the list of entities in the left-hand panel, select our only Entity, Review.
2. After you select the entity, select the Data Model inspector in the Utilities panel. You should notice that your Data Model inspector panel has changed from when you were working on your Attributes:
3. Now, hit _command_ \+ _B_ to build the project. This generates the `Review` class for the entity we just defined in Core Data. You will not see the file anywhere in the Navigator panel, but it will have been created.
Now, we need to create another entity, called `RestaurantPhoto`.
# The RestaurantPhoto Entity
Using the same steps as in the previous section, create a `RestaurantPhoto` entity with the following attributes:
Now, hit _command_ \+ _B_ to build the project; this creates the `RestaurantPhoto` class for the entity we just defined in Core Data.
We cannot just store images in Core Data, as they have to be converted into data first. Therefore, we need to take the image that we used in the review and convert it to binary data for Core Data to save. Then, when we pull the review out of Core Data, we will convert it back into an image so that we can display it.
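A minimal sketch of that round trip looks like the following; our `RestaurantPhotoItem` will do exactly this for us in a moment:

import UIKit
// UIImage -> Data, ready to be stored in a Binary Data attribute
func binaryData(from image: UIImage) -> Data? {
return image.pngData()
}
// Data -> UIImage, when we pull the photo back out of Core Data
func image(from data: Data) -> UIImage? {
return UIImage(data: data, scale: 1.0)
}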
For learning purposes, we are storing images in Core Data. I would stay away from doing this as much as possible, because images can be large and you can quickly fill up the user's storage. If you are using a feed, you can save the URL path to the image instead of the actual image. If the user is not online, then you can display a placeholder in its place.
# Review item
We get this new `Review` class back from Core Data when we need to fetch items from it. Instead of passing the `Review` class around, we will create a generic data object that we can use instead.
When I work with stored data, I typically like to have two model objects: one that's used when storing data and another that's generic. In the past, passing around Core Data objects caused a lot of technical issues. These issues were addressed in iOS 10; however, with an overabundance of caution, I typically get the items from Core Data and then convert those objects into a struct.
Let's create this file now:
1. Right-click on the `Model` folder inside of the `Review Form` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Swift File. Then, hit Next.
3. Name this file `ReviewItem` and click Create.
4. Update your file to the following:
import UIKit
struct ReviewItem {
var rating:Float?
var name:String?
var title:String?
var customerReview:String?
var date:Date?
var restaurantID:Int?
var uuid = UUID().uuidString
var displayDate:String {
let formatter = DateFormatter()
formatter.dateFormat = "MMMM dd, yyyy"
guard let reviewDate = date else { return "" }
return formatter.string(from: reviewDate)
}
}
extension ReviewItem {
init(data:Review) {
if let reviewDate = data.date { self.date = reviewDate }
self.customerReview = data.customerReview
self.name = data.name
self.title = data.title
self.restaurantID = Int(data.restaurantID)
self.rating = data.rating
if let uuid = data.uuid { self.uuid = uuid }
}
}
This file is not doing anything special, other than using the computed `displayDate` variable to format the date for display.
The extension in this file allows us to take the Review from Core Data and map it to a `ReviewItem`. Our custom `init()` method allows us to pass the `Review` object into the `init` parameters.
We need to create another item for the photos that we are saving. This file will have the same basic structure as the `ReviewItem` does. Let's create this file now:
1. Right-click `Controllers` to create a new group called `Photo Reviews`.
2. Right-click the `Photo Reviews` folder and select New File.
3. Inside the Choose a template for your new file screen, select iOS at the top, and then Swift File. Then, hit Next.
4. Name this file `RestaurantPhotoItem` and hit Create.
5. Update your file to the following:
import UIKit
struct RestaurantPhotoItem {
var photo:UIImage?
var date:NSDate?
var restaurantID:Int?
var uuid = UUID().uuidString
var photoData:NSData {
guard let image = photo else {
return NSData()
}
return NSData(data: image.pngData()!)
}
}
extension RestaurantPhotoItem {
init(data:RestaurantPhoto) {
self.restaurantID = Int(data.restaurantID)
if let restaurantPhoto = data.photo { self.photo = UIImage(data:restaurantPhoto, scale:1.0) }
if let uuid = data.uuid { self.uuid = uuid }
if let reviewDate = data.date { self.date = reviewDate }
}
}
The first part of this file is similar to what we did for the review item, except for the `photoData` variable. Since we cannot store an image directly in Core Data, we need to convert it into binary data. The `photoData` variable handles this for us and makes it easier when we save an item to pass `photoData` to Core Data.
Now that we have our `ReviewItem` and `RestaurantPhotoItem`, we need to set up our manager.
# Core Data manager
As we have done throughout this book, we are going to create a `Manager` class. This class will be responsible for getting data in and out of Core Data. Let's get started:
1. Right-click the `Core Data` folder inside the `Misc` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next.
3. In the options screen that appears, add the following:
New file:
* * Class: `CoreDataManager`
* Subclass: `NSObject`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next and then Create.
When the file opens, under your `import UIKit`, add the following:
import CoreData
This import allows us to have access to the Core Data library. Next, inside the class definition, add the following:
let container:NSPersistentContainer
This constant, which is an `NSPersistentContainer`, gives us everything we need within a Core Data stack. As we discussed earlier, `NSPersistentContainer` is composed of three things: a persistent store coordinator, a managed object context, and a managed object model.
You may have noticed an error after adding this variable. The reason for the error is that we have not created an `init()` method.
Let's add this `init()` method after the constant we just added:
override init() {
container = NSPersistentContainer(name: "LetsEatModel")
container.loadPersistentStores { (storeDesc, error) in
guard error == nil else {
print(error?.localizedDescription as Any)
return
}
}
super.init()
}
This code is initializing the container and grabbing the managed object model we created earlier. The model is now able to see all of our entities and attributes therein.
Our `CoreDataManager` needs to do two things for us. We need to be able to add a new `ReviewItem` and fetch it. When we save a restaurant review, we want to be able to save the review with the restaurant. We do not need to save all of the restaurant information, since we can simply use `restaurantID`. When we go to restaurant details, we can check Core Data for any reviews for a particular restaurant by using `restaurantID`. Let's add the following method after our `init()` method to accomplish this task for us:
func fetchReviews(by identifier:Int) -> [ReviewItem] {
let moc = container.viewContext
let request:NSFetchRequest<Review> = Review.fetchRequest()
let predicate = NSPredicate(format: "restaurantID = %i", Int32(identifier))
var items:[ReviewItem] = []
request.sortDescriptors = [
NSSortDescriptor(key: "date", ascending: false)]
request.predicate = predicate
do {
for data in try moc.fetch(request) {
items.append(ReviewItem(data: data))
}
return items
} catch {
fatalError("Failed to fetch reviews: (error)")
}
}
Let's review this code. Our `fetchReviews(by:)` method takes an ID, and we use this to find reviews for a particular restaurant:
let moc = container.viewContext
let request:NSFetchRequest<Review> = Review.fetchRequest()
let predicate = NSPredicate(format: "restaurantID = %i", Int32(identifier))
In the first line, we are creating an instance of the **managed object context** ( **moc** ). This variable allows us to interact with Core Data. In the next line, we are creating a fetch request. This request is passed to the managed object context and tells it what we need. Finally, we are creating a predicate, which allows us to apply some search parameters. Specifically, we are saying that we want every `ReviewItem` that has the ID that we pass it:
request.sortDescriptors = [NSSortDescriptor(key: "date", ascending: false)]
request.predicate = predicate
Here, we are applying a sort descriptor to our request. Instead of getting reviews back in a random order, we sort all of the reviews by date:
do {
for data in try moc.fetch(request) {
items.append(ReviewItem(data: data))
}
return items
} catch {
fatalError("Failed to fetch reviews: (error)")
}
Finally, we are wrapping everything in a `do...catch` block. When the fetch completes, we loop through the results and create our `ReviewItems`, returning an array of `ReviewItems` (or an empty array if there were none). If there was a problem with your setup, then you will get a fatal error.
We have added our method to get reviews; we need to do the same for fetching photos. Add the following after the `fetchReviews(by:)` method:
func fetchPhotos(by identifier:Int) -> [RestaurantPhotoItem] {
let moc = container.viewContext
let request:NSFetchRequest<RestaurantPhoto> = RestaurantPhoto.fetchRequest()
let predicate = NSPredicate(format: "restaurantID = %i", Int32(identifier))
var items:[RestaurantPhotoItem] = []
request.sortDescriptors = [NSSortDescriptor(key: "date", ascending: false)]
request.predicate = predicate
do {
for data in try moc.fetch(request) {
items.append(RestaurantPhotoItem(data: data))
}
return items
} catch {
fatalError("Failed to fetch photos: (error)")
}
}
Everything is the same as what we did to fetch review items, except we are fetching `RestaurantPhoto` items instead. Now, we need to add a method to save our data into Core Data. Let's add the next two methods by adding the following after our `init()` method:
func addReview(_ item:ReviewItem) {
let review = Review(context: container.viewContext)
review.name = item.name
review.title = item.title
review.date = NSDate()
if let rating = item.rating { review.rating = rating }
review.customerReview = item.customerReview
review.uuid = item.uuid
if let id = item.restaurantID {
review.restaurantID = Int32(id)
print("restaurant id (id)")
save()
}
}
func addPhoto(_ item:RestaurantPhotoItem) {
let photo = RestaurantPhoto(context: container.viewContext)
photo.date = NSDate()
photo.photo = item.photoData
photo.uuid = item.uuid
if let id = item.restaurantID {
photo.restaurantID = Int32(id)
print("restaurant id (id)")
save()
}
}
You will get an error because you have not created the `save()` method yet. Ignore it for now, as we will create that next.
The `addReview()` method takes a `ReviewItem` as a parameter, converts it into a `Review`, and then calls the `save()` method; `addPhoto()` does the same with a `RestaurantPhotoItem`.
Now, let's add the `save()` method after the `addReview()` method we just created:
fileprivate func save() {
do {
if container.viewContext.hasChanges {
try container.viewContext.save()
}
}
catch let error {
print(error.localizedDescription)
}
}
Once again, we are wrapping everything in a `do...catch` block. Inside of the do, we check to see whether the managed object context has changed. If it has changed, then we call the `save()` method. We have now completed our Core Data manager.
Next, we need to create another manager class. This manager is responsible for making calls to the Core Data manager, similar to how the manager for the Explore section is responsible for getting data from its plist. This manager gets us photos and reviews. Let's create this manager file now:
1. Right-click the `Misc` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next.
3. In the options screen that appears, add the following:
New file:
* * Class: `ReviewDataManager`
* Subclass: `NSObject`
* Also create XIB: Unchecked
* Language: `Swift`
4. Hit Next and then Create. Update your file to the following:
import Foundation
class ReviewDataManager: NSObject {
private var reviewItems:[ReviewItem] = []
private var photoItems:[RestaurantPhotoItem] = []
let manager = CoreDataManager()
func fetchReview(by restaurantID:Int) {
if reviewItems.count > 0 { reviewItems.removeAll() }
for data in manager.fetchReviews(by: restaurantID) {
reviewItems.append(data)
}
}
func fetchPhoto(by restaurantID:Int) {
if photoItems.count > 0 { photoItems.removeAll() }
for data in manager.fetchPhotos(by: restaurantID) {
photoItems.append(data)
}
}
func numberOfReviewItems() -> Int {
return reviewItems.count
}
func numberOfPhotoItems() -> Int {
return photoItems.count
}
func reviewItem(at index:IndexPath) -> ReviewItem {
return reviewItems[index.item]
}
func photoItem(at index:IndexPath) -> RestaurantPhotoItem {
return photoItems[index.item]
}
}
This manager class is similar to the other managers that we have created so far. In this manager, our fetch method takes an ID in the parameter. This ID represents the `restaurantID` that we use to search for `ReviewItems` in Core Data. If we find any `ReviewItems`, we add them to our array.
# Summary
In this chapter, you learned about what Core Data is and how to use it. We also looked at `NSManagedObjectModel`, `NSManagedObjectContext`, and `NSPersistentStoreCoordinator`, and how they work together inside Core Data. Even if they do not fully make sense to you yet (they did not make sense to me the first time), it is all right, because it will eventually click. Finally, we created two Core Data models: one for reviews and one for photos.
In the next chapter, we will work on actually saving the data we create, as well as getting it back out. We will take our reviews and photos, and display them inside our restaurant details.
# Saving Reviews
We are just about done with our app. In this chapter, we will finally start saving reviews and photos in Core Data. We will then learn how to pull data from Core Data and display it in our app. A lot of the setup is already done for us, and most of what we will do is call methods that we created earlier in this book.
In this chapter, we will cover the following topics:
* Saving items to Core Data
* Fetching items from Core Data
* Displaying items from Core Data in a Table View
# Saving reviews
First, we will start saving reviews in Core Data. Open up `ReviewFormViewController.swift` and, above the `@IBOutlet` properties, add the following variable:
var selectedRestaurantID:Int?
Next, delete all of the print statements inside your `onSaveTapped(_:)` method and then add the following:
@IBAction func onSaveTapped(_ sender: Any) {
var item = ReviewItem()
item.name = tfName.text
item.title = tfTitle.text
item.customerReview = tvReview.text
item.restaurantID = selectedRestaurantID
item.rating = Float(ratingView.rating)
let manager = CoreDataManager()
manager.addReview(item)
dismiss(animated: true, completion: nil)
}
This code is all we need to save an item in Core Data using `CoreDataManager`. To display reviews for a particular restaurant, we need to save every review with a restaurant identifier. Then, when we go to a certain restaurant, we will use the restaurant identifier to search Core Data to see if there are any saved reviews. We pass this identifier using a segue:
1. Open `RestaurantDetail.storyboard` and select the segue we will use to go to the `ReviewForm`.
2. In the Attributes inspector of the Utilities panel, update Identifier under Storyboard Segue to say `showReview`. Then, hit _enter_.
3. Next, we need to make sure that, when a user creates a review, we pass `restaurantID` to the Review Form View Controller. We need to update our `RestaurantItem` so that it has an ID. Open `RestaurantItem` and, after `var imageURL:String?`, add the following:
var restaurantID:Int?
4. Next, inside the `CodingKeys:String` enum, add the new case:
case restaurantID = "id"
5. Open `RestaurantDetailViewController.swift` and add this method after the `viewDidLoad()` method (ignore the errors for now):
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
if let identifier = segue.identifier {
switch identifier {
case Segue.showReview.rawValue:
showReview(segue: segue)
default:
print("Segue not added")
}
}
}
The `prepare()` method inside `RestaurantDetailViewController` checks for the `showReview` segue identifier. If it matches, it calls the `showReview()` method, which passes the selected restaurant's ID on to the Review Form.
6. Next, add the following method above the `createRating()` method, inside the private extension:
func showReview(segue:UIStoryboardSegue) {
guard let navController = segue.destination as? UINavigationController,
let viewController = navController.topViewController as? ReviewFormViewController else {
return
}
viewController.selectedRestaurantID = selectedRestaurant?.restaurantID
}
7. While we are cleaning up, move the `initialize()` method into the `private` extension.
8. Next, open `ReviewFormViewController`; let's create a `private` extension and move `onSaveTapped(_:)` into it. Then, delete everything inside the method and update the method with the following:
private extension ReviewFormViewController {
@IBAction func onSaveTapped(_ sender: Any) {
var item = ReviewItem()
item.name = tfName.text
item.title = tfTitle.text
item.customerReview = tvReview.text
item.restaurantID = selectedRestaurantID
item.rating = Float(ratingView.rating)
let manager = CoreDataManager()
manager.addReview(item)
dismiss(animated: true, completion: nil)
}
}
Let's make sure that we are passing `restaurantID` by adding a `print` statement inside `ReviewFormViewController`.
9. Inside the `viewDidLoad()` method, add the following `print` statement:
print(selectedRestaurantID as Any)
Let's build and run the project by hitting the Play button (or by using _command_ \+ _R_ ). You should now be able to see `restaurantID` in the console. You can create a review and, after you save it, you will be brought back to the restaurant detail view. However, we still can't display our reviews in the restaurant details. We will work on this later in the chapter. Before we do that, let's look at how we can save photos in Core Data.
# Saving photos
Saving reviews was pretty simple, and saving photos is virtually no different. Our code will be pretty similar to what we had for reviews. Open `PhotoFilterViewController` and update it with the following:
func checkSavedPhoto() {
if let img = self.imgExample.image {
var item = RestaurantPhotoItem()
item.photo = generate(image: img, ratio: CGFloat(102))
item.date = NSDate()
item.restaurantID = selectedRestaurantID
let manager = CoreDataManager()
manager.addPhoto(item)
dismiss(animated: true, completion: nil)
}
}
This method will make sure that we have an image and that we can save it to Core Data with its restaurant ID. We need to add a method for when Save is tapped. Add the following method inside the private extension:
@IBAction func onSaveTapped(_ sender: AnyObject) {
DispatchQueue.main.async {
self.checkSavedPhoto()
}
}
Now, when a user taps the Save button, this will make sure that an image is saved in Core Data.
Note: Here, I am using a `DispatchQueue`. This is for UI purposes and might not be needed, but it helps with performance when something is using a lot of resources and locking up the phone.
Before we can save, we need to pass the restaurant identifier to `PhotoFilterViewController.swift`:
1. Open `RestaurantDetail.storyboard` and select the segue we will use to go to the Photo Filter View.
2. In the Attributes inspector of the Utilities panel, update Identifier under Storyboard Segue to say `showPhotoFilter`. Then, hit _enter_.
3. Inside `RestaurantDetailViewController.swift`, update your `prepare` method with the following:
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
if let identifier = segue.identifier {
switch identifier {
case Segue.showReview.rawValue:
showReview(segue: segue)
case Segue.showPhotoFilter.rawValue:
showPhotoFilter(segue: segue)
default:
print("Segue not added")
}
}
}
4. Next, add the following method after the `showReview()` method inside your `private` extension:
func showPhotoFilter(segue:UIStoryboardSegue) {
guard let navController = segue.destination as? UINavigationController,
let viewController = navController.topViewController as? PhotoFilterViewController else {
return
}
viewController.selectedRestaurantID = selectedRestaurant?.restaurantID
}
We are passing the restaurant identifier along with our photos, so our photos are now saved in Core Data with their restaurant ID. After you save a photo, you are brought back to the restaurant detail view. Next, we need to display the photos in our Detail section.
We are missing one last thing. The photo review and review sections need to pull data from the database for them to be displayed. We need to create a class for each one, so let's start by adding the first class:
1. Create a new folder called `Reviews`.
2. Right-click the folder and select New File.
3. Inside the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next.
4. In the options screen that appears, add the following:
New file:
* * Class: `ReviewsViewController`
* Subclass: `UIViewController`
* Also create XIB: Unchecked
* Language: `Swift`
5. Hit Next and then Create. When the file opens, replace everything with the following code:
import UIKit
class ReviewsViewController: UIViewController {
@IBOutlet weak var collectionView: UICollectionView!
var selectedRestaurantID:Int?
let manager = CoreDataManager()
var data: [ReviewItem] = []
override func viewDidLoad() {
super.viewDidLoad()
initialize()
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
setupDefaults()
}
}
6. Next, let's add our `private` extension by adding the following:
private extension ReviewsViewController {
func initialize() {
setupCollectionView()
}
func setupDefaults() {
checkReviews()
}
func setupCollectionView() {
let flow = UICollectionViewFlowLayout()
flow.sectionInset = UIEdgeInsets(top: 7, left: 7, bottom: 7, right: 7)
flow.minimumInteritemSpacing = 0
flow.minimumLineSpacing = 7
flow.scrollDirection = .horizontal
collectionView?.collectionViewLayout = flow
}
func checkReviews() {
let viewController = self.parent as? RestaurantDetailViewController
if let id = viewController?.selectedRestaurant?.restaurantID {
if data.count > 0 { data.removeAll() }
data = manager.fetchReviews(by: id)
if data.count > 0 {
collectionView.backgroundView = nil
}
else {
let view = NoDataView(frame: CGRect(x: 0, y: 0, width: collectionView.frame.width, height: collectionView.frame.height))
view.set(title: "Reviews")
view.set(desc: "There are currently no reviews")
collectionView.backgroundView = view
}
collectionView.reloadData()
}
}
}
This is the basic setup that we did before. Our `checkReviews()` method is a bit different, because we first check to see whether there are any reviews at all. If there are none, we set a background view with a message that says There are currently no reviews; if there are reviews, we clear the background view and simply display them.
7. Next, let's add our Collection View data source extension by adding the following:
extension ReviewsViewController: UICollectionViewDataSource {
func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
return data.count
}
func numberOfSections(in collectionView: UICollectionView) -> Int {
return 1
}
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
// The identifier here must match the one you set on the cell in the storyboard (assumed to be reviewCell)
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "reviewCell", for: indexPath) as! ReviewCell
// Dequeue the cell first; the reuse identifier must match the one set on the cell in the storyboard (assumed here to be "reviewCell").
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "reviewCell", for: indexPath) as! ReviewCell
let item = data[indexPath.item]
cell.lblName.text = item.name
cell.lblTitle.text = item.title
cell.lblReview.text = item.customerReview
cell.lblDate.text = item.displayDate
if let rating = item.rating { cell.ratingView.rating = CGFloat(rating) }
return cell
}
}
8. Next, let's add our Collection View flow layout extension by adding the following:
extension ReviewsViewController: UICollectionViewDelegateFlowLayout {
func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath:IndexPath) -> CGSize {
if data.count == 1 {
let width = collectionView.frame.size.width - 14
return CGSize(width: width, height: 200)
}
else {
let width = collectionView.frame.size.width - 21
return CGSize(width: width, height: 200)
}
}
}
Next, for our Collection View to work, we need to create our cell class and connect it in the storyboard:
1. Right-click the `Reviews` folder and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next.
3. In the options screen that appears, add the following:
New file:
* Class: `ReviewCell`
* Subclass: `UICollectionViewCell`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next and then Create.
5. In this new file, add the following outlets inside the class declaration:
@IBOutlet weak var lblTitle: UILabel!
@IBOutlet weak var lblDate: UILabel!
@IBOutlet weak var lblName: UILabel!
@IBOutlet weak var lblReview: UILabel!
@IBOutlet weak var ratingView: RatingView!
6. Save the file and open up `RestaurantDetail.storyboard`.
7. Locate the `Container` that was created for `Reviews`:
8. Select the View Controller and, in the Identity inspector, under Custom Class, set Class to `ReviewsViewController`. Then, hit _Enter_.
9. Select the cell inside the Collection View and, in the Identity inspector, update the class to `ReviewCell`.
10. Select the Collection View, and in the Connections inspector, click and drag from `dataSource` and `delegate` to the View Controller.
Build and run your project and add a couple of reviews; you should now see reviews appearing in your restaurant details:
We have two more things to update before the end of this chapter. Now that we are saving reviews, we have an overall rating for restaurants. Let's add this next.
# Adding an overall rating
To add an overall rating, we need to pull all of the reviews from Core Data, add them all together, and get an average. Let's add a new method to our Core Data manager to handle this. Add the following inside `CoreDataManager.swift`:
func fetchRestaurantRating(by identifier:Int) -> Float {
let reviews = fetchReviews(by: identifier)
let sum = reviews.reduce(0, {$0 + ($1.rating ?? 0)})
return sum / Float(reviews.count)
}
In this method, we fetch all of the reviews for a restaurant by their ID. Then, we use the `reduce` method to add them all together, and finally, we calculate the average. Now, let's use this newly created method. Open up `RestaurantDetailViewController.swift`. Under the `selectedRestaurant` variable, add the following:
let manager = CoreDataManager()
Next, under the `createRating()` method, we just set our rating to `3.5` stars. Update this method to the following:
func createRating() {
if let id = selectedRestaurant?.restaurantID {
let value = manager.fetchRestaurantRating(by: id)
ratingView.rating = CGFloat(value)
if value.isNaN { lblOverallRating.text = "0" }
else { lblOverallRating.text = "\(value)" }
}
}
Now, our method is checking to make sure that we have a restaurant ID. If we do, then we set the rating for `ratingView`. We also update the overall label to display the average. Build and run your project, and you should now see a rating for restaurants that have one:
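To see why the `isNaN` check matters, here is a quick playground-style check of the average calculation; the ratings used here are made-up values, not anything from the app:
// Hypothetical ratings for a restaurant with three reviews.
let ratings: [Float] = [3, 4, 5]
let sum = ratings.reduce(0, +)              // 12
let average = sum / Float(ratings.count)    // 4.0
// With zero reviews, the division becomes 0 / 0, which produces NaN,
// so createRating() falls back to displaying "0".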
We are finished with this chapter, but there is one thing left that we did not do, and that's adding photo reviews. Your challenge is to add photo reviews and get them displayed in the Collection View. We covered everything you'll need in this chapter, and all of the code is the same. If you get stuck, feel free to use the project files that are in the next chapter.
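If you want a starting point for that challenge, the following is a rough sketch only; it assumes you have a photo Collection View controller set up like `ReviewsViewController`, and that your Core Data manager exposes a `fetchPhotos(by:)` method returning your photo item type (adjust the names to whatever you created earlier in the chapter):
func checkPhotos() {
    let viewController = self.parent as? RestaurantDetailViewController
    guard let id = viewController?.selectedRestaurant?.restaurantID else { return }
    // Clear out any previously loaded photos before fetching again.
    if data.count > 0 { data.removeAll() }
    data = manager.fetchPhotos(by: id)   // assumed helper, mirroring fetchReviews(by:)
    if data.count > 0 {
        collectionView.backgroundView = nil
    }
    else {
        let view = NoDataView(frame: CGRect(x: 0, y: 0, width: collectionView.frame.width, height: collectionView.frame.height))
        view.set(title: "Photos")
        view.set(desc: "There are currently no photos")
        collectionView.backgroundView = view
    }
    collectionView.reloadData()
}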
# Summary
We covered a lot in this chapter, and we've now finished building our main app's primary functionality. Our app is starting to take shape. We were able to create a Core Data model and can now save reviews to Core Data. We can also display all of the reviews for a restaurant or pull out the last review and display it.
In the next chapter, we will work on putting the final touches to our app to make it more universal. Once we have done that, our main app will be finished, and we'll be able to focus on adding some cool features, such as an iMessage app, notifications, and 3D Touch.
# Universal
We have spent most of this book focusing on the logic of our app and getting it to work on iPhones. We have not paid much attention to the app working on iPads or other devices. In this chapter, we will look at the app on an iPad, as well as updating it on all iPhone devices. You will be surprised at how much is already working, and that only minor changes will need to be made to get our app to look how we want. We will also take the time to clean up some of our design elements to match the design more closely.
In this chapter, we will cover the following topics:
* Updating our app to be supported on all devices
* Learning about multitasking and how to code for it
* Cleaning up design elements and using global settings
# Explore
Let's make some design tweaks before we jump into making our layout work for every device and start to get this app more polished.
Let's compare what we can see on an iPhone 8 with the original design:
There are a few things we need to fix:
* Implement rounded corners on the images
* Remove the gray background behind the cells
* Hide the navigation bar, which is currently being displayed
* Fix the tab bar colors
* Fix the spacing between items
We will fix all of these, but we will focus on the first four right now. We already have rounded corners in our photo filter list, so we can implement the same thing here. Open the `ExploreCell.swift` file by hitting _command_ \+ _shift_ \+ _O_, type `ExploreCell`, and hit _Enter_. Add the following extension:
private extension ExploreCell {
func roundedCorners() {
imgExplore.layer.cornerRadius = 9
imgExplore.layer.masksToBounds = true
}
}
Add a `roundedCorners()` call inside the `layoutSubviews` method:
override func layoutSubviews() {
super.layoutSubviews()
roundedCorners()
}
Now that we have fixed the first issue, let's fix the second by removing the background color. Open `Explore.storyboard` and select `exploreCell` in the Outline view. In the Utilities panel, in the Attributes inspector, update the Background from LetsEat Dark Grey to White Color. The third issue, the navigation bar being displayed, is pretty easy to fix as well. Open the `ExploreViewController.swift` file by hitting _command_ \+ _shift_ \+ _O_, type `ExploreViewController`, and hit _Enter_. After `viewDidLoad()`, add the following method:
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
navigationController?.setNavigationBarHidden(true, animated: false)
}
That is all we need to do; now, every time we go to this view, the Navigation bar at the top will be hidden. Finally, let's update our app so that our tab bar buttons are the correct color. We need to add a new color to our Color Set called LetsEat Red, with the Hex value set to `D0021B`. Open up the `AppDelegate.swift` file and, at the bottom of the file after the last curly brace, add the following:
private extension AppDelegate {
func initialize() {
setupDefaultColors()
}
func setupDefaultColors() {
guard let red = UIColor(named: "LetsEat Red") else { return }
UITabBar.appearance().tintColor = red
UITabBar.appearance().barTintColor = .white
UITabBarItem.appearance()
.setTitleTextAttributes(
[NSAttributedString.Key.foregroundColor: UIColor.black],
for: UIControl.State.normal)
UITabBarItem.appearance()
.setTitleTextAttributes(
[NSAttributedString.Key.foregroundColor: red],
for: UIControl.State.selected)
UINavigationBar.appearance().tintColor = red
UINavigationBar.appearance().barTintColor = .white
UITabBar.appearance().isTranslucent = false
UINavigationBar.appearance().isTranslucent = false
}
}
Inside `application:didFinishLaunchingWithOptions:`, add the `initialize()` method call. Build and run the project by hitting the Play button (or by using _command_ \+ _R_ ):
You should now see that we have completed the first four items. Let's address the spacing issue next. Before we do, first, let's switch our device to any iPad. Then, build and run the project by hitting the Play button (or by using _command_ \+ _R_ ). You will see that it is not too bad currently, but the spacing is different on each device. So far, we have only set up values that work for one device. However, we need this to work on all devices.
Let's start with `Explore.storyboard`. First, we need to update the Auto Layout for our explore cells. Right now, we have a width set up for our image that needs to be more dynamic:
1. Open up `Explore.storyboard`.
2. Select the image inside the `exploreCell`.
3. Then, in the Utilities panel, select the Attributes inspector and change the Content Mode under the View section to Aspect Fill. Updating this will keep images from looking stretched, while still filling the entire area:
These are the only updates we need to make to our explore cell. Next, we are going to create a file that will let us know which device is being used. We can then use this to set up different looks, depending on the device. Let's create this file:
4. Right-click the `Misc` folder and select New File.
5. Inside the Choose a template for your new file screen, select iOS at the top, and then Swift File. Then, hit Next.
6. Name this file `Device` and then hit Create.
First, we need to update our `import` statement from `import Foundation` to `import UIKit`.
Next, add the following after the `import` statement:
struct Device {
static var currentDevice: UIDevice {
struct Singleton {
static let device = UIDevice.current
}
return Singleton.device
}
static var isPhone: Bool {
return currentDevice.userInterfaceIdiom == .phone
}
static var isPad: Bool {
return currentDevice.userInterfaceIdiom == .pad
}
}
Our new struct will now tell us whether we are on an iPad or an iPhone. Having a file like this is good because it allows you to avoid having to rewrite the same code. To implement this code, all we need to do is add a snippet of code like the following:
if Device.isPhone { }
This statement will make our code more readable; if we need to add any more checks for particular devices, we can do it all in the same file. One more great benefit of putting code like this into its own file is that, when you build your next app, you can drop this file into that project and carry on.
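As a small illustration of the kind of check this enables (the values below are arbitrary examples, not part of the app):
// Choose layout values based on the device type.
let columns = Device.isPad ? 3 : 2
let rowHeight: CGFloat = Device.isPad ? 220 : 195
print("Using \(columns) columns with a row height of \(rowHeight)")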
Next, let's open the `ExploreViewController.swift` file and make some more updates to our code. We need to create a variable that we will use for the spacing we want between items. Add the following before your `viewDidLoad()` method:
fileprivate let minItemSpacing: CGFloat = 7
Now, we need to create a function to set up some default Collection View values. We also need to create an `initialize()` method to call our setup function. Add the following method call inside of the `initialize()` method:
setupCollectionView()
Next, add the following inside of the `private` extension after the `initialize()` method:
func setupCollectionView() {
let flow = UICollectionViewFlowLayout()
flow.sectionInset = UIEdgeInsets(top: 7, left: 7, bottom: 7, right: 7)
flow.minimumInteritemSpacing = 0
flow.minimumLineSpacing = 7
collectionView?.collectionViewLayout = flow
}
This method will make sure that we have seven pixels of spacing all the way around. Finally, we need to create an extension that will let us handle all of the spacing programmatically. After the last curly brace, add the following extension:
extension ExploreViewController: UICollectionViewDelegateFlowLayout {
func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize {
if Device.isPad {
let factor = traitCollection.horizontalSizeClass == .compact ? 2:3
let screenRect = collectionView.frame.size.width
let screenWidth = screenRect - (CGFloat(minItemSpacing) * CGFloat(factor + 1))
let cellWidth = screenWidth / CGFloat(factor)
return CGSize(width: cellWidth, height: 195)
}
else {
let screenRect = collectionView.frame.size.width
let screenWidth = screenRect - 21
let cellWidth = screenWidth / 2.0
return CGSize(width: cellWidth, height: 195)
}
}
func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, referenceSizeForHeaderInSection section: Int) -> CGSize {
return CGSize(width: self.collectionView.frame.width, height: 100)
}
}
Adding `UICollectionViewDelegateFlowLayout` allows us to update our cell item size in code. Let's discuss each part of the extension we just added. The `-collectionView:layout:sizeForItemAtIndexPath:` method is used to set the size of the cell. Inside this method, we are using the struct we created. We are checking to see whether we are using an iPad or an iPhone.
In the if part of the `if...else` statement, we are checking whether the screen is compact or not. If the screen is compact, then we want a two-column grid; otherwise, we want a three-column grid. We are also distributing our items evenly across the width of the screen.
In the else part of the `if...else` statement, we are just setting up a two-column grid on all phones. We get the screen width and then subtract `21`, and then we divide the result by `2` to distribute the cells evenly.
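To make the arithmetic concrete, here is roughly what those numbers work out to for a 375-point-wide Collection View (the width here is just an example; it varies by device):
let collectionViewWidth: CGFloat = 375          // example width; varies by device
let screenWidth = collectionViewWidth - 21      // 354
let cellWidth = screenWidth / 2.0               // 177 points per cell
// which leaves 7 points of spacing on the left, on the right, and between the two columns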
Now, build and rerun your project by hitting the Play button (or by using _command_ \+ _R_ ) and rotate the device. You will see that our layout spacing now updates:
Explore is now complete; let's move on to our locations list.
# Location listing
Let's compare our current location listing with the original design:
We have one thing that needs fixing: the large title. Updating to large titles is simple. Open up the `LocationViewController` and, inside of the `initialize()` method, add the following code after `manager.fetch()`:
title = "Select a Location"
navigationController?.navigationBar.prefersLargeTitles = true
In this code, we are setting a new iOS 11 feature, `prefersLargeTitles`, to `true`. If you build and run, you will see that we are good here now. Next, we will direct our attention to the restaurant listing page and go into more detail on the iPad and multitasking.
# Restaurant listing
For our restaurant listing page, we want a one-column grid on all phones and a two-column grid on all iPads. If you build and run the project by hitting the Play button (or by using _command_ \+ _R_ ) and go to a restaurant listing page, you will see that we need to fix the spacing on the iPad to show two columns correctly:
Let's see how we can fix this. Remember that we still want one column on the iPhone and a grid on the iPad. Open the `RestaurantListViewController.swift` file and add the following above the `createData()` method inside of the private extension:
func initialize() {
createData()
setupTitle()
if Device.isPad { setupCollectionView() }
}
You will get an error for the `setupCollectionView()` method. Ignore it for now, as we will fix this shortly. This method checks whether the device is an iPad; if it is, it calls the `setupCollectionView()` method. Next, add the following under the `initialize()` method we just added:
func setupCollectionView() {
let flow = UICollectionViewFlowLayout()
flow.sectionInset = UIEdgeInsets(top: 7, left: 7, bottom: 7, right: 7)
flow.minimumInteritemSpacing = 0
flow.minimumLineSpacing = 7
collectionView?.collectionViewLayout = flow
}
The preceding method is the same as we previously added in the storyboard regarding spacing between items, but here, we are implementing it programmatically.
We have a couple more things that we need to address. First, we are going to have the size of the screen calculated for us programmatically. Just as we did in `ExploreViewController`, we are going to add a new extension to handle our Collection View layout. Add the following before your `viewDidLoad()` method:
fileprivate let minItemSpacing: CGFloat = 7
Now, add the following at the bottom of the file, after the last curly brace:
extension RestaurantListViewController: UICollectionViewDelegateFlowLayout {
func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize {
if Device.isPad {
let factor = traitCollection.horizontalSizeClass == .compact ? 2:3
let screenRect = collectionView.frame.size.width
let screenWidth = screenRect - (CGFloat(minItemSpacing) * CGFloat(factor + 1))
let cellWidth = screenWidth / CGFloat(factor)
return CGSize(width: cellWidth, height: 325)
}
else {
let screenRect = collectionView.frame.size.width
let cellWidth = screenRect - 14
return CGSize(width: cellWidth, height: 325)
}
}
}
This code states that, if the device is an iPhone, a one-column grid will be shown; if it is an iPad, a two- or three-column grid will be shown, depending on the horizontal size class. Now, we need to update our `viewDidAppear()` method. Currently, we are calling both `createData()` and `setupTitle()`. We need to remove both of these calls and call `initialize()` instead. When you are finished, `viewDidAppear()` should look like the following:
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
initialize()
}
Let's build and run the project for the iPad by hitting the Play button (or by using _command_ \+ _R_ ):
The two-column grid is what we want for the iPad for our restaurant listing page, but we need to verify that we did not change the one-column grid on the iPhone. Switch the device back to any iPhone simulator and, after building and rerunning the project, you should still see a one-column grid on the iPhone.
Finally, let's check how the layout behaves when the window size changes on the iPad. Switch back to the iPad and build and rerun the project by hitting the Play button (or by using _command_ \+ _R_ ). You will now see that, every time you resize the restaurant listing page, the grid updates as well, to fit the new size. Let's move on to the restaurant detail page.
# Updating the restaurant detail page
If you click on a restaurant and go to a restaurant detail page, you should see something similar to the following screenshot:
We do not have much to fix on this screen. If you scroll down to the bottom, you will see that the image we created is not sized correctly. We need to update this so that, depending on the device, we render the appropriate image size. We also need to update the Auto Layout. You can try other device sizes; you should see the same display on all screens:
1. Open `RestaurantDetail.storyboard`
2. Select the image map using the Outline view and, in the Attributes inspector, update Content Mode to Aspect Fill
3. Now, with the image still selected, select the Pin icon and enter the following values:
* All values under Add New Constraints are set to `0`
* Uncheck the Constrain to margins checkbox
* Click on Add 4 Constraints
If you build and run now, you will see that our map fills the area, but our image is stretched. We can leave this but, if this were being submitted, making our image a certain size based on a device would be a much better way of handling this. We are done with cleaning up and making our app ready for the iPad. You should now be able to see how powerful Collection Views are and how they make it easy for you to have a custom look with very little code.
# Summary
You now have an app that functions correctly on all devices. You can see how using the Collection View gives your app some variety on different devices with very little code. As you get more and more comfortable with this, you will find other ways to make your app look unique on various devices.
We could submit the app as is right now and it would be perfectly fine, but why not take advantage of some additional features that you can implement?
In the next chapter, we will do just that by creating an iMessage app for our app.
# iMessages
Text messaging started with just simple text and the creation of emoticon faces using special characters. As smartphones became more commonplace, so did text messaging. Messages are now a significant form of communication for the vast majority of people. People find it easier to respond to a text message than to answer a phone call.
When Apple announced _iMessage_ apps and stickers, it took messaging to another level. We had stickers before this announcement, but now we had a fully-integrated system. iMessage does not only allow you to send a sticker to express a feeling or an emotion more effectively than words, you can now use messages to send the basketball score or even play games through text messages.
In this chapter, we are going to create an _iMessages_ app. This app will allow the user to look for restaurants and send reservations to others. We will build our UI to look similar to the phone app. To create the _iMessages_ app, we need to add a message extension to our app.
We will cover the following topics in this chapter:
* Building a custom message app UI
* Creating a framework
* Sharing code between multiple targets
* Learning how to send a reservation to others
# Understanding iMessages
Starting with the UI is always my preferred way to begin building an app, because you can get a feel for what you need to code. We are going to implement a single screen that will be a list of restaurants (accessible by hitting the sticker icon next to where a user writes their message).
The user can choose a restaurant for which they have a reservation and send it via messages to another person. Once that other person receives the message, that person will be able to tap on the reservation and see all of the details.
In a message View Controller, there are two types of presentation styles—compact and expanded:
Apple recommends that you have two different View Controllers for each style. However, since our screen is simple, we will use just one. Keep in mind, however, that if you want to make a more complicated layout, you should use two controllers.
# Creating our extension
Let's get started by working on the UI:
1. In the Navigator panel, select the Project navigator and then your project:
2. In the Standard Editor, locate the TARGETS area and then the + (plus button) at the bottom of the TARGETS area (if your TARGETS area is not displaying, hit the icon highlighted in blue to the left of General in the following screenshot):
3. Click the + (plus button) and then select iMessage Extension:
4. Click Next and you should see the following screen:
5. Set the Product Name to `MessageApp` and click Finish.
# Updating our assets
Next, we need to add the assets that are necessary for our _iMessages_ app:
1. In the `MessageApp` folder in the Navigator panel, select the `Assets.xcassets` folder.
2. Hit the _Delete_ button and then select Move to Trash in the screen that appears.
3. Open the project's `assets` folder downloaded from Packt's website (<https://www.packtpub.com/>).
4. Open `chapter_22` and drag the `Assets.xcassets` folder into your `MessageApp` folder, inside the Navigator panel. Don't do this in Xcode; you will need to open this up in Finder, just like we did at the beginning of the book.
5. In the options screen that appears, ensure that Copy items if needed and Create groups are both selected, and then select Finish.
6. Grab `MainInterface.storyboard` and replace it with the one in your assets folder from Packt's website.
If you open the `Assets.xcassets` folder, you will see that you now have an icon and two other image assets that you will need for your _iMessages_ app. If you open up `MainInterface.storyboard`, you will see the following:
Your storyboard is set up! Now let's look at how to get data into your iMessage app and display it.
# Creating a framework
Since all of our code for data was created in our iOS app, it does not make sense to rewrite it for our _iMessages_ app. We can create what is known as a framework to share our data between our iOS and iMessage apps.
Using frameworks along with app extensions allows us to put shared code in one place. That means less code and more efficiency, because you will not need to update code in multiple places when you have to make a change. Let's get started with creating our framework:
1. In the Navigator panel, select the Project navigator and then your project.
2. Find the TARGETS area and click on the + button at the bottom of that area.
3. Under the iOS tab, scroll to the bottom to Framework & Library, select Cocoa Touch Framework, and then hit Next:
4. Under Product Name, type `LetsEatDataKit` and then hit Finish.
You should now see the following folder and files in the `Products` folder in your Navigator panel:
5. Select the `LetsEatDataKit` target and ensure that, under Deployment Info, your Deployment Target is set to `12.0` and above. Also, make sure that App Extensions (Allow app extension API only) is checked:
6. Right-click the `LetsEatDataKit` folder in the Navigator panel and create a new group named `Restaurant`.
7. From your _Let's Eat_ app, drag the `RestaurantDataManager.swift` file from the `Restaurant` folder inside the `Model` folder into the newly-created `LetsEatDataKit` folder's `Restaurant` folder.
8. Drag the `RestaurantItem.swift` file from the `Map` folder inside the `Model` folder into the `LetsEatDataKit` folder's `Restaurant` folder.
9. Drag the `RestaurantAPIManager.swift` file from the `Misc` folder into the `LetsEatDataKit` folder's `Restaurant` folder.
10. Drag the entire `JSON` folder from inside the `Misc` folder into the `LetsEatDataKit` folder's `Restaurant` folder.
When you have completed these steps, you should have the following files in your `LetsEatDataKit` folder:
11. Open the `JSON` folder you just moved and, in the `Restaurant` folder, select the `Aspen.json` file.
12. In the Utilities panel, select the File inspector and locate the Target Membership section:
13. To set the target of this file not only to your app but also to your `MessageApp` and `LetsEatDataKit`, check `MessageApp` and `LetsEatDataKit` under Target Membership. Therefore, your _Let's Eat_ app, `MessageApp,` and `LetsEatDataKit` should all be checked:
14. Select each JSON file inside of the `JSON` folder and update all of the files so that they are all targeted to `LetsEat`, `MessageApp`, and `LetsEatDataKit`. Doing this means that all three targets will be able to access these JSON files.
15. Select each of the remaining three files inside the `LetsEatDataKit` folder's `Restaurant` folder and update them so that each one is targeted to `LetsEatDataKit` only. Doing this means that only your framework will be able to see these files.
16. Change your target from `MessageApp` to `LetsEatDataKit`:
Hit _command_ \+ _B_ to build the app without running it, and your build should be successful as long as you updated the target of all of your files.
Now, switch back to the _Let's Eat_ app and hit _command_ \+ _B_. You may notice some errors. Basically, we moved `RestaurantItem`, `RestaurantDataManager`, and `RestaurantAPIManager` out of the main project and into the framework, so our app no longer knows where these files are. Let's fix that by doing the following:
1. Inside the `MapViewController.swift` file, add the following `import` at the top of the file:
import LetsEatDataKit
2. We need to update our `RestaurantItem`. We need to make this file public so that other targets can see it. Inside of the `RestaurantItem.swift` file, update your class declaration to add `public` before the class so that it looks like the following:
public class RestaurantItem
3. Next, update the class declaration, each of the following variables, and the `CodingKeys` enum with `public` access:
public class RestaurantItem: NSObject, MKAnnotation, Decodable
public var name: String?
public var cuisines:[String] = []
public var latitude: Double?
public var longitude:Double?
public var address:String?
public var postalCode:String?
public var state:String?
public var imageURL:String?
public var restaurantID:Int?
public var title: String?
public var subtitle: String?
public var coordinate: CLLocationCoordinate2D
public enum CodingKeys: String, CodingKey
4. Save the file and your `RestaurantItem` errors will disappear.
We still have more minor updates to make. We need to make both `RestaurantAPIManager` and `RestaurantDataManager` public as well. Let's start with `RestaurantAPIManager` and update the following `struct` and method with `public` access:
public struct RestaurantAPIManager {
public static func loadJSON(file name:String) -> [[String:AnyObject]]
Next, update the class and each of the following methods inside `RestaurantDataManager` with `public` access:
public class RestaurantDataManager {
public func fetch(by location:String, withFilter:String="All", completionHandler:() -> Void)
public func numberOfItems() -> Int
public func restaurantItem(at index:IndexPath) -> RestaurantItem
We also need to make our `init()` method for our `RestaurantDataManager` class `public`; so, after the class declaration, add the following:
public init() {}
Having this `init()` method allows us to write the following:
let manager = RestaurantDataManager()
By making the `init()` method `public`, code outside of the framework can call it when it creates an instance with `RestaurantDataManager()`.
Now, change the target to `LetsEatDataKit` and build it again by hitting _command_ \+ _B_. The build should be successful again at this point. If you open the `MapViewController` file, you should see that all of the errors are fixed in this file.
However, we still have more errors to address inside `MapDataManager`, `LocationViewController`, `RestaurantListViewController`, `ExploreViewController`, `RestaurantDetailViewController`, and `MessagesViewController`. Therefore, add the following at the top of each of these files, in the `import` statement section:
import LetsEatDataKit
Next, hit _command_ \+ _B_ again, and there should be no errors inside of any of these files, or in your entire project.
Now, if you switch the target back to your _Let's Eat_ app and build and run it by hitting the Play button (or by using _command_ \+ _R_ ), you should see that everything is working as expected. We can now start using this data in our _iMessages_ app. Before we move on, I want to explain why you would want to do this. It is good practice to adhere to the DRY (don't repeat yourself) principle. You do not want to have to recreate files you already have. This will become more evident when you need to update this file and you only have to do it once instead of multiple times in multiple places.
# Connecting your message cell
Now that we have our files in order, we can start connecting everything. Earlier, we created our cell, and now we need to create a cell class with which to connect it:
1. Right-click the `MessageApp` folder in the Navigator panel and select New File.
2. Inside the Choose a template for your new file screen, select iOS at the top, and then Cocoa Touch Class. Then, hit Next.
3. You will now see an options screen. Please add the following in the new file section:
* Class: `RestaurantMessageCell`
* Subclass: `UICollectionViewCell`
* Also create XIB: Unchecked
* Language: `Swift`
4. Click Next and then Create.
5. In the new file, add the following inside of the class declaration:
@IBOutlet var lblTitle:UILabel!
@IBOutlet var lblCity:UILabel!
@IBOutlet var lblCuisine:UILabel!
6. Save the file and then open `MainInterface.storyboard` in the `MessageApp` folder in the Navigator panel.
7. In the Outline view, select the Collection View Cell.
8. Select the Identity inspector in the Utilities panel. Under Custom Class in the Class drop-down menu, select RestaurantMessageCell and hit _enter_.
9. Switch to the Attributes inspector in the Utilities panel, update the identifier to `restaurantCell`, and then hit _Enter_.
10. Switch to the Connections inspector in the Utilities panel, and click and drag from the empty circle next to each outlet listed to the corresponding `UILabel` in the screen shown in the following screenshot:
A. lblTitle
B. lblCity
C. lblCuisine
We now have our cell set up. Let's continue getting our _iMessages_ app working.
# Showing restaurants
We will display a list of restaurants, just like in our app, but we will not be doing the entire interface. Most of this code will be familiar to you, as we have used it before:
1. Open up the `MessagesViewController.swift` file in the Navigator panel and add the following code inside of the class declaration:
@IBOutlet var collectionView: UICollectionView!
let manager = RestaurantDataManager()
var selectedRestaurant:RestaurantItem?
2. Set up the Collection View defaults. Add the following method inside a `private` extension:
private extension MessagesViewController {
func setupCollectionView() {
let flow = UICollectionViewFlowLayout()
flow.sectionInset = UIEdgeInsets(top: 7, left: 7, bottom: 7, right: 7)
flow.minimumInteritemSpacing = 0
flow.minimumLineSpacing = 7
collectionView.collectionViewLayout = flow
collectionView.delegate = self
collectionView.dataSource = self
}
}
We have set up a Collection View like this a few times now; this method simply configures the flow layout and assigns the delegate and data source. Later in this chapter, we will add a `createMessage()` method, which is where we build the message and insert it into the conversation.
You will see errors once you add the preceding code. Ignore them for now, as we will fix them shortly. Create an `initialize()` method that will set up the Collection View and fetch our data.
3. Add the following method above the `setupCollectionView()` method:
func initialize() {
setupCollectionView()
manager.fetch(by: "Chicago", completionHandler: { _ in
self.collectionView.reloadData()
})
}
Since this tab does not contain a location list, we will just pass a city in manually. Here, we use Chicago, but you can change it to any city of your choice.
4. Call the `initialize()` method inside the `viewDidLoad()` method, so that your `viewDidLoad()` method now looks as follows:
override func viewDidLoad() {
super.viewDidLoad()
initialize()
}
5. Let's create another extension for our Collection View delegates and data source. After the last curly bracket in the `MessagesViewController.swift` file, add the following `extension` declaration:
extension MessagesViewController:UICollectionViewDelegate, UICollectionViewDataSource, UICollectionViewDelegateFlowLayout {
}
6. Let's add all of the methods we need to get our Collection View showing data. Add the following inside your extension (which will get rid of the earlier errors):
func numberOfSections(in collectionView: UICollectionView) -> Int {
return 1
}
func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
return manager.numberOfItems()
}
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "restaurantCell", for: indexPath) as! RestaurantMessageCell
let item = manager.restaurantItem(at: indexPath)
if let name = item.name { cell.lblTitle.text = name }
if let address = item.address { cell.lblCity.text = address }
if let cuisine = item.subtitle { cell.lblCuisine.text = cuisine }
return cell
}
func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize {
let cellWidth = self.collectionView.frame.size.width - 14
return CGSize(width: cellWidth, height: 78)
}
You should be very familiar with what we just added. We are setting up our Collection View data source, as well as making sure our cells have 14 points of horizontal spacing (7 on each side).
Lastly, before we build our app, we need to connect our Collection View in the storyboard:
1. Open up `MainInterface.storyboard` in the `MessageApp` folder in the Navigator panel
2. Select Message View Controller and then the Connections inspector in the Utilities panel
3. Under Outlets, click and drag from the empty circle next to `collectionView` to the Collection View in your scene
Let's change the target to `MessageApp` and build and run our _iMessages_ app by hitting the Play button (or by using _command_ \+ _R_ ). Your app should look similar to the following after clicking the stickers button. It might take a while to load when first launching:
Hitting the arrow (highlighted by the red boxes) will change the screen from compact mode to expanded mode and back again. Now that we have our restaurants displaying, we need to be able to send restaurant reservations to other people. Let's add that next.
# iMessage crashing
This may or may not happen to you, but if you just tried to launch the app and it crashed, there is a fix for this:
1. In the simulator, open the `Messages` app
2. Select Kate
3. Click on the icon with three dots:
4. Click Edit:
5. Click the switch for `MessageApp`:
6. Click Done.
Build and rerun the app, and you should be fine. This error is an Apple bug, and performing these steps is the only way to fix this issue. Let's move on to sending reservations.
# Sending reservations
We need to set up our Collection View so that, when the user taps on a cell, it will add the reservation to the conversation in iMessages. When creating a message to send, we can set the following things:
We will use everything but Trailing Caption and Trailing Subcaption.
1. Open `MessagesViewController` in the `MessageApp` folder in the Navigator panel.
2. In your main class declaration, add the following method after the `setupCollectionView()` method in the `private` extension:
func createMessage(with restaurant:RestaurantItem) {
if let conversation = activeConversation {
let layout = MSMessageTemplateLayout()
layout.image = UIImage(named: "restaurant-detail")
layout.caption = "Table for 7, tonight at 10:00 PM"
layout.imageTitle = restaurant.name
layout.imageSubtitle = restaurant.cuisine
let message = MSMessage()
message.layout = layout
message.url = URL(string: "emptyURL")
conversation.insert(message, completionHandler: { (error: Error?) in
if error != nil {
print("there was an error (error)")
}
else {
self.requestPresentationStyle(.compact)
}
})
}
}
In this method, we are setting up `MSMessage`. We check for an active conversation first. If there is one, we set up our layout. Here, we are just using an image from our assets to create an image background (we could also have used a video, for example). We set the caption to `Table for 7, tonight at 10:00 PM`, which lets the receiver see all of the relevant information for the reservation. Next, we set the restaurant name as the image title and the restaurant's cuisine as the image subtitle. Then, we create an instance of `MSMessage`, pass it the layout we created, and give it a placeholder URL (here, just the string "emptyURL", since we do not have a real URL to send). Finally, we insert the message into the conversation. We also make sure that, after sending, we switch back to compact mode; otherwise, the user might think that the app does not work.
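If you later want the receiving device to read the reservation details back out of the message, one option is to encode them into that URL. This is only a sketch of the idea, not something this chapter requires; the scheme, host, and query names here are made up:
// Hypothetical: pack the reservation details into the message URL.
var components = URLComponents()
components.scheme = "https"
components.host = "letseat.example.com"
components.queryItems = [
    URLQueryItem(name: "restaurantID", value: "\(restaurant.restaurantID ?? 0)"),
    URLQueryItem(name: "time", value: "10:00 PM")
]
message.url = components.url
// On the receiving side, you could read conversation.selectedMessage?.url back out
// (for example, in didBecomeActive(with:)) and parse the query items.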
Lastly, we just need to add the code that calls our `createMessage()` method. Add the following method in your extension, before the last curly bracket:
func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) {
selectedRestaurant = manager.restaurantItem(at: indexPath)
guard let restaurant = selectedRestaurant else { return }
createMessage(with: restaurant)
}
Here, we are checking for when the user taps a cell; then, we get `selectedRestaurant` and pass it to our `createMessage()` method.
Let's build and run the project by hitting the Play button (or by using _command_ \+ _R_ ). Select a restaurant and you will now see a message with the selected restaurant in the message area:
You can see that, with a little bit of work, you can add a nice _iMessages_ app to your app.
# Summary
In this chapter, we looked at how to add an _iMessages_ app to our app. We also created a framework that allowed us to use data in both our apps without having to duplicate code. We looked at what is involved in creating an `MSMessage` and how we can pass `MSMessageTemplateLayout` to an `MSMessage`. We now know that we can also send embedded videos, as well as images, when we send messages. Also, we can now send reservations through the _iMessages_ app with relevant data for a reservation.
In the next chapter, we will go back to our _Let's Eat_ app and learn how to work with in-app notifications.
# Notifications
Notifications were first launched in 2009 and are a staple of the iOS system. Whether from your favorite app or a text message, you have encountered a notification at some point while using a smartphone. Pre-iOS 10, if you had to work with notifications in iOS, you had two types of push notifications: remote (from a server) and local.
iOS 10 made changes to notifications that simplified them, but also made them more robust. In iOS 10, there is now one notification that covers both remote and local notifications, which is excellent for those who have worked with them in the past. Concerning breadth of functionality, notifications now allow you to embed rich media (such as images, video, and audio), and also have custom UI content.
In this chapter, we are going to learn how to create basic notifications, as well as notifications with embedded images. After we look at both of these examples, we'll look at how to create a custom UI for our notifications.
In this chapter, we'll cover the following topics:
* Learning how to build basic notifications
* Learning how to embed images into notifications
* Learning how to build a custom notification UI
# Starting with the basics
Let's begin by getting our app to send us basic notifications. Inside of our restaurant details page, we have three buttons that all say 7:30 PM, which currently don't do anything. We are going to update those buttons so that, when you tap on one of them, it creates a restaurant reservation notification. If this were a real reservations app, we would want to store these reservations. When the reservation date and time nears, we would then post a notification to the user as a reminder. Doing all of that is outside the scope of this book, so we will address creating a restaurant reservation notification.
# Getting permission
Before we can send any notifications, we must get the user's permission. Therefore, open the `AppDelegate.swift` file and add the following method after the `didFinishLaunchingWithOptions()` method:
func checkNotifications() {
UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge]) { (isGranted, error) in
if isGranted {
print("Notifications permissions granted.")
} else {
print("Notifications permissions denied.")
}
}
}
This method checks for the user's authorization. If the user has not been asked, it displays a message to the user for permission to use notifications. When you add this method, you will get an error. The reason for this error is that we need to `import UserNotifications`. At the top of the file, under `import UIKit`, add the following:
import UserNotifications
Next, the method we just added needs to run inside of the `initialize()` method. Add the following after `setupDefaultColors()`:
checkNotifications()
Your `initialize()` method should now look like the following:
func initialize() {
setupDefaultColors()
checkNotifications()
}
Build and run the project by hitting the Play button (or using _command_ \+ _R_ ), and you should see the following message:
# Setting up notifications
Now that we have permission, we need to set up notifications. We will start setting up our buttons:
1. Open the `RestaurantDetailViewController.swift` file.
2. At the top of the file, under `import UIKit`, add the following:
import UserNotifications
3. Add the following method after our `@IBAction func unwindReviewCancel(segue: UIStoryboardSegue) {}` method and before the last curly bracket of our class file:
@IBAction func onTimeTapped(sender: UIButton) {
}
4. Save the file, and you will see an empty circle appear next to this new `@IBAction`.
5. Open `RestaurantDetail.storyboard`; we are going to use the time buttons for our notifications. Select each button and, in the Attributes inspector, update the text inside each button to display 9:30pm, 10:00pm, and 10:30pm. You should get the following:
6. Select `RestaurantDetailViewController` and then select the Connections inspector in the Utilities panel. Under `Received Actions`, you should see `onTimeTappedWithSender`, which we added earlier:
7. Click and drag from the empty circle next to `onTimeTappedWithSender` to the first button (marked 9:30pm) in the restaurant detail scene:
8. In the prompt, select Touch Up Inside:
9. Repeat these steps for the remaining two buttons (10:00pm and 10:30pm), clicking and dragging the same circle (now filled) to each of the remaining buttons in the scene and then choosing Touch Up Inside for each prompt that follows.
10. Open `RestaurantDetailViewController.swift`; this is where we need to get the time from inside of the buttons and pass them to our notifications. Add the following method after the `onTimeTapped()` method:
func showNotification(sender:String?) {
print(sender as Any)
}
11. Inside of the `onTimeTapped()` method, add the following:
showNotification(sender: sender.titleLabel?.text)
We are now passing the time value to our `showNotification()` method. Build and run the project by hitting the Play button (or using _command_ \+ _R_ ). You should now see the time of each selected button in the console.
# Showing notifications
Now that we are showing a time, let's show our notification, along with the time selected:
1. Inside of the `showNotification()` method, delete the print statement and add the following:
let content = UNMutableNotificationContent()
if let name = selectedRestaurant?.name {
content.title = name
}
if let time = sender {
content.body = "Table for 2, tonight at \(time)"
}
content.subtitle = "Restaurant Reservation"
content.badge = 1
let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
let identifier = "letsEatReservation"
let request = UNNotificationRequest(identifier: identifier, content: content, trigger: trigger)
UNUserNotificationCenter.current().add(request, withCompletionHandler: nil)
Here, we are creating a notification content object. In this object, we set the title, the body, the subtitle, and the badge.
2. Before the `showNotification()` method, add the following method:
func setupNotificationDefaults() {
UNUserNotificationCenter.current().delegate = self
}
This method sets our View Controller as the `delegate` for notifications. We get an error on the `delegate` assignment because we have not yet conformed to the required protocol.
3. Create an extension at the end of this file, after the last curly bracket. You may already have an extension in this file for our map if you tackled any earlier challenges; if so, add this new extension after the last curly bracket of that `Map` extension. In either case, add the following code:
extension RestaurantDetailViewController: UNUserNotificationCenterDelegate {
func userNotificationCenter(_ center: UNUserNotificationCenter, willPresent notification: UNNotification, withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
completionHandler([.alert, .sound])
}
}
4. Call the `setupNotificationDefaults()` method inside of our `initialize()` method. Your updated `initialize()` method should now look as follows:
func initialize() {
setupLabels()
setupMap()
setupNotificationDefaults()
}
5. Build and run the project by hitting the Play button (or using _command_ \+ _R_ ). Open a restaurant detail page, tap a time button, and wait five seconds. You should see the following:
We just implemented a basic notification; however, we can do so much more. Next, let's get an image inside of our notification.
# Customizing our notifications
Now that we understand how to set up a basic notification, let's get into some more features that we can offer. Some of these features were introduced in iOS 11 and some in iOS 12; I will note which features belong to which OS. The first feature I want to talk about is a new iOS 12 feature called Deliver quietly. It gives users more control over their notifications, but it delivers those notifications only to the Notification Center. Let's look at how this works.
# Deliver quietly (iOS 12 feature)
Open `AppDelegate`. In the `checkNotifications()` method, you currently have three options: .alert, .sound, and .badge. Update these options by adding .provisional. When you are done, you should have the following:
func checkNotifications() {
UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge, .provisional]) { (isGranted, error) in
if isGranted {
print("Notifications permissions granted.")
} else {
print("Notifications permissions denied.")
}
}
}
For this to work, we need to delete the app and re-run it. This time, when you rerun it, you will notice that the user is no longer prompted to allow notifications. Proceed to a restaurant detail and, this time, when you tap a time button, you will notice that you no longer get a banner notification.
If you leave the app and go to the Notification Center (swipe down from the upper left corner of the phone), you will see the notification there instead:
Deliver quietly is a great feature if you want to get your users using notifications by getting them to show up in the Notification Center by default. There, the user can customize these notifications themselves in the settings. We are done with this feature, so delete the app and remove the .provisional from the request, as we will be moving forward along the normal route. Rerun your app and you should see the notification permission message again:
The next feature I want to look at is embedding images into your notifications. This feature was introduced in iOS 10. Before we can embed an image, we need a test image. In the `Misc` folder of the Navigator panel, create a new group, called `Images`. Then, in the project folder for this book, open the `asset` folder for this chapter and drag the image assets into the `Images` folder that we just created.
# Embedding images (iOS 10 feature)
Next, let's embed our images. First, return to the `RestaurantDetailViewController.swift` file and, in the `showNotification()` method we created, remove the following code:
let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
let identifier = "letsEatReservation"
let request = UNNotificationRequest(identifier: identifier, content: content, trigger: trigger)
UNUserNotificationCenter.current().add(request, withCompletionHandler: nil)
Replace the deleted section of code with the following code:
guard let imgURL = Bundle.main.url(forResource: "sample-restaurant-img@3x", withExtension:"png") else { return }
let attachment = try! UNNotificationAttachment(identifier: "letsEatReservation", url:imgURL, options:nil)
content.attachments = [attachment]
sendNotification(with:content)
In this code, we are getting the image URL from our project and creating an attachment. We attach the rich media (here, an image) to the notification and send it. Next, let's add the `sendNotification()` method:
func sendNotification(with content:UNNotificationContent) {
let uuid = UUID().uuidString
let trigger = UNTimeIntervalNotificationTrigger(timeInterval:2, repeats:false)
let request = UNNotificationRequest(identifier:uuid, content:content, trigger:trigger)
UNUserNotificationCenter.current().add(request, withCompletionHandler: nil)
}
Build and rerun the project by hitting the Play button (or using _command_ \+ _R_ ). When you get to a restaurant detail page, tap a time button and wait a couple of seconds. You should now see a thumbnail image in the notification:
Also, if you click and pull down on the notification, you should see the following:
Thus far, we have been receiving notifications while inside the app. If you want to test notifications outside of the app, take the following steps (you might have to update your timer from two to five seconds if it is too quick): build and run the project by hitting the Play button (or using _command_ \+ _R_ ). When you get to a restaurant detail page, tap the time button and then immediately hit _command_ \+ _shift_ \+ _H_. This takes you out of the app, and you will then see the following:
If you click and pull down on the notification, you will see the following:
Our notifications are looking good, but you really cannot do anything with them. It would be nice to confirm your reservation with a yes or no, for example. We need to add some buttons for the notifications to do this.
# Adding buttons
Before we add any more String values, it is good practice to eliminate as many raw string literals from your app as you can. Adding this file not only removes scattered strings, it also keeps you from accidentally typing the wrong value; for example, we could easily misspell an identifier. Putting these values in an `enum` is a protective measure. Let's add a new file:
1. Right-click the `Misc` folder and select New File.
2. Inside of Choose a template for your new file screen, select iOS at the top and then choose Swift File. Then, hit Next.
3. Name this file `Identifier`, hit Create, and then add the following code:
enum Identifier:String {
case reservationCategory
case reservationIdentifier = "letsEatReservation"
}
enum Option:String {
case one = "optionOne"
case two = "optionTwo"
}
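To see the benefit, compare using a bare string with using the enum's raw value; the compiler now catches a misspelled case name, whereas a misspelled string literal would fail silently at runtime:
// Easy to misspell, and the compiler cannot help us:
content.categoryIdentifier = "reservationCategory"
// Checked by the compiler, and defined in exactly one place:
content.categoryIdentifier = Identifier.reservationCategory.rawValue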
We only need to add a few things to add buttons to our notifications. First, we need to update our restaurant detail:
1. Inside the `RestaurantDetailViewController.swift` file, add the following into the `showNotification()` method after `content.badge = 1`:
content.categoryIdentifier = Identifier.reservationCategory.rawValue
2. We will use this to create our button options for our notification. Open the `AppDelegate.swift` file. After the `checkNotifications()` method, add the following code:
func permissionGranted() {
let optionOne = UNNotificationAction(identifier: Option.one.rawValue, title: "Yes", options: [.foreground])
let optionTwo = UNNotificationAction(identifier: Option.two.rawValue, title: "No", options: [.foreground])
let category = UNNotificationCategory(identifier: Identifier.reservationCategory.rawValue, actions: [optionOne, optionTwo], intentIdentifiers: [], options: [])
UNUserNotificationCenter.current().setNotificationCategories([category])
}
3. In this function, we are setting up two actions: one for yes and one for no. We then create a category with our reservation identifier and register it with the notification center; this defines the type of notification that we want to use. Finally, add `self.permissionGranted()` inside the `if isGranted` statement of the `checkNotifications()` method, so that the category is registered once permission has been granted.
4. We need to write code to handle when we receive a notification. Return to the `RestaurantDetailViewController.swift` file and add the following inside of your new extension for notifications after the `willPresent()` method:
func userNotificationCenter(_ center: UNUserNotificationCenter, didReceive response: UNNotificationResponse, withCompletionHandler completionHandler: @escaping () -> Void) {
if let identifier = Option(rawValue: response.actionIdentifier) {
switch identifier {
case .one :
print("User selected yes")
case .two:
print("User selected no")
}
}
completionHandler()
}
5. Build and run the project by hitting the Play button (or using _command_ \+ _R_ ). When you get the notification and pull down on it, you will see that you now have button options:
Inside of our `didReceive()` method, we are printing out what the user selected, but you can choose whatever `print` statement you like.
When you receive multiple notifications, they are displayed one after another. However, we can actually group them together instead. Let's see how grouped notifications work.
# Grouped notifications (iOS 11)
To get grouped notifications, we just need to give our notifications a thread ID inside of the `RestaurantDetailViewController.swift` file. Inside the file, add the following code after `content.categoryIdentifier = Identifier.reservationCategory.rawValue`:
if let id = selectedRestaurant?.restaurantID {
content.threadIdentifier = "\(id)"
}
Rerun the app, and when you get to a restaurant detail, tap the time button a few times and then swipe down from the upper-left corner to access your Notification Center. You should see grouped notifications now:
With grouped notifications, you can customize the summary text as well as the hidden text. Hidden text is the text that is shown to users when they do not want to see the information inside of the notification. Let's update the summary text as well as the hidden text next.
# Summary and hidden text (iOS 12)
Open the `AppDelegate` file and let's add a couple of things to the `permissionGranted()` method. Add the following under the `optionTwo` constant:
// Add this under optionTwo
let hiddenRestaurantPlaceholder = "%u new restaurant invites."
let summaryFormat = "%u more restaurant invites for %@"
Next, delete the `category` constant and replace it with the following:
let category = UNNotificationCategory(identifier: Identifier.reservationCategory.rawValue, actions: [optionOne, optionTwo], intentIdentifiers: [], hiddenPreviewsBodyPlaceholder: hiddenRestaurantPlaceholder, categorySummaryFormat: summaryFormat, options: [])
Your `permissionGranted()` method should now look like the following:
func permissionGranted() {
    let optionOne = UNNotificationAction(identifier: Option.one.rawValue, title: "Yes", options: [.foreground])
    let optionTwo = UNNotificationAction(identifier: Option.two.rawValue, title: "No", options: [.foreground])
    let hiddenRestaurantPlaceholder = "%u new restaurant invites."
    let summaryFormat = "%u more restaurant invites for %@"
    let category = UNNotificationCategory(identifier: Identifier.reservationCategory.rawValue, actions: [optionOne, optionTwo], intentIdentifiers: [], hiddenPreviewsBodyPlaceholder: hiddenRestaurantPlaceholder, categorySummaryFormat: summaryFormat, options: [])
    UNUserNotificationCenter.current().setNotificationCategories([category])
}
Build and run the app again. Hit the time button in restaurant details a few times and pull down on the Notification Center. You will now see that your summary text has been customized:
Next, go to the Settings app in the simulator, scroll down to the _LetsEat_ app, and select it:
Select Notifications | Show Previews, and then select Never:
Finally, go back to the Notification Center; you will see that your notifications have changed to hidden and are now displaying our custom summary text:
So far, we have looked at how to create basic notifications as well as notifications with images embedded in them. Next, we can take our app a step further by adding our custom UI to our notifications.
# Custom UI in notifications
To add custom UI to our notifications, we need to add an extension. Let's get started by doing the following:
1. In the Navigator panel, select the Project navigator and then your project.
2. At the bottom of the TARGETS area, click the + button.
3. Select Notification Content Extension under Application Extension and then click Next:
4. In the options screen that appears, set Product Name to `LetsEatNotificationExtension` and click Finish.
Now that our extension has been created, we need to be able to use it:
1. Open the `info.plist` file in the `LetsEatNotificationExtension` folder.
2. Tap the `NSExtension` disclosure arrow to open up that key.
3. Tap the disclosure arrow to open `NSExtensionAttributes`, under which you can see `UNNotificationExtensionCategory`:
This category is the category of the notification we set previously.
4. Update `myNotificationCategory` to `reservationCategory`.
Save the file and switch your target back to the _Let's Eat_ app. Build and run the project by hitting the Play button (or using _command_ \+ _R_ ). This time, instead of seeing our custom image, we now have the following:
You might have noticed something slightly off when you pulled down on the notification. The notification starts out large and then shrinks down. Inside of your `Info.plist` file, there is a property, `UNNotificationExtensionInitialContentSizeRatio`, that is currently set to `1`. Changing it to `0.25` makes this less noticeable.
Currently, this custom notification is showing us the custom and default content together. We can fix this by returning to our `Info.plist` inside of `LetsEatNotificationExtension`.
Inside of `NSExtensionAttributes`, add a new item, called `UNNotificationExtensionDefaultContentHidden`, and set the type as Boolean and the value to YES:
Save the file and build and run the project by hitting the Play button (or using _command_ \+ _R_ ). Once you pull down on the notification, you will see that the default content is hidden:
You can now update `MainInterface.storyboard` inside of your `LetsEatNotificationExtension` folder. In iOS 12, you can also add custom buttons to your storyboard for users to interact with. That is outside the scope of this book, but we have covered a lot of what you need already; a rough sketch of how the extension can react to an action follows.
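If you do experiment with interactive content, the extension's view controller can respond to the notification actions itself. This is only an illustrative sketch, not a step the chapter asks for; it compares against the raw string "optionOne" that we gave `Option.one`, because the `Option` enum belongs to the app target rather than the extension:

// In the content extension's view controller, alongside didReceive(_:).
func didReceive(_ response: UNNotificationResponse, completionHandler completion: @escaping (UNNotificationContentExtensionResponseOption) -> Void) {
    if response.actionIdentifier == "optionOne" {
        // Update the custom UI here, then hand the action back to the app to finish handling it.
        completion(.dismissAndForwardAction)
    } else {
        completion(.dismiss)
    }
}

This method is part of the `UNNotificationContentExtension` protocol that the template's `NotificationViewController` already adopts.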
# Custom Notification Settings (iOS 12)
The last thing I want to cover is Custom Notification Settings. This feature is really good if you have an app that has buttons to toggle different notification settings off and on. Apple now lets users go directly from the Settings app and launch their custom settings page.
Before we get into this, drag and drop the `Settings` folder from this chapter's `assets` folder into the `Controllers` folder. When you are done, you should see the following (make sure that the _LetsEat_ target is selected):
There is nothing special about the settings folder; it is just a screen with text. Let's look at how we can set it up. Open the `RestaurantDetailViewController.swift` file; in the `UNUserNotificationCenterDelegate` extension, add the following method:
func userNotificationCenter(_ center: UNUserNotificationCenter, openSettingsFor notification: UNNotification?) {
    let storyboard = UIStoryboard(name: "NotificationSettings", bundle: nil)
    let vc = storyboard.instantiateViewController(withIdentifier: "NotificationSettingsNavController")
    self.present(vc, animated: true, completion: nil)
}
Next, open `AppDelegate`. In the `checkNotifications()` method, you currently have three options: `.alert`, `.sound`, and `.badge`. Update these options by adding `.providesAppNotificationSettings`. When you are done, you should have the following:
func checkNotifications() {
    UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge, .providesAppNotificationSettings]) { (isGranted, error) in
        if isGranted {
            print("Notifications permissions granted.")
            self.permissionGranted()
        } else {
            print("Notifications permissions denied.")
        }
    }
}
Now, rerun the app and then go to the restaurant detail. Once you are in the restaurant detail, exit the app, go to the _LetsEat_ app settings inside of the Settings app, and select Notifications. You will now see a new link:
Next, tap the LetsEat Notification Settings button. You will be sent back to the LetsEat app and the settings page will launch:
For this example to work, you must go to the restaurant detail page before you go to settings. In a real-world application, you would want to set this up inside of your `AppDelegate` so that no matter where the user is in the app, the settings page will open.
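A minimal sketch of that `AppDelegate`-based approach is shown below. It assumes you also set `UNUserNotificationCenter.current().delegate = self` at launch (for example, in `initialize()`), which is an assumption rather than something the chapter has already done:

extension AppDelegate: UNUserNotificationCenterDelegate {
    func userNotificationCenter(_ center: UNUserNotificationCenter, openSettingsFor notification: UNNotification?) {
        let storyboard = UIStoryboard(name: "NotificationSettings", bundle: nil)
        let vc = storyboard.instantiateViewController(withIdentifier: "NotificationSettingsNavController")
        // Walk up to the top-most presented controller so this works from any screen in the app.
        var top = window?.rootViewController
        while let presented = top?.presentedViewController {
            top = presented
        }
        top?.present(vc, animated: true, completion: nil)
    }
}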
# Summary
Notifications since iOS 10 are getting more and more powerful every year, and give you the flexibility to create rich custom content with very little work. In this chapter, we learned how to build basic notifications, as well as grouped notifications, and added a custom summary and hidden text. Then, we stepped it up a bit by adding embedded images into our notifications. We briefly looked at how to add a custom notification using an extension. Finally, we looked at launching Custom Notification Settings from the settings app into our app.
In the next chapter, we will look at SiriKit and look at how we can integrate it into our app.
# SiriKit
Last year, Apple announced the addition of a new framework called **SiriKit**. This framework allows developers to leverage Siri in their apps. For the last year, SiriKit has been slowly adopted by developers. This year, Apple added even more supported domains. In this chapter, we are going to add SiriKit support to our app.
My original goal was to have Siri set up restaurant reservations but, unfortunately, Apple requires this feature to be built with MapKit. Using MapKit is not the real issue, though. The real problem is that you have to work with Apple to get this set up, so we cannot make restaurant reservations using Siri directly. If you are working on an app that needs this feature, then you need to contact Apple support. In this chapter, we are going to set up the framework so that we can request money from someone. The setup for the other SiriKit domains is quite similar, so once you are comfortable with this chapter, you should not have a problem working through the others. Please note that you must have a paid developer account to use SiriKit and follow along with this chapter; Apple has changed what non-account holders have access to, and SiriKit is one of those restrictions.
We will cover the following topics in this chapter:
* Working with Siri Shortcuts
* Understanding SiriKit
* Working with SiriKit extensions
* Working with SiriKit UI extensions
# Using Siri Shortcuts
In iOS 12, Apple introduced Siri Shortcuts. Siri Shortcuts are a way to create shortcuts to your app for your users. For example, let's say a user, every Tuesday, makes the same date night reservation with his wife at her favorite restaurant. Instead of having to go through all the steps each time, we can make this easier. Let's see how this works.
Open up your `Info.plist` file and add `academy.cocoa.LetsEat.reservation-activity-type` to `NSUserActivityTypes`. Make sure that you are in `Info.plist` for the app and not one of the other targets:
Now that we have our `Info.plist` set up, let's add some code. Open up the `RestaurantDetailViewController.swift` file and add the following imports:
import Intents
import CoreSpotlight
import CoreServices
Next, add the following method:
func setupReservation(with description: String) {
    let reservationActivity = NSUserActivity(activityType: "academy.cocoa.LetsEat.reservation-activity-type")
    reservationActivity.isEligibleForSearch = true
    reservationActivity.isEligibleForPrediction = true
    if let name = selectedRestaurant?.name {
        reservationActivity.title = "Reservation for 2 at \(name) at \(description)"
    }
    reservationActivity.suggestedInvocationPhrase = "Restaurant Reservation"
    reservationActivity.userInfo = ["Key": "Value"]
    let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeItem as String)
    let date = Date()
    let dateFormatter = DateFormatter()
    dateFormatter.dateFormat = "EEEE, MMM d, yyyy"
    attributes.contentDescription = dateFormatter.string(from: date)
    attributes.thumbnailData = UIImage(named: "mexican")?.pngData()
    reservationActivity.contentAttributeSet = attributes
    self.userActivity = reservationActivity
    self.userActivity?.becomeCurrent()
}
This method creates a reservation activity. `NSUserActivity` is great for opening your app through shortcuts. If you want advanced controls, you will want to use Intents, which we will cover later in this chapter. The `suggestedInvocationPhrase` variable is used to give the user an idea about what phrase they should set for the Siri voice shortcut. We will also attach an image to the suggestion; in a real-world application, you would want to make this dynamic and change based on the `suggestedInvocationPhrase`, but that is beyond the scope of this book.
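As one purely illustrative option for making the thumbnail dynamic, you could derive the image name instead of hard-coding "mexican". The `imageName(for:)` helper below is hypothetical and would need to map a phrase or restaurant to an asset in your catalog:

// Sketch only: imageName(for:) is a hypothetical helper, not part of the chapter's project.
let thumbnailName = imageName(for: reservationActivity.suggestedInvocationPhrase ?? "Restaurant Reservation")
attributes.thumbnailData = UIImage(named: thumbnailName)?.pngData()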
Now, call the `setupReservation()` method inside the `if let time = sender` statement, since we need to pass the time to our reservation method:

if let time = sender {
    content.body = "Table for 2, tonight at \(time)"
    // New line
    setupReservation(with: time)
}
Finally, before you launch the simulator, go to Settings | Developer and scroll all the way down, and make sure that both the options under **Shortcuts Testing** are enabled, as shown here:
Now, launch the app, go to a restaurant's details, and hit a time similar to what we did for notifications. Once the notification appears, exit the app and go to your search by swiping right; in the search, type reservation and you will see our Siri suggestion:
# Siri voice shortcut
If you want to add a Siri voice shortcut, open the Settings app in the simulator and select Siri. Simply press the + button to add your shortcut as a Siri voice shortcut. This shortcut will quickly launch your app using Siri. With some more advanced code, you can take this further by sending the user to the correct detail page. That code is beyond the scope of the book, but you are at a good starting point, and a rough sketch of the idea follows.
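This sketch shows how the app could respond when the shortcut's user activity is handed back to it. The activity type string matches the one we registered earlier; the `restaurantID` entry in `userInfo` and the `navigateToRestaurantDetail(id:)` helper are hypothetical, since the chapter only stores a placeholder `["Key": "Value"]` dictionary:

// In AppDelegate.swift. Called when the user runs the shortcut and iOS hands the activity back to the app.
func application(_ application: UIApplication, continue userActivity: NSUserActivity, restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
    guard userActivity.activityType == "academy.cocoa.LetsEat.reservation-activity-type" else {
        return false
    }
    // Hypothetical: if you stored a restaurant identifier in userInfo when creating the activity,
    // you could read it here and push the matching detail screen.
    if let id = userActivity.userInfo?["restaurantID"] as? Int {
        navigateToRestaurantDetail(id: id) // hypothetical helper
    }
    return true
}

Next, let's discuss SiriKit and work with Intents. Please note that in order to continue with this chapter, you will need to have a developer account. You can get more information about this here: <https://developer.apple.com/support/app-capabilities/>.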
# Understanding SiriKit
We first need to understand how Siri interacts with our app before we get started. Have a look at how it works through this diagram:
A user interacts with Siri to compose a request. Siri takes the request and looks through the intents for the requesting app. If the app is not found, Siri lets you know. If the app is located, but cannot do what was requested, Siri will notify you that the request cannot be made at this time. If the app can handle the intent, it will pass the information to your app. Your app does what it needs to do with that information and reports back to Siri. If the app needs further information, it lets Siri know what to request until the app has everything it needs or the user cancels the request.
# Supported intents
As of iOS 11, Apple currently supports the following intents:
* VoIP calling (initiate calls and search the user's call history)
* Messaging (send messages and search the user's received messages)
* Payments (send payments between users or pay bills)
* Lists and notes (create and manage notes and to-do list items)
* Visual codes (convey contact and payment information using Quick Response (QR) codes)
* Photos (search for and display photos)
* Workouts (start, end, and manage fitness routines)
* Ride booking (book rides and report their status)
* Car commands (manage vehicle door locks and get the vehicle's status)
* CarPlay (interact with a vehicle's CarPlay system)
* Restaurant reservations (create and manage restaurant reservations with help from the _Maps_ app)
This API requires you to work with Apple Maps before your app can use it. For information on how to get started, go to <http://mapsconnect.apple.com>.
We are going to use the Payment intent, which allows us to send payments between users or pay bills. When we are done, we can just say _"Hey Siri! Send $100 to Jason Clayton for dinner last night using LetsEat_. _"_ We can hook this up to any banking system, but at the moment, we have everything else set up for this. Let's get started.
# Enabling Siri's capabilities
The first thing we need to do is enable SiriKit:
1. In Xcode, go to your app and select the LetsEat target:
2. Next, click on the Capabilities tab:
3. Then, hit the switch for Siri to switch it to ON:
4. You should see the following when you are done:
5. You need a working developer account to do the following steps. Otherwise, you will see errors when trying to follow along. Next, we need to add a new target to our project. At the bottom of the TARGETS section, you should see a + button:
6. Click the + button and you will see the following screen:
7. Next, select Intents Extension under the iOS tab. Then, hit Next.
8. In the Options screen that appears, there are some fields to fill out or choose from. Add the following to the Options screen and then hit Next:
* Product Name: `MakePayment`
* Team: Must have a team
* Organization Name: Your name/company name
* Organization Identifier: Your domain name in reverse order
* Language: `Swift`
* Include UI Extension: Checked
* Project: `LetsEat`
* Embed in Application: `LetsEat`
You should see the following:
When you have finished, we will have two extensions that have been added to our project: the `MakePayment` and `MakePaymentUI` extensions. These extensions are what we will use to add SiriKit to our project. We need to edit these extensions so that they can accept payments:
1. Open the `MakePayment` folder and select the `Info.plist` file.
2. Open all of the disclosure arrows under `NSExtension`. When they are all open, you should see the following:
Currently, the app is set up to use the Send Message intent, but we want to use the Send Payment intent.
3. Under `IntentsSupported`, delete `Item 1 (INSearchForMessagesIntent)` and `Item 2 (INSetMessageAttributeIntent)`:
4. Now, for `Item 0`, change `INSendMessageIntent` to `INSendPaymentIntent`.
5. Under `IntentsRestrictedWhileLocked`, add `INSendPaymentIntent` by clicking the + button:
6. When finished, you should see the following:
7. Next, open up `Info.plist` in the `MakePaymentUI` folder, open all the disclosure arrows under `NSExtension`, and change `INSendMessageIntent` under `IntentsSupported` to `INSendPaymentIntent`:
We have finished setting up our plist. Whenever we access something sensitive, we have to ask permission, just like we did earlier when we accessed the user's photos. In the `Info.plist` file of the _LetsEat_ app, we need to add an entry that tells users why we need access to Siri. Add the `NSSiriUsageDescription` key.
For the key value, enter anything you want as an alert that the user sees. In the following example, the value is set as `This app uses Siri to send payments.`:
Now, inside of `AppDelegate`, add the following import after `import UIKit`:
import Intents
Next, after `setupDefaultColors()`, add the following method:
func requestSiriPermissions() {
    INPreferences.requestSiriAuthorization({ (status) in
        print(status)
    })
}
Then, in your `initialize()` method under `checkNotifications()`, add `requestSiriPermissions()`.
Adding this asks the user for permission to use Siri. If you are going to use this in a real app, consider adding a Settings section where users can use a switch to turn Siri on or off; you do not want to force users into something without giving them a reason. In iOS, once you ask a user for permission and they decline, they have to go into the Settings app to change their mind. It is better to show your own dialog box first: if the user says yes, run the real request, and if they say no, do nothing. This way, you never force your users to go to their phone's settings to turn the feature on. A minimal sketch of that pattern follows.
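This sketch assumes it lives somewhere that imports both UIKit and Intents and that you call it from a view controller; it is an illustration of the pre-permission idea rather than code the chapter requires:

// Ask with our own alert first, and only show Apple's Siri prompt if the user agrees.
func askBeforeRequestingSiri(from viewController: UIViewController) {
    let alert = UIAlertController(title: "Use Siri with LetsEat?", message: "Siri can help you send payments to friends.", preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "Not now", style: .cancel, handler: nil))
    alert.addAction(UIAlertAction(title: "Yes", style: .default) { _ in
        INPreferences.requestSiriAuthorization({ (status) in
            print(status)
        })
    })
    viewController.present(alert, animated: true, completion: nil)
}

Now that we have our permissions set up, we need to create users that we can send money to.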
# Creating users
When using SiriKit, it needs to have an `INPerson` object. An `INPerson` object is used by Siri to send users things—money, in our case. Let's create this new file:
1. Right-click the `Misc` folder and select New File.
2. Inside of the Choose a template for your new file screen, select iOS at the top and then Swift. Then, hit Next.
3. Save the file as `RestaurantContact`.
4. Click Create.
5. Add the following code to this file:
import Intents

struct RestaurantContact {
    let name: String
    let email: String

    static func allContacts() -> [RestaurantContact] {
        return [
            RestaurantContact(name: "Jason Clayton", email: "jason@mac.com"),
            RestaurantContact(name: "Joshua Clayton", email: "joshua@texas.edu"),
            RestaurantContact(name: "Teena Harris", email: "teena@gmail.com")
        ]
    }

    func inPerson() -> INPerson {
        let formatter = PersonNameComponentsFormatter()
        let handle = INPersonHandle(value: email, type: .emailAddress)
        if let components = formatter.personNameComponents(from: name) {
            return INPerson(personHandle: handle, nameComponents: components, displayName: components.familyName, image: nil, contactIdentifier: nil, customIdentifier: nil)
        }
        else {
            return INPerson(personHandle: handle, nameComponents: nil, displayName: nil, image: nil, contactIdentifier: nil, customIdentifier: nil)
        }
    }
}
Here, we are creating contacts that we can use to ask Siri to send money. We have set up three people to accept payment at this time: Jason Clayton, Joshua Clayton, and Teena Harris. When we make a request with Siri, these are the names that it looks for to see if the person exists. If the name is not in this list, Siri lets you know that the name is not found. This list can have any names you wish, so if you want to change the names to something else, you can do that now. Just make sure that when we get to the requesting section, you change the name there as well. Our `inPerson()` method simply converts a contact into the `INPerson` format that SiriKit needs in order to read the object.
Next, update the Target Membership for this file to also include `MakePayment`:
We now need to add code that runs when the Send Payment intent is invoked.
# Updating our intent handler
We can now finally add our code that runs when the Send Payment intent is invoked. Open the `IntentHandler` class inside of the `MakePayment` extension folder. After the `import Intents` line, delete everything else from this file and add the following code:
class IntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any {
        if intent is INSendPaymentIntent {
            return SendMoneyIntent()
        }
        return self
    }
}
Here, we are creating a custom intent handler. When the intent is to send payment, we want to run our `SendMoneyIntent` class. We need to create this class next. In the same file, directly under the `IntentHandler` class, add the following:
class SendMoneyIntent: NSObject, INSendPaymentIntentHandling {
    func handle(intent: INSendPaymentIntent, completion: @escaping (INSendPaymentIntentResponse) -> Void) {
        if let person = intent.payee, let amount = intent.currencyAmount {
            // handle payment
            print("person \(person.displayName) - amount \(String(describing: amount.amount))")
            completion(INSendPaymentIntentResponse(code: .success, userActivity: nil))
        }
        else {
            completion(INSendPaymentIntentResponse(code: .failure, userActivity: nil))
        }
    }
}
In this class, the `handle()` method responds to a `SendPaymentIntent`. We are printing the person's display name and amount. We pass a completion block here, but in real production code, you would run whatever API you are using to verify the payment. Add the following inside of the `SendMoneyIntent` under the `handle()` method:
func resolvePayee(for intent: INSendPaymentIntent, with completion: @escaping (INPersonResolutionResult) -> Void) {
    if let payee = intent.payee {
        let contacts: [RestaurantContact] = RestaurantContact.allContacts()
        var result: INPersonResolutionResult?
        var matchedContacts: [RestaurantContact] = []
        for contact in contacts {
            print("checking existing: \(contact.name) - \(payee.displayName)")
            if contact.name == payee.displayName {
                matchedContacts.append(contact)
            }
            switch matchedContacts.count {
            case 0:
                print("no matches")
                result = .unsupported()
            case 1:
                print("best matched")
                let person = matchedContacts[0].inPerson()
                result = INPersonResolutionResult.success(with: person)
            default:
                print("more than one match")
                let person: [INPerson] = matchedContacts.map { contact in
                    return contact.inPerson()
                }
                result = INPersonResolutionResult.disambiguation(with: person)
            }
        }
        completion(result!)
    } else {
        completion(INPersonResolutionResult.needsValue())
    }
}
In this method, we are getting the payee's information and checking to see if the person matches one of our contacts. We are looping through the contacts and looking for a match. When completed, we return the result to Siri. If the user is not found, then Siri will tell you that the person is not found. If Siri finds the person, then `PaymentIntent` continues. Lastly, inside of the `IntentViewController`, update the `desiredSize` variable to the following:
var desiredSize: CGSize {
    // Keep the width the host allows, but cap the height at 150 points.
    return CGSize(width: self.extensionContext!.hostedViewMaximumAllowedSize.width, height: 150)
}
Here, we are setting the size of the UI to a `height` of `150`. Let's look at how we can test this.
# Testing Siri
We can test Siri on a device or in the simulator. If you want to test on a device, just change the target to the `MakePayment` target and plug in your iOS 12 device. If you want to test this in the simulator, you have two options. First, you can run the app and Siri in the simulator. At this point, you can hold down the power button and say `Send $100 to Jason Clayton for dinner last night using LetsEat` (or you use the name of whomever you added to the contacts we created earlier). Option two is that you can enter text that you want to display each time.
To set up this text, every time you run the app, select the `MakePayment` scheme:
1. Hit the Scheme dropdown again and select Edit Scheme...:
2. Then, under Siri Intent Query, put in the desired text, such as `Send $100 to Jason Clayton for dinner last night using LetsEat`, and then hit Close:
Remember that we have the `MakePayment` scheme. Run the `MakePayment` scheme. The first thing that will happen is that Siri will ask you for permission:
3. When you accept, Siri will show you your request and ask if you have received it:
When you accept, you will see that your money's been sent. In our example, we are not sending money, so this step will always go through:
Note that the reason Siri is asking for permission is that we are running Siri first, instead of the app. If we ran the app, we would get the following:
We are now done. We did not do anything with our UI, but you can add anything you want to your UI, such as a logo, a view or display to show to the payee, or whatever you decide. Have fun with it and make it your own.
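If you do decide to show the payee in the UI extension, one possible (and purely illustrative) starting point is one of the `INUIHostedViewControlling` configuration callbacks, such as `configure(with:context:completion:)` in `IntentViewController`; the `payeeLabel` outlet here is hypothetical:

// Sketch for IntentViewController.swift in the MakePaymentUI extension.
func configure(with interaction: INInteraction, context: INUIHostedViewContext, completion: @escaping (CGSize) -> Void) {
    if let intent = interaction.intent as? INSendPaymentIntent,
       let payee = intent.payee,
       let amount = intent.currencyAmount?.amount {
        // payeeLabel is a hypothetical UILabel you would add to MainInterface.storyboard.
        payeeLabel.text = "Sending \(amount) to \(payee.displayName)"
    }
    completion(self.desiredSize)
}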
# Summary
In this chapter, we looked at how to integrate Siri into our app. Even though Siri is limited to specific intents, we can still find unique ways to use it, such as using it for messaging, notes, and lists. The overall setup for each intent is the same—the only difference is what you do once the intent hits your app. In the next chapter, we will look at how to distribute our app to others for testing, as well as how to submit our app to the App Store.
# Beta and Store Submission
Over the course of this book, we've come a long way, from learning about Xcode to how to build an entire app. This process would not be complete, however, without actually learning how to submit the app to the App Store. This process may seem like a lot when doing it for the first time, but it becomes more natural and even second nature after a while.
When I submitted my first app, I was extremely nervous. I remember the relief I felt after submitting the app, but I was soon repeatedly checking the site and my inbox for that approval email. I'd heard many stories of people who spent a lot of time working on an app only to have it rejected; these fears are understandable, but know that Apple wants you to succeed. Even if your app gets rejected (and my first one did), it's not necessarily a bad thing.
My first app was a sports app, and it was rejected for two reasons. First, in Apple's eyes, the logo for my app was too similar to the NFL logo. To address this, I just made a generic logo with the initials of the app. Second, the quality of my images was considered not up to standard; therefore, I obtained better images. I then resubmitted my app, and within a couple of days, my app was approved. It is almost a certainty that you will encounter rejections, even if you have been doing this for a while. Take comfort in the fact that you can address any issues with your app and resubmit it for approval from the App Store.
In this chapter, we will cover everything you need to know about getting your app into the App Store. In Xcode 10, a lot of things are done for you behind the scenes; however, the goal of this chapter is to show you how you can set up things on your own. You will need a developer account to follow along with these steps. Go to <https://developer.apple.com/programs/> if you would like to purchase a developer account.
We will cover the following topics in this chapter:
* Creating a bundle identifier
* Generating a certificate signing request
* Creating production and development certificates
* Creating production and development provisioning profiles
* Creating an App Store listing
* Making the release build and submitting to the App Store
* Conducting internal and external testing
This chapter is set up for you to use as you need it. It is not meant for you to follow in order, as with the other chapters in this book. For example, you may need to create a bundle identifier and then need to know how to add external testers. Use this chapter as a resource for when you need to do one of these tasks.
# Creating a bundle identifier
When we created our project, we talked about the bundle identifier (also known as your App ID). This bundle identifier is used to identify your app, and therefore must be unique. Let's proceed with the following steps:
1. Log in to your Apple developer account, and you will see the following screen:
2. Click Certificates, IDs & Profiles.
3. Then, under IDs, click App IDs:
4. Click on the + button at the top right of the screen:
5. The Registering an App ID screen will appear, as follows:
6. In the top part of the Registering an App ID screen, as seen in the preceding screenshot, add the following:
* Name: Under App ID Description, set this field to `LetsEat`.
* Explicit App ID: This field under App ID Suffix should be selected.
* Bundle ID: This field under App ID Suffix should be filled with your details.
Make sure that the Bundle ID follows the standard naming convention: `com.yourcompanyname.letseat`. Your Bundle ID should be the same ID that you set up when we created the project. For example, mine is `cocoa.academy.letseat`.
7. Next, in the bottom part of the Registering an App ID screen, shown as follows, select the App Services that the app requires and then click Continue.
Our project does not have any App Services, but this is where you would set them if a future app required them:
If you later decide to add App Services, you can do so inside of Xcode. You would select the project under Targets, then select the Capabilities tab and modify it as necessary:
After verifying your App ID information, click Register:
Your App ID has now been created. Now, let's look at what certificates are and how to use them.
# Creating a certificate signing request
Whenever you work on a project, you will need to create a **certificate signing request** ( **CSR** ). You generate this certificate on your computer. Let's create one certificate for production (for the App Store) and one certificate for development (for building locally):
1. Open Keychain Access (which you can find by clicking on the search icon in the upper-right corner of your menu bar and typing `Keychain Access`):
2. In the menu bar, while in Keychain Access, select Keychain Access | Certificate Assistant | Request a Certificate From a Certificate Authority...:
3. Enter your email address for User Email Address and the app name for Common Name, and then select Saved to disk under Request is:
4. Then, click Continue.
5. In the screen that appears, enter the certificate name, select a save location, and click Save, as follows:
6. Click Done, export the certificate, and save it to your computer:
# Creating production and development certificates
We need to create production and development certificates. Production certificates are used for the App Store, while development certificates are used to verify that you are a team member who allows apps signed by you to launch on a device. Remember that Xcode 9 can now handle this for you, but knowing the process is still useful. Let's start by creating a production certificate first:
1. Log in to the Apple developer account, and you will see the following screen:
2. Click Certificates, IDs & Profiles, and then under Certificates, select All.
3. Click on the + button at the top right of the screen:
4. On the screen that appears, select App Store and Ad Hoc under Production and then click Continue:
5. The following screen then lists the steps required for creating a CSR file (which we have already created). Click Continue:
6. Upload the CSR created earlier by selecting Choose File under Upload CSR file, selecting the certificate file you saved, and clicking Open. Then, click Continue:
7. Next, download the certificate:
8. Then, install the downloaded certificate by double-clicking it.
For the development certificate, you will need to repeat these steps, except in the step where you choose the type of certificate you need, instead of selecting App Store and Ad Hoc under Production, you will select iOS App Development under Development. All the other steps will be the same.
# Creating a production provisioning profile
Now, let's create a production provisioning profile, which is used for distributing your application. Xcode 9 creates these for you, but again it is still good to know how to do it:
1. Log in to the Apple developer account, and you will see the following screen:
2. Click Certificates, IDs & Profiles, and under Provisioning Profiles, select All.
3. Click on the + button at the top right of the screen:
4. Select App Store under Distribution and then click Continue:
5. Select the Bundle ID created earlier and then click Continue:
6. Next, select the certificate created earlier and then click Continue:
7. Next, enter the Profile Name, Lets Eat Prod, and click Continue:
8. Download the profile:
9. Install the downloaded profile by double-clicking it.
# Creating a development provisioning profile
Now, let's create a development provisioning profile, which is used for building apps on your device using Xcode:
1. Log in to the Apple developer account.
2. Click Certificates, Identifiers & Profiles.
3. Next, under Provisioning Profiles, click All.
4. Then, click on the + button at the top right of the screen.
5. Next, select iOS App Development under Development and then click Continue.
6. Select the Bundle ID created earlier and click Continue.
7. Next, select the certificate created earlier and then click Continue.
8. Enter the Profile Name, `Lets Eat Dev`, and click Continue:
9. Select the devices you wish to use or choose Select All:
10. Download the profile:
11. Install the downloaded profile by double-clicking it.
# Creating an App Store listing
Next, we are going to create the App Store listing:
1. Log in to your iTunes account (<https://appstoreconnect.apple.com>) and select My Apps:
2. Click on the + button at the top left of the screen:
3. Select New App:
4. Enter your app details and then hit Create:
The app will now be listed in your iTunes account.
# Creating an archive build
When you submit your app to the App Store, you need to create an archive. This archive will also be used for internal and external testing, which we will address shortly. When your archive is complete, you will upload it to the App Store. Let's create an archive now:
1. Open Xcode, select the project, and enter the following information:
* Under Identity, update the Version and Build numbers to `1.1` and `2`, respectively.
* Under Signing, ensure Automatically manage signing is checked.
* Under Signing, select Team.
* For minor builds, you want to increment your Version number by `0.1` and your Build number by `1`. In some instances, developers make their Version numbers three digits (for example, `1.1.2`). This is all based on your business and how you want to handle Version numbers. If you are performing a major update, then you typically increment your Version number by `1`:
2. Select Generic iOS Device as the build destination:
3. Update your `Info.plist` by adding `ITSAppUsesNonExemptEncryption`, making its type `Boolean`, and setting its value to `NO`. The value should be `NO` unless you are using some special encryption. Since our app does not have special encryption, we will set our value to `NO`.
4. Select Product | Archive:
5. On the Archives tab on the screen that appears, select your Development Team and then hit Choose:
6. Your IPA file will now be created, so now click Upload:
7. You will see uploading begin, as shown in the following screenshot:
8. Then, when your upload is successful, you will see the following:
9. You will receive an email when your app is either approved or rejected. If rejected, once you fix the issues, you can resubmit it in the same manner by updating the archive and following the steps laid out previously.
# Internal and external testing
Internal and external testing use what is known as **TestFlight**. The _TestFlight_ app can be downloaded from the App Store. Let's look at how to create each type of testing.
# Internal testing
Internal testing does not go through a review process. You can only send builds to up to 25 testers for internal testing. Let's begin:
1. Log in to your iTunes Account and select My Apps.
2. Select your _Let's Eat_ app and then _TestFlight_.
3. On the left side of the page, select Internal Testing, and then on the right side of the page, click Select Version to Test:
4. Then, select the version you want to test and click OK:
5. You will now see the following screen:
6. Finally, click the + button next to Internal Testers and add your Internal Testers:
# External testing
External testing may or may not go through a review process, but with external testing, you can have up to 2,000 testers. For external testing, follow these steps:
1. Log in to your iTunes account and select My Apps.
2. Select your _Let's Eat_ app and then _TestFlight_.
3. On the left side of the page, select External Testing.
4. Next, on the right side of the page, click Add Build to Test, select your build, and hit OK:
5. Finally, click the + button next to External Testers and add your external testers:
6. When you are done adding testers, click the Start Testing button and you will see the following screen. You will need to provide the information requested:
7. Next, submit your app to Apple for review; you will receive an email when it is either approved or rejected. If rejected, once you have fixed the issues, you can resubmit it in the same manner by updating the archive and following the steps laid out previously.
# Summary
You have now completed the entire process of building an app and submitting it to the App Store. If you have gone from beginning to end, congratulate yourself, because it is genuinely a big feat.
At this point, all you can do is wait for Apple to review your project. The next week or so will be the most nerve-wracking (at least it was for me). Don't worry if your app gets rejected, because it happens to the most experienced of developers and is often fixable. Apps can be rejected for minor reasons that are easy to fix; however, you do not want to work for months on a project and miss something big that Apple will never approve. So, do your research regarding what is and is not acceptable to Apple. When you submit your apps to the App Store, please reach out to me on Twitter (`@thedevme`) to let me know—I would love to see what you have built.
# Other Books You May Enjoy
If you enjoyed this book, you may be interested in these other books by Packt:
**Mastering iOS 12 Programming - Third Edition**
Donny Wals
ISBN: 9781789133202
* Build a professional iOS application using Xcode 10 and Swift 4.2
* Use AutoLayout to create complex layouts that look great on every device
* Delve into advanced animations with UIViewPropertyAnimator and UIKit Dynamics
* Enhance your app by using instruments and building your own profiling tools
* Integrate iMessage, Siri, and more in your app through app extensions
* Train and use machine learning models with Core ML 2 and Create ML
* Create engaging augmented reality experiences with ARKit 2
**Swift Game Development - Third Edition**
Siddharth Shekar
ISBN: 9781788471152
* Deliver powerful graphics, physics, and sound in your game by using SpriteKit and SceneKit
* Set up a scene using the new capabilities of the scene editor and custom classes
* Maximize gameplay with little-known tips and strategies for fun, repeatable action
* Make use of animations, graphics, and particles to polish your game
* Understand the current mobile monetization landscape
* Integrate your game with Game Center
* Develop 2D and 3D Augmented Reality games using Apple's new ARKit framework
* Publish your game to the App Store
# Leave a review - let other readers know what you think
Please share your thoughts on this book with others by leaving a review on the site that you bought it from. If you purchased the book from Amazon, please leave us an honest review on this book's Amazon page. This is vital so that other potential readers can see and use your unbiased opinion to make purchasing decisions, we can understand what our customers think about our products, and our authors can see your feedback on the title that they have worked with Packt to create. It will only take a few minutes of your time, but is valuable to other potential customers, our authors, and Packt. Thank you!
| {
"redpajama_set_name": "RedPajamaBook"
} | 6,425 |
{"url":"https:\/\/www.oreilly.com\/library\/view\/shape-memory-alloy\/9780080999203\/XHTML\/B9780080999203000061\/sec-s0020.xhtml","text":"With Safari, you learn the way you learn best. Get unlimited access to videos, live online training, learning paths, books, tutorials, and more.\n\nNo credit card required\n\n6.3. Three-dimensional Phenomenological Constitutive Model for SMA\n\n6.3.1. Finite Strain Constitutive Model\n\nIn this section, the phenomenological thermomechanical finite strain SMA constitutive model proposed by Evangelista et\u00a0al.52 is presented. The model is deduced introducing a free energy function depending on internal variables able to describe the state of the phase transformation and to represent the history dependence of SMA behavior.\nThe model is based on the assumption of the local multiplicative split of the deformation gradient into an elastic part and a phase transformation part, denoted as Fe and Ft, respectively.93,94 Accordingly:\n\n$F(X,t)=Fe(X,t)Ft(X,t)$\n\n(6.1)\n\nat a typical material point position X\u2208\u03a9.\nIn particular, the proposed model assumes ...\n\nWith Safari, you learn the way you learn best. Get unlimited access to videos, live online training, learning paths, books, interactive tutorials, and more.\n\nNo credit card required","date":"2019-08-20 19:28:06","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 1, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.28326910734176636, \"perplexity\": 3396.941262545703}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027315558.25\/warc\/CC-MAIN-20190820180442-20190820202442-00174.warc.gz\"}"} | null | null |
Assassin's Creed: Ascendance is a short animated film created by UbiWorkshop, a division of Ubisoft, in November 2010; it was released on the PlayStation®Store, Xbox LIVE® and iTunes on 16 November 2010. The script was written by Ethan Perry, and Laurent Bernier directed.
Description
The film serves as a "bridge" between the games Assassin's Creed II and Assassin's Creed: Brotherhood. It tells the story of Ezio Auditore gathering information about how Cesare Borgia came to power in Italy. The film's scenes therefore consist mainly of dialogue between Ezio and his informant.
The animators drew the film in the style of oil paintings traditional of Renaissance art. In addition, to please fans of the series, the film's creators used sound effects and background images taken directly from the games.
Trivia
Initially, this project was called "Secret Project No. 3".
The very first scene of the film shows the assassination of the Baron de Valois.
Assassin's Creed
Animated short films
Films based on video games
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,760 |
Q: Can I include random intercepts with fixed thresholds in cumulative ordered logistic/probit models? Suppose I fit a cumulative ordered logistic regression to longitudinal data where $y_{ikt}$ is the ordered categorical response $y$ of person $i$ at time $t$ to survey item $k$. Thus we observe responses to the same K survey questions by N people on T occasions. If we let Z be the number of response categories for $y$ (so there are Z - 1 logits), then the cumulative ordered model requires that we estimate at least Z - 1 threshold parameters.
Now suppose I let the ordered latent threshold parameters of the ordered logistic model vary across K as fixed effects so that I have $K*(Z-1)$ thresholds. These thresholds are basically logit-by-survey-item fixed intercepts. Crucially, they do not vary across time - only across items and logits/categories of the response scale.
In this situation, is the model identified if I include random effects for each item $k$? I'm not sure if "item-logit thresholds" are nested within "item thresholds" or whether the fact that I observe each item MULTIPLE times lets me do this.
Note: I cannot model the thresholds themselves as random effects due to Stan not having a reasonable way to ensure ordering of the thresholds in such cases without unconventional priors.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 794 |
Q: Proper way to spec JSON API I need to create a CRUD JSON API for an entity that can be of two types.
If the entity is of 'type 1' then parameter_1 is always null and parameter_2 is an object of additional sub-parameters.
If the entity is of 'type 2' then parameter_1 is integer and parameter_2 is non-existent. My question is whether in this case it's a good practice to set parameter_2 to null, or set all sub-parameters to null instead? I'll give you an example below to be more specific.
'Type 1' object:
{
  name: 'object 1',
  parameter_1: null,
  parameter_2: {
    subparameter_1: 'something',
    subparameter_2: 'something else'
  }
}
Two ways of describing object of 'type 2':
{
  name: 'object 2',
  parameter_1: 123,
  parameter_2: null
}
or
{
  name: 'object 2',
  parameter_1: 123,
  parameter_2: {
    subparameter_1: null,
    subparameter_2: null
  }
}
Which one is preferable according to REST best practices? Thanks.
A: It looks like you are forcing two different 'schemas' into a single one. If you have no other constraints, consider splitting these into two different resources.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,521 |
{"url":"https:\/\/nylogic.github.io\/set-theory-seminar\/2020\/07\/31\/alternative-cichon-diagrams-and-forcing-axioms-compatible-with-ch.html","text":"July 31\nCorey Switzer, CUNY\nDissertation defense: Alternative Cicho\u0144 diagrams and forcing axioms compatible with CH\nThis dissertation surveys several topics in the general areas of iterated forcing, infinite combinatorics and set theory of the reals. There are two parts. In the first half I consider alternative versions of the Cicho\u0144 diagram. First I show that for a wide variety of reduction concepts there is a Cicho\u0144 diagram for effective cardinal characteristics relativized to that reduction. As an application I investigate in detail the Cicho\u0144 diagram for degrees of constructibility relative to a fixed inner model of ZFC. Then I study generalizations of cardinal characteristics to the space of functions from $\\omega^\\omega$ to $\\omega^\\omega$. I prove that these cardinals can be organized into two diagrams analogous to the standard Cicho\u0144 diagram show several independence results and investigate their relation to cardinal invariants on omega. In the second half of the thesis I look at forcing axioms compatible with CH. First I consider Jensen's subcomplete and subproper forcing. I generalize these notions to larger classes which are (apparently) much more nicely behaved structurally. I prove iteration and preservation theorems for both classes and use these to produce many new models of the subcomplete forcing axiom. Finally I deal with dee-complete forcing and its associated axiom DCFA. Extending a well-known result of Shelah, I show that if a tree of height $\\omega_1$ with no branch can be embedded into an $\\omega_1$ tree, possibly with uncountable branches, then it can be specialized without adding reals. As a consequence I show that DCFA implies there are no Kurepa trees, even if CH fails.","date":"2022-05-27 07:06:45","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8966019153594971, \"perplexity\": 472.894543940783}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652662636717.74\/warc\/CC-MAIN-20220527050925-20220527080925-00491.warc.gz\"}"} | null | null |
Albert Henry Spencer (8 March 1886 – 20 February 1971), often referred to as A. H. Spencer, was an Australian bookseller. He was a specialist in antiquarian bookselling and Australiana and established the Hill of Content bookshop in Melbourne, one of that city's "finest bookshops". He has been called "one of the last links with an heroic age of Australian bookselling and collecting".
Early life
Spencer was born in Balmain, New South Wales on 8 March 1886. His parents were Henry Spencer, a labourer (formerly of Denmark), and Alice Jane (née Prynne).
His father died when Bert (as he was always known to his family and friends) was just two years old, leaving the family in straitened circumstances. He attended Waverley Superior Public School where a sympathetic teacher and a class reader were to inspire in him a life-long love of poetry and especially of the Romantic poets. His mother worked hard to keep the family together but at the age of 14, Bert was forced to leave school to work as a boot-clicker (cutting out the uppers of boots).
Angus & Robertson, Sydney
Eight months later, he joined the booksellers and publishers Angus & Robertson as an errand boy. One of his regular tasks at this time was to deliver books by cab to various customers including David Scott Mitchell, whose collection would later become the basis for the Mitchell Library. Between 1900 and 1922 he was trained in the bookselling trade by "that formidable trio of booksellers", George Robertson and his employees Frederick Wymark and Jim Tyrrell. He became an expert in book-related Australiana and was made the head of Angus & Robertson's secondhand book department. During this period he became a "friend and confidant" of book collectors such as the businessman, collector and benefactor (Sir) William Dixson and the lawyer, judge, book collector, and author (Sir) John Ferguson.
Hill of Content, Melbourne
Spencer wanted to open his own bookshop but, not wanting to compete with Angus & Robertson, his employer of more than twenty years, he diplomatically decided to set up his shop in Victoria. In 1922, with the support of Frank Hobill Cole and George Robertson and a loan of £1,000 from Henry White (an uncle of Patrick White), he established a bookshop in Melbourne called the Hill of Content. Located at 86 Bourke Street (where it has remained ever since, except for a few months at Eastern Market), this bookshop emerged as "a major outlet for antiquarian, second-hand and fine new books".
Spencer's clientele in the early years included book collectors, the literary elite and notable citizens of both Melbourne and Sydney, including Sir William Dixson, Daryl and Lionel Lindsay, Dame Nellie Melba, Tom Roberts and Arthur Streeton, as well as "various Governors, and members of medical and legal professions". In that period the Australian Parliament was located at the Victorian Parliament Building in nearby Spring Street, while the State Parliament sat at the Royal Exhibition Buildings close by, so that "many prominent politicians frequented the shop as well". Another customer was John Masefield, the British Poet Laureate, during his visit to Melbourne for the Victorian Centenary Celebrations in 1934.
In the 1920s, through his network of connections and due to his reputation as a bookseller of note, he was appointed to handle the dispersal of the important private libraries of Frank Hobill Cole, Robert Carl Sticht and Henry White. In 1923 Edgar Charles Harris joined the staff of Hill of Content and was of great assistance in handling "the majesty of the Sticht Collection". The sale of the Sticht collection helped ensure the survival of Hill of Content and the sale of the other two collections was "icing on the cake".
Spencer formed a private company to operate Hill of Content, "issuing the preference shares to Collins Street doctor-clients".
Spencer also published a number of books, mainly of verse, under the imprint of "A. H. Spencer, Hill of Content". During the Second World War he placed posters in his bookshop windows praising Britain's war effort. His advertisements for his books were "personal and colourful".
Later years
In 1952 Spencer sold Hill of Content to Angus & Robertson and began trading from his home at 41 Tennyson Street, Sandringham as the Shining Sea Bookroom. In the same year he superintended the transfer of Sir William Dixson's collection to the Public Library of New South Wales. In 1957 he rejoined his former employers Angus & Robertson in their Elizabeth Street, Melbourne bookshop and in 1959 he published his memoirs The Hill of Content.
Personal life
In 1909 Spencer married Eileen Rebecca O'Connor (died 1964), an accomplished pianist. They had two children, Robert Spencer (died 1946) and Joan Gerstad (died 1970).
Spencer was a Presbyterian, a Freemason and a Rotarian.
He died at Parkville, Victoria on 20 February 1971.
Further reading
Australian Booksellers Association, The Early Australian Booksellers: The Australian Booksellers Association Memorial Book of Fellowship (Adelaide: Australian Booksellers Association, 1980)
A. H. Spencer, The Hill of Content: Books, Art, Music, People (Sydney: Angus & Robertson, 1959).
References
External links
Albert Henry Spencer papers, 1920-1958, at State Library of New South Wales.
Papers, c.1909-1970 (manuscript) - Albert Henry Spencer 1886-1971, at State Library of Victoria.
20th-century Australian businesspeople
Businesspeople from Melbourne
1886 births
1971 deaths
Australian booksellers
Antiquarian booksellers
Australian publishers (people) | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 29 |
module CustomAttributes
  module VERSION #:nodoc:
    MAJOR = 0
    MINOR = 1
    BUILD = 0
    STRING = [MAJOR, MINOR, BUILD].join('.')
  end
end | {
"redpajama_set_name": "RedPajamaGithub"
} | 1,737 |
The ending was total SMH material, at least I got to watch a fight tho!
The New England Patriots are NFL champions after a surprising four-point win over the Seattle Seahawks that was punctuated by on-field fighting.
The win marks the fourth Super Bowl win for Patriots coach Bill Belichick and quarterback Tom Brady. The pair has appeared in six Super Bowls.
The game was swung on New England's 10-play, 64-yard fourth quarter drive capped by a touchdown pass from Tom Brady to Julian Edelman with 2:02 left on the game clock. That play gave the Patriots their final lead. A tipped pass caught by Seahawks wide receiver Jermaine Kearse that was eerily similar to David Tyree's game-winning catch in Super Bowl XLII put Seattle in striking distance to make a comeback. But rookie cornerback Malcolm Butler sealed the win for the Patriots with an interception of Russell Wilson's pass. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,808 |
Whether ride-hailing services are eventually allowed in B.C. or not is not the question, according to Mo Anwar, of Richmond Taxi.
It's whether the service – which is illegal but operating freely in the Asian-language market in Richmond – should have to play by the same strict rules and regulations as regular, licensed cabs.
On Monday, as the provincial government discussed the future of ride-hailing in B.C. at a cross-party hearing, seven illegal ride-hailing services were already up and running around the province.
Five of them were operating in Richmond alone, according to the BC Taxi Association, which presented at the hearing.
And, according to a former Global National Mandarin reporter, one of them, GoKabu, refused to pick up two passengers because the Global BC reporter was non-Chinese.
Mo Anwar, Richmond Taxi's general manager, said it costs, on average, about $8,000 to $10,000, on top of the price of a vehicle, to get a cab on the road legally.
Anwar doubts any of the drivers currently operating on the black market in Richmond - via Chinese-language apps, such as Udi Kuaiche and Racoon Go – have stumped up such money in order to do business.
And he claims the continued operation of such unregulated ride-hailing services is hitting his company where it hurts.
"Our business is down about 25 to 30 per cent from about a year ago," Anwar told the Richmond News.
"(The ride-hailing services) are all conducted in a different language, so we don't really know for sure what it going on.
Anwar said, for example, if a customer has an issue with one of his drivers, he can look into it and follow up with the customer.
"I'm not sure they can get that with those ride-hailing services," he added.
"At the end of the day, it's safety I'm concerned about. People are using these services, but there doesn't appear to be any regard for public safety.
Anwar said his drivers go through "quite vigorous training" for safety and quality control reasons.
"Oh, and we do have an app, as well," he joked.
The News attempted to contact one of the ride-hailing services operating in Richmond to see if they would pick up a non-Chinese passenger.
A message was left, in English, with Udi Kuaiche, but no one has yet replied.
Last October, the B.C.-based Passenger Transportation Branch (PTB) issued a warning to passengers to beware of using ride-hailing services due to concerns over passenger and driver safety.
Several companies, according to the PTB, are operating via apps under the names: Longmao; Udi Kuaiche; U Drop; RaccoonGo; GoKabu; Dingdang Carpool and AO Rideshare.
"These companies have been recruiting drivers to operate their personal vehicles as commercial passenger directed vehicles in the Lower Mainland," said the PTB's advisory.
"It is important that drivers providing commercial transportation services through these social media apps understand they are assuming all of the risk related to providing the service.
"It is the driver, not the app companies, which are operating illegally and are subject to penalties and fines of $1,150.
"These drivers are also subject to possible further sanctions for not disclosing the commercial use of their vehicles to lease and insurance providers.
The PTB said in October that it is currently investigating and issuing penalties to these operators.
If you have questions or concerns or want to make a report about these services, contact the PTB at 604-527-2198 or e-mail at PassengerTransportationBR@gov.bc.ca.
The News revealed last July, through an undercover reporter, the extent of the unregulated, ride-hailing services being run in Richmond.
We told how Chinese-language ride-sharing company Udi Kuaiche, launched last March, had been providing airport services and car rides in Metro Vancouver.
"Our company aims to serve everyone from the Chinese community and become the next leader in ride sharing," wrote Udi Kuaiche on its website.
The spokesperson for Udi Kuaiche said he was not worried about potential legal problems.
However, the City of Richmond said the lack of provincial regulations doesn't give the car-sharing company the green light.
"We believe it is illegal to run before the province's approval on ride-sharing services and the city will not issue them a business licence to operate in Richmond," said city spokesman Ted Townsend last July. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,554 |
Multitalented 'Drag Race: All Stars' Winner Brings New Show to Broward
Written by JW Arnold
"RuPaul's Drag Race: All Stars" winner Trixie Mattel brings her new show to the Broward Center on Friday, April 20. Credit: TrixieMattel.com.
When 28-year-old Brian Michael Firkus dons pumps and a wig, he transforms into the outrageous, life-sized Barbie doll Trixie Mattel, the winner of season 3 of "RuPaul's Drag Race: All Stars." But, behind the make-up, the thoughtful Wisconsin native is a talented performer, musician and songwriter who can imagine life after drag. "Trixie" is currently on a victory tour of the country with her new show "Trixie Mattel: Now with Moving Parts" and she/he took a few minutes recently to discuss "their" careers and future ambitions:
What can audiences expect in your new show?
I'm going to have to be naked, it's so hot in Florida! It is a stand-up show that features my musical gifts, video, lip sync…all spicy and exciting, it has some surprisingly moving sentiments behind the jokes, some serious moments, too. I'm going to make people laugh and cry.
Beyond your iconic drag skills, you're also a talented musician. When did you discover your love for music?
I was living as a mermaid and this sea witch wanted to steal my voice. Seriously, I grew up in the deep north woods of Wisconsin, on my grandfather's knee. He was a country musician. I never liked country, it was "old peoples' music," but as I got older—maybe 22 or 23 after my grandparents passed away—I realized the music has a lot of complexity and weight. By the late '90s, I was playing guitar, acoustic-driven pop music. I never really got into folk and country, even though I grew up listening to it, but now I've kind of come full circle. I want to continue to break into the world of folk music and be seen as a serious musician and not as reality show trash.
You came into this season of "All Stars" as an underdog after being eliminated not once, but twice in season 7. Why did you decide to try again?
Obviously, I was traumatized after the first time, going home early and feeling embarrassed. In episode 4 (season 7), when I thought I was going to be eliminated, I felt so emasculated. It's funny how you can feel that way when you're in drag.
But you came back...
I knew the audience would have a lot invested in me. I'm living proof that you don't have to win to go on to be a success. I really felt the pressure and pressure doesn't help…I've always said that it's important to impress the judges, but it's more important to impress the other queens in the room. If you do a great job and the other queens think you did a great job, you won't go home because they remember that.
How has Trixie evolved over the years?
You can tell, since I started doing music and stand-up, that I've went from Malibu Barbie to Coachella Barbie. I do see myself evolving backwards and getting out of drag at some point. I don't see myself being able to sell the Trixie thing forever. I think about doing other characters, maybe a sort of male character. I like doing comedy, I like doing music and I like dressing up and I'll always be doing those things.
"Trixie Mattel: Now with Moving Parts" Tour comes to the Broward Center in Fort Lauderdale for one show only, Friday, April 20 at 8 p.m. Tickets start at $43.55 and VIP packages including a meet-and-greet and photographs are also available. Tickets at BrowardCenter.org.
drag queen RuPaul's Drag Race Trixie Mattel
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,935 |
{"url":"https:\/\/www.gradesaver.com\/textbooks\/math\/algebra\/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition\/chapter-8-systems-of-linear-equations-and-problem-solving-8-1-systems-of-equations-in-two-variables-8-1-exercise-set-page-507\/19","text":"## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)\n\n$(2,-1)$\nFor each equation find two points, plot them and join with line. $3x+y=5 \\Rightarrow y=-3x+5 \\rightarrow\\left[\\begin{array}{l} (0,5)\\\\ (1,2) \\end{array}\\right]$ $x-2y=4 \\Rightarrow \\left[\\begin{array}{llll} x=0 & \\Rightarrow y=-2 & \\rightarrow & (0,-2)\\\\ y=0 & \\Rightarrow x=4 & \\rightarrow & (4,0) \\end{array}\\right]$ Graphically solving: $x=2,\\ y=-1$. Checking:$\\qquad \\left\\{\\begin{array}{lll} 3(2)+(-1)=1 & \\Rightarrow & 6-1=5\\\\ & & \\\\ 2-2(-1)=4 & \\Rightarrow & 2+2=4 \\end{array}\\right.$ both equations are satisfied, $(2,-1)$ is the solution.","date":"2019-12-12 18:38:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.36977481842041016, \"perplexity\": 3784.0657963894782}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-51\/segments\/1575540545146.75\/warc\/CC-MAIN-20191212181310-20191212205310-00111.warc.gz\"}"} | null | null |