Judy Penz Sheluk: No Fool's Journey

Judy Penz Sheluk and I first met at the Ontario Library Association's annual conference, where we were both part of a presentation by author members of Crime Writers of Canada. She introduced herself to me and invited me on her blog, so of course I fell for her immediately. But since I'm not half as organized, it's taken me this long to land her on my own blog. Read all the way through for a surprise at the end!

Q: One of your heroines, Callie Barnstable, spends a lot of time organizing her thoughts and writing down details of her meetings. Her father's mantra is "A dull pencil is sharper than the sharpest mind." Sherlock Holmes's instant evaluations seem brilliant, but I suspect true investigations involve a lot of legwork and note taking. What do you think?

A: I'm personally hopeless without writing things down. I have a notebook next to my bedside table, along with an LED pen (so I can write in the dark when flashes of brilliance come to me in the middle of the night), and I have a separate notebook for every current work-in-progress, where I jot down things that occur to me as I'm writing in Word. That might be possible character names or timelines or ages of characters (including year of birth, how old they were in certain years, etc.). I even have a "promo notes" notebook. I'm not Callie, but a lot of her quirks are my quirks.

Q: Tattoos! The Medical Post ran an article on doctors with tattoos and patients' reactions. If it's not too much of a spoiler or too much of a personal question: Callie visits a tattoo parlour in A Fool's Journey. Do you have any tattoos?

A: I don't have any tattoos and no plans to get one, because, like Callie, there is nothing in this world that I can imagine wanting permanently inked on my body. I think back to my late teens, when, after breaking up with my boyfriend, I became obsessed with butterflies ("Butterflies are free"). I had butterfly earrings, necklaces…you get the idea. Well, fast forward a few (and I won't say how many) decades, and I have absolutely no affinity for butterflies. But had tattoos been in vogue at the time, I'm sure I'd have at least one or more butterflies somewhere on my body.

Q: I was excited that the book opens with Callie inheriting $365,000 from her grandmother, conditional on her investigating the disappearance of 20-year-old Brandon Colbeck. Now I have to go back and start with book #1, Skeletons in the Attic, where Callie inherits a house from her father, conditional on her investigating her mother's murder 30 years ago. It seems like Callie's family keeps dying and giving her big-ticket items with strings attached. As an author, what attracts you to the idea of a mysterious inheritance?

A: Ha! Yes, Callie's been lucky with her inheritances, hasn't she? In the case of Skeletons, the idea came to me while my husband and I were at my lawyer's office to redo our wills. Our lawyer was delayed in court, and while Mike read back issues of Bicycling magazine, I started jotting down notes (of course I have a notebook in my purse!): "What if I was here to inherit vs. write a will? And what if there were strings attached? And what if…" By the time our lawyer arrived, I'd written chapter one. In fact, a large part of the opening scenes is culled directly from that experience. With A Fool's Journey, I wanted to show Callie coming full circle: she's no longer the Toronto city kid/fish out of suburban water that she was in book 1.
Another inheritance, and how she handles the case, demonstrates how much she's grown.

Q: The case in Past and Present, book #2, involves a grandmother who met a "bad end" in 1956. Do you also enjoy researching mysteries set in the past, since all three books' cases take place 20-60 years ago?

A: I was really struggling for an idea for book 2 in the series. At the time, my mom was very ill (COPD and related health issues). Going through her closet after she passed away, I found a small 1950s train case. Inside were her immigration papers from England to Canada on the TSS Canberra in 1952, her German passport (she moved to England after the war), her mother's (my grandmother's) and my father's death certificates, as well as some photographs and postcards. I'd never seen any of these things, and she never spoke of her life "Before Canada" and marrying my father. I started by researching the Canberra through Pier 21, the Canadian Immigration Museum, and also through a friend of mine who collects ocean liner memorabilia. Before long, I was viewing things as if I were Callie, and honestly, that story just seemed to write itself after that. It was as if my mom were with me. The book was published Sept. 21, 2018, exactly two years after her death, and it's dedicated to her memory.

Q: What made you decide to set your books in the fictional town of Marketville instead of the town of Newmarket?

A: Some of the landmarks are similar to Newmarket, but I've taken a lot of liberties with the location. It just seemed better to give it a fictional name. I did the same with my Glass Dolphin series, where Lount's Landing is loosely based on Holland Landing, where I lived for 25 years.

Q: I liked the hint of romance in A Fool's Journey. Do you like adding a bit of personal relationships to your fiction?

A: Gosh, no. I'm the least romantic person on the planet (just ask my husband), and I tend to skip over romantic scenes in books I'm reading. As a result, I really struggle with adding romantic elements to my books. But in real life, people have relationships, and so my characters do, too. I will say, however, that I love the relationship between Arabella Carpenter and her ex-husband, Levon Larroquette (Glass Dolphin series), because they're so clearly meant for each other and refuse to admit it.

Q: Very sorry to hear that your traditional publisher, Barking Rain Press (BRP), closed on July 7th. When you received the news, you were on vacation, and A Fool's Journey was slated to release August 21st. I understand that you poured yourself some very expensive Chardonnay. And then what did you do?

A: To be honest, BRP's closing wasn't a huge surprise. The publisher had gone through a plethora of personal problems over the past 18 months, and it finally wore her down. I wrote a blog post about it, "When Things Go South When You're North: The End of Barking Rain Press," for anyone interested in learning more, including details of the re-release of my two BRP titles (Skeletons in the Attic and A Hole in One): http://www.judypenzsheluk.com/2019/08/08/when-things-go-south-when-youre-north-the-end-of-barking-rain-press/

Q: What do you foresee for the future of writing and publishing, and your own journey in particular?

A: I don't have a crystal ball, but I do think that as more small press publishers open, with little idea of the amount of work or capital outlay involved, and of the razor-thin profit margins, there will continue to be more authors "orphaned" as those same presses shutter their doors after a handful of years.
I also think more authors will self-publish, but unfortunately, many of those will look at it as a "fast track" to getting published and won't invest in professional editing, proofreading, and cover art, all of which, to my mind, are essential, at least if you want to cultivate a following. As for medium-to-large presses, there will continue to be mergers and acquisitions. Publishing is a tough business. As for my future, I need only look at my past. I spent years working in the corporate world in management positions. I walked away in 2003, took a huge pay cut, started freelance writing/editing, loved it, and never looked back. In 2018, I walked away from my last freelance gig to concentrate on writing books full-time. Erica Jong said, "When I sit down at my writing desk, time seems to vanish. I think it's a wonderful way to spend one's life." I couldn't agree more.

And here's your surprise: Judy and her husband on their wedding day! I re-wore my wedding dress on our anniversary last month, and Judy sent me a picture of her wedding too. They look fab!

Judy Penz Sheluk is the bestselling author of the Glass Dolphin Mystery and Marketville Mystery series, and the editor of The Best Laid Plans: 21 Stories of Mystery & Suspense. Her short stories can be found in several collections. Judy is also a member of Sisters in Crime, International Thriller Writers, the Short Mystery Fiction Society, and Crime Writers of Canada, where she serves as Vice Chair on the Board of Directors. Find her at http://www.judypenzsheluk.com and on Amazon.

Comments (last reply October 21, 2019):

Judy Penz Sheluk, September 11, 2019: Thanks a million for this, Melissa!

melissayuaninnes replied, October 21, 2019: Thank you for coming here, Judy!
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,694
\section*{Contents}

1. Introduction
2. Prerequisite: sets of finite perimeter and traces of functions
3. Compactness of $M$-uniform domains
4. Existence of equilibrium liquid crystal droplets in Problems A-C
5. On the uniqueness of Problem C

References

\section{Introduction} \label{Sec1}

In this paper we study the existence of liquid crystal droplets $(\Omega_0, u_0)$, consisting of a domain $\Omega_0\subset\mathbb R^3$ representing the shape of a liquid crystal drop and a unit vector field $u_0 \in H^1(\Omega_0, \mathbb S^2)$ representing the average orientation field of the liquid crystal molecules within the drop $\Omega_0$, that minimize the total energy functional, including both the elastic energy in the bulk and the interfacial energy, defined by
\begin{eqnarray} \label{droplet-energy}
E_f(u,\Omega):=\int_{\Omega} |\nabla u(x)|^2 \,dx + \int_{\partial^* \Omega} f\big(u(x)\cdot \nu_{\Omega}(x)\big) \,d\mathcal{H}^2(x),
\end{eqnarray}
among all pairs $(\Omega,u)$, where $\Omega$ is a domain of finite perimeter with a fixed volume that is compactly contained in the ball $B_{R_0}\subset\mathbb R^3$ with center $0$ and radius $R_0$, for some fixed constant $R_0>0$, and $u \in H^1(\Omega,\mathbb S^2)$, which is defined by
$$H^1(\Omega,\mathbb S^2)\equiv\Big\{v\in H^1(\Omega,\mathbb R^3): \ |v(x)|=1 \ \mbox{a.e.}\ x\in\Omega\Big\}.$$
The functional $E_f(u,\Omega)$ should be understood in the sense that the surface integral is taken over the reduced boundary $\partial^* \Omega$ of $\Omega$, $u\lfloor_{\partial^* \Omega}$ is the trace of $u$ on $\partial^*\Omega$, $\nu_{\Omega}$ is the measure theoretical outer unit normal of $\partial^*\Omega$, and $f$ is usually assumed to have a nonnegative lower bound (with a typical choice of $f(t)=\mu(1+wt^2)$, $t\in [-1,1]$, for some constants $\mu>0$ and $-1<w<1$). We will study the following minimization problem for \eqref{droplet-energy}.

\smallskip
\noindent\textbf{Problem A}. Find a pair $(\Omega, u)$ that minimizes $E_f(u,\Omega)$ over all pairs $(\Omega,u)$, where $\Omega$ is a domain of finite perimeter in a fixed ball $B_{R_0}\subset\mathbb R^3$ with a fixed volume $V_0>0$, and $u \in H^1(\Omega,\mathbb S^2)$, when $f:[-1,1]\to\mathbb R$ is a nonnegative, continuous convex function.

We are also interested in the case when there is a constant contact angle condition between the liquid crystal orientation field $u$ and the reduced boundary $\partial^*\Omega$ of the liquid crystal drop, i.e., $u \cdot \nu_{\Omega} \equiv c$ on $\partial^*\Omega$ for some constant $c \in [-1, 1]$. In this case, the energy functional $E_f(u,\Omega)$ in \eqref{droplet-energy} reduces to
\begin{eqnarray} \label{droplet-energy1}
{E}(u,\Omega):=\int_{\Omega} |\nabla u(x)|^2 \,dx + \mu\mathcal{H}^{2}(\partial^* \Omega)
\end{eqnarray}
for some constant $\mu\ge 0$. Under this constraint, Problem A can be reformulated as follows.

\smallskip
\noindent{\bf{Problem B}}. Find a pair $(\Omega, u)$ that minimizes ${E}(u,\Omega)$ over all pairs $(\Omega,u)$, where $\Omega$ is a domain of finite perimeter in a fixed ball $B_{R_0}\subset\mathbb R^3$ with a fixed volume $V_0>0$, and $u \in H^1(\Omega,\mathbb S^2)$ satisfies $u \cdot \nu_{\Omega} \equiv c$ on $\partial^* \Omega$ for some $c\in [-1,1]$.

We would like to mention that the contact angle condition in Problem B is referred to as
\begin{itemize}
\item [(i)] the planar anchoring condition when the constant $c=0$, and
\item [(ii)] the homeotropic anchoring condition when the constant $c=1$.
\end{itemize}

We would like to point out that recently Geng and Lin, in a very interesting paper \cite{GL}, studied Problem B under the planar anchoring condition (i) in dimension two, and proved the existence of a minimizer $(\Omega, u)$ such that the optimal shape $\partial\Omega$ of the droplet is a chord-arc curve with two cusps, which can be parametrized in $H^{\frac32}$ and whose unit normal vector field $\nu_\Omega$ belongs to VMO. Because the homeotropic anchoring condition is an important physical condition, we are also interested in the following problem.

\smallskip
\noindent{\bf Problem C}. Find a solution to Problem B when the contact angle condition corresponds to $c=1$.

\bigskip
\noindent\textbf{Motivation}. The main difficulty of the minimization Problems A, B, and C lies in showing the sequential lower semicontinuity of $E_f(u,\Omega)$ (or ${E}(u,\Omega)$) when both the domains $\Omega$ and the vector fields $u\in H^1(\Omega,\mathbb S^2)$ vary. It is already a difficult question whether the configuration space is closed under weak convergence of liquid crystal pairs $(\Omega, u)$. In \cite{LP}, under the assumption that all admissible domains $\Omega\subset B_{R_0}$ are {\it convex} domains, Lin and Poon proved that there exists a minimizing pair $(\Omega_0,u_0)$ of Problem A. Moreover, $u_0$ enjoys a partial regularity property similar to that of minimizing harmonic maps by Schoen and Uhlenbeck \cite{SU1, SU2}. It was further proven in \cite{LP} that, up to translations, $(\Omega_0, u_0)=(B_{R}, \frac{x}{|x|})$ is the unique minimizer of Problem C among convex domains, with $|B_R|=V_0$. We would like to point out that the convexity assumption on the admissible domains $\Omega$ plays a crucial role in \cite{LP}, since a minimizing sequence $(\Omega_i, u_i)$ of {\it convex} domains $\Omega_i\subset B_{R_0}$ with $|\Omega_i|=V_0$ has a subsequence $\Omega_{i_k}\rightarrow\Omega$ in $L^1$, for some bounded convex domain $\Omega\subset B_{R_0}$ with $|\Omega|=V_0$, such that $\mathcal{H}^2(\partial \Omega_{i_k})\rightarrow \mathcal{H}^2(\partial \Omega)$ and $\nu_{\Omega_{i_k}}\rightarrow \nu_{\Omega}$ almost everywhere with respect to a spherical coordinate system\footnote{For example, one can parametrize $\partial\Omega_{i_k}$ and $\partial\Omega$ over the unit sphere $\mathbb S^2$.}. Moreover, there exists $u\in H^1(\Omega,\mathbb S^2)$ such that $\nabla u_{i_k}\chi_{\Omega_{i_k}}\rightarrow \nabla u \chi_{\Omega}$ weakly in $L^2(\mathbb R^3)$. The uniqueness of the minimizer of Problem C among convex domains relies on the following important inequalities:
\begin{eqnarray} \label{xin1}
\int_{\Omega} |\nabla u(x)|^2 \,dx \ge \int_{\partial \Omega} H(x) \,d\mathcal{H}^2(x), \ \forall u \in H^1(\Omega, \mathbb S^2) \ {\rm{with}}\ u=\nu_{\Omega} \ {\rm{a.e.\ on}}\ \partial \Omega,
\end{eqnarray}
and
\begin{eqnarray} \label{xin2}
\int_{\partial \Omega} H(x) \,d\mathcal{H}^2(x) \ge \sqrt{4\pi \mathcal{H}^2(\partial \Omega)}\ \mbox{for convex $\Omega$}, \ {\rm{equality\ holds\ iff}}\ \Omega =B_R,
\end{eqnarray}
where $H$ denotes the mean curvature of $\partial \Omega$. In \cite{LP}, \eqref{xin1} is derived for any domain $\Omega$ of class $W^{2,1}$, while \eqref{xin2} is proven by the Brunn-Minkowski inequality for convex domains. In this paper, we would like to relax the convexity assumption from \cite{LP} and investigate Problems A, B, and C over a larger class of domains, possibly containing {\it non-convex} domains with {\it less regular} boundaries.
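For illustration, one can check \eqref{xin1} directly on the ball; this is only a sanity check, carried out with $H$ normalized as the sum of the principal curvatures (conventions for $H$ differ by a factor of $2$ across the literature; \eqref{xin2} is stated in the normalization of \cite{LP}). For $\Omega=B_R$ and the hedgehog field $u(x)=\frac{x}{|x|}$, one has $u=\nu_{B_R}$ on $\partial B_R$ and
$$|\nabla u(x)|^2=\frac{2}{|x|^2}, \qquad \int_{B_R}|\nabla u|^2\,dx=\int_0^R\frac{2}{r^2}\,4\pi r^2\,dr=8\pi R,$$
while
$$\int_{\partial B_R}H\,d\mathcal{H}^2=\frac{2}{R}\cdot 4\pi R^2=8\pi R,$$
so in this normalization \eqref{xin1} holds with equality for the hedgehog field on the ball.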
This larger class of domains contains Sobolev extension domains with some uniform parameters, as well as outer minimal domains. The main theorems of this paper arose from the Ph.D. thesis of the first author \cite{Li}. The interested reader can refer to \cite{Li} for more related results.

\medskip
\noindent\textbf{Outline of this paper}:\\
\indent In section 2, we will review certain classes of domains in $\mathbb{R}^n$: $M$-uniform domains, which are Sobolev extension domains with constants depending on $M$ and $n$, and outer minimal domains, which are a generalization of convex domains. In section 3, we will show in Theorem \ref{maincompact} that, up to a set of measure zero, the $L^1$-limit of $M$-uniform domains is $M$-uniform. A few other results on the relation between $L^1$-convergence and Hausdorff convergence are also derived. In section 4, we will establish the weak lower semicontinuity of the bulk elastic energy of $(\Omega,u)$ for two classes of domains: a) the admissible sets of $M$-uniform domains, and b) the admissible sets of outer minimal $M$-uniform domains. It is more subtle to prove the lower semicontinuity of the surface energy in Problem A. We will only consider outer minimal sets, and our proof is inspired by Reshetnyak's lower semicontinuity theorem (see \cite[Theorem 20.11]{Maggi}) and the perimeter convergence Lemma \ref{heng}. Thus, combining the compactness results for $M$-uniform domains with the lower semicontinuity results, the existence Theorem \ref{bigexistence} for Problems A, B, and C is proved for these classes of admissible sets. In section 5, we will apply results of \cite{DHMT}, \cite{FS}, \cite{HI1} and \cite{HI2} to show that $(B_R, \frac{x}{|x|})$ is the unique minimizer of Problem C over strictly star-shaped mean convex $C^{1,1}$-domains, $C^{1,1}$-outer minimal sets, and $C^{1,1}$-domains of revolution; see Theorem \ref{dafeiji1} and Remark \ref{dafeiji2}.

\section{Prerequisite: sets of finite perimeter and traces of functions}

We first stipulate some notation. Let $V_0>0$ be the fixed volume in Problems A, B, C. Since the admissible domains in these problems have this fixed volume, by the isodiametric inequality (see \cite[Theorem 2.2.1]{EG}) we will use the convention that the domains in any minimizing sequence have diameters larger than a universal constant $c_0=c_0(V_0)>0$. We will denote $B_r(x):=\{y \in \mathbb{R}^n: |y-x|<r\}$ and $B_r:=B_r(0)$. Throughout this paper all sets under consideration are contained in a large ball $B_{R_0}$, where $R_0>0$ is fixed. For any set $A\subset\mathbb R^n$, denote by $A_{\epsilon}$ the interior $\epsilon$-neighborhood $\{x \in A: B_{\epsilon}(x) \subset A\}$, and by $A^{\epsilon}$ the exterior $\epsilon$-neighborhood $\bigcup_{x \in A} B_{\epsilon}(x)$. Denote by ${\rm{int}}(A)$ the topological interior of $A$, by $A^c=\mathbb{R}^n \setminus A$ the complement of $A$, and by ${\rm{diam}}(A)$ the diameter of $A$. For $0\le d\le n$, $\mathcal{H}^d$ denotes the $d$-dimensional Hausdorff measure in $\mathbb R^n$. Let $d^H(\cdot,\cdot)$ denote the Hausdorff distance in $\mathbb R^n$. $P(A;D)$ denotes the distributional perimeter of $A$ in $D\subset\mathbb R^n$. For a set $A$ of finite perimeter, let $\nu_A$ denote the measure theoretical outer unit normal of the reduced boundary $\partial^*A$, and let $\mu_A$ denote the Gauss-Green measure of $A$, that is, $\mu_A=\nu_A \,\mathcal{H}^{n-1}\lfloor_{\partial^*A}$. Denote by $\omega_n$ the volume of the unit ball in $\mathbb{R}^n$ and by $|A|$ the Lebesgue measure of $A$.
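As a simple illustration of the neighborhood notation just introduced (a routine check from the definitions): for any $x_0\in\mathbb R^n$ and $0<\epsilon<r$,
$$\big(B_r(x_0)\big)_{\epsilon}=\big\{y:|y-x_0|\le r-\epsilon\big\}=\overline{B}_{r-\epsilon}(x_0), \qquad \big(B_r(x_0)\big)^{\epsilon}=B_{r+\epsilon}(x_0).$$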
For any open set $\Omega\subset\mathbb R^n$ and $u \in BV(\Omega)$, denote by $Du$ the distributional derivative of $u$, which is a vector-valued Radon measure, and by $\|Du\|(\Omega)$ the total variation of $u$ on $\Omega$. In this paper, $``\lesssim_c"$ denotes an inequality that holds up to a constant multiplier $c>0$. For any measurable set $E$ and $0\le\alpha\le1$, we define
$$E^\alpha=\Big\{x \in \mathbb{R}^n: \lim_{r\rightarrow0} \frac{|E\cap B_r(x)|}{|B_r(x)|}=\alpha\Big\},$$
and refer to $E^1$ and $E^0$ as the measure theoretical interior and exterior of $E$, respectively. Denote by $\partial_*E:=\mathbb{R}^n \setminus(E^0 \cup E^1)$ the measure theoretical boundary of $E$, which is also called the essential boundary. In this paper, we will need the following theorem, due to Federer (see \cite[Chapter 5]{EG}).

\begin{theorem} \label{Federer} For any measurable set $E$, if $\mathcal{H}^{n-1}(\partial_* E)< \infty$, then $E$ is a set of finite perimeter. Furthermore, if $E$ is a set of finite perimeter, then $\mathbb{R}^n=E^0 \cup E^1 \cup \partial_* E$, $\partial^* E \subset E^{(1/2)} \subset \partial_*E$, and $\partial^* E =\partial_* E \, ({\rm{mod}}\ \mathcal{H}^{n-1})$. \end{theorem}

Next, we recall the definition of $M$-uniform domains.

\begin{definition} \label{uniformdomain} For $M\ge1$, a domain $\Omega\subset\mathbb R^n$ is called an $M$-uniform domain if for any two points $x,y \in \Omega$, there is a rectifiable curve $\gamma : [0,1] \rightarrow \Omega$ such that $\gamma(0)=x,\ \gamma(1)=y$, and
\begin{eqnarray} \label{Jones1} &\,& \mathcal{H}^1(\gamma([0,1])) \le M|x-y|, \\ &\,& d(\gamma(t), \partial \Omega) \ge \frac{1}{M} \min\big\{|\gamma(t)-x|,|\gamma(t)-y|\big\}, \, \forall t \in [0,1]. \label{Jones2} \end{eqnarray}
\end{definition}

\begin{remark} {\rm P. Jones \cite{Jo} introduced the notion of $(\epsilon, \delta)$-domain. One can check that any $(\epsilon,\infty)$-domain is an $M$-uniform domain with $M=\frac{2}{\epsilon}$. On the other hand, any $M$-uniform domain is a $(\frac{1}{M^2},\infty)$-domain\footnote{Since \eqref{Jones1} and \eqref{Jones2} imply $$d(\gamma(t),\partial\Omega)\ge \frac{1}{M}\frac{|\gamma(t)-x||\gamma(t)-y|}{\mathcal H^1(\gamma([0,1]))} \ge \frac{1}{M^2}\frac{|\gamma(t)-x||\gamma(t)-y|}{|x-y|}, \ \forall t\in [0,1].$$}. It was also proven in \cite{Jo} that any $(\epsilon,\delta)$-domain is a Sobolev extension domain, and the converse is true when $n=2$. We refer to \cite{GO} and \cite{Jo} for more details on $M$-uniform domains.} \end{remark}

Since we will study minimization problems involving traces of bounded $H^1$ vector fields in this paper, we will need the following Gauss-Green formula.

\begin{theorem} \label{traceextensiondomain} Let $\Omega$ be a bounded uniform domain of finite perimeter in $\mathbb{R}^n$ and $u \in H^1(\Omega)\cap L^{\infty}(\Omega)$. Then for any $\phi \in C_0^1(\mathbb{R}^n,\mathbb{R}^n)$, we have
\begin{eqnarray} \label{fenbujifen} \int_{\Omega} u\, {\rm{div}}\,\phi +\int_{\Omega} \phi \cdot Du =\int_{\partial^* \Omega} (\phi\cdot\nu_{\Omega})u^*\, d\mathcal{H}^{n-1}, \end{eqnarray}
where $\nu_{\Omega}$ is the measure-theoretic unit outer normal to $\partial^* \Omega$, and $u^*$ is given by the formula
\begin{eqnarray} \label{GGlip} \lim_{r \rightarrow 0}\displaystyle \frac{\int_{B_r(x) \cap \Omega}|u-u^*(x)|}{r^n}=0,\ \mbox{$\mathcal{H}^{n-1}$-a.e. $x\in \partial^* \Omega$}.
\end{eqnarray}
\end{theorem}

\begin{proof} According to \cite{Jo}, we may let $\hat{u} \in H^1_0(\mathbb{R}^n)\cap L^{\infty}(\mathbb{R}^n)$ be an extension of $u$ such that $\hat{u}=u$ in $\Omega$ and
\begin{align*} \Vert \hat{u} \Vert _{H^1(\mathbb{R}^n)} \le C(n,\Omega) \Vert u \Vert _{H^1(\Omega)}. \end{align*}
Hence $\hat{u} \in BV(\mathbb{R}^n)$, and thus, according to \cite[Theorem 3.77]{afp}, the interior trace of $\hat{u}$, denoted by $\hat{u}^*$ here, is well-defined $\mathcal{H}^{n-1}$-a.e. on $\partial^*\Omega$ and equals $u^*$, given by \eqref{GGlip}, $\mathcal{H}^{n-1}$-a.e. on $\partial^* \Omega$. Let $\tilde{u}=\hat{u}\chi_\Omega$. Since $\hat{u}$ is bounded, $u^* \in L^1(\partial^* \Omega)$, and thus by \cite[Theorem 3.84]{afp}, $\tilde{u}=\hat{u}\chi_{\Omega} \in BV(\mathbb{R}^n)$, with
\begin{align*} D\tilde{u}=D\hat{u}\lfloor_{\Omega^1}-u^*\nu_\Omega\mathcal{H}^{n-1}\lfloor_{\partial^*\Omega}. \end{align*}
Hence for any $\phi \in C_0^1(\mathbb{R}^n,\mathbb{R}^n)$, we have
\begin{align} \label{ji1} \int_{\mathbb{R}^n}\phi \cdot D\tilde{u}=\int_{\Omega^1} \phi \cdot D\hat{u}-\int_{\partial^* \Omega}(\phi \cdot \nu_{\Omega})u^*\, d\mathcal{H}^{n-1}. \end{align}
Since
\begin{align*} \int_{\mathbb{R}^n}\phi \cdot D\tilde{u}=-\int_{\mathbb{R}^n}\tilde{u}\, {\rm{div}}\,\phi=-\int_{\Omega}u\, {\rm{div}}\,\phi, \end{align*}
from \eqref{ji1} we have
\begin{align} \label{j2} \int_{\Omega}u\, {\rm{div}}\,\phi+\int_{\Omega^1}\phi \cdot D\hat{u}=\int_{\partial^*\Omega}(\phi\cdot \nu_{\Omega})u^*\,d\mathcal{H}^{n-1}. \end{align}
Since $\Omega$ is equivalent to $\Omega^1$ up to a set of Lebesgue measure zero and $\hat{u}\in H^1(\mathbb{R}^n)$, we have
\begin{align} \label{ji3} D\hat{u}\lfloor_{\Omega^1}=D\hat{u}\lfloor_\Omega=Du\lfloor_{\Omega}. \end{align}
Hence \eqref{j2} and \eqref{ji3} imply \eqref{fenbujifen}. \end{proof}

For purposes later in this paper, we also introduce the following definition.

\begin{definition} \label{D-c} For any $c>0$, we denote by $\mathcal{D}_c$ the class of bounded sets in $\mathbb R^n$ such that for any set $E \in \mathcal{D}_c$,
\begin{eqnarray} \label{g1} |B_r(x)\cap E|>cr^n \end{eqnarray}
holds for any $x \in \partial E$ and $0<r<{\rm{diam}}(E)$.
\end{definition}

Recall that two sets $E, F\subset \mathbb R^n$ are said to be $\mathcal{H}^n$-equivalent, denoted by $E\approx F$, if $E\Delta F=(E\setminus F)\cup (F\setminus E)$ has zero Lebesgue measure. Note that by the Lebesgue density theorem, if $E \in \mathcal{D}_c$, then $|\partial E \cap E^c|=0$. Hence $\partial E\subset E$ (mod $\mathcal{H}^{n}$) and $\overline{E}\approx E$. In particular, we have

\begin{remark}\label{closure} {\rm Any $E\in \mathcal{D}_c$ is equivalent to its closure $\overline{E}$.} \end{remark}

We also have

\begin{remark} \label{feihua1}{\rm For $c>0$, if $E\in \mathcal{D}_c$ is a set of finite perimeter, then there is $c'>0$ depending only on $c$ and $n$ such that for any $x \in \overline{E}$ and $0<r<{\rm{diam}}(E)$, $|B_r(x) \cap E| \ge c'r^n$. } \end{remark}

\begin{proof} For $x \in \overline{E}$ and $0<r<{\rm{diam}}(E)$, there are two cases: \\
(a) If $r \ge 2 d(x, \partial E)$, then there is $z \in \partial E$ such that $B_{\frac{r}{2}}(z) \subset B_{r}(x)$. Hence $$|B_{r}(x) \cap E| \ge |B_{\frac{r}2}(z) \cap E| \ge c(\frac{r}{2})^n=\frac{c}{2^n} r^n.$$
(b) If $r \le 2 d(x,\partial E)$, then $B_{\frac{r}{2}}(x) \subset E$ and hence $$|B_{r}(x) \cap E| \ge |B_{\frac{r}2}(x)|=\frac{\omega_n}{2^n}r^n.$$
Hence the conclusion holds with $c'=\min\{\frac{c}{2^n}, \frac{\omega_n}{2^n}\}$.
\end{proof}

The following proposition shows that any $M$-uniform domain belongs to $\mathcal{D}_c$ for some $c>0$.

\begin{proposition} \label{shuyu} For any $M\ge 1$ and $c_0>0$, if $\Omega\subset\mathbb R^n$ is an $M$-uniform domain with ${\rm{diam}}(\Omega) \ge c_0>0$, then $\Omega \in \mathcal{D}_c$ for some $c>0$ depending only on $M$, $n$ and $c_0$. \end{proposition}

\begin{proof} For any $x \in \partial \Omega$ and $0<r<{\rm{diam}}(\Omega)$, we claim that there is a constant $c_1=c_1(M)>0$ such that $B_r(x) \cap \Omega$ contains a ball of radius $c_1r$. Indeed, since $0<r<{\rm{diam}}(\Omega)$, there is $y \in \Omega \setminus B_{\frac{r}2}(x)$. Let $\gamma$ be the curve joining $x$ and $y$ given by the definition of an $M$-uniform domain (strictly speaking, one applies the definition to points of $\Omega$ arbitrarily close to $x$ and passes to the limit). Choose $z \in \partial B_{\frac{r}{3}}(x) \cap \gamma$. Then we have that $z \in \Omega$ and $$d(z, \partial \Omega) \ge \frac{1}{M}\min\big\{|z-x|, |z-y|\big\}\ge \frac{1}{M}\min\big\{\frac{r}{3}, \frac{r}2-\frac{r}3\big\}=\frac{r}{6M}.$$ Hence $B_{c_1r}(z) \subset \Omega$, with $c_1=\frac{1}{6M}$. From this claim, we see that for any $x \in \partial \Omega$ and any $r<{\rm{diam}}(\Omega)$, $$|B_r(x) \cap \Omega| \ge |B_{c_1 r}(z)|\ge \omega_nc_1^nr^n.$$ This completes the proof. \end{proof}

The following remark will be used in the proof of the compactness of $M$-uniform domains.

\begin{remark} \label{bukong}{\rm For $M>0$ and $c_0>0$, if $\Omega\subset\mathbb R^n$ is an $M$-uniform domain with $|\Omega| \ge c_0$, then there is $r_0>0$ depending only on $M,n,c_0$ such that $\Omega$ contains a ball of radius $r_0$. } \end{remark}

\begin{proof} It follows directly from the isodiametric inequality and the claim in the proof of Proposition \ref{shuyu}. \end{proof}

Similar to $\mathcal D_c$, we also define the class $\mathcal{D}^c$ as follows.

\begin{definition} \label{D^c} For $c>0$, the set class $\mathcal{D}^c$ consists of all bounded sets $E\subset\mathbb R^n$ such that
\begin{eqnarray} \label{g2} |B_r(x)\cap E^c|>cr^n \end{eqnarray}
holds for any $x \in \partial E$ and $0<r<{\rm{diam}}(E)$.
\end{definition}

The following proposition, from \cite[Proposition 12.19]{Maggi}, yields that we can always find an $\mathcal{H}^n$-equivalent set $\widetilde{E}$ of any set $E$ of finite perimeter with a slightly better topological boundary.

\begin{proposition} \label{equiv} For any Borel set $E\subset\mathbb R^n$, there exists an $\mathcal{H}^n$-equivalent set $\widetilde{E}$ of $E$ such that for any $x \in \partial \widetilde{E}$ and any $r>0$,
\begin{eqnarray} \label{spt} 0<|\widetilde{E} \cap B_r(x)|<\omega_nr^n. \end{eqnarray}
In particular, ${\rm{spt}} \mu_E={\rm{spt}}\mu_{\widetilde{E}}=\partial \widetilde{E}$.
\end{proposition}

In order to illustrate the construction of such an equivalent set, which is needed in later sections, we will sketch the proof.

\begin{proof} First, we define two disjoint open sets
$$A_1:=\big\{x \in \mathbb{R}^n\ |\ \mbox{there exists $r>0$ such that $|E \cap B_r(x)|=0$}\big\}$$
and
$$A_2:=\big\{x \in \mathbb{R}^n\ |\ \mbox{there exists $r>0$ such that $|E \cap B_r(x)|=\omega_nr^n$}\big\}.$$
Then by simple covering arguments we have that $|E \cap A_1|=0$ and $|A_2 \setminus E|=0$. Set $\widetilde{E}=(A_2 \cup E) \setminus A_1$. Then
$$|\widetilde{E} \Delta E| \le |A_2 \setminus E|+|E \cap A_1|=0.$$
Moreover, since $A_2 \subset {\rm{int}}(\widetilde{E})$ and $\overline{\widetilde{E}} \subset \mathbb{R}^n \setminus A_1$, we have that $\partial \widetilde{E} \subset \mathbb{R}^n \setminus (A_1 \cup A_2)$, and hence \eqref{spt} holds.
\end{proof}

We now recall the notion of outer minimal sets, which can be viewed as subsolutions of area minimizing sets. They are a generalization of convex sets; see for example \cite[Definition 15.6]{Giusti} and related results therein.

\begin{definition} \label{psc} A set $E\subset\mathbb R^n$ of finite perimeter is an outer minimal set if $P(E) \le P(F)$ holds for any set $F \supset E$. \end{definition}

We would like to point out that an outer minimal set is also called a pseudo-convex set in \cite{LT}. Thus by \cite[Corollary 7.16]{LT} we have

\begin{remark} \label{dens} {\rm If $E\subset\mathbb R^n$ is outer-minimizing and ${\rm{spt}}\, \mu_E=\partial E$, then $E \in \mathcal{D}^c$ for some $c>0$ depending only on $n$ and $E$. Consequently, $E={\rm{int}}(E)\ ({\rm{mod}}\ \mathcal{H}^n)$.} \end{remark}

\begin{remark} {\rm Since the boundary of an outer minimal set (domain) can have positive $\mathcal{H}^n$ measure (see \cite{BGM}), an outer minimal domain may not be an $M$-uniform domain for any $M\ge 1$.} \end{remark}

Combining Proposition \ref{shuyu} and Remark \ref{dens}, we have

\begin{remark} \label{bianjiea} Let $\Omega$ be an $M$-uniform outer minimal domain with ${\rm{spt}}\, \mu_\Omega=\partial \Omega$; then $\Omega \in \mathcal{D}_c\cap \mathcal{D}^c$ for some $c>0$, and hence $\partial_*\Omega=\partial \Omega$. \end{remark}

We would like to state the following proposition, which is a consequence of \cite[Corollary 1.10]{GHL}, since for any $E \in \mathcal{D}_c$, $\mathcal{H}^{n-1}(\partial E\cap E^0)=0$.

\begin{proposition} \label{outside} Let $c>0$ and $E \in \mathcal{D}_c$. Then there exist bounded smooth sets $E_i$ such that $E_i \Supset E$, $E_i \rightarrow E$ in $L^1$ and $P(E_i)\rightarrow P(E)$. \end{proposition}

\section{Compactness of $M$-uniform domains}

In this section, we will establish in Theorem \ref{maincompact} the $L^1$-compactness property of $M$-uniform domains. We begin with

\begin{lemma} \label{neijin} For $c>0$, suppose that $\{D_i\}\subset\mathcal{D}_c$ satisfies $D_i \rightarrow D$ in $L^1(\mathbb R^n)$ as $i\rightarrow\infty$. Then, after modification on a set of Lebesgue measure zero, $D \in \mathcal{D}_c$. Moreover, for any $\epsilon>0$, there is $N=N(\epsilon)>0$ such that for any $i>N$, the following properties hold:\\
(i) $D \subset D_i^{\epsilon}$.\\
(ii) $(D_i)_{\epsilon} \subset D$.\\
(iii) $D_i \subset D^{\epsilon}$. \\
In particular, $d^H(D_i,D) \rightarrow 0$ as $i\rightarrow\infty$. \end{lemma}

\begin{proof} We first identify $D$ with its $\mathcal{H}^n$-equivalent set in the sense of Proposition \ref{equiv}. We argue by contradiction. If (i) were false, then there would exist $\epsilon_0>0$, $x_0 \in D$ and a sequence $k\rightarrow\infty$ such that $B_{\epsilon_0}(x_0) \cap D_k=\emptyset$. Hence, by the hypothesis and Proposition \ref{equiv}, we would obtain
$$0=|B_{\epsilon_0}(x_0) \cap D_k|\rightarrow |B_{\epsilon_0}(x_0) \cap D|>0,$$
which is impossible. If (ii) were false, then there would exist $\epsilon_0>0$ and a sequence of points $x_i \in (D_i)_{\epsilon_0} \setminus D$. Assume that $x_i \rightarrow x_0$. Then $x_0 \in \partial D \cup D^c$. Hence, by the proof of Proposition \ref{equiv}, we have that $\omega_n\epsilon_0^n>|B_{\epsilon_0}(x_0) \cap D|$.
On the other hand, since $B_{\epsilon_0}(x_i)\subset D_i$, we have that
\begin{eqnarray*} \big|B_{\epsilon_0}(x_0) \cap D\big|&=&\lim_{i \rightarrow \infty}\big|B_{\epsilon_0}(x_i) \cap D\big| \ge\liminf_{i \rightarrow \infty} \big(|B_{\epsilon_0}(x_i) \cap D_i|-|D_i \Delta D|\big)\\ &=&\omega_n\epsilon_0^n-\limsup_{i \rightarrow \infty}|D_i \Delta D|=\omega_n\epsilon_0^n. \end{eqnarray*}
This is the desired contradiction. If (iii) were false, then there would exist $\epsilon_0>0$ and a subsequence of points $x_i \in D_i\setminus D^{\epsilon_0}$. Without loss of generality, assume $x_i \rightarrow x_0$, so that $x_0 \in \mathbb{R}^n \setminus D^{\epsilon_0}$. By Remark \ref{feihua1}, there is a $c'>0$ depending only on $c$ and $n$ such that $$c'\epsilon_0 ^n \le \big|B_{\epsilon_0}(x_i) \cap D_i\big|.$$ On the other hand, it follows from $|B_{\epsilon_0}(x_0) \cap D|=0$ that
\begin{eqnarray*} \liminf_{i \rightarrow \infty}\big|B_{\epsilon_0}(x_i) \cap D_i\big| &\le& \limsup_{i\rightarrow \infty}\big(|B_{\epsilon_0}(x_i) \cap D|+|D \Delta D_i|\big) \\ &\le& |B_{\epsilon_0}(x_0) \cap D|+\limsup_{i \rightarrow \infty} |D_i \Delta D|=0. \end{eqnarray*}
This yields the desired contradiction.

It remains to show $D \in \mathcal{D}_c$. Indeed, by Proposition \ref{equiv}, $x \in \partial D$ implies that $x \in {\rm{spt}}\mu_D$. Note that $D_i \rightarrow D$ in $L^1(\mathbb R^n)$ implies that $\mu_{D_i} \stackrel{*}{\rightharpoonup} \mu_D$ in the sense of convergence of Radon measures. Hence there exist $x_i \in {\rm{spt}}\mu_{D_i} \subset \partial D_i$ such that $x_i \rightarrow x$, so that for any $r>0$ it holds that
$$\big|B_r(x) \cap D\big|=\lim_i \big|B_r(x_i) \cap D\big| \ge \liminf_i \big|B_r(x_i) \cap D_i\big|-\limsup_i\big|D_i \Delta D\big| \ge cr^n.$$
This implies $D \in \mathcal{D}_c$. \end{proof}

The following remark follows directly from (i) and (iii).

\begin{remark} \label{new2}{\rm If $D_i$ and $D$ satisfy the same assumptions as in Lemma \ref{neijin}, and if ${\rm{int}}(D) \ne \emptyset$, then ${\rm{int}}(D)$ is connected.} \end{remark}

Similar to Lemma \ref{neijin}, for sets in the class $\mathcal{D}^c$ we have

\begin{lemma} \label{waijin} For $c>0$, if $\{D_i\} \subset \mathcal{D}^c$ and $D_i \rightarrow D$ in $L^1(\mathbb R^n)$, then, after modification on a set of zero $\mathcal{H}^n$-measure, $D \in \mathcal{D}^c$. Moreover, for any $\epsilon>0$, there is $N=N(\epsilon)>0$ such that if $i>N$, the following properties hold:\\
(i) $D \subset D_i^{\epsilon}$.\\
(ii) $(D_i)_{\epsilon} \subset D$.\\
(iii) $D_{\epsilon} \subset D_i$. \end{lemma}

The following corollary follows directly from Lemma \ref{waijin}.

\begin{corollary} \label{new1} For any $c>0$ and any sequence $\{D_i\}\subset\mathcal{D}^c$ with uniformly bounded perimeters, there is an open set $D \in \mathcal{D}^c$ such that, up to a subsequence, $D_i \rightarrow D$ in $L^1(\mathbb R^n)$. Moreover, $D$ and $D_i$ satisfy the properties {\rm{(}}i{\rm{)}}, {\rm{(}}ii{\rm{)}} and {\rm{(}}iii{\rm{)}} of Lemma \ref{waijin}. \end{corollary}

Now we are ready to prove the main theorem of this section.

\begin{theorem} \label{maincompact} For $M>0$, $R_0>0$, and $c_0>0$, if $\{\Omega_i\}$ is a sequence of $M$-uniform domains in $B_{R_0}$ such that $|\Omega_i| \ge c_0>0$ and $\Omega_i \rightarrow D$ in $L^1(\mathbb R^n)$, then there is an $M$-uniform domain $\Omega$ such that $\Omega_i \rightarrow \Omega$ in $L^1(\mathbb R^n)$. \end{theorem}

\begin{proof} As in Proposition \ref{equiv}, we may assume ${\rm{spt}}\, \mu_D=\partial D$. We first prove that ${\rm{int}}(D) \ne \emptyset$.
Indeed, notice that by Remark \ref{bukong}, there exists $r_0>0$ depending only on $c_0,n$ and $M$ such that each $\Omega_i$ contains a ball of radius $r_0$. Therefore, for each $\Omega_i$, if $\epsilon<\frac{r_0}2$, then by definition $(\Omega_i)_{\epsilon}$ contains a ball of radius $\frac{r_0}2$. By Lemma \ref{neijin} (ii), $D$ also contains a ball of radius $\frac{r_0}2$, and hence ${\rm{int}}(D) \ne \emptyset$. Set $\Omega={\rm{int}}(D)$. It suffices to show that $\Omega$ is an $M$-uniform domain, since the $L^1$ convergence of $\Omega_i$ to $\Omega$ follows directly from Remark \ref{closure}, Proposition \ref{shuyu}, and the fact that $\Omega \subset D \subset \overline{\Omega}$.

Fix any $x,y \in \Omega$. Given any $N\gg M$, say $N>2M$, we may choose $0<\epsilon<\frac{1}{N}$ so small that $k\epsilon < d(x, \partial \Omega) \le (k+1)\epsilon$ for some integer $k\gg N$ (say $k>(1+1/M)(N+1)$), and $|x-y|>2(N+1)\epsilon$. From Lemma \ref{neijin} (i) and (iii), and since $\Omega={\rm{int}}(D) \ne \emptyset$, we know that $d^H(\Omega_i, \Omega) \rightarrow 0$; hence we may choose $x_i, y_i \in \Omega_i \cap \Omega$ with $|x_i-x|<\epsilon$, $|y_i-y|<\epsilon$ for $i$ large. By Lemma \ref{neijin} (ii), we may also choose $i$ large such that
\begin{eqnarray} \label{cru} (\Omega_i)_{\epsilon} \subset \Omega. \end{eqnarray}
Also, we choose $\gamma_i \subset \Omega_i$ to be the rectifiable curve connecting $x_i$ and $y_i$ in $\Omega_i$ as in the definition of an $M$-uniform domain. For any $p \in \gamma_i$, if $p \in B_{N \epsilon}(x_i) \cup B_{N\epsilon}(y_i)$, then clearly $p \in B_{(N+1) \epsilon}(x) \cup B_{(N+1)\epsilon}(y) \subset \Omega$. Moreover, this implies
\begin{equation} \label{a1} d(p, \partial \Omega) \ge k\epsilon-(N+1) \epsilon>\frac{1}{M}(N+1)\epsilon\ge \frac{1}{M}\min\{|p-x|,|p-y|\}. \end{equation}
Clearly \eqref{a1} also holds for any $p$ on the line segment between $x_i$ and $x$, and between $y_i$ and $y$. If $p \notin B_{N \epsilon}(x_i) \cup B_{N\epsilon}(y_i)$, then $d(p, \partial \Omega_i) \ge \frac{1}{M} \min\{|p-x_i|,|p-y_i|\}> \frac{1}{M} N\epsilon$; thus $p \in (\Omega_i)_{N\epsilon/M} \subset (\Omega_i)_{\epsilon}\subset \Omega \cap \Omega_i$. Moreover, let $r=d(p,\partial((\Omega_i)_{\epsilon}))$; then by \eqref{cru}, $B_r(p) \subset \Omega$, so $d(p,\partial \Omega) \ge r=d\left(p, \partial ((\Omega_i)_{\epsilon})\right) \ge d(p,\partial \Omega_i)-\epsilon$. Therefore,
\begin{equation} \label{a3} \frac{d(p,\partial \Omega)}{\min\{|p-x_i|,|p-y_i|\}} \ge \frac{d(p,\partial \Omega_i)-\epsilon}{\min\{|p-x_i|,|p-y_i|\}}\ge \frac{1}{M}-\frac{\epsilon}{N\epsilon}= \frac{1}{M}-\frac{1}{N}. \end{equation}
Hence, by the choice of $\epsilon$ and $N$, we have that
\begin{equation} \label{a2} d(p,\partial \Omega) \ge (\frac{1}{M}-\frac{1}{N})(\min\{|p-x|,|p-y|\}-\epsilon) \ge(\frac{1}{M}-\frac{1}{N})\min\{|p-x|,|p-y|\}-\frac{1}{MN}. \end{equation}
Therefore, we may let $\gamma^N$ be the curve consisting of three parts: the first part connects $x$ and $x_i$ with a line segment, the second part connects $x_i$ and $y_i$ with $\gamma_i$ as above, and the third part connects $y_i$ and $y$ with a line segment. It is clear that $\gamma^N \subset \Omega$ and $\gamma^N$ connects $x$ and $y$. Then from \eqref{a1}, \eqref{a2} and the choice of $\epsilon$, we obtain\\
(i) $\mathcal{H}^1(\gamma^N) \le M|x-y|+2\frac{M+1}{N}$, and\\
(ii) $d(p, \partial \Omega) \ge (\frac{1}{M}-\frac{1}{N}) \min\{|p-x|,|p-y|\}-\frac{1}{MN}\quad \forall p\in \gamma^N$.
\\
Then, by the compactness of $(\overline{\Omega}, d^H)$ and since each $\gamma^N$ is connected, there is a compact connected set $E \subset \overline{\Omega}$ such that, up to a subsequence, $d^H(\gamma^N, E) \rightarrow 0$ as $N \rightarrow \infty$. Then by \cite[Theorem 3.18]{Fa},
$$\mathcal{H}^1(E) \le \liminf_{N \rightarrow \infty} \mathcal{H}^1(\gamma^N) \le M|x-y|.$$
Then by \cite[Lemma 3.12]{Fa}, $E$ is path connected; thus we can choose a curve $\gamma \subset E$ joining $x$ and $y$. For any $p \in \gamma$, we can choose a sequence $p_N\in \gamma^N$ with $p_N \rightarrow p$. Since
$$d(p_N, \partial \Omega) \ge (\frac{1}{M}-\frac{1}{N}) \min\{|p_N-x|,|p_N-y|\}-\frac{1}{MN},$$
we have, after sending $N \rightarrow \infty$,
$$d(p, \partial \Omega) \ge \frac{1}{M} \min\{|p-x|,|p-y|\},$$
which also implies $\gamma \subset \Omega$. Then $\gamma$ satisfies both properties in the definition of an $M$-uniform domain; thus $\Omega$ is $M$-uniform. By Remark \ref{new2} and Proposition \ref{shuyu}, $\Omega$ is a domain. This completes the proof. \end{proof}

\begin{remark} The full generality of the compactness of $M$-uniform domains is obtained in \cite[Theorem 1.2]{DLW}, where it is shown that any sequence of $M$-uniform domains with fixed volume must have uniformly bounded fractional perimeters, and thus has an $L^1$ limit up to a subsequence, and the limit is also $M$-uniform. \end{remark}

\section{Existence of equilibrium liquid crystal droplets in Problems A-C}

In this section we will study the existence of minimizers of Problems A-C, which can be extended to $n$ dimensions. We begin with the following lemma, which plays a crucial role in Problems A-C over outer minimal sets.

\begin{lemma} \label{heng} For $c>0$, let $\{E_i\}_{i=1}^\infty \subset \mathcal{D}_c$ be a sequence of outward-minimizing sets such that $E_i \rightarrow E$ in $L^1$ as $i\to\infty$. Then $E\in\mathcal{D}_c$ is also an outward-minimizing set. Moreover, $P(E_i) \rightarrow P(E)$ and $\mathcal{H}^{n-1}(\partial_* E_i) \rightarrow \mathcal{H}^{n-1}(\partial_* E)$ as $i\to\infty$. \end{lemma}

\begin{proof} Let $F \supset E$. Then by \cite[Proposition 3.38(d)]{afp} and the outward-minimality of $E_i$ we have
$$P(E_i \cap F) \le P(F)+P(E_i)-P(E_i \cup F) \le P(F).$$
This implies
$$P(E)=P(E\cap F)\le\liminf_i P(E_i\cap F)\le P(F).$$
Hence $E$ is outward-minimizing. By Lemma \ref{neijin} and Remark \ref{dens}, $E \in \mathcal{D}_c \cap \mathcal{D}^c$. It follows from Proposition \ref{outside} that for any $\epsilon>0$, there exists a smooth open set $O_\epsilon\Supset E$ such that
$$P(O_\epsilon) \le P(E)+\epsilon.$$
Applying Lemma \ref{neijin} (iii), we have that there exists a sufficiently large $i_0\ge 1$ such that
$$E_i \subset O_\epsilon, \ \forall i\ge i_0.$$
This, combined with the outward minimality of $E_i$, implies
$$P(E_i)\le P(O_\epsilon)\le P(E)+\epsilon, \ \forall i\ge i_0.$$
Thus
$$\limsup_i P(E_i) \le P(E).$$
On the other hand, by lower semicontinuity we have
$$P(E) \le \liminf_i P(E_i).$$
Therefore $P(E_i) \rightarrow P(E)$ as $i\to \infty$. Since $E_i, E \in \mathcal{D}_c \cap \mathcal{D}^c$, the last statement follows from Theorem \ref{Federer}. \end{proof}

Now we are ready to state the main theorem of this section.

\begin{theorem} \label{bigexistence} The following statements hold:
\begin{itemize}
\item [i)] For $M\ge 1$, the infimum of Problem C in the class of $M$-uniform domains of finite perimeter is attained.
\item [ii)] For $M>1$, the infimum of Problems A, B, C can be attained in the class of $M$-uniform outer minimal domains.
\end{itemize}
\end{theorem}

\begin{proof} We first prove i). Let $(\Omega_i,u_i)$ be a minimizing sequence, where the $\Omega_i$ are $M$-uniform domains with finite perimeter and $u_i\in H^1(\Omega_i,\mathbb{S}^2)$. Let $\hat{u}_i\in H^1(B_{R_0},\mathbb R^3)$ be an extension of $u_i$ such that
$$\|\hat{u}_i\|_{H^1(B_{R_0})} \le C(n,M) \|u_i\|_{H^1(\Omega_i)}.$$
Hence there is $\hat{u} \in H^1(B_{R_0},\mathbb{R}^3)$ such that, after passing to a subsequence,
$$\hat{u}_i\rightharpoonup \hat{u} \ {\rm{in}}\ H^1(B_{R_0}).$$
By Theorem \ref{maincompact}, there is an $M$-uniform domain $\Omega\subset B_{R_0}$ such that, up to a further subsequence, $\Omega_i \rightarrow \Omega$ in $L^1$. Since $\nabla \hat{u}_i \rightharpoonup \nabla \hat{u}$ in $L^2(B_{R_0})$ and $\chi_{\Omega_i} \rightarrow \chi_{\Omega}$ in $L^1(B_{R_0})$, by lower semicontinuity we have that
\begin{eqnarray} \label{kalehaojiu} \int_{\Omega} |\nabla \hat{u}|^2 \le \liminf_{i \rightarrow \infty} \int_{\Omega_i} |\nabla \hat{u}_i|^2 =\liminf_{i \rightarrow \infty} \int_{\Omega_i} |\nabla {u}_i|^2. \end{eqnarray}
Denote $u=\hat{u}\big|_{\Omega}$. Then it is not hard to see that $|u(x)|=1$ for a.e. $x\in\Omega$, so that $u\in H^1(\Omega,\mathbb S^2)$. In order to show that $(\Omega, u)$ is a minimizer of Problem C among $M$-uniform domains of finite perimeter, we have to verify that $u^*=\nu_\Omega$ $\mathcal{H}^{n-1}$-a.e. on $\partial^* \Omega$. In fact, since $u_i\cdot\nu_{\Omega_i}=1$ on $\partial^*\Omega_i$, it follows from $\chi_{\Omega_i}\rightarrow\chi_\Omega$ in $L^2(B_{R_0})$, ${\rm{div}}(\hat{u}_i)\rightharpoonup {\rm{div}}(\hat{u})$ in $L^2(B_{R_0})$, and Theorem \ref{traceextensiondomain} that
\begin{eqnarray*} P(\Omega_i)=\int_{\Omega_i} {\rm{div}}({u}_i) =\int_{B_{R_0}} \chi_{\Omega_i}\, {\rm{div}}(\hat{u}_i) &\rightarrow& \int_{B_{R_0}} \chi_\Omega\, {\rm{div}}(\hat{u})=\int_\Omega {\rm{div}}(u)\\ &=&\int_{\partial^*\Omega}u^*\cdot \nu_{\Omega}\, d\mathcal{H}^{n-1}\le P(\Omega). \end{eqnarray*}
This, combined with the lower semicontinuity property of the perimeter, implies that $u^*=\nu_\Omega$ $\mathcal{H}^{n-1}$-a.e. on $\partial^* \Omega$. Hence the proof of i) is complete.

\medskip
Next, we prove ii). For Problem A in part ii), let $(\Omega_h,u_h)$ be a minimizing sequence of $M$-uniform, outer minimal domains and $H^1$ unit vector fields on $\Omega_h$. Since the $\Omega_h$ are outward-minimizing sets in $B_{R_0}$, the perimeters $P(\Omega_h)$ are uniformly bounded. By Lemma \ref{heng} and Theorem \ref{maincompact}, we may assume that there exists an $M$-uniform, outer minimal domain $\Omega$ such that, up to a subsequence, $\Omega_h \rightarrow \Omega$ in $L^1$ and $P(\Omega_h) \rightarrow P(\Omega)$. As in the proof of i) above, we may extend $u_h$ to $B_{R_0}$, still denoted by $u_h$, so that $u_h \rightharpoonup u$ in $H^1(B_{R_0}, \mathbb R^3)$ for some $u\in H^1(B_{R_0},\mathbb R^3)$. Thus we have
$$\int_{\Omega} |\nabla u|^2 \le \liminf_h \int_{\Omega_h} |\nabla u_h|^2,$$
and $u(x)\in \mathbb S^2$ for a.e. $x\in\Omega$. Since $f$ is convex, we can write
$$f(t)=\sup_i\,(a_it+b_i), \quad t\in[-1,1].$$
In the following, we do not distinguish $u$ from $u^*$ on $\partial^* \Omega$, and we do not distinguish $\partial^* \Omega_h, \partial^* \Omega$ from $\partial \Omega_h, \partial \Omega$, due to Remark \ref{bianjiea}. Define
$$\tau_h(A):=\mathcal{H}^{n-1}(\partial^* \Omega_h \cap A),\ \tau(A):=\mathcal{H}^{n-1}(\partial^* \Omega \cap A), \ {\rm{and}}\ \mu_h(A):=\int_{A}f(u_h \cdot \nu_h)d\tau_h,$$
for any measurable $A\subset\mathbb R^n$, where $\nu_h$ is the measure theoretical outer unit normal of $\Omega_h$.
Then Lemma \ref{heng} implies that
\begin{eqnarray} \label{0} \tau_h(A) \rightarrow \tau(A) \quad \mbox{as $h \rightarrow \infty$, whenever $\tau(\partial A)=0$}. \end{eqnarray}
Since $f$ is bounded and nonnegative, the $\mu_h$ are nonnegative Radon measures with uniformly bounded masses, so we may assume that there is a nonnegative Radon measure $\mu$ such that, after passing to a subsequence, $\mu_h \stackrel{*}{\rightharpoonup} \mu$ as $h\to\infty$ in the sense of weak convergence of Radon measures. Decompose $\mu$ as $\mu=(D_{\tau}\mu)\tau+\mu^s$, where $\mu^s \perp \tau$ and $\mu^s\ge 0$. Then
\begin{eqnarray} \liminf_{h \rightarrow \infty} \mu_h(A) \ge \mu(A) \ge \int_A D_{\tau} \mu\, d\tau. \end{eqnarray}
It follows from Theorem \ref{Federer} that $x \in \partial^* \Omega$ for $\tau$-a.e. $x \in B_{R_0}$. Now, for any such $x\in\partial^*\Omega$, we claim that there exists $r_j\to 0$ such that, for $B_j=B_{r_j}(x)$, the following hold:
\begin{itemize}
\item[(a)] $\mathcal{H}^{n-1}(\partial B_j \cap \partial \Omega)=0$ and $\mathcal{H}^{n-1}(\partial B_j \cap \partial \Omega_h)=0, \, \forall h\ge 1$.
\item[(b)] $\displaystyle\int_{\partial B_j \cap \Omega_h} u_h \cdot \nu_{B_j}\,d\mathcal{H}^{n-1} \rightarrow \int_{\partial B_j \cap \Omega} u \cdot \nu_{B_j}\,d\mathcal{H}^{n-1}$\ as\ $h \rightarrow \infty$.
\item[(c)] $\mu(\partial B_j)=0$.
\item[(d)] $D_{\tau} \mu(x)=\displaystyle\lim_j\frac{\mu(B_j)}{\tau(B_j)}$ and $\displaystyle\lim_{j\rightarrow \infty} \frac{\int_{B_j}u \cdot \nu_{B_j} d\tau}{\tau(B_j)}=u(x)\cdot \nu(x)$.
\end{itemize}
Indeed, (a) and (c) are true because $\tau$, $\tau_h$, and $\mu$ are nonnegative Radon measures. (d) follows from the Lebesgue differentiation theorem. To see (b), let $\tilde{u}_h=u_h \chi_{\Omega_h}$ and $\tilde{u}=u\chi_{\Omega}$. Since $\tilde{u}_h \rightarrow \tilde{u}$ in $L^1$, we have
\begin{eqnarray*} \int_{B_1(x)} |\tilde{u}_h-\tilde{u}| = \int_0^1 \int_{\partial B_r(x)} |\tilde{u}_h-\tilde{u}|\, d\mathcal{H}^{n-1}\, dr \rightarrow 0 \quad \mbox{as $h \rightarrow \infty$}. \end{eqnarray*}
Therefore, by Fatou's Lemma,
$$\int_0^1 \liminf_{h \rightarrow \infty}\int_{\partial B_r(x)}|\tilde{u}_h-\tilde{u}|\,d\mathcal{H}^{n-1}\,dr=0;$$
hence for almost every $r \in (0,1)$ and for a subsequence of $h \rightarrow \infty$,
\begin{eqnarray*} &&\big|\int_{\partial B_r(x) \cap \Omega_h} u_h \cdot \nu_{B_r(x)}\,d\mathcal{H}^{n-1}-\int_{\partial B_r(x) \cap \Omega} u \cdot \nu_{B_r(x)}\,d\mathcal{H}^{n-1}\big|\\ && \le \int_{\partial B_r(x)} |\tilde{u}_h-\tilde{u}|\, d\mathcal{H}^{n-1} \rightarrow 0. \end{eqnarray*}
This finishes the proof of (b). Now we return to the proof of ii). By (c),
\begin{eqnarray*} \mu(B_j) = \lim_{h \rightarrow \infty} \mu_h(B_j) = \lim_{h \rightarrow \infty} \int_{\partial \Omega_h \cap B_j} f(u_h\cdot \nu_h)\,d\mathcal{H}^{n-1}.
\end{eqnarray*}
Also, as $h \rightarrow \infty$, up to a subsequence we have
\begin{eqnarray*} &&\int_{\partial \Omega_h \cap B_j} u_h \cdot \nu_h\,d\mathcal{H}^{n-1}\\ &&= \int_{\partial(\Omega_h \cap B_j)} u_h \cdot \nu_{\Omega_h \cap B_j}\,d\mathcal{H}^{n-1} - \int_{\partial B_j \cap \Omega_h} u_h \cdot \nu_{B_j} \,d\mathcal{H}^{n-1} \\ &&= \int_{ \Omega_h \cap B_j} {\rm{div}}\, u_h-\int_{\partial B_j \cap \Omega_h} u_h \cdot \nu_{B_j} \,d\mathcal{H}^{n-1}\\ &&\rightarrow\int_{\Omega \cap B_j} {\rm{div}}\, u - \int_{\partial B_j \cap \Omega} u \cdot \nu_{B_j} \,d\mathcal{H}^{n-1} \\ &&= \int_{\partial(\Omega \cap B_j)} u \cdot \nu_{\Omega \cap B_j}\,d\mathcal{H}^{n-1} - \int_{\partial B_j \cap \Omega} u \cdot\nu_{B_j} \,d\mathcal{H}^{n-1}\\ &&=\int_{\partial \Omega \cap B_j} u \cdot \nu_\Omega\,d\mathcal{H}^{n-1}. \end{eqnarray*}
Therefore, for $\tau$-a.e. $x\in B_{R_0}$ and each fixed $i$, it follows that
\begin{eqnarray} D_{\tau} \mu(x)&=&\lim_j\frac{\mu(B_j)}{\tau(B_j)}\\ &=&\lim_j\lim_h \frac{ \int_{\partial \Omega_h \cap B_j} f(u_h\cdot \nu_h)\,d\mathcal{H}^{n-1}}{\mathcal{H}^{n-1}(\partial \Omega \cap B_j)}\nonumber\\ & \ge &\lim_j\lim_h \frac{ \int_{\partial \Omega_h \cap B_j}(a_i u_h\cdot \nu_h+b_i)\,d\mathcal{H}^{n-1}}{\mathcal{H}^{n-1}(\partial \Omega \cap B_j)}\nonumber\\ &=&\lim_j\frac{ \int_{\partial \Omega \cap B_j}(a_i u\cdot \nu_\Omega+b_i)\,d\mathcal{H}^{n-1}}{\mathcal{H}^{n-1}(\partial \Omega \cap B_j)},\quad \mbox{also by}\ \eqref{0},\nonumber\\ &=& a_i u(x) \cdot \nu_\Omega(x) +b_i. \nonumber \end{eqnarray}
Taking the supremum over $i$, we obtain $D_{\tau}\mu \ge f(u\cdot \nu_\Omega)$ $\tau$-a.e. in $B_{R_0}$, and
\begin{eqnarray} \liminf_h \int_{\partial \Omega_h} f(u_h \cdot \nu_h)\,d\mathcal{H}^{n-1}&=&\liminf_h \mu_h(B_{R_0}) \ge \int_{B_{R_0}} D_{\tau} \mu\, d\tau\nonumber\\ &\ge& \int_{B_{R_0}} f(u\cdot \nu)\, d\tau =\int_{\partial \Omega} f(u\cdot\nu)\,d\mathcal{H}^{n-1}. \end{eqnarray}
Therefore, $(\Omega,u)$ is a minimizer.\\
To complete the proof of the statements in ii), it remains to show that if $(\Omega_i, u_i)$ is a minimizing sequence for Problem B that converges weakly to $(\Omega,u)$, then $u \cdot \nu \equiv c$ $\mathcal{H}^{n-1}$-a.e. on $\partial^* \Omega$. This can be seen from
$$\liminf_{i \rightarrow \infty} \int_{\partial^* \Omega_i} f(u_i\cdot \nu_i)\,d\mathcal{H}^{n-1}\ge \int_{\partial^* \Omega}f(u \cdot \nu)\,d\mathcal{H}^{n-1}.$$
In fact, by choosing $f(t)=(t-c)^2$ we have that
\begin{eqnarray} \label{614} \int_{\partial^* \Omega} (u \cdot \nu-c)^2\,d\mathcal{H}^{n-1}\le \liminf_{i \rightarrow \infty} \int_{\partial^* \Omega_i} (u_i \cdot \nu_i-c)^2\,d\mathcal{H}^{n-1}=0. \end{eqnarray}
Hence $u \cdot \nu\equiv c$ $\mathcal{H}^{n-1}$-a.e. on $\partial^* \Omega$. This completes the proof. \end{proof}

\section{On the uniqueness of Problem C}

In this section, we will show the uniqueness of the minimizer of Problem C in the class of $C^{1,1}$-star-shaped, mean convex domains in $\mathbb R^3$. We will assume that the domains have volume $V_0=|B_1|$, where $B_1\subset\mathbb R^3$ is the unit ball centered at $0$. We begin with

\begin{lemma} \label{2} For any bounded $C^{1,1}$-domain $\Omega\subset\mathbb R^3$,
\begin{equation} \label{2.1} \inf\big\{\int_{\Omega}|\nabla u|^2\ \big|\ u \in H^1(\Omega, \mathbb S^2), u=\nu_\Omega \, \mbox{on $ \partial \Omega$}\big\} \ge \int_{\partial \Omega} H_{\partial\Omega}\,d\mathcal{H}^2, \end{equation}
where $H_{\partial\Omega}$ is the mean curvature of $\partial\Omega$.
\end{lemma}

\begin{proof} Let $u\in H^1(\Omega, \mathbb S^2)$, with $u=\nu_\Omega$ on $\partial\Omega$, be such that
$$\int_\Omega |\nabla u|^2=\inf\big\{\int_{\Omega}|\nabla u|^2\ \big|\ u \in H^1(\Omega, \mathbb S^2), u=\nu_\Omega \, \mbox{on $ \partial \Omega$}\big\}.$$
Then by \cite{SU1, SU2}, $u\in C^\infty(\Omega\setminus\{a_i\}_{i=1}^N,\mathbb S^2)$ for a finite set $\displaystyle\cup_{i=1}^N \{a_i\}\Subset\Omega$. Observe that
$$({\rm{div}}(u))^2-{\rm{tr}}(\nabla u)^2={\rm{div}}\big({\rm{div}}(u)u-(\nabla u) u\big)\ \ \ {\rm{in}}\ \ \Omega\setminus\cup_{i=1}^N \{a_i\}.$$
By \cite[Proposition 2.2.1]{LW}, we have that
$$|\nabla u|^2 \ge ({\rm{div}}\,u)^2-{\rm{tr}}(\nabla u)^2 \ \ \ {\rm{in}}\ \ \Omega\setminus\cup_{i=1}^N \{a_i\}.$$
By \cite[Theorem 1.9]{AL}, near each $a_i$, $u(x) \sim R(\frac{x-a_i}{|x-a_i|})$ for some rotation $R\in O(3)$. In particular, one has that for $r>0$ sufficiently small,
$$\Big|\int_{\partial B_r(a_i)} ({\rm{div}}(u)u-(\nabla u) u)\cdot \nu_{B_r(a_i)}\,d\mathcal{H}^2\Big| =O(r).$$
Hence
\begin{eqnarray*} &&\int_{\Omega} |\nabla u|^2\\ &&\ge \int_{\Omega \setminus \cup_{i=1}^NB_r(a_i)} ({\rm{div}}(u))^2-{\rm{tr}}(\nabla u)^2 \\ &&= \int_{\Omega \setminus \cup_{i=1}^NB_r(a_i)} {\rm{div}}\big(({\rm{div}}\,u)u-(\nabla u) u\big) \\ &&= \int_{\partial \Omega} ({\rm{div}}(u)u-(\nabla u) u)\cdot \nu_{\Omega}\,d\mathcal{H}^2\\ &&\ - \sum_{i=1}^N\int_{\partial B_r(a_i)} ({\rm{div}}(u)u-(\nabla u) u)\cdot \nu_{B_r(a_i)}\,d\mathcal{H}^2\\ &&\ge \int_{\partial \Omega} \big({\rm{div}}(u)-((\nabla u)\nu_\Omega) \cdot \nu_\Omega \big)\,d\mathcal{H}^2-CN r \\ &&= \int_{\partial \Omega} \big({\rm{div}}_{\partial \Omega}\, \nu_\Omega\big)\,d\mathcal{H}^2 -CN r =\int_{\partial \Omega} H_{\partial\Omega}\,d\mathcal{H}^2-CN r. \end{eqnarray*}
This implies \eqref{2.1} after sending $r \rightarrow 0$. \end{proof}

The inequality \eqref{2.1} leads us to study the minimization of the total mean curvature. It is well-known that
\begin{equation} \label{key} \int_{\partial \Omega} H_{\partial\Omega}\,d\mathcal{H}^2 \ge 4\sqrt{\pi P(\Omega)} \end{equation}
is true if $\Omega$ is convex, and the equality holds if and only if $\Omega$ is a ball. Very recently, Dalphin-Henrot-Masnou-Takahashi \cite{DHMT} proved that if $\Omega$ is a solid of revolution and $H \ge 0$, then \eqref{key} is true, and the equality holds if and only if $\Omega$ is a ball. Without mean convexity, \eqref{key} is false; see \cite{DHMT}. In the next lemma we present a proof that \eqref{key} is true if $\Omega$ is a $C^{1,1}$ star-shaped and mean convex domain. The key ingredient of the proof is based on the result by Gerhardt \cite{G90}. We remark that a more general version of \eqref{key} has been proven by Guan-Li \cite{GL}. Here we will sketch the proof, since it is elementary in $\mathbb R^3$.

\begin{lemma} \label{3} The inequality \eqref{key} holds if $\Omega$ is $C^{1,1}$-strictly star-shaped and mean convex. \end{lemma}

\begin{proof} By Remark \ref{hsa} below, we may assume $\Omega\in C^\infty$. By a standard argument, we can perturb $\Omega$ so that $H>0$ everywhere. Indeed, represent $\partial \Omega$ as an embedding $F^0:\mathbb S^2 \to \mathbb R^3$ and consider the mean curvature flow $\{F_t :\mathbb S^2 \to \mathbb R^3: t\in [0,T)\}$, which is a family of embeddings solving
$$\frac{\partial F}{\partial t} = H \nu_t, \ \ 0<t<T; \ \ F_0=F^0,$$
where $\nu_t$ is the inward unit normal of the embedding $F_t$. It is well-known that the solution exists for a short time $T>0$. If $t>0$ is small, then $F_t(\mathbb S^2)$ remains star-shaped.
The evolution of the mean curvature $H$ of $F_t(\mathbb S^2)$ is given by
$$\frac{\partial H}{\partial t} = \Delta H + |A|^2 H,$$
where $A$ is the second fundamental form of $F_t(\mathbb S^2)$. Then the strong maximum principle implies that $H>0$ everywhere on $F_t(\mathbb S^2)$ for $t>0$. It is clear that after a small perturbation in the $C^1$-norm, $\Omega$ is still strictly star-shaped. Hence it suffices to prove \eqref{key} assuming $H>0$ everywhere on $\partial \Omega$.

We argue by contradiction. Suppose there were a strictly star-shaped domain $\Omega$ with $H>0$ everywhere on $\partial \Omega$ such that
$$\frac{\int_{\partial \Omega} H \,d\mathcal{H}^2}{ 4\sqrt{\pi P(\Omega)}}<1.$$
Represent $\partial \Omega$ as an embedding $G_0:\mathbb S^2 \to \mathbb R^3$, and consider the inverse mean curvature flow $\{G_t :\mathbb S^2 \to \mathbb R^3: t\in [0,\infty)\}$, which is a family of embeddings that solves
$$\frac{\partial G}{\partial t} = \frac{1}{H} \nu_t,$$
where $\nu_t$ is the outward unit normal of the embedding $G_t$. It has been shown by Gerhardt \cite{G90} that $S_t:=G_t(\mathbb S^2)$ converges to the unit sphere $\mathbb S^2$, up to rescaling by $e^{-t/2}$, as $t\to \infty$. Set
$$y(t)=\frac{\int_{S_t} H\,d\mathcal{H}^2}{4\sqrt{\pi\, {\rm{Area}}(S_t)}}, \ t\ge 0.$$
Observe that $y(t)$ is scaling-invariant. Therefore, $y(0)<1$ and $y(t) \rightarrow 1$ as $t\to\infty$. On the other hand, the evolution equations under the inverse mean curvature flow give
$$\frac{d}{dt}H=-\Delta \frac{1}{H}-\frac{|A|^2}{H}$$
and
$$\frac{d}{dt}\sqrt{g}=\sqrt{g},$$
where $\Delta$ is the surface Laplacian and $\sqrt{g}$ is the area element of the metric on $S_t$ induced by the Euclidean metric of $\mathbb R^3$. Direct calculations, using $|A|^2=H^2-2K$, imply
\begin{eqnarray*} \frac{d}{dt}\,y(t)&=& \Big(\int_{S_t}\big(H-\frac{|A|^2}{H}\big)\,d\mathcal{H}^2\Big)\frac{1}{4\sqrt{\pi {\rm{Area}}(S_t)}}-\frac{\int_{S_t}H\,d\mathcal{H}^2}{8\sqrt{\pi {\rm{Area}}(S_t)}}\\ &=& \frac{1}{4\sqrt{\pi {\rm{Area}}(S_t)}}\left(\int_{S_t}\frac{2K}{H}\,d\mathcal{H}^2-\frac{1}{2}\int_{S_t}H\,d\mathcal{H}^2\right)\\ &=& \frac{1}{4\sqrt{\pi {\rm{Area}}(S_t)}}\int_{S_t} \frac{4K-H^2}{2H}\, d\mathcal{H}^2 \le 0, \end{eqnarray*}
since $H^2 \ge 4K$, where $K$ is the Gauss curvature of $S_t$. Therefore, $y(t) \le y(0)<1$ for all $t>0$, which contradicts $y(t) \rightarrow 1$. \end{proof}

\begin{remark} \label{hsa} \eqref{key} is actually true for any $C^1$-strictly star-shaped surface with bounded nonnegative generalized mean curvature, in particular for a $C^{1,1}$-mean convex surface. Indeed, by \cite[Lemma 2.6]{HI2}, we can find a family of smooth strictly star-shaped mean convex hypersurfaces converging to the surface uniformly in $C^{1,\alpha} \cap W^{2,p}$ for $0<\alpha<1$ and $1<p<\infty$, so that the total mean curvature of the smooth surfaces converges to the total mean curvature of the original surface. We refer the reader to \cite{HI2} for the details. \end{remark}

By Lemma \ref{3} and the isoperimetric inequality $P(\Omega) \ge 4\pi(\frac{3}{4 \pi}|\Omega|)^{2/3}$, we immediately have

\begin{corollary} \label{4} It holds that
\begin{eqnarray*} &&\inf\{\int_{\Omega}|\nabla u|^2: \mbox{$\Omega$ is $C^{1,1}$-star-shaped, mean convex}, |\Omega|=|B_1|, u \in H^1(\Omega, \mathbb S^2), \\ &&\qquad\qquad u=\nu_\Omega \, \mbox{on $ \partial \Omega$}\}\ge 8\pi, \end{eqnarray*}
and the equality holds if and only if $\Omega=B_1$, up to translation and rotation.
\end{corollary}

As a consequence, we have

\begin{theorem} \label{dafeiji1}
Problem (C) over $C^{1,1}$-star-shaped and mean convex domains is uniquely achieved at $\Omega=B_1$ and $u(x)=\frac{x}{|x|}$.
\end{theorem}

\begin{proof}
By direct calculation,
$$\int_{B_1} |\nabla (\frac{x}{|x|})|^2 =\int_{B_1} \frac{2}{|x|^2}=8\pi,$$
while $4\sqrt{\pi P(B_1)}=4\sqrt{\pi\cdot 4\pi}=8\pi$, so the lower bound of Corollary \ref{4} is attained. Hence by the first statement in Corollary \ref{4}, \eqref{0} is attained at $(B_1, \frac{x}{|x|})$. The uniqueness follows from the last statement of Corollary \ref{4} and \cite[Theorem 7.1]{BCL}.
\end{proof}

\begin{remark} \label{dafeiji2}
Huisken first proved that \eqref{key} holds if $\Omega$ is $C^{1,1}$-outer minimizing (not necessarily connected), although it seems that this result was never published. See also Freire-Schwartz \cite[Theorem 5]{FS}. Hence the same result as in Theorem \ref{dafeiji1} holds in the class of $C^{1,1}$-outer minimizing open sets. By \cite{DHMT}, the same result as in Theorem \ref{dafeiji1} holds in the class of smooth domains of revolution.
\end{remark}

\bibliographystyle{amsalpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
1,575
{"url":"http:\/\/codereview.stackexchange.com\/questions\/19054\/filtering-a-treeview","text":"# Filtering a TreeView\n\nI use the code below for filtering a treeview. Can this be improved?\n\nprivate IEnumerable<TreeNode> FindNodeByValue(TreeNodeCollection nodes, string searchstring)\n{\n\nforeach (TreeNode node in nodes)\n{\nif (node.Value.IndexOf(searchstring,\nStringComparison.CurrentCultureIgnoreCase) >= 0)\nyield return node;\nelse\n{\nforeach (var subNode in FindNodeByValue(node.ChildNodes, searchstring))\nyield return subNode;\n}\n}\n}\n\nprotected void Button1_Click(object sender, EventArgs e)\n{\n\nvar query= FindNodeByValue(TreeView1.Nodes, fieldFilterTxtBx.Text);\nif (query != null)\n{\n\/\/TreeView1.Nodes[0].Expand();\n\/\/TreeView1.Nodes.Clear();\nforeach (TreeNode node in query.ToList())\n{\n\n}\n\n\/\/ TreeNode newnode = new TreeNode(\"Detail Engineering\");\n\nTreeView1.ExpandAll();\n}\n\nelse\n{\n\nLabel1.Text = \"No file found\";\n\n}\n}\n\n-\nWhat do you mean by \"refresh and populate\"? Do you want to clear the list and just show matching nodes? Do you want to just jump to the matching node and expand down to it? \u2013\u00a0 Bobson Nov 27 '12 at 19:50\ni need clear the list and just show matching nodes after button click \u2013\u00a0 masoud Nov 28 '12 at 5:15\n\nWhat's the problem with your implementation? You clear the tree and add new (filtered) nodes. Somehow, TreeView1.Nodes.Clear(); is under comment, so it may be supposed that you want to preserve full node structure. If so, just store it in memory:\n\nvar query = FindNodeByValue(PersistentNodeSet.Nodes, fieldFilterTxtBx.Text);\nif (query != null)\n{\nTreeView1.Nodes.Clear();\nforeach (TreeNode node in query.ToList())\nTreeView1.ExpandAll();\n}\n\n\nAnd if you want the tree to preserve original structure, you need to use recursion in FindNodeByValue().\n\n-\n\nButton1_Click\n\n\u2022 The var query can't be null. 
So replace it with if (query.Any())\n\n\u2022 There is no need to call ToList() you can just iterate over the items.\n\nforeach (TreeNode node in query)\n{\n\n}\n\n\nbut basically it doesn't make much sense, because first you search in TreeView1.Nodes and then you add the found items again to the treeview.\n\n\u2022 If you reverse the logic of the if condition, you can save some horizontal spacing.\n\nif (!query.Any()) { Label1.Text = \"No file found\"; return; end if\n\nforeach (TreeNode node in query.ToList()) { TreeView1.Nodes.Add(node); }\n\nTreeView1.ExpandAll();\n\nGenerell\n\n\u2022 you should use braces {} for single if statements and loops, for your code to be less errorprone.\n\u2022 dead code should be deleted\n-","date":"2015-03-29 23:48:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.24464590847492218, \"perplexity\": 7842.768395120375}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-14\/segments\/1427131298819.92\/warc\/CC-MAIN-20150323172138-00105-ip-10-168-14-71.ec2.internal.warc.gz\"}"}
Houston Rockets Coach Stephen Silas Receives Second Career Ejection In Loss To Pelicans

Whether it was strategic or pure frustration, Houston Rockets coach Stephen Silas' night ended early in a game against the New Orleans Pelicans. "He was fighting for the players," Kevin Porter Jr. said. "The game started with a series of questionable calls and the coach's frustration came out. We loved that he fought for us. He didn't want it to become like that. He took it for us as head coach." With four seconds left in the first quarter, Silas received two technical fouls that led to his ejection Wednesday night in the Rockets' 119-108 loss to the Pelicans at the Smoothie King Center. His dismissal came after officials called Usman Garuba for an illegal screen on Naji Marshall. Silas said he was frustrated that he didn't get the call, but he was mostly upset with the Rockets' poor start. Houston was down 16-6 five minutes into the game. After giving up 45 points in the first quarter, the Rockets trailed the Pelicans by 31 points with 7 minutes and 27 seconds left in the second quarter. "We needed to play better… I needed to do something," Silas said. "The best thing about the NBA is that we have the opportunity to right our wrongs. But it has to be right the first time. You can't start a game like this." Silas' early exit was the second ejection of his coaching career. His first came on April 3, in a 139-132 loss to the Timberwolves in Minnesota. Playing without Brandon Ingram (toe) and Zion Williamson (hamstring), the Pelicans leaned on CJ McCollum, who scored 12 of his 28 points in the first quarter. Six players scored in double figures for the Rockets. KJ Martin came off the bench to lead Houston on both ends, with a team-high 16 points on 7-of-7 shooting, two steals and one block. Jalen Green also added 16 points, but shot 5-of-15 from the field.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,640
Valid online only at Newegg Australia. Offer not valid in stores. Cannot be applied to past purchases. Promo codes cannot be combined. Not valid on purchases of gift cards or previous purchases, and not redeemable for cash. Get the best verified Newegg Australia coupons. No one beats Newegg Australia on price. Fast delivery. Cancellation or refund requests are subject to the refund policy. What are you waiting for? Last chance to place an order before it's out of stock. Click to discover your favorites.

How to Use Newegg Coupon Codes?

You will love how easy it is to add a Newegg coupon code when shopping on Newegg.com. The step-by-step guide below details the entire process.

1. Visit the Newegg website and browse the product categories to find the product you are looking for. Alternatively, you can just use a keyword to search for your product using the search bar.
2. Click on the product. This will reveal the product page.
3. Click on the 'ADD TO CART' button. This will reveal a page with two choices, 'CONTINUE SHOPPING' or 'VIEW SHOPPING CART.' Click on the second choice. This will reveal the checkout page.
4. Complete the checkout process and wait for your product to be delivered.

As you can see, you will be able to apply your Newegg promo code within a few short steps. Remember to enter the right shipping details when checking out to ensure that you receive your purchase.

How to Save from Newegg?

There are a million and one ways to save when shopping on Newegg.com. Here are four of the easiest ways to ensure that your next consumer electronic device does not leave a hole in your savings account.

The number one way to save on the Newegg website is to visit the 'DEALS and SERVICES' page. The page lists all the deals that you can take advantage of on the site. There are daily deals on the website, deals for those who subscribe to the company's mailing list, and deals for those who use the retailer's mobile app to do their shopping. What's more, if you don't mind shopping clearance and refurbished PCs and electronics, you can enjoy some very steep discounts in the page's "CERTIFIED REFURBISHED OUTLET" section. The discounts in this section are very deep, as the items are not exactly new but have been certified as working perfectly. Shopping in this section could help you save a serious amount of cash.

You can also save money on the Newegg website by signing up to become a Newegg Premier member. It costs some money to sign up, but the benefits are there for all to see: about $30 for 6 months of membership and $50 for a whole year. As a member, you will enjoy free expedited shipping on all eligible items. You will also enjoy free returns on most products you purchase. Newegg Premier members also do not pay the restocking fee in case they return an item. As a Newegg Premier member, you can also enjoy rush processing and dedicated customer service. Most importantly, you will get exclusive deals via email every now and then. These deals will help you save good money, especially if you plan on becoming a regular shopper on Newegg.com.

The third biggest way to save money on Newegg.com is to get the Newegg credit card. Newegg credit card holders enjoy incredible benefits when shopping with the online retailer. Some of the benefits include sale alerts, exclusive deals and offers, and promotional financing offers on most products. For certain products, you will be charged no interest if you pay in full within 6 months, while for other products you won't pay any interest if you pay in full within 12 months.

The fourth biggest way to reduce your shopping total on Newegg.com is to use their price match guarantee. Basically, this feature enables you to claim back part of your purchase price if you find the same item you bought on Newegg being sold at a lower price by any major retailer. According to Newegg.com, major retailers include Walmart, Target, Staples, Sears, Office Max, Office Depot, Kmart, GameStop, Frys, Dell, Crutchfield, CDW, Best Buy, and Amazon. If you find the same product being sold at a lower price by any of these retailers, use Newegg's price match claim form to get part of your purchase price refunded. The price match guarantee is only valid for 14 days from the date of your purchase.

Newegg has been around for about two decades and has grown tremendously to become one of the biggest online retailers of quality consumer electronics. The company was founded in the year 2000 by Mr. Chang, who has now done two stints as its CEO.

There are quite a number of ways you can save money when shopping on Newegg.com. You can save money by visiting the retailer's 'DEALS & SERVICES' page for great deals. You can also save money by signing up to become a Newegg Premier member or a Newegg credit card holder. Lastly, you can save money using Newegg's price match guarantee. Don't pay in full; use these means to pay less on your next purchase from Newegg.

Up to 77% off from Newegg: 217 coupon codes.

Newegg is an electronics retailer specializing in computer products and accessories. The retailer is based solely online but has made good progress over the last two decades and now sells products worth around $3 billion every year. The company also has about 3,000 employees.

The company was founded in the year 2000 by an American of Taiwanese descent, a man by the name of Fred Chang. He coined the company name from "new egg" because, in his culture, the egg is a symbol of infinite potential, while the word "new" was used to symbolize the new hope that companies operating the then-new e-commerce business model would survive. After founding the company, Fred Chang served as the company's CEO and chairman for a good number of years before Tally Liu took over from him in 2008. However, Mr. Chang took back his position as CEO in 2010 when Tally Liu left the company.

The company has steadily grown in revenue over the years, from barely making any sales in 2000 to making billions of dollars. Newegg's steady growth is proven by the fact that it is often cited as one of the largest private companies in the US. In 2009, the company was listed as the 234th largest private company in the US, which is no mean feat. A few years earlier, in 2005, Newegg.com was named by Internet Retailer Magazine as one of the world's largest internet retailers. In 2017, just last year, the company was ranked by Internet Retailer as the 21st best internet retailer. Newegg.com now rakes in sales of over $3 billion every year, and the company makes profits to the tune of tens of millions of dollars. The company now has about 2,500 employees working at its offices. Chang is also no longer the owner, even though he remains the global CEO of Newegg.com. The new owner of Newegg is Liaison Interactive, a Chinese technology company.

The company sponsors and participates in several tech and PC gaming events around the world. Right now you can buy a variety of items from Newegg.com, including the latest computers and their components, the latest electronics, PC gaming accessories, networking tools and accessories, software packages, automotive items, health and fitness items, and home décor items.

There are quite a number of deals that you can use to save money when shopping at Newegg.com. There are also exclusive email deals and coupons that you can get by simply subscribing to the company's newsletter. This will allow you to regularly receive information about the latest deals, products, and promos from Newegg.com. Read on to find out how you can save more.

Newegg does not have a separate free shipping policy. However, a good number of items in their online store can be shipped for free. The items available for free shipping are clearly marked "free shipping." Additionally, you can use a filter in their online store to shop exclusively for products that are available for free shipping. Moreover, Premier account holders (a subscription service) enjoy free expedited shipping on all qualifying items. It usually takes no more than 3 business days for these account holders to receive their ordered items.

Newegg accepts returns for products bought on its website. However, not just any return will be accepted. The company has set in stone some guidelines that must be met for a return to be accepted and a refund to be issued. Perhaps the most exciting guideline is that the company accepts all unopened products if returned within 30 days from the date of purchase, regardless of the individual return policy for the product. However, other guidelines in the company's return policy do not sound as exciting; for example, the company may deduct a restocking fee from your refund before sending you the rest of the amount. Nevertheless, the bottom line is that the company accepts returns for products purchased on its website as long as they are undamaged and returned in their original packaging with all the accompanying accessories and other items.

Sharing is caring. Submit a coupon for Newegg here.
{ "redpajama_set_name": "RedPajamaC4" }
101
Comments: Used books don't have access codes; ships from U.S.A. 10th edition paperback; may have wear and/or considerable writing; ships fast!!! Choose expedited for quicker shipping. David Sadava is the author of 'Life: The Science of Biology, Vol. 2: Evolution, Diversity, and Ecology, 10th Edition', published 2012 under ISBN 9781464141232 and ISBN 1464141231. 49 copies from $5.28.
{ "redpajama_set_name": "RedPajamaC4" }
3,459
Q: Crystal Reports - Trying to reset page numbers based on a database field instead of a group

The issue I am having is that I need to reset page numbers based on data in the page header. What I am trying to do is create a packing slip. All the customer and order information is in the page header, and I need to reset the pages if there is more than one page for an order. Right now I am getting "N of M" pages, but M is the total number of pages for all orders (for example, "1 of 18" pages, because there are 17 orders and one order is two pages long). I want it to be "1 of 1" if there is only one page for the order, but "1 of 2" if there is more than one page. Does this make sense? Any ideas? Thanks.

A: Try this:

1. Right-click the page footer in the left margin.
2. Click SECTION EXPERT.
3. Click the FORMULA button to the right of RESET PAGE NUMBER AFTER and enter:

   PageNumber = 2

A: Assuming you are grouping by order number and starting a new page for each new order (by having checked the New Page After option in the Section Expert for the group footer), then you can check the option to Reset Page Number After in the Section Expert for the group footer.
{ "redpajama_set_name": "RedPajamaStackExchange" }
735
Attention: Miley Live is the third live album by American singer Miley Cyrus, released on April 1, 2022 by Columbia Records. Most of the album, except for the track "Attention", was recorded during her concert as part of the Super Bowl Music Fest at the Crypto.com Arena in Los Angeles on February 12, 2022, with a setlist drawn from her earlier studio releases Plastic Hearts (2020), Miley Cyrus & Her Dead Petz (2015), Bangerz (2013), The Time of Our Lives (2009), Breakout (2008) and Meet Miley Cyrus (2007), together with some covers. The album also includes two previously unreleased tracks: "Attention" and "You". Cyrus said the album was "curated by the fans, for the fans".

Background

Cyrus announced the album at the end of her performance at Lollapalooza Brazil in São Paulo on March 27, 2022. "You" was performed for the first time during the Miley's New Year's Eve Party special in Miami on December 31, 2021, while a preview of "Attention" was played during the Super Bowl Music Fest concert at which the album was recorded in February 2022. The music videos for "We Can't Stop x Where Is My Mind?", "Wrecking Ball x Nothing Compares 2 U", "Never Be Me" and "Like a Prayer" premiered on March 28, 29, 30 and April 1, 2022, respectively. On April 25 the singer announced a deluxe edition of the album for April 29, with five new songs and an additional performance of "You", drawn from her South American festival tour, the Attention Tour, and including a collaboration with the Brazilian singer Anitta. She said that "Angels Like You", from her concert in Colombia, was added in gratitude, because the song reached number one on iTunes in that country and because her fans sang it all night outside the hotel where she was staying in Bogotá.

Critical reception

Emily Swingle of Clash praised the singer's versatile voice, saying that "Cyrus' voice is truly a force to be reckoned with, adapting seamlessly to whatever genre she chooses to tackle. From the playful country-hip-hop romp of '4x4', the rap-heavy '23', to the rich blues cover of Janis Joplin's 'Maybe', it seems Cyrus can fit into almost any genre she digs her paws into." Writer Dani Blum of Pitchfork praised the album's covers but questioned the inclusion of minor songs from Cyrus' catalogue such as "23", saying: "Cyrus sounds faint, drowned out by the constantly spinning siren that underpins the song. It's jarring to hear these songs now, and appropriation became their legacy. For years, Cyrus clung to hip-hop styles and aesthetics, creating controversy after controversy, but here she sounds barely committed to the stretch of Bangerz songs."

Track listing

Standard edition

Notes: All tracks are labelled "Live", except "Attention". "Attention" is stylized in all caps.

Personnel

Musicians

Technical

Chart positions

Release history

References

Miley Cyrus albums
2022 live albums
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,720
{"url":"https:\/\/www.derickdeleon.com\/2014\/02\/distance-sensor-working-on-raspberry-pi.html","text":"### Distance sensor working on a Raspberry Pi\n\nA fun way to start playing with your Raspberry Pi is using LEDs and buttons, from there you can try other sensors.\u00a0This is a simple, hands on tutorial to get the HC-SR04 distance sensor working using the Raspberry Pi.\n\nMaterials\n\nThe Sensor\n\n HC-SR04 Ultrasonic Sensor\nThe HC-SR04 is a very simple ultrasonic sensor. Though it is not very delicate, it is still advised to manipulate with caution.\n\nBasically, this sensor emits a sound pulse, that echoes on the measured object's surface and is reflected back to the sensor. The time that the pulse took from being emitted to received is the base used to calculate the distance.\n\nBear in mind that the surface should be parallel to the sensor, if not the reflection and the distance measurement will be affected.\n Correct way to operate\n Incorrect way to operate\nPinout:\n\u2022 Vcc: 5V power supply\n\u2022 Trig: Trigger pin (input)\n\u2022 Gnd: Power ground\nThe way it works is by changing the trigger pin to logic 1 (in this case 3.3V from the Raspberry Pi GPIO will do), this will generate an 8 pulse at 40 kHz. This will be followed by the receiving pin changing to logic 1 (5V) and will thus remain until the pulse is reflected back and changed to logic 0 (0V).\n\nThe formula for distance is:\n\n$d=\\frac{v_{sound}\\times t_{pulse}}{2}$\n\u2022 d is the distance\n\u2022 v is the speed of sound, 340 m\/s\n\u2022 t is the time it took the pulse to get back to the sensor\nNote: The reason that it is divided by 2 is because the pulse has to travel twice the distance to get back to the source.\n\nThe Wiring\n\nAs the Raspberry Pi GPIO can only handle 3.3V input voltage, we will need to use a voltage divider. This will divide the 5V from the sensor and will keep the GPIO safe. It is important to note that the resistances values should always be R1 < R2 < 2R1.\n\n The wiring with voltage divider\n\nThe Software\n\nCopy or download this code and run it in terminal, remember to use root privileges. It should look something like the image below. 
You can always halt the execution by pressing Ctrl+C at any time.\n\n# Derick DeLeon\n# 2014-01\n# HC-SR04_example.py\n# This is an atempt to make the HC-SR04 ultrasonic distance sensor work\n\nimport RPi.GPIO as GPIO\nimport time\n\ndef prepare(GPIO_ECHO, GPIO_TRIGGER):\n# Set pins as output and input\nGPIO.setup(GPIO_TRIGGER,GPIO.OUT) # Trigger\nGPIO.setup(GPIO_ECHO,GPIO.IN) # Echo\n\n# Set trigger to False (Low)\nGPIO.output(GPIO_TRIGGER, False)\n\n# Allow module to settle\ntime.sleep(0.5)\n\n# get the distance from the sensor, echo - the input from sensor\ndef get_distance(GPIO_ECHO, GPIO_TRIGGER):\n\n# Send 10us pulse to trigger\nGPIO.output(GPIO_TRIGGER, True)\ntime.sleep(0.00001)\nGPIO.output(GPIO_TRIGGER, False)\nstart = time.time()\n\nwhile GPIO.input(GPIO_ECHO)==0:\nstart = time.time()\n\nwhile GPIO.input(GPIO_ECHO)==1:\nstop = time.time()\n\n# Calculate pulse length\nelapsed = stop-start\n\n# Distance pulse travelled in that time is time\n# multiplied by the speed of sound (cm\/s)\ndistance = elapsed * 34300\n\n# That was the distance there and back so halve the value\ndistance = distance \/ 2\nreturn distance\n\n# MAIN\nGPIO.setwarnings(False)\n# Set to Board pin number mapping\nGPIO.setmode(GPIO.BCM)\n# Set up the GPIO channels\ntrigger = 17\necho = 27\nprepare(echo, trigger)\ntry:\nwhile True:\nd = get_distance(echo, trigger)\nprint d\nexcept KeyboardInterrupt:\nGPIO.cleanup()\n\nGPIO.cleanup()","date":"2021-03-01 16:19:44","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.24741502106189728, \"perplexity\": 3562.7280582181834}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-10\/segments\/1614178362741.28\/warc\/CC-MAIN-20210301151825-20210301181825-00513.warc.gz\"}"}
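If you want to sanity-check the arithmetic without any hardware attached, you can feed a synthetic pulse time through the same conversion. This is a hypothetical offline sketch (the constant and helper name are mine, not part of the script above):

```python
# Hypothetical offline check of the distance formula (no sensor required).
SPEED_OF_SOUND_CM_S = 34300  # speed of sound in cm/s, as used above

def distance_from_elapsed(elapsed_s):
    # Round-trip echo time -> one-way distance, exactly as in get_distance()
    return elapsed_s * SPEED_OF_SOUND_CM_S / 2

# An echo pulse lasting ~0.58 ms corresponds to roughly 10 cm:
print(distance_from_elapsed(0.00058))  # ~9.95 (cm)
```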
Ultimate Textile 21 ft. Shirred Pleat Polyester Table Skirt - 42'' Bar Height, Black, by Ultimate Textile at The Primavera Blog. MPN: POLY-21FT-42-110. Hurry! Limited time offer, valid only while supplies last. Cover 42" tall high-top bar, food prep or catering tables with black polyester table skirts at your wedding, trade show event or formal presentation. These easy-care, machine-wash skirts drop 42 inches to the floor and feature Velcro sewn inside the skirt for ease of attachment with our included table skirt clips. Due to variations in computer monitors, color shades may vary in appearance from screen to screen. Please search Ultimate Textile for matching and complementary items in all shapes and sizes. Black 21-foot polyester table skirts are an excellent choice for trade shows, catering and buffet banquet tables, hotels and party rooms.
{ "redpajama_set_name": "RedPajamaC4" }
5,466
I got drawn into climate science because of its overlap with astronomy. For example, the pacing of ice ages is determined by changes in Earth's orbit, which in turn gives scientists a record of known changes in Earth's energy budget that can be used to assess how Earth's climate responds to disturbances. After studying climate science and realizing how little I know, I was appalled by how many people in the US disparage climate science while knowing much less than I did. Thus, I started sharing my learning as a test of myself and as teaching aids for those who are interested. The science of climatology is well founded in astronomy. The images on the right are links to illustrations and presentations that reside in the overlap between these fields. Some of my other climate science projects are listed here too.

Below is a graph showing Earth's orbit parameters, incident solar radiation at a few latitudes, levels of the greenhouse gases CO2 and CH4, and temperatures relative to a 1970-2000 baseline. Click the range bars (in white) on the timeline or drag the sliders to examine time periods. Click any graph to bring it to the front. Hover over a data point to read its value. Click a vertical scale marker (when displayed) to pin it; click again to remove it.

- Geological timeline: a 4.7-billion-year timeline which includes climate events (along with evolutionary and geological events).
- Earth Instrumental Temperature Record: GISS, NOAA, Hadley, and satellite temperatures displayed with CO2 and sunspot activity.
- Earth Orbit Viewer: an interactive illustration of long-term changes to Earth's orbit. This illustration complements the graphs shown below.
- Earth, Orbit, and Climate: a presentation on the astronomical foundation of climatology. Refer to the presentation for more information about the orbital cycles illustrated below.
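As a back-of-the-envelope illustration of how one of those orbital parameters feeds into the energy budget (my own sketch, not taken from the linked presentations): for a fixed semi-major axis, the annual-mean insolation scales as (1 - e^2)^(-1/2) with eccentricity e, so even the large eccentricity swings over ice-age cycles barely move the global annual mean; it is the seasonal and latitudinal redistribution of sunlight that does the work.

```python
# Sketch: annual-mean insolation vs. orbital eccentricity e,
# using the standard (1 - e^2)^(-1/2) scaling for a fixed
# semi-major axis. S0 is the present-day solar constant.
import math

S0 = 1361.0  # W/m^2

def annual_mean_insolation(e):
    return S0 / math.sqrt(1.0 - e**2)

for e in (0.0167, 0.058):  # today's eccentricity vs. a near-maximum value
    print(f"e = {e}: {annual_mean_insolation(e):.2f} W/m^2")
```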
{ "redpajama_set_name": "RedPajamaC4" }
7,658
In Egyptian mythology Kneph was originally the breath of life, his name meaning soul-breath. Indeed, according to Plutarch and Diodorus, Kneph was identical with the Greek pneuma. Kneph in this context was a spirit that breathed life into things, giving them form. Kneph eventually became considered to be the creator god himself, in Elephantine, although his identity was finally assimilated into the more important god Amun. In art, Kneph was depicted as a ram, the animal symbolic of the ba, a major aspect of the Egyptian notion of the soul; the Egyptian word for "ram" was "ba". He was also depicted wearing a uraeus, symbolic of his authority, as creator. In his hand he always bears the ankh, symbol of life.
{ "redpajama_set_name": "RedPajamaC4" }
8,891
{"url":"http:\/\/eccc.hpi-web.de\/report\/2010\/053\/","text":"REPORTS > DETAIL:\n\n### Paper:\n\nTR10-053 | 28th March 2010 22:03\n\n#### Hardness of Approximately Solving Linear Equations Over Reals\n\nTR10-053\nAuthors: Dana Moshkovitz, Subhash Khot\nPublication: 28th March 2010 22:09\nKeywords:\n\nAbstract:\n\nIn this paper, we consider the problem of approximately solving a\nsystem of homogeneous linear equations over reals, where each\nequation contains at most three variables.\n\nSince the all-zero assignment always satisfies all the equations\nexactly, we restrict the assignments to be non-trivial\". Here is\nan informal statement of our result: assuming the Unique Games\nConjecture, it is $\\NP$-hard to distinguish whether there is a\nnon-trivial assignment that satisfies $1-\\delta$ fraction of the\nequations or every non-trivial assignment fails to satisfy a\nconstant fraction of the equations with a margin\" of\n$\\Omega(\\sqrt{\\delta})$.\n\nWe develop linearity and dictatorship testing procedures for\nfunctions $f: \\R^n \\mapsto \\R$ over a Gaussian space, which could be\nof independent interest.\n\nOur research is motivated by a possible approach to proving the Unique Games Conjecture.\n\n### Comment(s):\n\nComment #1 to TR10-053 | 15th July 2010 20:58\n\n#### An improved version is available\n\nAuthors: Subhash Khot, Dana Moshkovitz\nAccepted on: 15th July 2010 20:58\nKeywords:\n\nComment:\n\nSubsequently to this work, we established the NP-hardness of the same problem (without assuming the Unique Games Conjecture). This appears as ECCC TR10-112.\n\nISSN 1433-8092 | Imprint","date":"2014-04-18 05:35:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7793560028076172, \"perplexity\": 2952.3899027687553}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-15\/segments\/1397609532573.41\/warc\/CC-MAIN-20140416005212-00561-ip-10-147-4-33.ec2.internal.warc.gz\"}"}
with many friends around the world." Blutus photo taken from Mahasarakam. Thank you K.Pui. Jiggo and Tim from England making eye contact, accompanied by May, his gf. Mr.Dion Cragg from South Africa with Max's daughter. Mrs.Rattana, a housewife, with Barbie. Thailand's Congressman, Vorasith Gultinun, with Vincent's son. One is never enough: K.Golf has taken Panda as his new family member after K.Seven. Mr.Poomchai Lumsum, CEO of Maung Thai Life Insurance Co.Ltd, posing with Monty, another beautiful son of Max. My honor that Mr.Golf, a brother of HRH Princess Srirat, has adopted Seven, a gorgeous son of Max, as his new member. Dr.Ron, a prosthodontist, with Thomas. Mr.Bug and his handsome son, Smily, posing on his new motorbike. After one year of waiting, Mr.Anton finally got what he wanted. Miss Aim is holding her new baby, Melon. Napachai Honey Melody won BBIG, posing with Paul Enrique Moral in the Philippines. Martin has grown up and has a beautiful head like his father, Max. Thank you K.Nook for sending this picture. Ms.Sumitra, a business owner, is holding the latest son of Max, TuaDum. Max has proven himself a great producer. Marcus will definitely have a good home; that's what Mrs.Poonpilarb and her children said. Mr.Supoj, a businessman, holding Florence while his son holds Mouse, their new member. Hana is going to live a happy life with a very lovely Nagasaki family. Ms.Teoy & her baby, CJ, came to visit us at the dog shows. Ms.Ying and her new champion, Napachai Celine. Metasith, Data Warehouse Manager, with Batman, the first son of Brewster. Jeep & his gf, toy model dealers, with Summer, sired by Vincent. Heng & Pen, Wung Lung Bakery's owners, with Sam, Robin x Zhara's daughter. Ms.Gaww adopts our cream boy from the first litter sired by Max. Mr.Seewee & his friend from Malaysia took a snapshot with Fido.
{ "redpajama_set_name": "RedPajamaC4" }
2,088
The Locked Door (1929), an American film
The Locked Door (2012), a Chinese film
Film disambiguation
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,412
This list presents castles and palaces in Central Bohemia and covers the Central Bohemian Region and the capital city of Prague. It is part of the list of castles and palaces in the Czech Republic. The list makes no claim to completeness. The Czech names are used, with the German names given in parentheses.

Central Bohemian Region (Středočeský kraj)

Capital City of Prague (Hlavní město Praha)

External links:
- Castles and palaces in the Czech Republic (Czech)
- Castles and palaces in the Central Bohemian Region (Czech)
- Castles and palaces in the capital city of Prague (Czech)
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,122
Q: How to stop the fadeIn and fadeOut repeating over multiple hovers?

The fadeIn and fadeOut keep repeating when I hover multiple times over the company names. How can I prevent the repetition? Thanks in advance. Demo link: http://fiddle.jshell.net/nikhilvkd/77WPz/

```javascript
$(function () {
    $(".aa").hover(function () {
        $(this).find(".aa-over").fadeIn();
    }, function () {
        $(this).find(".aa-over").fadeOut();
    });
});

$(document).bind('mousemove', function (e) {
    $('.aa-over').css({
        left: e.pageX + 20,
        top: e.pageY
    });
});
```

A: Try to clear the animation queue before starting a new one by using `.stop(true)`:

```javascript
$(function () {
    $(".aa").hover(function () {
        $(this).find(".aa-over").stop(true).fadeIn();
    }, function () {
        $(this).find(".aa-over").stop(true).fadeOut();
    });
});

$(document).bind('mousemove', function (e) {
    $('.aa-over').css({
        left: e.pageX + 20,
        top: e.pageY
    });
});
```

DEMO
{ "redpajama_set_name": "RedPajamaStackExchange" }
965
\section{Introduction\protect\footnote{Electronic CNRS preprint hal-00464794 (17/03/2010).}}

I prove that the summation over arbitrary ribbon graphs with legs produces explicit solutions to the noncommutative Batalin-Vilkovisky equation, which I introduced in \cite{B06a}. This generalizes the construction of $A_{\infty }$-structures via summation over trees, see \cite{K}, \cite{M98}, \cite{H} and references therein. The noncommutative Batalin-Vilkovisky equation is the equation
\begin{equation}
\hbar \Delta S+\frac{1}{2}\{S,S\}=0\Leftrightarrow \Delta (\exp \frac{1}{\hbar }S)=0  \label{ncBV}
\end{equation}
for elements of the symmetric product of cyclic words
\begin{equation*}
S=\sum_{i,g}\hbar ^{2g+i-1}S_{i,g},~~S_{i,g}\in Symm^{i}(\oplus _{j=1}^{\infty }((\Pi B)^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee },
\end{equation*}
where $B$ is a $\mathbb{Z}/2\mathbb{Z}$-graded vector space with odd scalar product and $\Delta $ is the odd second-order operator defined via dissection-gluing of cyclic cochains, see Section 1.2 in \cite{B06b} or Section 5 in \cite{B06a}. For $B$ with even scalar product, the noncommutative Batalin-Vilkovisky equation and the operator $\Delta $ are defined on elements of the \emph{exterior} product of cyclic words $Symm(\oplus _{j=1}^{\infty }\Pi ((\Pi B)^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee }$.

To any solution $S$ of equation (\ref{ncBV}) I associated in \cite{B06b} the $A_{\infty }$ $gl(N)$-equivariant matrix integral of the form, in the odd scalar product case,
\begin{equation*}
\int_{\gamma }\exp \frac{1}{\hbar }\Big(Tr(m_{A_{\infty }})+\sum_{2g+i>1}\hbar ^{2g+i-1}S_{i,g}(X)+\frac{1}{2}\left\langle [\Xi ,X],X\right\rangle \Big)\,w~dX,
\end{equation*}
where $m_{A_{\infty }}=S_{1,0}$ is the term of $S$ of lowest order in $\hbar $, which is the cyclic $A_{\infty }$-algebra tensor. The term $\exp \frac{1}{\hbar }(\sum_{2g+i>1}\hbar ^{2g+i-1}S_{i,g}+\frac{1}{2}\left\langle [\Xi ,X],X\right\rangle )$ can be understood as a multitrace and equivariant completion of $\exp (\frac{1}{\hbar }Tr(m_{A_{\infty }}))$ to a closed $gl(N)$-equivariant differential form, see \cite{B09a}. This is the integration framework which I proposed to associate with the equation $\{m_{A_{\infty }},m_{A_{\infty }}\}=0$. These integrals can be understood as the generalisation of the matrix Airy integral to arbitrary higher dimensions and, simultaneously, as the noncommutative generalisation of periods of Calabi-Yau manifolds. They have many remarkable properties, see \cite{B06b}, \cite{B09a}, \cite{B09c}. For example, I proved in \cite{B06b} that their asymptotic expansion is given by the pairing between the characteristic classes $c_{S}\in H_{\ast }(\overline{\mathcal{M}}_{g,n})$ of the quantum $A_{\infty }$-algebra and the classes $c_{\Lambda }\in H^{\ast }(\overline{\mathcal{M}}_{g,n})$ associated with the odd part of the equivariant $gl(N)$-action. The solutions to equation (\ref{ncBV}) are the lagrangians for these matrix integrals, and this is one of the principal reasons why it is important to develop instruments which construct explicitly and classify such solutions.
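Let me spell out the equivalence stated in (\ref{ncBV}); this is a standard Batalin-Vilkovisky computation, recorded here only for the reader's convenience. Since $\Delta $ is an odd second-order operator, with $\{\cdot ,\cdot \}$ the odd bracket measuring its failure to be a derivation, one has for even $S$
\begin{equation*}
\Delta \Big(\exp \frac{1}{\hbar }S\Big)=\frac{1}{\hbar }\Big(\Delta S+\frac{1}{2\hbar }\{S,S\}\Big)\exp \frac{1}{\hbar }S,
\end{equation*}
so the vanishing of the left-hand side is equivalent, after multiplying by $\hbar ^{2}$, to $\hbar \Delta S+\frac{1}{2}\{S,S\}=0$.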
The universal form of the noncommutative Batalin-Vilkovisky equation, and the fact that it leads to a remarkable integration theory, suggest that the $A_{\infty }$-enhancement, widely used in noncommutative derived algebraic geometry, should in many interesting cases be considered as the first-order approximation to the full structure described by a solution to the noncommutative Batalin-Vilkovisky equation in the space of symmetric or exterior powers of cyclic tensors. I explain below how to write down such a full structure in this context, see Corollary \ref{corwell}.

I work from the beginning in the equivariant extension of the noncommutative Batalin-Vilkovisky formalism \cite{B09a}, \cite{B06b}. In particular, my results in this paper are valid in the framework of algebraic structures with supersymmetry. My basic setting throughout the paper is an odd linear operator $I$, which acts as a symmetry of an associative algebra $A$, or of a more general algebraic structure, and whose square is, in general, nonzero: $I^{2}\neq 0$. The interesting applications include both the cases with $I^{2}=0$ and with $I^{2}\neq 0$. An important example from \cite{B06b}, deeply related with tautological classes on the moduli space of Riemann surfaces, is the odd differentiation $[\Xi ,\cdot ]$ acting on the Bernstein-Leites odd general matrix algebra $q(N)$ with its odd trace, where $\Xi $ is an odd matrix from $q(N)$. In this case $I^{2}=[\Xi ^{2},\cdot ]$ and $I^{2}\neq 0$.

Briefly, my construction is a sum, over ribbon graphs with legs, of the tensors $W_{\Gamma }$, defined on symmetric/exterior products of cyclic words and given by the contraction defined by the ribbon graph, where the elements of $\Pi B$ are attached to the legs of the ribbon graph, the structure constants of the algebra $A$ (associative, $A_{\infty }$) are attached to the vertices, and the propagator, which is the inverse of the scalar product modified by a homotopy inverse to $I$, is attached to the edges, see (\ref{wg}). I also give generalizations to summation over stable ribbon graphs, whose relation with the noncommutative Batalin-Vilkovisky equation was described in \cite{B06a}.

The construction here is very closely related with my construction from \cite{B06a}. In \cite{B06a} I constructed the homology class in the stable ribbon graph complex from any solution of the noncommutative Batalin-Vilkovisky equation. The construction there associates a number to any stable ribbon graph with no legs: it is the contraction with products of certain multitrace tensors $m_{i,g}$ attached to the vertices and the inverse of the scalar product attached to the edges. Here the ingredients of the construction giving a solution to the Batalin-Vilkovisky equation are similar, except that here I relax the condition on the linear term: $d^{2}\neq 0$. To avoid confusion, this linear term is denoted by $I$. In addition I take a self-adjoint operator $H$ such that $Id-[I,H]=P$ is idempotent and $[I^{2},H]=0$; notice that $H^{2}\neq 0$ in general. In a sense, $H$ is a homotopy inverse to $I$ on the image of $Id-P$. I use $H$ to modify the scalar product associated with the edges. The outcome of the construction is a solution, on the image of the idempotent $P$, to the noncommutative Batalin-Vilkovisky equation (equivariant if $I^{2}$ is nonzero on the image of $P$).

The similarity of the construction in this paper and the construction from the previous paper \cite{B06a} is not accidental. In fact, the quasi-isomorphism $PA\hookrightarrow A$ induces the quasi-isomorphism from the dg-Lie algebra $\underline{Mor}(\mathbb{M}_{\mathcal{D}^{\vee }}\mathcal{P}^{dual},\mathcal{E}[A])$ to $\underline{Mor}(\mathbb{M}_{\mathcal{D}^{\vee }}\mathcal{P}^{dual},\mathcal{E}[PA])$, see \cite{B06a}, section 5.1. And the construction described below is the result of the action of this quasi-isomorphism on the solution to the noncommutative Batalin-Vilkovisky equation. On the other hand, such a solution is the same, by Theorem 1 of \cite{B06a}, as the action of the stable ribbon graph complex, whose part on graphs with no legs gives the homology class construction from \cite{B06a}.

It is important to stress that in order for the summation over ribbon graphs, with the $A_{\infty }$-tensors attached to vertices, to give a solution to the noncommutative Batalin-Vilkovisky equation, the following extra condition must be imposed on the cyclic $A_{\infty }$-algebra:
\begin{equation*}
\Delta m_{A_{\infty }}=0.
\end{equation*}
One significant case when this condition is automatically satisfied is the case of a $\mathbb{Z}/2\mathbb{Z}$-graded associative algebra. Another possibility to construct the solution on the subspace $PA\subset A$ is to extend the $A_{\infty }$-structure on $A$ to a solution to the noncommutative Batalin-Vilkovisky equation in the initial space $A$ and then take the sum over \emph{stable} ribbon graphs. I describe the case when such an extension is possible in Corollary \ref{corwell}.

Here is a brief description of the content of the paper. In Section \ref{sAss} I describe the summation over ribbon graphs starting from the input data given by the odd symmetry $I$ acting on a $\mathbb{Z}/2\mathbb{Z}$-graded associative algebra $A$, $\dim _{k}A<\infty $, with odd or even scalar product. This is the main construction of the paper. Next I consider the case of a cyclic $A_{\infty }$-algebra. The general case of an associative algebra with no scalar product, and also of an $A_{\infty }$-algebra with no scalar product, is then reduced to the case of an algebra with scalar product by putting $\widetilde{A}=A\oplus (\Pi A)^{\vee }$ with the odd scalar product given by the natural odd pairing between $A$ and $(\Pi A)^{\vee }$, or by putting $\widetilde{A}=A\oplus A^{\vee }$ with the even scalar product given by the natural even pairing between $A$ and $A^{\vee }$. In Section \ref{sectmodop} I give the generalization to the case of summation over graphs with arbitrary $\mathbb{S}_{n}$-modules, endowed with some contraction operations, attached to the vertices. The supersymmetry $I$ in this case acts on an algebra over $\mathcal{FP}$, the Feynman transform of an arbitrary twisted modular operad $\mathcal{P}$, and the summation over $\mathcal{P}$-marked graphs gives a solution to the $\mathcal{P}$-type Batalin-Vilkovisky equation introduced in \cite{B06a}. In the last Section \ref{sectionIntegr} I outline applications of the construction by recalling the relations, established in \cite{B06b}, \cite{B09a}, of the noncommutative Batalin-Vilkovisky formalism with equivariant matrix integration, in particular the correspondence between solutions to the noncommutative Batalin-Vilkovisky equation and equivariantly closed differential forms on $\left( gl(N|N)\otimes \Pi V\right) _{0}$.

\emph{Notations.} I work in the tensor category of super vector spaces over an algebraically closed field $k$, $char(k)=0$. Let $V=V_{0}\oplus V_{1}$ be a $\mathbb{Z}/2\mathbb{Z}$-graded vector space. I denote by $\overline{\alpha }$ the parity of an element $\alpha $ and by $\Pi V$ the super vector space with inverted parity. An element $(a_{1}\otimes a_{2}\otimes \ldots \otimes a_{n})$ of $A^{\otimes n}$ is denoted by $(a_{1},a_{2},\ldots ,a_{n})$. I denote by $V^{\vee }$ the dual vector space $\limfunc{Hom}(V,k)$. For a module $U$ over a finite group $G$ I denote by $U^{G}$ the subspace of invariants: $\{u\in U\ |\ \forall g\in G:gu=u\}$. A graph $\Gamma $ is a triple $(Flag(\Gamma ),\lambda ,\sigma )$, where $Flag(\Gamma )$ is a finite set, whose elements are called flags, $\lambda $ is a partition of $Flag(\Gamma )$, and $\sigma $ is an involution acting on $Flag(\Gamma )$. By a partition here one understands a disjoint decomposition into unordered subsets. These subsets are the vertices of the graph. The set of vertices is denoted by $Vert(\Gamma )$. The subset of $Flag(\Gamma )$ corresponding to a vertex $v$ is denoted by $Flag(v)$. The cardinality of $Flag(v)$ is called the valence of $v$ and is denoted $n(v)$. The edges of the graph are the pairs of flags forming a nontrivial two-cycle of the involution $\sigma $. The set of edges is denoted $Edge(\Gamma )$. The legs of the graph are the fixed elements of the involution $\sigma $. The set of legs is denoted $Leg(\Gamma )$. The number of legs is denoted $n(\Gamma )$. The cardinality of a finite set $X$ is denoted by $|X|$. Throughout the paper, unless stated explicitly otherwise, $(-1)^{\epsilon }$ in the formulas denotes the standard Koszul sign, which can be worked out by counting $(-1)^{\overline{a}\overline{b}}$ every time the objects $a$ and $b$ are interchanged in order to obtain the given formula.

\section{Associative algebra with odd differentiation.\label{sAss}}

I consider in this section a $\mathbb{Z}/2\mathbb{Z}$-graded associative algebra $A$, $\dim _{k}A<\infty $, with multiplication denoted by $m_{2}:A^{\otimes 2}\rightarrow A$ and an \emph{odd differentiation} $I:A\rightarrow \Pi A$,
\begin{equation*}
Im_{2}(a,b)=m_{2}(Ia,b)+(-1)^{\overline{a}}m_{2}(a,Ib);
\end{equation*}
in particular, if $I^{2}=0$ then this is a differential $\mathbb{Z}/2\mathbb{Z}$-graded algebra.

\subsection{Odd scalar product.}

I assume that the algebra $A$ is cyclic with odd scalar product
\begin{equation*}
\beta :A^{\otimes 2}\rightarrow \Pi k,
\end{equation*}
so that the three-tensor
\begin{eqnarray*}
m &\in &((\Pi A)^{\otimes 3})^{\vee }, \\
m(\pi a,\pi b,\pi c) &=&(-1)^{\overline{b}}\beta (m_{2}(a,b),c),
\end{eqnarray*}
is cyclically invariant,
\begin{equation*}
m(\pi a,\pi b,\pi c)=(-1)^{(\overline{c}+1)(\overline{a}+\overline{b})}m(\pi c,\pi a,\pi b),
\end{equation*}
and that $\beta $ is preserved by $I$:
\begin{equation*}
\beta (Ia,b)+(-1)^{\overline{a}}\beta (a,Ib)=0.
\end{equation*}
The modifications for the variant with an even scalar product are described below. Below I also consider the variant for a general differential $\mathbb{Z}/2\mathbb{Z}$-graded algebra without scalar product. It is reduced to the case with even/odd scalar product by putting $\widetilde{A}=A\oplus A^{\vee }$, or $\widetilde{A}=A\oplus \Pi A^{\vee }$, with their natural scalar products. I have
\begin{equation}
m(Ia,b,c)+(-1)^{\overline{a}}m(a,Ib,c)+(-1)^{\overline{a}+\overline{b}}m(a,b,Ic)=0,  \label{Im}
\end{equation}
which reflects the Leibniz rule for the differentiation $I$.

Denote by $\beta ^{\vee }\in (\Pi A)^{\otimes 2}$ the tensor of the scalar product on the dual vector space; then for any $a,b,c,d\in A$,
\begin{equation}
\left\langle m(\pi a,\pi b,\cdot )m(\cdot ,\pi c,\pi d),\beta ^{\vee }\right\rangle =(-1)^{\varepsilon }\left\langle m(\pi d,\pi a,\cdot )m(\cdot ,\pi b,\pi c),\beta ^{\vee }\right\rangle ,  \label{mm}
\end{equation}
which is the associativity of the multiplication $m_{2}$.

Let $H$ be an odd self-adjoint operator
\begin{equation*}
H:A\rightarrow \Pi A,~~~H^{\vee }=H,
\end{equation*}
such that
\begin{equation}
Id-[I,H]=P  \label{Idp}
\end{equation}
is an idempotent operator $P:A\rightarrow A$,
\begin{equation*}
P^{2}=P.
\end{equation*}
I assume also that $H$ commutes with $I^{2}$; this is automatic if $I^{2}=0$. Such an $H$ can always be found, for example, by considering the kernel $\ker I^{2}=\{x\ |\ I^{2}x=0\}$, on which $H$ is a homotopy on the complement to a space representing the cohomology of $I|_{\ker I^{2}}$, and the orthogonal complement $(\ker I^{2})^{\intercal }$ of $\ker I^{2}$, on which $I^{2}$ is invertible and on which $H$ can be taken, for example, to be $\frac{1}{2}I|_{(\ker I^{2})^{\intercal }}^{-1}$. Notice that in general $H^{2}\neq 0$. I denote by $B$ the subspace which is the image of the idempotent $P$.

Let $\Gamma $ be a trivalent ribbon graph, i.e. a trivalent graph with fixed cyclic orders on the sets of the three flags attached to every vertex. Let $\Sigma _{\Gamma }$ be the corresponding oriented two-dimensional surface. Then I put:

\begin{itemize}
\item the three-tensors
\begin{equation*}
m^{v}\in ((\Pi A)^{\otimes Flag(v)})^{\vee }
\end{equation*}
on every vertex $v$;

\item the two-tensors
\begin{gather*}
\beta _{H}^{\vee ,e}\in (\Pi A)^{\otimes \{f,f^{\prime }\}}, \\
\beta _{H}^{\vee ,e}=\beta ^{\vee }(H^{\vee }u_{f},v_{f^{\prime }})=(-1)^{\overline{u_{f}}~\overline{v_{f^{\prime }}}}\beta ^{\vee }(H^{\vee }v_{f^{\prime }},u_{f}),
\end{gather*}
on any interior edge $e=(ff^{\prime })$;

\item an element $a_{l}\in \Pi B$ on any leg $l\in Leg(\Gamma )$; this gives a partition of the set of elements $\{a_{l}\}_{l\in Leg(\Gamma )}$ into subsets corresponding to the components of the boundary $\partial \Sigma _{\Gamma }$, together with cyclic orders on these subsets.
\end{itemize}

Notice that both $m^{v}$ and $\beta _{H}^{\vee ,e}$ are even elements, so that the products
\begin{equation*}
\tbigotimes_{v\in Vert(\Gamma )}m^{v}\in ((\Pi A)^{\otimes Flag(\Gamma )})^{\vee }
\end{equation*}
and
\begin{equation*}
\tbigotimes_{e\in Edge(\Gamma )}\beta _{H}^{\vee ,e}\in (\Pi A)^{\otimes Flag(\Gamma )\setminus Leg(\Gamma )}
\end{equation*}
are canonically defined.

\begin{definition}
I define the tensor $W_{\Gamma }$ as the contraction
\begin{equation}
W_{\Gamma }(\tbigotimes_{l\in Leg(\Gamma )}a_{l})=\left\langle \tbigotimes_{v\in Vert(\Gamma )}m^{v},\left( \tbigotimes_{e\in Edge(\Gamma )}\beta _{H}^{\vee ,e}\right) \tbigotimes_{l\in Leg(\Gamma )}a_{l}\right\rangle .  \label{wg}
\end{equation}
\end{definition}

Notice that $W_{\Gamma }$ is cyclically invariant on every subset of $\{a_{l}\}_{l\in Leg(\Gamma )}$ corresponding to a component of the boundary of $\Sigma _{\Gamma }$.
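To illustrate the definition, here are the two simplest instances of the contraction (\ref{wg}), with the Koszul signs written as $(-1)^{\epsilon }$. For the graph $\Gamma _{0}$ with a single vertex and three legs (and no internal edges),
\begin{equation*}
W_{\Gamma _{0}}(a_{1},a_{2},a_{3})=m(a_{1},a_{2},a_{3}),\qquad a_{i}\in \Pi B,
\end{equation*}
i.e. the restriction of $m$ to $\Pi B$; and for the graph $\Gamma _{1}$ with two vertices joined by a single internal edge $e$, with legs $a_{1},a_{2}$ attached to the first vertex and $a_{3},a_{4}$ to the second,
\begin{equation*}
W_{\Gamma _{1}}(a_{1},a_{2},a_{3},a_{4})=(-1)^{\epsilon }\left\langle m(a_{1},a_{2},\cdot )m(\cdot ,a_{3},a_{4}),\beta _{H}^{\vee ,e}\right\rangle .
\end{equation*}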
Moreover the cyclic orders on flags at vertices induce the orientation on the ribbon graph $\Gamma $, whose detailed analysis, see e.g.\cite{B09b}, \cite{B06a} or section \ref{sectmodop} below, shows that $W_{\Gamma }$ belongs to the symmetric product \begin{equation*} W_{\Gamma }\in Symm(\oplus _{j=1}^{\infty }(\Pi B^{\otimes j})^{\mathbb{Z}/ \mathbb{Z}})^{\vee } \end{equation* Let $\chi (\Sigma _{\Gamma })$ denotes the genus of $\Sigma _{\Gamma }$, \begin{equation*} \chi (\Sigma _{\Gamma })=2-2g(\Sigma _{\Gamma })-i(\Sigma _{\Gamma }), \end{equation* where $g(\Sigma _{\Gamma })$, $i(\Sigma _{\Gamma })$ are the genus and the number of boundary components of $\Sigma _{\Gamma }$. I put \begin{equation} S=\tsum_{\{\Gamma \}}\hbar ^{1-\chi (\Sigma _{\Gamma })}W_{\Gamma } \label{S} \end{equation where the sum is over isomorphism classes of connected trivalent graphs with nonempty subsets of legs on every boundary component of $\Sigma _{\Gamma }$. One can include the graphs with empty subsets of legs on boundary components by adding the constant term to the Batalin-Vilkovisky operator $\Delta $, I leave the details to the interested reader. \begin{proposition} The number of such ribbon graphs with fixed $\chi (\Sigma _{\Gamma })$ and fixed number of $n(\Gamma )$ of legs is finite. \end{proposition} \begin{proof} This is a standard lemma, whose proof I include here for convenience of the reader. The number of flags give \begin{equation*} n(\Gamma )+2|Edge(\Gamma )|=3|Vert(\Gamma )| \end{equation* Also \begin{equation*} \chi (\Sigma _{\Gamma })=|Vert(\Gamma )|-|Edge(\Gamma )| \end{equation* since $\Sigma _{\Gamma }$ is homotopic to the geometric realisation of \Gamma $. It follows that \begin{equation*} |Edge(\Gamma )|=n(\Gamma )-3\chi (\Sigma _{\Gamma }) \end{equation* and \begin{equation*} |Vert(\Gamma )|=n(\Gamma )-2\chi (\Sigma _{\Gamma }) \end{equation* and hence the number of such graphs is finite. \end{proof} \begin{theorem} \label{th1}The sum over ribbon graphs $S$ defined in (\ref{S}) satisfy the equivariant noncommutative Batalin-Vilkovisky equation: \begin{equation} \hbar \Delta S+\frac{1}{2}\{S,S\}+I^{\vee }S=0,\,\, \label{eqBV} \end{equation in particular if $I|_{B}$ is zero then $S$ is the solution of the non-commutative Batalin-Vilkovisky equation from \cite{B06a},\cite{B06b} \begin{equation*} \hbar \Delta S+\frac{1}{2}\{S,S\}=0 \end{equation* If $I|_{B}\neq 0$, but $I^{2}|_{B}=0$, then $S+$ $S_{0,2}$ is also a solution to the non-commutative Batalin-Vilkovisky equation from \cite{B06a} \cite{B06b}, where $S_{0,2}=(-1)^{\epsilon }\beta (I\cdot ,\cdot )|_{B}$ is the quadratic term corresponding to the differential $I|_{B}$. \end{theorem} \begin{proof} The proof is straightforward. For a trivalent graph $\Gamma $ and an internal edge $e\in Edge(\Gamma )$ consider the three tensors \begin{equation*} W_{\Gamma ,e}^{[I,H]},W_{\Gamma ,e}^{Id},W_{\Gamma ,e}^{P}\in Symm(\oplus _{j=1}^{\infty }((\Pi B)^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee } \end{equation* which are defined by the same contraction as $W_{\Gamma }$ except that at the edge $e\in Edge(\Gamma )$ I put the tensors \begin{equation*} \beta ^{\vee }([I^{\vee },H^{\vee }]u_{f},v_{f^{\prime }}),~\beta ^{\vee }(u_{f},v_{f^{\prime }}),\beta ^{\vee }(P^{\vee }u_{f},v_{f^{\prime }}) \end{equation* correspondingly instead of $\beta _{H}^{\vee ,e}$. Then, from (\ref{Idp}) \begin{equation*} W_{\Gamma ,e}^{P}=W_{\Gamma ,e}^{Id}-W_{\Gamma ,e}^{[I,H]}. 
\end{equation* By summing over $v\in Vert(\Gamma )$ of the Leibnitz rule (\ref{Im}) and noticing that \begin{equation*} \beta ^{\vee }(H^{\vee }I^{\vee }u_{f},v_{f^{\prime }})+\beta ^{\vee }(u_{f},H^{\vee }I^{\vee }v_{f^{\prime }})=-\beta ^{\vee }([I^{\vee },H^{\vee }]u_{f},v_{f^{\prime }}) \end{equation* I get \begin{equation*} I^{\vee }W_{\Gamma }-\tsum_{e}W_{\Gamma ,e}^{[I,H]}=0. \end{equation* Next I use (\ref{mm}) to substitute in $W_{\Gamma ,e}^{Id}$ the contraction \begin{equation*} \left\langle m(\pi a,\pi b,\cdot )m(\cdot ,\pi c,\pi d),\beta ^{\vee }\right\rangle \end{equation* corresponding to the internal edge $e\in Edge(\Gamma )$ by \begin{equation*} (-1)^{\varepsilon }\left\langle m(\pi d,\pi a,\cdot )m(\cdot ,\pi b,\pi c),\beta ^{\vee }\right\rangle . \end{equation* This corresponds to passing from the trivalent ribon graph $\Gamma $ to the trivalent ribbon graph $\Gamma ^{\prime }$ obtained by the standard transformation on the edge $e$, preserving the overal cyclic order of the flags corresponding to $\pi a$, $\pi b$, $\pi c$, $\pi d$. This transformation preserves the surface $\Sigma _{\Gamma }$ and the distribution of elements of $Leg(\Gamma )$ over the boundary components of \Sigma _{\Gamma }$. Therefore the sum of $W_{\Gamma ,e}^{Id}$ over all internal edges and over the set of trivalent graphs, having the same $\Sigma _{\Gamma }$ with same distribution of $Leg(\Gamma )$ over the boundary components, is zero: \begin{equation*} \tsum_{\{\Gamma \},\Sigma _{\Gamma }=\Sigma ,e\in Edge(\Gamma )}W_{\Gamma ,e}^{Id}=0. \end{equation* Notice that $P^{2}=P$ implies that \begin{equation*} \beta ^{\vee }(P^{\vee }u_{f},v_{f^{\prime }})=\beta ^{\vee }(P^{\vee }u_{f},P^{\vee }v_{f^{\prime }}). \end{equation* Then, from the definition of the Batalin-Vilkovisky operator and the odd Poisson bracket on $B$ it follows that \begin{multline*} \tsum_{\{\Gamma \}}\hbar ^{2-\chi (\Sigma _{\Gamma })}\Delta W_{\Gamma } \frac{1}{2}\{\tsum_{\{\Gamma \}}\hbar ^{1-\chi (\Sigma _{\Gamma })}W_{\Gamma },\tsum_{\{\Gamma ^{\prime }\}}\hbar ^{1-\chi (\Sigma _{\Gamma ^{\prime }})}W_{\Gamma ^{\prime }}\}= \\ =\tsum_{\{\widetilde{\Gamma }\},e\in Edge(\widetilde{\Gamma })}\hbar ^{1-\chi (\Sigma _{\Gamma })}W_{\widetilde{\Gamma },e}^{P} \end{multline* where each term on left hand side corresponds precisely to the right hand side term $\hbar ^{1-\chi (\Sigma _{\Gamma })}W_{\widetilde{\Gamma },e}^{P} , where $\widetilde{\Gamma }$ is obtained by gluing two legs to form the edge $e$ from either the single surface or the two surfaces . Notice that the condition, that $\Delta $ does not get contributions from the neighboring points on the same circle, and that $\Delta $ and $\{\cdot ,\cdot \}$ do not get contributions from pair of cycles with just one element on each, corresponds precisely to the fact that the resulting surface $\Sigma _{\widetilde{\Gamma }}$ has always nonempty subsets of $Legs \widetilde{\Gamma })$ on the boundary components. \end{proof} \begin{remark} In the infinite dimensional case instead of $\beta ^{\vee }(H^{\vee }\cdot ,\cdot )$ the kernel constructed from appropriate resolution of the diagonal, i.e. $A$ as $A^{op}\otimes A$-bimodule, must be used. Details will appear elsewhere. 
\end{remark}

\begin{corollary}
\label{corwell}Given a $\mathbb{Z}/2\mathbb{Z}$-graded cyclic $A_{\infty }$-algebra $B$, if $B$ has a cyclic d($\mathbb{Z}/2\mathbb{Z}$)g associative model $A$, in the sense that $B$ is obtained from $A$ via the summation over trees, such that for $A$ the contractions (\ref{wg}) over arbitrary trivalent ribbon graphs are well-defined, then the summation over such graphs gives an extension of the cyclic $A_{\infty }$-algebra on $B$ to a solution of the non-commutative Batalin-Vilkovisky equation.
\end{corollary}

\subsection{Even scalar product.}

Assume now that the scalar product on $A$ is even:
\begin{equation*}
\beta :A^{\otimes 2}\rightarrow k.
\end{equation*}
Then given an \emph{odd differentiation} $I:A\rightarrow \Pi A$ and an odd selfadjoint operator $H:A\rightarrow \Pi A$ satisfying (\ref{Idp}), I construct the tensors $W_{\Gamma }$ for any trivalent ribbon graph by the same contraction (\ref{wg}). The only difference is that in this case both the three-tensors $m^{v}$ attached to the vertices and the two-tensors $\beta _{H}^{\vee ,e}$ are odd, and I sum over oriented ribbon graphs, where the orientation is an orientation on the space $k^{Flag(\Gamma )}$. Careful analysis of the corresponding orientation on $\Gamma $, analogous to the one from \cite{B06a}, shows that $W_{\Gamma }$ belongs to the \emph{exterior} power of the space of cyclic tensors
\begin{equation*}
W_{\Gamma }\in Symm(\oplus _{j=1}^{\infty }\Pi (\Pi B^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee }
\end{equation*}
Then I define the sum over oriented trivalent graphs parallel to (\ref{S}). For the case of the even scalar product the following variant of the previous theorem holds. The proof is the same.

\begin{theorem}
The sum over oriented trivalent ribbon graphs $S$ satisfies the equivariant noncommutative Batalin-Vilkovisky equation:
\begin{equation*}
\hbar \Delta S+\frac{1}{2}\{S,S\}+I^{\vee }S=0,
\end{equation*}
in particular if $I|_{B}=0$ then $S$ is a solution of the non-commutative Batalin-Vilkovisky equation from \cite{B06a},\cite{B06b}
\begin{equation*}
\hbar \Delta S+\frac{1}{2}\{S,S\}=0
\end{equation*}
If $I|_{B}\neq 0$, but $I^{2}|_{B}=0$, then $S+S_{0,2}$ is also a solution to the non-commutative Batalin-Vilkovisky equation from \cite{B06a},\cite{B06b}, where $S_{0,2}=(-1)^{\epsilon }\beta (I\cdot ,\cdot )|_{B}$ is the quadratic term corresponding to the differential $I|_{B}$.
\end{theorem}

\subsection{General algebras.}

Let now $A$, $\dim _{k}A<\infty $, be an arbitrary $\mathbb{Z}/2\mathbb{Z}$-graded algebra, with an \emph{odd differentiation} $I:A\rightarrow \Pi A$. This case is reduced to the previous cases by putting $\widetilde{A}=A\oplus (\Pi A)^{\vee }$ with the odd scalar product $\beta $ given by the natural odd pairing between $A$ and $(\Pi A)^{\vee }$, or by putting $\widetilde{A}=A\oplus A^{\vee }$ with the even scalar product $\beta $ given by the natural even pairing between $A$ and $A^{\vee }$. Then $\widetilde{A}$ is naturally an associative algebra with odd, respectively even, scalar product. For any $a^{\ast },b^{\ast }$ from $(\Pi A)^{\vee }$, respectively from $A^{\vee }$,
\begin{equation*}
m_{2}(a^{\ast },b^{\ast })=0
\end{equation*}
and $m_{2}(a^{\ast },b)$ takes value in $(\Pi A)^{\vee }$, respectively in $A^{\vee }$,
\begin{equation*}
m_{2}(a^{\ast },b)c=a^{\ast }(m_{2}(b,c))
\end{equation*}
and similarly for $m_{2}(a,b^{\ast })$.
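To make this doubling construction concrete, here is a minimal numerical sketch (my own illustration, not code from the paper): it builds $\widetilde{A}=A\oplus A^{\vee }$ for the toy algebra $A$ of $2\times 2$ matrices, implements the multiplication rules above via structure constants, and checks associativity together with the invariance of the even pairing. All gradings and signs are ignored in this purely even toy case, and the bimodule formula used for $m_{2}(a,b^{\ast })$ is one consistent choice forced by cyclic invariance; all names are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # A = 2x2 real matrices, basis of elementary matrices e_0..e_3

# Structure constants: e_i e_j = sum_k c[i, j, k] e_k
c = np.zeros((dim, dim, dim))
for i in range(dim):
    for j in range(dim):
        ei = np.zeros((2, 2)); ei[i // 2, i % 2] = 1.0
        ej = np.zeros((2, 2)); ej[j // 2, j % 2] = 1.0
        c[i, j] = (ei @ ej).reshape(-1)

def m2(x, y):
    """Product on A~ = A + A*; elements are pairs (a, f), a in A, f in A*.
    The dual part is a square-zero ideal:
      (a, f)(b, g) = (ab, f.b + a.g),
    with (f.b)(z) = f(bz) as in the text, and (a.g)(z) = g(za)."""
    a, f = x
    b, g = y
    ab  = np.einsum('i,j,ijk->k', a, b, c)
    f_b = np.einsum('j,jik,k->i', b, c, f)   # (f.b)_i = sum b_j c[j,i,k] f_k
    a_g = np.einsum('j,ijk,k->i', a, c, g)   # (a.g)_i = sum a_j c[i,j,k] g_k
    return (ab, f_b + a_g)

def beta(x, y):
    """Even pairing beta((a, f), (b, g)) = f(b) + g(a)."""
    return x[1] @ y[0] + y[1] @ x[0]

x, y, z = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(3)]
l, r = m2(m2(x, y), z), m2(x, m2(y, z))
assert np.allclose(l[0], r[0]) and np.allclose(l[1], r[1])  # associativity
assert np.isclose(beta(m2(x, y), z), beta(x, m2(y, z)))     # invariance
\end{verbatim}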
The cyclic three-tensor describing this cyclic associative algebra is simply the initial multiplication tensor
\begin{equation*}
m_{2}\in (\Pi A\otimes \Pi A)^{\vee }\otimes A
\end{equation*}
considered as an element of
\begin{equation*}
((\Pi A^{\vee }\oplus A)^{\otimes 3})^{\mathbb{Z}/3\mathbb{Z}}
\end{equation*}
or, respectively,
\begin{equation*}
\Pi ((\Pi A^{\vee }\oplus \Pi A)^{\otimes 3})^{\mathbb{Z}/3\mathbb{Z}}
\end{equation*}
The action of the \emph{odd differentiation} $I$ extends naturally to $\widetilde{A}$.

Consider the case of $\widetilde{A}=A\oplus (\Pi A)^{\vee }$ with its odd scalar product, $\dim _{k}A<\infty $. Suppose that $H$ is an odd operator
\begin{equation*}
H:A\rightarrow \Pi A,
\end{equation*}
such that
\begin{equation}
Id-[I,H]=P  \label{idh}
\end{equation}
is an idempotent operator $P:A\rightarrow A$,
\begin{equation*}
P^{2}=P.
\end{equation*}
I assume also that $H$ commutes with $I^{2}$; this is automatic if $I^{2}=0$. Then both $H$ and $P$ act naturally on $\widetilde{A}$ as self-adjoint operators, and I apply to this situation the construction of the tensors $W_{\Gamma }$ for ribbon graphs described above. The tensors
\begin{equation*}
W_{\Gamma }^{B}\in Symm(\oplus _{j=1}^{\infty }((\Pi B^{\vee }\oplus B)^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})
\end{equation*}
are defined by the contraction (\ref{wg}). The contraction is given by the sum over markings
\begin{equation*}
Flag(\Gamma )\rightarrow \{\Pi A,A^{\vee }\}
\end{equation*}
such that for any edge its two flags are marked differently, and for any vertex there is exactly one flag which is marked by $\Pi A$, with no other extra restrictions. In particular such a marking gives an orientation for every edge, from $A^{\vee }$ to $\Pi A$, and there must be exactly one edge exiting every vertex. The legs of $\Gamma $, which correspond to the points sitting on the boundary of the surface $\Sigma _{\Gamma }$, are also marked as either entries ($B^{\vee }$) or exits ($\Pi B$). And I define $S^{B}$ by the summation as above
\begin{equation*}
S^{B}=\tsum_{\{\Gamma \}}\hbar ^{1-\chi (\Sigma _{\Gamma })}W_{\Gamma }^{B}
\end{equation*}
where the sum is over isomorphism classes of connected trivalent graphs with such an orientation on edges and with nonempty subsets of legs on every boundary component of $\Sigma _{\Gamma }$.

Similarly I define the tensors $W_{\Gamma }$ in the case of $\widetilde{A}=A\oplus A^{\vee }$ with its even pairing. These tensors belong to the space of exterior powers of linear functionals on cyclic words on elements of $\Pi B$ and $\Pi B^{\vee }$
\begin{equation*}
W_{\Gamma }^{\Pi B}\in Symm(\oplus _{j=1}^{\infty }\Pi ((\Pi B\oplus \Pi B^{\vee })^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})
\end{equation*}
and I define $S^{\Pi B}$ as their sum over oriented ribbon graphs as above.

\begin{theorem}
Let $A$ be an arbitrary $\mathbb{Z}/2\mathbb{Z}$-graded algebra, $\dim _{k}A<\infty $, with an \emph{odd differentiation} $I:A\rightarrow \Pi A$ and a homotopy $H$, for which the operator (\ref{idh}) is idempotent. The sums over ribbon graphs $S^{B}$ and $S^{\Pi B}$ give solutions to the two variants of the equivariant noncommutative Batalin-Vilkovisky equation in the spaces of symmetric, respectively exterior, powers of cyclic words, consisting of elements from $\Pi B^{\vee }$ and $B$, respectively from $\Pi B^{\vee }$ and $\Pi B$:
\begin{eqnarray*}
\hbar \Delta S^{B}+\frac{1}{2}\{S^{B},S^{B}\}+I^{\vee }S^{B} &=&0 \\
\hbar \Delta S^{\Pi B}+\frac{1}{2}\{S^{\Pi B},S^{\Pi B}\}+I^{\vee }S^{\Pi B} &=&0.
\end{eqnarray*}
\end{theorem}

\begin{proof}
This is an immediate corollary of the theorem from the previous subsection for the algebras with odd/even invariant scalar product $\widetilde{A}=A\oplus (\Pi A)^{\vee }$ or $\widetilde{A}=A\oplus A^{\vee }$.
\end{proof}

\section{Graphs with the insertion of $A_{\infty }$-tensors.}

Assume now that $A$ is a $\mathbb{Z}/2\mathbb{Z}$-graded $A_{\infty }$-algebra, $\dim _{k}A<\infty $. I relax, as above, the condition that the square of the differential equals zero, and assume that it is simply an \emph{odd operator} $I:A\rightarrow \Pi A$, which together with the other structure maps $m_{n}\in ((\Pi A)^{\otimes n})^{\vee }\otimes A$, $n\geq 2$, satisfies the standard $A_{\infty }$-constraints, except perhaps the very first, so that in general $I^{2}\neq 0$: for any $n\geq 2$
\begin{multline}
Im_{n}(v_{1},\ldots ,v_{n})-\tsum_{l}(-1)^{\epsilon }m_{n}(v_{1},\ldots ,Iv_{l},\ldots v_{n})=  \label{imn} \\
=\tsum_{i+j=n+1}(-1)^{\epsilon }m_{i}(v_{1},\ldots ,m_{j}(\ldots ),\ldots v_{n})
\end{multline}
or, equivalently,
\begin{equation*}
I^{\vee }m+\{m,m\}=0
\end{equation*}
I assume first that $A$ also has an invariant odd scalar product $\beta $, so that all the tensors
\begin{equation*}
m_{n}\in ((\Pi A)^{\otimes n+1})^{\vee },\qquad \beta (m_{n}(v_{1},\ldots ,v_{n}),v_{n+1})
\end{equation*}
are cyclically invariant; the variant without scalar product is reduced as above to this case by taking $\widetilde{A}=A\oplus (\Pi A)^{\vee }$, see below. Let as above $H$ be an odd selfadjoint operator
\begin{equation*}
H:A\rightarrow \Pi A,~~~H^{\vee }=H
\end{equation*}
such that
\begin{equation}
Id-[I,H]=P  \label{idhh}
\end{equation}
is an idempotent operator $P:A\rightarrow A$, whose image I denote by $B$. I assume also as above that $H$ commutes with $I^{2}$; this is of course automatic if $I^{2}=0$.

Now I define the tensors $W_{\Gamma }$ by inserting the cyclic tensors $m_{n(v)}\in ((\Pi A)^{\otimes Flag(v)})^{\vee }$ at vertices, as above, where $\Gamma $ is now a ribbon graph with valency $n(v)$ at least three at every vertex:
\begin{equation*}
W_{\Gamma }(\tbigotimes_{l\in Leg(\Gamma )}a_{l})=\left\langle \tbigotimes_{v\in Vert(\Gamma )}m_{n(v)},\left( \tbigotimes_{e\in Edge(\Gamma )}\beta _{H}^{\vee ,e}\right) \tbigotimes_{l\in Leg(\Gamma )}a_{l}\right\rangle
\end{equation*}
and
\begin{equation*}
W_{\Gamma }\in Symm(\oplus _{j=1}^{\infty }(\Pi B^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee }
\end{equation*}
The following is the standard lemma which ensures that the sum over ribbon graphs is actually finite at each order of $\hbar $.

\begin{proposition}
The number of ribbon graphs with valency $n(v)\geq 3$ at every vertex, with fixed $\chi (\Sigma _{\Gamma })$ and fixed number of exterior legs, is finite.
\end{proposition}

\begin{proof}
Similarly to the above, if I denote the number of vertices of valency $n$ by $v_{n}$:
\begin{equation*}
\chi (\Sigma _{\Gamma })+|Edge(\Gamma )|=\tsum v_{n}
\end{equation*}
and
\begin{equation*}
n(\Gamma )+2|Edge(\Gamma )|=\tsum nv_{n}.
\end{equation*}
It follows that
\begin{equation*}
\tsum (n-2)v_{n}=n(\Gamma )-2\chi (\Sigma _{\Gamma })
\end{equation*}
and hence $\tsum v_{n}\leq const$ and $|Edge(\Gamma )|\leq const$.
\end{proof}

At the next step however, looking carefully at the proof of the equation for $S$ above, one sees that one immediately runs into a problem because of tadpoles, i.e.
self-contractions of nonneighbouring flags at the same vertex, unless the following important condition
\begin{equation*}
\Delta m_{n}=0
\end{equation*}
is imposed, which I assume from now until the end of this section. I define next, similarly to the above,
\begin{equation}
S=\tsum_{\{\Gamma \}}\hbar ^{1-\chi (\Sigma _{\Gamma })}W_{\Gamma }  \label{SA}
\end{equation}
where the sum is over isomorphism classes of connected ribbon graphs with vertices of valency $n(v)\geq 3$ and with nonempty subsets of legs on every boundary component of $\Sigma _{\Gamma }$.

\begin{theorem}
Let the odd operator $I$ and the cyclically invariant tensors $m_{n}\in ((\Pi A)^{\otimes n+1})^{\vee }$, $n\geq 2$, satisfy
\begin{eqnarray*}
I^{\vee }m+\{m,m\} &=&0 \\
\Delta m &=&0
\end{eqnarray*}
Then, given the homotopy $H$ (\ref{idhh}), the sum $S$ (\ref{SA}) over ribbon graphs satisfies the equivariant noncommutative Batalin-Vilkovisky equation associated with $(B,\beta |_{B})$:
\begin{equation*}
\hbar \Delta S+\frac{1}{2}\{S,S\}+I^{\vee }S=0,
\end{equation*}
in particular if $I|_{B}=0$ then $S$ is a solution of the non-commutative Batalin-Vilkovisky equation from \cite{B06a},\cite{B06b}
\begin{equation*}
\hbar \Delta S+\frac{1}{2}\{S,S\}=0
\end{equation*}
If $I|_{B}\neq 0$, but $I^{2}|_{B}=0$, then $S+S_{0,2}$ is also a solution to the non-commutative Batalin-Vilkovisky equation from \cite{B06a},\cite{B06b}, where $S_{0,2}=(-1)^{\epsilon }\beta (I\cdot ,\cdot )|_{B}$ is the quadratic term corresponding to the differential $I|_{B}$.
\end{theorem}

\begin{proof}
The proof is parallel to the above.
\end{proof}

As above, the case of an algebra with no scalar product is reduced to the cyclic algebra case.

\begin{proposition}
The same result holds in the context of an arbitrary $\mathbb{Z}/2\mathbb{Z}$-graded equivariant $A_{\infty }$-algebra, $\dim _{k}A<\infty $, with odd differentiation $I:A\rightarrow \Pi A$,
\begin{equation*}
I^{\vee }m+\{m,m\}=0
\end{equation*}
As above, the algebra must satisfy in addition the condition $\Delta m=0$, satisfied trivially by associative algebras. This case is reduced to the previous cases by putting $\widetilde{A}=A\oplus (\Pi A)^{\vee }$ with the odd scalar product $\beta $ given by the natural odd pairing between $A$ and $(\Pi A)^{\vee }$, or by putting $\widetilde{A}=A\oplus A^{\vee }$ with the even scalar product $\beta $ given by the natural even pairing between $A$ and $A^{\vee }$.
\end{proposition}

\begin{theorem}
Given an arbitrary solution to the equivariant non-commutative Batalin-Vilkovisky equation on a $\mathbb{Z}/2\mathbb{Z}$-graded vector space $A$, $\dim _{k}A<\infty $, with odd scalar product $\beta $,
\begin{gather}
\widehat{m}\in Symm(\oplus _{j=1}^{\infty }(\Pi A^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee }[[\hbar ]], \\
\hbar \Delta \widehat{m}+\frac{1}{2}\{\widehat{m},\widehat{m}\}+I^{\vee }\widehat{m}=0,  \notag
\end{gather}
and a homotopy $H$ (\ref{idhh}), I define the tensors $W_{\Gamma }$ for \emph{stable} ribbon graphs by contraction as above and the sum $S$ over such \emph{stable} ribbon graphs; then $S$ is again a solution to the equivariant non-commutative Batalin-Vilkovisky equation on $(B,\beta |_{B})$
\begin{equation*}
\hbar \Delta S+\frac{1}{2}\{S,S\}+I^{\vee }S=0
\end{equation*}
The same result holds in the case of the even scalar product.

\begin{proof}
Analogous to the above, see also the general case of algebras over a modular operad in the next section.
\end{proof}
\end{theorem}

\section{Construction of solutions to the $\mathcal{P}$-Batalin-Vilkovisky equation.\label{sectmodop}}

Let now $A=\oplus _{i}A^{i}$ be a $\mathbb{Z}$-graded vector space with an odd scalar product $\beta $ of degree $2l+1$, and let $A$ be an algebra over $\mathcal{FP}$, the Feynman transform of a modular operad $\mathcal{P}$. I consider for simplicity the $\mathbb{Z}$-graded version as defined in \cite{GK}, see also \cite{B06a}. Without loss of generality one can assume that $l=1$; the general case is reduced to this case by twisting by a cocycle. The parallel results hold in the $\mathbb{Z}/2\mathbb{Z}$-graded setting and also for even scalar products; I leave the details to the interested reader. By theorem 1 from \cite{B06a}, an algebra over $\mathcal{FP}$ is defined by a set of elements
\begin{equation*}
m_{n,b}\in \left( ((A[1])^{\otimes n})^{\vee }\otimes \mathcal{P}((n,b))\right) ^{\mathbb{S}_{n}}
\end{equation*}
with $\widehat{m}=\sum_{n,b}\hbar ^{b}m_{n,b}$ satisfying the Batalin-Vilkovisky equation of $\mathcal{P}$-geometry, equation (5.5) from \cite{B06a}. As above, I consider a more general notion, which it is natural to call an ``equivariant algebra over $\mathcal{FP}$''. This is a $\mathbb{Z}$-graded vector space $A$ with scalar product $\beta $, with a degree $1$ \emph{anti-selfadjoint} operator $I:A\rightarrow A[1]$, provided with a morphism from the free $\mathcal{K}$-twisted modular operad
\begin{equation*}
\Phi :\mathbb{M}_{\mathcal{K}}\mathcal{P}^{dual}\rightarrow \mathcal{E}[A]
\end{equation*}
equivariant with respect to the odd differentiations
\begin{equation*}
I\circ \Phi =\Phi \circ d_{\mathcal{FP}}.
\end{equation*}
This is equivalent to $\widehat{m}$ satisfying the equivariant version of the $\mathcal{P}$-Batalin-Vilkovisky equation:
\begin{equation}
\hbar \Delta \widehat{m}+\frac{1}{2}\{\widehat{m},\widehat{m}\}+I^{\vee }\widehat{m}=0,~~~m_{n,b}\in \left( ((A[1])^{\otimes n})^{\vee }\otimes \mathcal{P}((n,b))\right) ^{\mathbb{S}_{n}}  \label{pBV}
\end{equation}
where $\Delta $ and $\{\cdot ,\cdot \}$ are defined in \cite{B06a}. Let as above $H$ be a homotopy, a degree $-1$ selfadjoint operator
\begin{equation*}
H:A\rightarrow A[-1],~~~H^{\vee }=H
\end{equation*}
such that
\begin{equation*}
Id-[I,H]=P
\end{equation*}
is an idempotent operator $P:A\rightarrow A$, whose image I denote by $B$. Assume as above that $H$ commutes with $I^{2}$; this is automatic if $I^{2}=0$.

Next I define the summation over $\mathcal{P}$-decorated graphs with legs, i.e. graphs with decorations from $\mathcal{P}((Flag(v)))$ attached to vertices. For a stable graph $\Gamma $ I put

\begin{itemize}
\item the tensors
\begin{equation*}
m^{v}\in \left( ((A[1])^{\otimes Flag(v)})^{\vee }\otimes \mathcal{P}((Flag(v),b(v)))\right) ^{Aut(Flag(v))}
\end{equation*}
on every vertex $v$, obtained from $m_{n(v),b(v)}$ using the functor of extension to finite sets,

\item the two-tensors
\begin{gather*}
\beta _{H}^{\vee ,e}\in (A[1])^{\otimes \{f,f^{\prime }\}}, \\
\beta _{H}^{\vee ,e}=\beta ^{\vee }(H^{\vee }u_{f},v_{f^{\prime }})=(-1)^{\overline{u_{f}}\,\overline{v_{f^{\prime }}}}\beta ^{\vee }(H^{\vee }v_{f^{\prime }},u_{f})
\end{gather*}
for any interior edge $e=(ff^{\prime })$,

\item an element $a_{l}\in B[1]$, for any leg $l$.
\end{itemize}

Notice that both $m^{v}$ and $\beta _{H}^{\vee ,e}$ are degree zero elements, so that the products $\tbigotimes_{v\in Vert(\Gamma )}m^{v}$ and $\tbigotimes_{e\in Edge(\Gamma )}\beta _{H}^{\vee ,e}$ are canonically defined. Recall that for the modular operad $\mathcal{P}$, for any stable graph $\Gamma $ and a bijection $Leg(\Gamma )\leftrightarrow \{1,\ldots ,n\}$ there is the composition map
\begin{equation*}
\mu _{\Gamma }:\tbigotimes_{v\in Vert(\Gamma )}\mathcal{P}((Flag(v),b(v)))\rightarrow \mathcal{P}((n(\Gamma ),b(\Gamma )))
\end{equation*}

\begin{definition}
I define the tensor
\begin{equation*}
W_{\Gamma }\in \left( ((B[1])^{\otimes n(\Gamma )})^{\vee }\otimes \mathcal{P}((n(\Gamma ),b(\Gamma )))\right) ^{\mathbb{S}_{n(\Gamma )}}
\end{equation*}
as the contraction of tensors on $A[1]$ times the $\mathcal{P}$-composition structure map $\mu _{\Gamma }$
\begin{equation}
W_{\Gamma }(\tbigotimes_{l\in Leg(\Gamma )}a_{l})=\left\langle \tbigotimes_{v\in Vert(\Gamma )}m^{v},\left( \tbigotimes_{e\in Edge(\Gamma )}\beta _{H}^{\vee ,e}\right) \tbigotimes_{l\in Leg(\Gamma )}a_{l}\right\rangle \otimes \mu _{\Gamma }  \label{Wp}
\end{equation}
\end{definition}

For a given graph $\Gamma $ and a choice of basis in every $\mathcal{P}((Flag(v),b(v)))$ this expression is the sum, over markings of vertices of $\Gamma $ by basis elements of $\mathcal{P}$, of the corresponding contractions of tensors on $A[1]$.

\begin{definition}
The sum over $\mathcal{P}$-marked graphs is defined by
\begin{equation}
S_{n,b}=\sum_{\Gamma \in \lbrack G((n,b))]}W_{\Gamma }  \label{Sp}
\end{equation}
where $[G((n,b))]$ denotes the set of isomorphism classes of pairs $(\Gamma ,\rho )$, where $\Gamma $ is a stable graph with $n(\Gamma )=n$, $b(\Gamma )=b$ and $\rho $ is a bijection $Leg(\Gamma )\leftrightarrow \{1,\ldots ,n\}$. I put
\begin{equation*}
S=\sum_{n,b}\hbar ^{b}S_{n,b}
\end{equation*}
\end{definition}

The sum in the definition of $S_{n,b}$ is finite, see \cite{GK}, lemma 2.16.

\begin{theorem}
The series $S$, given by the sum over $\mathcal{P}$-marked graphs, satisfies the equivariant Batalin-Vilkovisky equation associated with the modular operad $\mathcal{P}$:
\begin{equation*}
\hbar \Delta S+\frac{1}{2}\{S,S\}+I^{\vee }S=0,
\end{equation*}
in particular if $I^{2}|_{B}=0$ then $S$ is a solution of the Batalin-Vilkovisky equation associated with the modular operad $\mathcal{P}$ from \cite{B06a}:
\begin{equation*}
\hbar \Delta S+\frac{1}{2}\{S,S\}+dS=0
\end{equation*}
where I denoted by $d$ the differential $I^{\vee }|_{B^{\vee }}$. By theorem 1 of \cite{B06a}, this is equivalent to the fact that $S$ defines on $B$ the structure of an algebra over $\mathcal{FP}$, the Feynman transform of the modular operad $\mathcal{P}$.
\end{theorem}

\begin{proof}
The proof is parallel to the proof of theorem \ref{th1} above, via introducing the tensors $W_{\Gamma ,e}^{[I,H]},W_{\Gamma ,e}^{Id},W_{\Gamma ,e}^{P}$ and verifying that they satisfy analogous relations. One has to use the definition of the $\mathcal{P}$-type Batalin-Vilkovisky operator $\Delta $ from \cite{B06a}.
\end{proof}

\begin{remark}
The particular case of this statement, in its nonequivariant version with $I^{2}=0$, applied to the modular extension of the $L_{\infty }$-operad, gives the transfer of solutions to the ordinary (commutative) Batalin-Vilkovisky equation. In the case $\widetilde{A}=A\oplus (A[1])^{\vee }$ one gets the solutions via the summation over graphs with oriented edges.
A similar statement, which starts from solutions of degree $\leq 3$ to the commutative Batalin-Vilkovisky equation satisfying certain extra boundary conditions, and in which the sum is taken over the subset of ``directed'' graphs of the set of oriented graphs, is established in \cite{M08}, together with some generalisations.
\end{remark}

\begin{remark}
Any cyclic operad can be considered as modular with zero selfcontractions. Hence this theorem also gives the analogous result for algebras over the Bar-transform of cyclic operads, and also, via setting $\widetilde{A}=A\oplus A^{\vee }$, for ordinary operads, see \cite{H}, \cite{M08} and references therein. The equivariant version, with the odd derivation relaxing the condition $d^{2}=0$ on the differential, is new even in the standard case of ordinary/cyclic operads.
\end{remark}

\begin{proposition}
For any solution to the \emph{equivariant} Batalin-Vilkovisky equation, the construction of the homology class of the associated graph complex of $\mathcal{P}$-marked graphs from \cite{B06a} works without change and gives a homology class of the complex dual to $\mathcal{FP}|_{n(\Gamma )=0}$.
\end{proposition}

\begin{proof}
This is immediate from the definition; the $n(\Gamma )=0$ part of the map $\Phi $ satisfies
\begin{equation*}
\Phi _{n(\Gamma )=0}(d_{\mathcal{FP}}(\alpha ))=I(\Phi _{n(\Gamma )=0}(\alpha ))=0
\end{equation*}
since the odd operator $I$ acts by zero on $k=\mathcal{E}[A]((0,b))$.
\end{proof}

\begin{theorem}
Let $\widehat{m}$ be a solution to the equivariant Batalin-Vilkovisky equation of $\mathcal{P}$-geometry (\ref{pBV}) and let $H$ be a self-adjoint nilpotent homotopy as above. Then the solution $S$ to (\ref{pBV}), obtained by the summation over $\mathcal{P}$-marked graphs (\ref{Wp}), (\ref{Sp}), and $\widehat{m}$ have the same characteristic homology class in the graph complex of $\mathcal{P}$-marked graphs.$\square $
\end{theorem}

\section{$A$-infinity $gl(N)$-equivariant matrix integrals.\label{sectionIntegr}}

Here I briefly describe the $A$-infinity $gl(N)$-equivariant matrix integrals which I introduced in \cite{B06b}. I focus on the odd-dimensional case. I recall in particular the results from \cite{B09a} that establish the correspondence between solutions to the noncommutative Batalin-Vilkovisky equation and equivariantly closed differential forms on $\left( gl(N|N)\otimes \Pi V\right) _{0}$.

Let $V$ be a $\mathbb{Z}/2\mathbb{Z}$-graded vector space, $\dim _{k}V<\infty $, with \emph{odd} scalar product $\beta :V^{\otimes 2}\rightarrow \Pi k$. Let
\begin{gather}
m_{A_{\infty }}\in \tbigoplus_{j}(((\Pi V)^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee }, \\
\{m_{A_{\infty }},m_{A_{\infty }}\}=0  \label{mmA}
\end{gather}
be the cyclic tensor defining the structure of a cyclic $A_{\infty }$-algebra on $V$, with the invariant odd scalar product $\beta $.
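As a small aside, the cyclic invariance entering the definition of $m_{A_{\infty }}$ can be made tangible numerically. The following sketch (my own illustration; the parity signs coming from $\Pi V$ are deliberately ignored, so this is only the purely even toy case) projects an arbitrary $j$-tensor onto the cyclic invariants $(((\Pi V)^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})$ by averaging over rotations.
\begin{verbatim}
import numpy as np

def cyclic_symmetrize(T):
    """Average a j-tensor over the Z/jZ action rotating its slots.
    Signs from the parity shift Pi V are ignored in this even toy case."""
    j = T.ndim
    return sum(np.transpose(T, tuple((k + i) % j for k in range(j)))
               for i in range(j)) / j

T = np.random.default_rng(3).normal(size=(3, 3, 3))
Tc = cyclic_symmetrize(T)
# invariance under a cyclic rotation of the three slots
assert np.allclose(Tc, np.transpose(Tc, (1, 2, 0)))
\end{verbatim}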
To any product of cyclic words from
\begin{equation*}
Symm(\oplus _{j=1}^{\infty }((\Pi V\oplus \Pi k_{\Lambda })^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee }
\end{equation*}
I associate, using invariant theory, the invariant functions, which are products of traces, from
\begin{equation}
\left( \mathcal{O}(gl(N|N)\otimes \Pi V)\otimes \mathcal{O}(\Pi gl(N|N))\right) ^{gl(N|N)}  \label{oglnv}
\end{equation}
In particular $Tr(m_{A_{\infty }})$ denotes the function on $gl(N|N)\otimes \Pi V$
\begin{equation*}
\tsum_{(i_{1}\ldots i_{k})}(-1)^{\epsilon }m_{A_{\infty },i_{1}\ldots i_{k}}tr(X^{i_{1}}\cdot \ldots \cdot X^{i_{k}}),~~~X=X^{i}\otimes \pi e_{i},~X^{i}\in gl(N|N),~e_{i}\in V
\end{equation*}
extending the cyclic $A_{\infty }$ tensor $m_{A_{\infty }}$. I introduced in \cite{B06b} the integrals
\begin{equation}
\int_{\gamma \in \left( gl(N|N)\otimes \Pi V\right) _{0}}\exp \frac{1}{\hbar }(s_{\Lambda }+Tr(m_{A_{\infty }})+\sum_{2g+i>2}\hbar ^{2g+i-1}S_{i,g}^{mTr})\varphi ~dX  \label{expmaplus}
\end{equation}
where
\begin{equation*}
s_{\Lambda }=\tsum_{i_{1}i_{2}}(-1)^{\epsilon }\beta _{i_{1}i_{2}}tr([\Xi ,X^{i_{1}}]X^{i_{2}}),
\end{equation*}
is the hamiltonian of the action of the odd matrix $\Xi \in gl(N|N)_{1}$, which in the generic case can be assumed, without loss of generality, to be of the form $\Xi =\left(
\begin{array}{cc}
0 & Id \\
\Lambda & 0
\end{array}
\right) $, $\Lambda \in gl(N)$. And $S_{i,g}^{mTr}$ are the multitrace elements corresponding to the products of cyclic words
\begin{equation*}
S_{i,g}\in Symm^{i}(\oplus _{j=1}^{\infty }((\Pi V)^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee }
\end{equation*}
Functions on the odd symplectic affine manifold $gl(N|N)\otimes \Pi V$ are identified canonically with polyvector fields on $\left( gl(N|N)\otimes \Pi V\right) _{0}$, which are in turn identified with differential forms on the same space, using a holomorphic volume form $dX$ that is constant in the affine coordinates. The invariant functions from (\ref{oglnv}) are then mapped to $gl(N)$-equivariant differential forms, with respect to $gl(N)$ acting by conjugation via the block-diagonal embedding. Let $d_{DR}-i_{\Lambda }$ denote the $gl(N)$-equivariant differential.
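To illustrate the shape of these multitrace functions, here is a minimal numerical sketch (my own illustration, not code from \cite{B06b}): it contracts a hypothetical cyclic $3$-tensor with traces of products of matrices, and evaluates $s_{\Lambda }$ for a random $\Xi $. Ordinary $N\times N$ matrices stand in for $gl(N|N)$, the signs $(-1)^{\epsilon }$ and the super-grading are dropped, and all names and values are placeholder assumptions.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, dimV = 2, 3                      # toy sizes: N x N matrices, dim V = 3

# Hypothetical cyclic 3-tensor m_{i1 i2 i3}, cyclically symmetrized
m = rng.normal(size=(dimV, dimV, dimV))
m = (m + np.transpose(m, (1, 2, 0)) + np.transpose(m, (2, 0, 1))) / 3.0

# X = sum_i X^i (x) pi e_i; ordinary matrices stand in for gl(N|N)
X = rng.normal(size=(dimV, N, N))

def multitrace(m, X):
    """Tr(m): sum_{i1 i2 i3} m_{i1 i2 i3} tr(X^{i1} X^{i2} X^{i3})."""
    return sum(m[i, j, k] * np.trace(X[i] @ X[j] @ X[k])
               for i, j, k in itertools.product(range(dimV), repeat=3))

# s_Lambda = sum beta_{i1 i2} tr([Xi, X^{i1}] X^{i2}) with placeholder
# beta and Xi, just to show the shape of the contraction
beta = rng.normal(size=(dimV, dimV))
Xi = rng.normal(size=(N, N))
s_Lambda = sum(beta[i, j] * np.trace((Xi @ X[i] - X[i] @ Xi) @ X[j])
               for i in range(dimV) for j in range(dimV))

print("Tr(m) =", multitrace(m, X), " s_Lambda =", s_Lambda)
\end{verbatim}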
\begin{proposition}
\begin{enumerate}
\item \label{propintegr} The lagrangian $S=m_{A_{\infty }}+\sum_{2g+i>2}\hbar ^{2g+i-1}S_{i,g}$ satisfies the noncommutative Batalin-Vilkovisky equation (\ref{ncBV}) if and only if the $gl(N)$-equivariant differential form corresponding to $s_{\Lambda }+S$ is closed for any $N$:
\begin{equation*}
\left( d_{dR}-i_{\Lambda }\right) \left( \exp \frac{1}{\hbar }(s_{\Lambda }+Tr(m_{A_{\infty }})+\sum_{2g+i>2}\hbar ^{2g+i-1}S_{i,g}^{mTr})\vdash dX\right) =0
\end{equation*}

\item Similarly, for any
\begin{equation*}
\varphi =\sum_{i,g\geq 0}\hbar ^{2g+i-1}\varphi _{i,g},~~\varphi _{i,g}\in Symm^{i}(\oplus _{j=1}^{\infty }((\Pi B)^{\otimes j})^{\mathbb{Z}/j\mathbb{Z}})^{\vee },
\end{equation*}
\begin{gather*}
\hbar \Delta \varphi +\frac{1}{2}\{S,\varphi \}=0\Longleftrightarrow \\
\left( d_{dR}-i_{\Lambda }\right) \left( \varphi ^{mTr}\exp \frac{1}{\hbar }(s_{\Lambda }+S^{mTr})\vdash dX\right) =0
\end{gather*}

\item For
\begin{equation*}
f\in \left( \mathcal{O}(gl(N|N)\otimes \Pi (V\oplus k_{\Lambda }))\right) ^{gl(N|N)}(\hbar )
\end{equation*}
the $gl(N)$-equivariant differential form corresponding to $s_{\Lambda }+f$ is closed
\begin{equation*}
\left( d_{dR}-i_{\Lambda }\right) \left( \exp \frac{1}{\hbar }(s_{\Lambda }+f)\vdash dX\right) =0
\end{equation*}
if and only if $f$ satisfies the noncommutative equivariant Batalin-Vilkovisky equation
\begin{equation*}
\hbar \Delta f+\frac{1}{2}\{f,f\}+I^{\vee }f=0,
\end{equation*}
where $I=[\Xi ,\cdot ]$.
\end{enumerate}
\end{proposition}

\begin{proof}
The proof is immediate from the results of \cite{B09a}.
\end{proof}

\begin{remark}
The algebra $gl(N|N)$ with its even trace can be replaced, without any other change in the formalism, by any finite dimensional super associative algebra $g$ with trace which satisfies $tr_{g}(l_{a})=0$, where $l_{a}$ is the operator of multiplication by an arbitrary element $a$. Most of the formalism works without change for any finite dimensional super associative algebra with trace.
\end{remark}

\subsection{Equivariant matrix integrals for associative algebras and intersections on moduli spaces of curves.}

\begin{proposition}
Let $A$, $\dim _{k}A<\infty $, be an associative d($\mathbb{Z}/2\mathbb{Z}$)g-algebra with \emph{odd} scalar product $\beta $, with multiplication described by the cyclic 3-tensor $m_{2}$. Then $\Delta (m_{2})=0$ and
\begin{equation*}
S=\left( -\frac{1}{2}\tsum_{i_{1}i_{2}}(-1)^{\epsilon }\beta _{i_{1}i_{2}}tr([\Xi ,X^{i_{1}}]X^{i_{2}})+\frac{1}{3!}\tsum_{i_{1}i_{2}i_{3}}(-1)^{\epsilon }m_{2,i_{1}i_{2}i_{3}}tr(X^{i_{1}}X^{i_{2}}X^{i_{3}})\right)
\end{equation*}
defines a closed $gl(N)$-equivariant differential form on $(gl(N|N)\otimes \Pi V)_{0}$. It can be seen as an odd higher dimensional generalisation of the matrix Airy integral. Its asymptotic expansion is given, as follows from theorem 1 of \cite{B06b}, by the sum over trivalent ribbon graphs
\begin{equation*}
\exp \left( const\sum_{\Gamma }\hbar ^{-\chi _{\Gamma }}c_{S}(\Gamma )c_{\Lambda }(\Gamma )\right)
\end{equation*}
where $c_{\Lambda }\in H^{\ast }(\overline{\mathcal{M}}_{g,n})$, $c_{S}\in H_{\ast }(\overline{\mathcal{M}}_{g,n})$ are the cocycle and the cycle on the stable ribbon graph complex defined for any stable ribbon graph in \cite{B06b}, \cite{B09b} and associated with the odd differentiation $I=[\Lambda _{01},\cdot ]$, $I^{2}\neq 0$, of the associative algebra $gl(N|N)$ and with the solution $S$ to the noncommutative Batalin-Vilkovisky equation.$\square $
\end{proposition}
{ "redpajama_set_name": "RedPajamaArXiv" }
3,668
\section{Introduction}

At the end of the eighties, the oscillations of the photon distribution function (DF) of high energy squeezed and correlated states were discovered in \cite{s1,s2}; the authors of \cite{s1} studied the oscillatory behavior of the single-mode squeezed-state Husimi function (HF) projected onto photon number states, $P_{n}(p,q;\lambda,\phi) = | \langle n | p,q; \lambda, \phi \rangle |^{2}$, where $p$ and $q$ are the space variables associated with the two quadratures of a monochromatic electromagnetic (EM) field, $\lambda$ is the squeezing parameter and $\phi$ is a rotation angle in phase space. They suggested that for $p=0$, $q = 7 \sqrt{2}$, $\phi=0$ and a fixed $\lambda = 21$, such oscillatory behavior can be explained in terms of quantum interference effects, and it was taken as a signature of a nonclassical state. More recently, Gagen \cite{s3} generalized this study to incorporate interference structures in the Bohr-Sommerfeld trajectories associated with a superposition of quantum states. On the other hand, Dutta {\em et al}. \cite{s4} verified an additional feature present in $P_{n}(p,q;\lambda,\phi)$ when plotted as a function of $n$: this distribution exhibits the structure of beats (collapses and revivals) at large values of $n \; (\geq 10)$ for $\lambda = 201$, $p^{2} + q^{2} = 98$ and $\phi \approx \pi /2$. Compared to the value of $\lambda$ used in \cite{s1}, this high value of $\lambda$ is crucial for the occurrence of beats. These oscillations were attributed to interference effects in phase space \cite{s4}; however, since a detailed explanation of the beats has not been presented to date, we judge that they deserve a deeper investigation.

Recently, Chountasis and Vourdas \cite{s5} showed that the Weyl function is an important tool for quantum interference effects. In particular, they studied the Wigner and Weyl functions for a superposition of $m$ quantum states $| s_{i} \rangle$, where each function is decomposed into diagonal and nondiagonal terms, with the nondiagonal term responsible for the interference between the states $| s_{i} \rangle$. Adopting a different approach and using the formalism developed in \cite{s6}, and in order to shed more light on the origin of the beats predicted in \cite{s4}, here we decompose the squeezed state HF into three functions: the two marginal (one for $p$ and the other for $q$) and the correlation distribution functions (MDF and CDF, respectively). Our results corroborate the phase-space interference concept and complement the graphical treatment proposed by Mandal \cite{s7}.

This paper is organized as follows. In section II we discuss the solution of the pseudo-diffusion differential equation (see reference \cite{s6}), and in section III we define the phase space MDF and CDF. The formal and numerical results are given in section IV, where we show that oscillations and beats are present in the CDF. Section V is dedicated to a summary and conclusions. Two appendices are also presented, containing calculational details. Appendix A contains the main steps to calculate the MDFs by direct integration, and in appendix B we calculate the CDF.
\section{The pseudo-diffusion equation and its solutions}

The mapping of the statistical operator $\mbox{\boldmath $\rho$}$ (describing a state of the EM field) on the squeezed-states representation (SSR) permits us to write the HF $P$ as follows \cite{s6}:
\begin{equation}
\label{e1} P(p,q;\lambda,\phi) = \mbox{${\rm Tr}$} \left[ \mbox{\boldmath $\rho$} {\bf \Pi}(p,q;\zeta) \right] = \mbox{${\rm Tr}$} \left[ \mbox{\boldmath $\rho$}_{r} {\bf \Pi}(p_{r},q_{r};\lambda) \right] = P_{r}(p_{r},q_{r};\lambda) \; ,
\end{equation}
where
\begin{equation}
\label{e2} {\bf \Pi}(p,q;\zeta) = |pq; \zeta \rangle \langle pq; \zeta | = {\bf R}(\phi/2) {\bf \Pi}(p_{r},q_{r};y) {\bf R}^{\dagger}(\phi/2)
\end{equation}
is a projection operator, ${\bf R}(\phi/2) = \exp \left( \frac{\mbox{$\dot{\imath}$} \phi}{2} \; {\bf a}^{\dagger} {\bf a} \right)$ is the rotation operator, $q_{r} = q \cos (\phi/2) + p \sin (\phi/2)$ and $p_{r} = p \cos (\phi/2) - q \sin (\phi/2)$ are the rotated phase space variables expressed in terms of the old ones, $\mbox{\boldmath $\rho$}_{r} = {\bf R}^{\dagger}(\phi/2) \; \mbox{\boldmath $\rho$} \; {\bf R}(\phi/2)$ and $\lambda \equiv e^{-2y} \; (0 < \lambda < \infty)$. Now, if one considers the mixed state $\mbox{\boldmath $\rho$} = \sum_{n=0}^{\infty} p_{n} |n \rangle \langle n|$, diagonal in the Fock basis, $\mbox{\boldmath $\rho$}_{r}$ will be invariant under rotations since ${\bf R}(\phi/2) | n \rangle = e^{\mbox{$\dot{\imath}$} n \phi/2} |n \rangle$. Consequently, the associated HF is given by $P(p,q; \lambda,\phi) = P(p_{r},q_{r};\lambda)$. This relation is useful in the sense that it is sufficient to calculate the unrotated distribution $P(p,q;\lambda)$, with the variables changed from $(p,q)$ to $(p_{r},q_{r})$ in the final result. The number state $|n \rangle \langle n |$ is a typical example where this relation can be used.

In \cite{s6} we demonstrated that $P(p,q;\lambda,\phi)$ satisfies the partial differential equation
\begin{equation}
\label{e3} {\bf \Gamma}(p,q;\lambda,\phi) P(p,q;\lambda,\phi) = 0 \; ,
\end{equation}
where
\begin{eqnarray}
\label{e4} {\bf \Gamma}(p,q;\lambda,\phi) &=& \frac{\partial}{\partial \lambda} - \frac{1}{4 \lambda^{2}} \left\{ \left[ \lambda^{2} \cos^{2} (\phi/2) - \sin^{2} (\phi/2) \right] \frac{\partial^{2}}{\partial p^{2}} + \left[ \lambda^{2} \sin^{2} (\phi/2) - \cos^{2} (\phi/2) \right] \frac{\partial^{2}}{\partial q^{2}} \right. \nonumber \\
& & - \left. (\lambda^{2} + 1) \sin \phi \; \frac{\partial^{2}}{\partial q \partial p} \right\}
\end{eqnarray}
is a linear differential operator. For $\phi=0$, equation (\ref{e3}) is similar to the diffusion equation in two dimensions, where the parameter $\lambda$ plays the role of time. In this situation, since the diffusion coefficients have opposite signs, the equation describes a diffusive (infusive) process in the $p$ ($q$) variable. For this reason, it has been called the {\em pseudo-diffusion equation} \cite{s8,s9}.
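As a quick sanity check of equation (\ref{e3}), the following symbolic sketch (our own illustration, not part of the original calculations) verifies that the vacuum Husimi function, i.e. the $n=0$, $\phi=0$ case of equation (\ref{e30}) below, solves the pseudo-diffusion equation at $\phi=0$.
\begin{verbatim}
import sympy as sp

p, q, lam = sp.symbols('p q lambda_', positive=True)

# Vacuum Husimi function: the n = 0, phi = 0 case of eq. (30) below
P0 = 2*sp.sqrt(lam)/(lam + 1) * sp.exp(-(lam*q**2 + p**2)/(lam + 1))

# Pseudo-diffusion operator at phi = 0:
#   d/d lambda - (1/(4 lambda^2)) [ lambda^2 d^2/dp^2 - d^2/dq^2 ]
lhs = (sp.diff(P0, lam)
       - (lam**2*sp.diff(P0, p, 2) - sp.diff(P0, q, 2))/(4*lam**2))

print(sp.simplify(lhs))   # prints 0: P0 solves the equation
\end{verbatim}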
Here we consider the formal solution of equation (\ref{e3}), obtained by the Fourier transform (FT) method, written as an integral equation with a kernel depending on the squeeze and rotation parameters,
\begin{eqnarray}
\label{e5} P(p,q;\lambda,\phi) &=& \int_{- \infty}^{\infty} \frac{d\xi d\eta}{2 \pi} \; e^{\mbox{$\dot{\imath}$} (\eta p - \xi q)} K(\xi,\eta;\lambda,\phi) \widetilde{P}(\xi,\eta) \nonumber \\
&=& \int_{- \infty}^{\infty} \frac{d\xi d\eta}{2 \pi} \; e^{\mbox{$\dot{\imath}$} (\eta p_{r} - \xi q_{r})} K(\xi,\eta;\lambda,0) \widetilde{P}_{r}(\xi,\eta) \nonumber \\
&=& P_{r}(p_{r},q_{r};\lambda) \; .
\end{eqnarray}
The kernel is
\begin{equation}
\label{e6} K(\xi,\eta;\lambda,\phi) = \exp \left( - \frac{\lambda - 1}{4 \lambda} \left\{ [ \lambda \sin^{2} (\phi/2) - \cos^{2} (\phi/2)] \xi^{2} + [ \lambda \cos^{2} (\phi/2) - \sin^{2} (\phi/2)] \eta^{2} + (\lambda + 1) \sin \phi \; \xi \eta \right\} \right)
\end{equation}
and $\widetilde{P}(\xi,\eta)$ is the FT of the HF $P(p,q)$ for $\lambda =1$ and $\phi =0$. We notice that $K(\xi,\eta;\lambda,\phi)$ is an {\em unbounded} function, responsible for the squeezing `propagation' of an `initial' function $P(p,q)$ to $P(p,q;\lambda,\phi)$ for any values of $\lambda$ and $\phi$ in their domain. Thus the existence of a `propagated' $P(p,q;\lambda,\phi)$ depends on the functional form of $\widetilde{P}(\xi,\eta)$, since the integral in the first line of (\ref{e5}) does not exist for $\widetilde{P}(\xi,\eta) =$ constant.

From the pseudo-diffusion equation (\ref{e3}) and the linear differential operator ${\bf \Gamma}$, the solution (\ref{e5}) exhibits the following symmetry properties
\begin{equation}
\label{e7} P(p,q;\lambda,\phi) = P(q,-p;\lambda,\phi \pm \pi) = P(q,p;\lambda^{-1},-\phi)
\end{equation}
and
\begin{equation}
\label{e8} K(\xi,\eta;\lambda,\phi) = K(\eta,-\xi;\lambda,\phi \pm \pi) = K(\eta,\xi;\lambda^{-1}, -\phi) \; .
\end{equation}
Now, our aim is to show that the Glauber-Sudarshan distribution $P^{c}(p,q; \lambda,\phi)$ and the HF $P(p,q;\lambda,\phi)$ are related by
\begin{equation}
\label{e9} P^{c}(p,q;\lambda,\phi) = {\bf \Lambda}(p,q;\lambda,\phi) P(p,q;\lambda,\phi) = P(p,q;-\lambda, \phi) \; ,
\end{equation}
with
\begin{equation}
\label{e10} {\bf \Lambda}(p,q;\lambda,\phi) \equiv \exp \left[ - \frac{1}{2} \left( \lambda \frac{\partial^{2}}{\partial p_{r}^{2}} + \lambda^{-1} \frac{\partial^{2}} {\partial q_{r}^{2}} \right) \right] \; ,
\end{equation}
which can also be written as
\begin{displaymath}
{\bf \Lambda}(p,q;\lambda,\phi) = \exp \left\{ - \frac{1}{2 \lambda} \left[ [ \lambda^{2} \cos^{2} (\phi/2) + \sin^{2} (\phi/2) ] \frac{\partial^{2}}{\partial p^{2}} + [ \lambda^{2} \sin^{2} (\phi/2) + \cos^{2} (\phi/2) ] \frac{\partial^{2}}{\partial q^{2}} - (\lambda^{2} - 1) \sin \phi \; \frac{\partial^{2}}{\partial p \partial q} \right] \right\} \; ,
\end{displaymath}
since
\begin{eqnarray}
\frac{\partial^{2}}{\partial p_{r}^{2}} &=& \sin^{2} (\phi/2) \frac{\partial^{2}}{\partial q^{2}} + \cos^{2} (\phi/2) \frac{\partial^{2}} {\partial p^{2}} - \sin \phi \frac{\partial^{2}}{\partial q \partial p} \; , \nonumber \\
\frac{\partial^{2}}{\partial q_{r}^{2}} &=& \cos^{2} (\phi/2) \frac{\partial^{2}}{\partial q^{2}} + \sin^{2} (\phi/2) \frac{\partial^{2}} {\partial p^{2}} + \sin \phi \frac{\partial^{2}}{\partial q \partial p} \; .
\nonumber
\end{eqnarray}
Applying the differential operator ${\bf \Lambda}$ to (\ref{e5}), we get
\begin{eqnarray}
\label{e11} {\bf \Lambda}(p,q;\lambda,\phi) P(p,q;\lambda,\phi) &=& \int_{- \infty}^{\infty} \frac{d\xi d\eta}{2\pi} \left[ {\bf \Lambda}(p,q;\lambda,\phi) \; e^{\mbox{$\dot{\imath}$} (\eta p_{r} - \xi q_{r})} \right] K(\xi,\eta;\lambda,0) \widetilde{P}_{r}(\xi,\eta) \nonumber \\
&=& \int_{- \infty}^{\infty} \frac{d\xi d\eta}{2\pi} \; e^{\mbox{$\dot{\imath}$} (\eta p_{r} - \xi q_{r})} \underbrace{e^{\frac{1}{2} (\lambda^{-1} \xi^{2} + \lambda \eta^{2})} K(\xi,\eta;\lambda,0)}_{K(\xi,\eta;-\lambda,0)} \widetilde{P}_{r}(\xi,\eta) \nonumber \\
&=& \int_{- \infty}^{\infty} \frac{d\xi d\eta}{2\pi} \; e^{\mbox{$\dot{\imath}$} (\eta p_{r} - \xi q_{r})} K(\xi,\eta;-\lambda,0) \widetilde{P}_{r}(\xi,\eta) \nonumber \\
&=& P(p,q;-\lambda,\phi) \; .
\end{eqnarray}
The second equality is obtained using the relation
\begin{eqnarray}
{\bf \Lambda}(p,q;\lambda,\phi) \; e^{\mbox{$\dot{\imath}$} (\eta p_{r} - \xi q_{r})} &=& \sum_{k=0}^{\infty} \frac{(-1)^{k}}{2^{k} k!} \left( \lambda^{-1} \frac{\partial^{2}}{\partial q_{r}^{2}} + \lambda \frac{\partial^{2}}{\partial p_{r}^{2}} \right)^{k} e^{\mbox{$\dot{\imath}$} ( \eta p_{r} - \xi q_{r})} \nonumber \\
&=& \sum_{k=0}^{\infty} \frac{1}{k!} \left[ \frac{1}{2} \left( \lambda^{-1} \xi^{2} + \lambda \eta^{2} \right) \right]^{k} e^{\mbox{$\dot{\imath}$} (\eta p_{r} - \xi q_{r})} \nonumber \\
&=& e^{\frac{1}{2} (\lambda^{-1} \xi^{2} + \lambda \eta^{2})} \; e^{\mbox{$\dot{\imath}$} (\eta p_{r} - \xi q_{r})} \nonumber
\end{eqnarray}
and, from the definition of the kernel $K(\xi,\eta;\lambda,0)$, we conclude that
\begin{displaymath}
e^{\frac{1}{2} (\lambda^{-1} \xi^{2} + \lambda \eta^{2})} K(\xi,\eta;\lambda,0) = e^{\frac{1}{4} (\lambda + 1) \left( \eta^{2} + \lambda^{-1} \xi^{2} \right)} = K(\xi,\eta;-\lambda,0) \; .
\end{displaymath}
Thus, the DF $P^{c}(p,q;\lambda,\phi)$ is obtained directly by changing the sign of the squeezing parameter, $\lambda \to -\lambda$, in the HF. Consequently, this result shows that $P^{c}_{n}(p,q;\lambda,\phi)$ does not exist as a bounded function for the number states; however, it exists as an ultradistribution. Equation (\ref{e9}) is a generalization of previous results obtained in \cite{s9,s10} for $\phi = 0$.

\section{Marginal and correlation distribution functions}

The HF $P(p,q;\lambda,\phi)$ can be written as a sum of two terms: the first is the product of the two MDFs, in $q$ and $p$, and describes the noncorrelated part; the second term is the CDF and contains the phase-space correlations \cite{s6}. So, the HF (\ref{e1}) can be written as
\begin{equation}
\label{e12} P(p,q;\lambda,\phi) = R(p;\lambda,\phi) Q(q;\lambda,\phi) + C(p,q;\lambda,\phi) \; ,
\end{equation}
where
\begin{equation}
\label{e13} Q(q;\lambda,\phi) = \int_{- \infty}^{\infty} \frac{dp}{\sqrt{2 \pi}} \; P(p,q; \lambda,\phi) \; ,
\end{equation}
and
\begin{equation}
\label{e14} R(p;\lambda,\phi) = \int_{- \infty}^{\infty} \frac{dq}{\sqrt{2 \pi}} \; P(p,q; \lambda,\phi)
\end{equation}
are the MDFs, and $C(p,q;\lambda,\phi)$ is the CDF. However, the calculation of (\ref{e13}) and (\ref{e14}) by direct integration presents difficulties when $\phi \neq 0$ (see appendix A). Thus, the aim of this section is to show that expressions for the MDFs can be obtained in a much simpler way by using the formalism of section II.
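Before doing so, we note that the decomposition (\ref{e12}) is straightforward to evaluate numerically; the sketch below (our own illustration, with hypothetical grid parameters) computes the MDFs of a gridded Husimi function by direct quadrature and extracts the CDF as the remainder. For the vacuum function used here, which factorizes at $\phi=0$, the CDF vanishes up to discretization error.
\begin{verbatim}
import numpy as np

# Sample Husimi function: the n = 0, phi = 0 case of eq. (30) below,
# which factorizes in p and q, so its CDF should vanish.
lam = 5.0
x = np.linspace(-8.0, 8.0, 801)
p, q = np.meshgrid(x, x, indexing='ij')
P = 2*np.sqrt(lam)/(lam + 1) * np.exp(-(lam*q**2 + p**2)/(lam + 1))

dx = x[1] - x[0]
Q = P.sum(axis=0) * dx / np.sqrt(2*np.pi)   # eq. (13): integrate over p
R = P.sum(axis=1) * dx / np.sqrt(2*np.pi)   # eq. (14): integrate over q
C = P - np.outer(R, Q)                      # eq. (12): correlation part

print("max |C| =", np.abs(C).max())         # small: no correlations at phi = 0
\end{verbatim}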
Substituting the right hand side (RHS) of the first line of equation (\ref{e5}) into the integrands of equations (\ref{e13}) and (\ref{e14}), and then carrying out the integrations, we get
\begin{equation}
\label{e15} Q(q;\lambda,\phi) = \int_{- \infty}^{\infty} \frac{d \xi}{\sqrt{2 \pi}} \; e^{- \mbox{$\dot{\imath}$} \xi q} k_{Q}(\xi;\lambda,\phi) \widetilde{Q} (\xi)
\end{equation}
and
\begin{equation}
\label{e16} R(p;\lambda,\phi) = \int_{- \infty}^{\infty} \frac{d \eta}{\sqrt{2 \pi}} \; e^{\mbox{$\dot{\imath}$} \eta p} k_{R}(\eta;\lambda,\phi) \widetilde{R} (\eta) \; ,
\end{equation}
where
\begin{equation}
\label{e17} k_{Q}(\xi;\lambda,\phi) = \exp \left\{ - \frac{\lambda - 1}{4 \lambda} \left[ \lambda \sin^{2} (\phi/2) - \cos^{2} (\phi/2) \right] \xi^{2} \right\}
\end{equation}
and
\begin{equation}
\label{e18} k_{R}(\eta;\lambda,\phi) = \exp \left\{ - \frac{\lambda - 1}{4 \lambda} \left[ \lambda \cos^{2} (\phi/2) - \sin^{2} (\phi/2) \right] \eta^{2} \right\}
\end{equation}
are the reduced kernels responsible for the `propagation' of the squeezing. The functions $\widetilde{Q}(\xi)$ and $\widetilde{R}(\eta)$ are the respective FTs of the Husimi functions $Q(q)$ and $R(p)$ for $\lambda = 1$ (absence of squeezing). Furthermore, equations (\ref{e15}) and (\ref{e16}) are solutions of the partial differential equations
\begin{eqnarray}
\label{e19} \left[ \frac{\partial}{\partial \lambda} - \frac{\lambda^{2} \sin^{2} (\phi/2) - \cos^{2} (\phi/2)}{4 \lambda^{2}} \frac{\partial^{2}}{\partial q^{2}} \right] Q(q;\lambda,\phi) &=& 0 \; , \\
\label{e20} \left[ \frac{\partial}{\partial \lambda} - \frac{\lambda^{2} \cos^{2} (\phi/2) - \sin^{2} (\phi/2)}{4 \lambda^{2}} \frac{\partial^{2}}{\partial p^{2}} \right] R(p;\lambda,\phi) &=& 0 \; .
\end{eqnarray}
In analogy to (\ref{e8}), the reduced kernels $k_{Q}$ and $k_{R}$ have the symmetry properties
\begin{equation}
\label{e21} k_{Q(R)}(x;\lambda,\phi) = k_{R(Q)}(x;\lambda^{-1},\phi) = k_{R(Q)}(x;\lambda, \pi \pm \phi) \; ,
\end{equation}
which are directly reflected in the MDFs,
\begin{eqnarray}
\label{e22} Q(q;\lambda,\phi) &=& R(q;\lambda^{-1},\phi) = R(q;\lambda,\pi \pm \phi) \; , \nonumber \\
R(p;\lambda,\phi) &=& Q(p;\lambda^{-1},\phi) = Q(p;\lambda,\pi \pm \phi) \; .
\end{eqnarray}
Consequently, the calculation of $Q(q;\lambda,\phi)$ is sufficient for determining the function $R(p;\lambda,\phi)$, and vice-versa.

Now we analyze the structure of the kernel (\ref{e6}), which can be factorized as
\begin{equation}
\label{e23} K(\xi,\eta;\lambda,\phi) = k_{Q}(\xi;\lambda,\phi) k_{R}(\eta;\lambda,\phi) k_{C}(\xi,\eta;\lambda,\phi) \; ,
\end{equation}
where the first two factors on the RHS (the reduced kernels) `propagate' the initial HF in an independent way, {\em i.e.}, if in the `initial' $(\lambda = 1)$ HF the phase-space variables are not correlated, they will remain so for any other value of $\lambda$. The factor
\begin{equation}
\label{e24} k_{C}(\xi,\eta;\lambda,\phi) = \exp \left[ - \left( \frac{\lambda^{2} - 1}{4 \lambda} \sin \phi \right) \xi \eta \right]
\end{equation}
introduces additional (or new) correlations into an `initial' HF when $\phi \neq n \pi$, with $n \in {\rm I\!N}$. Otherwise, we obtain $k_{C}=1$ and $K(\xi,\eta;\lambda,n \pi) = k_{Q}(\xi;\lambda,n \pi) k_{R}(\eta;\lambda,n \pi)$, one reduced kernel for each variable.
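The factorization (\ref{e23}) can be checked mechanically; the short sketch below (our own illustration) evaluates the kernel (\ref{e6}) and the factors (\ref{e17}), (\ref{e18}) and (\ref{e24}) at random points and confirms the identity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def K(xi, eta, lam, phi):     # full kernel, eq. (6)
    c2, s2 = np.cos(phi/2)**2, np.sin(phi/2)**2
    return np.exp(-(lam - 1)/(4*lam) * ((lam*s2 - c2)*xi**2
                  + (lam*c2 - s2)*eta**2 + (lam + 1)*np.sin(phi)*xi*eta))

def kQ(xi, lam, phi):         # reduced kernel, eq. (17)
    return np.exp(-(lam - 1)/(4*lam)
                  * (lam*np.sin(phi/2)**2 - np.cos(phi/2)**2) * xi**2)

def kR(eta, lam, phi):        # reduced kernel, eq. (18)
    return np.exp(-(lam - 1)/(4*lam)
                  * (lam*np.cos(phi/2)**2 - np.sin(phi/2)**2) * eta**2)

def kC(xi, eta, lam, phi):    # correlation factor, eq. (24)
    return np.exp(-(lam**2 - 1)/(4*lam) * np.sin(phi) * xi * eta)

for _ in range(1000):
    xi, eta = rng.normal(size=2)
    lam, phi = rng.uniform(0.5, 5.0), rng.uniform(0.0, np.pi)
    assert np.isclose(K(xi, eta, lam, phi),
                      kQ(xi, lam, phi)*kR(eta, lam, phi)*kC(xi, eta, lam, phi))
print("K = kQ kR kC verified; kC = 1 when phi is a multiple of pi")
\end{verbatim}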
As a consequence of the factorization (\ref{e23}) it is interesting to rewrite the CDF $C(p,q;\lambda,\phi)$ as a sum of two terms,
\begin{equation}
\label{e25} C(p,q;\lambda,\phi) = C^{(1)}(p,q;\lambda,\phi) + C^{(2)}(p,q;\lambda,\phi) \; ,
\end{equation}
defined as
\begin{equation}
\label{e26} C^{(1)}(p,q;\lambda,\phi) = \int_{- \infty}^{\infty} \frac{d\xi d\eta}{2 \pi} \; e^{\mbox{$\dot{\imath}$} (\eta p - \xi q)} K(\xi,\eta;\lambda,\phi) \widetilde{C}(\xi,\eta)
\end{equation}
and
\begin{equation}
\label{e27} C^{(2)}(p,q;\lambda,\phi) = \int_{- \infty}^{\infty} \frac{d\xi d\eta}{2 \pi} \; e^{\mbox{$\dot{\imath}$} (\eta p - \xi q)} k_{Q}(\xi;\lambda,\phi) k_{R}(\eta;\lambda,\phi) \left[ k_{C} (\xi,\eta;\lambda,\phi) - 1 \right] \widetilde{R}(\eta) \widetilde{Q}(\xi) \; .
\end{equation}
Here we assume that the HF contains `initial' correlations [see equation (\ref{e12})], with its FT being $\widetilde{P}(\xi,\eta) = \widetilde{R}(\eta) \widetilde{Q}(\xi) + \widetilde{C}(\xi,\eta)$, and that the `propagation' of correlations originates from two sources. The first, the RHS of equation (\ref{e26}), is responsible for the `propagation' of squeezing into the `initial' correlations $\widetilde{C}(\xi,\eta)$. In the second, equation (\ref{e27}), `propagation' occurs only for $\phi \neq n \pi$, when additional correlations are created in the `initial' FT of the uncorrelated part of the HF, $\widetilde{R}(\eta) \widetilde{Q}(\xi)$. In appendix B, the correlations $C^{(1)}$ and $C^{(2)}$ are obtained for the Fock states in the SSR.

\section{Fock states in the squeezed states representation}

The density operator $\mbox{\boldmath $\rho$}_{n} = |n \rangle \langle n|$ mapped in the coherent states representation yields a Poisson distribution \cite{s11}
\begin{equation}
\label{e28} P_{n}(p,q) = | \langle pq | n \rangle |^{2} = \frac{1}{n!} \left( \frac{p^{2} + q^{2}} {2} \right)^{n} \exp \left( - \frac{p^{2} + q^{2}}{2} \right) \; ,
\end{equation}
with $n = 0,1,2,\ldots$. So, the respective Fourier transform
\begin{equation}
\label{e29} \widetilde{P}_{n}(\xi,\eta) = \frac{(-1)^{n}}{2^{2n} n!} \sum_{k=0}^{n} \frac{n!}{k! (n-k)!} \; {\cal H}_{2k} \left( \frac{\xi}{\sqrt{2}} \right) {\cal H}_{2(n-k)} \left( \frac{\eta}{\sqrt{2}} \right) \exp \left( - \frac{\xi^{2} + \eta^{2}}{2} \right) \; ,
\end{equation}
where ${\cal H}_{m}(x)$ is the Hermite polynomial, represents the initial step for calculating the Husimi function $P_{n}(p,q;\lambda,\phi)$ in the squeezed states representation. In fact, substituting equation (\ref{e29}) into (\ref{e5}) and evaluating the integrations over $\xi$ and $\eta$ \cite[\S 7.374-8]{s12}, we get
\begin{equation}
\label{e30} P_{n}(p,q;\lambda,\phi) = \frac{2 \sqrt{\lambda}}{\lambda + 1} \left( \frac{\lambda - 1} {\lambda + 1} \right)^{n} \frac{1}{2^{n} n!} \left| {\cal H}_{n} \left( \frac{\lambda q_{r} + \mbox{$\dot{\imath}$} p_{r}}{\sqrt{\lambda^{2} - 1}} \right) \right|^{2} \exp \left( - \frac{\lambda q_{r}^{2} + p_{r}^{2}}{\lambda + 1} \right) \; ,
\end{equation}
where $q_{r}$ and $p_{r}$ are the rotated variables defined in section II. This expression was initially obtained in \cite{s13}, and later used by Schleich {\em et al}. \cite{s1} in the study of the oscillatory behavior of the distribution $P_{n}(p,q;\lambda,0)$. Now, using the mathematical relation \cite{s14}
\begin{displaymath}
\left| {\cal H}_{n}(z) \right|^{2} = 2^{n} n!
\sum_{k=0}^{n} \; (-1)^{k} {\cal L}_{k}^{(-1/2)}(2 x^{2}) \; {\cal L}_{n-k}^{(-1/2)}(-2 y^{2}) \qquad (z = x + \mbox{$\dot{\imath}$} y) \; ,
\end{displaymath}
in which ${\cal L}_{m}^{(\alpha)}(x)$ is the associated Laguerre polynomial, equation (\ref{e30}) can be written in the equivalent form
\begin{equation}
\label{e31} P_{n}(p,q;\lambda,\phi) = \frac{2 \sqrt{\lambda}}{\lambda + 1} \left( \frac{\lambda - 1} {\lambda + 1} \right)^{n} \sum_{k=0}^{n} \; (-1)^{k} {\cal L}_{k}^{(-1/2)} \left( \frac{2 \lambda^{2} q_{r}^{2}}{\lambda^{2} - 1} \right) {\cal L}_{n-k}^{(-1/2)} \left( - \frac{2 p_{r}^{2}}{\lambda^{2} - 1} \right) \exp \left( - \frac{\lambda q_{r}^{2} + p_{r}^{2}}{\lambda + 1} \right) \; .
\end{equation}
The Husimi function $Q_{n}(q)$ is obtained with the help of equation (\ref{e28}), {\em i.e.},
\begin{equation}
\label{e32} Q_{n}(q) = \int_{- \infty}^{\infty} \frac{dp}{\sqrt{2 \pi}} \; P_{n}(p,q) = \exp \left( - \frac{q^{2}}{2} \right) \sum_{k=0}^{n} {\cal L}_{n-k}^{(-1/2)}(0) \; \frac{q^{2k}}{2^{k} k!} \; ,
\end{equation}
whose Fourier transform is given by
\begin{equation}
\label{e33} \widetilde{Q}_{n}(\xi) = \sum_{k=0}^{n} {\cal L}_{n-k}^{(-1/2)}(0) \; {\cal L}_{k}^{(-1/2)} \left( \frac{\xi^{2}}{2} \right) \exp \left( - \frac{\xi^{2}} {2} \right) \; .
\end{equation}
Substituting this result into equation (\ref{e15}) and carrying out the integration with respect to $\xi$, we get
\begin{eqnarray}
\label{e34} Q_{n}(q;\lambda,\phi) &=& \sqrt{\frac{2 \lambda}{(\lambda + 1) [ \cos^{2} (\phi/2) + \lambda \sin^{2} (\phi/2)]}} \; \sum_{k=0}^{n} \; (-1)^{k} {\cal L}_{n-k}^{(-1/2)} (0) \left[ \frac{\lambda - 1}{\lambda + 1} \frac{\cos^{2} (\phi/2) - \lambda \sin^{2} (\phi/2)}{\cos^{2} (\phi/2) + \lambda \sin^{2} (\phi/2)} \right]^{k} \nonumber \\
& & \times {\cal L}_{k}^{(-1/2)} \left[ \frac{2 \lambda^{2} q^{2}}{(\lambda^{2}-1)[ \cos^{4} (\phi/2) - \lambda^{2} \sin^{4} (\phi/2)]} \right] \exp \left[ - \frac{\lambda q^{2}}{(\lambda + 1) [ \cos^{2} (\phi/2) + \lambda \sin^{2} (\phi/2)]} \right] \; .
\end{eqnarray}
In order to obtain the MDF $R_{n}(p;\lambda,\phi)$, we only need the symmetry properties (\ref{e22}),
\begin{eqnarray}
\label{e35} R_{n}(p;\lambda,\phi) &=& \sqrt{\frac{2 \lambda}{(\lambda + 1) [ \lambda \cos^{2} (\phi/2) + \sin^{2} (\phi/2)]}} \; \sum_{k=0}^{n} \; {\cal L}_{k}^{(-1/2)}(0) \left[ \frac{\lambda - 1}{\lambda + 1} \frac{\lambda \cos^{2} (\phi/2) - \sin^{2} (\phi/2)} {\lambda \cos^{2} (\phi/2) + \sin^{2} (\phi/2)} \right]^{n-k} \nonumber \\
& & \times {\cal L}_{n-k}^{(-1/2)} \left[ - \frac{2 \lambda^{2} p^{2}}{(\lambda^{2}-1) [ \lambda^{2} \cos^{4} (\phi/2) - \sin^{4} (\phi/2)]} \right] \exp \left[ - \frac{\lambda p^{2}}{(\lambda + 1) [ \lambda \cos^{2} (\phi/2) + \sin^{2} (\phi/2)]} \right] \; .
\end{eqnarray}
Figures 1(a)-(d) show the three-dimensional plots of $P_{n}(p,q;\lambda,\phi)$ versus $n$ and $\phi$, for $\lambda = 21, 201, 1/21, 1/201$, respectively. The plane $\phi =0$ in figure 1(a) corresponds to the oscillations pointed out in \cite{s1}; these depend strongly on $\phi$, showing a periodicity of $\pi$. Now, for $\lambda=201$ [figure 1(b)] we observe the occurrence of rich structures, although the beats pointed out in \cite{s4} cannot yet be perceived. In fact, they are revealed in figures 2 and 3. Figures 2(a)-(f) show the plots of $P_{n}$ versus $n$ for $\phi =85^{\circ},\ldots,90^{\circ}$ and $\lambda = 201$, where the beat structure becomes evident; however, it disappears at angles close to $90^{\circ}$.
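The oscillation pattern underlying these figures can be reproduced with the short numerical sketch below (our own illustration; the parameter values follow the text, everything else is an implementation choice). It evaluates the logarithm of equation (\ref{e30}), computing $|{\cal H}_{n}(z)|$ for complex argument through the three-term recurrence with rescaling to avoid overflow at large $n$.
\begin{verbatim}
import math

def log_abs_hermite(n, z):
    """log |H_n(z)| for complex z, via H_{k+1} = 2 z H_k - 2 k H_{k-1},
    rescaled along the way to avoid floating-point overflow."""
    if n == 0:
        return 0.0
    h_prev, h, log_scale = complex(1.0), 2.0*z, 0.0
    for k in range(1, n):
        h_prev, h = h, 2.0*z*h - 2.0*k*h_prev
        s = abs(h)
        if s > 1e100:
            h, h_prev, log_scale = h/s, h_prev/s, log_scale + math.log(s)
    return log_scale + math.log(abs(h))

def log_P(n, p, q, lam, phi):
    """Logarithm of eq. (30), the squeezed-state Husimi function of |n><n|."""
    qr = q*math.cos(phi/2) + p*math.sin(phi/2)
    pr = p*math.cos(phi/2) - q*math.sin(phi/2)
    z = complex(lam*qr, pr) / math.sqrt(lam**2 - 1.0)
    return (math.log(2.0*math.sqrt(lam)/(lam + 1.0))
            + n*math.log((lam - 1.0)/(lam + 1.0))
            - n*math.log(2.0) - math.lgamma(n + 1.0)
            + 2.0*log_abs_hermite(n, z)
            - (lam*qr**2 + pr**2)/(lam + 1.0))

# Parameters from the text: lambda = 201, p^2 + q^2 = 98, phi near 90 degrees
lam, phi, p, q = 201.0, math.radians(86.0), 0.0, math.sqrt(98.0)
for n in range(0, 60, 3):
    print(n, math.exp(log_P(n, p, q, lam, phi)))
\end{verbatim}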
Following the arguments presented in \cite{s4} and corroborated by Mandal \cite{s7}, this beat structure is a consequence of quantum interference in phase space. Figures 3(a)-(f) show the plots of $C_{n}$ versus $n$ for the same parameters used in figures 2(a)-(f), where the beat structure is present again. This fact connects the {\em correlation} and {\em interference} effects in phase space, and provides further insight into the phenomenon. Moreover, we observe similar plots for $\lambda =1/21$ and $\lambda = 1/201$, figures 1(c) and 1(d), respectively, now shifted by $\pi/2$.

\section{Summary and conclusions}

We have considered the Husimi function $P(p,q;\lambda,\phi)$ with emphasis on the marginal and correlation distribution functions, showing that all three satisfy pseudo-diffusion equations if one considers that the squeezing parameter $\lambda$ plays the role of time. The solution, obtained by the Fourier transform method, permits calculating $P(p,q;\lambda,\phi)$, given an `initial' Husimi function $P(p,q)$, with a kernel $K(\xi,\eta;\lambda,\phi)$ responsible for the propagation of the squeezing. The decomposition of the kernel into three factors, equation (\ref{e23}), permits writing the CDF as a sum of two terms with different interpretations: the first term, equation (\ref{e26}), is the propagation of the `initial' correlations contained in $P(p,q)$; whereas the second term, equation (\ref{e27}), is responsible for introducing additional correlations into the uncorrelated `initial' product of the MDFs, $Q(q) R(p)$. Finally, we note that the formal procedure employed throughout this paper is advantageous compared with the direct and lengthy calculation presented in appendix A.

In the specific case of the number state, the decomposition of the CDF into two terms should permit a more thorough investigation of the origin of the beats. Although we have obtained formal expressions for both (see appendix B), the numerical calculation presents difficulties due to their complexity.

The multimode-squeezed-states representation can also be considered within the present formalism. In particular, M. Selvadoray {\em et al}. \cite{s15} studied the two-mode-squeezed-state photon distribution. Again they verified the presence of beats. In this case, the correlation distribution function plays a crucial role in the understanding of this effect.

\section*{Acknowledgments}

MAM acknowledges financial support from FAPESP, S\~ao Paulo, project number 97/14551-4. SSM acknowledges financial support from CNPq, Brasil. This work has also been partially supported by Conv\^enio FINEP/PRONEX Grant number 41/96/0935/00.
{ "redpajama_set_name": "RedPajamaArXiv" }
7,336
{"url":"http:\/\/mkweb.bcgsc.ca\/pi\/piday2017\/poetry.mhtml","text":"Trance opera\u2014Spente le Stellebe dramaticmore quotes\n\n# 3.14: fun\n\nDNA on 10th \u2014 street art, wayfinding and font\n\n# visualization + design\n\nThe 2019 Pi Day art celebrates digits of $\\pi$ with hundreds of languages and alphabets. If you're a kid at heart\u2014rejoice\u2014there's a special edition for you!\n\n# $\\pi$ Day 2017 Art Posters - Star charts and extinct animals and plants\n\n2019 $\\pi$ has hundreds of digits, hundreds of languages and a special kids' edition.\n2018 $\\pi$ day\n2017 $\\pi$ day\n2016 $\\pi$ approximation day\n2016 $\\pi$ day\n2015 $\\pi$ day\n2014 $\\pi$ approx day\n2014 $\\pi$ day\n2013 $\\pi$ day\nCircular $\\pi$ art\n\nOn March 14th celebrate $\\pi$ Day. Hug $\\pi$\u2014find a way to do it.\n\nFor those who favour $\\tau=2\\pi$ will have to postpone celebrations until July 26th. That's what you get for thinking that $\\pi$ is wrong. I sympathize with this position and have $\\tau$ day art too!\n\nIf you're not into details, you may opt to party on July 22nd, which is $\\pi$ approximation day ($\\pi$ \u2248 22\/7). It's 20% more accurate that the official $\\pi$ day!\n\nFinally, if you believe that $\\pi = 3$, you should read why $\\pi$ is not equal to 3.\n\nAll art posters are available for purchase.\nI take custom requests.\n\nCaelum non animum mutant qui trans mare currunt.\n\u2014Horace\n\nThis year: creatures that don't exist, but once did, in the skies.\n\nAnd a poem.\n\nThis year's $\\pi$ day song is Exploration by Karminsky Experience Inc. Why? Because \"you never know what you'll find on an exploration\".\n\nIf you like space, you'll love my the 12,000 billion light-year map of clusters, superclusters and voids. Find the biggest nothings in Bo\u00f6tes and Eridanus.\n\n## create myths and contribute!\n\nWant to contribute to the mythology behind the constellations in the $\\pi$ in the sky? Many already have a story, but others still need one. Please submit your stories!\n\nThis year I wanted to something more than visuals. Space is vast, so let's fill it with words.\n\nI asked my good friend and poet, Paolo Marcazzan, to collaborate. I described the idea for the art\u2014a universe of stars based on $\\pi$ and the extinct creatures that live within it\u2014and asked him to find the matching words.\n\nAnd they could not have been more perfect.\n\nharken to my anger, mother Nyx, for the deceptions of the gods \u2014Aeschylus, Eumenides It\u2019s not so much stifled intent but tussle and fracas in the backstage of the heart spoken in privatives and the high tone of failed mending. Says there is nothing to see and you are seeing it. A truth that likes it here in the caul of light swallowed and us locked in probability or antecedent. Where this run to untold receding and dispersed signature ends in parallels denied l\u2019amor che move il sole e l\u2019altre stelle or silenced music that is number on a brim of echo, capsized chamber drawn into our constellation, and cooling.\n\n\u2014Paolo Marcazzan\n\n## author's note\n\nThe poem situates the dark as a place for contention and ongoing confrontation. Whether in the recesses of space or heart, the poem probes the territory of distance, absence, uncertainty and muteness. 
It considers the relational as default positioning of existence (earthly, universal), and that which remains unmet within that context.\n\nLife in its dimension of cross-grained, often broken linearity is juxtaposed with a quote form Dante that references instead his vision of sidereal circularity as the benign force that moves all things in the universe. For the earthbound, the questions and concerns remain those of identity, passage, escape from transiency, and slow tempering of hope.\n\nVIEW ALL\n\n# Yearning for the Infinite \u2014 Aleph 2\n\nMon 18-11-2019\n\nDiscover Cantor's transfinite numbers through my music video for the Aleph 2 track of Max Cooper's Yearning for the Infinite (album page, event page).\n\nYearning for the Infinite, Max Cooper at the Barbican Hall, London. Track Aleph 2. Video by Martin Krzywinski. Photo by Michal Augustini. (more)\n\nI discuss the math behind the video and the system I built to create the video.\n\n# Hidden Markov Models\n\nMon 18-11-2019\n\nEverything we see hides another thing, we always want to see what is hidden by what we see.\n\u2014Rene Magritte\n\nA Hidden Markov Model extends a Markov chain to have hidden states. Hidden states are used to model aspects of the system that cannot be directly observed and themselves form a Markov chain and each state may emit one or more observed values.\n\nHidden states in HMMs do not have to have meaning\u2014they can be used to account for measurement errors, compress multi-modal observational data, or to detect unobservable events.\n\nNature Methods Points of Significance column: Hidden Markov Models. (read)\n\nIn this column, we extend the cell growth model from our Markov Chain column to include two hidden states: normal and sedentary.\n\nWe show how to calculate forward probabilities that can predict the most likely path through the HMM given an observed sequence.\n\nGrewal, J., Krzywinski, M. & Altman, N. (2019) Points of significance: Hidden Markov Models. Nature Methods 16:795\u2013796.\n\n### Background reading\n\nAltman, N. & Krzywinski, M. (2019) Points of significance: Markov Chains. Nature Methods 16:663\u2013664.\n\n# Hola Mundo Cover\n\nSat 21-09-2019\n\nMy cover design for Hola Mundo by Hannah Fry. Published by Blackie Books.\n\nHola Mundo by Hannah Fry. Cover design is based on my 2013 $\\pi$ day art. (read)\n\nCurious how the design was created? Read the full details.\n\n# Markov Chains\n\nTue 30-07-2019\n\nYou can look back there to explain things,\nbut the explanation disappears.\nYou'll never find it there.\nThings are not explained by the past.\nThey're explained by what happens now.\n\u2014Alan Watts\n\nA Markov chain is a probabilistic model that is used to model how a system changes over time as a series of transitions between states. Each transition is assigned a probability that defines the chance of the system changing from one state to another.\n\nNature Methods Points of Significance column: Markov Chains. (read)\n\nTogether with the states, these transitions probabilities define a stochastic model with the Markov property: transition probabilities only depend on the current state\u2014the future is independent of the past if the present is known.\n\nOnce the transition probabilities are defined in matrix form, it is easy to predict the distribution of future states of the system. 
We cover concepts of aperiodicity, irreducibility, limiting and stationary distributions and absorption.\n\nThis column is the first part of a series and pairs particularly well with Alan Watts and Blond:ish.\n\nGrewal, J., Krzywinski, M. & Altman, N. (2019) Points of significance: Markov Chains. Nature Methods 16:663\u2013664.\n\n# 1-bit zoomable gigapixel maps of Moon, Solar System and Sky\n\nMon 22-07-2019\n\nPlaces to go and nobody to see.\n\nExquisitely detailed maps of places on the Moon, comets and asteroids in the Solar System and stars, deep-sky objects and exoplanets in the northern and southern sky. All maps are zoomable.\n\n3.6 gigapixel map of the near side of the Moon, annotated with 6,733. (details)\n100 megapixel and 10 gigapixel map of the Solar System on 20 July 2019, annotated with 758k asteroids, 1.3k comets and all planets and satellites. (details)\n100 megapixle and 10 gigapixel map of the Northern Celestial Hemisphere, annotated with 44 million stars, 74,000 deep-sky objects and 3,000 exoplanets. (details)\n100 megapixle and 10 gigapixel map of the Southern Celestial Hemisphere, annotated with 69 million stars, 88,000 deep-sky objects and 1000 exoplanets. (details)\n\n# Quantile regression\n\nSat 01-06-2019\nQuantile regression robustly estimates the typical and extreme values of a response.\n\nQuantile regression explores the effect of one or more predictors on quantiles of the response. It can answer questions such as \"What is the weight of 90% of individuals of a given height?\"\n\nNature Methods Points of Significance column: Quantile regression. (read)\n\nUnlike in traditional mean regression methods, no assumptions about the distribution of the response are required, which makes it practical, robust and amenable to skewed distributions.\n\nQuantile regression is also very useful when extremes are interesting or when the response variance varies with the predictors.\n\nDas, K., Krzywinski, M. & Altman, N. (2019) Points of significance: Quantile regression. Nature Methods 16:451\u2013452.\n\n### Background reading\n\nAltman, N. & Krzywinski, M. (2015) Points of significance: Simple linear regression. Nature Methods 12:999\u20131000.","date":"2019-12-12 02:04:17","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.26565197110176086, \"perplexity\": 4791.110892234335}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-51\/segments\/1575540534443.68\/warc\/CC-MAIN-20191212000437-20191212024437-00301.warc.gz\"}"}
Our Gerson Veggie Lentil Loaf is the perfect dish for all you busy bees out there! Heat it up for a late night snack, load up your Tupperware for a scrumptious lunch or bring it to your next family gathering. Missing some of the ingredients? No problem! Mix it up by incorporating your favorite veggies and you've got an easy, healthy dish you'll "loaf" forever.

Blend 1 1/2 cups of lentils and the parsley together with an immersion blender or in a food processor. Mix with the remaining ingredients, except the tomato sauce, and place into a loaf pan lined with unbleached parchment paper. Cover, then bake for 25 minutes. Carefully remove the cover and pour Simple Gerson Tomato Sauce over the loaf. Bake uncovered for an additional 20 minutes. Serve with extra sauce and mashed potatoes.

Variations: substitute Gerson Ketchup (Gerson Therapy Cookbook, page 146), Gerson Gravy (page 146) or Golden Gravy (page 147) for the Simple Gerson Tomato Sauce; use red onion instead of yellow onion; add one small diced bell pepper; or substitute 1 cup of uncooked rolled oats for the rice.
{ "redpajama_set_name": "RedPajamaC4" }
5,226
\section{Introduction}\label{sec:sec:QECIntro} The micro-computer revolution of the late 20th century has arguably been of greater impact to the world than any other technological revolution in history. The advent of transistors, integrated circuits, and the modern microprocessor has spawned literally hundreds of devices, from pocket calculators to the iPod, all now integrated through an extensive worldwide communications system. However, as we enter the 21st century, the rate at which computational power is increasing is driving us very quickly to the realm of quantum physics. The size of individual transistors on modern microprocessors is becoming so small that quantum effects will soon begin to dominate over classical electronic properties. Unfortunately, with current designs for micro-electronics, quantum mechanical behavior will tend to result in unpredictable and unwanted behavior. We therefore have two choices: keep trying to suppress quantum effects in classically fabricated electronics, or move to the field of quantum information processing (QIP), where we instead exploit them. This leads to a paradigm shift in the way we view and process information and has led to considerable interest from physicists, engineers, computer scientists and mathematicians.

The counter-intuitive and strange rules of quantum physics offer enormous possibilities for information processing, and the development of a large scale quantum computer is the holy grail of many groups worldwide. While the advent of Shor's algorithm~\cite{S94} certainly spawned great interest in quantum information processing and demonstrated that the utilization of a quantum computer could lead to algorithms far more efficient than those used in classical computing, there was a great deal of debate surrounding the practicality of building a large scale, controllable, quantum system. It was well known even before the introduction of quantum information that coherent quantum states are extremely fragile, and many believed that maintaining large, multi-qubit, coherent quantum states for long enough to complete {\em any} quantum algorithm was unrealistic~\cite{U95}. Additionally, classical error correction techniques are intrinsically based on a digital framework; it was therefore unclear whether the vast amount of knowledge gained from classical coding theory could be adapted to the quantum regime, where the {\em readout} of qubits is digital but the actual manipulations are analogue. Starting in 1995, several papers appeared, in rapid succession, proposing codes which were appropriate to perform error correction on quantum data~\cite{S95,S96,CS96,LMPZ96}. This was the last theoretical aspect needed to convince the general community that quantum computation was indeed a possibility. Since this initial introduction, progress in this field has been extensive. Initial work on error correction focused heavily on developing quantum codes~\cite{S96++,CG97,G96,PVK96}, introducing a more rigorous theoretical framework for the structure and properties of Quantum Error Correction (QEC)~\cite{KL00,CRSS98,G97,KLV99,KLP05} and the introduction of concepts such as fault-tolerant quantum computation~\cite{S96+,DS96,G98}, which leads directly to the threshold theorem for concatenated QEC~\cite{KLZ96,AB97}.
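For orientation, the content of that theorem can be summarized in one heuristic formula (a standard back-of-the-envelope statement, included here for context rather than derived in this review): if one level of encoding maps a physical error rate $p$ to a logical rate of order $p^2/p_{\text{th}}$, then $k$ levels of concatenation give
\begin{equation}
p_k \approx p_{\text{th}}\left(\frac{p}{p_{\text{th}}}\right)^{2^k},
\end{equation}
which can be made arbitrarily small, provided $p < p_{\text{th}}$, at a resource cost that grows only exponentially in $k$ and hence polylogarithmically in the target accuracy.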
In more recent years QEC protocols have been developed for various systems, such as continuous variables~\cite{LS98,B98,L08,ATKYBLF08}, ion-traps and other systems containing motional degrees of freedom~\cite{LW03+,ST98}, adiabatic computation~\cite{JFS06} and globally controlled quantum computers~\cite{BBK03}. Additionally, work still continues not only on developing more complicated (and in some ways, more technologically useful) protocols such as subsystem codes~\cite{B06} and topological codes~\cite{K97,DKLP02,RHG07,FSG08}, but also on advanced techniques to implement error correction in a fault-tolerant manner~\cite{S97,S02,DA07}.

Along with QEC, other methods of protecting quantum information were also developed. These other techniques would technically be placed in a separate category of error avoidance rather than error correction. The best known error avoidance techniques are protocols such as decoherence free subspaces (DFS)~\cite{DG97,DG98,ZR97,ZR97+,DG98+,LW03}. While this protocol has the mathematical structure of a self-correcting quantum code, it is largely a technique to suppress certain, well structured, noise models. As with QEC, the field of error avoidance is vast, now incorporating ideas from optimal control to create specially designed control sequences that counteract errors induced by environmental coupling. These methods of dynamical decoupling range from simple structures such as Bang-Bang control~\cite{VL98,VT98,Z99} to more complicated and generalized protocols that help decouple qubits from the environment~\cite{VKL99,FLP04,VK03,VK05}.

This review deals exclusively with the concepts of QEC and fault-tolerant quantum computation. Many papers have reviewed error correction and fault-tolerance~\cite{G97+,NC00,G00,KLABVZ02,S03+,G09}; however, to cater for a large audience, we attempt to describe QEC and fault-tolerance in a much more basic manner, largely through examples. Instead of providing a more rigorous review of error correction, we try to focus on the more practical issues involved when working with these ideas. For those who have recently begun investigating quantum information processing, or those who are focused on other important theoretical and/or experimental aspects related to quantum computing, searching through this enormous collection of work is daunting, especially if a basic working knowledge of QEC is all that is required. We hope that this review of the basic aspects of QEC and fault-tolerance will allow those with little knowledge of the field to quickly become accustomed to the various techniques and tricks that are commonly used. We begin the discussion in section~\ref{sec:prelim}, where we share some preliminary thoughts on the required properties of any quantum error correcting protocol. In section~\ref{sec:error} we review some basic noise models from the context of how they influence quantum algorithms. Section~\ref{sec:sec:QEC} introduces quantum error correction through the traditional example of the 3-qubit code, illustrating the circuits used for encoding and correction and why the principle of redundant encoding suppresses the failure of encoded qubits. Section~\ref{sec:sec:QEC} then introduces the stabilizer formalism~\cite{G97+}, demonstrating how QEC circuits are synthesized once the structure of the code is known.
In section~\ref{sec:sec:decoherence} we then briefly return to the noise models and relate the abstract analysis of QEC, where errors are assumed to be discrete and probabilistic, to some of the physical mechanisms which can cause errors. Sections~\ref{sec:Fault-tolerance} and~\ref{sec:operations} introduce the concept of fault-tolerant error correction, the threshold theorem and how logical gate operations can be applied directly to quantum data. We then move on to circuit synthesis in section~\ref{sec:FTcircuit}, presenting a basic fault-tolerant circuit design for logical state preparation using the $[[7,1,3]]$ Steane code as a representative example of how to synthesize fault-tolerant circuits from the stabilizer structure of quantum codes. Finally, in section~\ref{sec:modern} we review specific codes for qubit loss and examine two of the more modern techniques for error correction. We briefly examine quantum subsystem codes~\cite{B06} and topological surface codes~\cite{DKLP02,FSG08}, due to both their theoretical elegance and their increasing relevance in quantum architecture designs~\cite{DFSG08}.

\section{Preliminaries} \label{sec:prelim} Before discussing the effect of errors and the basics of Quantum Error Correction (QEC), we first dispense with the very basics of qubits and quantum gates. We assume a basic working knowledge of quantum information~\cite{EJ96,NC00}; this brief discussion is used simply to define our notation for the remainder of this review. The fundamental unit of quantum information is the qubit, which, unlike a classical bit, can exist in coherent superpositions of two states, denoted $\ket{0}$ and $\ket{1}$. These basis states can be photonic polarization, spin states, electronic states of an ion or charge states of superconducting systems. An arbitrary state of an individual qubit, $\ket{\phi}$, can be expressed as, \begin{equation} \ket{\phi} = \alpha\ket{0} + \beta\ket{1} \end{equation} where normalization requires $|\alpha|^2+|\beta|^2 = 1$. Quantum gate operations are represented by unitary operations acting on the Hilbert space of a qubit array. Unlike classical information processing, conservation of probability for quantum states requires that all operations be reversible and hence unitary. When describing a quantum gate on an individual qubit, any dynamical operation, $G$, is a member of the unitary group $U(2)$, which consists of all $2\times 2$ matrices satisfying $G^{\dagger} = G^{-1}$. Up to a global (and unphysical) phase factor, any single qubit operation can be expressed as a linear combination of the generators of $SU(2)$ as, \begin{equation} G = c_I \sigma_I + c_x \sigma_x + c_y\sigma_y + c_z\sigma_z \end{equation} where, \begin{equation} \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \end{equation} are the Pauli matrices, $\sigma_I$ is the $2\times 2$ identity matrix and the complex coefficients $(c_I,c_x,c_y,c_z)$ satisfy $|c_I|^2+|c_x|^2+|c_y|^2+|c_z|^2 = 1$.
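As a quick illustration of this decomposition (a minimal numerical sketch, not part of the formalism itself), the coefficients can be extracted from any single qubit gate via the trace inner product $c_i = \text{Tr}(\sigma_i^{\dagger} G)/2$; the example below checks this for the Hadamard gate in Python with NumPy:
\begin{verbatim}
import numpy as np

# Pauli basis for 2x2 matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_coefficients(G):
    """Return (c_I, c_x, c_y, c_z) with G = c_I*I + c_x*X + c_y*Y + c_z*Z."""
    return tuple(np.trace(P.conj().T @ G) / 2 for P in (I, X, Y, Z))

H = (X + Z) / np.sqrt(2)                    # the Hadamard gate
coeffs = pauli_coefficients(H)
print(coeffs)                               # (0, 1/sqrt(2), 0, 1/sqrt(2)), up to rounding
print(sum(abs(c)**2 for c in coeffs))       # 1.0: consistent with the normalization above
\end{verbatim}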
The concept of Quantum Error Correction (QEC) is fundamental to the large scale viability of quantum information processing. Although the field is largely based on classical coding theory, there are several issues that need to be considered when transferring classical error correction techniques to the quantum regime. First, coding based on data-copying, which is extensively used in classical error correction, cannot be used due to the no-cloning theorem of quantum mechanics~\cite{WZ82}. This result implies that there exists no transformation resulting in the following mapping, \begin{equation} U\ket{\phi} \otimes \ket{\psi} = \ket{\phi} \otimes \ket{\phi}, \end{equation} i.e. it is impossible to perfectly copy an unknown quantum state. This means that quantum data cannot be protected from errors by simply making multiple copies. Secondly, direct measurement cannot be used to effectively protect against errors, since this will act to destroy any quantum superposition that is being used for computation. Error correction protocols must therefore be employed which can detect and correct errors without determining {\em any} information regarding the qubit state. Finally, unlike classical information, qubits are susceptible not only to traditional bit errors, $\ket{0} \leftrightarrow \ket{1}$, but also to phase errors, $\ket{1} \leftrightarrow -\ket{1}$. Hence any error correcting procedure needs to be able to correct for both simultaneously. At its most basic level, QEC utilizes the idea of redundant encoding, where quantum data is protected by extending the size of the Hilbert space for a single, logically encoded qubit and essentially spreading out the information over multiple qubits. This way, errors only perturb codeword states by small amounts, which can then be detected and corrected without directly measuring the quantum state of any qubit.

\section{Quantum Errors: Cause and Effect}\label{sec:error} Before we begin discussing the details of quantum error correction, we first examine some of the common sources of errors in quantum information processing and contextualize what they imply for computation. We will consider several important sources of errors and how they influence a trivial, single qubit, quantum algorithm. This trivial algorithm will be a computation consisting of a single qubit, initialized in the $\ket{0}$ state, undergoing $N$ identity operations, such that the final, error-free state is, \begin{equation} \ket{\psi}_{\text{final}} = \prod^N \sigma_I\ket{0} = \ket{0}. \end{equation} Measurement of the qubit in the $\ket{0}$, $\ket{1}$ basis will consequently yield the result 0 with a probability of unity. We examine, independently, several common sources of error through the effect they have on this simple quantum algorithm. Hopefully, this introductory section will show that while quantum errors are complicated physical effects, in QIP the relevant measure is the theoretical success probability of a given quantum algorithm. \subsection{Coherent Quantum Errors: You don't know what you are doing!} The first possible source of error is coherent, systematic control error. This type of error is typically associated with poor system control and/or characterization, where imprecise manipulation of the qubit introduces inaccurate Hamiltonian dynamics. As this source of error is produced by inaccurate control of the system dynamics, it does not produce mixed states from pure states (i.e. it is a coherent, unitary error that does not destroy the quantum coherence of the qubit, but instead causes you to apply an undesired gate operation). In our trivial algorithm, we are able to model this in several different ways.
To keep things simple, we assume that incorrect characterization of the control dynamics leads to an identity gate which is not $\sigma_I$, but instead introduces a small rotation around the $X$-axis of the Bloch sphere, i.e. \begin{equation} \ket{\psi}_{\text{final}} = \prod^N e^{i\epsilon \sigma_x}\ket{0} = \cos(N\epsilon)\ket{0} + i\sin(N\epsilon)\ket{1}. \end{equation} We now measure the system in the $\ket{0}$, $\ket{1}$ basis. In the ideal case, the computer should collapse to the state $\ket{0}$ with a probability of one, $P(\ket{0})=1$. However we now find, \begin{equation} \begin{aligned} &P(\ket{0}) = \cos^2(N\epsilon) \approx 1- (N\epsilon)^2, \\ &P(\ket{1}) = \sin^2(N\epsilon) \approx (N\epsilon)^2. \end{aligned} \end{equation} Hence, the probability of error in this trivial quantum algorithm is given by $p_{\text{error}} \approx (N\epsilon)^2$, which will be small given that $N\epsilon \ll 1$. The systematic error in this system grows with both the small systematic over-rotation, $\epsilon$, and the total number of applied identity operations, $N$.
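This accumulation is easy to verify numerically. The following is a minimal sketch (Python with NumPy; the parameter values are illustrative only) that applies $N$ imperfect identity gates to $\ket{0}$ and compares the resulting error probability with the $(N\epsilon)^2$ estimate:
\begin{verbatim}
import numpy as np

eps, N = 1e-3, 100                  # illustrative over-rotation and gate count

X = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.cos(eps) * np.eye(2) + 1j * np.sin(eps) * X   # exp(i*eps*X), exactly

psi = np.array([1, 0], dtype=complex)                # |0>
for _ in range(N):                                   # N imperfect identity gates
    psi = U @ psi

p_error = abs(psi[1])**2                             # probability of reading |1>
print(p_error, (N * eps)**2)        # ~9.97e-3 versus the estimate 1.00e-2
\end{verbatim}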
\subsection{Decoherence: The devil is in the environment} Environmental decoherence is another important source of errors in quantum systems. Once again we will take a very basic example of a decoherence model and examine how it influences our trivial algorithm. Later, in section~\ref{sec:sec:decoherence}, we will illustrate a more complicated decoherence model that arises from standard mechanisms. Consider a very simple environment, which is another two level quantum system. This environment has two basis states, $\ket{e_0}$ and $\ket{e_1}$, which satisfy the completeness relations, \begin{equation} \langle e_i \vert e_j\rangle = \delta_{ij}, \quad \ket{e_0}\bra{e_0} + \ket{e_1}\bra{e_1} = I. \end{equation} We will also assume that the environment couples to the qubit in a specific way: when the qubit is in the $\ket{1}$ state, the coupling flips the environmental state, while if the qubit is in the $\ket{0}$ state nothing happens to the environment. Additionally, as we anticipate the effect of this decoherence model, we will slightly alter our trivial algorithm. Rather than considering a qubit prepared in the $\ket{0}$ state and applying $N$ identity operations, we instead modify the algorithm to the following, \begin{equation} \begin{aligned} \ket{\psi}_{\text{final}} = H\sigma_IH\ket{0} &=H\sigma_I \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \\ &= H \frac{1}{\sqrt{2}}(\ket{0}+\ket{1}) = \ket{0}. \end{aligned} \end{equation} Essentially we are performing two Hadamard ($H$) operations separated by a wait stage, represented by the identity gate. Finally, this model assumes the system/environment interaction only occurs during this wait stage of the algorithm. As with the previous algorithm, we should measure the state $\ket{0}$ with probability one. The reason for modifying our trivial algorithm is that this specific decoherence model acts to reduce coherence between the $\ket{0}$ and $\ket{1}$ basis states, and hence we require a coherent superposition to observe any effect from the environmental coupling.

We now assume that the environment starts in the pure state, $\ket{E} = \ket{e_0}$, and couples to the system such that, \begin{equation} H\sigma_I H\ket{0}\ket{E} = \frac{1}{2}(\ket{0}+\ket{1})\ket{e_0} + \frac{1}{2}(\ket{0}-\ket{1})\ket{e_1}. \end{equation} As we are considering environmental decoherence, pure states will be transformed into classical mixtures; hence we now move into the density matrix representation for the state $H\sigma_I H\ket{0}\ket{E}$, \begin{equation} \begin{aligned} \rho_{f} &= \frac{1}{4}( \ket{0}\bra{0} + \ket{0}\bra{1}+\ket{1}\bra{0}+\ket{1}\bra{1})\ket{e_0}\bra{e_0}\\ &+\frac{1}{4}( \ket{0}\bra{0} - \ket{0}\bra{1}-\ket{1}\bra{0}+\ket{1}\bra{1})\ket{e_1}\bra{e_1} \\ &+ \frac{1}{4}( \ket{0}\bra{0} - \ket{0}\bra{1}+\ket{1}\bra{0}-\ket{1}\bra{1})\ket{e_0}\bra{e_1}\\ &+\frac{1}{4}( \ket{0}\bra{0} + \ket{0}\bra{1}-\ket{1}\bra{0}-\ket{1}\bra{1})\ket{e_1}\bra{e_0}. \end{aligned} \end{equation} Since we do not measure the environmental degrees of freedom, we trace over this part of the system, giving, \begin{equation} \begin{aligned} \text{Tr}_{E}(\rho_f) &= \frac{1}{4}( \ket{0}\bra{0} + \ket{0}\bra{1}+\ket{1}\bra{0}+\ket{1}\bra{1})\\ &+\frac{1}{4}( \ket{0}\bra{0} - \ket{0}\bra{1}-\ket{1}\bra{0}+\ket{1}\bra{1}) \\ &= \frac{1}{2}(\ket{0}\bra{0}+\ket{1}\bra{1}). \end{aligned} \end{equation} Measurement of the system will consequently return $\ket{0}$ 50\% of the time and $\ket{1}$ 50\% of the time. This final state is a complete mixture of the qubit states and is consequently classical. The coupling to the environment removed all the coherence between the $\ket{0}$ and $\ket{1}$ states, and consequently the second Hadamard transform, intended to rotate $(\ket{0}+\ket{1})/\sqrt{2} \rightarrow \ket{0}$, has no effect. Since we assumed that the system/environment coupling during the wait stage causes the environmental degree of freedom to ``flip" when the qubit is in the $\ket{1}$ state, this decoherence model implicitly incorporates a temporal effect. The temporal interval of our identity gate in the above algorithm is long enough to enact this full controlled-flip operation. If we assumed a controlled rotation that is not a full flip on the environment, the final mixture would not be 50/50. Instead there would be a residual coherence between the qubit states and an increased probability of our algorithm returning $\ket{0}$.
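The trace-out step above can be checked with a few lines of linear algebra. Below is a minimal numerical sketch (Python/NumPy; purely illustrative) that builds the joint state $H\sigma_I H\ket{0}\ket{E}$ with the controlled-flip coupling and traces out the environment:
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
# Controlled flip of the environment, conditioned on the qubit being |1>.
CFLIP = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=complex)

psi = np.kron(np.array([1, 0], dtype=complex),   # qubit |0> (first factor)
              np.array([1, 0], dtype=complex))   # environment |e_0> (second factor)

psi = np.kron(H, I) @ psi      # first Hadamard
psi = CFLIP @ psi              # wait stage: system-environment coupling
psi = np.kron(H, I) @ psi      # second Hadamard

rho = np.outer(psi, psi.conj())
# Partial trace over the environment (second tensor factor).
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.round(rho_sys.real, 3))   # [[0.5, 0], [0, 0.5]]: a 50/50 mixture
\end{verbatim}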
Section~\ref{sec:sec:decoherence} revisits the decoherence model and illustrates how time-dependence is explicitly incorporated.

\subsection{Loss, Leakage, Measurement and Initialization: Variations of the above} \label{sec:lossleakage} Other sources of error, such as qubit initialization, measurement errors, qubit loss and qubit leakage, are modeled in a very similar manner. Measurement errors and qubit loss are modeled in the same way as environmental decoherence. Measurement errors are described by utilizing the following measurement projection onto the qubit space, \begin{equation} A = (1-p_M)\ket{0}\bra{0} + p_M\ket{1}\bra{1} \end{equation} where $p_M \in [0,1]$ is the probability of measurement error. If we have a pure state $\rho = \ket{0}\bra{0}$, the probability of measuring $\ket{0}$ is, \begin{equation} P(\ket{0}) = \text{Tr}(A\rho) = (1-p_M), \end{equation} indicating that the correct result is observed with probability $1-p_M$. Qubit loss is modeled in a similar manner. When a qubit is lost, it is essentially coupled to the environment, which acts to measure the system, with the classical information lost. This coupling follows the decoherence analysis shown earlier, where a 50/50 mixed state of the qubit results. Therefore the projector onto the qubit space is given by $A = \frac{1}{2}(\ket{0}\bra{0} + \ket{1}\bra{1})$, which is identical to simply tracing over the lost qubit and equivalent to a measurement error of probability $p_M=0.5$. With this type of error channel, not only is the physical object lost (and hence cannot be directly measured), but an initially pure qubit is converted to a completely mixed state. While this model of qubit loss is equivalent to environmental coupling, correcting this type of error requires additional machinery on top of standard QEC protocols. The difficulty with qubit loss is the initial detection of whether the qubit is actually present. While standard correction protocols can protect against the loss of information on a qubit, they still assume that the physical object exists in the computer. Hence in loss correction protocols, an initial non-demolition detection method must be employed (which determines if the qubit is actually present without performing a projective measurement on the computational state) before standard correction can be utilized to correct the error.

Initialization of the qubit can be modeled using either a coherent systematic error model or the decoherence model. The specific methodology depends largely on the physical mechanisms used to initialize the system. If a decoherence model is employed, initialization is modeled exactly the same way as imperfect measurement. If we have a probability $p_I$ of initialization error, the initial state of the system is given by the mixture, \begin{equation} \rho_i = (1-p_I)\ket{0}\bra{0} + p_I\ket{1}\bra{1}. \end{equation} In contrast, we could consider an initialization model which is achieved via a coherent unitary operation where the target is the desired initial state. In this case, the initial state is pure, but contains a non-zero amplitude of the undesired state, for example, \begin{equation} \ket{\psi}_i = \alpha\ket{0} + \beta\ket{1} \end{equation} where $|\alpha|^2+|\beta|^2 = 1$ and $|\beta|^2 \ll 1$. The interpretation of these two types of initialization models is identical to that of the coherent and incoherent models presented above. Again, the effect of these types of errors relates to the probability of measuring the system in an erred state.

One final type of error that we briefly mention is the problem of qubit leakage. Qubit leakage manifests itself due to the fact that most systems utilized for qubit applications are not simple two level quantum systems. For example, Fig.~\ref{fig:calcium} (from Ref.~\cite{S97+}) illustrates the energy level structure for a $^{43}$Ca$^+$ ion utilized for ion trap quantum computing at Oxford. \begin{figure}[ht] \begin{center} \includegraphics[width=0.55\textwidth]{Calcium.pdf} \caption{(from Ref.~\cite{S97+}) Energy level structure for the $^{43}$Ca$^+$ ion investigated by the Oxford ion-trapping group. The structure of this ion is clearly not a 2-level quantum system; hence leakage into non-qubit states is an important factor to consider.} \label{fig:calcium} \end{center} \end{figure} The qubit in this system is defined by only two electronic states; however, the system itself contains many more levels (including some which are used for qubit readout and initialization through optical pumping and photo-luminescence). As with systematic errors, leakage can occur when improper control is applied to such a system.
In the case of ion-traps, qubit transitions are performed by focusing finely tuned lasers resonant on the relevant transitions. If the laser frequency fluctuates, or additional levels are not sufficiently detuned from the qubit resonance, the following transformation could occur, \begin{equation} U\ket{0} = \alpha\ket{0} + \beta\ket{1} + \gamma\ket{2}, \end{equation} where the state $\ket{2}$ is a third level which is now populated due to improper control. The actual effect of this type of error can manifest in several different ways. The primary problem with leakage is that it violates the basic assumption of a qubit structure to the computer. As quantum circuits and algorithms are fundamentally designed assuming the computational array is a collection of 2-level systems, operators of the above form (which in this case operate over a 3-level space) will naturally induce unwanted dynamics. Another important implication of applying non-qubit operations is how these extra levels interact with the environment, and hence how decoherence affects the system. For example, in the above case, the unwanted level, $\ket{2}$, may be extremely short lived, leading to the emission of a photon and the system relaxing back to the ground state. For these reasons, leakage is one of the most problematic error channels to correct using QEC. In general, leakage induced errors need to be corrected via the non-demolition detection of a leakage event (i.e. determining if the quantum system is confined to the qubit subspace without performing a measurement discriminating the $\ket{0}$ and $\ket{1}$ states~\cite{P98,GBP97,VWW05}) or through the use of complicated pulse control which acts to re-focus an improperly confined quantum gate back to the qubit subspace~\cite{WBL02,BLWZ05}. In the context of mass manufacturing of qubit systems, leakage would be quantified immediately after the fabrication of a device, using intrinsic characterization protocols such as those discussed in Ref.~\cite{DSOCH07}. If a particular system is found to be improperly confined to the qubit subspace, it would simply be discarded. Employing characterization at this stage would eliminate the need to implement pulse control of leakage, shortening gate times and ultimately reducing error rates in the computer.

In this section we introduced the basic ideas of quantum errors and how they affect the success of a quantum algorithm. Section~\ref{sec:sec:decoherence} will return, in a more focused manner, to error models and how they relate to error correction in a quantum computer.

\section{QEC, a good starting point: The 3-qubit code} The 3-qubit bit-flip code is traditionally used as a basic introduction to the concept of Quantum Error Correction. However, it should be emphasized that the 3-qubit code {\em does not} represent a full quantum code, because it cannot simultaneously correct for both bit and phase flips (see section~\ref{sec:sec:decoherence}), which is a sufficient condition for correcting an arbitrary error mapping on a single qubit. This code is a standard repetition code, which was extended by Shor~\cite{S95} to the full 9-qubit quantum code: the first demonstration that QEC was possible. The 3-qubit code encodes a single logical qubit into three physical qubits with the property that it can correct for a single $\sigma_x \equiv X$ bit-flip error.
The two logical basis states $\ket{0}_L$ and $\ket{1}_L$ are defined as, \begin{equation} \ket{0}_L = \ket{000}, \quad \quad \ket{1}_L = \ket{111}, \end{equation} such that an arbitrary single qubit state $\ket{\psi} = \alpha\ket{0} + \beta\ket{1}$ is mapped to, \begin{equation} \begin{aligned} \alpha\ket{0} + \beta\ket{1} &\rightarrow \alpha\ket{0}_L + \beta\ket{1}_L \\ &= \alpha\ket{000} + \beta\ket{111} = \ket{\psi}_L. \end{aligned} \end{equation} Fig.~\ref{fig:3qubit} illustrates the quantum circuit required to encode a single logical qubit via the initialization of two ancilla qubits and two CNOT gates. \begin{figure}[ht] \begin{center} \includegraphics[width=0.3\textwidth]{3qubit.pdf} \caption{Quantum circuit to prepare the $\ket{0}_L$ state for the 3-qubit code, where an arbitrary single qubit state, $\ket{\psi}$, is coupled to two freshly initialized ancilla qubits via CNOT gates to prepare $\ket{\psi}_L$.} \label{fig:3qubit} \end{center} \end{figure} The reason why this code is able to correct for a single bit flip error is the binary distance between the two codeword states. Notice that three individual bit flips are required to take $\ket{0}_L \leftrightarrow \ket{1}_L$; hence, if we assume $\ket{\psi} = \ket{0}_L$, a single bit flip on any qubit leaves the final state closer to $\ket{0}_L$ than $\ket{1}_L$. The distance between two codeword states, $d$, defines the number of errors that can be corrected, $t$, as $t = \lfloor(d-1)/2\rfloor$. In this case, $d=3$, hence $t=1$.

How are we able to correct errors using this code without directly measuring or obtaining information about the logical state? Two additional ancilla qubits are introduced, which are used to extract {\em syndrome} information (information regarding possible errors) from the data block without discriminating the exact state of any qubit, as Fig.~\ref{fig:3qubit2} illustrates. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.65\textwidth]{3qubit2.pdf} \caption{Circuit required to encode and correct for a single $X$-error. We assume that after encoding a single bit-flip occurs on one of the three qubits (or no error occurs). Two initialized ancilla are then coupled to the data block, which only checks the parity between qubits. These ancilla are then measured, with the measurement result indicating where (or if) an error has occurred, without directly measuring any of the data qubits. Using this {\em syndrome} information, the error can be corrected with a classically controlled $X$ gate. } \label{fig:3qubit2} \end{center} \end{figure*} For the sake of simplicity we assume that all gate operations are perfect and the only place where the qubits are susceptible to error is the region between encoding and correction. We will return to this issue in section~\ref{sec:Fault-tolerance} when we discuss fault-tolerance. We also assume that, at most, a single, complete bit flip error occurs on one of the three data qubits. Correction proceeds by introducing two ancilla qubits and performing a sequence of CNOT gates, which checks the parity of the three qubits. Table~\ref{tab:errors} summarizes the state of the whole system, for each possible error, just prior to measurement. \begin{table}[ht!]
\begin{center} \vspace*{4pt} \begin{tabular}{c|c} Error Location & Final State, $\ket{\text{data}}\ket{\text{ancilla}}$ \\ \hline No Error & $\alpha\ket{000}\ket{00} + \beta\ket{111}\ket{00}$ \\ Qubit 1 & $\alpha\ket{100}\ket{11} + \beta\ket{011}\ket{11}$ \\ Qubit 2 & $\alpha\ket{010}\ket{10} + \beta\ket{101}\ket{10}$ \\ Qubit 3 & $\alpha\ket{001}\ket{01} + \beta\ket{110}\ket{01}$ \\ \end{tabular} \caption{Final state of the five qubit system prior to the syndrome measurement, for no error or a single $X$ error on one of the qubits. The last two qubits represent the state of the ancilla. Note that each possible error results in a unique measurement result (syndrome) of the ancilla qubits. This allows an $X$ correction gate to be applied to the data block, classically controlled by the syndrome result. At no point during correction do we learn anything about $\alpha$ or $\beta$.} \label{tab:errors} \end{center} \end{table} For each possible situation, either no error or a single bit-flip error, the ancilla qubits are flipped to a unique state based on the parity of the data block. These qubits are then measured to obtain the classical {\em syndrome} result. The result of the measurement then dictates whether an $X$ correction gate needs to be applied to a specific qubit, i.e. \begin{widetext} \begin{equation} \begin{aligned} &\text{Ancilla Measurement:} \quad \ket{00}, \quad \text{Collapsed State:} \quad \alpha\ket{000} + \beta\ket{111} \quad \therefore \text{Clean State} \\ &\text{Ancilla Measurement:} \quad \ket{01}, \quad \text{Collapsed State:} \quad \alpha\ket{001} + \beta\ket{110} \quad \therefore \text{Bit Flip on Qubit 3} \\ &\text{Ancilla Measurement:} \quad \ket{10}, \quad \text{Collapsed State:} \quad \alpha\ket{010} + \beta\ket{101} \quad \therefore \text{Bit Flip on Qubit 2} \\ &\text{Ancilla Measurement:} \quad \ket{11}, \quad \text{Collapsed State:} \quad \alpha\ket{100} + \beta\ket{011} \quad \therefore \text{Bit Flip on Qubit 1} \\ \end{aligned} \end{equation} \end{widetext} Provided that only a single error has occurred, the data block is restored. Notice that at no point during correction do we gain any information regarding the coefficients $\alpha$ and $\beta$; hence the computational wave-function remains intact during correction. This code will only work if a maximum of one error occurs. If two $X$ errors occur, then by tracking the circuit through you will see that the syndrome result becomes ambiguous. For example, if an $X$ error occurs on both qubits one and two, the syndrome result will be $\ket{01}$. This will cause us to mis-correct by applying an $X$ gate to qubit 3. Therefore, two errors induce a logical bit flip and cause the code to fail, as expected.
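The discrete-error behaviour of the code is compact enough to simulate directly. The sketch below (Python/NumPy; an illustration of the logic of Fig.~\ref{fig:3qubit2} rather than any particular software package) encodes $\ket{\psi}_L$, applies a bit flip to a chosen qubit, computes the two parity checks classically and applies the corresponding correction:
\begin{verbatim}
import numpy as np

def X_on(state, k):
    """Apply a bit flip to qubit k (0-indexed) of a 3-qubit state vector."""
    new = np.zeros_like(state)
    for b in range(8):
        new[b ^ (1 << (2 - k))] = state[b]   # qubit 0 is the leftmost bit
    return new

alpha, beta = 0.6, 0.8                       # arbitrary normalized amplitudes
psi = np.zeros(8, dtype=complex)
psi[0b000], psi[0b111] = alpha, beta         # encoded state a|000> + b|111>

corrupted = X_on(psi, 1)                     # bit flip on qubit 2

# Syndrome: parity of qubits (1,2) and qubits (1,3), matching the table above.
b = int(np.flatnonzero(corrupted)[0])        # any populated basis state works
s1 = ((b >> 2) ^ (b >> 1)) & 1
s2 = ((b >> 2) ^ b) & 1

syndrome_to_qubit = {(1, 1): 0, (1, 0): 1, (0, 1): 2}
if (s1, s2) in syndrome_to_qubit:
    corrupted = X_on(corrupted, syndrome_to_qubit[(s1, s2)])

print(np.allclose(corrupted, psi))           # True: the encoded state is restored
\end{verbatim}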
To be absolutely clear on how QEC acts to restore the system and protect against errors, let us now consider a different and more physically realistic error mapping. We will assume that the errors acting on the qubits are coherent rotations of the form $U = \exp (i\epsilon \sigma_x)$ on each qubit, with $\epsilon \ll 1$. We choose coherent rotations so that we can remain in the state vector representation. This is not a necessary requirement; however, more general incoherent mappings would require us to move to density matrices. We assume that each qubit experiences the same error; hence the error operator acting on the state is, \begin{equation} \begin{aligned} \ket{\psi}_E = E&\ket{\psi}_L,\\ E = U^{\otimes 3} &= (\cos(\epsilon)\sigma_I + i\sin(\epsilon)\sigma_x)^{\otimes 3} \\ &= c_0\sigma_I\sigma_I\sigma_I+ c_1 (\sigma_x\sigma_I\sigma_I+\sigma_I\sigma_x\sigma_I+\sigma_I\sigma_I\sigma_x) \\ &+ c_2 (\sigma_x\sigma_x\sigma_I+\sigma_I\sigma_x\sigma_x+\sigma_x\sigma_I\sigma_x) \\ &+ c_3 \sigma_x\sigma_x\sigma_x, \end{aligned} \end{equation} where, \begin{equation} \begin{aligned} &c_0 = \cos^3(\epsilon), \\ &c_1 = i\cos^2(\epsilon)\sin(\epsilon), \\ &c_2 = -\cos(\epsilon)\sin^2(\epsilon),\\ &c_3 = -i\sin^3(\epsilon). \end{aligned} \end{equation} Now let us examine the transformation that occurs when we run the error correction circuit in Fig.~\ref{fig:3qubit2}, which we denote via the unitary transformation, $U_{QEC}$, over {\em both} the data and ancilla qubits, \begin{widetext} \begin{equation} \begin{aligned} U_{QEC} E\ket{\psi}_L\ket{00} &= c_0\ket{\psi}_L\ket{00} +c_1 (\sigma_x\sigma_I\sigma_I\ket{\psi}_L\ket{11} + \sigma_I\sigma_x\sigma_I\ket{\psi}_L\ket{10} + \sigma_I\sigma_I\sigma_x\ket{\psi}_L\ket{01}) \\ &+c_2 (\sigma_x\sigma_x\sigma_I \ket{\psi}_L\ket{01} + \sigma_I\sigma_x\sigma_x\ket{\psi}_L\ket{11} +\sigma_x\sigma_I\sigma_x\ket{\psi}_L\ket{10}) +c_3\sigma_x\sigma_x\sigma_x\ket{\psi}_L\ket{00}. \end{aligned} \end{equation} \end{widetext} Once again, the ancilla block is measured and the appropriate correction operator is applied, yielding the results (up to renormalization), \begin{widetext} \begin{equation} \begin{aligned} &\text{Ancilla Measurement:} \quad \ket{00}, \quad \text{Collapsed State (with correction) :} \quad c_0\ket{\psi}_L + c_3\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\ &\text{Ancilla Measurement:} \quad \ket{01}, \quad \text{Collapsed State (with correction) :} \quad c_1\ket{\psi}_L + c_2\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\ &\text{Ancilla Measurement:} \quad \ket{10}, \quad \text{Collapsed State (with correction) :} \quad c_1\ket{\psi}_L + c_2\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\ &\text{Ancilla Measurement:} \quad \ket{11}, \quad \text{Collapsed State (with correction) :} \quad c_1\ket{\psi}_L + c_2\sigma_x\sigma_x\sigma_x\ket{\psi}_L \\ \end{aligned} \end{equation} \end{widetext} In each case, after correction (based on the syndrome result), we are left with approximately the same state: a superposition of a ``clean" state with the logically flipped state, $\sigma_x\sigma_x\sigma_x\ket{\psi}_L$. The important things to notice are the amplitudes of the terms in the superposition.
If we consider the unitary $U$ acting on a single, unencoded qubit, the rotation takes, \begin{equation} U\ket{\psi} = \cos(\epsilon)\ket{\psi} + i\sin(\epsilon)\sigma_x\ket{\psi}. \end{equation} Consequently, the fidelity of the single qubit state is, \begin{equation} F_{\text{unencoded}} = |\bra{\psi}U\ket{\psi}|^2 = \cos^2(\epsilon) \approx 1-\epsilon^2. \end{equation} In contrast, the fidelity of the encoded qubit state after a cycle of error correction is, \begin{equation} \begin{aligned} F_{\text{no detection}} = \frac{|c_0|^2}{|c_0|^2+|c_3|^2} &= \frac{\cos^6(\epsilon)}{\cos^6(\epsilon)+\sin^6(\epsilon)} \\ &\approx 1-\epsilon^6, \end{aligned} \end{equation} with probability $1-3\epsilon^2+O(\epsilon^4)$, and \begin{equation} \begin{aligned} F_{\text{error detected}} &= \frac{|c_1|^2}{|c_1|^2+|c_2|^2} \\ &= \frac{\cos^4(\epsilon)\sin^2(\epsilon)}{\cos^4(\epsilon)\sin^2(\epsilon)+\sin^4(\epsilon)\cos^2(\epsilon)} \\ &\approx 1-\epsilon^2, \end{aligned} \end{equation} with probability $3\epsilon^2 + O(\epsilon^4)$. This is the crux of how QEC suppresses errors at the logical level. During a round of error correction, if no error is detected (which, if the error rate is small, occurs with high probability), the error on the resulting state is suppressed from $O(\epsilon^2)$ to $O(\epsilon^6)$, while if a single error is detected, the fidelity of the resulting state remains the same. This is expected, as the 3-qubit code is a single error correcting code: if one error has already been corrected, then the failure rate of the logical system is conditional on experiencing one further error (which will be proportional to $\epsilon^2$). As $\epsilon \ll 1$, the majority of correction cycles will detect no error, and the fidelity of the resulting encoded state is higher than when unencoded. Note that as $\epsilon^2 \rightarrow 1/3$ the benefit of the code disappears, since every correction cycle detects an error and the resulting fidelity is no better than that of an unencoded qubit.
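These fidelity expressions are easy to sanity-check numerically. A minimal sketch (Python/NumPy, with a purely illustrative value of $\epsilon$):
\begin{verbatim}
import numpy as np

eps = 0.01
c0 = np.cos(eps)**3
c1 = 1j * np.cos(eps)**2 * np.sin(eps)
c2 = -np.cos(eps) * np.sin(eps)**2
c3 = -1j * np.sin(eps)**3

F_no_detect = abs(c0)**2 / (abs(c0)**2 + abs(c3)**2)
F_detect    = abs(c1)**2 / (abs(c1)**2 + abs(c2)**2)
p_no_detect = abs(c0)**2 + abs(c3)**2      # cos^6 + sin^6 ~ 1 - 3*eps^2

print(1 - F_no_detect, eps**6)             # ~1e-12 versus 1e-12
print(1 - F_detect, eps**2)                # ~1e-4  versus 1e-4
print(1 - p_no_detect, 3 * eps**2)         # ~3e-4  versus 3e-4
\end{verbatim}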
It should be stressed that {\bf no error correction scheme will, in general, restore a corrupted state to a perfectly clean code-state}. The resulting state will contain a superposition of a clean state and corrupted states; the point is that the fidelity of the corrupted states, at the logical level, is greater than the corresponding fidelity for unencoded qubits. Consequently, the probability of measuring the correct result at the end of a specific algorithm increases when the system is encoded. This example shows the basic principles of error correction. As mentioned earlier, the 3-qubit code does not represent a full quantum code, and the error model that we considered neglected imperfect gates and the possibility of errors occurring during state preparation and/or correction. In the coming sections we will briefly take a look at several full quantum codes, used both for quantum memory and computation, and we will introduce the concept of full QEC using stabilizer codes. This will then lead to a description of full fault-tolerant quantum error correction.

\section{The Nine Qubit Code: The First Full Quantum Code} The nine qubit error correcting code was first developed by Shor~\cite{S95} in 1995 and is based largely on the 3-qubit repetition code. The Shor code is a degenerate single error correcting code, able to correct a logical qubit from one discrete bit flip, one discrete phase flip, or one of each, on any of the nine physical qubits; it is therefore sufficient to correct for any continuous linear combination of errors on a single qubit. The two basis states for the code are, \begin{equation} \begin{aligned} \ket{0}_L = \frac{1}{\sqrt{8}}(\ket{000}+\ket{111})(\ket{000}+\ket{111})(\ket{000}+\ket{111}), \\ \ket{1}_L = \frac{1}{\sqrt{8}}(\ket{000}-\ket{111})(\ket{000}-\ket{111})(\ket{000}-\ket{111}), \\ \end{aligned} \end{equation} and the circuit to perform the encoding is shown in Fig.~\ref{fig:9encode}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\textwidth]{9encode.pdf} \caption{Circuit required to encode a single qubit with Shor's nine qubit code.} \label{fig:9encode} \end{center} \end{figure} Correction for $X$ errors, for each block of three qubits encoded to $(\ket{000}\pm \ket{111})/\sqrt{2}$, is identical to the three qubit code shown earlier. By performing the correction circuit shown in Fig.~\ref{fig:3qubit2} for each block of three qubits, single $\sigma_x \equiv X$ errors can be detected and corrected. Phase errors ($\sigma_z \equiv Z$) are corrected by examining the sign differences between the three blocks. The circuit shown in Fig.~\ref{fig:9qubit2} achieves this. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{9qubit2.pdf} \caption{Circuit required to perform phase correction for the 9-qubit code. } \label{fig:9qubit2} \end{center} \end{figure*} The first set of six CNOT gates compares the sign of blocks one and two of the code state, and the second set of CNOT gates compares the sign of blocks two and three. Note that a phase flip on {\em any} one qubit in a block of three has the same effect; this is why the 9-qubit code is referred to as a degenerate code. In other error correcting codes, such as the 5- or 7-qubit codes~\cite{S96,LMPZ96}, there is a one-to-one mapping between correctable errors and unique states; in degenerate codes such as this, the mapping is not unique. Hence, provided we know in which block the error occurs, it does not matter which qubit we apply the correction operator to. As the 9-qubit code can correct for a single $X$ error in any one block of three and a single phase error on any of the nine qubits, this code is a full quantum error correcting code (we will detail in section~\ref{sec:sec:decoherence} why phase and bit correction is sufficient for the correction of arbitrary qubit errors). Even if a bit {\em and} phase error occur on the same qubit, the $X$ correction circuit will detect and correct for bit flips while the $Z$ correction circuit will detect and correct for phase flips. As mentioned, the $X$ error correction does have the ability to correct for up to three individual bit flips (provided each bit flip occurs in a different block of three). However, in general the 9-qubit code is only a single error correcting code, as it cannot handle multiple errors if they occur in certain locations. The 9-qubit code is in fact a member of a broader class of error correcting codes known as Bacon-Shor or subsystem codes~\cite{B06}. Subsystem codes have the property that certain subgroups of error operators do not corrupt the logical space. This can be seen by considering phase errors that occur in pairs within any block of three. For example, a phase flip on qubits one, two, four and five will leave both logical states unchanged.
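This invariance is easy to verify directly on a single block, since the codewords are products of three-qubit blocks. A minimal sketch (Python/NumPy, illustrative only) checks that a pair of $Z$ errors inside one block acts trivially on $(\ket{000}+\ket{111})/\sqrt{2}$, while a single $Z$ does not:
\begin{verbatim}
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

def kron(*ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

block = np.zeros(8, dtype=complex)
block[0b000], block[0b111] = 1 / np.sqrt(2), 1 / np.sqrt(2)  # (|000>+|111>)/sqrt(2)

print(np.allclose(kron(Z, Z, I) @ block, block))  # True: paired phase flips act trivially
print(np.allclose(kron(Z, I, I) @ block, block))  # False: a single phase flip
                                                  # changes the sign of |111>
\end{verbatim}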
Subsystem codes are very nice codes from an architectural point of view. Error correction circuits and gates are generally simpler than for non-subsystem codes, allowing for circuit structures more amenable to the physical restrictions of a computer architecture~\cite{AC07}. Additionally, as subsystem codes that can correct for a larger number of errors share a similar structure, dynamical switching between codes can be performed in a fault-tolerant manner, allowing the error protection in the computer to be adapted to the noise present at the physical level~\cite{SEDH07}. We will return to subsystem codes in section~\ref{sec:subsystem}.

\section{Quantum Error Detection} \label{sec:detection} So far we have focused on the ability not only to detect errors, but also to correct them. Another approach is to not enforce the correction requirement. Post-selected quantum computation, developed by Knill~\cite{K05}, demonstrated that large scale quantum computing could be achieved with much higher noise rates when error detection is employed instead of more costly correction protocols. The basic idea in post-selected schemes is to encode the computer with error detecting circuits; if errors are detected, the relevant subroutine of the quantum algorithm is reset and run again, instead of performing active correction. One of the downsides to these types of schemes is that, although they lead to large tolerable error rates, the resource requirements are unrealistically high. The simplest error detecting code is the 4-qubit code~\cite{GBP97}. This encodes two logical qubits into four physical qubits, with the ability to detect a single error on either of the two logical qubits. The four basis states for the code are, \begin{equation} \begin{aligned} &\ket{00} = \frac{1}{\sqrt{2}}(\ket{0000}+\ket{1111}), \\ &\ket{01} = \frac{1}{\sqrt{2}}(\ket{1100}+\ket{0011}), \\ &\ket{10} = \frac{1}{\sqrt{2}}(\ket{1010}+\ket{0101}), \\ &\ket{11} = \frac{1}{\sqrt{2}}(\ket{0110}+\ket{1001}). \end{aligned} \end{equation} Fig.~\ref{fig:4qubit} illustrates the error detection circuit that can be utilized to detect a single bit and/or phase flip on one of these encoded qubits. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{4qubit.pdf} \caption{Circuit required to detect errors in the 4-qubit error detection code. If both ancilla measurements return $\ket{0}$, then the code state is error free. If either measurement returns $\ket{1}$, an error has occurred. Unlike the 9-qubit code, the detection of an error does not give sufficient information to correct the state.} \label{fig:4qubit} \end{center} \end{figure*}
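Anticipating the stabilizer language of the next section, the two parity checks of this code correspond to the operators $XXXX$ and $ZZZZ$ (a standard description of the 4-qubit code; the circuit of Fig.~\ref{fig:4qubit} is one way of measuring them). A minimal numerical sketch (Python/NumPy, illustrative only) shows that a bit flip on {\em any} single qubit flips the $ZZZZ$ eigenvalue, so the error is flagged but its location is not revealed:
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

def kron(*ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

XXXX, ZZZZ = kron(X, X, X, X), kron(Z, Z, Z, Z)

psi = np.zeros(16, dtype=complex)
psi[0b0000], psi[0b1111] = 1 / np.sqrt(2), 1 / np.sqrt(2)   # logical |00>

def checks(state):
    """Eigenvalues of the two parity checks (+1 means no error flagged)."""
    return (np.vdot(state, XXXX @ state).real,
            np.vdot(state, ZZZZ @ state).real)

print(checks(psi))                        # (1.0, 1.0): clean state
for k in range(4):                        # a bit flip on any single qubit
    err = [I, I, I, I]; err[k] = X
    print(checks(kron(*err) @ psi))       # (1.0, -1.0) for every k: detected,
                                          # but the location is indistinguishable
\end{verbatim}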
If a single bit and/or phase flip occurs on one of the four qubits, then one of the ancilla qubits will be measured in the $\ket{1}$ state. For example, let us consider the cases where a single bit flip occurs on each of the four qubits. The state of the system, just prior to the measurement of the ancilla, is shown in Table~\ref{tab:errors2}. \begin{table}[ht!] \begin{center} \vspace*{4pt} \begin{tabular}{c|c} Error Location & Final State, $\ket{\text{data}}\ket{\text{ancilla}}$ \\ \hline No Error & $\ket{\psi}_L\ket{00}$ \\ Qubit 1 & $X_1\ket{\psi}_L\ket{10}$ \\ Qubit 2 & $X_2\ket{\psi}_L\ket{10}$ \\ Qubit 3 & $X_3\ket{\psi}_L\ket{10}$ \\ Qubit 4 & $X_4\ket{\psi}_L\ket{10}$ \\ \end{tabular} \caption{Qubit and ancilla state, just prior to measurement, for the 4-qubit error detection code when a single bit-flip has occurred on at most one of the four qubits.} \label{tab:errors2} \end{center} \end{table} Regardless of the location of the bit flip, the ancilla system is measured in the state $\ket{10}$. Similarly, if one considers a single phase error on any of the four qubits, the ancilla measurement will return $\ket{01}$. In both cases no information is obtained regarding {\em where} the error has occurred; hence it is not possible to correct the state. Instead, the subroutine can be reset and re-run.

\section{Stabilizer Formalism} \label{sec:sec:QEC} So far we have presented error correcting codes from the perspective of their state representations and their preparation and correction circuits. This is a rather inefficient method for describing codes, as the state representations and circuits clearly differ from code to code. The majority of error correcting codes used within the literature are members of a class known as stabilizer codes. Stabilizer codes are very useful to work with: the general formalism applies broadly, and there exist general rules to construct preparation circuits, correction circuits and fault-tolerant logical gate operations once the stabilizer structure of the code is specified. The stabilizer formalism, first introduced by Daniel Gottesman~\cite{G97+}, essentially uses the Heisenberg representation of quantum mechanics, which describes quantum states in terms of operators rather than the basis states themselves. An arbitrary state $\ket{\psi}$ is defined to be stabilized by some operator, $K$, if it is a $+1$ eigenstate of $K$, i.e. \begin{equation} K\ket{\psi} = \ket{\psi}. \end{equation} For example, the single qubit state $\ket{0}$ is stabilized by the operator $K = \sigma_z$, i.e. \begin{equation} \sigma_z\ket{0} = \ket{0}. \end{equation} Defining multi-qubit states within this formalism relies on the group structure of multi-qubit operators. Within the group of all possible single qubit operators there exists a subgroup, denoted the Pauli group, $\mathcal{P}$, which contains the following elements, \begin{equation} \mathcal{P} = \{\pm \sigma_I, \pm i \sigma_I, \pm \sigma_x, \pm i \sigma_x,\pm \sigma_y, \pm i \sigma_y,\pm \sigma_z, \pm i \sigma_z\}. \end{equation} It is easy to check that these matrices form a group under multiplication through the commutation and anti-commutation rules for the Pauli set, $\{\sigma_i \} = \{ \sigma_x,\sigma_y,\sigma_z\}$, \begin{equation} [\sigma_i,\sigma_j] = 2i\epsilon_{ijk}\sigma_k, \quad \quad \{\sigma_i,\sigma_j\} = 2\delta_{ij}, \end{equation} where, \begin{equation} \epsilon_{ijk} = \Bigg \{ \begin{array}{l} +1\text{ for } (i,j,k) \in \{(1,2,3), (2,3,1), (3,1,2)\}\\ -1 \text{ for } (i,j,k) \in \{(1,3,2), (3,2,1), (2,1,3)\}\\ 0 \text{ for } i=j, j=k, \text{ or } k=i \end{array} \end{equation} and \begin{equation} \delta_{ij} = \Bigg \{ \begin{array}{cr} 1\text{ for } i = j\\ 0 \text{ for } i \neq j. \end{array} \end{equation} The Pauli group extends over $N$ qubits by simply taking the $N$-fold tensor product of $\mathcal{P}$, i.e.
\begin{equation} \begin{aligned} \mathcal{P}_N &= \mathcal{P}^{\otimes N} \\ &= \{\pm \sigma_I, \pm i \sigma_I, \pm \sigma_x, \pm i \sigma_x,\pm \sigma_y, \pm i \sigma_y,\pm \sigma_z, \pm i \sigma_z\}^{\otimes N}. \end{aligned} \end{equation} An $N$-qubit stabilizer state, $\ket{\psi}_N$, is then defined via an Abelian subgroup, $\mathcal{G}$, of the $N$-qubit Pauli group, generated by $N$ independent elements, such that $\ket{\psi}_N$ is a $+1$ eigenstate of each element, \begin{equation} \begin{aligned} \mathcal{G} &= \\ &\{\; G_i \;|\; G_i\ket{\psi} = \ket{\psi}, \; [G_i,G_j] = 0 \; \forall \; (i,j) \} \subset \mathcal{P}_N. \label{eq:stabdef} \end{aligned} \end{equation} Given this definition, the state $\ket{\psi}_N$ can be equivalently defined either through the state vector representation {\em or} by specifying the stabilizer set, $\mathcal{G}$. Many extremely useful multi-qubit states are stabilizer states, including two-qubit Bell states, Greenberger-Horne-Zeilinger (GHZ) states~\cite{GHZ89,GHSZ90}, cluster states~\cite{BR01,RB01} and codeword states for QEC. As an example, consider a three qubit GHZ state, defined as, \begin{equation} \ket{\text{GHZ}}_3 = \frac{\ket{000} + \ket{111}}{\sqrt{2}}. \end{equation} This state can be expressed via any three linearly independent elements of the $\ket{\text{GHZ}}_3$ stabilizer group, for example, \begin{equation} \begin{aligned} G_1 &= \sigma_x\otimes \sigma_x \otimes \sigma_x \equiv XXX, \\ G_2 &= \sigma_z\otimes \sigma_z \otimes \sigma_I \equiv ZZI, \\ G_3 &= \sigma_I \otimes \sigma_z \otimes \sigma_z \equiv IZZ, \end{aligned} \end{equation} where the right-hand side of each equation is the short-hand representation of the stabilizers. Note that these three operators form an Abelian group [Eq.~\ref{eq:stabdef}], as, \begin{equation} \begin{aligned} [G_i,G_j]\ket{\psi} &= G_iG_j\ket{\psi} - G_jG_i\ket{\psi} \\ &= \ket{\psi}-\ket{\psi} = 0, \quad \forall \quad (i,j). \end{aligned} \end{equation} Similarly, the four orthogonal Bell states, \begin{equation} \begin{aligned} \ket{\Phi^{\pm}} &= \frac{\ket{00} \pm \ket{11}}{\sqrt{2}}, \\ \ket{\Psi^{\pm}} &= \frac{\ket{01} \pm \ket{10}}{\sqrt{2}}, \end{aligned} \end{equation} are stabilized by the operators $G_1 = (-1)^aXX$ and $G_2 = (-1)^b ZZ$, where $a,b \in \{0,1\}$ and each of the four Bell states corresponds to one of the four unique pairs, $\{\Phi^+,\Psi^+,\Phi^-,\Psi^-\} = \{ [0,0],[0,1],[1,0],[1,1]\}$. \section{QEC with Stabilizer Codes}\label{sec:sec:QEC2} The stabilizer formalism is extremely useful for describing quantum error correction codes, since it allows for easy synthesis of correction circuits and also clearly shows how logical operations can be performed directly on encoded data. As an introduction we will focus on arguably the most well known quantum code, the 7-qubit Steane code, first proposed in 1996~\cite{S96}. The 7-qubit code represents a full quantum code that encodes one logical qubit into seven physical qubits, with the ability to correct for a single $X$ and/or $Z$ error. The $\ket{0}_L$ and $\ket{1}_L$ basis states are defined as, \begin{widetext} \begin{equation} \begin{aligned} |0\rangle_L = \frac{1}{\sqrt{8}}(&|0000000\rangle + |1010101\rangle + |0110011\rangle + |1100110\rangle + |0001111\rangle + |1011010\rangle + |0111100\rangle + |1101001\rangle),\\ |1\rangle_L = \frac{1}{\sqrt{8}}(&|1111111\rangle + |0101010\rangle + |1001100\rangle + |0011001\rangle + |1110000\rangle + |0100101\rangle + |1000011\rangle + |0010110\rangle).
\label{eq:log} \end{aligned} \end{equation} \end{widetext} The stabilizer set for the 7-qubit code is fully specified by the six operators, \begin{equation} \begin{aligned} &K^1 = IIIXXXX, \quad \quad K^2 = XIXIXIX,\\ &K^3 = IXXIIXX, \quad \quad K^4 = IIIZZZZ, \\ &K^5 = ZIZIZIZ, \quad \quad K^6 = IZZIIZZ. \end{aligned} \label{eq:stab7} \end{equation} As the 7-qubit codeword states are specified by only six stabilizers, the stabilized subspace is two-dimensional; the two basis states of this subspace are the logical states. A final operator, $K^7 = ZZZZZZZ=Z^{\otimes 7}$, fixes the state to one of the codewords, with $K^7\ket{0}_L = \ket{0}_L$ and $K^7\ket{1}_L = -\ket{1}_L$. The 7-qubit code is defined as a $[[n,k,d]] = [[7,1,3]]$ quantum code, where $n=7$ physical qubits encode $k=1$ logical qubit with a distance $d=3$ between basis states, correcting $t = (d-1)/2 = 1$ error. Notice that the stabilizer set separates into $X$ and $Z$ sectors, which defines the code as a Calderbank-Shor-Steane (CSS) code. CSS codes are extremely useful since they allow for straightforward logical gate operations to be applied directly to the encoded data [Section~\ref{sec:operations}] and are reasonably easy to derive from classical codes. Although the 7-qubit code is the most well known stabilizer code, there are other stabilizer codes which encode multiple logical qubits and correct for more errors~\cite{G97+}. The downside to these larger codes is that they require more physical qubits and more complicated error correction circuits. Tables~\ref{tab:9qubit} and~\ref{tab:5qubit} show the stabilizer structure of two other well known codes: the 9-qubit code~\cite{S95}, which we have already examined, and the 5-qubit code~\cite{LMPZ96}, which represents the smallest possible quantum code that corrects for a single error. \begin{table}[ht] \begin{center} \vspace*{4pt} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} $K^1$ & $Z$&$Z$&$I$&$I$&$I$&$I$&$I$&$I$&$I$ \\ $K^2$ & $Z$&$I$&$Z$&$I$&$I$&$I$&$I$&$I$&$I$ \\ $K^3$ & $I$&$I$&$I$&$Z$&$Z$&$I$&$I$&$I$&$I$ \\ $K^4$ & $I$&$I$&$I$&$Z$&$I$&$Z$&$I$&$I$&$I$ \\ $K^5$ & $I$&$I$&$I$&$I$&$I$&$I$&$Z$&$Z$&$I$ \\ $K^6$ & $I$&$I$&$I$&$I$&$I$&$I$&$Z$&$I$&$Z$ \\ $K^7$ & $X$&$X$&$X$&$X$&$X$&$X$&$I$&$I$&$I$ \\ $K^8$ & $X$&$X$&$X$&$I$&$I$&$I$&$X$&$X$&$X$ \\ \end{tabular} \caption{The eight stabilizers for the 9-qubit Shor code, encoding one logical qubit into nine physical qubits to correct for a single $X$ and/or $Z$ error.} \label{tab:9qubit} \end{center} \end{table} \begin{table}[ht] \begin{center} \vspace*{4pt} \begin{tabular}{c|c|c|c|c|c} $K^1$ & $X$&$Z$&$Z$&$X$&$I$ \\ $K^2$ & $I$&$X$&$Z$&$Z$&$X$ \\ $K^3$ & $X$&$I$&$X$&$Z$&$Z$ \\ $K^4$ & $Z$&$X$&$I$&$X$&$Z$ \\ \end{tabular} \caption{The four stabilizers for the [[5,1,3]] quantum code, encoding one logical qubit into five physical qubits to correct for a single $X$ and/or $Z$ error. Unlike the 7- and 9-qubit codes, the [[5,1,3]] code is a non-CSS code, since the stabilizer set does not separate into $X$ and $Z$ sectors.} \label{tab:5qubit} \end{center} \end{table} \subsection{State Preparation} Using the stabilizer structure of QEC codes, the logical state preparation and error correction procedures are straightforward. Recall that the codeword states are defined as $+1$ eigenstates of the stabilizer set. In order to prepare a logical state from some arbitrary input, we need to forcibly project qubits into eigenstates of these operators. Consider the circuit shown in Fig.~\ref{fig:opmeas}.
\begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{opmeas.pdf} \caption{Quantum circuit required to project an arbitrary state, $\ket{\psi}_I$, into a $\pm 1$ eigenstate of the Hermitian operator, $U = U^{\dagger}$. The measurement result of the ancilla determines which eigenstate $\ket{\psi}_I$ is projected to.} \label{fig:opmeas} \end{center} \end{figure} For some arbitrary input state, $\ket{\psi}_I$, an ancilla which is initialized in the $\ket{0}$ state is used as a control qubit for a Hermitian operation ($U^{\dagger} = U$) on $\ket{\psi}_I$. After the second Hadamard gate is performed, the state of the system is, \begin{equation} \ket{\psi}_F = \frac{1}{2} ( \ket{\psi}_I + U\ket{\psi}_I)\ket{0} + \frac{1}{2}(\ket{\psi}_I - U\ket{\psi}_I)\ket{1}. \end{equation} The ancilla qubit is then measured in the computational basis. If the result is $\ket{0}$, the input state is projected to (neglecting normalization), \begin{equation} \ket{\psi}_F = \ket{\psi}_I+U\ket{\psi}_I. \end{equation} Since $U$ is both Hermitian and unitary, $U^2 = I$, hence $U\ket{\psi}_F=\ket{\psi}_F$ and $\ket{\psi}_F$ is a $+1$ eigenstate of $U$. If the ancilla is measured to be $\ket{1}$, then the input is projected to the state, \begin{equation} \ket{\psi}_F = \ket{\psi}_I-U\ket{\psi}_I, \end{equation} which is the $-1$ eigenstate of $U$. Therefore, provided $U$ is Hermitian, the general circuit of Fig.~\ref{fig:opmeas} will project an arbitrary input state to a $\pm 1$ eigenstate of $U$. This procedure is well known and is referred to as either a ``parity'' or ``operator'' measurement~\cite{NC00}. From this construction it should be clear how QEC state preparation proceeds. Taking the $[[7,1,3]]$ code as an example, seven qubits are first initialized in the state $\ket{0}^{\otimes 7}$, after which the circuit shown in Fig.~\ref{fig:opmeas} is applied three times with $U = (K^1,K^2,K^3)$, projecting the input state into a simultaneous $\pm 1$ eigenstate of each $X$ stabilizer describing the $[[7,1,3]]$ code. The result of each operator measurement is then used to classically control a single qubit $Z$ gate which is applied to one of the seven qubits at the end of the preparation. This single $Z$ gate converts any $-1$ projected eigenstates into $+1$ eigenstates. Notice that the final three stabilizers do not need to be measured, as the input state, $\ket{0}^{\otimes 7}$, is already a $+1$ eigenstate of $(K^4,K^5,K^6)$. Fig.~\ref{fig:7qubitprep} illustrates the final circuit, where instead of one ancilla, three are utilized to speed up the state preparation by performing each operator measurement in parallel. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{7qubitprep.pdf} \caption{Quantum circuit to prepare the $[[7,1,3]]$ logical $\ket{0}$ state. The input state $\ket{0}^{\otimes 7}$ is projected into an eigenstate of each of the $X$ stabilizers shown in Eq.~\ref{eq:stab7}. After each ancilla measurement the classical results, $M_k \in \{0,1\}$, are used to apply a single qubit $Z$ gate to qubit $i = M_2+2M_3+4M_1$ (no $Z$ gate is required when $i=0$), which converts the state from a $-1$ eigenstate of $(K^1,K^2,K^3)$ to a $+1$ eigenstate.} \label{fig:7qubitprep} \end{center} \end{figure} As a quick aside, let us detail exactly how the relevant logical basis states can be derived from the stabilizer structure of the code by utilizing the preparation circuit illustrated above. Instead of the 7-qubit code, we will use the stabilizer set shown in Table~\ref{tab:5qubit} to calculate the $\ket{0}_L$ state for the 5-qubit code.
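Before stepping through the algebra, note that this projection is also easy to carry out numerically. The following is a minimal sketch in Python with \texttt{numpy} (illustrative only, with the stabilizer strings read directly from Table~\ref{tab:5qubit}); it projects $\ket{00000}$ into the mutual $+1$ eigenspace of the four stabilizers.

\begin{verbatim}
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
PAULI = {"I": I, "X": X, "Z": Z}

def pauli(s):
    # Tensor product of single-qubit Paulis, e.g. "XZZXI".
    return reduce(np.kron, [PAULI[c] for c in s])

stabilizers = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

# Start from |00000>, already a +1 eigenstate of Z...Z.
state = np.zeros(2**5)
state[0] = 1.0

# Project into the +1 eigenspace of each stabilizer: (I + K)/2.
for s in stabilizers:
    state = (state + pauli(s) @ state) / 2
state /= np.linalg.norm(state)

# List the computational basis kets appearing in |0>_L.
for i, amp in enumerate(state):
    if abs(amp) > 1e-12:
        print(f"{amp:+.2f} |{i:05b}>")
\end{verbatim}

The sixteen printed kets, each with amplitude $\pm 1/4$, reproduce the expansion derived below.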
The four code stabilizers are given by, \begin{equation} \begin{aligned} &K^1 = XZZXI, \quad \quad K^2 = IXZZX,\\ &K^3 = XIXZZ, \quad \quad K^4 = ZXIXZ. \end{aligned} \end{equation} As with the 7-qubit code, projecting an arbitrary state into a $+1$ eigenstate of these operators defines the two logical basis states, $\ket{0}_L$ and $\ket{1}_L$, with the operator $\bar{Z} = ZZZZZ$ fixing the state to either $\ket{0}_L$ or $\ket{1}_L$. Therefore, calculating $\ket{0}_L$ from some initial un-encoded state requires us to project the initial state into a $+1$ eigenstate of these operators. If we take the initial, un-encoded state as $\ket{00000}$, then it is already a $+1$ eigenstate of $\bar{Z}$. Therefore, to find $\ket{0}_L$ we simply calculate, \begin{equation} \begin{aligned} \ket{0}_L&=\prod_{i=1}^4 (I^{\otimes 5} + K^i)\ket{00000}, \end{aligned} \end{equation} up to normalization. Expanding out this product, we find, \begin{equation} \begin{aligned} \ket{0}_L = \frac{1}{4}( &\ket{00000}+\ket{01010}+\ket{10100}-\ket{11110}+\\ &\ket{01001}-\ket{00011}-\ket{11101}-\ket{10111}+\\ &\ket{10010}-\ket{11000}-\ket{00110}-\ket{01100}-\\ &\ket{11011}-\ket{10001}-\ket{01111}+\ket{00101}). \end{aligned} \end{equation} Note that the above state vector does not match the one given in~\cite{LMPZ96}. However, the two vectors are equivalent up to local rotations on each qubit; recovering the original state simply requires locally rotating the stabilizer set to reflect these rotations. \subsection{Error Correction} Error correction using stabilizer codes is a straightforward extension of state preparation. Consider an arbitrary single qubit state that has been encoded, \begin{equation} \alpha\ket{0} + \beta\ket{1} \rightarrow \alpha\ket{0}_L + \beta\ket{1}_L = \ket{\psi}_L. \end{equation} Now assume that an error occurs on one (or multiple) qubits, described via the operator $E$, where $E$ is a combination of $X$ and/or $Z$ errors over the $N$ physical qubits of the logical state. By definition of stabilizer codes, $K^i\ket{\psi}_L = \ket{\psi}_L$, $i \in \{1,\ldots,N-k\}$, for a code encoding $k$ logical qubits. Hence the erred state, $E\ket{\psi}_L$, satisfies, \begin{equation} K^iE\ket{\psi}_L = (-1)^m EK^i\ket{\psi}_L = (-1)^m E\ket{\psi}_L, \end{equation} where $m=0$ if $[E,K^i]=0$ and $m=1$ if $\{E,K^i\} = 0$. Therefore, if the error operator commutes with the stabilizer, the state remains a $+1$ eigenstate of $K^i$; if the error operator anti-commutes with the stabilizer, then the logical state flips to a $-1$ eigenstate of $K^i$. Hence the general procedure for error correction is identical to state preparation. Each of the code stabilizers is sequentially measured. Since an error-free state is already a $+1$ eigenstate of all the stabilizers, any error which anti-commutes with a stabilizer will flip the eigenstate and consequently the parity measurement will return a result of $\ket{1}$. Taking the $[[7,1,3]]$ code as an example, you can see that if the error operator is $E = X_i$, where $i \in \{1,\ldots,7\}$, representing a bit-flip on any {\em one} of the 7 physical qubits, then regardless of the location, $E$ will anti-commute with a unique combination of $(K^4,K^5,K^6)$. Hence the classical results of measuring these three operators will indicate if and where a single $X$ error has occurred. Similarly, if $E=Z_i$, then the error operator will anti-commute with a unique combination of $(K^1,K^2,K^3)$.
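This unique anticommutation pattern can be tabulated directly: a single $X$ on qubit $i$ anticommutes with a $Z$-type stabilizer exactly when that stabilizer acts as $Z$ on qubit $i$. A minimal sketch (Python; a symbolic check rather than a circuit simulation, with the stabilizer strings taken from Eq.~\ref{eq:stab7}):

\begin{verbatim}
# Z-type stabilizers of the [[7,1,3]] code.
z_stabs = {"K4": "IIIZZZZ", "K5": "ZIZIZIZ", "K6": "IZZIIZZ"}

for i in range(7):
    # Bit k of the syndrome flips iff stabilizer k has a Z at
    # position i, i.e. iff it anticommutes with X on qubit i.
    syndrome = "".join("1" if s[i] == "Z" else "0"
                       for s in z_stabs.values())
    print(f"X on qubit {i + 1}: syndrome (K4 K5 K6) = {syndrome}")
\end{verbatim}

Each of the seven error locations yields a distinct non-zero three-bit syndrome, which is precisely the classical information used to locate and correct the error.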
Consequently, the first three stabilizers for the $[[7,1,3]]$ code correspond to $Z$ sector correction, while the second three stabilizers correspond to $X$ sector correction. Note that correction for Pauli $Y$ errors is also taken care of by correcting in the $X$ and $Z$ sectors, since a $Y$ error on a single qubit is equivalent to both an $X$ and $Z$ error on the same qubit, i.e. $Y = iXZ$. Fig.~\ref{fig:correct} illustrates the circuit for full error correction with the $[[7,1,3]]$ code. As you can see, it is simply an extension of the preparation circuit [Fig.~\ref{fig:7qubitprep}] where all six stabilizers are measured across the data block. \begin{figure*}[ht] \begin{center} \includegraphics[width=\textwidth]{7qubitcorr.pdf} \caption{Quantum circuit to correct for a single $X$ and/or $Z$ error using the $[[7,1,3]]$ code. Each of the 6 stabilizers is measured, with the first three detecting and correcting for $Z$ errors, while the last three detect and correct for $X$ errors.} \label{fig:correct} \end{center} \end{figure*} Even though we have specifically used the $[[7,1,3]]$ code as an example, the procedure for error correction and state preparation is identical for all stabilizer codes, allowing for full correction of both bit and phase errors without obtaining any information regarding the state of the logical qubit. \section{Digitization of Quantum Errors}\label{sec:sec:decoherence} Up until now we have remained fairly abstract regarding the analysis of quantum errors. Specifically, we have examined QEC from the standpoint of a discrete set of Pauli errors occurring at certain locations within a larger quantum circuit. In this section we examine how this analysis of errors relates to more realistic processes such as environmental decoherence and systematic gate errors. Digitization of quantum noise is often assumed when examining the stability of quantum circuit design or when calculating thresholds for concatenated error correction. However, the equivalence of discrete Pauli errors to more general, continuous noise only makes sense when we consider the stabilizer nature of the correction procedure. Recall from Section~\ref{sec:sec:QEC} that correction is performed by re-projecting a potentially corrupt data block into $+1$ eigenstates of the stabilizer set. It is this process that acts to digitize quantum noise, since a general continuous mapping from a ``clean'' codeword state to a corrupt one will not satisfy the stabilizer conditions. We will first introduce how coherent systematic errors, caused by imperfect implementation of quantum gates, are digitized during correction, after which we will briefly discuss environmental decoherence from the standpoint of the Markovian decoherence model. \subsection{Systematic gate errors} We have already shown an example of how systematic gate errors are digitized into a discrete set of Pauli operators in Sec.~\ref{sec:error}. However, in that case we only considered a very restrictive type of error, namely the coherent operator $U=\exp(i\epsilon X)$. We can easily extend this analysis to cover all forms of systematic gate errors. Consider an $N$ qubit unitary operation, $U_N$, which is valid on encoded data. Assume that $U_N$ is applied inaccurately such that the resultant operation is actually $U_N'$.
Given a general encoded state $\ket{\psi}_L$, the final state can be expressed as, \begin{equation} U_N' \ket{\psi}_L = U_E U_N \ket{\psi}_L = \sum_j \alpha_j E_j \ket{\psi'}_L, \end{equation} where $\ket{\psi'}_L = U_N\ket{\psi}_L$ is the state after the perfectly applied $N$ qubit gate (i.e. the stabilizer set for $\ket{\psi'}_L$ remains invariant under the operation $U_N$ [see Sec.~\ref{sec:operations}]), and $U_E$ is a coherent error operator which is expanded in terms of the $N$ qubit Pauli group, $E_j \in \mathcal{P}_N$. Now append two freshly initialized ancilla blocks, $\ket{A_0}^X$ and $\ket{A_0}^Z$, which are used for $X$ and $Z$ sector correction, and run a full error correction cycle, which we represent by the unitary operator, $U_{\text{QEC}}$. It will be assumed that $\ket{\psi}_L$ is encoded with a {\em hypothetical} QEC code which can correct for $N$ errors (both $X$ and/or $Z$), hence there is a one-to-one mapping between the error operators, $E_j$, and the orthogonal basis states of the ancilla blocks, \begin{equation} \begin{aligned} &U_{\text{QEC}}U_N'\ket{\psi}_L\ket{A_0}^X\ket{A_0}^Z \\ &= U_{\text{QEC}} \sum_j \alpha_jE_j\ket{\psi'}_L \ket{A_0}^X\ket{A_0}^Z \\ &= \sum_j \alpha_j E_j \ket{\psi'}_L\ket{A_j}^X\ket{A_j}^Z. \end{aligned} \end{equation} The ancilla blocks are then measured, projecting the data blocks into the state $E_j\ket{\psi'}_L$ with probability $|\alpha_j|^2$, after which the correction $E_j^{\dagger}$ is applied based on the syndrome result. As the error operation $E_j$ is simply an element of $\mathcal{P}_N$, correcting for $X$ and $Z$ independently is sufficient to correct for all error operators (as $Y$ errors are corrected when a bit and phase error is detected and corrected on the same qubit). For very small systematic inaccuracies, the expansion coefficient, $\alpha_0$, which corresponds to $E_0 = I^{\otimes N}$, will be very close to 1, with all other coefficients small. Hence during correction there will be a very high probability that no error is detected. This is the digitization effect of quantum error correction. Since codeword states are specific eigenstates of the stabilizers, the re-projection of the state when each stabilizer is measured forces any continuous noise operator to collapse to the discrete Pauli set, with the magnitude of the error dictating the probability that the data block is projected into a discrete perturbation of a ``clean'' state. \subsection{Environmental decoherence} A complete analysis of environmental decoherence in relation to quantum information is a lengthy topic. Rather than give a detailed review, we simply present a specific example to highlight how QEC relates to environmental effects. The Lindblad formalism~\cite{G91,NC00,DWM03} provides an elegant method for analyzing the effect of decoherence on open quantum systems. This model does have several assumptions, most notably that the environmental bath couples weakly to the system (Born approximation) and that the bath has no memory, so that each qubit experiences un-correlated noise (Markovian approximation). While these assumptions are utilized for a variety of systems~\cite{BHPC03,BM03,BKD04}, it is known that they may not hold in some cases~\cite{HMCS00,MCMS05,APNYT04,ALKH02}, particularly in superconducting systems, where decoherence can be caused by small numbers of fluctuating charges. In this case more specific decoherence models need to be considered.
Using this formalism, the evolution of the density matrix can be written as, \begin{equation} \partial_t \rho = -\frac{i}{\hbar} [H,\rho] + \sum_k \Gamma_k \mathcal{L}_k[\rho], \end{equation} where $H$ is the Hamiltonian, representing coherent, dynamical evolution of the system, and $\mathcal{L}_k[\rho]=([L_k,\rho L_k^{\dagger}]+[L_k\rho, L_k^{\dagger}])/2$ represents the incoherent evolution. The operators $L_k$ are known as the Lindblad quantum jump operators and are used to model specific decoherence channels, with each operator parameterized by some rate $\Gamma_k \geq 0$. This differential equation is known as the quantum Liouville equation or, more generally, the density matrix master equation. To link Markovian decoherence to QEC, consider a special set of decoherence channels that help to simplify the calculation, representing a single qubit undergoing dephasing, spontaneous emission and spontaneous absorption. Dephasing of a single qubit is modelled by the Lindblad operator $L_1 = Z$, while spontaneous emission/absorption are modelled by the operators $L_2 = \ket{0}\bra{1}$ and $L_3 = \ket{1}\bra{0}$ respectively. For the sake of simplicity we assume that absorption/emission occur at the same rate, $\Gamma$. Consequently, the density matrix evolution is given by, \begin{equation} \partial_t \rho = -\frac{i}{\hbar}[H,\rho] + \Gamma_z (Z\rho Z - \rho) + \frac{\Gamma}{2}(X\rho X + Y\rho Y -2\rho). \label{eq:diff} \end{equation} If it is assumed that the qubit is not undergoing any coherent evolution ($H = 0$), i.e. a memory stage within a quantum algorithm, then Eq.~\ref{eq:diff} can be solved by re-expressing the density matrix in the Bloch formalism. Set $\rho(t) = I/2 + x(t)X + y(t)Y + z(t)Z$; then Eq.~\ref{eq:diff}, with $H=0$, reduces to $\partial_t S(t) = AS(t)$, with $S(t) = (x(t),y(t),z(t))^T$ and \begin{equation} A = \begin{pmatrix} -(\Gamma + 2\Gamma_z) & 0 &0 \\ 0 &-(\Gamma + 2\Gamma_z) &0\\ 0 &0 &-2\Gamma \end{pmatrix}. \end{equation} This differential equation is easy to solve, leading to, \begin{equation} \begin{aligned} \rho(t) &= [1-p(t)]\rho(0) + p_x(t) X\rho(0) X \\ &+ p_y(t) Y \rho(0) Y + p_z(t) Z \rho(0) Z, \end{aligned} \end{equation} where, \begin{equation} \begin{aligned} p_x(t) &= p_y(t) = \frac{1}{4}(1-e^{-2\Gamma t}), \\ p_z(t) &= \frac{1}{4}(1+e^{-2\Gamma t}-2e^{-(\Gamma +2\Gamma_z)t}), \\ p(t) &= p_x(t) + p_y(t) + p_z(t). \end{aligned} \end{equation} If this single qubit is part of a QEC encoded data block, then each term represents a single error on the qubit experiencing decoherence. Two blocks of initialized ancilla qubits are added to the system and the error correction protocol run. Once the ancilla qubits are measured, the state will collapse to no error, with probability $1-p(t)$, or to a single $X$, $Y$ or $Z$ error, with probabilities $p_x(t)$, $p_y(t)$ and $p_z(t)$. We can also see how temporal effects are incorporated into the error correction model. For a fixed rate $\Gamma$, the temporal integration window $t$ of the master equation dictates how probable it is that an error is detected and corrected: the longer between correction cycles, the more probable it is that the qubit experiences an error. \subsection{More general mappings} Both the systematic gate errors and the errors induced by environmental decoherence illustrate the digitization effect of quantum error correction. However, we can quite easily generalize digitization to arbitrary mappings of the density matrix.
In this case consider a more general Kraus map on a multi-qubit density matrix, \begin{equation} \rho \rightarrow \sum_k A_k\rho A_k^{\dagger}, \end{equation} where $\sum_k A_k^{\dagger}A_k = I$. For the sake of simplicity let us choose a simple mapping where $A_1 = (Z_1+iZ_2)/\sqrt{2}$ and $A_k = 0$ for $k\neq 1$. This mapping essentially represents dephasing on two qubits. However, this type of mapping (when considered in the context of error correction) represents independent $Z$ errors on either qubit one or two. To illustrate, first expand out the density matrix (neglecting normalization), \begin{equation} \rho \rightarrow A_1\rho A_1^{\dagger} = Z_1\rho Z_1 + Z_2\rho Z_2 - iZ_1\rho Z_2 + iZ_2 \rho Z_1. \end{equation} Note that only the first two terms in this expansion, on their own, represent physical mixtures; the last two off-diagonal terms are actually irrelevant in the context of QEC and are removed during correction. To illustrate, we again assume that $\rho$ represents a protected qubit, where $Z_1$ and $Z_2$ are {\em physical} errors on qubits comprising the codeblock. As we are only considering phase errors in this example, we will ignore $X$ correction (but the analysis automatically generalizes if the error mapping contains $X$ terms). A fresh ancilla block, represented by the density matrix $\rho^z_0$, is coupled to the system and the unitary $U_{QEC}$ is run, \begin{widetext} \begin{equation} \begin{aligned} U_{QEC}^{\dagger}\rho'\otimes \rho^z_0 U_{QEC} = &U_{QEC}^{\dagger}Z_1\rho Z_1\otimes \rho^z_0 U_{QEC} + U_{QEC}^{\dagger}Z_2\rho Z_2\otimes \rho^z_0 U_{QEC}\\ - &iU_{QEC}^{\dagger}Z_1\rho Z_2\otimes \rho^z_0 U_{QEC} + iU_{QEC}^{\dagger}Z_2 \rho Z_1\otimes \rho^z_0 U_{QEC} \\ = &Z_1\rho Z_1 \otimes \ket{Z_1}\bra{Z_1} +Z_2\rho Z_2 \otimes \ket{Z_2}\bra{Z_2} -iZ_1\rho Z_2 \otimes \ket{Z_1}\bra{Z_2} +iZ_2\rho Z_1 \otimes \ket{Z_2}\bra{Z_1}, \end{aligned} \end{equation} \end{widetext} where $\rho' = A_1\rho A_1^{\dagger}$, and $\ket{Z_1}$ and $\ket{Z_2}$ represent the two orthogonal syndrome states of the ancilla that are used to detect phase errors on qubits one and two respectively. The important part of the above expression is that when the syndrome qubits are measured we are calculating $\text{Tr}(\rho \ket{Z_1}\bra{Z_1})$ or $\text{Tr}(\rho \ket{Z_2}\bra{Z_2})$; therefore the two cross terms in the above expression are never observed. In this mapping the only two possible states that exist after the measurement of the ancilla system are, \begin{equation} \begin{aligned} Z_1\rho Z_1 \otimes \ket{Z_1}\bra{Z_1} \quad &\text{with probability } \frac{1}{2}, \\ Z_2\rho Z_2 \otimes \ket{Z_2}\bra{Z_2} \quad &\text{with probability } \frac{1}{2}. \end{aligned} \end{equation} Therefore, not only are the cross terms eliminated via error correction, but the final density matrix again collapses to a single error perturbation of ``clean'' codeword states with no correlated errors. Consequently, in standard QEC analysis it is assumed that after each elementary gate operation, measurement, initialization and memory step, a hypothetical error correction cycle is run. This cycle digitizes all continuous errors (either systematic or environmental) into either an $X$ and/or $Z$ error on each qubit. This cycle is assumed to be error free and to take zero time. In this way error correction can be analyzed by assuming perfect gate operations and discrete, probabilistic errors. The probability of each error occurring can then be independently calculated via a systematic gate analysis or through the evolution of the master equation.
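The digitization of a coherent error is easy to make concrete for a single qubit. A minimal sketch (Python with \texttt{numpy}; purely illustrative) expands the coherent error $U = \exp(i\epsilon X)$ of Sec.~\ref{sec:error} in the Pauli basis and evaluates the probabilities with which a correction cycle projects the state onto the discrete error set.

\begin{verbatim}
import numpy as np

eps = 0.05  # small systematic over-rotation
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])

# Coherent error U = exp(i*eps*X) = cos(eps) I + i sin(eps) X.
U = np.cos(eps) * I + 1j * np.sin(eps) * X

# Pauli expansion coefficients: alpha_P = Tr(P U) / 2.
a_I = np.trace(I @ U) / 2
a_X = np.trace(X @ U) / 2

# Syndrome extraction collapses the state onto one term.
print(f"P(no error detected) = {abs(a_I)**2:.6f}")  # cos^2(eps)
print(f"P(X error corrected) = {abs(a_X)**2:.6f}")  # sin^2(eps)
\end{verbatim}

For $\epsilon = 0.05$ the cycle detects no error with probability $\cos^2\epsilon \approx 0.9975$, illustrating how a small coherent inaccuracy is converted into a rare, discrete Pauli error.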
\section{Fault-tolerant Quantum Error Correction and the Threshold Theorem}\label{sec:Fault-tolerance} Section~\ref{sec:sec:QEC} detailed the protocols required to correct for quantum errors; however, this implementation of QEC assumed the following, \begin{enumerate} \item Errors only occur during ``memory'' regions, i.e. when quantum operations or error correction are not being performed, and errors do not occur on ancilla qubits. \item The quantum gates themselves do not induce any systematic errors within the logical data block. \end{enumerate} Clearly these are two very unrealistic assumptions, and error correction procedures and logical gate operations need to be designed such that errors arising during these operations can still be corrected. \subsection{Fault-tolerance} The concept of fault-tolerance in computation is not a new idea; it was first developed in relation to classical computing~\cite{N55,G83,A87}. However, in recent years the precise manufacturing of digital circuitry has made large scale error correction and fault-tolerant circuits largely unnecessary. The basic principle of fault-tolerance is that the circuits used for gate operations and error correction procedures should not cause errors to cascade. This can be seen clearly when we look at a simple sequence of CNOT operations [Fig.~\ref{fig:CNOT}]. In this circuit we are performing a sequence of three CNOT gates which act to take the state $\ket{111}\ket{000} \rightarrow \ket{111}\ket{111}$. In Fig.~\ref{fig:CNOT}a we consider a single $X$ error which occurs on the top most qubit prior to the first CNOT. This single error will cascade through each of the three gates such that the $X$ error has now propagated to four qubits. Fig.~\ref{fig:CNOT}b shows a slightly modified design that implements the same operation, but the single $X$ error now only propagates to two of the six qubits. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.85\textwidth]{CNOT.pdf} \caption{Two circuits to implement the transformation $\ket{111}\ket{000} \rightarrow \ket{111}\ket{111}$. a) shows a version where a single $X$ error can cascade into four errors, while b) shows an equivalent circuit where the error only propagates to a second qubit.} \label{fig:CNOT} \end{center} \end{figure*} If we consider each block of three as a single logical qubit, then the staggered circuit will only induce a total of one error in each logical block, given a single $X$ error occurred somewhere during the gate operations. This is one of the standard definitions of fault-tolerance. {\em Fault-tolerant circuit element: A single error will cause \textbf{at most} one error in the output for each logical qubit block.} It should be stressed that fault-tolerance is a discrete definition: either a certain quantum operation is fault-tolerant or it is not. What is defined to be fault-tolerant can change depending on the error correction code used. For example, for a single error correcting code, the above definition is the only one available (since any more than one error in a logical qubit will result in the error correction procedure failing). However, if the quantum code employed is able to correct multiple errors, then the definition of fault-tolerance can be relaxed, i.e. if the code can correct three errors then circuits may be designed such that a single failure results in at most two errors in the output (which is then correctable).
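The cascade in Fig.~\ref{fig:CNOT} can be tracked with the simple rule that an $X$ error on the control of a CNOT copies itself onto the target. The sketch below (Python) assumes a particular reading of the figure, with circuit a) driving all three CNOTs from the first qubit and circuit b) using the staggered, transversal layout; both implement $\ket{111}\ket{000} \rightarrow \ket{111}\ket{111}$.

\begin{verbatim}
# An X error on the control of a CNOT propagates to the target
# (Z errors propagate in the opposite direction).
def propagate_x(error_qubits, cnots):
    err = set(error_qubits)
    for control, target in cnots:
        if control in err:
            err.add(target)
    return sorted(err)

# Circuit a): qubit 1 controls all three CNOTs (our reading of
# the figure); circuit b): the staggered, transversal version.
cascade = [(1, 4), (1, 5), (1, 6)]
staggered = [(1, 4), (2, 5), (3, 6)]

print(propagate_x([1], cascade))    # [1, 4, 5, 6] -- four errors
print(propagate_x([1], staggered))  # [1, 4]       -- two errors
\end{verbatim}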
In general, for a code correcting $t=\lfloor (d-1)/2 \rfloor$ errors, fault-tolerance requires that $\leq t$ errors during an operation do not result in $> t$ errors in the output for each logical qubit. \subsection{Threshold Theorem} \label{sec:threshold} The threshold theorem is truly a remarkable result in quantum information and is a consequence of fault-tolerant circuit design and the ability to perform dynamical error correction. Rather than present a detailed derivation of the theorem for a variety of noise models, we will instead take a very simple case where we utilize a quantum code that can only correct for a single error, using a model that assumes uncorrelated errors on individual qubits. For more rigorous derivations of the theorem see~\cite{AB97,G97+,A07}. Consider a quantum computer where each physical qubit experiences either an $X$ and/or $Z$ error independently with probability $p$, per gate operation. Furthermore, it is assumed that each logical gate operation and error correction circuit is designed according to the rules of fault-tolerance and that a cycle of error correction is performed after each elementary {\em logical} gate operation. If an error occurs during a logical gate operation, then fault-tolerance ensures this error will only propagate to at most one error in each block, after which a cycle of error correction will remove the error. Hence if the failure probability of un-encoded qubits per time step is $p$, then a single level of error correction will ensure that the logical step fails only when two (or more) errors occur. Hence the failure rate of each logical operation, to leading order, is now $p^1_L = cp^2$, where $p^1_L$ is the failure rate (per logical gate operation) of a 1st level logical qubit and $c$ is the upper bound for the number of possible 2-error combinations which can occur at a physical level within the circuit consisting of the correction cycle $+$ gate operation $+$ correction cycle~\cite{A07}. We now repeat the process, re-encoding the computer such that a level-2 logical qubit is formed, using the same $[[n,k,d]]$ quantum code, from $n$ level-1 encoded qubits. It is assumed that all error correcting procedures and gate operations at the 2nd level are self-similar to the level-1 operations (i.e. the circuit structures for the level-2 encoding are identical to the level-1 encoding). Therefore, if the level-1 failure rate per logical time step is $p^1_L$, then by the same argument, the failure rate of a level-2 operation is given by, $p^2_L = c(p^1_L)^2 = c^3p^4$. This iterative procedure is then repeated (referred to as concatenation) up to the $k$th level, such that the logical failure rate, per time step, of a $k$-level encoded qubit is given by, \begin{equation} p^k_L = \frac{(cp)^{2^k}}{c}. \label{eq:threshold} \end{equation} Eq.~\ref{eq:threshold} implies that for a finite {\em physical} error rate, $p$, per qubit, per time step, the failure rate of the $k$th-level encoded qubit can be made arbitrarily small by simply increasing $k$, provided $cp < 1$. This inequality defines the threshold. The physical error rate experienced by each qubit per time step must satisfy $p < p_{th} \equiv 1/c$ to ensure that multiple levels of error correction reduce the failure rate of logical components. Hence, provided sufficient resources are available, an arbitrarily large quantum circuit can be successfully implemented, to arbitrary accuracy, once the physical error rate is below threshold.
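Eq.~\ref{eq:threshold} is straightforward to evaluate. The sketch below (Python, with an illustrative, assumed value of $c$) tabulates the logical failure rate per level of concatenation for one physical error rate below threshold and one above.

\begin{verbatim}
# Concatenated logical failure rate: p_L^k = (c p)^(2^k) / c.
def logical_rate(p, c, k):
    return (c * p) ** (2 ** k) / c

c = 1e4  # illustrative 2-error combination count -> p_th = 1e-4
for p in (1e-5, 2e-4):
    side = "below" if c * p < 1 else "above"
    print(f"p = {p:g} ({side} threshold)")
    for k in range(1, 5):
        print(f"  level {k}: p_L = {logical_rate(p, c, k):.3e}")
\end{verbatim}

Below threshold the logical rate falls doubly exponentially with $k$; above threshold, concatenation actively makes things worse.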
The calculation of thresholds is therefore an extremely important aspect of quantum architecture design. Initial estimates of the threshold, which gave $p_{th} \approx 10^{-4}$~\cite{K97,AB97,G97+}, did not model physical systems sufficiently accurately. More recent estimates~\cite{SFH07,SDT07,SBFRYSGF06,MCTBCCC04,BSO05} consider more realistic quantum processor architectures, showing significant differences in threshold when architectural considerations are taken into account. \section{Fault-tolerant Operations on Encoded Data}\label{sec:operations} Sections~\ref{sec:sec:QEC} and~\ref{sec:Fault-tolerance} showed how fault-tolerant QEC allows for any quantum algorithm to be run to arbitrary accuracy. However, the results of the threshold theorem assume that logical operations can be performed directly on the encoded data without the need for continual decoding and re-encoding. Using stabilizer codes, a large class of operations can be performed on logical data in an inherently fault-tolerant way. If a given logical state, $\ket{\psi}_L$, is stabilized by $K$, and the logical operation $U$ is applied, the new state, $U\ket{\psi}_L$, is stabilized by $UKU^{\dagger}$, i.e., \begin{equation} UKU^{\dagger}U\ket{\psi}_L = UK\ket{\psi}_L = U\ket{\psi}_L. \end{equation} In order for the codeword states to remain valid, the stabilizer set for the code, $\mathcal{G}$, must remain fixed through every operation. Hence for $U$ to be a valid operation on the data, $U\mathcal{G}U^{\dagger} = \mathcal{G}$. \subsection{Single Qubit Operations} The logical $\bar{X}$ and $\bar{Z}$ operations on a single encoded qubit are the first examples of valid codeword operations. Taking the $[[7,1,3]]$ code as an example, $\bar{X}$ and $\bar{Z}$ are given by, \begin{equation} \bar{X} = XXXXXXX \equiv X^{\otimes 7}, \quad \bar{Z} = ZZZZZZZ \equiv Z^{\otimes 7}. \label{eq:logop} \end{equation} Since the single qubit Pauli operators satisfy $XZX = -Z$ and $ZXZ = -X$, we have $\bar{X}K^{i}\bar{X} = K^{i}$ and $\bar{Z}K^{i}\bar{Z} = K^{i}$ for each of the $[[7,1,3]]$ stabilizers given in Eq.~\ref{eq:stab7}: the fact that each stabilizer has a weight of four guarantees that $UKU^{\dagger}$ picks up an even number of $-1$ factors. Since the stabilizers remain fixed, the operations are valid. However, what transformations do the operators of Eq.~\ref{eq:logop} actually perform on encoded data? For a single qubit, a bit-flip operation $X$ takes $\ket{0} \leftrightarrow \ket{1}$. Recall that for a single qubit $Z\ket{0} = \ket{0}$ and $Z\ket{1} = -\ket{1}$, hence for $\bar{X}$ to actually induce a logical bit-flip it must take $\ket{0}_L \leftrightarrow \ket{1}_L$. For the $[[7,1,3]]$ code, the final operator which fixes the logical state is $K^7 = Z^{\otimes 7}$, where $K^7\ket{0}_L = \ket{0}_L$ and $K^7\ket{1}_L = -\ket{1}_L$. As $\bar{X}K^7\bar{X} = -K^7$, any state stabilized by $K^7$ becomes stabilized by $-K^7$ (and vice-versa) after the operation of $\bar{X}$. Therefore, $\bar{X}$ represents a logical bit flip. The same argument can be used for $\bar{Z}$ by considering the stabilizer properties of the states $\ket{\pm} = (\ket{0} \pm \ket{1})/\sqrt{2}$. Hence, the logical bit- and phase-flip gates can be applied directly to logical data by simply using seven single qubit $X$ or $Z$ gates [Fig.~\ref{fig:transversal}]. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.75\textwidth]{transversal.pdf} \caption{Bit-wise application of single qubit gates in the $[[7,1,3]]$ code.
Logical $X$, $Z$, $H$ and $P$ gates can trivially be applied by using seven single qubit gates, fault-tolerantly. Note that the application of seven $P$ gates results in the logical $\bar{P^{\dagger}}$ being applied, and vice-versa.} \label{fig:transversal} \end{center} \end{figure*} Two other useful gates which can be applied in this manner are the Hadamard and phase gates, \begin{equation} H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad \quad P = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}. \end{equation} These gates are useful since, when combined with the two-qubit CNOT gate, they generate a group of multi-qubit gates known as the Clifford group (the gates which map Pauli group operators back to the Pauli group). Again, using the stabilizers of the $[[7,1,3]]$ code and the fact that for single qubits, \begin{equation} \begin{aligned} HXH = Z, \quad \quad HZH = X, \\ PXP^{\dagger} = iXZ, \quad \quad PZP^{\dagger} = Z, \end{aligned} \end{equation} a seven qubit bit-wise Hadamard gate will switch $X$ with $Z$ and therefore will simply flip $\{K^1,K^2,K^3\}$ with $\{K^4,K^5,K^6\}$, and is hence a valid operation. The bit-wise application of the $P$ gate will leave any $Z$ stabilizer invariant, but takes $X \rightarrow iXZ$. This is still valid since each stabilizer contains a multiple of four non-identity operators, so the factors of $i$ cancel. Hence the bit-wise application of seven $P$ gates is valid for the $[[7,1,3]]$ code. What do $\bar{H}$ and $\bar{P}$ do to the logical state? For a single qubit, the Hadamard gate flips any $Z$ stabilized state to an $X$ stabilized state, i.e. $\ket{0,1} \leftrightarrow \ket{+,-}$. Looking at the transformation of $K^7$, $\bar{H}K^7\bar{H} = X^{\otimes 7}$, the bit-wise Hadamard gate will invoke a logical Hadamard operation. The single qubit $P$ gate leaves a $Z$ stabilized state invariant, while an $X$ eigenstate becomes stabilized by $iXZ$. Hence, $\bar{P}(X^{\otimes 7})\bar{P}^{\dagger} = -i(XZ)^{\otimes 7}$, and the bit-wise gate, $\bar{P}$, represents a logical $P^{\dagger}$ gate on the data block. Similarly, bit-wise $P^{\dagger}$ gates enact a logical $P$ gate [Fig.~\ref{fig:transversal}]. Each of these fault-tolerant operations on a logically encoded block is commonly referred to as a transversal operation, as the logical operation is obtained by a set of individual operations acting transversally on the physical qubits. \subsection{Two-qubit Gates} A two-qubit logical CNOT operation can also be applied in the same transversal way. For un-encoded qubits, a CNOT operation performs the following mapping on the two qubit stabilizer set, \begin{equation} \begin{aligned} &X\otimes I \rightarrow X\otimes X, \\ &I\otimes Z \rightarrow Z\otimes Z, \\ &Z\otimes I \rightarrow Z\otimes I, \\ &I\otimes X \rightarrow I\otimes X, \end{aligned} \label{eq:CNOTtrans} \end{equation} where the first operator corresponds to the control qubit and the second operator corresponds to the target. Now consider the bit-wise application of seven CNOT gates between logically encoded blocks of data [Fig.~\ref{fig:transversal2}]. First, the stabilizer set must remain invariant, i.e., \begin{equation} \mathcal{G} = \{K^{i}\otimes K^{j}\} \rightarrow \{K^{i}\otimes K^{j}\} \; \forall \; (i,j). \end{equation} Table~\ref{tab:stabtrans} details the transformation of all the stabilizers under seven bit-wise CNOT gates, demonstrating that this operation is valid on the $[[7,1,3]]$ code.
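The entries of Table~\ref{tab:stabtrans} can be generated mechanically in the binary-symplectic representation, where an $n$-qubit Pauli operator is, up to phase, a pair of bit vectors $(x|z)$ and a CNOT maps $x_t \rightarrow x_t \oplus x_c$ and $z_c \rightarrow z_c \oplus z_t$. A minimal sketch (Python with \texttt{numpy}; illustrative) checks that the bit-wise CNOT maps every generator $K^i\otimes I$ and $I\otimes K^j$ back into the group:

\begin{verbatim}
import numpy as np

steane = ["IIIXXXX", "XIXIXIX", "IXXIIXX",
          "IIIZZZZ", "ZIZIZIZ", "IZZIIZZ"]

def to_xz(s):
    # Binary-symplectic (x|z) representation, phases ignored.
    x = [1 if c in "XY" else 0 for c in s]
    z = [1 if c in "ZY" else 0 for c in s]
    return np.array(x + z, dtype=np.uint8)

def gf2_rank(rows):
    m = np.array(rows, dtype=np.uint8)
    r = 0
    for col in range(m.shape[1]):
        piv = [i for i in range(r, m.shape[0]) if m[i, col]]
        if not piv:
            continue
        m[[r, piv[0]]] = m[[piv[0], r]]
        for i in range(m.shape[0]):
            if i != r and m[i, col]:
                m[i] ^= m[r]
        r += 1
    return r

# The twelve generators K^i (x) I and I (x) K^j on 14 qubits.
gens = [to_xz(s + "I" * 7) for s in steane] + \
       [to_xz("I" * 7 + s) for s in steane]

def transversal_cnot(v, n=7):
    x, z = v[:2 * n].copy(), v[2 * n:].copy()
    for q in range(n):       # control q, target q + n
        x[q + n] ^= x[q]     # X on a control copies to its target
        z[q] ^= z[q + n]     # Z on a target copies to its control
    return np.concatenate([x, z])

# Group preserved iff no conjugated generator enlarges the span.
rank = gf2_rank(gens)
print(all(gf2_rank(gens + [transversal_cnot(g)]) == rank
          for g in gens))   # -> True
\end{verbatim}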
The transformations in Eq.~\ref{eq:CNOTtrans} are trivially extended to the logical space, showing that seven bit-wise CNOT gates invoke a logical CNOT operation, \begin{equation} \begin{aligned} &\bar{X}\otimes I \rightarrow \bar{X}\otimes \bar{X}, \\ &I\otimes \bar{Z} \rightarrow \bar{Z}\otimes \bar{Z}, \\ &\bar{Z}\otimes I \rightarrow \bar{Z}\otimes I, \\ &I\otimes \bar{X} \rightarrow I\otimes \bar{X}. \end{aligned} \label{eq:CNOTtrans2} \end{equation} \begin{table*} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $K^i \otimes K^j$ & $K^1$ & $K^2$ &$K^3$ &$K^4$ &$K^5$ &$K^6$ \\ \hline $K^1$ & $K^1\otimes I$& $K^1\otimes K^1K^2$ & $K^1\otimes K^1K^3$ & $K^1K^4 \otimes K^1K^4$ & $K^1K^5\otimes K^1K^5$ & $K^1K^6\otimes K^1K^6$\\ \hline $K^2$ & $K^2\otimes K^1K^2$ & $K^2\otimes I$ & $K^2 \otimes K^2K^3$ &$K^2K^4\otimes K^2K^4$ &$K^2K^5\otimes K^2K^5$ & $K^2K^6\otimes K^2K^6$\\ \hline $K^3$ & $K^3\otimes K^3K^1$ & $K^3\otimes K^3K^2$ &$K^3\otimes I$ & $K^3K^4\otimes K^3K^4$ & $K^3K^5\otimes K^3K^5$ &$K^3K^6\otimes K^3K^6$ \\ \hline $K^4$ & $K^4\otimes K^1$ & $K^4\otimes K^2$ & $K^4\otimes K^3$ & $I\otimes K^4$ & $K^4K^5\otimes K^5$ &$K^4K^6\otimes K^6$ \\ \hline $K^5$ & $K^5\otimes K^1$ & $K^5\otimes K^2$ & $K^5\otimes K^3$ & $K^5K^4\otimes K^4$ & $I\otimes K^5$ &$K^5K^6\otimes K^6$ \\ \hline $K^6$ & $K^6\otimes K^1$ & $K^6\otimes K^2$ & $K^6\otimes K^3$ & $K^6K^4\otimes K^4$ & $K^6K^5\otimes K^5$ &$I\otimes K^6$ \\ \hline \end{tabular} \caption{Transformations of the $[[7,1,3]]$ stabilizer set under the gate operation $U=$CNOT$^{\otimes 7}$, where $\mathcal{G} \rightarrow U^{\dagger}\mathcal{G}U$. Note that the transformation does not take any stabilizer outside the group generated by $K^i \otimes K^j$, $(i,j)\in \{1,\ldots,6\}$, therefore $U=$CNOT$^{\otimes 7}$ represents a valid operation on the codespace.} \label{tab:stabtrans} \end{table*} The issue of fault-tolerance with these logical operations should be clear. The $\bar{X}$, $\bar{Z}$, $\bar{H}$ and $\bar{P}$ gates are trivially fault-tolerant, since the logical operation is performed through seven bit-wise single qubit gates. The logical CNOT is also fault-tolerant, since each two-qubit gate only operates between counterpart qubits in each logical block. Hence if any gate is inaccurate, then at most a single error will be introduced in each block. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{transversal2.pdf} \caption{Bit-wise application of a CNOT gate between two logical qubits. Since each CNOT only couples corresponding qubits in each block, this operation is inherently fault-tolerant.} \label{fig:transversal2} \end{center} \end{figure} In contrast to the [[7,1,3]] code, let us also take a quick look at the [[5,1,3]] code. As mentioned in Section~\ref{sec:sec:QEC2}, the [[5,1,3]] code is a non-CSS code, meaning that the Clifford group of gates cannot be fully implemented in a transversal manner. To see this clearly we can examine how the stabilizer group for the code transforms under a transversal Hadamard operation, \begin{equation} \begin{pmatrix} X & Z & Z & X & I \\ I & X & Z & Z & X \\ X & I & X & Z & Z \\ Z & X & I & X & Z \end{pmatrix} \quad \longrightarrow \quad \begin{pmatrix} Z & X& X & Z & I \\ I & Z & X & X & Z \\ Z & I & Z & X & X \\ X & Z & I & Z & X \end{pmatrix}. \end{equation} The stabilizer group is not preserved under this transformation; therefore the transversal Hadamard operation is not valid for the [[5,1,3]] code.
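The failure of the transversal Hadamard can be verified with the same kind of group-membership test. A short sketch (Python; illustrative) enumerates the sixteen elements of the [[5,1,3]] stabilizer group, ignoring overall phases, and checks each Hadamard-transformed generator against it:

\begin{verbatim}
from itertools import product

stabs = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]
bits = {"I": (0, 0), "X": (1, 0), "Z": (0, 1), "Y": (1, 1)}
letters = {v: k for k, v in bits.items()}

def multiply(a, b):
    # Product of two Pauli strings, ignoring the overall phase.
    return "".join(letters[(bits[p][0] ^ bits[q][0],
                            bits[p][1] ^ bits[q][1])]
                   for p, q in zip(a, b))

# All 16 group elements: products of subsets of the generators.
group = set()
for picks in product([0, 1], repeat=4):
    g = "IIIII"
    for s, on in zip(stabs, picks):
        if on:
            g = multiply(g, s)
    group.add(g)

# A transversal Hadamard exchanges X and Z on every qubit.
swap = str.maketrans("XZ", "ZX")
for s in stabs:
    h = s.translate(swap)
    print(s, "->", h, "in group:", h in group)  # all False
\end{verbatim}

None of the transformed operators lies in the original group, confirming that the operation maps states out of the codespace.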
One thing to briefly note is that there is a method for performing logical Hadamard and phase gates on the [[5,1,3]] code~\cite{G97+}. However, it essentially involves performing a valid, transversal, three-qubit gate and then measuring out two of the logical ancillae. While these gates are useful for operating on quantum data, they do not represent a universal set for quantum computation. In fact, it has been shown that, by using the stabilizer formalism, these operations can be efficiently simulated on a classical device~\cite{G98,AG04}. In order to achieve universality, one of the following gates is generally added to the available set, \begin{equation} T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}, \end{equation} or the Toffoli gate~\cite{T81}. However, neither of these two gates is a member of the Clifford group, and applying either in a similar bit-wise way will transform the stabilizers out of the group; consequently this does not represent a valid operation. Circuits implementing these two gates in a fault-tolerant manner have been developed~\cite{NC00,GC99,SI05,SFH07}, but at this stage the circuits are complicated and resource intensive. This has practical implications for encoded operations. If universality is achieved by adding the $T$ gate to the list, arbitrary logical single qubit rotations must be approximated by long gate sequences (utilizing the Solovay-Kitaev theorem~\cite{K97,DN06}), and these sequences often require many $T$ gates~\cite{F05+}. Finding more efficient methods to achieve universality on encoded data is therefore still an active area of research. \section{Fault-tolerant Circuit Design for Logical State Preparation}\label{sec:FTcircuit} Section~\ref{sec:Fault-tolerance} introduced the basic rules for fault-tolerant circuit design and how these rules lead to the threshold theorem for concatenated error correction. However, what does a full fault-tolerant quantum circuit look like? Here, we introduce a full fault-tolerant circuit to prepare the $[[7,1,3]]$ logical $\ket{0}$ state. As the $[[7,1,3]]$ code is a single error correcting code, we use the one-to-one definition of fault-tolerance and therefore only need to consider the propagation of a single error during the preparation (any more than one error during correction represents a higher order effect and is ignored). As described in Section~\ref{sec:sec:QEC}, logical state preparation can be done by initializing an appropriate number of physical qubits and measuring each of the $X$ stabilizers that describe the code. Therefore, a circuit which allows the measurement of a Hermitian operator in a fault-tolerant manner needs to be constructed. The general structure of the circuit used was first developed by Shor~\cite{S96+}; however, it should be noted that several more recent, and more efficient, methods for fault-tolerant state preparation and correction now exist~\cite{S97,S02,DA07}. The circuits shown in Figs.~\ref{fig:opmeas2}a and~\ref{fig:opmeas2}b, which measure the stabilizer $K^1 = IIIXXXX$, are not fault-tolerant, since a single ancilla is used to control all four CNOT gates. Instead, four ancilla qubits are used, prepared in the state $\ket{\mathcal{A}} = (\ket{0000}+\ket{1111})/\sqrt{2}$. This can be done by initializing four qubits in the $\ket{0}$ state and applying a Hadamard and then a sequence of CNOT gates.
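A minimal state-vector sketch of this ancilla preparation (Python with \texttt{numpy}; the chain layout of the CNOTs is our assumption and is illustrative):

\begin{verbatim}
import numpy as np
from functools import reduce

kron = lambda *ops: reduce(np.kron, ops)
I2, I4 = np.eye(2), np.eye(4)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0],   # CNOT acting on two
               [0, 0, 0, 1], [0, 0, 1, 0]])  # adjacent qubits

# |0000>, then H on qubit 1 and a CNOT chain 1->2, 2->3, 3->4.
state = np.zeros(16)
state[0] = 1.0
for U in (kron(H, I2, I2, I2),
          kron(CX, I4),
          kron(I2, CX, I2),
          kron(I4, CX)):
    state = U @ state

for i, amp in enumerate(state):
    if abs(amp) > 1e-12:
        print(f"{amp:+.4f} |{i:04b}>")  # (|0000> + |1111>)/sqrt(2)
\end{verbatim}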
Each of these four ancilla qubits is used to control a separate CNOT gate, after which the ancilla state is decoded and measured. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{opmeas2.pdf} \caption{Three circuits which measure the stabilizer $K^1$. a) represents a generic operator measurement where a multi-qubit controlled gate is available. b) decomposes this into single- and two-qubit gates, but in a non-fault-tolerant manner. c) introduces four ancilla qubits such that each CNOT is controlled via a separate qubit. This ensures fault-tolerance.} \label{fig:opmeas2} \end{center} \end{figure*} By ensuring that each CNOT is controlled via a separate ancilla, any $X$ error will only propagate to a single qubit in the data block. However, during the preparation of the ancilla state there is the possibility that a single $X$ error can propagate to multiple ancilla qubits, which are then fed forward into the data block. In order to combat this, the ancilla block needs to be verified against possible $X$ errors. Tracking through all the possible locations where a single $X$ error can occur during ancilla preparation leads to the following unique states, \begin{equation} \begin{aligned} &\ket{\mathcal{A}}_1 = \frac{1}{\sqrt{2}}(\ket{0000}+\ket{1111}),\\ &\ket{\mathcal{A}}_2 = \frac{1}{\sqrt{2}}(\ket{0001} + \ket{1110}),\\ &\ket{\mathcal{A}}_3 = \frac{1}{\sqrt{2}}(\ket{0011} + \ket{1100}),\\ &\ket{\mathcal{A}}_4 = \frac{1}{\sqrt{2}}(\ket{0111} + \ket{1000}),\\ &\ket{\mathcal{A}}_5 = \frac{1}{\sqrt{2}}(\ket{0100} + \ket{1011}).\\ \end{aligned} \end{equation} Of these possibilities, the states $\ket{\mathcal{A}}_2$, $\ket{\mathcal{A}}_3$ and $\ket{\mathcal{A}}_4$ have a different parity between the first and fourth qubits ($\ket{\mathcal{A}}_5$ passes such a parity check, but as it differs from $\ket{\mathcal{A}}_1$ by only a single $X$ it feeds at most one error forward into the data block, which is tolerable). Hence to verify the state, a fifth ancilla is added, initialized and used to perform a parity check on the ancilla block. This fifth ancilla is then measured. If the result is $\ket{0}$, the ancilla block is clean and can be coupled to the data. If the ancilla result is $\ket{1}$, then either a single error has occurred in the ancilla preparation or on this verification qubit. In either case, the entire ancilla block is re-initialized and the ancilla prepared again. This is continued until the verification qubit is measured to be $\ket{0}$ [Fig.~\ref{fig:opmeas3}]. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{opmeas3.pdf} \caption{Circuit required to measure the stabilizer $K^1$ fault-tolerantly. A four qubit GHZ state is used as the ancilla, with the state requiring verification against multiple $X$ errors. After the state has passed verification it is coupled to the data block and a syndrome is extracted.} \label{fig:opmeas3} \end{center} \end{figure*} \begin{figure*}[ht] \begin{center} \includegraphics[width=0.7\textwidth,height=0.9\textheight]{opmeas4.pdf} \caption{Circuit required to prepare the $[[7,1,3]]$ logical $\ket{0}$ state fault-tolerantly. Each of the $X$ stabilizers is sequentially measured using the circuit in Fig.~\ref{fig:opmeas3}. To maintain fault-tolerance, each stabilizer is measured 2--3 times with a majority vote taken.} \label{fig:stateprep} \end{center} \end{figure*} The re-preparation of the ancilla block protects against $X$ errors, which can propagate forward through the CNOT gates. $Z$ errors, on the other hand, propagate in the other direction: any $Z$ error which occurs in the ancilla block will propagate straight through to the final measurement.
This results in the measurement not corresponding to the eigenstate the data is projected to, and can possibly result in mis-correction once all stabilizers have been measured. To protect against this, each stabilizer is measured 2--3 times and a majority vote of the measurement results is taken. As any additional error represents a second order process, if the first or second measurement has been corrupted by an induced $Z$ error, then the third measurement will only contain additional errors if a higher order error process has occurred. Therefore, we are free to ignore this possibility and assume that the third measurement is error free. The full circuit for $[[7,1,3]]$ state preparation is shown in Fig.~\ref{fig:stateprep}, where each stabilizer is measured 2--3 times. The total circuit requires a minimum of 12 qubits (seven data qubits and a 5-qubit ancilla block). As you can see, the circuit constructions for full fault-tolerant state preparation (and error correction) are not simple. However, they are easy to design in generic ways when employing stabilizer coding. \section{Loss Protection} So far we have focused the discussion on correction techniques which assume that error processes preserve the qubit structure of the Hilbert space. As we noted in Section~\ref{sec:lossleakage}, the loss of physical qubits within the computer violates this assumption and in general requires additional correction machinery beyond what we have already discussed. For the sake of completeness, this section examines some correction techniques for qubit loss. Specifically, we detail one such scheme which was developed with single photon based architectures in mind. Protecting against qubit loss requires a different approach than other general forms of quantum errors such as environmental decoherence or systematic control imperfections. The cumbersome aspect of correcting qubit loss is detecting the presence of a qubit at the physical level. The specific machinery that is required for loss detection is dependent on the underlying physical architecture, but the basic principle is that the presence or absence of the physical qubit must be determined without discriminating the actual quantum state. Certain systems allow for loss detection in a more convenient way than others. Electronic spin qubits, for example, can employ Single Electron Transistors (SETs) to detect the presence or absence of the charge without performing measurement on the spin degree of freedom~\cite{DS00,CGJH05,AJWSD01}. Optics, in contrast, requires more complicated non-demolition measurement~\cite{MW84,IHY85,POWBR04,MNBS05}. This is due to the fact that typical photonic measurement is performed via photo-detectors, which have the disadvantage of physically destroying the photon. Once the presence of the physical qubit has been determined, a freshly initialized qubit can be injected to replace a lost qubit, after which the standard error correcting procedure can correct for the error. A freshly initialized qubit state, $\ket{0}$, can be represented as the projective collapse of a general qubit state, $\ket{\psi}$, as, \begin{equation} \ket{0} \propto \ket{\psi}+Z\ket{\psi}. \end{equation} If we consider this qubit as part of an encoded block, then the above corresponds to a 50\% error probability of experiencing a phase flip on this qubit. Therefore, a loss event that is corrected by non-demolition detection and standard QEC essentially guarantees a correction event in the QEC cycle.
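The projective identification above is a one-line check. A minimal sketch (Python with \texttt{numpy}; illustrative) confirms that $\ket{\psi} + Z\ket{\psi}$ has support only on $\ket{0}$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
Z = np.diag([1.0, -1.0])

# A random single-qubit state |psi> = a|0> + b|1>.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# |psi> + Z|psi> = 2a|0> and |psi> - Z|psi> = 2b|1>:
print(psi + Z @ psi)  # support only on |0>
print(psi - Z @ psi)  # support only on |1>
\end{verbatim}

The replacement state $\ket{0}$ is therefore, up to normalization, the $+1$ branch of a $Z$-basis projection whose sign is never learned; within the codeblock this is treated as a $Z$ error occurring with probability $1/2$.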
The rate of loss therefore needs to be comparable to the rate of standard errors, as the correction cycle after a loss detection event will, with high probability, detect and correct an error. Additionally, if a loss event is detected and the qubit replaced, the error detection code shown in Section~\ref{sec:detection} becomes a single qubit correction code. This is due to the fact that erasure errors have known locations; consequently, error detection is sufficient to perform full correction, in contrast to non-erasure errors, where the location is unknown. A second method for loss correction is relevant to systems that have high loss rates compared to systematic and environmental errors, the most prevalent example being optical systems. Due to the high mobility of single photons and their relative immunity to environmental interactions, loss is a major error channel that generally dominates over other error sources. The use of error detection and correction codes for photon loss is undesirable due to the need for non-demolition detection of the lost qubit. While techniques for measuring the presence or absence of a photon without direct detection have been developed and implemented~\cite{POWBR04}, they require multiple ancilla photons and controlled interactions. Ultimately it is more desirable to redesign the loss correction code such that it can be employed directly with photo-detection rather than more complicated non-demolition techniques. One such scheme was developed by Ralph, Hayes and Gilchrist in 2005~\cite{RHG05}. This scheme was a more efficient extension of an original parity encoding method developed by Knill, Laflamme and Milburn to protect against photon loss in their controlled-$\sigma_z$ gate~\cite{KLM01}. The general parity encoding for a logical qubit is an $N$ photon GHZ state in the conjugate basis, i.e., \begin{equation} \begin{aligned} &\ket{0}_L^N = \frac{1}{\sqrt{2}}(\ket{+}^{\otimes N} + \ket{-}^{\otimes N}), \\ &\ket{1}_L^N = \frac{1}{\sqrt{2}}(\ket{+}^{\otimes N} - \ket{-}^{\otimes N}), \end{aligned} \end{equation} where $\ket{\pm} = (\ket{0}\pm \ket{1})/\sqrt{2}$. The motivation for this type of encoding is that measuring any qubit in the $\ket{0,1}$ basis simply removes it from the state, reducing the size of the encoding by one, i.e., \begin{equation} \begin{aligned} P_{0,N} \ket{0}_L^N &= (I_N + Z_N)\ket{0}_L^N \\ & = \frac{1}{\sqrt{2}}(\ket{+}^{N-1} + \ket{-}^{N-1})\ket{0}_N = \ket{0}_L^{N-1}\ket{0}_N, \\ P_{1,N} \ket{0}_L^N &= (I_N - Z_N)\ket{0}_L^N \\ &= \frac{1}{\sqrt{2}}(\ket{+}^{N-1} - \ket{-}^{N-1})\ket{1}_N = \ket{1}_L^{N-1}\ket{1}_N, \\ \end{aligned} \label{eq:lossenc} \end{equation} where $P_{0,N}$ and $P_{1,N}$ are the projectors corresponding to measurement of the $N^{th}$ qubit in the $\ket{0,1}$ basis (up to normalization). The effect for the $\ket{1}_L$ state is similar. Measuring the $N^{th}$ qubit in the $\ket{0}$ state simply removes it from the encoded state, reducing the logical zero state by one qubit, while measuring the $N^{th}$ qubit as $\ket{1}$ enacts a logical bit flip at the same time as reducing the size of the logical state. However, since the measurement result is known, this encoded bit flip can be corrected for. Instead of introducing the full scheme developed in~\cite{RHG05}, we just give the general idea of how such encoding allows for loss detection without non-demolition measurements.
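The reduction property of Eq.~\ref{eq:lossenc} is easy to verify for small $N$. A minimal sketch (Python with \texttt{numpy}; illustrative) builds $\ket{0}_L^3$, measures its last qubit in the computational basis, and identifies the remaining two-qubit state:

\begin{verbatim}
import numpy as np
from functools import reduce

kron = lambda ops: reduce(np.kron, ops)
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

def logical(bit, N):
    # Parity-encoded states (|+>^N +/- |->^N)/sqrt(2).
    sign = -1 if bit else 1
    v = kron([plus] * N) + sign * kron([minus] * N)
    return v / np.linalg.norm(v)

zero3 = logical(0, 3)
# Measuring the last qubit splits the 8-dim vector into the
# branches ending in |0> and |1>.
branches = zero3.reshape(4, 2)
for outcome in (0, 1):
    v = branches[:, outcome]
    v = v / np.linalg.norm(v)
    b = 0 if np.allclose(abs(v @ logical(0, 2)), 1) else 1
    print(f"outcome |{outcome}>: remainder = |{b}>_L^2")
\end{verbatim}

The $\ket{1}$ outcome leaves $\ket{1}_L^2$, an encoded bit flip that is heralded by the known measurement result and is therefore trivially correctable.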
Photon loss in this model is assumed equivalent to measuring the photon in the $\ket{0},\ket{1}$ basis, but without learning the answer [Sec~\ref{sec:lossleakage}]. Our ignorance of the measurement result could lead to a logical bit flip error on the encoded state, therefore we require the ability to protect against logical bit flip errors on the above states. As already shown, the 3-qubit code allows us to achieve such correction. Therefore the final step in this scheme is encoding the above states into a redundancy code (a generalized version of the 3-qubit code), where an arbitrary logical state, $\ket{\psi}_L$, is now given by,
\begin{equation}
\ket{\psi}_L = \alpha\ket{0}_1^N \ket{0}_2^N\cdots\ket{0}_q^N + \beta \ket{1}_1^N \ket{1}_2^N \cdots \ket{1}_q^N
\end{equation}
where $\ket{0}^N,\ket{1}^N$ are the parity encoded states shown in Eq.~\ref{eq:lossenc} and the fully encoded state consists of $q$ blocks of these parity states. This form of encoding protects against the loss of qubits by first encoding the system into a code structure that allows for the removal of qubits without destroying the computational state and then protecting against logical errors that are induced by loss events. In effect it maps errors that are uncorrectable by standard QEC to error channels that are correctable, in this case qubit loss $\rightarrow$ qubit bit-flips. This is common with pathological error channels. If a specific type of error violates the standard ``qubit'' assumption of QEC, additional correction techniques are always required to map this type of error to a correctable form; consequently additional physical resources are usually needed.

\section{Some Modern Developments in Error Correction}
\label{sec:modern}

Up until this stage we have restricted our discussion of error correction to the most basic principles and codes. The ideas and methodologies we have detailed represent the introductory techniques that were developed when error correction was first proposed. Those readers who are only looking for a basic introduction to the field can quite easily skip the remainder of this paper.

Providing a fair and encompassing review of the more modern and advanced error correction techniques that have been developed is far outside our goal for this review. However, we would be remiss if we did not briefly examine some of the more advanced error correction techniques that have been proposed for large scale quantum information processing. For the remainder of this discussion we choose two closely related error correction techniques, subsystem coding and topological coding, which have been receiving significant attention in the fields of architecture design and large scale quantum information processing. While some readers may disagree, we review these two modern error correction protocols because they are currently two of the most useful correction techniques when discussing the physical construction of a quantum computer. We again attempt to keep the discussion of these techniques light and provide specific examples when possible. However, it should be stressed that these error correcting protocols are far more complicated than the basic codes shown earlier. Topological error correction alone has, since its introduction, essentially become its own research topic within the broader error correction field. Hence we encourage the interested reader to refer to the cited articles below for a more rigorous and detailed treatment of these techniques.
\subsection{Subsystem Codes}
\label{sec:subsystem}

Quantum subsystem codes~\cite{B06} are one of the newer and more flexible techniques for implementing quantum error correction. The traditional stabilizer codes that we have reviewed are more formally identified as subspace codes, where information is encoded in a relevant coding subspace of a larger multi-qubit system. In contrast, subsystem coding identifies multiple subspaces of the multi-qubit system as equivalent for storing quantum information. Specifically, multiple states are identified with the logical $\ket{0}_L$ and $\ket{1}_L$ states.

The primary benefit of utilizing subsystem codes is the general nature of their construction. Moving from smaller to larger error correcting codes is conceptually straightforward, error correction circuits are much simpler to construct when encoding information in multiple subsystems~\cite{AC07}, and the generality of their construction introduces the ability to perform dynamical code switching in a fault-tolerant manner~\cite{SEDH07}. This final property gives subsystem coding significant flexibility, as the strength of error correction within a quantum computer can be changed, fault-tolerantly, during operation of the device.

As with the other codes presented in this review, subsystem codes are stabilizer codes, but now defined over a square lattice. The lattice dimensions represent the $X$ and $Z$ error correction properties and the size of the lattice in either of these two dimensions dictates the total number of errors the code can correct. In general, a $\mathcal{C}$($n_1$,$n_2$) subsystem code is defined over an $n_1\times n_2$ square lattice which encodes one logical qubit into $n_1n_2$ physical qubits with the ability to correct at least $\lfloor\frac{n_1-1}{2}\rfloor$ $Z$ errors and at least $\lfloor\frac{n_2-1}{2}\rfloor$ $X$ errors. Again, keeping with the spirit of this review, we focus on a specific example, the $\mathcal{C}$(3,3) subsystem code. This code, which encodes one logical qubit into nine physical qubits, can correct one $X$ and one $Z$ error. In order to define the code structure we begin with a $3\times 3$ lattice of qubits, where each qubit is identified with a vertex of the lattice (note that this 2D structure represents the structure of the code; it does not imply that a physical array of qubits {\em must} be arranged into a 2D lattice).

\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.4\textwidth]{stabil.pdf}
\caption{Stabilizer structure for the $\mathcal{C}$(3,3) code. Fig. a. gives two of the four stabilizers from the group $\mathcal{S}$. Fig. b. illustrates one of the four encoded Pauli operators from each subsystem defined with the gauge group, $\mathcal{T}$. Fig. c. gives the two logical operators from the group $\mathcal{L}$ which enact valid operations on the encoded qubit. }
\label{fig:stab}
\end{center}
\end{figure}

Fig.~\ref{fig:stab} illustrates three sets of stabilizer operators which are defined over the lattice. The first group, illustrated in Fig.~\ref{fig:stab}a., is the stabilizer group, $\mathcal{S}$, which is generated by the operators,
\begin{equation}
\begin{aligned}
\mathcal{S} = \langle X_{i,*} X_{i+1,*} ; Z_{*,j}Z_{*,j+1} \ | \ i \in \Z_{2} ; j \in \Z_{2} \rangle,
\end{aligned}
\label{stabilizers:bs}
\end{equation}
where we retain the notation utilized in~\cite{AC07,SEDH07}: $U_{i,*}$ and $U_{*,j}$ represent an operator, $U$, acting on all qubits in a given row, $i$, or column, $j$, respectively, and $\Z_2=\{1,2\}$.
The second relevant subsystem is known as the gauge group~[Fig.~\ref{fig:stab}b.], $\mathcal{T}$, and is described via the non-Abelian group generated by the pairwise operators
\begin{equation}
\begin{aligned}
\mathcal{T} = &\langle X_{i,j}X_{i+1,j} \ | \ i \in \Z_{2} ; j \in \Z_{3} \rangle, \\
&\langle Z_{i,j}Z_{i,j+1} \ | \ i \in \Z_{3} ; j \in \Z_{2} \rangle.
\end{aligned}
\end{equation}
The third relevant subsystem is the logical space~[Fig.~\ref{fig:stab}c], $\mathcal{L}$, which can be defined through the logical Pauli operators
\begin{equation}
\mathcal{L} = \langle Z_{*,1} ; X_{1,*} \rangle,
\end{equation}
which, when combined with $\mathcal{S}$, form a non-Abelian group.

The stabilizer group, $\mathcal{S}$, defines all relevant code states, i.e. {\em every} valid logical state is a $+1$ eigenstate of this set. For the $\mathcal{C}$(3,3) code, there are a total of nine physical qubits and a total of four independent stabilizers in $\mathcal{S}$, hence there are five degrees of freedom left in the system which can house $2^5$ logical states which are simultaneous eigenstates of $\mathcal{S}$. This is where the gauge group, $\mathcal{T}$, becomes relevant. As the gauge group is non-Abelian, there is no valid code state which is a simultaneous eigenstate of all operators in $\mathcal{T}$. However, on closer examination there are a total of four encoded Pauli operations within $\mathcal{T}$. Fig~\ref{fig:stab}b. illustrates two such operators. As all elements of $\mathcal{T}$ commute with all elements of $\mathcal{S}$ we can identify each of these four sets of valid ``logical'' qubits as equivalent, i.e. we define $\{\ket{0}_L,\ket{1}_L\}$ pairs which are eigenstates of $\mathcal{S}$ and an Abelian subgroup of $\mathcal{T}$ and then ignore exactly which gauge state we are in (each of the four possible $\ket{0}_L$ states can be used to store a single logical qubit in the $\ket{0}$ state, regardless of which particular $\ket{0}_L$ gauge state we are in). Hence, each of these gauge states represents a subsystem of the code, with each subsystem logically equivalent. The final group we consider is the logical group $\mathcal{L}$. This is the set of two Pauli operators which enact a logical $X$ or $Z$ gate on the encoded qubit {\em regardless} of the gauge choice and consequently represent true logical operations on our encoded space.

In a more formal sense, the definition of these three group structures allows us to decompose the Hilbert space of the system. If we let $\mathcal{H}$ denote the Hilbert space of the physical system, $\mathcal{S}$ forms an Abelian group and hence can act as a stabilizer set denoting subspaces of $\mathcal{H}$. If we describe each of these subspaces by the binary vector, $\vec{e}$, formed from the eigenvalues of the stabilizers, $\mathcal{S}$, then each subspace splits into a tensor product structure
\begin{equation}
\mathcal{H} = \bigoplus_{\vec{e}} \mathcal{H}_{\mathcal{T}} \otimes \mathcal{H}_{\mathcal{L}},
\end{equation}
where elements of $\mathcal{T}$ act only on the subsystem $\mathcal{H}_{\mathcal{T}}$ and the operators $\mathcal{L}$ act only on the subsystem $\mathcal{H}_{\mathcal{L}}$. Therefore, in the context of storing qubit information, a logical qubit is encoded into the two dimensional subsystem $\mathcal{H}_{\mathcal{L}}$.
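These group-theoretic relations are easy to verify mechanically. The following Python sketch (our own illustration) represents each Pauli operator of the $\mathcal{C}$(3,3) code by a symplectic pair of binary vectors and checks that $\mathcal{S}$ is Abelian, that every gauge generator commutes with every stabilizer, and that the two logical operators commute with $\mathcal{S}$ and $\mathcal{T}$ while anticommuting with each other (rows and columns are indexed from 0 here rather than from 1):

\begin{verbatim}
import numpy as np

n1 = n2 = 3
def idx(i, j):                       # qubit at row i, column j (0-based)
    return i * n2 + j

def pauli(xs=(), zs=()):             # symplectic (x|z) representation
    x = np.zeros(n1 * n2, dtype=int)
    z = np.zeros(n1 * n2, dtype=int)
    for q in xs: x[q] = 1
    for q in zs: z[q] = 1
    return x, z

def commute(p, q):                   # symplectic inner product = 0 mod 2
    return (p[0] @ q[1] + q[0] @ p[1]) % 2 == 0

# Stabilizers: X on two adjacent full rows, Z on two adjacent full columns.
S  = [pauli(xs=[idx(i, j) for j in range(n2)] +
              [idx(i + 1, j) for j in range(n2)]) for i in range(n1 - 1)]
S += [pauli(zs=[idx(i, j) for i in range(n1)] +
              [idx(i, j + 1) for i in range(n1)]) for j in range(n2 - 1)]

# Gauge generators: pairwise XX within a column, ZZ within a row.
T  = [pauli(xs=[idx(i, j), idx(i + 1, j)])
      for i in range(n1 - 1) for j in range(n2)]
T += [pauli(zs=[idx(i, j), idx(i, j + 1)])
      for i in range(n1) for j in range(n2 - 1)]

# Logical operators: Z on the first column, X on the first row.
LZ = pauli(zs=[idx(i, 0) for i in range(n1)])
LX = pauli(xs=[idx(0, j) for j in range(n2)])

print(all(commute(a, b) for a in S for b in S))            # S is Abelian
print(all(commute(s, t) for s in S for t in T))            # T commutes with S
print(any(not commute(a, b) for a in T for b in T))        # T is non-Abelian
print(all(commute(L, g) for L in (LZ, LX) for g in S + T)) # L acts on H_L only
print(not commute(LZ, LX))                                 # logical Z,X anticommute
\end{verbatim}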
As the system is already stabilized by operators in $\mathcal{S}$ and the operators in $\mathcal{T}$ act only on the space $\mathcal{H}_{\mathcal{T}}$, qubit information is only altered when operators in the group $\mathcal{L}$ act on the system. This formal definition of how subsystem coding works may appear more complicated than the standard stabilizer codes shown earlier, but this coding structure has significant benefits when we consider how error correction is performed.

In general, to perform error correction, each of the stabilizers of the codespace must be checked to determine which eigenvalue changes have occurred due to errors. In the case of subsystem codes this would appear to be problematic. The stabilizer group, $\mathcal{S}$, consists of qubit operators that scale with the size of the code. In our specific example, each of the $X$ and $Z$ stabilizers is six-dimensional (and in general, for an $n_1\times n_2$ lattice, the $X$ stabilizers are $2n_1$ dimensional and the $Z$ stabilizers are $2n_2$ dimensional). If techniques such as Shor's method~[Section~\ref{sec:FTcircuit}] were used, we would need to prepare a large ancilla state to perform fault-tolerant correction, which would also scale linearly with the size of the code; this is clearly undesirable. However, due to the gauge structure of subsystem codes we are able to decompose the error correction procedure~\cite{AC07}. Each of the stabilizers in $\mathcal{S}$ is simply the product of certain elements from $\mathcal{T}$, for example,
\begin{equation}
\begin{aligned}
&X_{1,1}X_{1,2}X_{1,3}X_{2,1}X_{2,2}X_{2,3} \in \mathcal{S} \\
= &( X_{1,1}X_{2,1} ) \cdot (X_{1,2}X_{2,2}) \cdot (X_{1,3}X_{2,3}) \in \mathcal{T}.
\end{aligned}
\end{equation}
Therefore if we check the eigenvalues of the three 2-dimensional operators from $\mathcal{T}$ we are able to calculate the eigenvalue of the 6-dimensional stabilizer. This decomposition of the stabilizer set for the code is only possible because the decomposition is in terms of operators from $\mathcal{T}$ which, when measured, have no effect on the logical information encoded within the system. In fact, when error correction is performed the gauge state of the system will almost always change based on the order in which the eigenvalues of the gauge operators are checked.

This exploitation of the gauge properties of subsystem coding is extremely beneficial for fault-tolerant designs of correction circuits. As the stabilizer operators can now be decomposed into multiple 2-dimensional operators, fault-tolerant circuits for error correction do not require any encoded ancilla states. Furthermore, if we decide to scale the code-space to correct more errors (increasing the lattice size representing the code) we do not require measuring operators with higher dimensionality. Fig.~\ref{fig:BScheck}, taken from Ref.~\cite{AC07}, illustrates the fault-tolerant circuit constructions for Bacon-Shor subsystem codes.
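A toy numerical example of this decomposition (again our own illustration): for an arbitrary pattern of $Z$ errors on the nine qubits, the eigenvalue of the six-qubit stabilizer $X_{1,*}X_{2,*}$ is reproduced exactly by multiplying the outcomes of the three two-qubit gauge measurements:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
z_err = rng.integers(0, 2, size=(3, 3))   # random pattern of Z errors

def gauge_xx(i, j):
    # Eigenvalue of the two-qubit gauge operator X_{i,j} X_{i+1,j}
    # in the presence of the Z errors (0-based indices).
    return (-1) ** int(z_err[i, j] + z_err[i + 1, j])

# Direct eigenvalue of the six-qubit row stabilizer X_{1,*} X_{2,*} ...
direct = (-1) ** int(z_err[0, :].sum() + z_err[1, :].sum())
# ... and its reconstruction from the three gauge outcomes.
combined = gauge_xx(0, 0) * gauge_xx(0, 1) * gauge_xx(0, 2)
print(direct == combined)   # True for every error pattern
\end{verbatim}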
\begin{figure}[ht]
\begin{center}
\begin{tabular}{ccc}
\put(-30,10){(a)} \put(95,10){(b)}
\Qcircuit @C=1ex @R=2.3ex @!R {
\put(0.1,15){\footnotesize{$j,k$}} & \qw & \targ & \qw & \qw & \qw & \qw \\
& \qw \put(-12,12){\footnotesize{$j{+}1,k$}} & \qw & \targ & \qw & \qw & \qw\\
& \push{|+\rangle \hspace{0.1cm}} & \ctrl{-2} & \ctrl{-1} & \gate{H} & \meter }
&\hspace{1.2cm} & \hspace{0.2cm}
\Qcircuit @C=1ex @R=2.3ex @!R {
\put(0.1,15){\footnotesize{$k,j$}} & \qw & \ctrl{+2} & \qw & \qw & \qw \\
& \qw \put(-12,12){\footnotesize{$k,j{+}1$}} & \qw & \ctrl{+1} & \qw & \qw \\
& \push{|0\rangle \hspace{0.1cm}} & \targ & \targ & \meter & }
\vspace{-0.2cm}
\end{tabular}
\end{center}
\caption{\label{fig:BScheck} (From Ref.~\cite{AC07}) Circuits for measuring the gauge operators and hence performing error correction for subsystem codes. Fig. a. measures, fault-tolerantly, the operator $X_{j,k}X_{j+1,k}$ with only one ancilla. Fig. b. measures $Z_{k,j}Z_{k,j+1}$. The results of these two qubit parity checks can be used to calculate the parity of the higher dimensional stabilizer operators of the code. }
\end{figure}

As each ancilla qubit is only coupled to two data qubits, no further circuit constructions are required to ensure fault-tolerance. The classical results from these 2-dimensional parity checks are then combined to calculate the parity of the higher dimensional stabilizers of the subsystem code.

A second benefit of utilizing subsystem codes is the ability to construct fault-tolerant circuits to perform dynamical code switching. When using more traditional error correction codes it is difficult, if not impossible, to fault-tolerantly switch between codes with different error correcting properties. The Steane [[7,1,3]] code is a single error correcting code for both $X$ and $Z$ channels. If, during the operation of a quantum computer, the user wished to increase the error correcting power of their code to two errors in the $X$ and $Z$ channels, they would first have to decode the quantum data and then re-encode with the higher distance code. This is clearly a non fault-tolerant procedure as any error occurring on the decoded information will cause catastrophic failure. Due to the general lattice structure of subsystem codes, switching to and from higher distance codes can be achieved without decoding and re-encoding information, allowing the user of the computer to dynamically adjust the error correction during the computation. Figs.~\ref{figure:switch1},~\ref{figure:switch2} and~\ref{figure:switch3}, taken from Ref.~\cite{SEDH07}, illustrate circuits to perform fault-tolerant switching between the $\mathcal{C}$(3,3) and $\mathcal{C}$(5,3) subsystem codes. As noted before, the $\mathcal{C}$(3,3) code is a single $X$, single $Z$ error correcting code while the $\mathcal{C}$(5,3) code is a single $X$, two $Z$ error correcting code. We will not detail why these circuits successfully implement fault-tolerant code switching; instead we encourage readers to refer to Ref.~\cite{SEDH07} for further details.

\begin{figure}[ht!]
\[ \Qcircuit @C=0.4em @R=0.36em @!R {
& \qw & \qw & \qw & \qw & \qw & \qw & \qw \\
& \qw & \qw & \qw & \qw & \targ & \qw & \qw \\
& \qw & \qw & \targ & \qw & \qw & \qw & \qw \\
\push{\vert {0} \rangle \hspace{0.1cm}} & \qw & \qw & \qw & \targ & \qw & \qw & \qw \\
\push{\vert {0} \rangle \hspace{0.1cm}} & \qw & \targ & \qw & \qw & \qw & \qw & \qw \\
\push{\vert {0} \rangle \hspace{0.1cm}} & \gate{H} & \qw & \qw & \ctrl{-2} & \ctrl{-4} & \gate{H} & \meter{} & & \\
\push{\vert {0} \rangle \hspace{0.1cm}} & \gate{H} & \ctrl{-2} & \ctrl{-4} & \qw & \qw & \gate{H} & \meter{} & & \\
\put(-0.4,244.5){\footnotesize{$1,j$}} \put(-0.4,228.0){\footnotesize{$2,j$}} \put(-0.4,211.5){\footnotesize{$3,j$}} \put(87,244.5){\footnotesize{$1,j$}} \put(87,228.0){\footnotesize{$2,j$}} \put(87,211.5){\footnotesize{$3,j$}} \put(87,195.0){\footnotesize{$4,j$}} \put(87,178.0){\footnotesize{$5,j$}} }\]
\vspace{-17pt}
\caption{(From Ref.~\cite{SEDH07}). Circuit to convert from the $\mathcal{C}$($3$,$3$) subsystem code to the $\mathcal{C}$($5$,$3$) code for one column, $j$, of the lattice structure of the code.}
\label{figure:switch1}
\end{figure}

\begin{figure}[ht!]
\[ \Qcircuit @C=0.4em @R=0.36em @!R {
& \qw & \qw & / \qw & \qw & \qw & \qw & \qw & \qw & \gate{X} & \gate{X} & \gate{X^{\otimes3}} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\
& \qw & \qw & / \qw & \qw & \qw & \qw & \qw & \qw & \qw \cwx & \qw \cwx & \qw \cwx & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\
& \qw & \qw & / \qw & \qw & \qw & \qw & \qw & \qw & \qw \cwx & \qw \cwx & \qw \cwx & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\
& \qw & \qw & / \qw & \qw & \qw & \qw & \qw & \gate{\mathcal{P}} & \qw \cwx & \gate{X} \cwx & \meter{} \cwx & \\
& \qw & \qw & / \qw & \qw & \qw & \qw & \gate{\mathcal{P}} & \qw & \gate{X} \cwx & \qw \cwx & \meter{} \cwx[-1] & \\
\push{\vert {0^{\otimes3}} \rangle \hspace{0.1cm}} & \qw & \qw & / \qw & \qw & \qw & \qw & \qw & \gate{}\qwx[-2] & \qw \cwx & \meter{} \cwx[-1] & \\
\push{\vert {0^{\otimes3}} \rangle \hspace{0.1cm}} & \qw & \qw & / \qw & \qw & \qw & \qw & \gate{} \qwx[-2] & \qw & \meter{} \cwx[-1] & & \\
\put(-0.4,247.5){\footnotesize{$i=1$}} \put(-0.4,231.0){\footnotesize{$i=2$}} \put(-0.4,214.5){\footnotesize{$i=3$}} \put(-0.4,198.0){\footnotesize{$i=4$}} \put(-0.4,181.0){\footnotesize{$i=5$}} \put(154,247.5){\footnotesize{$i=1$}} \put(154,231.0){\footnotesize{$i=2$}} \put(154,214.5){\footnotesize{$i=3$}} }\]
\vspace{-17pt}
\caption{(From Ref.~\cite{SEDH07}). Downconversion from the $\mathcal{C}$($5$,$3$) code to the $\mathcal{C}$($3$,$3$) code. $\mathcal{P}$ is the gate sequence in Fig.~\ref{figure:switch3}.}
\label{figure:switch2}
\end{figure}

\begin{figure}[ht!]
\[ \Qcircuit @C=0.4em @R=0.36em @!R {
& \qw & \qw & \qw & \qw & \qw & \ctrl{4} & \ctrl{3} & \qw & \qw & \\
& \qw & \qw & \qw & \ctrl{4} & \ctrl{3} & \qw & \qw & \qw & \qw & \\
& \qw & \qw & \ctrl{3} & \qw & \qw & \qw & \qw & \ctrl{1} & \qw & \\
\push{\vert 0 \rangle \hspace{0.1cm}} & \qw & \qw & \qw & \qw & \qw & \qw & \targ & \targ & \meter{} \\
\push{\vert 0 \rangle \hspace{0.1cm}} & \qw & \qw & \qw & \qw & \targ & \targ & \qw & \qw & \meter{} \\
\push{\vert 0 \rangle \hspace{0.1cm}} & \qw & \qw & \targ & \targ & \qw & \qw & \qw & \qw & \meter{} \\
\put(-0.4,211.5){\footnotesize{$i,1$}} \put(-0.4,195.0){\footnotesize{$i,2$}} \put(-0.4,178.5){\footnotesize{$i,3$}} }\]
\vspace{-17pt}
\caption{(From Ref.~\cite{SEDH07}) $X$ parity measurement under $\mathcal{C}$($5$,$3$) for one row, $i$, of the lattice structure.}
\label{figure:switch3}
\end{figure}

\subsection{Topological Codes}

A coding technique similar to the Bacon-Shor subsystem codes is topological error correction, first introduced with the Toric code of Kitaev in 1997~\cite{K97}. Topological coding is similar to subsystem codes in that the code structure is defined on a lattice (which, in general, can be of dimension $> 2$) and the scaling of the code to correct more errors is conceptually straightforward. However, in topological coding schemes the protection afforded to logical information relies on the unlikely application of error chains which define non-trivial topological paths over the code surface.

Topological error correction is a complicated area of quantum error correction and fault-tolerance and any attempt to fairly summarize the field is not possible within this review. In brief, there are two ways of approaching the problem. The first is simply to treat topological codes as a class of stabilizer codes over a qubit system. This approach is more amenable to current information technologies and is being adapted to methods in cluster state computing~\cite{RHG07,FG08}, optics~\cite{DFSG08,DMN08}, ion-traps~\cite{SJ08} and superconducting systems~\cite{IFIITB02}. The second approach is to construct a physical Hamiltonian model based on the structure of the topological code. This leads to the more complicated field of anyonic quantum computation~\cite{K97}. By translating a coding structure into a physical Hamiltonian system, excitations from the ground state of this Hamiltonian exhibit natural robustness against local errors (since the symmetries of the physical Hamiltonian reflect the coding structure imposed). Specifically, quasi-particles arising from a Hamiltonian approach to quantum codes exhibit fractional quantum statistics (they acquire fractional phase shifts when their positions are exchanged twice with other anyons, in contrast to bosons or fermions which always acquire $\pm 1$ phase shifts). The unique properties of anyons therefore allow for natural, robust error protection, and anyon/anyon interactions are performed by rotating anyons around each other. However, the major issue with this model is that it relies on quasi-particle excitations that do not, in general, arise naturally. Although certain physical systems have been shown to exhibit anyonic excitations, most notably in the fractional quantum Hall effect~\cite{NSSFS08}, the ability to manufacture a reliable anyonic system, in addition to reliably designing and constructing a large scale computing system based on anyons, remains a daunting task.
As there are several extremely good discussions of both anyonic~\cite{NSSFS08} and non-anyonic topological computing~\cite{DKLP02,FSG08,FG08}, we will not review any of the anyonic methods for topological computing and simply provide a brief example of one topological coding scheme, namely the surface code~\cite{BK01, DKLP02,FSG08}.

The surface code is an extremely good quantum error correction model for several reasons. As it is defined over a 2-dimensional lattice of qubits, it can be implemented on architectures that only allow for the coupling of nearest neighbor qubits (rather than the arbitrary long distance coupling of qubits in separate regions of the computer). The surface code also exhibits one of the highest fault-tolerant thresholds of any quantum error correction scheme; recent simulations estimate a threshold approaching 1\%~\cite{RHG07}. Finally, the surface code can naturally correct problematic error channels such as qubit loss and leakage.

The surface code, as with subsystem codes, is a stabilizer code defined over a 2-dimensional qubit lattice, as Fig.~\ref{fig:surface1} illustrates. Each edge of the 2D lattice is identified with a physical qubit. The stabilizer set consists of two types of operators: the first is the set of $Z^{\otimes 4}$ operators which circle every lattice face (or plaquette); the second is the set of $X^{\otimes 4}$ operators which encircle every vertex of the lattice. The stabilizer set is consequently generated by the operators,
\begin{equation}
A_p = \bigotimes_{j \in b(p)} Z_j, \quad B_v = \bigotimes_{j \in s(v)} X_j
\end{equation}
where $b(p)$ denotes the four qubits surrounding a plaquette, $s(v)$ the four qubits surrounding a vertex of the lattice, and identity operators on the other qubits are implied. First note that all of these operators commute, as any plaquette and vertex stabilizer will share either zero or two qubits. If the lattice is not periodic in either dimension, this stabilizer set completely specifies one unique state, i.e. for an $N\times N$ lattice there are $2N^2$ qubits and $2N^2$ stabilizer generators. Hence this stabilizer set defines a unique multi-qubit entangled state which is generally referred to as a ``clean'' surface.

\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.4\textwidth]{surface1.pdf}
\caption{General structure of the surface code. The edges of the lattice correspond to physical qubits. The four qubits surrounding each face (or plaquette) are +1 eigenstates of the operators $A_p$ while the four qubits surrounding each vertex are +1 eigenstates of the operators $B_v$. If all eigenstate conditions are met, a unique multi-qubit state is defined as a ``clean'' surface.}
\label{fig:surface1}
\end{center}
\end{figure}

\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=0.8\textwidth]{surface2.pdf}
\caption{The surface code embeds two self-similar lattices that are interlaced, generally referred to as the primal and dual lattice. Fig. a. illustrates one lattice where plaquettes are defined with the stabilizers $A_p$. Fig. b. illustrates the dual structure where plaquettes are now defined by the stabilizer set $B_v$. The two lattice structures are interlaced and are related by shifting along the diagonal by half a lattice cell. Each of these equivalent lattices is independently responsible for $X$ and $Z$ error correction.}
\label{fig:surface2}
\end{center}
\end{figure*}

\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=\textwidth]{surface3.pdf}
\caption{Examples of error chains and their effect on the eigenvalues of each plaquette stabilizer. a). A single $X$ error causes the parity of two adjacent cells to flip. b) and c). Longer chains of errors only cause the end cells to flip eigenvalue, as each intermediate cell will have two $X$ errors and hence the eigenvalue of the stabilizer will flip twice.}
\label{fig:surface3}
\end{center}
\end{figure*}

Detailing exactly how this surface can be utilized to perform robust quantum computation is far outside the scope of this review, and we refer the reader to several papers for such a discussion~\cite{RH07,RHG07,FSG08,FG08}. Instead, we can quite adequately show how robust error correction is possible by simply examining how a ``clean'' surface can be maintained in the presence of errors.

The $Z$ and $X$ stabilizer sets, $A_p$ and $B_v$, define two equivalent 2D lattices which are interlaced, as Fig.~\ref{fig:surface2} illustrates. If the total 2D lattice is shifted along the diagonal by half a cell then the operators $B_v$ are now arranged around a plaquette and the operators $A_p$ are arranged around a lattice vertex. Since protection against $X$ errors is achieved by detecting eigenvalue flips of $Z$ stabilizers and vice versa, these two interlaced lattices correspond to error correction against $X$ and $Z$ errors respectively. Therefore we can quite happily restrict our discussion to one possible error channel, for example correcting $X$ errors (since the correction for $Z$ errors proceeds identically when considering the stabilizers $B_v$ instead of $A_p$).

Fig.~\ref{fig:surface3}a. illustrates the effect that a single $X$ error has on a pair of adjacent plaquettes. Since $X$ and $Z$ anti-commute, a single bit-flip error on one qubit in the surface will flip the eigenvalue of the $Z^{\otimes 4}$ stabilizers on the two plaquettes adjacent to the respective qubit. As single qubit errors act to flip the eigenvalue of adjacent plaquette stabilizers, we examine how chains of errors affect the surface. Figs.~\ref{fig:surface3}b. and~\ref{fig:surface3}c. examine two longer chains of errors. As you can see, if multiple errors occur, only the eigenvalues of the stabilizers associated with the ends of the error chains flip. Each plaquette along the chain will always have two $X$ errors occurring on different boundaries and consequently the eigenvalue of the $Z^{\otimes 4}$ stabilizer around these plaquettes will flip twice.

If we now consider an additional ancilla qubit which sits in the center of each plaquette and can couple to the four surrounding qubits, we can check the parity by running the simple parity circuit shown in Fig.~\ref{fig:surface4}. If we assume that we initially prepare a perfect ``clean'' surface, we then, at some later time, check the parity of every plaquette over the surface.

\begin{figure}[bt]
\begin{center}
\includegraphics[width=0.3\textwidth]{surface4.pdf}
\caption{a). Lattice structure to check the parity of a surface plaquette. An additional ancilla qubit is coupled to the four neighboring qubits that comprise each plaquette. b). Quantum circuit to check the parity of the $Z^{\otimes 4}$ stabilizer for each surface plaquette.}
\label{fig:surface4}
\end{center}
\end{figure}

If $X$ errors have occurred on a certain subset of qubits, the parity associated with the endpoints of error chains will have flipped.
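The following self-contained Python sketch (our own illustration, using a periodic lattice to keep the indexing simple) builds the supports of the plaquette and vertex stabilizers, confirms that they overlap on zero or two qubits, and shows that a connected chain of $X$ errors flips only the plaquette eigenvalues at its endpoints, while a chain wrapping all the way around the lattice produces no syndrome at all:

\begin{verbatim}
from itertools import product

L = 5   # L x L lattice with periodic boundaries; edges carry the qubits

def h(i, j):   # horizontal edge from vertex (i, j) to (i, j+1)
    return ('h', i % L, j % L)

def v(i, j):   # vertical edge from vertex (i, j) to (i+1, j)
    return ('v', i % L, j % L)

def plaquette(i, j):   # the four edges bounding a face: support of A_p
    return {h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)}

def star(i, j):        # the four edges meeting a vertex: support of B_v
    return {h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)}

# Any A_p and B_v share 0 or 2 qubits, hence all stabilizers commute.
print(all(len(plaquette(i, j) & star(k, l)) in (0, 2)
          for i, j, k, l in product(range(L), repeat=4)))

def syndrome(x_errors):
    # Plaquettes whose Z^{x4} eigenvalue flips (odd overlap with the errors).
    return sorted((i, j) for i, j in product(range(L), repeat=2)
                  if len(plaquette(i, j) & x_errors) % 2 == 1)

# A chain of three X errors crossing plaquettes (2,0) -> (2,3):
chain = {v(2, 1), v(2, 2), v(2, 3)}
print(syndrome(chain))        # [(2, 0), (2, 3)]: only the endpoints flip

# A chain wrapping around the whole lattice leaves no syndrome:
loop = {v(2, j) for j in range(L)}
print(syndrome(loop))         # []: an undetectable (logical) error chain
\end{verbatim}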
We now take this 2-dimensional {\em classical} data set of eigenvalue flips and pair the flipped plaquettes up into the most likely set of error chains. Since it is assumed that the probability of error on any individual qubit is low, the most likely set of errors which reflects the eigenvalue changes observed is the minimum weight set (i.e. connect up all plaquettes where eigenvalues have changed into pairs such that the total length of all connections is minimized). This kind of classical data processing is quite common in computer science, and minimum weight matching algorithms such as the Blossom package~\cite{CR99,K08} have a running time polynomial in the total number of data points in the classical set. Once this minimal matching is achieved, we can identify the likely error chains corresponding to the end points and correction can be applied accordingly.

The failure of this code is therefore dictated by error chains that cannot be detected through changes in plaquette eigenvalues. In Fig.~\ref{fig:surface5} we consider an error chain that connects one edge of the surface lattice to another. In this case every plaquette has two associated qubits that have experienced a bit flip and no eigenvalues in the surface have changed. Since we have assumed that we only wish to maintain a ``clean'' surface, these error chains have no effect, but when one considers the case of storing information in the lattice, these types of error chains correspond to logical errors on the qubit~\cite{FSG08}. Hence undetectable errors are chains which connect boundaries of the surface to other boundaries (in the case of information processing, qubits are artificial boundaries within the larger lattice surface).

\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.45\textwidth]{surface5.pdf}
\caption{Example of a chain of errors which does not cause any eigenvalue changes in the surface. If errors connect boundaries to other boundaries, the error correction protocol will not detect them. In the case of a ``clean'' surface, these error chains are invariants of the surface code. When computation is considered, qubit information is stored at artificial boundaries within the surface. Hence if error chains connect these information qubits to other boundaries, logical errors occur.}
\label{fig:surface5}
\end{center}
\end{figure}

It should be stressed that this is a simplified description of the full protocol, but it does encapsulate the basic idea. The important thing to realize is that the failure rate of the error correction procedure is suppressed exponentially with the size of the lattice. In order for a series of single qubit errors to be undetectable, they must form a chain connecting one boundary in the surface with another. If we consider an error model where each qubit experiences a bit flip, independently, with probability $p$, then an error chain of weight one occurs with probability $p$, error chains of weight two occur with probability $O(p^2)$, chains of weight three with probability $O(p^3)$, etc. If we have an $N \times N$ lattice and we extend the surface by {\em one} plaquette in each dimension, then the probability of having an error chain connecting two boundaries will drop by a factor of $p^2$ (two extra qubits, one at each boundary, have to experience an error). Extending an $N\times N$ lattice by one plaquette in each dimension requires $O(N)$ extra qubits, hence this type of error correcting code suppresses the probability of having undetectable errors exponentially with a qubit resource cost which grows linearly.
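For small syndromes, the minimum weight pairing described above can even be found by brute force, as in the following sketch (our own illustration; the list of flipped plaquettes and the planar Manhattan metric are hypothetical stand-ins, and real decoders use polynomial-time algorithms such as Blossom rather than this exponential search):

\begin{verbatim}
# Pair up flipped plaquettes so that the summed distance is minimal.
flips = [(0, 0), (0, 3), (4, 1), (3, 3)]   # hypothetical syndrome

def dist(a, b):                             # simple Manhattan metric
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def min_weight_pairing(points):
    # Exhaustive search: pair the first point with every candidate partner
    # and recurse on the rest.  Fine for toy sizes, exponential in general.
    if not points:
        return [], 0
    first, rest = points[0], points[1:]
    best, best_cost = None, float('inf')
    for k, partner in enumerate(rest):
        sub, cost = min_weight_pairing(rest[:k] + rest[k + 1:])
        cost += dist(first, partner)
        if cost < best_cost:
            best, best_cost = [(first, partner)] + sub, cost
    return best, best_cost

pairing, cost = min_weight_pairing(flips)
print(pairing, cost)   # [((0,0),(0,3)), ((4,1),(3,3))] with total weight 6
\end{verbatim}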
As we showed in Section~\ref{sec:Fault-tolerance}, standard concatenated coding techniques allow for an error rate suppression which scales with the concatenation level as a double exponential, while the resource increase scales exponentially. For the surface code, the error rate suppression scales exponentially while the resource increase scales linearly. While these scaling relations might be mathematically equivalent, the surface code offers much more flexibility at the architectural level. Being able to increase the error protection in the computer with only a linear change in the number of physical qubits is far more beneficial than the exponential increase in resources required by concatenated correction. Specifically, consider the case where an error protected computer is operating at a logical error rate which is just above what is required for an algorithm. If concatenated error correction is employed, then adding another layer of correction will not only increase the number of qubits by an exponential amount, but it will also drop the effective logical error rate far below what is actually required. In contrast, if surface codes are employed, we increase the qubit resources by a linear factor and drop the logical error rate sufficiently for successful application of the algorithm.

We now leave the discussion regarding topological correction models. We emphasize again that this was a {\em very} broad overview of the general concept of topological codes. There are many details and subtleties that we have deliberately left out of this discussion and we urge the reader, if they are interested, to refer to the referenced articles for a much more thorough treatment of this topic.

\section{Conclusions and future outlook}

This review has hopefully provided a basic introduction to some of the most important theoretical aspects of quantum error correction and fault-tolerant quantum computation. The ultimate goal of this discussion was not to provide a rigorous theoretical framework for QEC and fault-tolerance, but instead to illustrate most of the important rules, results and techniques that have evolved out of this field. We not only covered the basic aspects of QEC through specific examples, but also briefly discussed how physical errors influence quantum computation and how these processes are interpreted within the context of QEC.

One of the more important aspects of this review is the discussion related to the stabilizer formalism, circuit synthesis and fault-tolerant circuit construction. The stabilizer formalism is arguably the most useful theoretical tool in QEC as, once it is sufficiently understood, most of the important properties of error correcting codes can be investigated and understood largely by inspection.

The study of quantum error correction and fault-tolerance is still an active area of QIP research. Although the library of quantum codes and error correction techniques is vast, there is still a significant disconnect between the abstract framework of quantum coding and the more physically realistic implementation of error correction for large scale quantum information. There are several future possibilities for the direction of quantum information processing. Even with the development of many of these advanced techniques, the physical construction and accuracy of current qubit fabrication is still insufficient to obtain any benefit from QEC.
Many in the field now acknowledge that the future development of quantum computation will most likely split into two broad categories. The first is arguably the more physically realistic, namely few qubit applications in quantum simulation. Quantum simulation, i.e. using quantum systems to efficiently simulate other quantum systems, was proposed by Richard Feynman in the early 1980s~\cite{F81} and was one of the primary motivations for the development of the field. In the ideal case, it is argued that having access to a quantum computer with on the order of 10-100 physical qubits could allow for the simulation of physical systems large enough to be impractical for current classical computers. If we limit our quantum array to the 100 qubit level, then even implementing active error correction techniques would not be desirable. Instead, higher quality fabrication and control, as well as techniques in error avoidance (which require far fewer resources than error correction), would be used in order to lower effective error rates below what is required to run few qubit applications.

Beyond few qubit quantum simulation we move to truly large scale quantum computation, i.e. implementing large algorithms such as Shor's algorithm on qubit arrays well beyond 1000 physical qubits. This would undoubtedly require active techniques in error correction. Future work needs to focus on adapting the many codes and fault-tolerant techniques to the architectural level. As we noted in section~\ref{sec:threshold}, the implementation of QEC at the design level largely influences the fault-tolerant threshold exhibited by the code itself. Being able to efficiently incorporate both the actual quantum code and the error correction procedures at the physical level is extremely important when developing an experimentally viable, large scale quantum computer.

There are many differing opinions within the quantum computing community as to the future prospects for quantum information processing. Many remain pessimistic regarding the development of a million qubit device and instead look towards quantum simulation in the absence of active error correction as the realistic goal of quantum information. However, in the past few years, the theoretical advances in error correction and the fantastic speed of experimental development of few qubit devices continue to offer hope for the near-term construction of a large scale device, incorporating many of the ideas presented within this review. While we could never foresee the possible successes or failures in quantum information science, we remain hopeful that a large scale quantum computer is still a goal worth pursuing.

\section{Acknowledgments}

The authors wish to thank A. M. Stephens, R. Van Meter, A.G. Fowler, L.C.L. Hollenberg and A. D. Greentree for helpful comments and acknowledge the support of MEXT, JST and the EU project QAP.

\bibliographystyle{alpha}
Biography

A graduate in humanities, she taught humanities subjects in secondary schools. In the 1970s she joined the feminist movement and was a member of the Unione donne in Italia. A member of the Communist Party, and a writer and journalist, she wrote in the 1980s for the magazine Rinascita. In those years she stood as a candidate and was elected to the IX legislature.

Works

Alle donne - Canzoniere popolare, 1977
Le foglie della Sibilla, Como, Dominioni editore, 1988
Né moglie né madre, Como, Dominioni editore, 1990
Alda, Ginevra, Lydia partigiane, Como, Filò editore, 1998
Rivisitando la vita di Giuseppe Parini, Rimini, Luisè editore, 1999
Il nido della poiana, Lipomo, Nani editore, 2003
Separati di letto e di mensa: 1865-1928, Como, Elpo editore, 2017
12.1: Introduction to Partial Differential Equations

Many important equations in physical chemistry, engineering, and physics describe how some physical quantity, such as a temperature or a concentration, varies with position and time. This means that one or more spatial variables and time serve as independent variables. For example, let's consider the concentration of a chemical around the point $(x,y,z)$ at time $t$: $C = C(x, y, z, t)$. The differential equation that describes how $C$ changes with time is

$$\nabla^2 C(x,y,z,t)=\frac{1}{D}\frac{\partial C(x,y,z,t)}{\partial t}$$

where $\nabla^2$ is an operator known as the Laplacian. In Cartesian three-dimensional coordinates:

$$\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}$$

The constant $D$ is the diffusion coefficient, and determines how far molecules move on average in a given period of time. The diffusion coefficient depends on the size and shape of the molecule, and the temperature and viscosity of the solvent.

The diffusion equation above is a partial differential equation because the dependent variable, $C$, depends on more than one independent variable, and therefore its partial derivatives appear in the equation.

Other important equations that are common in the physical sciences are:

The heat equation:

$$\nabla^2 T(x,y,z,t)=\frac{1}{\alpha}\frac{\partial T(x,y,z,t)}{\partial t}$$

which is in a way a diffusion equation, except that the quantity that diffuses is heat, and not matter. This translates into a change in temperature instead of a change in concentration.

The wave equation:

$$\nabla^2 u(x,y,z,t)=\frac{1}{v^2}\frac{\partial^2 u(x,y,z,t)}{\partial t^2}$$

which describes the displacement of all points of a vibrating thin string. Here, $v$ has units of speed, and it's related to the elasticity of the string.

The time-independent Schrödinger equation:

$$-\frac{\hbar^2}{2m}\nabla^2\psi(x,y,z)+V\psi(x,y,z)=E\psi(x,y,z)$$

which we have already introduced in previous chapters. Note that the Schrödinger equation becomes an ordinary differential equation for one-dimensional problems (e.g. the one-dimensional particle in a box), but it is a PDE for systems where particles are allowed to move in two or more dimensions.

In this course, we will introduce the simplest examples of PDEs relevant to physical chemistry. As you will see, solving these equations analytically is rather complex, and the solutions depend greatly on initial and boundary conditions. Because solving these equations is time consuming, in your upper-level physical chemistry courses your teacher will often show you the solutions without going through the whole derivation. Yet, it is important that you go through all the work at least once for the simplest cases, so you know what is involved in solving the PDEs you will see in the future.
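As a quick sanity check of the diffusion equation in one spatial dimension, we can verify symbolically that the Gaussian function $C(x,t) = (4\pi D t)^{-1/2} e^{-x^2/4Dt}$ satisfies $\partial C/\partial t = D\,\partial^2 C/\partial x^2$. The short illustration below uses the SymPy library and is our own addition, not part of the original text:

```python
import sympy as sp

x, t, D = sp.symbols('x t D', positive=True)

# Fundamental (Gaussian) solution of the 1-D diffusion equation
C = sp.exp(-x**2 / (4 * D * t)) / sp.sqrt(4 * sp.pi * D * t)

residual = sp.diff(C, t) - D * sp.diff(C, x, 2)
print(sp.simplify(residual))   # prints 0: C solves the diffusion equation
```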
Ronald Moodnick, better known as Ron Moody, was a British actor, born in Tottenham, a district of London; he died in London.

Biography

Born in Tottenham, north London, into a Jewish family, he worked in a variety of genres, but he is surely best known for his role as Fagin in the musical Oliver!, a British film directed by Carol Reed, released in 1968 and based on the novel Oliver Twist by Charles Dickens. He created the role on a West End stage in London, reprised it on Broadway, and then in the 1968 film version, for which he was nominated for the Academy Award for Best Actor.

He appeared in several children's television series, including The Animals of Farthing Wood, Noah's Island, Telebugs, Into the Labyrinth and Midnight Is a Place. He was twice considered to take over the role of the Doctor in the series Doctor Who, but turned down an offer in 1969 after Patrick Troughton's departure from the series. He later said he regretted that decision. He also played Edwin Caldecott, an old enemy of Jim Branning, in EastEnders.

Ron Moody married Therese Blackbourn in 1985. They had six children, the youngest of whom was born when Moody was in his early seventies. He was also a cousin of the director Laurence Moody. He died in June 2015.

Filmography

1959: Follow a Star: the violinist
1963: The Mouse on the Moon: Prime Minister Rupert Mountjoy
1964: Murder Most Foul: Driffold Cosgood
1966: "Honey for the Prince", episode 26 of season 4 of the TV series The Avengers: Hopkirk
1968: Oliver!: Fagin
1970: The Twelve Chairs: Hippolyte Vorobyaninov
1975: Legend of the Werewolf: the zookeeper
1977: The Strange Case of the End of Civilization as We Know It: Gropinger
1979: The Spaceman and King Arthur: Merlin
1980: Nobody's Perfect (series of 8 episodes): Inspector Roger Hart
1982: Wrong Is Right: King Awad
1983: Where Is Parsifal?: Beersbohm
1989: Asterix and the Big Fight: voice of Prolix in the English version
1995: A Kid in King Arthur's Court: Merlin
2001: Revelation: Sir Isaac Newton
\section{Introduction}

Games on graphs are a very natural concept and so it is no wonder that this field has emerged jointly with graph theory as a whole. For finite boards one often considers strong games, i.e. games in which two players alternately colour edges of a finite graph $G$, each aiming to be the first player to have some previously agreed upon graph contained as a subgraph of the graph induced by their own coloured edges. Another important kind of game is the so-called ``Maker-Breaker game''. A typical setup for such games on (potentially infinite) graphs is the following: at each turn, Maker claims an edge of $ G $ (not previously claimed by either player) after which Breaker claims an unclaimed edge. There is either a fixed number of turns or they play until the whole edge set is distributed. The set of winning sets of Maker is public knowledge and usually has some combinatorial description. Classical games of this type are for example the Shannon switching game, in which Maker's goal is to connect two vertices by a path (see \cite{Lehman_1964}), and the game where Maker's goal is to build a spanning tree (see \cite{chvatal1978biased}) or more generally a base of a matroid (see \cite{lgorzata2005biased}). For recent results about Maker-Breaker games on infinite graphs we refer to \cite{NICHOLASDAY2021482} and \cite{bowler2020maker}. Some games (like the base-exchange game in \cite{aharoni1991bases}) can be phrased more naturally in the language of infinite matroids. It is worth mentioning that Maker-Breaker games have been investigated in even more abstract settings as well (see \cite{CN1979}).

For graphs $G$ and $H$, let $\mathfrak{MB}(G, H) $ denote the \emph{Maker-Breaker game} in which $G$ (more precisely its edge set) is the board; the turns are indexed by ordinals, and each turn begins with Maker claiming a previously unclaimed edge, after which Breaker does likewise. The game terminates when all the edges are claimed and Maker wins if and only if at the end of the game the subgraph $ G_M $ of $ G $ induced by the edges claimed by Maker contains a subgraph isomorphic to $ H $.

Let us recall that a graph $ G $ is an ordered pair $ (V,E) $ with $ E\subseteq [V]^{2} $, where $ V $ is called the vertex set and $ E $ is the edge set of $ G $. The complete graph on $ \kappa $ is $ K_\kappa:=(\kappa, [\kappa]^{2}) $. The complete bipartite graph with vertex classes of size $ \lambda $ and $ \kappa $ is denoted by $ K_{\lambda,\kappa} $; its vertex set is $ (\lambda \times \{ 0 \})\cup (\kappa\times \{ 1 \}) $ and its edge set is $ \{ \{(\alpha,0), (\beta, 1)\}:\ \alpha<\lambda, \beta<\kappa \} $. Note that the vertex sets of these graphs are already well-ordered, and so we generally do not need to invoke the axiom of choice.

It was shown in \cite{bowler2020maker} that Maker has a winning strategy in the game $ \mathfrak{MB}(K_{\omega}, K_{\omega}) $. In this note we analyse similar games on uncountable graphs. Note that each outcome of the game defines a 2-colouring of $E(G)$. This suggests a possible connection to Ramsey type problems, although in the current context the colourings in question are not arbitrary but are produced by players with particular goals in mind.
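For finite boards the games $\mathfrak{MB}(G,H)$ can in principle be solved mechanically by exhausting the game tree. The following short Python sketch (our own illustration, not used in any of the proofs below) does this for the toy game $\mathfrak{MB}(K_5, K_3)$, where, in accordance with the definition above, the winner is determined once all edges are claimed; it confirms the classical fact that Maker can build a triangle on five vertices:

\begin{verbatim}
from functools import lru_cache
from itertools import combinations

VERTICES = range(5)
EDGES = tuple(combinations(VERTICES, 2))   # the board: edges of K_5

def has_triangle(edges):
    es = set(edges)
    return any((a, b) in es and (b, c) in es and (a, c) in es
               for a, b, c in combinations(VERTICES, 3))

@lru_cache(maxsize=None)
def maker_wins(maker, breaker):
    free = [e for e in EDGES if e not in maker and e not in breaker]
    if not free:                               # board exhausted
        return has_triangle(maker)
    if len(maker) > len(breaker):              # Breaker to move
        return all(maker_wins(maker, tuple(sorted(breaker + (e,))))
                   for e in free)
    return any(maker_wins(tuple(sorted(maker + (e,))), breaker)  # Maker moves
               for e in free)

print(maker_wins((), ()))   # True: Maker has a winning strategy
\end{verbatim}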
In ZFC there are colourings of the edges of a $K_{\omega_1}$ with two colours without any monochromatic $K_{\omega_1}$ (see~\cite{sierpinski1933}), but if instead of the axiom of choice one assumes DC+AD, then there is always a monochromatic $K_{\omega_1}$ because $\omega_1$ becomes measurable (see \cite{kanamori2008higher}*{Theorem 28.2}) and hence weakly compact\footnote{We write CH, GCH, DC, AD and $ \mathfrak{p} $ for the continuum hypothesis, generalised continuum hypothesis, axiom of dependent choice, axiom of determinacy and the pseudo-intersection number respectively.}. The existence of a monochromatic $K_{\omega,\omega_1}$ when colouring the edges of a $K_{\omega,\omega_1}$ with two colours is even more dependent on the set-theoretic framework. While there is a colouring without a monochromatic copy in ZFC+CH, there is no such colouring in ZFC+$\omega_1<\mathfrak{p} $. Since we could not find these particular statements formulated anywhere in the literature on infinite Ramsey theory, for the sake of completeness we include them here as Corollaries \ref{cor: no monochrom} and \ref{cor: monochrom}. These Ramsey-type results compare well to the corresponding results about the existence of a winning strategy for either player. Our main results are as follows:

\begin{thm}\label{t: main thm bip} It is independent of ZFC whether Breaker has a winning strategy in the game $\mathfrak{MB}(K_{\omega, \omega_1}, K_{\omega,\omega_1}) $. He has one under ZFC+GCH\footnote{A closer analysis shows that only CH is needed here, but we have chosen a simpler exposition over optimality of the results, since the independence is our main concern.}, while Maker has one under ZFC$+\omega_1<\mathfrak{p}$. \end{thm}

\begin{thm}\label{t: Ramsey bipart} It is independent of ZFC whether every $ 2 $-colouring of the edges of $ K_{\omega, \omega_1} $ admits a monochromatic copy of $ K_{\omega, \omega_1} $. It is true in ZFC$+\omega_1<\mathfrak{p} $ but fails under ZFC+CH. \end{thm}

\begin{thm}\label{t: main thm} Assuming the consistency of AD, it is independent of ZF+DC whether Breaker has winning strategies in the games $\mathfrak{MB}(K_{\omega_n}, K_{\omega_n}) $ for $ n\in \{ 1,2 \} $. He has such winning strategies under ZFC+GCH, while Maker has winning strategies in these games under ZF+DC+AD. \end{thm}

Let $\mathfrak{MB}(K_{\kappa}, K_{\mathsf{club}}) $ be the game in which Maker's goal is to build a ``$K_{\mathsf{club}}$'', i.e. a complete graph whose vertex set is a closed unbounded subset of $ \kappa $.

\begin{thm}\label{t: main thm club} Assuming the consistency of AD, it is independent of ZF+DC whether Breaker has a winning strategy in the game $\mathfrak{MB}(K_{\omega_1}, K_{\mathsf{club}}) $. \end{thm}

Our results raise the following natural questions:

\begin{quest} Is it consistent with ZFC that neither Maker nor Breaker has a winning strategy in the game $\mathfrak{MB}(K_{\omega, \omega_1}, K_{\omega,\omega_1}) $? \end{quest}

\begin{quest} Does Breaker have a winning strategy in $\mathfrak{MB}(K_{\omega_1}, K_{\omega_1})$ under ZFC? \end{quest}

\begin{quest} Does Maker have a winning strategy in $\mathfrak{MB}(K_{\omega_1}, K_{\mathsf{club}}) $ under ZF+DC+AD? \end{quest}

\textbf{Acknowledgements:} The authors are grateful to Stefan Geschke, Zoltán Vidnyánszky and Daniel Hathaway for the insightful discussions about the Axiom of determinacy.
\section{The winning strategies of Breaker under GCH}

\begin{prop}[ZFC+GCH]\label{p: Breaker win} For every infinite cardinal $ \kappa $, Breaker has a winning strategy in the game $\mathfrak{MB}(K_{\kappa^{+}}, K_{\kappa, \kappa^{+}}) $. \end{prop}

\begin{proof} Let us assume that $ K_{\kappa^{+}} $ is represented as the complete graph on the vertex set $\kappa^+$. Working under GCH, we fix an enumeration $ \{ A_\alpha: \alpha<\kappa^{+} \} $ of $ [\kappa^{+}]^{\kappa} $ and, for each $\alpha<\kappa^{+}$, we pick a surjective function $ f_\alpha: \kappa \rightarrow \{ A_\beta: \beta \leq\alpha \}$. Whenever Maker plays an edge $ \{ \beta, \alpha \} $ with $ \beta < \alpha $ and there is a $ \gamma<\kappa $ such that this is the $(\gamma+1) $st downwards edge from $ \alpha $ she claims, Breaker chooses the smallest $\delta\in f_\alpha(\gamma) $ for which $ \{ \delta, \alpha \} $ is available, and plays $ \{ \delta, \alpha \} $ if such a $ \delta $ exists; otherwise he plays arbitrarily. Suppose for a contradiction that Maker manages to build a $ K_{\kappa, \kappa^{+}} $ (despite Breaker playing as above) and let $ A $ be its smaller and $ B $ its bigger vertex class. Then there is an $ \alpha<\kappa^{+} $ with $A_{\alpha}=A$. Fix a $ \beta\in B $ with $ \beta > \max \{\alpha, \sup A\} $ and let $ \gamma < \kappa $ with $ f_\beta(\gamma)=A $. At the turn when Maker claims a downwards edge from $ \beta $ for the $ (\gamma+1) $st time, there are still $ \kappa $ many $ \delta\in A $ for which $ \{ \delta, \beta \} $ is available, thus Breaker's next play is $ \{ \delta, \beta \} $ for the smallest such $ \delta $. This contradicts $ \{ \delta, \beta \}\in E(G_M) $. \end{proof}

The corresponding negative Ramsey result can be proved in a similar manner:

\begin{cor}[ZFC+GCH]\label{cor: no monochrom} For every infinite cardinal $ \kappa $, there exists a $ 2 $-colouring of the edge set of $ K_{\kappa, \kappa^{+}} $ without a monochromatic copy of $ K_{\kappa, \kappa^{+}} $. \end{cor}

\begin{proof} Let $ \{ v_\alpha: \alpha<\kappa^{+} \} $ be an enumeration of the larger vertex class and let $ \{ A_\alpha: \alpha<\kappa^{+} \} $ be an enumeration of $ [\kappa^{+}]^{\kappa} $. For each $ \alpha<\kappa^{+} $, we colour the edges incident with $ v_\alpha $ in such a way that for every $ \beta \leq \alpha $ both colours appear among the edges between $ v_\alpha $ and $ A_\beta $. This clearly ensures that no set $ A $ can be the smaller vertex class of a monochromatic copy of $ K_{\kappa, \kappa^{+}} $ and therefore no such a monochromatic copy exists. \end{proof}

\begin{obs}\label{o: monoton} If Breaker has a winning strategy in $ \mathfrak{MB}(G,H) $, then he also has one in every game $ \mathfrak{MB}(G',H') $ where $ G' $ is a subgraph of $ G $ and $ H' $ is a supergraph of $ H $. \end{obs}

Since $ K_{\kappa, \kappa^{+}} $ is a subgraph of $ K_{\kappa^{+}} $, Observation \ref{o: monoton} guarantees that Proposition \ref{p: Breaker win} has the following consequences:

\begin{cor}[ZFC+GCH]\label{cor: Breaker win} For every infinite cardinal $ \kappa $, Breaker has a winning strategy in the following games:
\begin{enumerate}
\item\label{i: Breaker win 1} $\mathfrak{MB}(K_{\kappa, \kappa^{+}}, K_{\kappa, \kappa^{+}}) $,
\item\label{i: Breaker win 2} $\mathfrak{MB}(K_{\kappa^{+}}, K_{\kappa^{+}}) $,
\item\label{i: Breaker win 3} $\mathfrak{MB}(K_{\kappa^{+}}, K_{\mathsf{club}})$.
\end{enumerate} \end{cor} \section{Winning strategies for Maker} During the course of play in $\mathfrak{MB}(G,H)$ we will refer to a vertex as {\em fresh} if no edge incident with that vertex has been claimed yet by either player. \subsection{A winning strategy for Maker in \texorpdfstring{$\mathfrak{MB}(K_{\omega, \omega_1}, K_{\omega, \omega_1})$}{MBKww1}} A set $\mathcal{F}$ of sets has the strong finite intersection property if the intersection of any finitely many elements of $\mathcal{F}$ is infinite. Given two sets $X$ and $Y$, write $X \subseteq^*Y$ if $X \setminus Y$ is finite. A \emph{pseudo-intersection} for a set $\mathcal{F}$ of sets is a set $P$ with $P \subseteq^* F$ for all $F \in \mathcal{F}$. The cardinal $\mathfrak{p}$ is the minimum cardinality of a set $\mathcal{F}$ of subsets of $\omega$ that has the strong finite intersection property but does not admit an infinite pseudo-intersection. Clearly $\aleph_0<\mathfrak{p} \leq 2^{\aleph_0}$ and it is known that $ \omega_1<\mathfrak{p} $ is consistent relative to ZFC (see \cite{kunen2011set}*{Lemma III.3.22 on p. 176}). \begin{prop}\label{prop: Maker win MA} Maker has a winning strategy in $\mathfrak{MB}(K_{\omega, \omega_1}, K_{\omega, \omega_1}) $ if $ \omega_1<\mathfrak{p} $. \end{prop} \begin{proof} Let $ U$ and $ V $ be the two sides of the bipartite graph $ K_{\omega, \omega_1} $, where $ \left|U\right|=\omega $ and $ \left|V\right|=\omega_1 $. We denote the subgraph of $ G $ induced by the edges Maker claimed before turn $ \alpha $ by $ G^{\alpha}_M $ and we write $ N_{G^{\alpha}_{M}}(v) $ for the set of the neighbours of $v$ in this graph. During the game Maker will choose a sequence $\langle v_{\alpha} \colon \alpha < \omega_1 \rangle$ of distinct vertices from $V$ and a sequence $\langle N_{\alpha} \colon \alpha < \omega_1\rangle$ of subsets of $U$ in such a way as to ensure that for every $\alpha < \omega_1$: \begin{enumerate} \item $N_{\alpha} \subseteq N_{G^{\omega \cdot(\alpha + 1)}_{M}}(v_{\alpha})$, \item the set $\{N_{\beta} \colon \beta \leq \alpha \}$ has the strong finite intersection property. \end{enumerate} Assume that turn $ \omega \cdot \alpha $ has just begun for some $\alpha<\omega_1$ and that Maker has constructed suitable $v_{\beta}$ and $N_{\beta}$ for all $\beta < \alpha$. She picks $ v_{\alpha}$ to be any fresh vertex in $V$; such a vertex exists because so far only countably many edges have been claimed, while $V$ is uncountable. Using (2) for all $\beta < \alpha$, we know that the set $\{N_{\beta} \colon \beta < \alpha \}$ has the strong finite intersection property. Let $P_\alpha $ be an infinite pseudo-intersection of this family. In each of the next $ \omega $ turns, Maker claims an edge $ \{ u, v_\alpha \} $ with a new $ u\in P_\alpha $; this is possible because at each of these turns only finitely many edges incident with $ v_\alpha $ have been claimed. Let $N_{\alpha}$ be the (infinite) set of all the endpoints $u \in U$ of these edges. It is easy to check that this construction satisfies (1) and (2) for $\alpha$. At the end of the game $ \{N_{\alpha} \colon \alpha<\omega_1 \} $ has the strong finite intersection property and hence (by the assumption $ \omega_1<\mathfrak{p} $) admits an infinite pseudo-intersection $ P $. By the definition of $ P $, for each $ \alpha<\omega_1 $, the set $ P\setminus N_\alpha $ is finite. Since $ [P]^{<\omega} $ is countable, there exist an uncountable $ O\subseteq \omega_1 $ and a finite $ F\subseteq P $ such that $ P\setminus N_\alpha=F $ for every $ \alpha\in O $.
Finally, $ (P\setminus F)\cup \{ v_\alpha: \alpha\in O \} $ induces a copy of $ K_{\omega, \omega_1} $, all of whose edges have been claimed by Maker. \end{proof} \begin{rem} The same proof shows that Maker has a winning strategy in $\mathfrak{MB}(K_{\omega, \kappa}, K_{\omega, \kappa}) $ for every $\kappa<\mathfrak{p}$ with $ \mathsf{cf}(\kappa)>\aleph_0 $. \end{rem} The proof of Proposition \ref{prop: Maker win MA} leads to the following positive Ramsey result: \begin{cor}\label{cor: monochrom} If $ \omega_1<\mathfrak{p} $, then any $ 2 $-colouring of the edges of $ K_{\omega, \omega_1} $ admits a monochromatic copy of $ K_{\omega, \omega_1} $. \end{cor} \begin{proof} Call the colours red and blue, and call the countable and uncountable side of the original graph $U$ and $V$ respectively. We pick a free ultrafilter $ \mathcal{U} $ on $ U $. Then for each $ v\in V $ either the set $ N_r(v) $ of the red neighbours of $ v $ is in $ \mathcal{U} $ or the set $N_b(v)$ of the blue neighbours. We may assume that there is an uncountable $ V'\subseteq V $ such that $ N_r(v)\in \mathcal{U} $ for each $ v\in V' $. Since $ \mathcal{U} $ is a free ultrafilter, the family $ \{ N_r(v): v\in V' \} $ has the strong finite intersection property and therefore (by $ \omega_1<\mathfrak{p} $) admits an infinite pseudo-intersection $ P $. This means that for every $ v\in V' $ the set $ P\setminus N_r(v) $ is finite. Then there exist an uncountable $ V''\subseteq V' $ and a finite $ F\subseteq P $ such that $ P\setminus N_r(v)=F $ for each $ v\in V'' $, and hence $ (P\setminus F)\cup V'' $ induces a red copy of $ K_{\omega, \omega_1} $. \end{proof} \begin{quest} Is it consistent with ZFC$+\aleph_\omega<2^{\aleph_0}$ that Maker has a winning strategy in the game $\mathfrak{MB}(K_{\omega, \omega_\omega}, K_{\omega, \omega_\omega}) $? \end{quest} Theorem \ref{t: main thm bip} is implied by the case $ \kappa=\omega $ of Corollary \ref{cor: Breaker win}/(\ref{i: Breaker win 1}) together with Proposition \ref{prop: Maker win MA}. Similarly, Theorem \ref{t: Ramsey bipart} follows from Corollaries \ref{cor: no monochrom} and \ref{cor: monochrom}. \subsection{A winning strategy for Maker in \texorpdfstring{$\mathfrak{MB}(K_{\omega_1}, K_{\omega_1})$}{MBKw1w1} and \texorpdfstring{$\mathfrak{MB}(K_{\omega_2}, K_{\omega_2})$}{MBKw2w2}} \begin{prop}[ZF]\label{p: Maker wins measurable} If either $ \kappa $ is measurable or $ \kappa=\omega $, then Maker has a winning strategy in the game $\mathfrak{MB}(K_{\kappa}, K_{\kappa}) $. \end{prop} \begin{proof} A sub-binary Hausdorff tree is a set-theoretic tree $ T $ in which each vertex has at most two children and no two vertices at any limit level have the same set of predecessors. During the game Maker builds a sequence $ \left\langle T_\alpha: \alpha\leq \kappa \right\rangle $ of sub-binary Hausdorff trees with root $ 0 $ and $ T_\alpha\subseteq \kappa $ of height at most $1+\alpha$ such that \begin{enumerate}[label=(\alph*)] \item\label{a condition} \begin{enumerate}[label=(\roman*)] \item $ T_0=\{ 0 \} $, \item $ T_{\alpha+1} $ is obtained from $ T_\alpha $ by inserting a new maximal element, \item $ T_\alpha=\bigcup_{\beta<\alpha}T_\beta $ if $ \alpha $ is a limit ordinal, \end{enumerate} \item\label{b condition} for every distinct $ <_{T_\alpha} $-comparable $ u,v\in T_\alpha $, the edge $ \{ u,v \} $ is claimed by Maker in the game. \end{enumerate} Suppose that $ \alpha=\beta+1 $ and $ T_\beta $ is already defined.
Maker picks the smallest ordinal $ v $ such that no edge incident with $v$ has been claimed, i.e. the least fresh vertex, and claims the edge $ \{ 0, v \} $. Then, for as long as she can, on each following turn she connects $ v $ to vertices in $ T_\beta $ in such a way that: \begin{enumerate} \item\label{first cond} she maintains that the current neighbourhood of $ v $ in her graph is a downward closed chain in $ T_\beta $, \item\label{second cond} whenever she claims some $ \{ u,v \} $, then Breaker has no edge between $ v $ and the subtree $ T_{\beta, u} $ of $ T_\beta $ rooted at $ u $. \end{enumerate} Note that, at any step at which $v$ has a largest Maker-neighbour in $T_\beta$ and this neighbour has two children in $ T_\beta $, she can proceed. Moreover, she can also proceed even if there is no such largest Maker-neighbour as long as there is some element of $T_{\beta}$ whose predecessors are precisely the Maker-neighbours of $v$ in $T_{\beta}$. Thus, if Maker is unable to continue this process with $ v $, then either $ v $ has a largest Maker-neighbour in $ T_\beta $ which has at most one child or else there is no vertex in $T_{\beta}$ with precisely the Maker-neighbours of $v$ as its predecessors. In either case we can define $ T_{\beta+1} $ by adding $ v $ to $ T_\beta $ with its current set of Maker-neighbours as its predecessors, and Maker starts a new phase with a new fresh vertex. It is enough to show that there is a $ \kappa $-branch $ B $ in $ T_\kappa $, because then $ G_M[B] $ is a copy of $ K_{\kappa} $ by \ref{b condition}. Since $ \left|T_\kappa\right|=\kappa $ by \ref{a condition}, we can fix a $ \kappa $-complete free ultrafilter $ \mathcal{U} $ on $ T_\kappa $. By transfinite recursion we build a $ \kappa $-branch. Let $ v_0:=0 $. Suppose that there is an $ \alpha<\kappa $ such that the $ <_{T_\kappa} $-increasing sequence $ \left\langle v_\beta: \beta<\alpha \right\rangle $ is already defined and for each $ \beta<\alpha $, $ T_{\kappa, v_\beta}\in \mathcal{U} $. If $ \alpha $ is a limit ordinal, then since $\bigcap_{\beta<\alpha}T_{\kappa, v_\beta}\in \mathcal{U} $ by the $ \kappa $-completeness of $ \mathcal{U} $, there is at least one vertex of $T_\kappa$ with all $v_{\beta}$ as predecessors. We define $ v_\alpha $ to be the unique minimal such vertex, so that $ T_{\kappa, v_\alpha}=\bigcap_{\beta<\alpha}T_{\kappa, v_\beta}\in \mathcal{U} $. If $ \alpha=\beta+1 $, then $ T_{\kappa, v_\beta}\in \mathcal{U} $ by assumption. Since $ T_\kappa $ is sub-binary, $ v_\beta $ has a unique child $ v $ satisfying $ T_{\kappa, v}\in \mathcal{U} $ and we let $ v_{\beta+1}:=v $. The recursion is done and $\{ v_\alpha: \alpha<\kappa \} $ is clearly a $ \kappa $-branch. \end{proof} We remark that this strategy is quite flexible and also deals with a number of variants of the Maker-Breaker game. For example, if Breaker is allowed $k<\omega$ moves for every move that Maker makes, simply take a sub-$(k+1)$-regular Hausdorff tree, in which every node has at most $k+1$ children. Furthermore, if in addition Breaker is allowed to go first in every turn, simply weaken the Hausdorff assumption to the requirement that at most $k+1$ vertices at a limit level have the same set of predecessors.
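For completeness, let us also note that the case $ \kappa=\omega $ of Proposition \ref{p: Maker wins measurable} does not require the ultrafilter in the final step. \begin{rem} If $ \kappa=\omega $, then $ T_\omega $ is an infinite sub-binary tree on a set of ordinals, so, as in K\H{o}nig's lemma, one can recursively pick the least child whose subtree is infinite. Since the choice at each step is canonical, the recursion goes through in ZF and produces an $ \omega $-branch directly. \end{rem}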
Since $ \omega_1 $ and $ \omega_2 $ are measurable cardinals under ZF+DC+AD (\cite{kanamori2008higher}*{Theorems 28.2 and 28.6}), the cases $ \kappa\in \{\omega, \omega_1 \} $ of Corollary~\ref{cor: Breaker win}/(\ref{i: Breaker win 2}) and the cases $\kappa \in \{\omega_1, \omega_2\}$ of Proposition \ref{p: Maker wins measurable} together imply Theorem \ref{t: main thm}. \subsection{Breaker may lose the \texorpdfstring{$\mathfrak{MB}(K_{\omega_1}, K_{\mathsf{club}})$}{MBKw1club}-game} \begin{prop}\label{prop: strategy steal} Under ZF+DC+AD, Breaker does not have a winning strategy in the game $ \mathfrak{MB}(K_{\omega_1}, K_{\mathsf{club}}) $. \end{prop} \begin{proof} First of all, the club filter on $ \omega_1 $ is a countably complete free ultrafilter under ZF+DC+AD (this is explicit in the proof of \cite{kanamori2008higher}*{Theorem 28.2}). Furthermore, it is normal \cite{dorais2021bounding}*{Proposition 4.1}. Thus for any 2-colouring of $ [\omega_1]^{2} $ there exists a colour with a monochromatic $ K_{\mathsf{club}} $ (the standard proof of this for arbitrary normal ultrafilters uses only ZF, see \cite{jech2003set}*{Theorem 10.22}). It follows that if Breaker successfully prevents Maker from building a $ K_{\mathsf{club}} $, then he necessarily builds a $ K_{\mathsf{club}} $ himself. Suppose for a contradiction that Breaker has a winning strategy. We shall show that Maker can ``steal'' this winning strategy. Indeed, Maker picks an arbitrary edge in turn $ 0 $ as well as in each limit turn, while in successor turns she pretends to be Breaker and claims edges according to his winning strategy; the extra edges she picks up along the way can only help her. This is a winning strategy for Maker, a contradiction. \end{proof} Theorem \ref{t: main thm club} follows from the case $ \kappa=\omega $ of Corollary~\ref{cor: Breaker win}/(\ref{i: Breaker win 3}) and Proposition \ref{prop: strategy steal}. \begin{rem} The same strategy stealing argument shows that if $\kappa$ is a weakly compact cardinal, then Breaker does not have a winning strategy in the game $ \mathfrak{MB}(K_{\kappa}, K_{\kappa}) $. \end{rem} \begin{rem} We did not really use the full power of AD, just some consequences that are weaker in the sense of consistency strength than AD itself. The axiom-system ZF+DC+``$\omega_1$ is measurable'' is equiconsistent with ZFC+``there exists a measurable cardinal'' (see \cite{jech1968omega}). The club filter being an ultrafilter is a strictly stronger assumption; for more details see p. 3 in \cite{dorais2021bounding}. \end{rem} \bibliographystyle{unsrtnat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The purpose of this paper is to construct an $SO(3)$-version of Fintushel-Stern's torsion invariants \cite{FS}. R. Fintushel and R. Stern constructed a variant of Donaldson invariants for spin $4$-manifolds by using $2$-torsion cohomology classes of the moduli spaces of instantons on $SU(2)$-bundles. They used cohomology classes of degree one and two. S. K. Donaldson gave another construction by using a class of degree $3$ \cite{yang}. As is well known, the usual Donaldson invariant is trivial for a connected sum of $4$-manifolds both of which have positive $b^+$ (\cite{poly}). On the other hand, Fintushel and Stern showed that their torsion invariant is not necessarily trivial for connected sums of the form $Y \# S^2 \times S^2$. In this paper, we define an invariant of $4$-manifolds using $2$-torsion cohomology classes of $SO(3)$-moduli spaces and show that our invariant is not necessarily trivial for $Y \# S^2 \times S^2$, as in the case of Fintushel-Stern's invariant. We basically follow the argument in \cite{FS} and modify it to extend the definition to non-spin $4$-manifolds. The outline of the construction is as follows. Let $X$ be a closed, oriented, simply connected, non-spin Riemannian $4$-manifold and $P$ be an $SO(3)$-bundle over $X$ satisfying \[ w_2(P)=w_2(X) \in H^2(X; \mathbb{Z}_2), \quad p_1(P) \equiv \sigma(X) \mod 8. \] Here $\sigma(X)$ is the signature of $X$. Let $\mathcal{B}_P^*$ be the space of gauge equivalence classes of irreducible connections on $P$. In \cite{AMR}, S. Akbulut, T. Mrowka and Y. Ruan showed that $H^1(\mathcal{B}_P^*; \mathbb{Z}_2)$ is isomorphic to $\mathbb{Z}_2$. We denote the generator by $u_1$. On the other hand, for a homology class $[\Sigma] \in H_2(X; \mathbb{Z})$ with even self-intersection number, we have an integral cohomology class $\mu([\Sigma]) \in H^2(\mathcal{B}_P^*; \mathbb{Z})$. Suppose that the dimension of the moduli space $M_P$ of instantons on $P$ is $2d+1$ for some non-negative integer $d$. In general $M_P$ is not compact. However, for homology classes $[\Sigma_1], \dots, [\Sigma_d] \in H_2(X; \mathbb{Z})$ with even self-intersection numbers, we can define the pairing \[ q_{X}^{u_1}([\Sigma_1], \dots, [\Sigma_d])= \< u_1 \cup \mu([\Sigma_1]) \cup \cdots \cup \mu([\Sigma_d]), [M_P] \> \in \mathbb{Z}_2 \] in an appropriate sense. We show that this number depends only on the homology classes $[\Sigma_i]$ and gives a differential-topological invariant of $X$. We will show a gluing formula for the torsion invariants of $Y \# S^2 \times S^2$, which is an $SO(3)$-version of Theorem 1.1 in \cite{FS}. By using this gluing formula and D. Kotschick's calculation in \cite{K, K2}, we prove that $q_{2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2}^{u_1}$ is non-trivial. This example exhibits two interesting aspects explained below. The first aspect is related to a vanishing theorem. We have a description of $X=2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2$ as the connected sum of $Y_1=\mathbb{C} \mathbb{P}^2$ and $Y_2=\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2$. Since the second Stiefel-Whitney class $w_2(P)$ is equal to $w_2(X)$, both of $w_2(P)|_{Y_1}$ and $w_2(P)|_{Y_2}$ are non-trivial. In such a situation, the usual Donaldson invariants are trivial by the dimension-count argument (\cite{MM}). Hence the non-triviality of $q_{2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2}^{u_1}$
implies that the dimension-count argument cannot be applied directly to proving such a vanishing theorem in our case. If each homology class $[\Sigma_i]$ is in $H_2(Y_1; \mathbb{Z})$ or $H_2(Y_2; \mathbb{Z})$, then we can show that our invariant vanishes. However we cannot reduce the argument to this case because of the condition that $[\Sigma_i] \cdot [\Sigma_i]$ must be even to define our invariant. The second aspect is related to the Seiberg-Witten theory. In \cite{Wi}, E. Witten introduced invariants of $4$-manifolds, called the Seiberg-Witten invariants, using the monopole equations. He conjectured that the invariants are equivalent to the Donaldson invariants and explicitly wrote a formula which should give a relation between the Donaldson invariants and the Seiberg-Witten invariants. In \cite{PT}, V. Pidstrigach and A. Tyurin proposed a program to give a rigorous mathematical proof of the formula by using non-abelian monopoles. The theory of non-abelian monopoles has been developed by P. Feehan and T. Leness (\cite{FL1, FL2, FL3}). Feehan and Leness recently announced that they completed the proof of Witten's formula for $4$-manifolds of simple type with $b_1=0$ and $b^+ > 1$ in \cite{FL4}. The non-triviality of $q^{u_1}_{2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2}$ stands in sharp contrast to the equivalence of the Donaldson invariants and Seiberg-Witten invariants. If a $4$-manifold has a positive scalar curvature metric and satisfies $b^+(X) \geq 1$, then the moduli space of solutions of the monopole equations with respect to the metric is empty for some perturbation. Hence any known invariant of $2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2$ coming from the monopole equations (the Seiberg-Witten invariant and a refinement due to S. Bauer and M. Furuta \cite{BF}) is trivial since $2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2$ has a positive scalar curvature metric. The paper is organized as follows. In Section 2, we construct cohomology classes $\mu([\Sigma])$ and $u_1$, and define a torsion invariant. In Section 3, we prove a gluing formula for the connected sum of the form $Y \# S^2 \times S^2$. In Section 4, we prove that $q_{2 \mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2}^{u_1}$ is non-trivial by using the gluing formula. We also discuss the reason why the usual vanishing theorem does not hold for our torsion invariant. \begin{acknow} The author would like to thank his advisor Mikio Furuta for his suggestions and warm encouragement. The author also thanks Yukio Kametani and Nobuhiro Nakamura for useful conversations. \end{acknow} \section{Torsion invariants} \label{invariants} \subsection{Notations} \label{notations} Let $X$ be a closed, oriented, simply connected $4$-manifold, $g$ a Riemannian metric on $X$ and $P$ an $SO(3)$-bundle over $X$. Put \[ k=-\frac{1}{4}p_1(P) \in \mathbb{Q}, \quad w=w_2(P) \in H^2(X; \mathbb{Z}_2). \] Let $\mathcal{A}_P^*$ be the space of irreducible connections on $P$ and $\mathcal{G}_P$ be the gauge group of $P$. We write $\mathcal{B}_P^*$ or $\mathcal{B}_{k,w,X}^*$ for the quotient space $\mathcal{A}_P^*/\mathcal{G}_P$. We denote by $M_P$ or $M_{k,w,X}$ the moduli space of instantons on $P$. Let $A$ be an instanton on $P$. We have a sequence \[ \Omega_X^0(\mathfrak{g}_{P}) \stackrel{d_A}{\longrightarrow} \Omega^1_X (\mathfrak{g}_P) \stackrel{d_A^+}{\longrightarrow} \Omega_X^+(\mathfrak{g}_P). \] The condition that $A$ is an instanton implies that $d_A^+ \circ d_A = 0$.
Hence the above sequence defines a complex. We denote the cohomology groups by $H_{A}^0$, $H_A^1$, $H_A^2$. Let $\bar{P}$ be a $U(2)$-lift of $P$ and $\bar{E}$ be the rank $2$ complex vector bundle associated with $\bar{P}$. Fix a connection $a_{\det}$ on $\det \bar{E}$. We write $\mathcal{A}_{\bar{E}}$ for the space of connections on $\bar{E}$ which induce the connection $a_{\det}$ on $\det \bar{E}$, and write $\mathcal{A}_{\bar{E}}^*$ for the space of irreducible connections in $\mathcal{A}_{\bar{E}}$. Let $\mathcal{G}_{\bar{E}}$ be the group of bundle automorphisms on $\bar{E}$ with determinant $1$. We also introduce a subgroup $\mathcal{G}_{\bar{E}}^0$ of $\mathcal{G}_{\bar{E}}$. Fix a point $x_0$ in $X$. The subgroup $\mathcal{G}_{\bar{E}}^0$ is defined by \[ \mathcal{G}_{\bar{E}}^0=\{ g \in \mathcal{G}_{\bar{E}} | g(x_0)=1 \}. \] We denote the quotient spaces by \[ \mathcal{B}_{\bar{E}}^* = \mathcal{A}_{\bar{E}}^* / \mathcal{G}_{\bar{E}}, \quad \widetilde{\mathcal{B}}_{\bar{E}} = \mathcal{A}_{\bar{E}} / \mathcal{G}_{\bar{E}}^0, \quad \widetilde{\mathcal{B}}_{\bar{E}}^* = \mathcal{A}_{\bar{E}}^* / \mathcal{G}_{\bar{E}}^0. \] Since we are assuming that $X$ is simply connected, the natural map $\mathcal{B}^*_{\bar{E}} \rightarrow \mathcal{B}^*_{P}$ is bijective. To construct cohomology classes $u_1$ and $\mu([\Sigma])$, we need the universal bundle $\widetilde{\mathbb{E}}$ over $X \times \widetilde{\mathcal{B}}_{\bar{E}}$. The universal bundle is defined by \[ \widetilde{\mathbb{E}}:=\bar{E} \times_{\mathcal{G}_{\bar{E}}^0} \mathcal{A}_{\bar{E}} \longrightarrow X \times \widetilde{\mathcal{B}}_{\bar{E}}. \] For a closed, oriented surface $\Sigma$ embedded in $X$, let $\nu(\Sigma)$ be a small tubular neighborhood of $\Sigma$. We define spaces of gauge equivalence classes of connections on $\nu(\Sigma)$. Let $\mathcal{A}_{\nu(\Sigma)}$ be the space of connections on $\bar{E}|_{\nu(\Sigma)}$ which induce the connection $a_{\det}|_{\nu(\Sigma)}$ on $\det \bar{E}|_{\nu(\Sigma)}$, and let $\mathcal{A}_{\nu(\Sigma)}^*$ be the space of irreducible connections in $\mathcal{A}_{\nu(\Sigma)}$. Let $\mathcal{G}_{\nu(\Sigma)}$ be the group of automorphisms of $\bar{E}|_{\nu(\Sigma)}$ with determinant $1$. We assume that the base point $x_0$ is in $\nu(\Sigma)$. Define $\mathcal{G}_{\nu(\Sigma)}^0$ by \[ \mathcal{G}_{\nu(\Sigma)}^0 = \{ g \in \mathcal{G}_{\nu(\Sigma)} | g(x_0)=1 \}. \] We denote the quotient spaces by \[ \mathcal{B}_{\nu(\Sigma)}^* = \mathcal{A}_{\nu(\Sigma)}^* / \mathcal{G}_{\nu(\Sigma)}, \quad \widetilde{\mathcal{B}}_{\nu(\Sigma)} = \mathcal{A}_{\nu(\Sigma)} / \mathcal{G}_{\nu(\Sigma)}^0, \quad \widetilde{\mathcal{B}}_{\nu(\Sigma)}^* = \mathcal{A}_{\nu(\Sigma)}^* / \mathcal{G}_{\nu(\Sigma)}^0. \] Restricting connections, we have a map \[ \tilde{r}_{\nu(\Sigma)}:\widetilde{\mathcal{B}}_{\bar{E}}^* \longrightarrow \widetilde{\mathcal{B}}_{\nu(\Sigma)}. \] We have the universal bundle $\widetilde{\mathbb{E}}_{\nu(\Sigma)}$ over $\nu(\Sigma) \times \widetilde{\mathcal{B}}_{\nu(\Sigma)}$ defined by \[ \widetilde{\mathbb{E}}_{\nu(\Sigma)}:=(\bar{E}|_{\nu(\Sigma)}) \times_{\mathcal{G}_{\nu(\Sigma)}^0} \mathcal{A}_{\nu(\Sigma)} \longrightarrow \nu(\Sigma) \times \widetilde{\mathcal{B}}_{\nu(\Sigma)}. \] \subsection{Cohomology classes of $\mathcal{B}_{P}^*$} \label{cohomolgy} Suppose $\Sigma$ is a closed, oriented surface embedded in $X$ such that $\< w_2(P),[\Sigma] \> \equiv 0 \mod 2$. In this subsection, we define a $2$-dimensional integral cohomology class $\mu([\Sigma]) \in H^2(\mathcal{B}_P^*; \mathbb{Z})$. Basically we follow a standard construction in \cite{DK, K}.
We first define the cohomology class $\tilde{\mu}_{\bar{E}}([\Sigma]) \in H^2(\widetilde{\mathcal{B}}_{\bar{E}}; \mathbb{Z})$ to be the slant product $c_2(\widetilde{\mathbb{E}})/[\Sigma]$. \begin{lemma} \label{lem beta} Let $\beta:\widetilde{\mathcal{B}}_{\bar{E}}^* \rightarrow \mathcal{B}_{\bar{E}}^*$ be the projection. Then the induced homomorphism \[ \beta^*:H^2(\mathcal{B}_{\bar{E}}^*; \mathbb{Z}) \longrightarrow H^2( \widetilde{\mathcal{B}}_{\bar{E}}^*; \mathbb{Z}) \] is injective. Moreover, for a homology class $[\Sigma] \in H_2(X; \mathbb{Z})$ with $\< w_2(P), [\Sigma] \> \equiv 0 \mod 2$, the cohomology class $\tilde{\mu}_{\bar{E}}([\Sigma])$ lies in the image of $\beta^*$. \end{lemma} \begin{proof} Since $H^1(SO(3); \mathbb{Z})=0$, the spectral sequence associated with the fibration $SO(3) \rightarrow \widetilde{\mathcal{B}}_{\bar{E}}^* \rightarrow \mathcal{B}_{\bar{E}}^*$ induces an exact sequence \begin{equation} \label{eq exact} 0 \longrightarrow H^2(\mathcal{B}_{\bar{E}}^*; \mathbb{Z}) \stackrel{\beta^*}{\longrightarrow} H^2(\widetilde{\mathcal{B}}_{\bar{E}}^*; \mathbb{Z}) \longrightarrow H^2(SO(3); \mathbb{Z}), \end{equation} which implies the injectivity of $\beta^*$. Let $\eta$ be a complex line bundle over $SO(3)$ defined by \[ \eta:=SU(2) \times_{ \{ \pm 1 \} } \mathbb{C} \longrightarrow SO(3). \] Here the action of $\{ \pm 1 \}$ on $\mathbb{C}$ is by multiplication. Then it is easy to obtain the identification \[ \widetilde{\mathbb{E}}|_{\Sigma \times SO(3)} = (\bar{E}|_{\Sigma}) \times_{ \{ \pm 1 \} } SU(2) = (\bar{E}|_{\Sigma}) \boxtimes \eta \longrightarrow \Sigma \times SO(3), \] and we have \[ \begin{split} c_2(\widetilde{\mathbb{E}}|_{\Sigma \times SO(3)})/[\Sigma] &=\big( \pi_1^* c_2(\bar{E}|_{\Sigma}) + \pi_1^* c_1(\bar{E}|_{\Sigma}) \cup \pi_2^* c_1(\eta) \big)/[\Sigma] \\ &=\< c_1(\bar{E}), [\Sigma] \> c_1(\eta) \\ &\in H^2(SO(3); \mathbb{Z}) \cong \mathbb{Z}_2, \end{split} \] where \[ \pi_1:\Sigma \times SO(3) \longrightarrow \Sigma, \quad \pi_2:\Sigma \times SO(3) \longrightarrow SO(3) \] are the projections. If $\< w_2(P), [\Sigma] \>$ is zero, the pairing $\< c_1(\bar{E}), [\Sigma] \>$ is even, and hence the restriction of $c_2(\widetilde{\mathbb{E}})/[\Sigma]$ to $SO(3)$ is trivial. From the exact sequence (\ref{eq exact}), $\tilde{\mu}_{\bar{E}}([\Sigma])$ is in the image of $\beta^*$. \end{proof} By Lemma \ref{lem beta}, there is a unique element of $H^2(\mathcal{B}_{\bar{E}}^*; \mathbb{Z})$ whose image under $\beta^*$ is $\tilde{\mu}_{\bar{E}}([\Sigma])$. Through the natural identification between $\mathcal{B}_{P}^*$ and $\mathcal{B}_{\bar{E}}^*$, we have a $2$-dimensional cohomology class of $\mathcal{B}_{P}^*$. We denote it by $\mu_{\bar{E}}([\Sigma])$. \begin{lemma} \label{lem mu} Let $X$ be a closed, oriented, simply connected $4$-manifold and $P$ be an $SO(3)$-bundle over $X$. Suppose that $[\Sigma]$ is a $2$-dimensional homology class in $X$ with $\< w_2(P),[\Sigma] \> \equiv 0 \mod 2$. Then the cohomology class $\mu_{\bar{E}}([\Sigma]) \in H^2(\mathcal{B}_P^*; \mathbb{Z})$ is independent of the choice of $\bar{E}$. \end{lemma} This lemma will be shown in \S \ref{well-def} as a corollary of Lemma \ref{lem pull back}. Under the assumption in Lemma \ref{lem mu}, we define $\mu([\Sigma]) \in H^2(\mathcal{B}_P^*; \mathbb{Z})$ as follows.
\begin{definition} \label{def mu} For a homology class $[\Sigma] \in H_2(X; \mathbb{Z})$ with $\< w_2(P), [\Sigma] \> \equiv 0 \mod 2$, the cohomology class $\mu([\Sigma]) \in H^2(\mathcal{B}_P^*; \mathbb{Z})$ is defined to be $\mu_{\bar{E}}([\Sigma])$. \end{definition} \begin{remark} Let \[ \mathbb{P}:=P \times_{\mathcal{G}_P} \mathcal{A}^*_P \longrightarrow X \times \mathcal{B}_P^* \] be the universal bundle of $P$. Then the usual definition of the $\mu$-map is given by \[ \begin{array}{rccc} \mu_{\mathbb{Q}}: & H_2(X; \mathbb{Z}) & \longrightarrow & H^2(\mathcal{B}_P^*; \mathbb{Q}) \\ & [\Sigma] & \longmapsto & -\frac{1}{4}p_1(\mathbb{P})/[\Sigma]. \end{array} \] In general, $\mu_{\mathbb{Q}}([\Sigma])$ does not have an integral lift. Under our assumptions, it is easy to see that $\mu([\Sigma])$ is an integral lift of $\mu_{\mathbb{Q}}([\Sigma])$. \end{remark} Next we define a torsion cohomology class $u_1 \in H^1(\mathcal{B}_P^*; \mathbb{Z}_2)$. We write $\sigma(X)$ for the signature of $X$. Akbulut, Mrowka and Ruan showed the following in \cite{AMR}. \begin{proposition} [\cite{AMR}] \label{pi1} Let $X$ be a closed, oriented, simply connected $4$-manifold and $P$ be an $SO(3)$-bundle over $X$. Then we have \[ \pi_1(\mathcal{B}_P^*)=\left\{ \begin{array}{cl} \mathbb{Z}_2 & \text{if $w_2(P)=w_2(X),\ p_1(P) \equiv \sigma(X) \mod 8$} \\ 1 & \text{otherwise}. \end{array} \right. \] \end{proposition} \begin{remark} \label{rem pi1} Suppose $P$ is an $SO(3)$-bundle over $X$ with $w_2(P)$ equal to $w_2(X)$ and let $\bar{P}$ be a $U(2)$-lift of $P$. Then $p_1(P)$ is equal to $\sigma(X)$ modulo $8$ if and only if $c_2(\bar{P})$ is equal to $0$ modulo $2$. This equivalence is a consequence of the formulas \[ p_1(P)=-4c_2(\bar{P}) + c_1(\bar{P})^2, \quad w_2(X)^2 \equiv \sigma(X) \mod 8. \] \end{remark} When $w_2(P)=w_2(X)$ and $p_1(P) \equiv \sigma(X) \mod 8$, we have $H^1(\mathcal{B}_P^*; \mathbb{Z}_2) \cong \mathbb{Z}_2$ from Proposition \ref{pi1}. \begin{definition} Let $X$ be a closed, oriented, simply connected $4$-manifold and $P$ be an $SO(3)$-bundle over $X$ satisfying $w_2(P) = w_2(X), \quad p_1(P) \equiv \sigma(X) \mod 8$. We write $u_1$ for the generator of $H^1(\mathcal{B}_P^*; \mathbb{Z}_2) \cong \mathbb{Z}_2$. \end{definition} \subsection{Construction of $q_{X}^{u_1}$} Let $X$ be a closed, oriented, simply connected $4$-manifold. Suppose $b^+(X)=2a$ for a positive integer $a$. Let $P$ be an $SO(3)$-bundle over $X$. Assume that $P$ satisfies the condition \begin{equation} \label{eq w_2} w_2(P)=w_2(X) \in H^2(X; \mathbb{Z}_2), \quad p_1(P) \equiv \sigma(X) \mod 8. \end{equation} The virtual dimension of $M_P$ is given by \[ \dim M_P=-2p_1(P)-3(1+b^+(X))=8k-3(1+2a). \] If we put $d=-p_1(P)-3a-2=4k-3a-2$, then we have \[ \dim M_P=2d+1. \] From the condition (\ref{eq w_2}), we have \[ d \equiv -\sigma(X) -3a - 2 \mod 8. \] For instance, for $X=2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2$ we have $\sigma(X)=1$ and $a=1$, so $d \equiv 2 \mod 8$. Suppose that $d \geq 0$ and take $2$-dimensional homology classes $[\Sigma_1], \dots, [\Sigma_d]$ of $X$ satisfying \[ \< w_2(P), [\Sigma_i] \> \equiv 0 \mod 2 \quad (i=1,\dots, d). \] The assumption $\< w_2(P),[\Sigma_i] \> \equiv 0 \mod 2$ is equivalent to $[\Sigma_i] \cdot [\Sigma_i] \equiv 0 \mod 2$ since $w_2(P)$ is equal to $w_2(X)$. We want to define the pairing $\< u_1 \cup \mu([\Sigma_1]) \cup \cdots \cup \mu([\Sigma_d]), M_P \> \in \mathbb{Z}_2$. The moduli space $M_P$ is not compact in general and the pairing is not well-defined in the usual sense.
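The failure of compactness, and the reason why the cut-down moduli spaces considered below are nevertheless compact, is governed by the following dimension bookkeeping, which we record here for the reader's orientation (it is implicit in the arguments of this subsection): by Uhlenbeck's theorem, a sequence in $M_{k,w,X}$ can bubble off $j$ instantons and limit into $M_{k-j,w,X}$, and the dimension formula above gives \[ \dim M_{k-j,w,X}=-2p_1(P)-8j-3(1+b^+(X))=\dim M_{k,w,X}-8j, \] so each bubbling step costs eight dimensions while cutting down by a surface class costs only two.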
To define the pairing, we need submanifolds $V_{\Sigma_i}$ dual to $\mu([\Sigma_i])$ which behave nicely near the ends of $M_P$. We briefly explain how the submanifolds are constructed. See \cite{poly, DK} for the details. We use the following three things. The first is that when $b^+(X)$ and $k=-\frac{1}{4}p_1(P)$ are positive, $M_P$ lies in $\mathcal{B}_P^*$ and has a natural smooth structure for generic metrics on $X$. The second is that the restrictions of irreducible instantons to open subsets are also irreducible. The third is that the cohomology class $\mu([\Sigma])$ comes from $\mathcal{B}_{\nu(\Sigma)}^*$. A more precise statement of the third is as follows. Let $[\Sigma] \in H_2(X; \mathbb{Z})$ be a homology class with $[\Sigma] \cdot [\Sigma] \equiv 0 \mod 2$. Since the following diagram is commutative \[ \begin{CD} \widetilde{\mathbb{E}}|_{\nu(\Sigma) \times \mathcal{B}_{\bar{E}}^*}=(\bar{E}|_{\nu(\Sigma)}) \times_{\mathcal{G}_{\bar{E}}^0} \mathcal{A}^*_{\bar{E}} @>{\operatorname{id}_{\bar{E}} \times \tilde{r}_{\nu(\Sigma)}}>> \widetilde{\mathbb{E}}_{\nu(\Sigma)}=(\bar{E}|_{\nu(\Sigma)}) \times_{\mathcal{G}_{\nu(\Sigma)}^0} \mathcal{A}_{\nu(\Sigma)} \\ @VVV @VVV \\ \nu(\Sigma) \times \widetilde{\mathcal{B}}_{\bar{E}}^* @>>{\operatorname{id}_{\nu(\Sigma)} \times \tilde{r}_{\nu(\Sigma)}}> \nu(\Sigma) \times \widetilde{\mathcal{B}}_{\nu(\Sigma)} \end{CD} \] we obtain \begin{equation} \label{eq mu restriction} \tilde{\mu}_{\bar{E}}([\Sigma])=c_2(\widetilde{\mathbb{E}})/[\Sigma]=\tilde{r}_{ \nu(\Sigma)}^* (c_2(\widetilde{\mathbb{E}}_{\nu(\Sigma)})/[\Sigma]) \in H^2(\widetilde{\mathcal{B}}_{\bar{E}}^*; \mathbb{Z}). \end{equation} We apply Lemma \ref{lem beta} to the restriction $P|_{\nu(\Sigma)}$, instead of $P$ itself. Then we see that there exists a unique $2$-dimensional cohomology class $\mu_{\nu(\Sigma),\bar{E}}([\Sigma])$ of $\mathcal{B}_{\nu(\Sigma)}^*$ such that the pull-back by the natural projection $\widetilde{\mathcal{B}}_{\nu(\Sigma)}^* \rightarrow \mathcal{B}_{\nu(\Sigma)}^*$ is equal to $c_2(\widetilde{\mathbb{E}}_{\nu(\Sigma)})/[\Sigma]$. We define $V_{\Sigma}$ as follows. \begin{definition} Take a homology class $[\Sigma] \in H_2(X; \mathbb{Z})$ with $[\Sigma] \cdot [\Sigma]$ even. We write $\mathcal{L}_{\Sigma}$ for a complex line bundle over $\mathcal{B}_{\nu(\Sigma), \bar{E}}^*$ with first Chern class $\mu_{\nu(\Sigma), \bar{E}}([\Sigma]) \in H^2(\mathcal{B}_{\nu(\Sigma),\bar{E}}^*; \mathbb{Z})$. Fix a section $s_{\Sigma}$ of $\mathcal{L}_{\Sigma}$. We denote the zero locus of $s_{\Sigma}$ by $V_{\Sigma} \subset \mathcal{B}_{\nu(\Sigma)}^*$. Suppose that $b^+(X)$ and $k=-\frac{1}{4}p_1(P)$ are positive. For a generic metric $g$, we define \[ M_{P} \cap V_{\Sigma} := \{ \ [A] \in M_P \ | \ [A|_{\nu(\Sigma)}] \in V_{\Sigma} \ \}. \] \end{definition} We will show that the pairing $\< u_1, M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \>$ is well-defined under some condition. \begin{remark} \label{rem line bundle} We give some remarks on the line bundle $\mathcal{L}_{\Sigma}$. We refer to \cite{poly, DK} for details. \begin{itemize} \item As is well-known, we are also able to construct the line bundle $\mathcal{L}_{\Sigma}$ by using a family of twisted Dirac operators on $\Sigma$. \item Assume that $\< w_2(P), [\Sigma] \>$ is equal to $0$ modulo $2$. Then $P|_{\nu(\Sigma)}$ is topologically trivial. Let $\mathcal{B}_{\nu(\Sigma) \ +}^* := \mathcal{B}_{\nu(\Sigma)}^* \cup \{ [\Theta_{\nu(\Sigma)} ] \}$.
Here $\Theta_{\nu(\Sigma)}$ is the trivial connection on $\nu(\Sigma)$. It is known that $\mathcal{L}_{\Sigma}$ extends to $\mathcal{B}_{\nu(\Sigma) \ +}^*$. Hence we can assume that the section $s_{\Sigma}$ is non-zero near $[\Theta_{\nu(\Sigma)}]$. In the case when $w_2(P)$ is zero, we need this property to define invariants. On the other hand, when we treat an $SO(3)$-bundle $P$ with $w_2(P)$ non-trivial, we do not need this property for the definition of invariants. However we will need this property in Lemma \ref{lem bubble} to prove a certain property of our invariant. \end{itemize} \end{remark} We prepare some lemmas. The following is well-known. \begin{lemma} [\cite{conn, DK}] \label{lem transverse} Let $X$ be a closed, oriented, simply connected $4$-manifold with $b^+(X)$ positive and $P$ be an $SO(3)$-bundle with $w_2(P)=w_2(X)$ and $k=-\frac{1}{4}p_1(P)$ positive. Take homology classes $[\Sigma_1], \dots, [\Sigma_{d'}] \in H_2(X; \mathbb{Z})$ with self-intersection numbers even. For generic sections $s_{\Sigma_i}$, the intersections \[ M_{k-j,w,X} \cap \left( \bigcap_{i \in I} V_{\Sigma_i} \right) \quad (I \subset \{ 1, \dots , d' \},\ 0 \leq j < k) \] are transverse. \end{lemma} From now on, we require that the surfaces $\Sigma_i$ are generic in the following sense: \begin{equation} \label{eq transverse} \left\{ \quad \begin{split} \Sigma_i \pitchfork \Sigma_j \quad & \text{($i,j$ distinct)} \\ \Sigma_i \cap \Sigma_j \cap \Sigma_k=\emptyset \quad & \text{($i,j,k$ distinct)}. \end{split} \right. \end{equation} \begin{lemma} \label{lem compact} Let $X$ be a closed, oriented, simply connected, non-spin $4$-manifold with $b^+(X)$ positive. Let $P$ be an $SO(3)$-bundle over $X$ with $w_2(P)$ equal to $w_2(X)$. Suppose that the dimension of $M_P$ is $2d'+r$ for a non-negative integer $d'$ and $1 \leq r \leq 3$. Take $d'$ homology classes $[\Sigma_1],\dots, [\Sigma_{d'}] \in H_2(X; \mathbb{Z})$ with \[ [\Sigma_i] \cdot [\Sigma_i] \equiv 0 \mod 2 \quad (i=1,\dots,d'). \] Moreover we assume that the surfaces $\Sigma_i$ satisfy the condition (\ref{eq transverse}). Then for generic sections $s_{\Sigma_i}$, the intersection \[ M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d'}} \] is a compact $r$-dimensional manifold. \end{lemma} \begin{proof} Put $k=-\frac{1}{4}p_1(P)$, $w=w_2(P)$. For $[A] \in M_P$, we have \[ \begin{split} k &= -\frac{1}{4}p_1(P) \\ &=\frac{1}{8\pi^2} \int_X \operatorname{Tr}(F_A^2) \\ &=\frac{1}{8\pi^2} \int_X | F_{A}^- |^2 d\mu_g - \frac{1}{8 \pi^2} \int_X | F_{A}^+ |^2 d\mu_g \\ &=\frac{1}{8\pi^2} \int_X | F_A^- |^2 d\mu_g \geq 0 \end{split} \] by the Chern-Weil theory. Here $d\mu_g$ is the volume form with respect to $g$, and in the last equality we used that $F_A^+=0$ for an instanton. First we show $k>0$. If not, $k=0$ and $A$ is flat. Since $X$ is simply connected, $A$ is trivial. This contradicts the assumption that $w_2(P)$ is non-trivial. Hence we have $k>0$. From Lemma \ref{lem transverse}, $M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d'}}$ is a smooth $r$-dimensional manifold for generic sections $s_{\Sigma_i}$. Next we prove that $M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d'}}$ is compact. Let $\{ [A^{(n)}] \}_{n \in \mathbb{N}}$ be a sequence in $M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d'}}$. Uhlenbeck's weak compactness theorem implies that there is a subsequence $\{ [A^{(n')}] \}_{n'}$ which is weakly convergent to \[ ([A_{\infty}]; x_1,\dots, x_l) \in M_{k-l, w, X} \times X^l. \] We also have $k-l>0$ in the same way as above.
Let $m$ be the number of the tubular neighborhoods $\nu(\Sigma_i)$ which contain $x_{\alpha}$ for some $\alpha$ with $1 \leq \alpha \leq l$. Then without loss of generality, we may suppose that \[ [A_{\infty}] \in M_{k-l, w, X} \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d'-m}} \] if we change the order of the surfaces. If we take the tubular neighborhoods $\nu(\Sigma_i)$ to be sufficiently small, we have \begin{equation*} \label{eq nu sigma} \nu(\Sigma_i) \cap \nu(\Sigma_j) \cap \nu(\Sigma_k) = \emptyset \quad (\text{$i, j, k$ distinct}) \end{equation*} from (\ref{eq transverse}). Hence we have $m \leq 2l$. Since $k-l>0$, the intersection $M_{k-l,w,X} \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d'-m}}$ is transverse by Lemma \ref{lem transverse}. From this transversality, we obtain \[ \begin{split} 0 &\leq \dim M_{k-l, w, X} \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d'-m}} \\ &= \dim M_{k, w, X} - 8l - 2(d'-m) \\ &=r-8l+2m \\ &\leq r-4l. \end{split} \] Since we suppose $1 \leq r \leq 3$, we have $l=0$ and \[ [A_{\infty}] \in M_{P} \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d'}}. \] Hence $M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d'}}$ is compact. \end{proof} Let $X$ be as in Lemma \ref{lem compact} and $P$ be an $SO(3)$-bundle over $X$ satisfying (\ref{eq w_2}). Suppose that $\dim M_P$ is $2d+1$ for a non-negative integer $d$ and take homology classes $[\Sigma_1], \dots, [\Sigma_d] \in H_2(X; \mathbb{Z})$ with self-intersection numbers even. From Lemma \ref{lem compact}, we have the pairing \[ \< u_1, M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \> \in \mathbb{Z}_2. \] \begin{proposition} \label{prop well-defined} Let $X$ be a closed, oriented, simply connected, non-spin $4$-manifold with $b^+(X)=2a$ for a positive integer $a$ and $P$ be an $SO(3)$-bundle over $X$ satisfying (\ref{eq w_2}). Assume that the dimension of $M_P$ is $2d+1$ for a non-negative integer $d$. Then the pairing \begin{equation*} \label{eq pairing} \< u_1, M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \> \in \mathbb{Z}_2 \end{equation*} is independent of the choices of Riemannian metric $g$, $U(2)$-lift $\bar{P}$ of $P$, sections $s_{\Sigma_i}$ of $\mathcal{L}_{\Sigma_i}$ and surfaces $\Sigma_i$ representing the homology classes $[\Sigma_i]$. Moreover the pairing is multi-linear with respect to $[\Sigma_1], \dots, [\Sigma_d]$. \end{proposition} We prove the above proposition in \S \ref{well-def}. By using this proposition, we can easily show that the following invariant $q^{u_1}_X$ is well defined. \begin{definition} Let $X$ be as in Proposition \ref{prop well-defined}. Let $A'_d(X)$ be the subspace of $\otimes^d H_2(X;\mathbb{Z})$ generated by \[ \{ \ [\Sigma_1] \otimes \cdots \otimes [\Sigma_d] \ | \ [\Sigma_i] \in H_2(X; \mathbb{Z}), [\Sigma_i] \cdot [\Sigma_i] \equiv 0 \mod 2 \ \}, \] and we put \[ A'(X) := \bigoplus_{d} A'_d(X), \] where $d$ runs over non-negative integers with $d \equiv -\sigma(X)-3a-2 \mod 8$. We define $q_X^{u_1}$ by \[ \begin{array}{rccc} q_X^{u_1}:& A'(X) & \longrightarrow & \mathbb{Z}_2 \\ & [\Sigma_1] \otimes \cdots \otimes [\Sigma_d] & \longmapsto & q_{k,w,X}^{u_1}([\Sigma_1],\dots,[\Sigma_d]) := \< u_1,M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \>. \end{array} \] Here $P$ is an $SO(3)$-bundle over $X$ with $w_2(P)=w_2(X)$ and $p_1(P)=-d-3a-2$. \end{definition} \subsection{Well-definedness of $q_{X}^{u_1}$} \label{well-def} In this subsection, we prove Proposition \ref{prop well-defined}.
First we show the independence of $q_X^{u_1}$ from the Riemannian metric $g$ and the sections $s_{\Sigma_i}$ in a standard way. Take two metrics $g$, $g'$ on $X$ and sections $s_{\Sigma_i}$, $s_{\Sigma_i}'$ of $\mathcal{L}_{\Sigma_i}$. Choose a path $\{ g_t \}_{t \in [0,1]}$ between $g$ and $g'$, and a path $\{ s_{\Sigma_i,t} \}_{t \in [0,1] }$ between $s_{\Sigma_i}$ and $s_{\Sigma_i}'$. Then put \[ \mathcal{M}:= \coprod_{t \in [0,1]} M_{P}(g_t) \times \{ t \}, \quad \mathcal{M} \cap \mathcal{V}_{\Sigma_i} := \{ \ ([A], t) \in \mathcal{M} \ | \ s_{\Sigma_i, t}([A|_{\nu(\Sigma_i)}]) = 0 \ \}. \] Using an argument similar to the one in the proof of Lemma \ref{lem compact}, we can show the following lemma: \begin{lemma} \label{lem wd} Let $X$ and $P$ be as in Proposition \ref{prop well-defined}. Then for generic paths $\{ g_{t} \}_{t \in [0,1]}$ and $\{ s_{\Sigma_i,t} \}_{t \in [0,1]}$, the intersection \[ \mathcal{M} \cap \mathcal{V}_{\Sigma_1} \cap \cdots \cap \mathcal{V}_{\Sigma_d} \] is a compact $2$-dimensional manifold whose boundary is \[ ( M_{P}(g) \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} ) \coprod ( M_{P}(g') \cap V_{\Sigma_1}' \cap \cdots \cap V_{\Sigma_d}'). \] \end{lemma} This lemma implies \[ \< u_1, M_P(g) \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \>= \< u_1, M_P(g') \cap V_{\Sigma_1}' \cap \cdots \cap V_{\Sigma_d}' \> \in \mathbb{Z}_2, \] and the pairing $\< u_1, M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \>$ is independent of the choices of $g$ and $s_{\Sigma_i}$. Next we show the independence of $q_{X}^{u_1}$ from the choice of the $U(2)$-lift $\bar{P}$ of $P$. Take two $U(2)$-lifts $\bar{P}$ and $\bar{P}'$ of $P$. The vector bundle $\bar{E}'$ associated with $\bar{P}'$ is topologically isomorphic to $\bar{E} \otimes L$ for some complex line bundle $L$ over $X$. Fix connections $a_{\det}$, $a_L$ on $\det \bar{E}$, $L$ and an isomorphism \[ \varphi:\bar{E}' \stackrel{\cong}{\longrightarrow} \bar{E} \otimes L. \] We have a connection $a_{\det}'$ on $\det \bar{E}'$ induced by $a_{\det}, a_L$ and $\varphi$. We consider connections on $\bar{E} \otimes L$ and $\bar{E}'$ which are compatible with $a_{\det}+2a_L$ and $a_{\det}'$ respectively. By tensoring with $a_L|_{\nu(\Sigma)}$, we have maps \[ t_{\mathcal{A}}:\mathcal{A}_{\nu(\Sigma), \bar{E}} \stackrel{\cong}{\longrightarrow} \mathcal{A}_{\nu(\Sigma), \bar{E} \otimes L}, \quad t_{\mathcal{B}^*}:\mathcal{B}_{\nu(\Sigma), \bar{E}}^* \stackrel{\cong}{\longrightarrow} \mathcal{B}_{\nu(\Sigma), \bar{E} \otimes L}^*, \quad t_{\widetilde{\mathcal{B}}}:\widetilde{\mathcal{B}}_{\nu(\Sigma), \bar{E}} \stackrel{\cong}{\longrightarrow} \widetilde{\mathcal{B}}_{\nu(\Sigma), \bar{E} \otimes L}. \] Moreover the pull-back by $\varphi$ induces identifications \[ \psi_{\mathcal{B}^*}:\mathcal{B}_{\nu(\Sigma), \bar{E} \otimes L}^* \stackrel{\cong}{\longrightarrow} \mathcal{B}_{\nu(\Sigma), \bar{E}'}^*, \quad \psi_{\widetilde{\mathcal{B}}}:\widetilde{\mathcal{B}}_{ \nu(\Sigma), \bar{E} \otimes L} \stackrel{\cong}{\longrightarrow} \widetilde{\mathcal{B}}_{\nu(\Sigma), \bar{E}'}. \] \begin{lemma} \label{lem pull back} Suppose $\mathcal{L}_{\Sigma}$, $\mathcal{L}_{\Sigma}'$ are complex line bundles over $\mathcal{B}_{\nu(\Sigma), \bar{E}}^*$, $\mathcal{B}_{\nu(\Sigma), \bar{E}'}^*$ corresponding to the cohomology classes $\mu_{\nu(\Sigma), \bar{E}}([\Sigma]) \in H^2(\mathcal{B}_{\nu(\Sigma),\bar{E}}^*; \mathbb{Z})$, $\mu_{\nu(\Sigma), \bar{E}'}([\Sigma]) \in H^2(\mathcal{B}_{\nu(\Sigma),\bar{E}'}^*; \mathbb{Z})$.
Then we have \[ (\psi_{\mathcal{B}^*} \circ t_{{\mathcal{B}}^*})^* \mathcal{L}_{\Sigma}' \cong \mathcal{L}_{\Sigma}. \] \end{lemma} \begin{proof} It is sufficient to show that $(\psi_{\widetilde{\mathcal{B}}} \circ t_{\widetilde{\mathcal{B}}})^* (c_2(\widetilde{\mathbb{E}}'_{\nu(\Sigma)})/[\Sigma])$ is equal to $c_2(\widetilde{\mathbb{E}}_{\nu(\Sigma)})/[\Sigma]$ since $H^2(\mathcal{B}^*_{\nu(\Sigma),\bar{E}}; \mathbb{Z}) \rightarrow H^2(\widetilde{\mathcal{B}}^*_{\nu(\Sigma), \bar{E}}; \mathbb{Z})$ is injective. Let $\pi_1:\nu(\Sigma) \times \widetilde{\mathcal{B}}_{\nu(\Sigma), \bar{E}} \rightarrow \nu(\Sigma)$ be the projection. We have the following commutative diagram: \[ \begin{CD} \widetilde{\mathbb{E}}_{\nu(\Sigma)} \otimes \pi_1^* (L|_{\nu(\Sigma)})@. \widetilde{\mathbb{E}}_{\nu(\Sigma)}' \\ @| @| \\ (\bar{E} \otimes L|_{\nu(\Sigma)}) \times_{\mathcal{G}^0_{\nu(\Sigma),\bar{E}}} \mathcal{A}_{\nu(\Sigma),\bar{E}} @>{\varphi^{-1} \times ( \varphi^* \ \circ \ t_{\mathcal{A}} )}>> (\bar{E}'|_{\nu(\Sigma)}) \times_{\mathcal{G}^0_{\nu(\Sigma),\bar{E}'}} \mathcal{A}_{\nu(\Sigma),\bar{E}'} \\ @VVV @VVV \\ \nu(\Sigma) \times \widetilde{\mathcal{B}}_{\nu(\Sigma),\bar{E}} @>>{\operatorname{id}_{\nu(\Sigma)} \times ( \psi_{\widetilde{\mathcal{B}}} \ \circ \ t_{\widetilde{\mathcal{B}}} )}> \nu(\Sigma) \times \widetilde{\mathcal{B}}_{\nu(\Sigma),\bar{E}'} \end{CD} \] Hence we have \[ \big( \operatorname{id}_{\nu(\Sigma)} \times (\psi_{\widetilde{\mathcal{B}}} \circ t_{\widetilde{\mathcal{B}}}) \big)^* \ \widetilde{\mathbb{E}}'_{\nu(\Sigma)} \cong \widetilde{\mathbb{E}}_{\nu(\Sigma)} \otimes \pi_1^* (L|_{\nu(\Sigma)}) \] and we obtain \begin{equation} \label{eq slant} \begin{split} (\psi_{\widetilde{\mathcal{B}}} \circ t_{\widetilde{\mathcal{B}}})^* (c_2(\widetilde{\mathbb{E}}'_{\nu(\Sigma)})/[\Sigma]) &=c_2(\widetilde{\mathbb{E}}_{\nu(\Sigma)} \otimes \pi_1^* (L|_{\nu(\Sigma)}))/[\Sigma] \\ &=\big\{ c_2(\widetilde{\mathbb{E}}_{\nu(\Sigma)}) + \pi_1^* c_1(L|_{\nu(\Sigma)}) \cup c_1(\widetilde{\mathbb{E}}_{\nu(\Sigma)}) + \pi_1^* c_1(L|_{\nu(\Sigma)})^2 \big\} / [\Sigma] \\ &=c_2(\widetilde{\mathbb{E}}_{\nu(\Sigma)})/[\Sigma] + \big\{ \pi_1^* c_1(L|_{\nu(\Sigma)}) \cup c_1(\widetilde{\mathbb{E}}_{\nu(\Sigma)}) \big\} /[\Sigma] \\ &\in H^2(\widetilde{\mathcal{B}}_{\nu(\Sigma), \bar{E}}; \mathbb{Z}). \end{split} \end{equation} By the K\"unneth formula, we can write \[ \begin{split} c_1(\widetilde{\mathbb{E}}_{\nu(\Sigma)}) &=c_1(\widetilde{\mathbb{E}}_{\nu(\Sigma)})_{\nu(\Sigma)} + c_1(\widetilde{\mathbb{E}}_{\nu(\Sigma)})_{\widetilde{\mathcal{B}}} \\ &\in H^2(\nu(\Sigma) \times \widetilde{\mathcal{B}}_{\nu(\Sigma), \bar{E}}; \mathbb{Z}) \cong H^2(\nu(\Sigma); \mathbb{Z}) \oplus H^2(\widetilde{\mathcal{B}}_{\nu(\Sigma), \bar{E}}; \mathbb{Z}) \end{split} \] since $\widetilde{\mathcal{B}}_{\nu(\Sigma), \bar{E}}$ is simply connected (\cite{AB}). The action of $\mathcal{G}_{\nu(\Sigma), \bar{E}}^0$ on $\Lambda^2 \bar{E}|_{\nu(\Sigma)}$ is trivial, since the determinants of elements of $\mathcal{G}^0_{\nu(\Sigma), \bar{E}}$ are equal to $1$ by definition. Hence $\Lambda^2 \widetilde{\mathbb{E}}_{\nu(\Sigma)}$ is the pull-back $\pi_1^* (\Lambda^2 \bar{E}|_{\nu(\Sigma)})$.
This implies that the $\widetilde{\mathcal{B}}_{\nu(\Sigma)}$-part $c_1(\widetilde{\mathbb{E}}_{\nu(\Sigma)})_{\widetilde{\mathcal{B}}}$ of $c_1(\widetilde{\mathbb{E}}_{\nu(\Sigma)})=c_1(\Lambda^2 \widetilde{\mathbb{E}}_{\nu(\Sigma)})$ is $0$ and we have \[ \begin{split} \big\{ \pi_1^* c_1(L|_{\nu(\Sigma)}) \cup c_1(\widetilde{\mathbb{E}}_{\nu(\Sigma)}) \big\} /[\Sigma] &=\big\{ \pi_1^* c_1(L|_{\nu(\Sigma)}) \cup c_1(\widetilde{\mathbb{E}}_{\nu(\Sigma)})_{\nu(\Sigma)} \big\} / [\Sigma]\\ &=0 \in H^2(\widetilde{\mathcal{B}}_{\nu(\Sigma)}; \mathbb{Z}). \end{split} \] From the equation (\ref{eq slant}), we obtain \begin{equation} \label{eq c2} (\psi_{\widetilde{\mathcal{B}}} \circ t_{\widetilde{\mathcal{B}}})^* (c_2(\widetilde{\mathbb{E}}'_{\nu(\Sigma)})/[\Sigma]) = c_2(\widetilde{\mathbb{E}}_{\nu(\Sigma)})/[\Sigma] \in H^2(\widetilde{\mathcal{B}}_{\nu(\Sigma)}; \mathbb{Z}). \end{equation} \end{proof} \noindent {\it Proof of Lemma \ref{lem mu}.} \\ Lemma \ref{lem mu} follows from (\ref{eq c2}) and the following commutative diagram: \[ \xymatrix{ \widetilde{\mathcal{B}}_{\nu(\Sigma),\bar{E}} \ar[rr]^{\cong}_{\psi_{\widetilde{\mathcal{B}}} \circ t_{\widetilde{\mathcal{B}}}} & & \widetilde{\mathcal{B}}_{\nu(\Sigma), \bar{E}'} \\ \widetilde{\mathcal{B}}^*_{X,\bar{E}} \ar[u]^{\tilde{r}_{\nu(\Sigma)}} \ar[rr]^{\cong} \ar[d] & & \widetilde{\mathcal{B}}^*_{X, \bar{E}'} \ar[u]_{\tilde{r}_{\nu(\Sigma)}} \ar[d] \\ \mathcal{B}_{X,\bar{E}}^* \ar[rr]^{\cong} \ar[dr]^{\cong} & & \mathcal{B}_{X,\bar{E}'}^* \ar[dl]_{\cong} \\ & \mathcal{B}_{X,P}^* } \] \qed \noindent {\it Proof of independence of $q_{X}^{u_1}$ from $\bar{P}$.} \\ Take homology classes $[\Sigma_i] \in H_2(X; \mathbb{Z})$ with $[\Sigma_i] \cdot [\Sigma_i] \equiv 0 \mod 2$ for $i=1,\dots,d$ and choose $U(2)$-lifts $\bar{P}$ and $\bar{P}'$ of $P$. Then we obtain line bundles $\mathcal{L}_{\Sigma_i}$ and $\mathcal{L}_{\Sigma_i}'$ over $\mathcal{B}_{\nu(\Sigma_i), \bar{E}}^*$ and $\mathcal{B}_{\nu(\Sigma_i), \bar{E}'}^*$. We denote the zero loci of sections $s_{\Sigma_i}$, $s_{\Sigma_i}'$ of $\mathcal{L}_{\Sigma_i}$, $\mathcal{L}_{\Sigma_i}'$ by $V_{\Sigma_i}$, $V_{\Sigma_i}'$. By Lemma \ref{lem pull back}, $(\psi_{\mathcal{B}^*} \circ t_{{\mathcal{B}}^*})^* \mathcal{L}_{\Sigma_i}'$ is isomorphic to $\mathcal{L}_{\Sigma_i}$. We fix an isomorphism and regard the section $s_{\Sigma_i}'$ of $\mathcal{L}_{\Sigma_i}'$ as a section of $\mathcal{L}_{\Sigma_i}$ through the identifications \[ \psi_{\mathcal{B}^*} \circ t_{{\mathcal{B}}^*}:\mathcal{B}_{\nu(\Sigma_i), \bar{E}}^* \stackrel{\cong}{\longrightarrow} \mathcal{B}^*_{\nu(\Sigma_i), \bar{E}'}, \quad (\psi_{\mathcal{B}^*} \circ t_{\mathcal{B}^*})^* \mathcal{L}_{\Sigma_i}' \cong \mathcal{L}_{\Sigma_i}. \] We take paths $\{ s_{\Sigma_i,t} \}_{t \in [0,1]}$ between $s_{\Sigma_i}$ and $s_{\Sigma_i}'$. In the same way as in Lemma \ref{lem wd}, we have a bordism between $M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d}$ and $M_P \cap V_{\Sigma_1}' \cap \cdots \cap V_{\Sigma_d}'$. Hence we obtain \[ \< u_1, M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \> = \< u_1, M_P \cap V_{\Sigma_1}' \cap \cdots \cap V_{\Sigma_d}' \> \in \mathbb{Z}_2. \] \qed Lastly we show that $q_{X}^{u_1}$ is independent of the choice of surfaces $\Sigma_i$ representing the homology classes $[\Sigma_i]$ and that $q_{X}^{u_1}$ is multi-linear with respect to $[\Sigma_1], \dots, [\Sigma_d]$. This follows directly from the following lemma. \begin{lemma} Let $X$ and $P$ be as in Proposition \ref{prop well-defined}.
Take homology classes $[\Sigma_1], \dots, [\Sigma_d] \in H_2(X;\mathbb{Z})$ with self-intersection numbers even. Moreover assume that \[ [\Sigma_1] = [\Sigma_1'] + [\Sigma_1''] \in H_2(X;\mathbb{Z}), \quad [\Sigma_1'] \cdot [\Sigma_1'] \equiv [\Sigma_1''] \cdot [\Sigma_1''] \equiv 0 \mod 2. \] Then we have \begin{gather*} \< u_1,M_P \cap V_{\Sigma_1} \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d} \>=\\ \< u_1,M_P \cap V_{\Sigma_1'} \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d} \>+ \< u_1,M_P \cap V_{\Sigma_1''} \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d} \> \in \mathbb{Z}_2. \end{gather*} \end{lemma} \begin{proof} By definition, we have \[ \tilde{\mu}_{\bar{E}}([\Sigma_1]) =c_2(\widetilde{\mathbb{E}})/[\Sigma_1] =c_2(\widetilde{\mathbb{E}})/[\Sigma_1'] + c_2(\widetilde{\mathbb{E}})/[\Sigma_1''] =\tilde{\mu}_{\bar{E}}([\Sigma_1'])+\tilde{\mu}_{\bar{E}}([\Sigma_1'']) \in H^2(\widetilde{\mathcal{B}}_{\bar{E}}; \mathbb{Z}). \] The homomorphism $\beta^*:H^2(\mathcal{B}_{\bar{E}}^*; \mathbb{Z}) \rightarrow H^2(\widetilde{\mathcal{B}}_{\bar{E}}^*; \mathbb{Z})$ is injective and $\tilde{\mu}_{\bar{E}}([\Sigma_1])$, $\tilde{\mu}_{\bar{E}}([\Sigma_1'])$, $\tilde{\mu}_{\bar{E}}([\Sigma_1''])$ lie in the image of $\beta^*$ by Lemma \ref{lem beta}. Hence we have \[ \mu([\Sigma_1])=\mu([\Sigma_1'])+\mu([\Sigma_1'']) \in H^2(\mathcal{B}_P^*; \mathbb{Z}). \] Since $M_{P} \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d}$ is compact from Lemma \ref{lem compact}, we have \[ \begin{split} &\< u_1, M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \> \\ &\quad = \< u_1 \cup \mu([\Sigma_1]), M_P \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d} \> \\ &\quad =\< u_1 \cup (\mu([\Sigma_1']) + \mu([\Sigma_1''])), M_P \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d} \> \\ &\quad =\< u_1 \cup \mu([\Sigma_1']), M_P \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d} \> + \< u_1 \cup \mu([\Sigma_1'']), M_P \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d} \> \\ &\quad =\< u_1, M_P \cap V_{\Sigma_1'} \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d} \> + \< u_1, M_P \cap V_{\Sigma_1''} \cap V_{\Sigma_2} \cap \cdots \cap V_{\Sigma_d} \>. \end{split} \] \end{proof} \section{A connected sum formula for $Y \# S^2 \times S^2$} \subsection{Statement of the result} As is well known, the Donaldson invariants vanish for the connected sum $X_1 \# X_2$ provided $b^+(X_i)>0$ for $i=1, 2$ (\cite{poly}). In \cite{FS}, however, Fintushel and Stern defined some torsion invariants by using instantons on $SU(2)$-bundles and they showed that their $SU(2)$-torsion invariants can be non-trivial for connected sums of the form $Y \# S^2 \times S^2$. In this section, we show a similar non-vanishing theorem for our $SO(3)$-torsion invariants. Let $Y$ be a closed, oriented, simply connected, non-spin $4$-manifold with $b^+(Y)=2a-1$ for $a>1$. Let $Q$ be an $SO(3)$-bundle with $w_2(Q)$ equal to $w_2(Y)$ and $p_1(Q)$ equal to $\sigma(Y)+4$ modulo $8$. Suppose that the dimension of $M_Q$ is $2d$ for a non-negative integer $d$. When we fix an orientation on the space $\mathcal{H}_g^+(Y)$ of self-dual harmonic $2$-forms on $Y$ and a lift $c \in H^2(Y;\mathbb{Z})$ of $w_2(Q) \in H^2(Y;\mathbb{Z}_2)$, we have the Donaldson invariant \[ q_{k-1,w,Y}:\otimes^d H_2(Y;\mathbb{Z}) \longrightarrow \mathbb{Q} \] where \[ k-1=-\frac{1}{4}p_1(Q) \in \mathbb{Q}, \quad w=w_2(Q) \in H^2(Y;\mathbb{Z}_2). \] When the self-intersection numbers $[\Sigma_i] \cdot [\Sigma_i]$ are even for $i=1,\dots, d$, the value $q_{k-1,w,Y}([\Sigma_1],\dots,[\Sigma_d])$ is in $\mathbb{Z}$.
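Before introducing the relevant bundle over the connected sum, we record the dimension count behind the shift by five appearing below; this is a routine application of the dimension formula from Section \ref{invariants}. If $P$ is an $SO(3)$-bundle over $X=Y \# S^2 \times S^2$ with $p_1(P)=p_1(Q)-4$, then $b^+(X)=b^+(Y)+1=2a$ and \[ \dim M_P=-2p_1(P)-3(1+b^+(X))=-2p_1(Q)+8-3(2+b^+(Y))=\dim M_Q+5=2d+5. \]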
We consider an $SO(3)$-bundle $P$ over $X=Y \# S^2 \times S^2$ satisfying \begin{equation*} \label{eq w_2 P} w_2(P)=w_2(X), \quad p_1(P)=p_1(Q)-4, \end{equation*} so that $P$ satisfies (\ref{eq w_2}). The dimension of $M_P$ is given by $2d+5$. We define surfaces $\Sigma$, $\Sigma'$ embedded in $S^2 \times S^2$ by \[ \Sigma=S^2 \times \{ pt \}, \quad \Sigma'=\{ pt \} \times S^2 \subset S^2 \times S^2. \] Then we have \[ [\Sigma] \cdot [\Sigma] \equiv [\Sigma'] \cdot [\Sigma'] \equiv 0 \mod 2. \] Now $q_{k,w,Y \# S^2 \times S^2}^{u_1}([\Sigma_1],\dots,[\Sigma_d],[\Sigma],[\Sigma'])$ is defined for homology classes $[\Sigma_i]$ of $Y$ with even self-intersection numbers. The following is an $SO(3)$-version of Theorem 1.1 in \cite{FS}. \begin{theorem} \label{main thm} In the above situation, we have \[ q_{k,w,Y \# S^2 \times S^2}^{u_1}([\Sigma_1],\dots,[\Sigma_d],[\Sigma],[\Sigma']) \equiv q_{k-1,w,Y}([\Sigma_1],\dots,[\Sigma_d]) \mod 2. \] \end{theorem} The proof is given in the following three subsections. \subsection{Notations and general facts} \label{facts} For the proof of Theorem \ref{main thm}, we will investigate the intersection $M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \cap V_{\Sigma'} \cap V_{\Sigma}$ when the neck of $Y \# S^2 \times S^2$ is very long. In preparation, we fix some notation and recall some facts about instantons over connected sums of $4$-manifolds. Let $Y_1$ and $Y_2$ be closed, oriented $4$-manifolds. The connected sum $X=Y_1 \# Y_2$ is constructed in the following way. Fix Riemannian metrics $g_1$ and $g_2$ on $Y_1$ and $Y_2$ which are flat in small neighborhoods of fixed points $y_1 \in Y_1$ and $y_2 \in Y_2$. For $N>1$ and $\lambda>0$ with $N\lambda^{\frac{1}{2}} \ll 1$, we put \[ \Omega_i=\Omega_{y_i}(\lambda,N)= \{ y \in Y_i | N^{-1} \lambda^{\frac{1}{2}} < d(y,y_i) < N \lambda^{\frac{1}{2}} \} \quad (i=1, 2). \] Let \[ \sigma:(TY_1)_{y_1} \stackrel{\cong}{\longrightarrow} (TY_2)_{y_2} \] be an orientation-reversing linear isometry. For each positive real number $\lambda>0$, we define \[ \begin{array}{rccc} f_{\lambda}: & (TY_1)_{y_1} \backslash \{ 0 \} & \longrightarrow & (TY_2)_{y_2} \backslash \{ 0 \} \\ & \xi & \longmapsto & \frac{\lambda}{| \xi |^2}\sigma(\xi). \end{array} \] This map $f_{\lambda}$ induces a diffeomorphism between $\Omega_1$ and $\Omega_2$. The connected sum $X$ of $Y_1$ and $Y_2$ is identified with \[ X(\lambda)=(Y_1 \backslash B_{y_1}(N^{-1} \lambda^{\frac{1}{2}}) ) \bigcup_{f_{\lambda}} (Y_2 \backslash B_{y_2}(N^{-1} \lambda^{\frac{1}{2}})) \] where $B_{y_i}(N^{-1} \lambda^{\frac{1}{2}})$ is the open ball centered on $y_i$ with radius $N^{-1} \lambda^{\frac{1}{2}}$. The metrics $g_1$ and $g_2$ define a conformal structure on $X$ since $g_i$ is flat in a small neighborhood of $y_i$. We fix a metric $g_{\lambda}$ on $X$ which represents the conformal structure. Moreover we assume that $g_{\lambda}$ is equal to $g_i$ on $Y_i \backslash B_{y_i}((N+1) \lambda^{\frac{1}{2}})$. \begin{definition} Fix a real number $q$ with $q>4$. Let $[A^{(n)}] \in M_P(g_{\lambda_n})$ be instantons over $X=Y_1 \# Y_2$ for a sequence $\lambda_n \rightarrow 0$. Let $z_1,\dots, z_l$ be points in $Y_1 \backslash \{ y_1 \}$, $z_1',\dots, z_m'$ be points in $Y_2 \backslash \{ y_2 \}$ and $A_i$ be connections over $Y_i$.
Then we say that $[A^{(n)}]$ is weakly convergent to $([A_1], [A_2];z_1,\dots,z_l, z_1',\dots, z_m')$ when $[A^{(n)}]$ is $L^q$-convergent to $([A_1],[A_2])$ over compact subsets in $(Y_1 \cup Y_2) \backslash \{ y_1, y_2, z_1, \dots,z_l, z_1',\dots, z_m'\}$ and $|F_{A^{(n)}}|^2$ converges as a measure to \[ |F_{A_1}|^2 + |F_{A_2}|^2 + 8 \pi^2 \left( \sum_{\nu=1}^l \delta_{z_{\nu}} + \sum_{\nu=1}^{m} \delta_{z_{\nu}'} \right) \] over compact subsets in $(Y_1 \backslash \{ y_1 \}) \cup (Y_2 \backslash \{ y_2 \})$. Here $\delta_z$ is the delta function supported at $z$. \end{definition} We use the following well-known theorem. \begin{theorem} [\cite{poly, DK}] \label{thm weak conv} Let $P$ be an $SO(3)$-bundle over $X=Y_1 \# Y_2$. Set $k=-p_1(P)/4$, $w=w_2(P)$, $w_i=w|_{Y_i}$. Let $[A^{(n)}] \in M_{k, w, X}(g_{\lambda_n})$ be instantons over $X$ for $\lambda_n \rightarrow 0$. Then there is a subsequence $\{ [A^{(n')}] \}_{n'}$ which is weakly convergent to $([A_1],[A_2];z_1,\dots, z_l, z_1',\dots, z_m')$ for some \[ [A_1] \in M_{k_1,w_1,Y_1}(g_1), \ [A_2] \in M_{k_2,w_2,Y_2}(g_2), \ z_1,\dots, z_l \in Y_1 \backslash \{ y_1 \}, \ z_1',\dots, z_m' \in Y_2 \backslash \{ y_2 \} \] with \[ k_1 \geq 0, \quad k_2 \geq 0, \quad k_1 + k_2 + l + m \leq k. \] \end{theorem} Next we review the gluing of instantons. The theory is standard; to fix notation, we recall it briefly. Let $A_i$ be instantons over $Y_i$. We denote the $SO(3)$-bundles carrying $A_i$ by $P_i$. We can construct instantons on $X=Y_1 \# Y_2$ close to $A_i$ on each factor. An outline of the construction is as follows. (See \cite{DK} Chapter 7 for details.) Let $b$ be a small positive number with $b \geq 4 N \lambda^{\frac{1}{2}}$. By using suitable cut-off functions and trivializations of $P_i$ on neighborhoods of $y_i$, we obtain connections $A_i'$ which are flat over the annuli $\Omega_i$ and equal to $A_i$ outside the balls centered at $y_i$ with radius $b$. Take an $SO(3)$-isomorphism $\rho$ between $(P_1)_{y_1}$ and $(P_2)_{y_2}$. We can spread this isomorphism by using the flat structures of $A_i'$, and obtain an isomorphism $g_{\rho}$ between $P_1|_{\Omega_1}$ and $P_2|_{\Omega_2}$ covering $f_{\lambda}$. We define an $SO(3)$-bundle $P_{\rho}$ over $X$ and a connection $A'(\rho)=A_1' \#_{\rho} A_2'$ on $P_{\rho}$ by gluing $P_i$, $A_i'$ through $g_{\rho}$. Then, in the large region outside the neck of $X$, $A'(\rho)$ satisfies the instanton equation, and $F^+_{A'(\rho)}$ is very small near the neck. To obtain a genuine instanton we have to perturb $A'(\rho)$. We consider the equation \begin{equation} \label{eq perturb} F_{A'(\rho) + a }^+ = 0 \end{equation} for $a \in \Omega_X^1(\mathfrak{g}_{P_{\rho}})$. To solve this equation, we take linear maps \[ \sigma_i:H_{A_i}^2 \longrightarrow \Omega_{Y_i}^+(\mathfrak{g}_{P_i}) \] such that $d_{A_i}^+ \oplus \sigma_i$ is surjective and, for each $h_i \in H^2_{A_i}$, the support of $\sigma_i(h_i)$ lies in the complement of the ball centered at $y_i$ with radius $b$. Then put \[ \sigma:=\sigma_1+\sigma_2:H^2_{A_1} \oplus H^2_{A_2} \longrightarrow \Omega_{X}^+(\mathfrak{g}_{P_{\rho}}). \] We can construct a right inverse of $d_{A'(\rho)}^+ + \sigma$ starting from right inverses of $d_{A_i}^+ + \sigma_i$. Decompose the right inverse as $P \oplus \pi$, where \[ P:\Omega_{X}^+(\mathfrak{g}_{P_{\rho}}) \longrightarrow \Omega^1(\mathfrak{g}_{P_{\rho}}), \quad \pi:\Omega_X^+(\mathfrak{g}_{P_{\rho}}) \longrightarrow H_{A_1}^2 \oplus H_{A_2}^2.
\] Instead of (\ref{eq perturb}), we first consider the equation \[ F_{A'(\rho)+a}^+ + \sigma(h)=0 \] for $(a, h) \in \Omega_X^1(\mathfrak{g}_{P_{\rho}}) \times (H_{A_1}^2 \oplus H_{A_2}^2)$. We seek a solution of this equation of the form $a=P \xi$, $h=-\pi \xi$. A short calculation shows that the equation is then equivalent to \[ \xi + (P \xi \wedge P \xi)^+ = - F_{A'(\rho)}^+. \] Using the contraction mapping principle, we can show that there is a unique small solution $\xi_{\rho} \in \Omega^+(\mathfrak{g}_{P_{\rho}})$ of this equation. We get a genuine instanton if and only if $\pi \xi_{\rho}=0$. Therefore there is a map \[ \Psi:Gl_{y_1,y_2} \longrightarrow H^2_{A_1} \times H_{A_2}^2 \] such that the solutions of $\Psi=0$ represent instantons over $X$. Here $Gl_{y_1,y_2}$ is the space of $SO(3)$-equivariant isomorphisms between $(P_1)_{y_1}$ and $(P_2)_{y_2}$. We fix an element $\rho_0 \in Gl_{y_1, y_2}$ to identify $Gl_{y_1,y_2}$ with $SO(3)$. We can incorporate the deformations of $[A_i]$ into this construction. For small neighborhoods $U_{A_i}$ of $0$ in $H_{A_i}^1$, we have a map \[ \Psi:T:=U_{A_1} \times U_{A_2} \times SO(3) \longrightarrow H_{A_1}^2 \times H_{A_2}^2 \] such that elements of $\Psi^{-1}(0)$ correspond to instantons. Let $\Gamma_{A_i}$ be the isotropy group of $A_i$ in the gauge group and put $\Gamma=\Gamma_{A_1} \times \Gamma_{A_2}$. We assume that $U_{A_i}$ is $\Gamma_{A_i}$-invariant. Then there are natural actions of $\Gamma$ on $T$ and on $H_{A_1}^2 \times H_{A_2}^2$. We can show that $\Psi$ is $\Gamma$-equivariant, and instantons corresponding to elements of $\Psi^{-1}(0)$ are gauge equivalent to each other if and only if they are in the same $\Gamma$-orbit. Hence we can regard $\Psi^{-1}(0)/\Gamma$ as a subspace of $M_P$. An important feature is that all instantons over $X=Y_1 \# Y_2$ which are close to $A_i$ over $Y_i$ arise from the above description. The precise statement is the following: Let $Y_i''$ be the complement of the ball of radius $\lambda^{\frac{1}{2}}/2$ centered at $y_i$. Take instantons $A_i$ over $Y_i$ and a positive number $\nu>0$. Then put \begin{equation} \label{eq U} U_{\lambda}(\nu):= \{ \ [A] \in \mathcal{B}_X^* \ | \ d_{ q}([A|_{Y_i''}], [A_i |_{Y_i''}]) < \nu, \ i=1, 2 \ \}. \end{equation} Here $q$ is the fixed real number with $q>4$ and $d_q$ is the distance induced by the $L^q$-norm over $Y_i''$. If $\nu>0$ is small, then there is a positive number $\lambda(\nu)>0$ such that, for $\lambda< \lambda(\nu)$, we can take a neighborhood $T$ of $\{ 0 \} \times \{ 0 \} \times SO(3)$ in $H_{A_1}^1 \times H_{A_2}^1 \times SO(3)$ for which $M_P(g_{\lambda}) \cap U_{\lambda}(\nu)$ is homeomorphic to $\Psi^{-1}(0)/\Gamma$. Summing up: \begin{theorem} \label{thm gluing} Let $A_1, A_2$ be instantons on $Y_1, Y_2$. Then there is a $\Gamma=\Gamma_{A_1} \times \Gamma_{A_2}$-invariant neighborhood $T$ of $SO(3) \times \{ 0 \} \times \{ 0 \}$ in $SO(3) \times H^1_{A_1} \times H_{A_2}^1$ and a $\Gamma$-equivariant map \[ \Psi:T \longrightarrow H^2_{A_1} \times H_{A_2}^2 \] such that $\Psi^{-1}(0)/\Gamma$ is homeomorphic to an open set $N$ in $M_{P}$. Moreover, for a small positive number $\nu>0$, there are $\lambda(\nu)>0$ and $T$ such that if $\lambda<\lambda(\nu)$, then $N=M_P(g_{\lambda}) \cap U_{\lambda}(\nu)$.
\end{theorem} In particular, when $Y_2$ is $S^4$ and $A_2$ is the fundamental instanton $J$ with instanton number one, we have: \begin{corollary} \label{coro gluing} Let $A_1$ be an instanton over $Y_1$ and $A_2$ be the fundamental instanton $J$ over $S^4$. For a small positive number $\nu>0$, there are a positive number $\lambda_0>0$, a neighborhood $U_{A_1}$ of $0$ in $H^1_{A_1}$, a neighborhood $U_{0}$ of $0$ in $S^4=\mathbb{R}^4 \cup \{ \infty \}$, and a $\Gamma=\Gamma_{A_1}$-equivariant map \[ \Psi:U_{A_1} \times U_{0} \times (0, \lambda_0) \times SO(3) \longrightarrow H_{A_1}^2 \] such that $\Psi^{-1}(0)/\Gamma$ is naturally homeomorphic to $M_P \cap U_{\lambda_0}(\nu)$. \end{corollary} \begin{remark} \label{rem gluing} We can generalize the statements of Theorem \ref{thm gluing} and Corollary \ref{coro gluing} to the case of three or more instantons. \end{remark} \subsection{Shrinking the neck} \label{neck} In the situation of Theorem \ref{main thm}, we investigate \[ M_P(g_{\lambda}) \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \cap V_{\Sigma} \cap V_{\Sigma'} \] as $\lambda$ tends to $0$. We use the notation of \S \ref{facts}. Let $Y_1$ be a closed, oriented, simply connected, non-spin $4$-manifold with $b^+(Y_1)=2a-1$ for $a>1$, and write $Y_2$ for $S^2 \times S^2$. Let $P$ be an $SO(3)$-bundle over $X=Y_1 \# Y_2$ satisfying (\ref{eq w_2}). Assume that the virtual dimension of $M_P$ is $2d+5$ for a non-negative integer $d$. Take homology classes $[\Sigma_1], \dots, [\Sigma_d] \in H_2(Y_1; \mathbb{Z})$ with $[\Sigma_i] \cdot [\Sigma_i] \equiv 0 \mod 2$. Set $\Sigma=S^2 \times \{ pt \}, \Sigma'=\{ pt \} \times S^2 \subset Y_2$. Take instantons \[ [A^{(n)}] \in M_P(g_{\lambda_n}) \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \cap V_{\Sigma} \cap V_{\Sigma'} \] for a sequence $\lambda_n \rightarrow 0$. By Theorem \ref{thm weak conv}, a subsequence of $\{ [A^{(n)}] \}_{n}$ is weakly convergent to some \[ ([A_1], [A_2]; z_1,\dots, z_{l}, z_1',\dots, z_{m}'), \] where \[ [A_1] \in M_{k_1,w,Y_1}(g_1), \ [A_2] \in M_{k_2,Y_2}(g_2), \ z_1,\dots, z_l \in Y_1 \backslash \{ y_1 \}, \ z_1',\dots , z_m' \in Y_2 \backslash \{ y_2 \}. \] \begin{lemma} \label{lem bubble} In the above situation, we have \begin{gather*} k_1=k-1,\ l=0, \ [A_1] \in M_{k-1,w,Y_1}(g_1) \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d}, \\ m=1, \quad z_1' \in \nu(\Sigma) \cap \nu(\Sigma'), \quad [A_2]=[\Theta_{Y_2}]. \end{gather*} Here $\Theta_{Y_2}$ is the trivial connection on $Y_2$. \end{lemma} \begin{proof} From Theorem \ref{thm weak conv}, we have \begin{equation} \label{eq p1} k_1+k_2+l+m \leq k. \end{equation} Let $p$ be the number of the $\nu(\Sigma_i)$ which contain some point $z_{\alpha}$, and let $q$ be the number of $\nu(\Sigma)$, $\nu(\Sigma')$ which contain some point $z_{\alpha}'$ (this $q$ is not to be confused with the exponent $q>4$ fixed in \S \ref{facts}). Then by the transversality condition (\ref{eq transverse}), we have \begin{equation} \label{eq p q} 0 \leq p \leq 2l, \quad 0 \leq q \leq 2m. \end{equation} After reordering the surfaces, we may assume \[ [A_1] \in M_{k_1,w,Y_1} \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d-p}}. \] Since $w_2(P)|_{Y_1}$ is non-trivial, we can show $k_1>0$ in the same way as in the proof of Lemma \ref{lem compact}. For generic sections, the intersection $M_{k_1,w,Y_1} \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d-p}}$ is transverse by Lemma \ref{lem transverse}. Hence we have \begin{equation} \label{eq M 1} 2(d-p) \leq \dim M_{k_1,w,Y_1}. \end{equation} We would like to show $k_2=0$. Suppose that $k_2$ is positive.
Then we also obtain \begin{equation} \label{eq M 2} 2(2-q) \leq \dim M_{k_2,Y_2}. \end{equation} By the index theorem, we have the formula \begin{equation} \label{eq sum} \dim M_{k_1,w,Y_1} + \dim M_{k_2,Y_2}+3=\dim M_{k_1+k_2,w,X}. \end{equation} From (\ref{eq p1}), (\ref{eq M 1}), (\ref{eq M 2}) and (\ref{eq sum}), we have \[ 2(d-p)+2(2-q)+3 \leq \dim M_{k_1+k_2,w,X} \leq \dim M_{k,w,X}-8(l+m)=2d+5-8(l+m). \] This inequality and (\ref{eq p q}) imply \[ 8(l+m) +2 \leq 2p+2q \leq 4(l+m), \] which is a contradiction. Hence $k_2=0$, which implies that $[A_2]$ is the class of the trivial flat connection $[\Theta_{Y_2}]$. Since $k_2$ is $0$, the virtual dimension of $M_{0,Y_2}$ is $-6$. From (\ref{eq sum}), we have \begin{equation} \label{eq sum 2} \dim M_{k_1,w,Y_1}-3 = \dim M_{k_1,w,X}. \end{equation} By (\ref{eq p1}), (\ref{eq p q}), (\ref{eq M 1}) and (\ref{eq sum 2}), we have \[ 2(d-2l) - 3 \leq 2(d-p) - 3 \leq \dim M_{k_1,w,Y_1}-3= \dim M_{k_1,w,X} \leq \dim M_{k,w,X} - 8(l+m). \] Therefore we obtain \[ 4l+8m \leq 8. \] In particular, we have $m \leq 1$. Next we show $m=1$. Suppose $m=0$; then $[\Theta_{Y_2}] \in V_{\Sigma}$ and $[\Theta_{Y_2}] \in V_{\Sigma'}$. To obtain a contradiction, we need to choose $V_{\Sigma}$ and $V_{\Sigma'}$ in a specific way. As mentioned in Remark \ref{rem line bundle}, we can choose $V_{\Sigma}$ and $V_{\Sigma'}$ so that they do not include $[\Theta_{Y_2}]$. With such a choice we have a contradiction, hence $m=1$. Since $4l+8m \leq 8$ and $m=1$, we obtain $l=0$ and hence $p=0$; and since $[\Theta_{Y_2}]$ lies in neither $V_{\Sigma}$ nor $V_{\Sigma'}$, the conditions coming from $V_{\Sigma}$ and $V_{\Sigma'}$ must be carried by the bubble point, so $z_1' \in \nu(\Sigma) \cap \nu(\Sigma')$. Hence \[ [A_1] \in M_{k_1,w,Y_1} \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d}. \] Lastly we show $k_1=k-1$. From (\ref{eq p1}), we have $k_1 \leq k-1$. On the other hand, from (\ref{eq M 1}) we have \[ 2d \leq \dim M_{k_1,w,Y_1}=\dim M_{k-1,w,Y_1}-8(k-1-k_1)=2d-8(k-1-k_1). \] This implies $k_1 \geq k-1$. Therefore $k_1=k-1$, and this completes the proof. \end{proof} Let $w_0'$ be the unique intersection point of $\Sigma$ and $\Sigma'$. Fix a small neighborhood $U_{w_0'}$ of $w_0'$ with $\nu(\Sigma) \cap \nu(\Sigma') \subset U_{w_0'}$. For simplicity, we suppose that the metric $g_2$ on $Y_2$ is flat on $U_{w_0'}$. Take \[ [A^{(n)}] \in M_P(g_{\lambda_n}) \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \cap V_{\Sigma} \cap V_{\Sigma'} \] for $\lambda_n \rightarrow 0$ and assume that $\{ [A^{(n)}] \}_{n \in \mathbb{N} }$ weakly converges to $([A_1], [\Theta_{Y_2}]; z_1')$ for some $[A_1] \in M_{k-1,w,Y_1} \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d}$, $z_1' \in \nu(\Sigma) \cap \nu(\Sigma')$. When $n$ is sufficiently large, we can define the local center of mass $c_n \in U_{w_0'}$ and the scale $\lambda_n'>0$ of $[A^{(n)}]$ around $z_1'$. If $n$ is large enough, then we obtain \[ \int_{U_{w_0'}} | F_{A^{(n)}}|^2 d\mu_{g_2} > 4\pi^2 \] since $|F_{A^{(n)}}|^2$ converges to $8\pi^2 \delta_{z_1'}$ on $U_{w_0'}$. We define the center of mass $c_n$ to be the center of the smallest ball in $U_{w_0'}$ on which the integral of $|F_{A^{(n)}}|^2$ is equal to $4 \pi^2$, and the scale $\lambda_n'$ to be the radius of this ball. The center of mass and the scale are determined uniquely (\cite{appli}). The center $c_n$ converges to $z_1'$ and the scale $\lambda_n'$ converges to $0$. Let $m:\mathbb{R}^4 \rightarrow S^4=\mathbb{R}^4 \cup \{ \infty \}$ be the stereographic map and $d_{\lambda}:\mathbb{R}^4 \rightarrow \mathbb{R}^4$ be the map $d_{\lambda}(y)=\lambda^{-1}y$. Put $\chi_n:=m \circ d_{\lambda_n'}$.
Then $\chi_n$ induces a conformal isomorphism between $X$ and the connected sum \[ X \# S^4=(X \backslash B_{c_n}(N^{-1}\lambda_n')) \cup_{f_{\lambda_n'}} (S^4 \backslash B_{\infty}(N^{-1} \lambda_n') ) \] since the metric $g_2$ is flat on $U_{w_0'}$. Here $f_{\lambda_n'}$ is defined in the following way: using geodesic coordinates near $c_n$ and the stereographic map, we identify $(TX)_{c_n}$ with $(TS^4)_{0}$. Let $\sigma'$ be the natural, orientation-reversing isometry between $(TS^4)_{0}$ and $(TS^4)_{\infty}$; then $f_{\lambda_n'}$ is given by \[ \begin{array}{rccc} f_{\lambda_n'}: & (TX)_{c_n} \backslash \{ 0 \} & \longrightarrow & (TS^4)_{\infty} \backslash \{ 0 \} \\ & \xi & \longmapsto & \frac{\lambda_n'}{|\xi|^2} \sigma'(\xi). \end{array} \] We can regard $A^{(n)}$ as an instanton on $X \# S^4$ which is close to $A_1$ on $Y_1$, to $\Theta_{Y_2}$ on $Y_2$, and to the standard instanton $J$ on $S^4$. Fix a small positive number $\lambda_0$ and a small neighborhood $U_{[A_1]}'$ of $[A_1]$ in $M_Q$. Let $O_{[A_1]} \subset \mathcal{B}_{P}^*$ be a small open neighborhood of \[ \{ \ [B' \ \#_{y_1,\lambda, \rho} \ \Theta_{Y_2} \ \#_{z_1', \lambda', \rho'} \ J' ] \ | \ B \in U_{[A_1]}', \ \lambda, \lambda' \in (0, \lambda_0), \ \rho, \rho' \in SO(3), \ z_1' \in \nu(\Sigma) \cap \nu(\Sigma') \ \}. \] Here $B', J'$ are connections which are flat near $y_1, \infty$ and equal to $B, J$ outside $b$-balls. (The real number $b$ is the small positive number fixed in \S \ref{facts}.) The instanton $[A^{(n)}]$ is in $O_{[A_1]}$ when $n$ is large. We can define the local centers for elements of $O_{[A_1]}$, and we have a map $p:O_{[A_1]} \rightarrow U_{w_0'}$ which maps connections to their centers. By Donaldson \cite{conn} Proposition (3.18), we can take sections $s_{\Sigma}$, $s_{\Sigma'}$ such that $O_{[A_1]} \cap V_{\Sigma}$, $O_{[A_1]} \cap V_{\Sigma'}$ are equal to $p^{-1}(U_{z_1'}' \cap \Sigma)$, $p^{-1}( U_{z_1'}' \cap \Sigma')$. Hence we may suppose that the center $c_n$ of $[A^{(n)}]$ is $w_0'$ for large $n$. We denote $S^4$ by $Y_3$, denote $\Theta_{Y_2}$, $J$ by $A_2$, $A_3$, and put \[ Y_{1,n}''=Y_1 \backslash B_{y_1}(\lambda_n / 2), \quad Y_{2,n}''=Y_2 \backslash ( B_{y_2}(\lambda_n / 2) \cup B_{w_0'}(\lambda_n' / 2)), \quad Y_{3,n}''=Y_3 \backslash B_{\infty}(\lambda_n' / 2). \] For $\nu>0$, put \[ U_{[A_1],\lambda_n}(\nu)= \{ \ [A] \in \mathcal{B}_{X \# S^4}^* \ | \ d_{q}([A|_{Y_{i,n}''}], [A_i|_{Y_{i,n}''}])< \nu, \ i=1, 2, 3 \ \}. \] We have proved the following: \begin{lemma} Fix a positive number $\nu>0$. Take instantons $[A^{(n)}] \in M_P(g_{\lambda_n}) \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \cap V_{\Sigma} \cap V_{\Sigma'}$ for a sequence $\lambda_n \rightarrow 0$. Then $[A^{(n)}]$ is in $U_{[A_1], \lambda_n}(\nu)$ for some $[A_1] \in M_Q \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d}$ when $n$ is sufficiently large. \end{lemma} Fix $[A_1] \in M_{Q} \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d}$ and a small positive number $\nu$. By Theorem \ref{thm gluing}, Corollary \ref{coro gluing} and Remark \ref{rem gluing}, there are a small neighborhood $U_{A_1}$ of $0$ in $H_{A_1}^1$, a positive real number $\lambda_0$, and a $\Gamma_{\Theta_{Y_2}}$-equivariant map \[ \Psi: T=U_{A_1} \times SO(3) \times U_{w_0'} \times (0, \lambda_0) \times SO(3) \longrightarrow H^2_{\Theta_{Y_2}} \] such that $\Psi^{-1}(0)/\Gamma_{\Theta_{Y_2}}$ is homeomorphic to $M_P(g_{\lambda_n}) \cap U_{[A_1], \lambda_n}(\nu)$.
Since the action of $\Gamma_{\Theta_{Y_2}}=SO(3)$ on $SO(3) \times SO(3)$ is the diagonal action, $\Psi^{-1}(0)/SO(3)$ is naturally identified with \[ \Psi^{-1}(0) \cap \big( U_{A_1} \times \{ 1 \} \times U_{w_0'} \times (0, \lambda_0) \times SO(3) \big). \] We write $T'$ for $U_{A_1} \times \{ 1 \} \times U_{w_0'} \times (0, \lambda_0) \times SO(3)$. Since $T'$ parametrizes connections on $X$, it makes sense to take the intersection $T' \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \cap V_{\Sigma} \cap V_{\Sigma'}$. We can suppose \[ T' \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \cap V_{\Sigma} \cap V_{\Sigma'} = \{ 0 \} \times \{ 1 \} \times \{ w_0' \} \times (0,\lambda_0) \times SO(3). \] Hence $M_P(g_{\lambda_n}) \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \cap V_{\Sigma} \cap V_{\Sigma'} \cap U_{[A_1],\lambda_n}(\nu)$ is homeomorphic to \[ \Psi^{-1}(0) \cap \big( \{ 0 \} \times \{ 1 \} \times \{ w_0' \} \times (0,\lambda_0) \times SO(3) \big) \subset H^1_{A_1} \times SO(3) \times U_{w_0'} \times (0,\lambda_0) \times SO(3). \] Donaldson explicitly calculated the leading term of $\Psi$ in \cite{conn}. Using this explicit expression and calculations similar to those in \cite{conn} V, we can show the following: \begin{lemma} For generic metrics $g_1$ and $g_2$, and points $y_1$, $y_2$ and $w_0'$, the intersection \[ \Psi^{-1}(0) \cap \big( \{ 0 \} \times \{ 1 \} \times \{ w_0' \} \times (0,\lambda_0) \times SO(3) \big) \] is homeomorphic to \[ \{ c \lambda_{n} \} \times \gamma \subset (0, \lambda_0) \times SO(3), \] where $\gamma$ is a loop in $SO(3)$ which represents the generator of $\pi_1(SO(3)) \cong \mathbb{Z}_2$ and $c>0$ is a constant independent of $n$. \end{lemma} Define $N_{[A_1]}$ by \begin{equation} \label{eq NA} N_{[A_1]}=\{ \ [A_1' \#_{\lambda_n} \Theta_{Y_2} \#_{w_0', c \lambda_n, \rho} J'] \ | \ \rho \in \gamma \ \}. \end{equation} We have obtained the following: \begin{corollary} \label{coro intersection} Let $Y$ be a closed, oriented, simply connected, non-spin $4$-manifold with $b^+(Y)=2a-1$ for $a>1$ and $P$ be an $SO(3)$-bundle over $X=Y \# S^2 \times S^2$ which satisfies the condition (\ref{eq w_2}). Suppose that the virtual dimension of $M_P$ is $2d+5$ for a non-negative integer $d$. Take $d$ homology classes $[\Sigma_i]$ in $H_2(Y; \mathbb{Z})$ with even self-intersection numbers. Then for a small $\lambda>0$, generic metrics $g_1$ and $g_2$, and generic points $y_1,y_2$ and $w_0'$, the intersection \[ M_P(g_{\lambda}) \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d} \cap V_{\Sigma} \cap V_{\Sigma'} \] is homeomorphic to \[ \coprod_{[A_1] \in M_Q \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d}} N_{[A_1]}. \] \end{corollary} \subsection{End of the proof} From Corollary \ref{coro intersection}, we have \[ q_{k,w,Y \# S^2 \times S^2}^{u_1}([\Sigma_1],\dots,[\Sigma_d], [\Sigma], [\Sigma'])= \sum_{[A_1]} \< u_1, N_{[A_1]} \> \in \mathbb{Z}_2, \] where $[A_1]$ runs over $M_Q \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d}$. Therefore, to prove Theorem \ref{main thm}, it suffices to show that the pairing $\< u_1, N_{[A_1]} \>$ is non-trivial. This last step is carried out by making use of the following proposition due to Akbulut, Mrowka and Ruan. \begin{proposition} [\cite{AMR}] \label{prop u1} Let $X_i$ be closed, oriented, simply connected $4$-manifolds for $i=1,2$ and $x_i$ be points of $X_i$. Take $SO(3)$-bundles $P_i$ over $X_i$ with $w_2(P_i)$ equal to $w_2(X_i)$.
Choose $U(2)$-lifts $\bar{P}_i$ of $P_i$ and assume that the second Chern numbers of $\bar{P}_i$ are odd. (In this case, $P_1 \# P_2$ satisfies the condition (\ref{eq w_2}). See Remark \ref{rem pi1}.) We fix trivializations of $P_i$ on small neighborhoods $U_{x_i}$ of $x_i$. For irreducible connections $B_i$ on $P_i$ which are trivial on $U_{x_i}$ with respect to the fixed trivializations, we have a family of connections \[ G:=\{ \ [B_1 \#_{\rho} B_2] \ | \ \rho \in SO(3) \ \} \ (\cong SO(3)) \subset \mathcal{B}_{P_1 \# P_2}^*. \] Then the restriction $u_1|_G$ is non-trivial in $H^1(G; \mathbb{Z}_2) \cong \mathbb{Z}_2$. \end{proposition} In our case, \[ X_1=Y \# S^2 \times S^2,\ P_1=Q \# P_{S^2 \times S^2}, \ B_1=A_1' \# \Theta_{S^2 \times S^2}, \ X_2=S^4, \ P_2=P_{S^4}/\{ {\pm 1} \}, \ B_2=J'. \] Here $Q$ is an $SO(3)$-bundle over $Y$ with \begin{equation} \label{eq Q w2} w_2(Q)=w_2(Y), \quad p_1(Q) \equiv \sigma (Y)+4 \mod 8, \end{equation} $P_{S^2 \times S^2}$ is the trivial $SO(3)$-bundle over $S^2 \times S^2$, and $P_{S^4}$ is an $SU(2)$-bundle with second Chern number equal to $1$. By the formulas \[ p_1(Q)=-4c_2(\bar{Q}) + c_1(\bar{Q})^2, \ w_2(Y)^2 \equiv \sigma(Y) \mod 8 \] and (\ref{eq Q w2}), we have \[ c_2(\bar{Q}) \equiv 1 \mod 2. \] Hence the assumptions of Proposition \ref{prop u1} are satisfied. Since $N_{[A_1]}$ is a loop in $G$ which represents the generator of $\pi_1(G) \cong \mathbb{Z}_2$, we obtain: \begin{corollary} For each $[A_1] \in M_Q \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_d}$, the pairing $\< u_1, N_{[A_1]} \>$ is non-trivial in $\mathbb{Z}_2$. \end{corollary} This completes the proof of Theorem \ref{main thm}. \section{Example} \subsection{Non-triviality of $q_{2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2}^{u_1}$} \label{example} We show that the $SO(3)$-torsion invariant for $X=2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{C} \mathbb{P}}^2$ is non-trivial. To distinguish the two $\mathbb{C} \mathbb{P}^2$ summands, we write $X=\mathbb{C} \mathbb{P}^2_1 \# \mathbb{C} \mathbb{P}^2_2 \# \overline{\mathbb{CP}}\ \! \!^2$. \begin{theorem} \label{thm non-trivial} Let $H_i$ be the canonical generator of $H_2(\mathbb{C} \mathbb{P}^2_i;\mathbb{Z})$ for $i=1, 2$ and $E$ be the canonical generator of $H_2(\overline{\mathbb{CP}}\ \! \!^2; \mathbb{Z})$. Then we have \[ q_{\mathbb{C} \mathbb{P}^2_1 \# \mathbb{C} \mathbb{P}_2^2 \# \overline{\mathbb{CP}}\ \! \!^2}^{u_1}(-H_1 + E, H_2 - E) \equiv 1 \mod 2. \] \end{theorem} \begin{proof} Let $Q$ be an $SO(3)$-bundle on $\mathbb{C} \mathbb{P}^2$ with \[ w_2(Q)=w_2(\mathbb{C} \mathbb{P}^2), \quad p_1(Q)=-3. \] Then the dimension of $M_Q$ is $0$. Kotschick showed that the Donaldson invariant associated with $Q$ is \[ q_{\frac{3}{4},w,\mathbb{C} \mathbb{P}^2}=-1 \] if we choose a suitable orientation on $M_Q$ (\cite{K, K2}). Note that there is no wall since $b^-(\mathbb{C} \mathbb{P}^2)$ is $0$. The signature of $\mathbb{C} \mathbb{P}^2$ is $1$; hence we have \[ p_1(Q) \equiv \sigma(\mathbb{C} \mathbb{P}^2) + 4 \mod 8 \] and $q^{u_1}_{\frac{7}{4},w,\mathbb{C} \mathbb{P}^2 \# S^2 \times S^2}([\Sigma],[\Sigma'])$ is defined. From Theorem \ref{main thm}, we have \[ q^{u_1}_{\frac{7}{4},w,\mathbb{C} \mathbb{P}^2 \# S^2 \times S^2}([\Sigma], [\Sigma']) \equiv 1 \mod 2. \] On the other hand, $\mathbb{C} \mathbb{P}^2 \# S^2 \times S^2$ is diffeomorphic to $\mathbb{C} \mathbb{P}^2_1 \# \mathbb{C} \mathbb{P}^2_2 \# \overline{\mathbb{CP}}\ \! \!^2$ (\cite{W}).
The induced isomorphism between the $2$-dimensional homology groups is given by \[ \begin{array}{ccl} H_2(\mathbb{C} \mathbb{P}^2 \# S^2 \times S^2; \mathbb{Z}) &\stackrel{\cong}{\longrightarrow} & H_2(\mathbb{C} \mathbb{P}^2_1 \# \mathbb{C} \mathbb{P}_2^2 \# \overline{\mathbb{CP}}\ \! \!^2; \mathbb{Z}) \\ H & \longmapsto & H_1 + H_2 - E \\ \left[ \Sigma \right] & \longmapsto & -H_1 + E \\ \left[ \Sigma' \right] & \longmapsto & H_2 - E. \end{array} \] The class $w$ is $w_2(\mathbb{C} \mathbb{P}^2 \# S^2 \times S^2)$, and its image under the isomorphism is $w_2(2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{CP}}\ \! \!^2)$; we also denote this class by $w$. In particular, the images of $[\Sigma]$ and $[\Sigma']$ under the isomorphism are $-H_1 + E$ and $H_2 - E$ respectively. Hence we obtain \[ q^{u_1}_{\frac{7}{4}, w, \mathbb{C} \mathbb{P}^2_1 \# \mathbb{C} \mathbb{P}^2_2 \# \overline{\mathbb{CP}}\ \! \!^2}(-H_1 + E, H_2-E) \equiv 1 \mod 2. \] \end{proof} \subsection{A vanishing theorem} Let $X$ be a closed, oriented, simply connected, non-spin $4$-manifold with $b^+(X)=2a$ for some $a>0$. Moreover, assume that $X$ can be written as the connected sum $Y_1 \# Y_2$ of non-spin $4$-manifolds $Y_i$ with $b^+(Y_i) \geq 1$. In this situation, we can prove a vanishing theorem similar to that for the usual Donaldson invariants. However, we must impose a condition on the homology classes in $X$: each class must lie in $H_2(Y_1; \mathbb{Z})$ or $H_2(Y_2; \mathbb{Z})$. Suppose that $P$ is an $SO(3)$-bundle over $X$ satisfying (\ref{eq w_2}) and that $\dim M_P$ is $2d+1$ for some non-negative integer $d$. Moreover, suppose that $d=d_1+d_2$ for some $d_1 \geq 0$, $d_2 \geq 0$. Take homology classes $[\Sigma_1], \dots, [\Sigma_{d_1}] \in H_2(Y_1; \mathbb{Z})$, $[\Sigma_1'], \dots, [\Sigma_{d_2}'] \in H_2(Y_2; \mathbb{Z})$ with even self-intersection numbers. Then by the standard dimension-count argument \cite{MM}, we can show \[ M_P \cap V_{\Sigma_1} \cap \cdots \cap V_{\Sigma_{d_1}} \cap V_{\Sigma_1'} \cap \cdots \cap V_{\Sigma_{d_2}'} = \emptyset \] when the neck is sufficiently long. Hence we have: \begin{theorem} \label{thm vanishing} Let $Y_1, Y_2$ be closed, oriented, simply connected, non-spin $4$-manifolds with $b^+(Y_i)>0$ and $b^+(Y_1) \equiv b^+(Y_2) \mod 2$. Then for homology classes $[\Sigma_1], \dots, [\Sigma_{d_1}] \in H_2(Y_1; \mathbb{Z})$, $[\Sigma_1'], \dots, [\Sigma_{d_2}'] \in H_2(Y_2; \mathbb{Z})$ with even self-intersection numbers, we have \[ q_{Y_1 \# Y_2}^{u_1}([\Sigma_1], \dots, [\Sigma_{d_1}], [\Sigma_1'], \dots, [\Sigma_{d_2}']) \equiv 0 \mod 2. \] \end{theorem} \begin{remark} We regard $X=2\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{C} \mathbb{P}}^2$ as the connected sum of $Y_1=\mathbb{C} \mathbb{P}^2$ and $Y_2=\mathbb{C} \mathbb{P}^2 \# \overline{\mathbb{C} \mathbb{P}}^2$. Then $w$ is non-trivial on $Y_i$ for $i=1, 2$. By Theorem \ref{thm non-trivial}, $q^{u_1}_{Y_1 \# Y_2}(-H_1+E, H_2-E)$ is non-trivial, in contrast to Theorem \ref{thm vanishing}. If there were a formula like \begin{gather*} q_{\frac{7}{4},w,Y_1 \# Y_2}^{u_1}(-H_1+E,H_2-E) \equiv \\ `` q_{\frac{7}{4},w,Y_1 \# Y_2}^{u_1}(-H_1,H_2-E)" + ``q_{\frac{7}{4},w,Y_1 \# Y_2}^{u_1}(E,H_2-E)" \mod 2, \end{gather*} then we could apply Theorem \ref{thm vanishing} to show the vanishing of $q_{\frac{7}{4},w,Y_1 \# Y_2}^{u_1}(-H_1+E,H_2-E)$.
However, neither $`` q_{\frac{7}{4},w,Y_1 \# Y_2}^{u_1}(-H_1,H_2-E)"$ nor $``q_{\frac{7}{4},w,Y_1 \# Y_2}^{u_1}(E,H_2-E)"$ is defined, because \[ (-H_1) \cdot (-H_1) \equiv E \cdot E \equiv 1 \mod 2. \] \end{remark}
{ "redpajama_set_name": "RedPajamaArXiv" }
1,355
package main

import (
	"net/http"
)

func main() {
	// Serve helloHandler on every route of port 8080; ListenAndServe
	// blocks until the server stops, returning a non-nil error on failure.
	err := http.ListenAndServe(":8080", http.HandlerFunc(helloHandler))
	if err != nil {
		panic(err)
	}
}

// helloHandler writes a fixed greeting in response to any request.
func helloHandler(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("Hello, world!"))
}
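Assuming a standard Go toolchain, the server can be started with `go run` on this file; a GET request to http://localhost:8080 then returns "Hello, world!".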
{ "redpajama_set_name": "RedPajamaGithub" }
3,023
\section{Introduction} \label{sec:intro} \input{sections/introduction.tex} \section{Methodology} \label{sec:methodology} \input{sections/methodology.tex} \section{Simulation, Datasets, and Object Detector} \label{sec:datasets} \input{sections/datasets.tex} \section{Results} \label{sec:results} \input{sections/results.tex} \section{Conclusion and Future Work} \label{sec:conclusion} \input{sections/conclusion.tex} \section*{Acknowledgments} This work was carried out in part with support from National Science Foundation project CPS1739869. Special thanks to the ARC Lab at the University of Wisconsin-Madison for their support through their motion capture facilities. \bibliographystyle{IEEEtran} \subsection{Understanding and interpreting the results} Two quantities are of interest herein: the mean performance difference across similar batches (referred to in Algorithm \ref{alg:method} as $\overline{W_1}$), and the overlap fractions, computed as ratios of cardinalities, $ {\mathcal{O}}_A \triangleq |A^{overlap}| / |{\mathds{A}}| $ and $ {\mathcal{O}}_B \triangleq |B^{overlap}| / |{\mathds{B}}| $. In the case herein, performance lies on the interval [0,1], with test-set mean IOU around 0.8 for both {\textit{$Net_{real}$}} and {\textit{$Net_{sim}$}}. Therefore, $\overline{W_1}$ values around 0.08 would represent around 10\% of the performance. In interpreting the overlap fraction, $ {\mathcal{O}}_A $ represents the fraction of $ {\mathds{A}} $ contexts that have at least one similar context in $ {\mathds{B}} $. Note that the overlap fraction has nothing to do with the object detection process -- it is a quantity derived from ground truths, and it speaks to how meaningful it is to assess the performance of an algorithm (object detection, in our case) by comparing outcomes of the algorithm when applied to elements from $ {\mathds{A}} $ and $ {\mathds{B}} $. Specifically, assume Set $ A $ is real data, and Set $ B $ is synthetic data. If $ {\mathcal{O}}_A $ is large and $ {\mathcal{O}}_B $ is small, many of the synthetic images have no context replicas among the real images; abundant as it might be, the simulated data has little in common with the data collected in the real world. If $ {\mathcal{O}}_A $ is small and $ {\mathcal{O}}_B $ is large, the simulation produces data that is very repetitive and fails to capture the diversity of the real data. If $ {\mathcal{O}}_A $ and $ {\mathcal{O}}_B $ are both large, the simulation does a good job of capturing the real world. The case when $ {\mathcal{O}}_A $ and $ {\mathcal{O}}_B $ are both small raises ``sim-to-real gap'' red flags -- if one trains and tests an object recognition net using Set $ B $, it might not work in the real world, since $ A $ and $ B $ do not have much context in common. Plotting $\overline{W_1}$ vs. $ {\mathcal{O}}_A $ (see Fig. \ref{fig:diff_overlab_interp}) helps interpret the performance of the object detection algorithm. When the mean $ W_1 $ is large, if both the overlap fraction $ {\mathcal{O}}_A $ and the number of overlapping samples $ |A^{overlap}| $ are large, one can be confident that the object detection algorithm will not work when deployed in reality. Conversely, a small mean $ W_1 $ under these circumstances suggests that the algorithm has a good chance of working in the real world. \begin{figure} \centering \includegraphics[width=.9\linewidth]{images/difference_overlap_interpretation.png} \caption{The relative certainty of the estimate is based on the overlap fraction ($ {\mathcal{O}}_A $). The performance difference suggests predictiveness on the overlapping regions.
A well-validated simulation should fall in the lower-right corner, where performance is similar across a broad range of contexts.} \label{fig:diff_overlab_interp} \end{figure} \subsection{Results on paired data} Although the proposed method does not require twin $ f_{A,i} $ and $ f_{B,j} $ images, handling twins provides insights into how the method works. Twin datasets imply large $ {\mathcal{O}}_A $ and $ {\mathcal{O}}_B $ values. If, in addition, $ |{\mathds{A}}| $ and $ |{\mathds{B}}| $ are large, then there is high confidence that a small $ W_1 $ translates into a good simulator. In this analysis we use a patch size of $ h=w=120 $ on the Lab images and a similarity threshold of $ \theta = 0.8 $. Using the trained object detector, we can evaluate the difference in performance witnessed on the real and simulated data. \begin{figure} \centering \includegraphics[width=\linewidth]{images/results_rlab_slab/full_results/meanW1_cropped.png} \caption{Comparison of the full lab datasets using the {\textit{$Net_{sim}$}} and {\textit{$Net_{real}$}} detectors. Evident here is the low difference in sim-vs-real performance for {\textit{$Net_{real}$}}.} \label{fig:arclab_comparison_w1} \end{figure} First, we compare the full RLab and SLab datasets, Set $ \mathds{A} $ and Set $ \mathds{B} $, respectively, using two object detection networks -- {\textit{$Net_{sim}$}} and {\textit{$Net_{real}$}}. This demonstrates the proposed validation algorithm in practice and highlights the difference in object detector performance when different domains are used for training. The results, shown in Fig.~\ref{fig:arclab_comparison_w1}, report the performance difference between real and simulated cones. Interestingly, the network trained on real data ({\textit{$Net_{real}$}}) experiences a very low sim-vs-real difference, indicating that assessing this specific detector in this simulation is nearly equivalent to assessing it on real data. Unsurprisingly, {\textit{$Net_{sim}$}} experienced a large shift between sim and real data, a result found by many perception researchers to date when trying to train in simulation. This indicates that the real images fall outside the learned space of {\textit{$Net_{sim}$}}, while the simulated images fall within the learned space of {\textit{$Net_{real}$}}. Thus, we would not recommend the use of {\textit{$Net_{sim}$}} in an autonomy stack on a robot, since there is a gap between how it behaves on synthetic data and on real data. \begin{figure} \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/splits_results/w1over_sim_on_red_cones_cropped.png} \end{subfigure} \\ \vspace{.3cm} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/splits_results/w1over_sim_on_green_cones_cropped.png} \end{subfigure} \caption{Comparison of the performance difference vs overlap for the lab splits based on {\textit{$Net_{sim}$}}. This shows that the differences are obtained at similar overlap fractions. Top: red cones; bottom: green cones.} \label{fig:splitscomparison_sim_overlap} \end{figure} To further analyze the Lab results, we partition the real Lab and simulated Lab datasets each into three splits to gauge the ability of real data to predict other real data, and of sim data to predict other sim data. Specifically, the Lab datasets (real and simulated) were each partitioned into three subsets of 100 images, keeping the real and simulated pairs in their respective sets such that R1 is paired with S1, R2 with S2, and R3 with S3.
These splits result in approximately 1400 cones within each subset. Using {\textit{$Net_{sim}$}}, we evaluated the patch-based difference between each pair of splits, expecting the sim-sim and real-real comparisons to be closer in performance prediction than the sim-real ones, since {\textit{$Net_{sim}$}} would experience a relatively larger sim-vs-real difference. The results are shown in Fig.~\ref{fig:splitscomparison_sim_overlap}. The mean $ W_1 $ between sim-sim splits is low, as expected, meaning the network has little variance in intra-domain performance compared with the sim-to-real shift. Additionally, along the $ x $-axis, Fig. \ref{fig:splitscomparison_sim_overlap} shows that the overlap fraction is $\approx50\%$ for the inter-domain splits and $\approx60\%$ for the intra-domain splits, meaning there is a small context difference between the sim and real data, possibly due to labeling, the cone models, or the lens model. \begin{figure} \centering \begin{subfigure}[b]{0.99\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/overlap_6.png} \end{subfigure} \\ \vspace{.3cm} \begin{subfigure}[b]{0.99\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/overlap_60.png} \end{subfigure} \caption{Examples of similar patches from the real and simulated lab datasets. Bounding boxes show the predicted object location using {\textit{$Net_{sim}$}}. Each subfigure includes up to 10 examples that were considered similar to the reference example, with the top half pulled from Set $ A $ (real) and the bottom half pulled from Set $ B $ (sim).} \label{fig:example_overlap} \end{figure} Since this comparison produces the sets of overlapping and non-overlapping examples, we can analyze the overlapping batches directly. For the comparison of the real and simulated Lab datasets, Fig. \ref{fig:example_overlap} shows example patches with overlap, and Fig. \ref{fig:example_no_overlap} shows example patches without overlap, giving insights into the coverage between the two datasets. \begin{figure} \centering \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/nooverlap_A_0.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/nooverlap_B_2.png} \end{subfigure} \\ \vspace{.2cm} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/nooverlap_A_12.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/nooverlap_B_8.png} \end{subfigure} \\ \vspace{.2cm} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/nooverlap_A_17.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/nooverlap_B_19.png} \end{subfigure} \caption{Examples from the real (left half) and simulated (right half) lab datasets that had no overlap in the other dataset. Performance based on {\textit{$Net_{sim}$}}. These examples illustrate a lack of coverage between the datasets, for which we cannot make a good estimate of the performance difference.} \label{fig:example_no_overlap} \end{figure} To confirm the results from Fig. \ref{fig:arclab_comparison_w1} for {\textit{$Net_{real}$}}, we again analyze the splits, this time for the performance of {\textit{$Net_{real}$}}, which demonstrated a low sim-to-real gap. The results are shown in Fig.
\ref{fig:splitscomparison_real_overlap} (note the $y$-scale). Here we see the same context differences as before, but the performance differences between intra-domain and inter-domain splits are negligible, with low magnitude on all comparisons, meaning the assessments of {\textit{$Net_{real}$}} on all splits are nearly equivalent. \begin{figure} \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/splits_results/w1over_real_on_red_cones_cropped.png} \end{subfigure} \\ \vspace{.3cm} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\textwidth]{images/results_rlab_slab/splits_results/w1over_real_on_green_cones_cropped.png} \end{subfigure} \caption{Comparison of the performance difference vs overlap for the lab splits based on {\textit{$Net_{real}$}}. This shows that the differences are obtained at similar overlap fractions. Note the scale on the $y$-axis. Top: red cones; bottom: green cones.} \label{fig:splitscomparison_real_overlap} \end{figure} \subsection{Point-wise vs distribution-wise comparison} \label{subsec:results_point_vs_distribution} If two datasets are fully paired, it would be natural to compare the performance on the pairs directly. However, this fails to capture sensitivities in the object detector network which may induce performance differences for slight changes in the image or context. Furthermore, a point-wise approach fails to predict the distribution of performances that we expect to see for a given sample. A verification test for comparing the point-wise and distribution-wise approaches is to take two nearly identical simulations and apply both. The simulated datasets we used here are identical in every way except for a small camera pose difference. Using the same methodology for reconstructing the simulated lab dataset, we introduce an uncertainty on the calibrated camera pose. Based on this uncertainty, the calibrated pose of the camera is sampled with a normal distribution of standard deviation $\ang{1}$ in yaw, pitch, and roll, and a normal distribution of standard deviation 1~cm in the $ x $, $ y $, $ z $ directions. These sampled simulated datasets follow the same path as the collected data, and are still paired, but not pixel-perfect. We then do a point-wise comparison between these sampled datasets to quantify the difference in performance, knowing that the only differences are due to perturbed contexts. \begin{figure} \centering \centering \includegraphics[width=\linewidth]{images/results_rlab_slab/splits_results/sampled01_vs_sampled02_cropped.png} \caption{Comparison of two nearly identical simulated datasets, showing the point-wise and distribution-wise difference. The distribution-wise comparison shows less sensitivity to the uncertainty of the camera pose.} \label{fig:sampled_vs_sampled} \end{figure} Figure \ref{fig:sampled_vs_sampled} summarizes these results for green and red cones, using {\textit{$Net_{sim}$}}. The results demonstrate a higher point-wise mean difference between the performance of the pairings even though the image difference is very slight. The distribution-wise comparison is less affected by the pose uncertainty, demonstrating a lower difference between the datasets. This test additionally measures the repeatability of the validation process. The benefit of reduced sensitivity to uncertainty comes in addition to the distribution-wise comparison's ability to characterize the full performance distribution.
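To make the distinction concrete, the following minimal sketch contrasts the two comparisons on per-object IOU scores. This is an illustration only: the synthetic IOU values and the use of \texttt{scipy.stats.wasserstein\_distance} are our assumptions, not the implementation behind Algorithm \ref{alg:method}.

\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical per-cone IOU scores for paired real/sim evaluations.
rng = np.random.default_rng(0)
iou_real = np.clip(rng.normal(0.80, 0.05, 1400), 0.0, 1.0)
iou_sim = np.clip(rng.normal(0.78, 0.06, 1400), 0.0, 1.0)

# Point-wise comparison: mean absolute difference over the pairs;
# sensitive to per-pair jitter (e.g., small camera-pose perturbations).
pointwise = np.mean(np.abs(iou_real - iou_sim))

# Distribution-wise comparison: W1 distance between the two empirical
# performance distributions; insensitive to the pairing itself.
distributional = wasserstein_distance(iou_real, iou_sim)

print(f"point-wise: {pointwise:.3f}, W1: {distributional:.3f}")
\end{verbatim}

On such data the point-wise value stays inflated by pairing noise, while the $W_1$ value is driven only by the overall shift, mirroring the behavior in Fig.~\ref{fig:sampled_vs_sampled}.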
\subsection{Results on unpaired data} \begin{figure} \centering \begin{subfigure}[b]{0.99\linewidth} \centering \includegraphics[width=\linewidth]{images/hall_lab/sim_perf_red_cones_cropped.png} \end{subfigure} \\ \vspace{.3cm} \begin{subfigure}[b]{0.99\linewidth} \centering \includegraphics[width=\linewidth]{images/hall_lab/sim_perf_green_cones_cropped.png} \end{subfigure} \caption{All-vs-all comparison of RLab, SLab, RHall, and SHall. Top: red cones; bottom: green cones. Performance based on {\textit{$Net_{sim}$}}.} \label{fig:hall_lab:sim_perf} \end{figure} In this subsection, the focus is on two questions that come into play when using simulation to evaluate an object detector: ``Does the simulated dataset have good context coverage of the real data of interest?'' and ``Does sim data elicit the same response we would see if we were evaluating the object detection net on real data?'' Four datasets are used to answer these questions: SLab, RLab, SHall, and RHall. We perform an all-vs-all comparison to look at the coverage between the datasets and the performance difference on the overlapping regions of the datasets. For the performance difference between the four datasets, we consider each of the four as the predicted dataset (Set $ A $), and evaluate each other dataset's ability to predict performance. Figure \ref{fig:hall_lab:sim_perf} shows the comparison between each pair of datasets. Evident in the results is that {\textit{$Net_{sim}$}} performs similarly across both simulated environments. However, {\textit{$Net_{sim}$}} varies more in performance between the real environments. This large real-real difference is unexpected, but illustrates the high sensitivity of the network outside its learned feature space. \begin{figure} \centering \includegraphics[width=\linewidth]{images/hall_lab/sim_perf_overlap_green_cones_cropped.png} \caption{The difference vs overlap between the real and simulated hallway and labs for green cones detected using {\textit{$Net_{sim}$}}.} \label{fig:hall_lab_sim_greencones_diff_overlab} \end{figure} It should be noted that when the datasets have significantly different contexts, the degree of confidence in the results decreases owing to context coverage differences. For example, when we compare Real Lab to Sim Hall and Sim Lab to Sim Hall, we have a green cone overlap fraction of less than 20\% for both sets (see the left-most points in Fig. \ref{fig:hall_lab_sim_greencones_diff_overlab}), meaning we are looking at a relatively small subset of Sim Hall. \begin{figure} \centering \begin{subfigure}[b]{0.99\linewidth} \centering \includegraphics[width=\textwidth]{images/hall_lab/overlap_0.png} \end{subfigure} \\ \vspace{.3cm} \begin{subfigure}[b]{0.99\linewidth} \centering \includegraphics[width=\textwidth]{images/hall_lab/overlap_101.png} \end{subfigure} \caption{Examples of overlap between RHall and SHall. Bounding boxes show the predicted object location using {\textit{$Net_{sim}$}}. Each subfigure includes up to 10 examples that were similar to the reference example, with the top half pulled from Set $ A $ (real) and the bottom half pulled from Set $ B $ (sim).} \label{fig:hall_lab:example_overlap} \end{figure} To further understand the comparison, Fig. \ref{fig:hall_lab:example_overlap} shows two example batches, with the reference sample shown in the upper left. Figure \ref{fig:hall_lab:example_no_overlap} then shows six sample contexts which were not included in the comparison due to a lack of similarity.
It is clear that the higher-complexity examples find less correspondence between the datasets. \begin{figure} \centering \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/hall_lab/nooverlap_A_4.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/hall_lab/nooverlap_B_1.png} \end{subfigure} \\ \vspace{.2cm} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/hall_lab/nooverlap_A_6.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/hall_lab/nooverlap_B_7.png} \end{subfigure} \\ \vspace{.2cm} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/hall_lab/nooverlap_A_8.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{images/hall_lab/nooverlap_B_8.png} \end{subfigure} \caption{Examples from the real hall (left) and sim hall (right) datasets which had no overlap. Performance based on {\textit{$Net_{sim}$}}. These examples illustrate a lack of coverage between the datasets, for which we cannot make a good estimate of the performance difference.} \label{fig:hall_lab:example_no_overlap} \end{figure} \subsection{The impact of tuning parameters} \label{subsec:tuning_parameters} The final experiment assesses the sensitivity of the results to the choice of threshold $ \theta $ and patch size $ h \times w $. We used the Lab datasets ($\mathds{A}$ is real, $\mathds{B}$ is simulated) and carried out sweeps over the threshold and patch size. Ideally, the proposed validation metric is not highly sensitive to the choice of parameters in Algorithm \ref{alg:method}. \begin{figure} \centering \includegraphics[width=\linewidth]{images/threshold/mean_w1.png} \caption{Effect of the threshold parameter $ \theta $ on the mean $W_1$ for the RLab-SLab comparison.} \label{fig:threshold_meanw1} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{images/threshold/overlap_fraction.png} \caption{The effect of $ \theta $ on the overlap fraction $ {\mathcal{O}}_A $ for the RLab-SLab comparison. For high $ \theta $, the overlap is reduced even for roughly paired data, since the constraint for finding corresponding ground truth patches is much tighter.} \label{fig:threshold_overlap} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{images/threshold/b_samples.png} \caption{The effect of $ \theta $ on the mean number of samples in $ A^c $ and $B^c$ for the RLab-SLab comparison. For a higher threshold $ \theta $, the number of samples included in the batch-wise comparison is reduced due to the tighter constraint for patches to be similar.} \label{fig:threshold_samples} \end{figure} To understand the impact of $ \theta $, we swept the similarity threshold from 0.5 to 0.9 while keeping the patch size constant at $ 120 \times 120 $ pixels (roughly the size of the largest cone in the dataset). Figure \ref{fig:threshold_meanw1} shows the impact of $ \theta $ on the mean $ W_1 $, indicating no acute sensitivity. Figures \ref{fig:threshold_overlap} and \ref{fig:threshold_samples} show the effect the threshold has on $ A^{overlap} $ and the mean batch size. A low threshold leads to a broad acceptance of objects as \textit{similar} and a larger mean batch size. This also leads to a higher probability of finding similar examples between the datasets, and thus a high overlap fraction.
A high threshold imposes a tight constraint on similarity, reducing the batch size and the likelihood of finding overlap. As $ \theta $ approaches 1.0, the batch size and overlap approach zero. If the datasets are perfectly paired, then a high threshold may be appropriate. \begin{figure} \centering \includegraphics[width=\linewidth]{images/patchsize/mean_w1.png} \caption{The effect of patch size $ h \times w $ on the mean $W_1$ for the RLab-SLab comparison. $W_1$ can change depending on how many and which samples are included as similar in the $ A^c $ and $ B^c $ batches.} \label{fig:patchsize_meanw1} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{images/patchsize/overlap.png} \caption{The effect of patch size on the overlap fraction for the RLab-SLab comparison; the results are independent of detector performance. With increasing patch size, more context is considered when calculating similarity, resulting in a tighter constraint and less overlap.} \label{fig:patchsize_overlap} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{images/patchsize/samples.png} \caption{The effect of patch size on the mean number of samples for the RLab-SLab comparison; the results are independent of detector performance. With larger patch sizes, more context is considered and fewer examples will be found to be similar to a given reference example.} \label{fig:patchsize_samples} \end{figure} Along with the threshold, the patch size can be used to constrain the comparison. To illustrate its effect, the patch size was swept from $80 \times 80$ px to $180 \times 180$ px with a constant threshold of $ \theta = 0.8 $. The mean $W_1$, plotted in Fig. \ref{fig:patchsize_meanw1}, shows only a slight change with no areas of high sensitivity. The overlap and batch size, shown in Figs. \ref{fig:patchsize_overlap} and \ref{fig:patchsize_samples} respectively, indicate trends similar to those for the threshold $ \theta $. Specifically, the patch size determines how much area surrounding the object is considered relevant in determining context similarity. By increasing the patch area, the probability of finding similar examples decreases, lowering the overlap fraction and the batch size. The patch size should be chosen based on the object size and the importance of context for a given application and algorithm (e.g., the receptive field of the network). \subsection{Contribution} Herein, we demonstrate the camera model validation methodology by using an object detection algorithm for detecting cones in RGB images. We show the methodology's ability to quantify the sim-to-real gap as experienced by object detectors trained in different domains, e.g., trained on sim or trained on real. We also show that intra-domain validation (real images against real images, or sim against sim) can provide a baseline expectation for the performance shift and the expected variance in the metric. Our contributions are summarized as follows: \begin{enumerate} \item We introduce a camera model validation methodology based on the performance of an image-based object detection algorithm. \item We introduce a method of extracting object-centric patches from images to find batches of similar content in each dataset. \item We show that the proposed methodology provides insights into the performance difference for similar contexts, as well as the ability to measure the context overlap between two datasets. \item We demonstrate an ability to facilitate low-cost, low-overhead validation of camera simulation in meaningful scenarios.
\end{enumerate} The proposed camera validation metric comes into play in answering questions such as: How similarly does a given perception algorithm perform in simulation vs. reality? Which camera model is more appropriate for a particular robot simulation study, and by how much? Which perception algorithm is less sensitive to data generated using a certain camera model? How well does a specific object detector training algorithm/paradigm mitigate the sim-to-real gap? How much does a set of synthetic images probe the full breadth of contexts seen in real data? Herein, the proposed validation metric is used to answer several of these questions. \subsection{Related Work} Several approaches to validation seek to directly compare a simulated image with its equivalent real image. This idea is used in \cite{grapinet2013optical,gruyer2012modeling} to validate lens distortion and camera response. The approach provides little information about the model's usefulness in perception, since differences at the pixel level may not be correlated with features important for perception. That is, two images with the same content can have pixel-level differences yet elicit the same performance from a perception algorithm. Another validation approach is discussed in \cite{lyu2022accurate} and used to gauge rendering performance. While useful for understanding the interplay between the sub-component models that make up a camera simulator, the process puts unnecessary constraints on the simulator to produce pixel-perfect data. In order to compare data directly, experiments must be conducted in a tightly controlled environment where all parameters, including materials and lighting, are known. This is impractical in robotics applications, which tend to involve complex scenarios that are difficult to replicate in simulation in minute detail. More recently, several validation approaches have included an algorithm as ``judge'' in the quantification effort. This is a more holistic approach, and has roots in human-robot interaction (HRI). Validation of USARSim proposed comparing high-level performance metrics to demonstrate the ability of simulation to predict reality \cite{wang2005validating,balaguer2008gps,carpin2006quantitative}. Analogous performance metrics from HRI, based on human performance, are summarized in \cite{steinfeld2006common}. A validation methodology has recently been proposed which builds on both traditional (pixel-to-pixel type) and performance-based validation (use of perception algorithm performance as judge), arguing for traditional validation at the component level and performance validation at the system level \cite{durst2022novel}. This approach is insightful but daunting from a data acquisition and registration perspective. In Machine Learning (ML), metrics are necessary to measure the ability of generative adversarial networks (GANs) to accurately mimic target domains. Such a metric quantifies the GAN's ability to change, for instance, images of horses to images of zebras while preserving the rest of the image \cite{park2020contrastive}. While sometimes left to the subjectivity of human perception \cite{denton2015deep,salimans2016improved}, judging the quality of a GAN has come to rely on metrics based on machine-learned features. Inception Score (IS) \cite{salimans2016improved} uses a pre-trained classification network to generate the class probabilities of the synthetic images, and quantifies GAN performance using these estimates.
An improvement over IS was obtained by comparing the Fr\'echet distance (Wasserstein-2 distance) between the inception estimates \cite{heusel2017gans}. Another variant of IS is the Kernel Inception Distance (KID) \cite{binkowski2018demystifying}, which uses an alternate comparison of the inception distance. All three variants were shown to correlate well with human judgment, with KID demonstrating improved performance. Another variant of inception is the semantically aligned kernel VGG distance (sKVD) \cite{richter2021enhancing}, which compares the KID for samples that are semantically nearest neighbors. To that end, the authors generated patches from the image datasets and KID-compared the two patches that were nearest neighbors in their vector-encoded segmentation maps. Finally, while the inception distance looks at the output of a classifier, upstream layers in the network are often used for comparing images or generating an objective function for style GANs. Comparing the output of these layers was shown to provide a perceptual similarity measure \cite{zhang2018unreasonable} akin to image quality measures such as SSIM \cite{wang2004image}, but based on features known to be relevant for perception. While these measures can give insights for the validation of camera simulation, they draw from orthogonal requirements and may not apply directly to simulation in robotics. \subsection{Simulation for data generation} \label{sec:datasets:simulation} Our interest is in comparing simulated and real images produced by mono-RGB cameras. While the simulated data could come from any source, we use Chrono for generating the synthetic data and demonstrating the validation methodology. Chrono \cite{chronoOverview2016,projectChronoWebSite} is a multi-physics simulation platform used for vehicle mobility analysis (on/off-road), autonomous vehicle simulation, field robotics simulation, etc. The ability to generate synthetic data using a camera model is facilitated by Chrono::Sensor \cite{asherSensors2020}, which leverages hardware-accelerated ray tracing for rendering images from within a scene governed by Chrono. Chrono::Sensor implements models of lag, noise, distortion, blur, and global illumination within the rendering pipeline to generate synthetic data. Sensors can be mounted on vehicles for use in software-in-the-loop simulations \cite{aaronAImultiAAsJCND2021,end2endMUBO2022}, or can be moved kinematically throughout the scene to random positions and orientations to collect training data. The camera model used for generating the data discussed in this paper is based on an ELP USB camera with a 2.1~mm lens. The data is acquired at 720p for both the real and simulated camera, and the horizontal field of view is estimated at 80\textdegree. The intrinsic camera parameters are calibrated using MATLAB's camera calibrator and a checkerboard calibration pattern. The calibrated radial parameters are then used as input to the lens distortion model. Since this paper is concerned with the quantification of the realism of synthetic data and not the production of the most realistic data possible, artifacts such as noise, blur, and global illumination are excluded from the simulation. Studying their impact is left for future work. Alongside the camera sensor, Chrono::Sensor includes an image segmentation sensor which uses the same parameters as the camera to label individual pixels by class and instance ID. This segmentation map can be converted to ground truth object bounding boxes, as sketched below.
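A minimal sketch of this conversion is given below; the function name and the convention that instance ID $0$ denotes background are our assumptions for illustration, not part of the Chrono::Sensor API.

\begin{verbatim}
import numpy as np

def instance_boxes(instance_map):
    """Convert an instance-ID map (H x W integer array, 0 = background)
    into axis-aligned bounding boxes (xmin, ymin, xmax, ymax)."""
    boxes = {}
    for inst_id in np.unique(instance_map):
        if inst_id == 0:
            continue  # skip background pixels
        rows, cols = np.nonzero(instance_map == inst_id)
        boxes[int(inst_id)] = (int(cols.min()), int(rows.min()),
                               int(cols.max()), int(rows.max()))
    return boxes
\end{verbatim}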
This automatic labeling system allows vast amounts of simulation data to be generated for training and testing perception algorithms -- one major benefit of using synthetic data. Four primary datasets were created to demonstrate the validation methodology. The datasets are all indoor environments with small, 3.8~cm red and green cones as objects of interest. This setup stems from a broader effort to develop a 1/6th-scale autonomous vehicle and the Autonomy Research Testbed (ART) \cite{artatk2022}. Of the four datasets, two are real and two are synthetic. The datasets are based on two environments -- a hallway and a motion tracking lab. We consider two environments to allow for a better understanding of the environment's effect on the training and testing of perception. The four datasets are as follows: real hall (RHall), real lab (RLab), simulated hall (SHall), simulated lab (SLab). \subsection{Unpaired Sim and Real Hallway Datasets} \label{sec:datasets:unpaired} The real hallway data was generated using an ELP 2MP USB camera and 50 randomly placed cones. Images were taken from random positions and orientations within the hallway, and labeled by hand. The true position of the cones in the hallway was unknown. The simulated hallway dataset was generated using Chrono::Sensor and a model of the ELP camera. The virtual hallway environment was designed by hand to mimic the texture and layout of the real hallway, and 50 green and red cones were randomly placed in the environment. The simulated camera captured images from random positions and orientations in the hallway. Samples from the real and simulated hallway datasets are shown in Table~\ref{tab:hallway_dataset}. These datasets are unpaired, and thus do not have corresponding samples across the real and simulated data. \begin{table*} \centering \caption{Unpaired images from the hallway datasets. The simulated scene was created using a random distribution of a similar number of red and green cones.} \begin{tabular}{cM{.2\textwidth}M{.2\textwidth}M{.2\textwidth}M{.2\textwidth}} \toprule Real & \includegraphics[width=.2\textwidth]{images/real_hallway_frame_10.png} & \includegraphics[width=.2\textwidth]{images/real_hallway_frame_22.png} & \includegraphics[width=.2\textwidth]{images/real_hallway_frame_30.png} & \includegraphics[width=.2\textwidth]{images/real_hallway_frame_40.png} \\ Simulated & \includegraphics[width=.2\textwidth]{images/sim_hallway_frame_3.png} & \includegraphics[width=.2\textwidth]{images/sim_hallway_frame_7.png} & \includegraphics[width=.2\textwidth]{images/sim_hallway_frame_10.png} & \includegraphics[width=.2\textwidth]{images/sim_hallway_frame_22.png} \\ \bottomrule \end{tabular} \label{tab:hallway_dataset} \end{table*} \subsection{Paired Sim and Real ARC Lab Datasets} \label{sec:datasets:paired} Having paired data, although not necessary for the methodology, provides an opportunity to more easily interpret the results of the validation methodology, since the environment differences are constrained and context overlap is high. To generate the paired image dataset, images were first collected in reality with ART-1 \cite{art-iros-video,artatk2022} navigating an S-like path delineated by red cones on the right and green cones on the left. The vehicle pose was tracked and recorded through time, along with the time-stamped images. The position of the cones was measured at the beginning of the test using the motion capture system and recorded. The cones were static throughout the experiment.
To recreate the setup in simulation, a basic replica of the lab was created using 3D modeling software and images of the lab. Within this virtual environment, the simulation was started by placing red and green cones into the environment at their recorded positions. Then, a simulated camera was moved through the environment based on the recorded vehicle pose and image timestamps. This process resulted in images that are nearly paired. From a content perspective, the data is very similar, but uncertainty in vehicle pose, cone position, camera calibration (intrinsic and extrinsic parameters), and 3D models results in image pairings that are not pixel-wise aligned. Examples from the sim and real lab datasets are shown in Table~\ref{tab:arclab_dataset}. \begin{table*} \centering \caption{Paired images from the sim and real lab datasets. Notice that the cones are not pixel-wise aligned due to uncertainty in tracked pose, camera pose, and the virtual environment.} \begin{tabular}{cM{.2\textwidth}M{.2\textwidth}M{.2\textwidth}M{.2\textwidth}} \toprule Real & \includegraphics[width=.2\textwidth]{images/me3038_real_frame_149.png} & \includegraphics[width=.2\textwidth]{images/me3038_real_frame_250.png} & \includegraphics[width=.2\textwidth]{images/me3038_real_frame_300.png} & \includegraphics[width=.2\textwidth]{images/me3038_real_frame_345.png} \\ Simulated & \includegraphics[width=.2\textwidth]{images/me3038_sim_frame_149.png} & \includegraphics[width=.2\textwidth]{images/me3038_sim_frame_250.png} & \includegraphics[width=.2\textwidth]{images/me3038_sim_frame_300.png} & \includegraphics[width=.2\textwidth]{images/me3038_sim_frame_345.png} \\ \bottomrule \end{tabular} \label{tab:arclab_dataset} \end{table*} \subsection{Object detection algorithm} In order to detect the red and green cones within the collected images, we leveraged an object detector based on YOLOv5 \cite{glenn_jocher_2020_4154370}. The object detector was trained to detect tight bounding boxes for two classes -- class 1: red cones; class 2: green cones. Two versions of the object detector were trained, called $Net_{sim}$ and $Net_{real}$. The first ($Net_{sim}$) was trained exclusively on simulated images. The second ($Net_{real}$) was trained exclusively on real images. Training for both networks was performed on images from the lab environment, using randomly distributed cones of equal class frequency. Validation sets from the same domain (randomly distributed cones in the lab) were used to determine convergence in training. Examples of $Net_{real}$ and $Net_{sim}$ predictions on a real and a simulated image, respectively, are shown in Fig.~\ref{fig:example_detections}.
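As a minimal sketch of how such a detector can be loaded and queried (using YOLOv5's published ultralytics/yolov5 torch.hub interface; the weight and image file names below are hypothetical placeholders, not artifacts of this work):

\begin{verbatim}
import torch

# Load a trained YOLOv5 detector through the torch.hub API.
# 'net_real.pt' stands in for custom-trained cone-detector weights.
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='net_real.pt')
model.conf = 0.25  # confidence threshold for reported detections

results = model(['rlab_frame_000.png'])  # hypothetical test image
for *box, conf, cls in results.xyxy[0].tolist():
    # each row is (x1, y1, x2, y2, confidence, class id)
    print(f'class={int(cls)} conf={conf:.2f} '
          f'box={[round(v, 1) for v in box]}')
\end{verbatim}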
{ "redpajama_set_name": "RedPajamaArXiv" }
7,786
Putnam County is a county in the U.S. state of Indiana, in the United States. The population at the 2000 census was inhabitants. The county seat is Greencastle.
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,833
Roadside Attractions Driving in Saskatchewan this summer? Stop for treats – including an 18-foot ice cream cone and Lego models – as the province's galleries install irresistible art in public spaces. by Paul Gessell Alison Norlen, "Cornet," 2018 Coroplast and screws, 4.5' x 4.5' x 18' (photo by James Seibel) The latest tourist attraction in Estevan, a small Saskatchewan city near the American border, is a glowing, 18-foot-long ice cream cone. You won't be getting a taste – it's made from Coroplast, a type of corrugated plastic. But this interior-lit sculpture is irresistible all the same. Created by Saskatoon artist Alison Norlen, it honours Estevan's much loved Dairy Queen, one of the first to open in Canada. Cornet, as Norlen's work is aptly titled, sits outside the Estevan Art Gallery and Museum. As you might guess, visitors love to take selfies with the giant treat. Some people, says gallery director Amber Andersen, even pretend to lick the cone. Talk about engaging with art! The sculpture is part of a public art project, Roadside Attractions, organized by the Dunlop Art Gallery in Regina in conjunction with nine other galleries across Saskatchewan. Each gallery commissioned works with the help of $375,000 in funding from a special fund set up by the Canada Council for the Arts to help mark the 150th anniversary of Confederation. Saskatchewan Tourism tossed in another $20,000 to encourage visitors. Kelly Litzenberger, "Yorkton CPR Station," 2018 Lego, 30" x 20" x 9" Kelly Litzenberger, "Hudson's Bay Building, Yorkton," 2018 Lego, 26" x 15" x 10" Kelly Litzenberger, "Yorkton City Hall," 2018 The projects include an oversized rear-view mirror in Prince Albert created by Heather Benning and Tim Moore as a reference to job-seeking migrants who leave the province. Joi T. Arcand's large banners in Cree syllabics detail Indigenous history in the place settlers now call Moose Jaw. In Yorkton, Kelly Litzenberger built Lego replicas of historical buildings and, in North Battleford, an Indigenous drum by Lionel Auburn Peyachew uncannily resembles a wheel of cheese. In all, 15 Saskatchewan communities are part of the project, including whistle stops like Montmartre, Birch Hills and Imperial, which are hosting works commissioned by the Dunlop. Some of the artists are local, while others are based in different provinces. Michel Huneault, "Untitled 2, Roxham Road," 2017 photograph on flag fabric, 78" x 118" In Saskatoon, the AKA artist-run centre commissioned Montreal photographer Michel Huneault to install four giant photographs in Victoria and Diefenbaker parks. His images focus on some of the thousands of asylum seekers last year at the unofficial Roxham Road border crossing in Quebec. Through the magic of Photoshop, they are seen only in silhouette, thus remaining anonymous, and are covered in colourful blankets worn by asylum seekers Huneault photographed earlier in Europe. Visitors can use cellphones to tap into audio of their conversations with Mounties as they negotiate this "irregular" – Huneault abhors the term "illegal" – crossing. He hopes his work helps people understand the plight of asylum seekers. Soon after installing the work, however, it was vandalized. AKA said it was repairing the damage so the work could be reinstalled. Joi T.
Arcand, "nitōhtaw kikāwīnaw-askiy kika-wihtamāk namōya ka-pōnipayiwa kihci-asotamākēwina / Listen to the land (Mother Earth), she will tell you that the sacred promises will never cease," 2018 vinyl banners, 99" x 192" (photo courtesy Moose Jaw Museum and Art Gallery) Meanwhile, back at the ice cream cone, Norlen says she constructed Cornet using the surprisingly inflexible plastic cardboard with two helpers during Estevan's annual two-day fair in June. Her exhibition of drawings and sculptures, Eccentricity, is on view inside the gallery until Aug. 24. "It was really, really a challenge," says Norlen, best known for large-scale drawings, which can be found at the National Gallery of Canada and in other important collections. "Let's use very flat materials that don't bend, and materials you've never used before, and then give your self the shortest time you could possibly think of, and then make it happen." Well, she made it happen. But why a giant ice cream cone? It turns out Estevan is very proud of its Dairy Queen, which opened in 1954 and, depending whose records you want to believe, was either the first or second one in Canada. When the business offered to donate a day's sales to the Humboldt Broncos this spring after the hockey team's tragic bus accident, the blocks-long lineup raised more than $20,000. Not bad for a community with a population just over 11,000. Amber Andersen, at the art gallery, was not surprised at the Dairy Queen turnout. Estevan, she says, does like its ice cream. ■ Roadside Attractions runs all summer and, in some cases, beyond. The full circuit takes about 20 hours to drive. Along the way, you can listen to podcast interviews with the artists. For information, go to skroadsideattractions.com. Updated July 6, 2018 to add information about the vandalism of Michel Huneault's work in Saskatoon. 17 July 2018 Joi T. Arcand Dunlop Art Gallery Michel Huneault Estevan Art Gallery and Museum Alison Norlen Kelly Litzenberger Paul Gessell Saskatchewan-born Paul Gessell has worked as a journalist across Canada for The Canadian Press, Maclean's and The Ottawa Citizen, earning two National Newspaper Awards and other honours. He currently focuses on the collision of art and politics. Read more by Paul Gessell
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,934
Known in Spain as Gamera contra Guiron, guardián del planeta fantasma, Gamera vs. Guiron is a Japanese kaiju film directed by Noriaki Yuasa, written by Niisan Takahashi and produced by Daiei Film. It is the fifth entry in the Gamera film series, following Gamera vs. Viras, which was released the previous year. It stars Nobuhiro Kajima, Miyuki Akiyama, Christopher Murphy, Yuko Hamada and Eiji Funakoshi. The film premiered in Japan on 21 March 1969. It was followed by Gamera vs. Jiger the next year. Plot While scanning the skies through their telescope, two young boys, Akio and Tom, spot a spaceship descending onto a nearby field. Stunned and bewildered, they tell Akio's mother what they have seen, but she dismisses their story as childish nonsense. The next day, the two boys, with Akio's little sister Tomoko in tow, cycle to the site to investigate. Akio and Tom manage to slip aboard the spaceship. But then, without warning, the ship takes off, leaving Tomoko behind. It climbs into outer space towards an asteroid field, throwing the boys into a panic. However, Gamera appears and clears a path through the asteroids for the ship. The spaceship, flying at close to the speed of light, leaves Gamera behind and carries the boys to an unknown planet, where it lands on the outskirts of an alien city. Suddenly, a silver "Space" Gyaos appears, threatening the ship and the two boys. Just before the creature attacks, a second strange monster, whose head resembles a knife, emerges from an underground lair and attacks the Space Gyaos. The Space Gyaos fires a ray that is reflected by the sword-shaped head of the new creature and severs its own right leg. After the Space Gyaos tries to retreat, the knife-headed creature lunges and cuts off the Space Gyaos's left wing, and then its right wing. The creature then cuts off the head of the helpless Space Gyaos and brutally chops the body into smaller pieces before retiring to its lair. Akio and Tom explore part of the alien city and encounter the planet's only inhabitants: two beautiful women, named Barbella and Florbella, who explain that their planet, known as "Terra", orbits the sun directly opposite the Earth, which is why it has never been discovered by Earth's astronomers. What is more, Terra faces extinction; not only is the planet cooling, but the Space Gyaos are taking it over, and the two women are the last of their kind. The knife-headed monster, which the Terrans call "Guiron", is their last defence against the Space Gyaos. Barbella and Florbella suddenly turn on Tom and Akio and put them to the test. Using their super-technological devices, the alien women probe the boys' minds, in the process learning about Gamera and his weak point from the boys. It is revealed that the Terran women plan to feed on the boys' brains in order to absorb their knowledge. In preparation for extracting Akio's brain for feeding, the women shave the boy's head. On a rescue mission, Gamera lands on Terra in search of the children. The women summon Guiron to attack the giant turtle. Guiron plans to cut Gamera in half, but Gamera grabs one of Guiron's front legs and bites it. Guiron tries to shake the turtle off.
Wrapping his tail around a monolith, Gamera hurls Guiron into a canyon, leaving his knife head stuck fast. Gamera uses his fire breath on Guiron. Guiron uses his shurikens to pierce Gamera's cheeks. The turtle tries to heal his wounds by grabbing ice-like rocks, but Guiron uses his shurikens again, and this time Gamera uses the longest rock to bounce the shurikens back into Guiron's own body. Guiron withdraws, while Gamera falls into a lake, unconscious and face up. Tom manages to free Akio but, in the process, inadvertently frees Guiron. No longer under the aliens' control, Guiron ravages the Terran city, even attacking his own mistresses as they try to flee to Earth. The knife-headed creature cuts the spaceship in half, mortally wounding Barbella; Florbella kills Barbella, declaring that the useless members of their society are put down. As Guiron attacks the base where the boys are imprisoned, Gamera awakens and renews his attack on the alien creature, finally slamming Guiron's head into the ground. Florbella tries to flee in a rocket, but Guiron cuts the vehicle in half and she dies as a result. Gamera catches one half of the rocket and lodges it in the shuriken launcher on Guiron's head. Gamera uses his fire breath on Guiron again, where the rocket is; the rocket explodes and splits Guiron in half. Gamera uses his fire breath to weld the alien ship back together and carries the ship and the two boys back to Earth. On Earth, the children are returned to their mothers and everyone bids farewell to Gamera as he flies off into the night. Cast Nobuhiro Kajima as Akio. Miyuki Akiyama as Tomoko. Christopher Murphy as Tom. Yuko Hamada as Kuniko. Eiji Funakoshi as Dr. Shiga. Kon Ohmura as Kondo. Edith Hanson as Tom's mother. Release Gamera vs. Guiron premiered in Japan on 21 March 1969. It was followed by Gamera vs. Jiger.
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,392
The Sands-Willets Homestead is a historic house and museum located within the Incorporated Village of Flower Hill in Nassau County, New York. It is operated as a historic house museum by the Cow Neck Peninsula Historical Society, is designated as a Village of Flower Hill Landmark and a New York State Landmark, and is listed on the National Register of Historic Places. Description Main House The Sands-Willets Homestead is a 20-room, shingled 2-story building with an enlarged porch and porte cochere. The west wing dates to about 1735. It was originally a four-bay, -story house with end chimneys over a full-sized basement. The main portion of the house is a Greek Revival–style dwelling built during the first half of the 19th century. When the home was built, it was the centerpiece of a 240-acre (97 ha) farm. At the time, the property stretched from Manhasset Bay at its western edge to Hempstead Harbor at its eastern edge, which was convenient for shipping produce to New York City and points beyond. Over time, sections of the farm were sold to developers and often turned into suburban housing developments, ultimately leaving the property nowadays with an area of less than . The Cow Neck Peninsula Historical Society purchased the home from Eliza Willets in 1976, for the purpose of preserving and restoring it and turning it into a museum, research, and educational center. Barn and garden A contributing barn and a garden are also located on the property. The barn, which dates to the late 17th century, was moved to the property in 1978. 2020s renovations and accessibility upgrades The Historical Society received a grant for capital improvements in December 2020. A $125,525 grant from the Robert David Lion Gardiner Foundation enabled the Society to renovate the building's porch and make other improvements, which enabled individuals in wheelchairs, or those pushing baby strollers, to share in tours. The renovations were made in ways that preserve the original heritage value of the house. In popular culture Scenes for the HBO series Boardwalk Empire were filmed in the Sands-Willets House. See also The George Washington Denton House - Another historic home in the Village of Flower Hill. This home is also listed on the National Register of Historic Places. The Thomas Dodge Homestead - Another historic house museum that the Cow Neck Peninsula Historical Society operates, located in neighboring Port Washington.
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,990
Q: NFC - Can I pass an NdefMessage to iOS? I am developing an app using NFC tech to pass some data to another Android device, but can I pass an NdefMessage to an iOS device, if the iOS device has an external NFC chip? A: At the moment there is no NFC chip on iOS devices... maybe on the iPhone 5 or external accessories
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,123
Kiszka szwedzka – a potato sausage, a traditional dish characteristic of the area around Wałcz. Kiszka szwedzka was entered on the List of Traditional Products of the Ministry of Agriculture and Rural Development on 6 March 2012. It may be produced only within the gmina of Wałcz. The history of the village of Szwecja goes back to 1590, when the area was populated with settlers from Pomerania (20 serf peasants of German nationality). Once the plant had become widespread in Europe, potatoes were the main component of their descendants' diet. A large distillery operated in the village, processing this raw material, which also became the basis of the local diet. After the Second World War the tradition survived (the distillery continued to operate), taken over by settlers from other parts of Poland. Originally, before the war, the kiszka was made from grated raw potatoes, barley groats, salt, rice, cinnamon and marjoram. Today it is made from raw and boiled potatoes, raw and smoked bacon, onion, semolina, eggs and spices. The mixture is stuffed into intestines and scalded for about 10 minutes. In June of each year the Świętojanki festival takes place in Szwecja, at which the kiszka can be sampled. The kiszka can be eaten both warm and cold. See also kiszka ziemniaczana – a traditional dish of Belarusian and Podlasie cuisine dzionie rakowskie – a traditional dish of the Świętokrzyskie region
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,968
The Weeknd, aka Abel Tesfaye, hails from Canada and was born on February 16, 1990. He rose to fame after releasing several songs on YouTube followed by several mix tapes in the early 2010s. November 13: The Weeknd released Trilogy. The compilation LP consists of remastered mix tapes (House Of Balloons, Thursday, Echoes Of Silence) from 2011 and 3 previously unreleased songs. December 1: The Weeknd topped the Billboard Top R&B/Hip-Hop Albums chart for 1 week with Trilogy. December 19: Trilogy was certified gold. May 16: Trilogy was certified platinum. August 25: The video for "Wicked Games" was nominated for 2 MTV Video Music Awards including Best Visual Effects and Artist To Watch. September 10: The Weeknd released his official debut LP Kiss Land. September 28: The Weeknd topped the Billboard Digital Albums chart for 1 week, Top R&B/Hip-Hop Albums chart for 1 week, and R&B Albums chart for 2 weeks with Kiss Land. January 26: The Weeknd was nominated for a Grammy Award for Best Rap/Sung Collaboration ("Remember You" with Wiz Khalifa). October 18: The Weeknd hit the ARC Weekly Top 40 helping Ariana Grande with "Love Me Harder." December 6: The Weeknd hit the Top 10 helping Ariana Grande with "Love Me Harder." January 3: The Weeknd topped the Billboard Rhythmic Songs chart for 5 weeks helping out Ariana Grande with "Love Me Harder." January 17: The Weeknd topped the Billboard Dance/Mix Airplay chart for 3 weeks helping out Ariana Grande with "Love Me Harder." February 28: The Weeknd hit the ARC Weekly Top 40 with "Earned It." April 11: The Weeknd hit the Top 10 with "Earned It." April 11: The Weeknd topped the Billboard R&B/Hip-Hop Songs chart for 2 weeks and R&B Songs chart for 14 weeks with "Earned It." April 25: The Weeknd topped the Billboard R&B/Hip-Hop Airplay chart for 13 weeks with "Earned It." May 2: The Weeknd topped the Billboard Rhythmic Songs chart for 3 weeks with "Earned It." May 9: The Weeknd topped the Billboard Radio Songs chart for 4 weeks with "Earned It." May 16: The Weeknd hit #1 for 1 week on the ARC Weekly Top 40 with "Earned It." May 23: The Weeknd topped the Billboard Pop Songs chart for 1 week with "Earned It." May 30: The Weeknd topped the Billboard Adult R&B Songs chart with "Earned It." June 20: The Weeknd hit the ARC Weekly Top 40 with "Can't Feel My Face." July 4: The Weeknd hit the Top 10 with "Can't Feel My Face." July 18: The Weeknd topped the Billboard R&B/Hip-Hop Digital Songs chart and R&B Songs chart with "Can't Feel My Face." August 1: The Weeknd topped the Billboard R&B/Hip-Hop Songs chart and Rhythmic Songs chart with "Can't Feel My Face." August 8: The Weeknd hit the ARC Weekly Top 40 with "The Hills." August 15: The Weeknd hit #1 for 5 weeks on the ARC Weekly Top 40 with "Can't Feel My Face." August 22: The Weeknd topped the Billboard Hot 100 chart, Radio Songs chart, and Pop Songs chart with "Can't Feel My Face." August 28: The Weeknd released Beauty Behind The Madness. August 30: The video for "Earned It" was nominated for an MTV Video Music Award for Best Male Video. The video for "Love Me Harder" with Ariana Grande was nominated for Best Collaboration. September 10: The Weeknd topped the UK LP charts with Beauty Behind The Madness. September 19: The Weeknd topped the Billboard 200 LP chart, Digital Albums chart, R&B Albums chart, and R&B/Hip-Hop Albums chart with Beauty Behind The Madness. September 19: The Weeknd hit the Top 10 with "The Hills."
October 3: The Weeknd topped the Billboard Hot 100 chart and R&B/Hip-Hop Songs chart with "The Hills." October 3: The Weeknd topped the Billboard Radio Songs chart and Digital Songs chart with "The Hills." October 10: The Weeknd performed on Saturday Night Live. October 15: Beauty Behind The Madness was certified gold. October 17: The Weeknd hit #1 for 1 week with "The Hills." November 21: The Weeknd hit the ARC Weekly Top 40 with "In The Night." November 22: The Weeknd won 3 American Music Awards for Favorite Soul/R&B Male Artist and Favorite Soul/R&B Album (Beauty Behind The Madness), and was nominated for Artist of the Year, New Artist of the Year, and Single of the Year ("Can't Feel My Face"). December 19: The Weeknd hit the Top 10 with "In The Night." December 31: The Weeknd topped the Billboard 2015 Year-End Chart Toppers as the Top Pop Hot 100 Artist, Top Pop Hot 100 Artist - Male, Top R&B/Hip-Hop Songs Artist, Top R&B/Hip-Hop Digital Songs Artist, Top R&B Songs Artist, Top R&B Digital Songs Artist, Top R&B Albums Artist, Top Rhythmic Songs Artist, and with the Top R&B/Hip-Hop Airplay Song, Top R&B Digital Song, and Top Adult R&B Song ("Earned It"), Top R&B Song ("The Hills"), Top Rhythmic Song ("Can't Feel My Face"), and Top R&B Album (Beauty Behind The Madness). February 1: Beauty Behind The Madness was certified 2x platinum. February 15: The Weeknd won 2 Grammy Awards including Best R&B Performance ("Earned It") and Best Urban Contemporary Album (Beauty Behind The Madness), and was nominated for Album of the Year (Beauty Behind The Madness), Record of the Year and Best Pop Solo Performance ("Can't Feel My Face"), and Best R&B Song (awarded to the songwriter) and Best Song Written for Visual Media (awarded to the songwriter) ("Earned It"). February 28: The Weeknd was nominated for an Academy Award for Best Original Song ("Earned It"). March 22: Trilogy was certified 2x platinum. April 28: Beauty Behind The Madness was certified 3x platinum. May 21: The Weeknd hit the ARC Weekly Top 40 helping out Belly with "Might Not." August 28: The video for "Can't Feel My Face" was nominated for 2 MTV Video Music Awards including Best Male Video and Best Visual Effects. October 1: The Weeknd hit the ARC Weekly Top 40 with help from Daft Punk with "Starboy." October 1: The Weeknd performed on Saturday Night Live. October 8: The Weeknd hit the Top 10 with help from Daft Punk with "Starboy." November 20: The Weeknd was nominated for 3 American Music Awards including Artist of the Year, Favorite Pop/Rock Male Artist, and Favorite Soul/R&B Male Artist. November 26: The Weeknd hit #1 for 2 weeks on the ARC Weekly Top 40 with help from Daft Punk with "Starboy." November 28: The Weeknd released Starboy. December 10: The Weeknd hit the ARC Weekly Top 40 with help from Daft Punk with "I Feel It Coming." December 17: The Weeknd topped the Billboard 200 LP chart, Album Sales chart, Digital Albums chart, R&B/Hip-Hop Albums chart, and R&B Albums chart with Starboy. January 7: The Weeknd topped the Billboard Hot 100 chart, Digital Songs chart, and R&B/Hip-Hop Songs chart with help from Daft Punk with "Starboy." June 8: Starboy was certified 2x platinum. August 27: The video for "Reminder" was nominated for 4 MTV Video Music Awards for Video of the Year, Best Direction, Best Art Direction, and Best Editing. The Weeknd was also nominated for Artist of the Year.
November 19: The Weeknd was nominated for 5 American Music Awards including Favorite Pop/Rock Album (Starboy), Collaboration of the Year ("Starboy" with Daft Punk), Favorite Soul/R&B Male Artist, Favorite Soul/R&B Album, and Favorite Soul/R&B Song ("Starboy" with Daft Punk). December 31: The Weeknd topped the Billboard 2017 Year-End Chart Toppers as the Top R&B Albums Artist. January 28: The Weeknd won a Grammy Award for Best Urban Contemporary Album (Starboy). February 10: The Weeknd hit the ARC Weekly Top 40 with Kendrick Lamar with "Pray For Me." March 3: The Weeknd hit the Top 10 with Kendrick Lamar with "Pray For Me." March 30: The Weeknd digitally released My Dear, Melancholy,. April 14: The Weeknd topped the Billboard 200 LP chart, Top Albums Sales chart, Digital Albums chart, Top R&B Albums chart, and Top R&B/Hip-Hop Albums chart with My Dear, Melancholy,. August 10: The Weeknd could be heard on the Nicki Minaj LP Queen on the track "Thought I Knew You." January 19: The Weeknd hit the ARC Weekly Top 40 helping out Gesaffelstein with "Lost In The Fire." January 30: Starboy was certified 3x platinum. March 18: Trilogy was certified 3x platinum and Kiss Land was certified gold. Tracks: "High For This" - "What You Need" - "House Of Balloons / Glass Table Girls" - "The Morning" - "Wicked Games" - "The Party & The After Party" - "Coming Down" - "Loft Music" - "The Knowing" - "Twenty Eight" - "Lonely Star" - "Life Of The Party" - "Thursday" (featuring Drake) - "The Birds Pt. 1 & 2" - "Gone" - "Rolling Stone" - "Heaven Or Vegas" - "Valerie" - "DD" - "Montreal" - "Outside" - "XO / The Host" - "Initiation" - "Same Old Song" (featuring Juicy J) - "The Fall" - "Next" - "Echoes Of Silence" - "Till Dawn (Here Comes The Sun)" Tracks: "Professional" - "The Town" - "Adaption" - "Love In The Sky" - "Belong To The World" - "Live For" (featuring Drake) - "Wanderlust" - "Kiss Land" - "Pretty" - "Tears In The Rain" Tracks: "Real Life" - "Losers" (featuring Labrinth) - "Tell Your Friends" - "Often" - "The Hills" - "Acquainted" - "Can't Feel My Face" - "Shameless" - "Earned It" - "In The Night" - "As You Are" - "Dark Time" (featuring Ed Sheeran) - "Prisoner" (featuring Lana Del Rey) - "Angel" Tracks: "Call Out My Name" - "Try Me" - "Wasted Times" - "I Was Never There" (featuring Gesaffelstein) - "Hurt You" (featuring Gesaffelstein) - "Privilege"
{ "redpajama_set_name": "RedPajamaC4" }
2,189
The 8th Southern African AIDS Conference will be held in Durban, KwaZulu-Natal province, from 13–15 June 2017. The one difference is that, before this year, a disturbingly large amount of your cash went not to other clients needing medical care, but into the pockets of executives. It is a political proposal in any case. And like every good political proposal, it is careful not to "gore" any political oxen. Her cost-containment program is cost-containment lite, because she fears alienating any key stakeholders, like the providers. You can't lower, or even stabilize, costs without key players getting less than they might have had. Probably the most dangerous part of some popular flowering plants that grow from bulbs is the bulb itself, because the plant toxins are concentrated in the bulb. Other parts of the plant also contain the toxins, however. NOTE: if you are on prescription medication, check with your doctor to find out whether there is any reason you should NOT use or eat grapefruit BEFORE taking it. Some prescription meds DO NOT work with grapefruit and can actually be lethal. Samsung Health helps create a balanced lifestyle by recording various details such as your food, caffeine and water intake. Kentucky Health News is an independent news service of the Institute for Rural Journalism and Community Issues, based in the School of Journalism and Media at the University of Kentucky, with support from the Foundation for a Healthy Kentucky. Republication of any KHN material with proper credit is hereby authorized, but if the republication is longer than a news brief we ask that it contain the first sentence of this paragraph. Thanks! Claiming this is not his job, he only wants to help people -electrical-se… a man who has no education about EMF sells books ($77). An evolving "disease-in-a-dish" technology, funded by the National Institutes of Health (NIH), is bringing closer the day when such a seemingly futuristic personalized-medicine scenario may not seem so far-fetched. Scientists have perfected mini cultured 3-D structures that grow and function much like the outer mantle (the key working tissue, or cortex) of the brain of the person from whom they were derived. Strikingly, these "organoids" buzz with neuronal network activity. Cells communicate with each other in circuits, much as they do in our brains. I know my grammar is not the best, so sorry to those I have offended. To those who have had bad encounters with nurses or any medical profession, sorry that you have judged a whole group of people from your bad encounter. And Sarah, sorry my education is so poor that it makes you so angry. The only things that politics "solves" are the political and politically related sector's desire for power, wealth and ego hits. I'm fairly certain that in less than a few decades from now, eating mass-produced meat will be as badly regarded as smoking is today.
{ "redpajama_set_name": "RedPajamaC4" }
6,237
Current Affairs Quiz – September 27 2017 1. Joao Lourenco has been sworn in as president of ___________ 2. The government has decided to increase the retirement age of central government doctors to ___ years from the current 60 years 3. Which of the following persons is not included in the "Self-made under 40" category in the Hurun India Rich List 2017? Vijay Shekhar Sharma Sandeep Aggarwal Divyank Turakhia 4. __________ has started testing the world's first two-seater self-flying taxi called Autonomous Air Taxi (AAT) for transportation 5. Indian star ____________ has made it to Forbes' annual top-10 highest-paid TV actresses list 6. Which state government will launch the 'Mathru Purna' scheme on October 2 to meet the nutritional needs of pregnant and lactating women in rural areas? 7. Which state government has launched a scheme for disabled soldiers and their dependents by providing credit at a concessional rate of interest to set up self-employment ventures? 8. Which state government has signed a MoU with South Korea for increased co-operation in infrastructure projects like development of smart cities, roads, airports and metros? 9. The Cabinet Committee on Security (CCS) has approved Rs _______ crore for police modernisation for three fiscal years till 2019-20 10. Which airport has bagged the 'best airport security' award from the World Quality Congress (WQC)? Visakhapatnam International Airport Sardar Vallabhbhai Patel International Airport Chhatrapati Shivaji International Airport 11. INS Tarasa, a Water Jet Fast Attack Craft, was commissioned into the Indian Navy in _________ 12. Which bank has launched 'Project Nishchay' in partnership with the Boston Consulting Group (BCG) to accelerate its turnaround program and improve financial performance? Andhra Bank 13. India ranked _____ in the global competitiveness ranking of 137 countries by the World Economic Forum (WEF) 14. The Minister of Communication Shri Manoj Sinha launched DoT's first ever Mobile, Internet and Technology event in India – India Mobile Congress 2017 – in ___________ 15. The "Incredible India!" campaign has been organised by the Embassy of India in ________ 16. When was World Tourism Day observed? 17. Who has been described as one of the most influential women in India in BBC's 100 Women list? Sakshi Malik
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,362
<?php /* * This file is part of the LightSAML-Core package. * * (c) Milos Tomic <tmilos@lightsaml.com> * * This source file is subject to the MIT license that is bundled * with this source code in the file LICENSE. */ namespace LightSaml\Model\XmlDSig; use LightSaml\Credential\CredentialInterface; use LightSaml\Credential\KeyHelper; use LightSaml\Error\LightSamlSecurityException; use RobRichards\XMLSecLibs\XMLSecurityKey; abstract class AbstractSignatureReader extends Signature { /** @var XMLSecurityKey|null */ protected $key; /** * @param XMLSecurityKey $key * * @return bool True if validated, False if validation was not performed * * @throws \LightSaml\Error\LightSamlSecurityException If validation fails */ abstract public function validate(XMLSecurityKey $key); /** * @return XMLSecurityKey|null */ public function getKey() { return $this->key; } /** * @param CredentialInterface[] $credentialCandidates * * @throws \InvalidArgumentException If element of $credentialCandidates array is not CredentialInterface * @throws \LightSaml\Error\LightSamlSecurityException If validation fails * * @return CredentialInterface|null Returns credential that validated the signature or null if validation was not performed */ public function validateMulti(array $credentialCandidates) { $lastException = null; foreach ($credentialCandidates as $credential) { if (false == $credential instanceof CredentialInterface) { throw new \InvalidArgumentException('Expected CredentialInterface'); } if (null == $credential->getPublicKey()) { continue; } try { $result = $this->validate($credential->getPublicKey()); if ($result === false) { return; } return $credential; } catch (LightSamlSecurityException $ex) { $lastException = $ex; } } if ($lastException) { throw $lastException; } else { throw new LightSamlSecurityException('No public key available for signature verification'); } } /** * @return string */ abstract public function getAlgorithm(); /** * @param XMLSecurityKey $key * * @return XMLSecurityKey */ protected function castKeyIfNecessary(XMLSecurityKey $key) { $algorithm = $this->getAlgorithm(); if ($algorithm != $key->type) { $key = KeyHelper::castKey($key, $algorithm); } return $key; } }
{ "redpajama_set_name": "RedPajamaGithub" }
1,315
/*------------------------------------------------------------------ [CSS3 Stylesheet] Project: CSS3 Design Contest Last change: 23.06.2010 [created blank stylesheet, vf] Designed by: Joel Schwarting Works in browsers: FF 3.5+, Opera 10.5+, Safari 4+, Google Chrome 4.0+, IE 9+ -------------------------------------------------------------------*/ body { background-color: #2574b0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; padding: 50px; } ul { list-style:none; } #load { border-top: 1px solid #FFFCF8; margin: 0 auto; padding: 5px 5px 5px 5px; width: 200px; /* border-radius */ -webkit-border-radius: 5px; -moz-border-radius: 5px; border-radius: 5px; /* box-shadow */ -webkit-box-shadow: rgba(0,0,0,0.5) 0px 1px 3px; -moz-box-shadow: rgba(0,0,0,0.5) 0px 1px 3px; box-shadow: rgba(0,0,0,0.5) 0px 1px 3px; /* background-gradient */ background: -webkit-gradient(linear, left top, left bottom, from(#B6B692), to(#99997B)); background: -moz-linear-gradient(top, #B6B692, #99997B); background-color: #B6B692; } #loadBarBox { background-color: #343434; border: 2px solid #141414; height: 20px; overflow: hidden; width: 196px; /* border-radius */ -webkit-border-radius: 5px; -moz-border-radius: 5px; border-radius: 5px; /* box-shadow */ -webkit-box-shadow: inset 0px 1px 5px black; -moz-box-shadow: inset 0px 1px 5px black; box-shadow: inset 0px 2px 5px black; } #loadingBar { width: 100px; overflow: hidden; height: 20px; } #bars { background-color: Black; height: 20px; margin: 0px; padding: 0px; width: 220px; } #bars li { background-color: #FFBF00; display: block; float: left; height: 30px; margin-left: 10px; margin-top: -5px; position: relative; right: 5px; width: 10px; /* transform-rotate */ -webkit-transform: rotate(-15deg); -moz-transform: rotate(-15deg); -o-transform: rotate(-15deg); transform: rotate(-15deg); /* animation: slide each bar left and restart, looping forever */ -webkit-animation-name: moveLeft; -webkit-animation-duration: 500ms; -webkit-animation-iteration-count: infinite; -webkit-animation-direction: normal; -webkit-animation-timing-function: linear; } /* animation for moveLeft */ @-webkit-keyframes moveLeft { from{right:5px;} to{right:25px;} } #awesome { border:1px solid black; float:left; font-size:32px; padding: 10px; width: 110px; /* border-radius */ -webkit-border-radius: 10px; -moz-border-radius: 10px; border-radius: 10px; }
{ "redpajama_set_name": "RedPajamaGithub" }
9,152
Q: Is there another solution to calculate data filtered by date, without a "with" clause? I need to calculate, per name, the sum of value for rows where date < first_date + interval 3 day in BigQuery. Example of data: +------+------------+----------+-------+ | Name | date       | order_id | value | +------+------------+----------+-------+ | JONES| 2019-01-03 | 11       | 10    | | JONES| 2019-01-05 | 12       | 5     | | JONES| 2019-06-03 | 13       | 3     | | JONES| 2019-07-03 | 14       | 20    | | John | 2019-07-23 | 15       | 10    | +------+------------+----------+-------+ My solution is: WITH data AS ( SELECT "JONES" name, DATE("2019-01-03") date_time, 11 order_id, 10 value UNION ALL SELECT "JONES", DATE("2019-01-05"), 12, 5 UNION ALL SELECT "JONES", DATE("2019-06-03"), 13, 3 UNION ALL SELECT "JONES", DATE("2019-07-03"), 14, 20 UNION ALL SELECT "John", DATE("2019-07-23"), 15, 10 ), data2 AS ( SELECT *, MIN(date_time) OVER (PARTITION BY name) min_date FROM data ) SELECT name, ARRAY_AGG(STRUCT(order_id as f_id, date_time as f_date) ORDER BY order_id LIMIT 1)[OFFSET(0)].*, sum(case when date_time< date_add(min_date,interval 3 day) then value end) as total_value_day3, SUM(value) AS total FROM data2 GROUP BY name Output: +------+------+------------+------------------+-------+ | name | f_id | f_date     | total_value_day3 | total | +------+------+------------+------------------+-------+ | JONES| 11   | 2019-01-03 | 15               | 38    | | John | 15   | 2019-07-23 | 10               | 10    | +------+------+------------+------------------+-------+ So my question: can the same be calculated in a more efficient way? Or is this solution OK for large datasets? A: The following gets the same results without using window functions or array aggregations, so BQ has to do less ordering/partitioning. For this small example, my query takes longer to run, but there is less byte shuffling. If you run this against a much larger dataset, I think mine will be more efficient. WITH data AS ( SELECT "JONES" name, DATE("2019-01-03") date_time, "11" order_id, 10 value UNION ALL SELECT "JONES", DATE("2019-01-05"), "12", 5 UNION ALL SELECT "JONES", DATE("2019-06-03"), "13", 3 UNION ALL SELECT "JONES", DATE("2019-07-03"), "14", 20 UNION ALL SELECT "John", DATE("2019-07-23"), "15", 10 ), aggs as ( select name, min(date_time) as first_order_date, min(order_id) as first_order_id, sum(value) as total from data group by 1 ) select name, first_order_id as f_id, first_order_date as f_date, sum(value) as total_value_day3, total from aggs inner join data using(name) where date_time < date_add(first_order_date, interval 3 day) -- <= perhaps group by 1,2,3,5 Note, this makes an assumption that order_id is sequential (aka order_id 11 always occurs before order_id 12) in the same manner that dates are sequential.
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,068
Doctor Who Pirate Planet novelisation announced By Dave Golder, 30 April 2015 Feature BBC Books has announced that the last Doctor Who story written by Douglas Adams which has yet to be novelised, "The Pirate Planet", will be adapted for publication next year by James Goss. The writer has also adapted another Adams Doctor Who script, "City Of Death", which is published on 21 May. Speaking exclusively to SFX, Goss says that he has one main aim with his version of The Pirate Planet, which was originally screened in 1978 as part of Doctor Who's Key To Time sequence: "I want people to love The Pirate Planet as much as they love City Of Death and Shada," says Goss. He admits, though, that it'll be a more difficult task than adapting the much-loved "City Of Death". "'The Pirate Planet' has that sort of slightly, 'Oh, maybe we can manage to take seven people in loin cloths out to a quarry in Wales' feel to it," he says of a story about a cyborg pirate captain (complete with robot parrot) who pilots a world-eating hollow planet around the galaxy. "The script is amazing but you do have this slight feeling that Douglas Adams has ended up in the middle of an episode of Blake's 7. And it's a very odd experience watching it. "I hope that doesn't make me sound like I'm slagging off 'The Pirate Planet' but 'City Of Death' is probably the most beautiful-looking the classic series ever managed to pull off. This was Doctor Who being chic and stylish and glamorous. Like everybody has just come in from a Noel Coward play. And it's just perfect. "But 'The Pirate Planet' just sort of sits there in a season that is trying desperately hard while grappling with incredible financial restraints. And sometimes that season pulls things off marvelously and sometimes that season just makes you go, 'Ooooh! Ow!'" If his plans come to fruition, though, Goss may be able to deliver something extra special with the novelisation of The Pirate Planet. "'The Pirate Planet' is very exciting," he explains. "There's a brilliant researcher called Jem Roberts, who has written a very good biography of Douglas Adams, and he has access to Douglas Adams's archive at St John's College in Cambridge. And I'm hoping he'll help me get into the archive, because when I was researching the Doctor Who 50th Anniversary coffee table book, every single quote that I managed to find about 'The Pirate Planet' referred to how much more there was of the story before the BBC turned up and went, 'No, this is too wild a sky for us.' "So I'm trying to find out how much there was that got thrown away. And if it did get thrown away is there an early version, or script, or treatment of it in the Adams archive? I could be inferring too much from those quotations, and all I'm going to discover is a few longer scenes here and there or a slightly more ambitious special effects shot. Or maybe, just maybe, there are masses of ideas sitting in a folder somewhere in the college. Ideas that I'll be able to weave in." City Of Death is published by BBC Books on 21 May. A release date for The Pirate Planet is TBC. For lots more Doctor Who coverage, why not subscribe to SFX?
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,127
## Michael Barber

* * *

### HOW TO RUN A GOVERNMENT

#### So that Citizens Benefit and Taxpayers Don't Go Crazy

## Contents

_Preface_

_Introduction: The Missing Science of Delivery_

In which the problem of getting things done in government is confronted.

1 Priorities

In which the importance of priorities and the dilemmas involved in setting them are examined.

2 Organization

In which proposals are made as to how to organize government so that the likelihood of successful delivery is enhanced.

3 Strategy

In which five paradigms of reform and the question of stewardship are explored.

4 Planning

In which the elements of good planning – rather than plans – are considered.

5 Routines

In which the importance of establishing a rhythm for driving delivery is emphasized.

6 Problem-solving

In which techniques for solving the problems that arise in any great enterprise are discussed and evaluated.

7 Irreversibility

In which the importance and difficulty of seeing things through are highlighted.

8 (Other People's) Money

In which means of ensuring that taxpayers' money delivers outcomes that citizens want are described.

Conclusion: The Future of Delivery

In which some emerging themes destined to reshape and strengthen government approaches to delivery in future are predicted.

_Appendix: The 57 Rules_

_Bibliography_

_Notes_

_Acknowledgements_

##### ABOUT THE AUTHOR

Sir Michael Barber is the co-founder of Delivery Associates and Chief Education Advisor at Pearson. Over the last two decades he has worked on government and public service reform in more than fifty countries. From 2001 to 2005 he was the first Head of the Prime Minister's Delivery Unit in the UK. His previous books include _Instruction to Deliver: Fighting to Transform Britain's Public Services._

_For Karen_

_Between the idea_

_And the reality_

_Between the motion_

_And the act_

_Falls the Shadow_

– T. S. Eliot, 'The Hollow Men'

## Preface

Why do we need a book about how to run a government? Isn't the world already flooded with books about government? Aren't there columnists in every serious newspaper – and now a host of bloggers too – churning out more political commentary, much of it excellent, every hour? Well, yes there are, but surprisingly very few of the books and very little of the commentary focus on how to run a government so that it delivers the change it has promised. In fact, there is a gaping hole where this should be. Governments' errors, needless to say, attract plenty of attention (as they should); as does the practice of politics – how to win and lose elections, how to gain and lose power – which is endlessly fascinating; the fortunes of individual politicians as their ambitions are fulfilled or (more often) dashed are an endless soap opera; and, as thinking about government and politics advances, ideas are debated too, sometimes even at a rarefied level. But for some reason the practices of governments that succeed in making real improvements to the lives of their citizens attract very little attention. Perhaps at first glance this is due to the way outcomes for citizens are ground out (and they usually are ground out), which looks a little dull compared to the gossip or the commentary or the controversy about a big idea. But in this book I aim to show that this is a totally false conception. The reality is that the process by which governments deliver results is not just fascinating, but also vitally important. Why?
Because, while what the president of the World Bank Jim Yong Kim calls the 'science of delivery' – the growing knowledge base about how governments can successfully deliver – is new and a phenomenon of the past decade or so, governments across every time zone are wrestling with the challenge of delivery on a daily basis. Across continents, their problems are remarkably similar as well as remarkably soluble, as the chapters that follow set out. In these pages there are stories and examples from every continent (except Antarctica), and every stage of development, which indicate practically how governments could massively increase their chances of success. As Jennifer Gold points out, 'For all the differences in the way various centres of government are approaching the task of improving implementation... across government, there are also important similarities.' Furthermore, history provides many examples of leaders of governments – kings and emperors, as well as presidents and prime ministers – who sought to improve the lives of their citizens and sometimes succeeded. As they did so, they weren't aware that one day they'd provide the historical evidence for an emerging science of delivery; nor, generally speaking, were the historians and biographers who wrote about them. Yet by sifting through the stories of, for example, Theodore Roosevelt, Horatio Nelson, Mohandas K. Gandhi and Henry VII, king of England, all of whom appear in these pages, one finds there, among the debris of historical writing, just as an archaeologist does among ruins, artefacts which, once assembled, provide insight into how governments of the future can get things done, often against the odds. Understanding how to run a government effectively is important because the success or otherwise of governments is fundamental to the prosperity and well-being of all of us, wherever we live. There is a tendency in the West, especially in the US, to see government as the problem, not least because a lot of the time government is hapless or worse. Government can be a problem, but you only have to look at what life is like when it breaks down to realize how important good government is. Also, we fool ourselves if we set up a false dichotomy between markets and governments, since the functioning of one depends absolutely on the other. Whether government is big or small is a political choice that countries can make but, whatever your politics, the effectiveness (or lack of it) of government is immensely important. As Theodore Roosevelt pointed out over a century ago, there is much more risk to markets from power being insufficiently concentrated than from its concentration in 'responsible and accountable hands'. His successor but one, Woodrow Wilson, made a similar point, in the process summarizing a very modern American argument: 'The English race... has long and successfully studied the art of curbing executive power to the constant neglect of the art of perfecting executive methods.' In short, the process of delivery is important to politics since democracy is threatened if politicians repeatedly make promises they don't then deliver; it is also important to citizens regardless of politics, because if government fails, their daily lives – education, health, safety, travel and parks, for example, not to mention the effective regulation of markets – are materially threatened. And it matters to the success of economies, at both national and global levels, because even where government is small, it takes up over 20 per cent of GDP. 
In many countries it is 40 or 50 per cent, and if it is unproductive it is a huge drag on economic growth. Moreover, as Matthew d'Ancona has argued, successful political leadership is becoming increasingly challenging as leaders face 'higher expectations of government, raised standards of accountability and media scrutiny more intense and unrelenting than at any time in history'. The challenge is acute as these leaders attempt to reconcile citizens to 'the thunderous forces unleashed by globalisation'. The perspective of the book throughout is from the centre of government looking out. The aim is to convey to the reader what it feels like to be in there looking at the world beyond and trying desperately to get something done so that the citizen benefits. From the outside, people at the heart of government look all-powerful; on the inside, they often feel helpless, stretched to and beyond breaking point by the weight of expectations on the one hand and the sheer complexity and difficulty of meeting them on the other. Political leaders of both talent and genuine goodwill, of which there are many more around the world than public commentary would have you believe, find themselves struggling to deliver their promises. Huge public bureaucracies, which is what government departments are, should in theory enable these political leaders to deliver outcomes for citizens; in practice they sometimes become barriers to implementation. It is in the interests of both transparency and understanding to describe, not just to experts but also to interested citizens, what these challenges are and how they might be overcome. This is the purpose of this book. As we all know, habits are hard to change. In governments, with deeply ingrained cultures and in the full glare of constant media attention, changing culture is harder still. But if our governments and public services are to provide the services and regulation on which our prosperity as individuals and as a global community depends, their cultures and processes will have to change. The chapters in this book describe the challenge governments face and the vital processes which, if adopted by governments around the world, would make a huge difference to all of us. They are the foundations of a science of delivery. Spread across the chapters are 57 Rules to follow. They are a summary of the main practical points in the book, and are gathered together, by chapter for ease of reference, in the Appendix. ## Introduction: The Missing Science of Delivery ##### FOUR CHARACTERS: ONE PROBLEM Viktor Chernomyrdin was the prime minister of Russia in those heady, rollercoaster years of the 1990s after Communism had collapsed and before Putin imposed his steadily tightening stranglehold. Throughout the ups and downs of his time in power, Chernomyrdin somehow kept his sense of humour, remarking on one occasion: 'We keep trying to invent new organizations, but they all turn out to be the Communist Party of the Soviet Union' and on another: 'It's never been this way before and now it's exactly the same again.' In spite of the hardships they endured, therefore, the Russian people had an affection for him that none of his predecessors or successors has enjoyed (or deserved, one might add). But his most famous saying of all is about the frustration of governing: 'We tried to do better,' he said, 'but everything turned out as usual.' Charles I was king of England, Scotland and Ireland from 1625. 
The Van Dyck portraits show the long hair, the wide and curling moustache, the epitome of the cavalier look, but nevertheless Charles was not one of royalty's great successes. Between 1638 and 1649, he managed to provoke separate but overlapping civil wars in all three of his kingdoms, ran out of money, upset the Parliament in England by trying (but failing) to arrest some of its members, steadily lost friends, and ended up losing a civil war. Finally, on 30 January 1649, a cold morning in Whitehall, he lost his head. Part way through this catalogue of error and ultimately disaster, he had a moment of deep insight. 'There's more to the doing,' he realized, 'than bidding it be done.'

Thomas Phillip O'Neill, known as Tip after a baseball player with the same surname, served as Speaker of the US House of Representatives from 1977 to 1987, and remains the only Speaker to serve for five consecutive congressional terms. With his swept-over shock of white hair and his unrivalled grasp of political wheeling and dealing, he was both a match and a foil for Ronald Reagan, whom he once described as 'the most ignorant man who ever occupied the White House'. Part of O'Neill's genius was to be able to maintain cordial relations with the president in spite of such remarks. On a later occasion, he said affectionately, 'I've known personally every President since Jack Kennedy and I can honestly say that Ronald Reagan was the worst. But he would have made a helluva king!' O'Neill had first been elected to the House of Representatives as long ago as 1952, and his razor-sharp mind had had time to absorb and distil the central challenges of governing. 'It is easier to run _for_ office than to run _the_ office,' he once said. Interviewed at the end of his long career by the Harvard political scientist Derek Bok, O'Neill was asked to comment on how politics had changed over the course of his career. 'The quality [of the people] is clearly much better,' he commented. 'But the results are definitely worse.'

Paul Corrigan is a good friend and was a close colleague and collaborator during my time in No. 10 Downing Street, when he was the special adviser to successive health secretaries. A quick phone conversation with Paul would result in a decision and action that, had they been pursued through the normal civil service channels, would have taken weeks and thousands of words in formal submissions. Paul's most memorable comment to me related to the quaint British custom under which whoever has won an election takes power right away. After the election on 1 May 1997, John Major, having lost, vacated No. 10 by the back door in the early afternoon of 2 May (and memorably went to watch cricket), while Blair and his entourage came in through the front door. 'It's funny,' commented Paul. 'You do the second most difficult thing in politics – which is to win an election – and then, without time even for a good night's sleep, you start to do the most difficult thing in politics, which is to run a country.'

Four people, separated by time and space, with senior (in some cases very senior) roles in government, and all in different ways commenting on the difficulty of getting things done.

##### THE EMERGING SCIENCE

There are countless books and manuals on every aspect of elections and campaigns, on policy and policymaking and on ideas and shaping public opinion, but on how to get things done in government there is almost nothing. No manuals. Virtually no academic literature.
Surveying the academic literature of political science, I once explored the field known as implementation studies. There are worthwhile academic debates there, which I have nothing against, but almost no guidance on the practical implications; and if you summed up the entire field – which, no doubt rightly, comes from a critical perspective – in a single sentence, it would be 'Nothing works.' Viktor Chernomyrdin all over again. There are plenty of business books on 'execution', but their messages don't necessarily translate into the context of government, and in any case they are often written with an underlying contempt for public officials.

Fortunately, in the past decade or so, a new field has begun to emerge, partly I'm glad to say in response to the establishment of the Prime Minister's Delivery Unit (PMDU), which I founded in Tony Blair's second term (2001–5). The PMDU has both its supporters and detractors, as you might expect, but there's real value in the debate. More importantly, the PMDU also has its emulators – delivery units or their equivalent have sprung up in many countries at different levels in political systems. In North America, the mayor of Los Angeles, Antonio Villaraigosa, was one of the first to follow the path. Martin O'Malley, governor of Maryland, was another. So too was Dalton McGuinty, premier of Ontario from 2003 to 2013. In Europe, the Dutch government of Jan Peter Balkenende sought to learn from the PMDU experience and adapt the approach. David Cameron's Conservative Party in Britain dismissed some of the Delivery Unit's experience, but soon after being elected found itself establishing an Implementation Unit. In part this may have been a result of a conversation I had with the brilliant Steve Hilton about six months after Cameron, with Steve as his chief 'blue skies' thinker, had moved into No. 10. 'I know we disparaged targets and delivery and all that when we were in Opposition,' Steve began, 'but now we've been here a while, we have a question: How did you do it?' He had learned Paul Corrigan's lesson. 'You've learned fast,' I replied. 'It took Blair four years to learn the same thing.' In 2014, with the experience of four hard years governing, Cameron moved to further strengthen his Implementation Unit, appointing the excellent Simon Case to run it.

It is not only in North America and Europe that there is emulation. Malaysian Prime Minister Najib Razak created Pemandu (Malay for 'driver') on the PMDU model. Chief Minister Shahbaz Sharif has applied the approach to education and health in Punjab, Pakistan. Sierra Leone's President Koroma has used a delivery approach to drive health outcomes, while in Chile, Colombia and some states of Brazil, there have been experiments along these lines. In the Middle East, Kuwait established a delivery unit too. Melanie Walker, head of the President's Delivery Unit at the World Bank, claims to have counted fifty-eight delivery units or equivalent around the world.

Not all of these attempts at emulation have worked. The Malaysian and Ontarian efforts referred to have been outstandingly successful; the one in Kuwait much less so. The point is, though, that each of these attempts at establishing units whose primary focus is delivery or implementation, rather than policy or strategy, has enabled us to learn much more about what works and what doesn't. From this perspective, the failures are as valuable as the successes. As a result, we are beginning to have a much deeper understanding of the issues.
Political scientists such as Steve Kelman at Harvard, Paul C. Light at New York University and Gwyn Bevan at the London School of Economics, some of whom had been beavering away for years on related subjects, have put delivery or implementation at the centre of their work. The towering figure of political science, Francis Fukuyama, has told me he plans to make the capacity of a state to implement central to the next phase of his work. A field is at last beginning to emerge.

After I had left the Blair administration in 2005, I wrote the story of the establishment and impact of the PMDU in an attempt to interest a wider audience both in the riveting ups and downs of four years at the heart of government and in the challenges of getting things done. Shortly after that book, _Instruction to Deliver_, was published, my friends in the US education reform movement urged me to bring this thinking across the Atlantic to assist with improving the outcomes of America's schools. Joel Klein seized the ideas and applied them in his heroic and remarkably successful drive to improve New York City's schools. Similarly, Paul Pastorek and Paul Vallas drew on this thinking in creating the Recovery School District in New Orleans. With the help of my friends, I then founded the US Education Delivery Institute, which has been assisting more than a dozen US states by applying delivery thinking to their education systems. As part of this work, I wrote, with colleagues, the first practical guide to driving delivery, specifically aimed at US education, though the techniques it describes are generally applicable to government. We called the book _Deliverology 101_.

'Deliverology' originated as a term of gentle abuse in the UK Treasury as shorthand to describe the work and techniques of the PMDU. Just as in the past 'Whig' and 'Tory' had transmuted from being insults to badges of honour, so we decided to adopt deliverology as our rallying cry. The implication of the -ology suffix was that something akin to a science was emerging. And indeed that is what I believe.

More importantly, a health academic who had become president of Dartmouth College was beginning to believe it too. During his tenure at Dartmouth, Jim Yong Kim took to carrying _Deliverology 101_ around with him – or so he told me when we met. Its rigour appealed to him as someone who in his work as an academic had tried to change the facts on the ground, rather than simply write about them. Along with Paul Farmer, he founded Partners in Health which, in places such as Haiti and Peru, had enormously beneficial effects for hundreds of thousands of people. In some ways, Jim Kim was an unlikely candidate to be president of the World Bank. In the book he co-edited, _Dying for Growth_, published in 2000, he and his colleagues had questioned the conventional wisdom of the World Bank and other global institutions. They pointed out that, 'while the proportion of people in good health may be greater than 50 years ago, the absolute number of people suffering from preventable diseases with little or no access to healthcare has risen dramatically in the same period'. Maybe it was this iconoclasm that recommended Jim Kim to President Obama who, in March 2012, nominated him as the next President of the World Bank, a post he took up in July 2012. _Deliverology 101_ went with him and, not long into his tenure, he made a speech in Korea, the country from which his parents had emigrated, in which he set out the case for a new science, 'the Science of Delivery'.
Some months later, writing in _Voices on Society_, he put the case in plain terms:

> Over the past few centuries, evidence-based delivery systems have revolutionised our lives. They have shown us what can work. The problem is that we still lack a framework for systematically understanding what does work in a given time and place, and for holding officials accountable to that standard. Now development agencies can fulfil their public trust by creating a science of delivery that will compile global delivery knowledge and mobilise it for practice.

This is a bold and timely vision. Indeed I would say it is extremely urgent. It is not simply that in too many countries development policies are clearly not working and governance is demonstrably poor; it is also evidently the case that even in relatively well-governed countries, government is often both inefficient and ineffective. This has a massive economic effect which, especially in a time of austerity, is highly problematic. Worse still, it leads citizens to question the value of paying taxes, to be sceptical of government in general and in the worst cases to doubt the value of democracy. In America, frustration with federal government has rarely, if ever, been greater, so much so that Paul Volcker, the distinguished former Chairman of the Federal Reserve, has (in his late eighties) dedicated the next phase of his career to 'working for effective government'.

A readable summary of what we know – a first sketch of a science of delivery – then, is needed not just for the developing world as Jim Kim suggests, but globally. And the sooner the better. There is another reason why it is needed. Delivery units have become fashionable: the words are regularly used, but few people know the secrets that distinguish those that succeed from those that don't.

This book is an attempt to summarize, on the basis of both my direct experience and the growing evidence, what works in delivery. I certainly don't pretend it provides all the answers, not least because we don't yet have all the necessary knowledge. The science is far from complete and, based on human relationships as politics and government inevitably are, it never will be. However good the science becomes, the art involved will remain significant. Nevertheless, I will make two claims. First, this book will help to map the territory, setting the agenda for those in government who want to deliver, for those in universities who want to research delivery and for those citizens who would like to see governments succeed. Second, if the agenda set out in chapters 1 to 7 were systematically applied by governments around the world, outcomes across a range of services such as policing, health and education would improve dramatically, the value realized from taxpayers' money would be substantially increased and citizens would have much more confidence in government than they currently do. Chapter 8 – about effective use of public money – is just as important, but so far the territory it covers is less well understood or applied. To sum it up, the application of the delivery knowledge we already have would make the world a better place. If this knowledge had been readily available to Viktor Chernomyrdin, he might have ended his career somewhat more optimistically... and maybe if he'd had it too, Charles I would not have lost his head.
##### THE VALUE OF GOOD GOVERNANCE

In 2013, one of the great global debates in political and economic science was about the future of India, a country where close to one in five of the world's population lives. Two leading Indian intellectuals took up the cudgels for radically opposing views on the way forward. Jagdish Bhagwati, a towering Indian-American scholar based at Columbia University, teamed up with Arvind Panagariya to write _Why Growth Matters_. Their basic argument was that economic growth in India only really took off after the 1991 deregulatory pro-market reforms; before that it had been far too slow to meet the needs of the growing Indian population. Further, they argued that poverty reduction, far from being set back by these liberalizing reforms, has actually been enhanced. In effect, they claimed, unless you grow the cake, you have no chance of giving the poor a significantly bigger slice. Finally, they argued that the need of the present time is to further extend those liberalizing pro-market reforms, including bringing quasi-market pressures into traditional public sector areas through measures such as the use of vouchers in the school system.

Ranged against this case was the equally towering figure of Amartya Sen, the Nobel Prize-winning economist who, in collaboration with Jean Drèze, published _An Uncertain Glory: India and Its Contradictions_. Their analysis and prescription are very different. They argue that while, yes, there has been impressive growth since the early 1990s, the major problem is that the benefits of this have been unevenly distributed. They are frustrated that the debate about India's future, especially in the media, is one in which the wealthy are talking among themselves about themselves. 'What is remarkable,' they say, 'is not the media's interest in growth rates [which have declined recently], but its near-silence about the fact that the growth process is so biased, making the country look more and more like islands of California in a sea of sub-Saharan Africa.' Their solutions are much greater empowerment of the poor, especially women, so that they seize a much bigger voice in public debate, and reform of the public sector, which is necessary, they argue, because 'the general state of public services in India remains absolutely dismal, and the country's health and education systems have been severely messed up'. Above all, they urge a much more equitable distribution of power.

While the prescriptions in the two books are very different and this battle of titans enlivened the pages of academic and popular papers as well as the airwaves, the truth is there is quite a lot they agree on:

* Current growth rates are too low.
* There is a great deal of inequality.
* Public services are 'messed up'.

There is then a legitimate political debate, which is the classic one around the world, not just in India, about whether to depend largely on the market and 'the hidden hand' to solve these problems, or whether to rely on an extension of the public and social sectors (as in Bangladesh) to address them. There is no right answer here – it is exactly the kind of debate that should take place in a democracy. Crucially, though – and this is my central point at the start of this book – neither Bhagwati's solutions nor Sen's will succeed unless government in India, at federal and state levels, becomes much more effective than it currently is.
Even with the more limited role that Bhagwati would recommend for the state, its effectiveness is critical, for example, to enforce property rights, regulate markets or fund and oversee – if not always provide – health and education services. From Sen's perspective, government effectiveness is more important still. Following India's 2014 election, we will see how Narendra Modi, the new prime minister, fares. The _Economist_ says he is 'a strong-willed moderniser' who likes setting targets and that 'measurably better performance' is what excites him. If so, the chapters that follow should excite him too.

This is the point of the emerging science of delivery. Whether your political preference is for a minimalist state or a much larger one, you have an interest in government being effective at what it does. As William Easterly puts it, 'The debate about market versus government is... the wrong debate.' Both are essential. After all, we live in an era where demonstrating outcomes, whether you are a business or a government, is increasingly important, and the science of delivery will help make that possible.

The argument in this book applies both to democratic countries (such as Canada, whose province of Ontario appears later in the book), and more generally to countries committed to the rule of law and the accountability of government. In some cases, democracy might not be fully in place, and indeed the prospects may be uncertain but, at the time of writing, I have taken the view that the direction of travel is right. The importance of delivery to emerging democracies – and the risks of failing – are brilliantly captured by Ryszard Kapuściński's description of the tragedy of 'the honest and patriotic post-colonial leader' who faces the:

> ... terrible _material resistance_ that each one encounters on taking his first, second and third steps up the summit of power. Each one wants to do something good and begins to do it and then sees, after a month, after a year, after three years, that it just isn't happening, that it is slipping away, that it is bogged down in the sand... The politician begins to push too hard. He looks for a way out through dictatorship. The dictatorship then fathers an opposition. The opposition organises a coup.
>
> And the cycle begins anew.

The path to accountable government is that much easier to walk if governments succeed in delivering at least some of what they have promised. I am not recommending the content here to blatant autocracies or 'extractive' regimes interested purely in enriching themselves, though of course I can't be sure that some of them won't read the words. This is important because my belief is that the case made in the book has a moral purpose. More people are more likely to lead more fulfilled lives if they live in countries with effective, accountable governments which can enforce basic individual rights and deliver effective public goods.

As Daron Acemoglu and James Robinson argue in _Why Nations Fail_, their monumental analysis of why some nations succeed and others don't, effective and accountable governance is the key difference between those countries that pursue inclusive growth strategies and those that use the state as a means of enriching the elite. Meanwhile, Francis Fukuyama points out, in _The Origins of Political Order_, that for countries to break out of 'dysfunctional equilibrium' requires either a radical change in economic and social circumstances or great leadership, or both.
Shahbaz Sharif, chief minister of Punjab – who appears a number of times in this book – is a good contemporary example of someone seeking to provide precisely that kind of leadership. In _Political Order and Political Decay_, which builds on his previous volume, Fukuyama takes the argument a stage further. While the state, rule of law and accountable government are prerequisites of success, they are not enough, especially as citizens' expectations rise. As he says, referring to Turkey and Brazil but in an argument that applies more generally, 'Government actually had to deliver better results if it was to be regarded as legitimate.'

A common criticism of deliverology in its brief life is that it is necessarily top-down and therefore in some way encourages government to become overbearing. This is emphatically not the case, and is a theme that will emerge throughout the chapters that follow. Of course, this book is written from the perspective of people at the centre of governments – whether federal, national, provincial or local – but it is a mistake to deduce that because it takes that perspective it must necessarily lead to top-down or centralizing reform. Indeed, if a government was elected with the sole purpose of empowering local communities and reducing its own influence, the science of delivery would help it succeed. Crucially, by applying what we know, the capacity of government to deliver inclusive growth will be significantly enhanced. That is because the contents of this book are not a policy prescription but elements of a process by which a government can massively enhance its capacity to deliver outcomes for citizens, by learning rapidly as implementation occurs and adjusting its approach in the light of that learning. 'Governing is messy and complicated and difficult and no system of government can change that,' says Daniel Finkelstein, but he adds, crucially, that 'politics can be improved and of course change is possible'.

Planning and targets, both of which are examined and recommended in later chapters, got a bad name from Stalin's five-year plans because in that infamous case the targets were unashamedly top-down and those involved in implementation manipulated the data to fit the plan and distorted the truth to (appear to) succeed. The science of delivery – recommended here – is not just distinct from this; it is the polar opposite. It says get started, learn fast from the real world, understand the messy reality and adjust the plan accordingly.

In summary, the science of delivery sees the world from inside government looking out and is:

* Valuable whether you want a smaller or larger state.
* Either top-down or bottom-up, or something in between, according to choice.
* A disciplined process rather than a policy prescription.
* An important ingredient in the future of accountable government.

With that context set out, we are ready.

## 1

## Priorities

##### THE CHALLENGE OF GOVERNMENT

Robert Arthur Talbot Gascoyne-Cecil, Third Marquess of Salisbury, was the British prime minister on three occasions, and for a total of over thirteen years. It was under his leadership that Britain saw in the twentieth century. The portraits reveal a broad forehead, shrewd, dark eyes and a very large, classically Victorian, beard. His appearance exudes stability. Given that his era witnessed Britain at the height of its imperial power and Queen Victoria's diamond jubilee, stability was exactly what Salisbury set out to provide.
He summarized his beliefs in a sentence which, as a definition of true conservatism, has never been bettered. 'Whatever happens,' he said, 'will be for the worse, and therefore it is in our interest that as little should happen as possible.'

Salisbury was by no means the only leader back then who aspired to do very little. William Evarts, secretary of state in the administration of US President Rutherford B. Hayes (1877–81), admonished him once by saying, 'You don't sufficiently realise, Mr President, the great truth that almost any question will settle itself if you only let it alone long enough.' Fifty years later, the Americans elected another president who preferred to let things alone. Calvin Coolidge, Silent Cal as he became known, made his case clear, long before he became president, in a letter to his father: 'It is much more important to kill bad bills than to pass good ones.' As his biographer, Amity Shlaes, puts it, 'Congress always says "Do". Coolidge replied, "Do not do", or at least, "Do less".'

For these leaders, for whom success is doing as little as possible, understanding the science of delivery is less important (though it wouldn't have done them any harm and, as we shall see later, Coolidge was actually an exponent of parts of the delivery approach). For every other kind of leader in government, it is central. They have an agenda, a set of commitments, beliefs about how they would like the world to be different, but they are not necessarily equipped with the knowledge, skills and understanding to get the job done. As Margaret Thatcher cried in exasperation to her advisers just before she was elected prime minister, 'Don't tell me what. I know what. Tell me how.'

In the modern world where the pace of change is unrelenting and the demands of the electorate are insistent, the Lord Salisbury philosophy looks increasingly anachronistic. Citizens expect immeasurably more of government than they did a century ago, and they (or the media on their behalf) are only too quick to complain if, as often happens, those expectations are dashed. And when the world changes, especially in a crisis, they turn to government. As Keith Joseph, Margaret Thatcher's mentor, so memorably put it: 'The first words a baby learns in this country are "What's the government going to do about it?" '

In short, delivery matters. And the first question political leaders have to ask themselves is what exactly they want to do. It's one thing to have a broad agenda or view of the world; it's quite another to turn that into a practical programme for government.

RULE 1 HAVE AN AGENDA (even if, like Lord Salisbury, it is to do nothing)

##### PRIORITIES

Every successful business leader will tell you that unless a business is clear about its priorities it will struggle to succeed. The same is true for a government. As the mid-twentieth-century firebrand Labour minister Aneurin Bevan memorably put it, 'The language of priorities is the religion of socialism.' Actually, not just of socialism.

Prioritization is easy to advocate but difficult to do. It requires great discipline, not least because – by definition – establishing what the priorities are also means establishing what they are not. The model leader in this respect in my experience is Najib Razak, who became prime minister of Malaysia in 2009 and was returned in an election in 2013. He had told me before the election that business as usual would not be good enough; he wanted transformation.
Once he became prime minister, he engaged in a consultative process with his cabinet to arrive at six national priorities (or National Key Results Areas – NKRAs – as they became known). They included rural basic infrastructure, crime reduction and education – but not health. From that collective decision, through to the election in 2013, the government genuinely prioritized the six NKRAs with time, energy and commitment.

Not prioritizing something doesn't mean not doing anything at all. As Tony Blair used to say to me, 'There are priorities, and things you just have to do.' This is one of the differences between government and business. A business can choose a focus and close down or sell off the parts of the business that don't relate to that focus, but Najib Razak could not sell off his health department because that is clearly a function of modern government. In the non-priority areas, the function still needs oversight and management. There was – and still is – a health minister and budget in Malaysia, of course, and the minister was encouraged, as were all the ministers, to set departmental priorities. Twice a year the prime minister holds each minister, priority or not, to account for what they've done. Similarly, in Britain under Blair, while literacy and numeracy at primary level were priorities for the government, that did not mean that science or arts teaching stopped or that no one paid attention to the wider agenda.

When Gordon Brown became prime minister in the summer of 2007, it soon became evident that, in spite of having spent many years aspiring to the top job, he did not establish clear priorities. He was, it is true, soon overwhelmed by the worldwide financial and economic crises in which he played a vital global role, but by then the absence of clarity about his priorities had left him, in Bob Dylan's words, 'condemned to drift or else be kept from drifting'.

By contrast, by 2001 when Blair set up the Delivery Unit, he had become very clear about his priorities. With No. 10 Policy Unit colleagues, I went to him in my first week in the job with a selected list of goals or targets. Sitting in the sunshine outside the Cabinet Room, Blair took a pen and put a line through numerous suggestions, leaving a few: 'I want the Delivery Unit focused on issues of real salience... for example, in transport, I only want Michael to sort out the railways.' With the frustrating experience of his first term behind him, he knew how much focus it would take to get some tough things done. (And 'only sorting out the railways' took three years or more of unremitting effort!) Moreover, because he had just returned from the election campaign, he also knew what was uppermost in the minds of the British people.

If anything, Margaret Thatcher was clearer sooner about her priorities than Blair, and more determined to make progress on them in her first term. As one of her more radical ministers, Nicholas Ridley, put it: 'She was adamant she would not start down this sort of road [welfare reform] at the beginning. There was enough to do sorting out industry, the economy, taxation and the trade unions.' Her own summary was even more succinct: 'The supply side must come first.' So, for her first and second terms, the priorities were set. Welfare, health and education reform had to wait. Her judgement seems to have been vindicated by history – those supply-side reforms have not been reversed and without doubt changed Britain for the better.
RULE 2 DECIDE ON YOUR PRIORITIES (really decide)

##### AMBITION AND A MAP OF DELIVERY

It is one thing to decide your priorities; it is another to decide how ambitious you want to be about them. How much change do you want, and how fast? In political circles, a favourite phrase is 'underpromise and overdeliver', and of course there is a point to this. It implies managing expectations, setting some achievable, modest goals and then doing better than expected – the idea being that the electorate will be duly impressed. In times of rising prosperity and broad goodwill towards government, this is a plausible scenario. It was the approach taken by the government of Steve Bracks, premier of Victoria, Australia, early in the twenty-first century: his White Paper _Growing Victoria Together_ was an agenda for steady progress towards modest targets. The boat wasn't rocked and Victoria advanced. Similarly, a decade or more after the Second World War was over, the British government of Harold Macmillan was claiming 'You've never had it so good.'

But in tough times – such as the present era of austerity – or in the case of a government with a radical reforming agenda and a leader who aspires to transformation (and perhaps a place in history), underpromising and overdelivering is too cautious an approach. Historical perspective suggests that in the 1950s Macmillan might (I would argue should) have done more to tackle the underlying structural weaknesses of the British economy, which were brutally exposed in the 1960s and 70s.

Sometimes the situation demands clear priorities and bold ambition. This is one reason why history judges war leaders as great. Lincoln's commitment to saving the Union and (ultimately) ending slavery provided a clear mission and called for unwavering ambition – literally whatever it took. The same is true of Churchill's determination first to save the island nation and then to defeat the scourge of Nazism. In peacetime, though, the degree of ambition depends in part on the situation and in part on the courage of the political leader. Margaret Thatcher's case combined both. She was elected at a moment of economic and social crisis, but when the worst of it was past and many of her ministers were urging her to take a softer line, she said famously, 'You turn if you want to. The lady's not for turning.' Similarly, in his second term, Blair was determined to bring about irreversible structural reforms of both health and education, and hence set ambitious goals for the Delivery Unit. Najib Razak in Malaysia opted for transformation rather than a quiet life. A leading influence on Najib Razak has been Idris Jala, whom we shall meet several times later in this book. He argues that unless targets 'really, really stretch', they are not worth getting out of bed for. 'Set goals you yourself think you cannot meet,' he told me, disarmingly.

The degree of ambition also depends to an extent on political calculation. President Obama chose to give high priority to healthcare reform and was prepared to pay a heavy political price to get it through (and did!). Getting the law passed was very difficult, but turning the legislation into real gains on the ground proved harder still. The dilemma for political leaders in the twenty-first century is that people are impatient for results. If they don't come, the pressure from the people (and the media) intensifies, and political support can crumble.
Thus, for reforming political leaders, the paradox is that they have to have a long-term strategy if they are to secure irreversible reform; but unless they deliver short-term results, no one believes them. On the horns of this dilemma some politicians (and in the developing world, donor agencies) revert to announcing 'initiatives', which they hope will convey an impression of activity but which do not result in transformation. (I have spent much of the past ten years trying unsuccessfully to abolish this sense of the word 'initiative'.)

The Map of Delivery, which I originally drew for the Blair cabinet in 2002, is meant to assist with deciding the degree of ambition for a whole government, a leader or an individual minister.

Figure 1: Map of Delivery

The vertical axis deals with the staple of political debate in most political systems – how bold or radical do we want to be? Some politicians and their advisers are bold by nature, others more cautious. When John F. Kennedy made his famous commitment that the US would land a man on the moon by the end of the 1960s, it captured people's imagination around the world – but most of his advisers had counselled against it. In fact, a relatively junior official had inserted the proposal into a draft at the last minute, and it was by luck that this draft got to Kennedy without it being deleted by one adviser or another. Kennedy, of course, loved the ambition!

Often the debate between boldness and caution is played out between politicians on the one hand and career civil servants on the other. Part of the value of career civil servants, after all, is that they have seen it all before. While politicians come and go, they are permanent. So the sight of a naïve – in their view – politician making an unreasonable – in their view – commitment is grist to their mill. Their ingrained cynicism often leads them to advise a more cautious approach (which is also incidentally likely to be less work). They avoid the 'controversy without impact' in the top left corner of the map by retreating down the vertical axis. The writers of the BBC programme _Yes, Minister_ created an entire, and memorable, comedy series from precisely this dilemma. Perhaps a pilot study instead of a full-scale rollout? Maybe some more research first? Or slow down the timetable for phasing it in? In the old days in Britain, there was always another option too – perhaps we could try it out in Scotland first?

Evidence-based policymaking, while obviously a good thing on the face of it, plays into the case for caution too because, by definition, the evidence relates to the past and often recounts numerous failures. Too often it is used to justify incrementalism or delay. As one of the characters in Boris Pasternak's classic, _Doctor Zhivago_, concludes, 'Yuri Andreevich was in too much of a hurry to establish ahead of time the failure of the efforts he made, announcing too confidently and almost with satisfaction the uselessness of any further attempts.' And, by definition too, the research is never complete and cannot tell you whether an innovation will succeed or not. This is not an argument for ignoring the evidence. On the contrary, it should always be taken into account, but it does not and cannot replace the need for judgement (which is what we elect politicians to exercise).

In any case, the Map of Delivery only really comes into its own when the horizontal axis is brought into play. The picture looks different now; it becomes possible to break out of the old debate.
A cautious idea, implemented well, might provide exactly those short-term results that a transformative political leader needs in order to show that he or she is on the right track. Meanwhile, for a bold idea, the map asks the insistent question: How will you get it done? In addition, it enables a government or an individual minister to think strategically. Anything on the left of the map needs to be moved, over time, across to the right. You might like to have a 'controversy without impact' on the way because, as Blair used to say, a good row provides definition and engages people, but in the end you want to ensure you deliver outcomes. However, even the boldest politician won't want everything in the 'transformation' box if he or she is wise, because the risks would be too great. Similarly, if everything is in the 'improved outcomes' box then the programme would be too incremental. Prioritization again. And sequencing. Both are vital to effective delivery. In the Blair health reforms, getting waiting times for routine surgery down delivered the short-term results that built confidence, while introducing patient choice to a state-owned monopoly over time was truly transformational. Similarly, Joel Klein as Chancellor of New York City Schools delivered rapid early improvements in elementary school test scores, which built public confidence in his programme, before embarking on much more radical, quasi-market reforms which have become irreversible.

In the end, though, no amount of analysis can replace the need for political courage. Sometimes what Margaret Thatcher called the 'calculated bounce' is what is required. I discovered this myself once when, the day before I appeared in front of a parliamentary committee, Tony Blair announced on television that we would halve the number of illegal asylum seekers within six months. He was responding to sustained public and media pressure and decided to 'bounce' the system (and me). When I told him later that day that I had had to tell the committee that I had not known his announcement was coming, he replied simply: 'I don't know how you could have known; I didn't know myself until I said it.' (We hit the target six months later, incidentally.)

Sometimes it's not so much a matter of a bounce as of a leader setting a big, impossible-looking goal in order to transform expectations. Not all politics is 'the art of the possible', R. A. Butler's famous phrase; the pursuit of the impossible matters too. Kennedy and the moon landing has already been cited; eradicating smallpox also looked impossible once; and, at a more mundane level, no one believed Shahbaz Sharif, chief minister of Punjab, when he announced in 2011 that he aimed to achieve universal primary enrolment in time for the 2015 Millennium Development Goal. It remains to be seen whether he will, but progress since the announcement has been remarkable. In part it is a question of initiative. As one public servant said to my friend and colleague Simon Rea, 'You can sit and watch the garden grow or you can get out there and be the gardener.' George Bernard Shaw made the point best: 'The reasonable man adapts himself to the conditions that surround him... The unreasonable man adapts the surrounding conditions to himself... All progress depends on the unreasonable man.' Like it or not, we need unreasonable political leaders sometimes.

RULE 3 BE UNREASONABLE (sometimes) AND USE THE MAP OF DELIVERY

##### TARGETS

Priorities and ambition are necessary for transformation, but not sufficient.
It is also necessary to define more precisely what outcome is intended. This is where targets come in.

I shouldn't have been surprised, I suppose, given the role I had in the Blair administration, but I hadn't expected to become so personally identified with targets and governments setting them. In some corners of the media I was even called 'Mr Targets'. In others, the criticism was sharper still. After I had made a presentation at one of Tony Blair's press conferences, the then _Times_ columnist Simon Jenkins, never one to moderate his tone, described me as 'a control freak's control freak... a Great War general sitting in a chateau counting "targets" as they go over the top and then counting them back'. The comment reveals just how controversial the targets had become. People who for years had demanded that governments make specific promises (i.e. set targets) and then make transparent whether progress towards them is being made (i.e. publish data) suddenly found that targets were 'top-down', 'imposed' and 'distortionary'.

In the end, though, unless a government aspires to do very little (like Lord Salisbury), it needs to set some clear goals. To use Delivery Unit jargon, it needs to make clear 'what success looks like'. This enables a government to make a case in terms people can understand and also, just as importantly, enables it to be held to account. That definition of success does not need to be called a target – there are plenty of other words available, such as 'goal' or 'objective' – but in practical terms that is what it is. If the definition of success is not clear, many of the same people who are critical of targets would accuse the government of lacking clarity or, worse, of obfuscation. In any case, governments set targets all the time, even when they claim to oppose the idea. Within months of being elected, and having specifically criticized targets, David Cameron said he wanted Britain to be a top-five destination for tourists. You don't have to call that a target, but that is what it is. After three years in office, he didn't feel the need to pretend any more. In Prime Minister's Questions (on health service performance) on 6 November 2013, he responded to the critics by asserting 'we are hitting our targets'.

Since governments, whether they like them or not, are going to set targets, we might as well have the real debate about them, which is about how to use them – and how to avoid the pitfalls (about which I learned a lot in the Blair years). Consistent with the emphasis on priorities, the first point to make about targets is that you don't want too many of them. As we have seen, Najib Razak in Malaysia chose six areas. Dalton McGuinty, the education premier of Ontario, set three for his education system – improved performance in literacy, numeracy and graduation rates; narrowed gaps between disadvantaged groups and the rest; and improved public confidence in the system. Specific, measurable, ambitious, realistic and timebound (SMART, the famous acronym) targets were set for each. It is hard to argue that for an education system in a province of over 13 million people that is too many. Similarly, in the Punjab education and health reforms, the chief minister has set a handful of measurable goals. By contrast, in the early phase of the Blair administration, there _were_ too many, and it did cause confusion.
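Before turning to how targets go wrong, the SMART discipline can be made concrete. The sketch below is illustrative only – a hypothetical `Target` structure, not anything the Delivery Unit itself used – but it shows how each letter of the acronym becomes a field that can be checked before a target is announced. The dates and the direction-of-improvement check are assumptions made for the sake of the example.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Target:
    """One SMART target: a hypothetical structure for illustration only."""
    metric: str        # Specific: what exactly is measured
    unit: str          # Measurable: the unit the metric is reported in
    baseline: float    # performance at the point the target is set
    goal: float        # Ambitious (but Realistic): the level to be reached
    set_on: date       # when the target was set
    deadline: date     # Timebound: the date by which the goal must be met

    def design_flaws(self) -> list[str]:
        """Return obvious design flaws; an empty list means none were detected.

        Assumes a higher number is better; a crime or waiting-time target
        would invert the first check.
        """
        flaws = []
        if self.goal <= self.baseline:
            flaws.append("goal is no better than the baseline")
        if self.deadline <= self.set_on:
            flaws.append("deadline is not later than the date the target was set")
        return flaws

# The literacy example discussed later in this chapter: 63 per cent at the
# standard in 1997, 80 per cent five years later (exact dates assumed).
literacy = Target(
    metric="eleven-year-olds achieving the literacy standard",
    unit="per cent",
    baseline=63.0,
    goal=80.0,
    set_on=date(1997, 9, 1),
    deadline=date(2002, 9, 1),
)
print(literacy.design_flaws())   # -> []
```

Trivial as the checks look, several of the badly set targets described next would have failed them.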
Moreover, some of those targets exemplified another pitfall. They were badly set – unmeasurable, for example, or broad, uninformed guesses plucked out of the air as part of a political deal on the spending review. On one occasion, when I proposed that we might get from 80 to 82 per cent of eleven-year-olds achieving the literacy standard, I was told that was too detailed and the percentage had to end in a nought or a five. (We set it at 85 per cent and hit 82.) Other targets had been set without thinking about how the data would be collected. The worst example was on road congestion, where we had an incomprehensible target measured by a bizarre process. (See p. 133 below for an explanation.)

David Cameron may have started out as a target sceptic, but he set a very high profile 'net immigration' target – the idea was to ensure that in any given year more people emigrated than immigrated. If achieved, the prime minister thought, it would prove that he had immigration, a major political issue, under control. The problem with the target is that achieving it depends on much that the PM cannot influence. He has no influence on how many people choose to emigrate and minimal influence on immigration from within the European Union. All he can influence, therefore, is immigration from elsewhere, but however much you tighten up here (at whatever cost), the other factors still dominate the outcomes. As the _Evening Standard_ front page put it when the 2013 net migration figure revealed 212,000 more entering Britain than leaving it, 'PM's IMMIGRATION PLEDGE IN TATTERS'. Set the wrong target and, even if you make progress, as Cameron has, you risk political defeat.

The ministers in the Blair government would sometimes wobble on the subject of targets, perhaps confusing the problem of bad targets with the wider question of targets in general. Occasionally I'd think that even the prime minister himself was wobbling – but in the end he always came through. He says exactly this in his memoirs:

> ... in domestic policy, changing public service systems inevitably meant getting into the details of delivery and performance management in a radically more granular way. Increasingly, prime ministers are like CEOs or chairmen of major companies. They have to set a policy direction; they have to see it is followed; they have to get data on whether it is; they have to measure outcomes.

Any political leader who ignores this advice does so at his or her peril. Crucially, Blair adds:

> There was... a lot of exaggerated nonsense about targets... some criticism was valid. Targets can be too numerous... Sometimes different targets conflict... Sometimes they are too prescriptive...
>
> However, as I used to say to ministers and civil servants, if that is true, cut them down to their essentials, unwind any conflicts, grant a sensible discretion on how they should be met – but don't think for an instant that in any other walk of life you would spend these sums of money without demanding a measurable output.

In a nutshell, that is all there is to it.

For those who want to delve more deeply into the details – not everyone does – there are crucial decisions to make about the nature of the target itself. You can set a floor target – a standard below which performance is deemed unacceptable, as Michael Gove, the Secretary of State for Education from 2010 to 2014 in the Cameron government, did for school performance in England. You can set a percentage target – 90 per cent of trains to arrive on time, for instance.
You can set a 100 per cent target too: this has the attraction of being able to tell people the impact will be universal; but it has the disadvantage that the last 1 or 2 per cent of anything often represents exceptional cases which either end up being very expensive to change or turn out to be genuine exceptions to the rule. We found this with the target for Accident & Emergency departments that no one should wait more than four hours to be seen and treated and either sent home or admitted to hospital. This was clear and plain for the vast majority, but for the exceptional spinal injury where the patient could not be moved, not appropriate. In these cases we always insisted that the target itself wasn't the point; it was the service standard the target represented that mattered. We agreed that there would and should be clinical exceptions – the professionals told us these would never be more than 1 or 2 per cent. The target was met on time in December 2004, and rarely missed for almost a decade until the NHS was, absurdly, allowed to stop paying attention.

RULE 4 SET A SMALL NUMBER OF WELL-DESIGNED TARGETS (but don't call them targets if you don't want to!)

##### BENCHMARKING

Once the priority is settled and the kind of target established, there is science in target-setting. Of course there will (hopefully) always be 'unreasonable' politicians who set 'unreasonable' targets, but for them too, being informed by the science can only help. The best way into it is through benchmarking, of which, as aficionados will know, there are five types (see Table 1). Each of the five, or a combination of them, can be used.

Often governments, systems or people want to exceed past performance; benchmarking against history provides real insight, which is why athletes pay so much attention to their personal best. This is the first type of benchmarking. I remember discovering once, when we were debating road congestion in London and what to do about it, that when Lord Salisbury was prime minister, he was able to get from King's Cross (where his train from Hatfield arrived) to the Foreign Office (where he liked to work) in seventeen minutes... in a coach and four. Yet 100 years later, with all the improvements in transport, that was completely impossible except perhaps for a prime minister with motorcycle outriders. Ironic really, because having rushed to the office, Salisbury, as we've seen, aspired to do 'as little as possible' when he got there.

Meanwhile, countries could set targets to be as good at education as some other countries in the international comparisons that are now published regularly. For example, in the Goals 2000 Act of 1994, the US set out to become the best in the world at maths and science by the year 2000, but apart from asserting this in legislation, they did nothing about it and failed to achieve it. This is benchmarking against the world, the second type.

Benchmarking... |
---|---
1. against history | What levels of performance have we achieved in the past?
2. against the world | What levels of performance are achieved in systems like this elsewhere in the world?
3. against other similar systems | How do we compare to other systems like ours (e.g. among Australian provinces or German _länder_)?
4. within the system | What levels of performance are achieved by the best-performing units in the system (e.g. a hospital, a school, a police force)?
5. against organizations that are altogether different but have some similar relevant functions | What can we learn from them about how they do that?

Table 1

To take another example, US states are able to compare their performance in education by looking at the long run of data collected every two years through the National Assessment of Educational Progress. Massachusetts usually comes top and Louisiana bottom. A state could set a target to be as good as the top 10 per cent or the top 25 per cent of states by a certain date, the former being much more ambitious than the latter. This would be using the third type of benchmarking.

Benchmarking within a system, the fourth type, is often the most useful of all. There are forty-three police forces in England and Wales, and comparing the different levels of crime among them provides real insight into what the system as a whole might achieve. Suppose every police force achieved the levels of crime reduction achieved by the top ten, or just the top half. Targets based on these kinds of assumptions are not only soundly based; they are hard to argue against. The Manchester constabulary could argue that being asked to match the performance of Devon and Cornwall would be unfair because one is urban and the other rural, but it would be hard for it to justify not matching the top half of all forces in England or the best five large urban forces. Similarly, each of the thirty-six districts in Punjab is able to see monthly how its performance compares with the other thirty-five.

Sometimes best-in-class performance at a level of specificity is not to be found in the expected places, which is where the fifth type of benchmarking applies. The best example I came across was a top-brand hotel chain which wanted to further improve its arrival and check-in process. They broke this down into sixteen steps and then sought best practice for each one. So, who was best in the world at opening the door of the taxi or limousine and welcoming the guest by name? Not another hotel chain, as it happens, but the people who run the Oscars ceremony. They are brilliant at it and a lot could be learned from them.

Another way to approach target-setting is to estimate the effects of different policy actions by making good, plausible, informed guesses of what impact they might have. Assemble a dozen or more people with a stake in the system and work on the estimates, perhaps both a cautious one and an ambitious one, and see where they come out. I've been involved in workshops along these lines in Pakistan, America, Russia and the UK and always found them helpful. (See also chapter 4, on Data and Trajectories.) Table 2 is an example worked up for the National Literacy Strategy in England. The starting point was 63 per cent achieving high standards in literacy; the goal was 80 per cent five years later. We estimated the gain year-on-year from five key aspects of the strategy. Our predictions made in 1997 came out close to reality for the years 1998 to 2000, but turned out to be too optimistic for 2001.

Table 2: The UK National Literacy Strategy, 1997–2002

RULE 5 APPLY THE SCIENCE TO TARGET-SETTING (but don't depend on it)

In the end, precisely where the target is set is still a judgement about how ambitious to be, but a target based on this combination of benchmarking and policy impact analysis is likely to be better informed.
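To make the fourth type of benchmarking concrete, here is a minimal sketch of the arithmetic, with invented force names and figures rather than real crime data. The approach is the one described above: derive the benchmark from the top-performing half of units, then ask what the system as a whole would achieve if every unit matched it.

```python
from statistics import median

# Hypothetical annual recorded crimes per 1,000 population for a handful of
# forces; a real analysis would use all forty-three forces in England and Wales.
crime_rates = {
    "Force A": 112.0, "Force B": 95.5, "Force C": 88.0,
    "Force D": 79.5, "Force E": 74.0, "Force F": 61.5,
}

# Benchmark: the median rate of the top-performing half (lower is better here).
top_half = sorted(crime_rates.values())[: len(crime_rates) // 2]
benchmark = median(top_half)

# Each unit's target: close the gap to the benchmark; units already at or
# below the benchmark simply hold their current performance.
targets = {force: min(rate, benchmark) for force, rate in crime_rates.items()}

current_total = sum(crime_rates.values())
target_total = sum(targets.values())
print(f"Benchmark (median of top half): {benchmark:.1f} per 1,000")
print(f"System-wide reduction if every force matched it: "
      f"{100 * (current_total - target_total) / current_total:.1f}%")
```

A target derived this way is hard to dismiss as unrealistic: by construction, half the units in the system already achieve it.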
As a codicil, let me come back to Blair's calculated bounce on illegal asylum applications mentioned on page 8 above. No one had planned a new target, not even Blair, but a day or so before he made his new promise on _Newsnight_, he had seen the outcome of some benchmarking and impact analysis we had done on this theme, which showed halving over six months to be within the realms of the possible. My guess is that this analysis had lodged itself somewhere in the prime ministerial brain; so although he was guessing, at least it was an informed guess. The bureaucracy was furious of course, but the target was achieved.

##### UNINTENDED CONSEQUENCES

A further question about targets relates to perverse or unintended consequences. These are real risks, and all the time you have to keep reminding yourself and all those involved that there is a moral purpose behind every target (or there should be) – and it's the moral purpose that really matters, not the target. Lose sight of the moral purpose and the edifice begins to crumble. So if a given target has perverse or unintended consequences which might defeat the wider moral purpose, it's a genuine problem. For some, this becomes an argument for not having targets at all, but then you risk all the problems outlined at the start of this chapter – lack of clarity about priorities, lack of clear definitions of success and ultimately lack of accountability.

There is one approach that avoids the pitfalls of targets but maintains accountability, and that is the one pursued by Mayor Rudolph Giuliani in New York City and carried on by his successor. This involves choosing priorities, ensuring good, close-to-real-time data on key indicators such as major types of crime, publishing the data regularly and using rigorous benchmarking among, for example, precincts in the New York City Police Department, to drive up performance. All the evidence is that this works – and it works because it has all the elements of the target-setting approach, but without actually specifying a goal other than, for example again, continuous crime reduction. However, this approach is as likely to have perverse or unintended consequences as one that formally sets targets, which still leaves that problem to be resolved.

There are only three things you can do. One is to make sure the target, or in the Giuliani approach, the chosen metric, is well designed. One of the health targets we pursued in the Delivery Unit days was clearly flawed, and the flaw had consequences. The goal was that you should not have to wait more than forty-eight hours to see your GP – the moral purpose was to make it easier for people to see their doctor quickly and to end the common complaint at the time that, in some cases, it was very hard to make an appointment. Some GPs chose to interpret the new target narrowly, so that you could _only_ get an appointment within the next forty-eight hours, which meant that if you called to make a routine appointment, say for your day off work the following week, you'd be told that was not possible. Interpreted in this way, the target caused as many problems as it solved; a case of a badly designed target (and some block-headed GPs) leading to a perverse consequence.

The second thing you can do is work with the key people in a given sector to try to anticipate as many of the perverse or unintended consequences as possible, consider these in the design of the target and then, as the plan unfolds, check whether they happen or not.
When we persuaded the police in England's big cities to focus on cutting muggings – there was an epidemic of them in 2001–2 – they reluctantly agreed, but said that as they moved police officers from other duties onto this agenda, other crime types would get worse. Their assumption was, at least implicitly, that they were working at maximum efficiency, so prioritizing one crime would adversely affect others. There was no reason to believe this was true. After all, at the time, a police officer in London made on average only five arrests a year. In any case, we agreed to check. Nothing of the sort happened. The good news was that in the places where mugging fell fastest, other crime types fell too. The Chief of the Metropolitan Police at the time commented that we had achieved more for collaborative working in those few weeks than had been managed in the previous twenty years. Good policing is good policing, so if it is put in place for one crime, it is likely to bring benefits in relation to others too.

We saw this pattern again and again. For example, if you train primary teachers to be better at teaching maths, they are quite likely to get better at teaching other subjects, simply because they are learning to teach better. That is not to say there are never perverse consequences – the key, though, is to explode the urban myths and take the battle into the media. You have to remember that any time you want to bring change to a major public service, those who don't want a given target will argue that it will have perverse or unintended consequences – most of which will never occur. If you don't battle this out on the airwaves, your critics will fill the vacuum. Where the perverse or unintended consequences do happen, you have a choice: adjust the policy or decide that the price is worth paying. Better to be a prophet armed than a prophet unarmed – in this case armed with facts.

The third and final thing you can and should do is to review periodically the data-collection process to check for abuses or unintended negative consequences. In the Punjab Education Roadmap we have employed independent people to review the effectiveness of the data-collection process and then the accuracy of the data. We found that in some cases the people checking the headcount of students – crucial for measuring progress on student attendance – were going by the attendance register rather than actually counting the children in classrooms, yet we knew that attendance registers were often inaccurate. While the review confirmed that most of the data we relied on was acceptable and that these abuses were relatively few, we were now able to correct the problem. Similarly, on the four-hour A&E wait target, there were stories of ambulances waiting outside an A&E department with an injured patient because the clock for the target technically started ticking only when the patient came through the door. This kind of thing did happen, but we knew from our data checks that it didn't happen very often. One wonders at the professional ethics of the staff involved in such an abuse – a classic case of losing sight of the moral purpose.

RULE 6 CHECK FOR PERVERSE OR UNINTENDED CONSEQUENCES (they may not happen)
A simple but essential point needs to be made briefly here; it will emerge again in chapter 4. Whatever dataset you choose to base your target on, make sure there is an alternative dataset, ideally beyond the reach of government's and public servants' control, that covers broadly the same theme. Use survey data as well as recorded crime figures to understand crime trends, for example. This will provide triangulation – confirmation or otherwise from another source – that movements in your target dataset are real and not manipulated. It will also provide you with a stronger foundation when you need to communicate your success (or failure).

##### CONSULTATION

Given that targets help define priorities, and priorities, by definition, are important, a further question that arises is how much to consult on them. After all, there is no point in a target that is established by government – such as the US Goals 2000 Act – but not taken up by the system. In Malaysia, the government went out of its way to consult the Rakyat (as the citizens of Malaysia are known). They did major surveys. Then they hired an exhibition hall in downtown Kuala Lumpur and invited the Rakyat, the opposition and the media – including the country's vibrant blogger community – to come and comment on the targets and the plans for implementing them. This was bold and imaginative. (It also put pressure on those producing the plans to do a good job.)

Our process in the Blair administration was much less open than this, although the original delivery targets were set in the immediate aftermath of an election campaign in which many of the issues to which they related had been debated vigorously. Over time, we became much more consultative about targets, encouraging – even requiring – the departments responsible to consult their stakeholders.

However, standard government consultation exercises – unlike the bolder and more imaginative Malaysian approach – while essential, have their limitations. Yes, they are necessary, in terms of natural justice in a democracy. And yes, it makes sense to consult those who will be involved in implementation; they may well spot practical flaws in what is proposed, and with luck they will take greater ownership of the targets once they are set. Also, at the very least, dividing lines become clearer and the nature of future public debate is laid bare. But there are limitations in the nature of the exercise itself. The stakeholders with whom a government department regularly interacts, and therefore consults on targets (or anything else), inevitably represent the producer interest and/or the most powerful lobby groups. These groups tend either to have a vested interest in the status quo or to adopt standard positions that are regularly repeated and well known. They are therefore likely to err on the side of caution rather than ambition when it comes to setting a target because, as the producer interest, they'll have significant responsibility for implementation. In short, they are likely to aim to trim whatever they regard as unreasonable. Meanwhile, the voice of the potential beneficiaries – call them citizens or customers or both – is less likely to be heard, because they are less likely to be organized and often include the powerless and voiceless. This is why from time to time politicians claim to be representing 'the silent majority' or 'the little platoons'; this may sound pretentious or arrogant out of some mouths, but it is often a fair point.

Thus, when the chief minister of Punjab decided he wanted an Education Roadmap and a serious approach to delivery, he didn't consult on the targets at all. He simply asserted that he wanted 100 per cent enrolment, improved attendance and much higher quality. Our team then turned these aspirations into real numbers.
Needless to say, some of the officials thought the level of ambition bordered on madness, but there was never any doubt that the goals were ones that parents across Punjab would have wanted. Their only doubt at the time would have been about whether anything would actually get done – after all, they had heard empty promises many times before.

This leads to the final point about targets, which is this: in big systems, such as the school system in Punjab or the National Health Service in England, the targets need to be cascaded out. How did we do that in Punjab? We took the provincial-level targets and then asked ourselves what each of the thirty-six districts would have to achieve for the province as a whole to reach its targets. We had some (rudimentary but good enough) data from the past which enabled us to set different starting points for districts according to their social composition, and therefore the degree of challenge they faced. Led by the outstanding Secretary – Schools of the time, Aslam Kamboh, we then summoned the district leaders to Lahore in April 2011 and shared their targets with them. There was some bewilderment; few had even heard of the Roadmap at that point, and fewer still believed that any attempt at reform would work. At that time, even the secretary was of the view that this was just another of the countless donor initiatives that came and went. It is worth noting that some powerful organizations in the province thought the targets were far too challenging. One leading representative of the World Bank accused us at the time of being too ambitious and going too fast. We readily agreed we were guilty on both counts. By 2013, though, the province and almost all of the districts had met or exceeded the targets set that day.

In short, we could not have been more top-down if we had tried. 'Top-down' is often hurled as a term of abuse, but there are circumstances when it is the best approach – and a massively underperforming school system is a case in point. It is also often said that top-down can't work, which is simply inaccurate, as the Punjab education reform and many other successful changes around the world make clear. However, it is by no means the only approach to cascading out targets, nor the best in all circumstances. An alternative is to consult the frontline units – a hospital, a police force or whatever – and ask them to state what they think they might achieve. In my experience, often (but not always) this will result in less ambition, but on the plus side it ensures a greater sense of ownership.

The third way is to have a negotiating process. This is what we did with the original National Literacy Strategy in 1997–8. The national target had already been set and made public. Department for Education statisticians then produced a range for each of the 150 local authorities, within which each would have to come out for us to achieve the national target. In the days before ubiquitous email, these ranges were handed in brown envelopes to representatives of the local authorities at a conference in London, and they were asked to come back to us a week or two later with their view on what they could achieve. All but one readily came back with targets within the range we had proposed.* Then each year, as the actual results came in, we engaged in a data-informed dialogue with them about progress towards the target.
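The arithmetic behind such a cascade is worth making concrete. What follows is a minimal sketch – a toy model on invented numbers, not the Department for Education statisticians' actual method, which took account of far more factors. It asks every local authority to close the same fraction of the gap between its baseline and a notional ceiling, choosing that fraction so the pupil-weighted average lands exactly on the national target; the `ceiling`, `band` and authority figures are all assumptions for illustration:

```python
# A toy cascade: national target -> negotiating range per local authority.
# All names and numbers are illustrative, not the real 1997-8 figures.

def cascade(baselines, weights, national_target, ceiling=95.0, band=2.0):
    """Ask every authority to close the same fraction of its gap to the
    ceiling, choosing that fraction so the weighted mean hits the target."""
    current = sum(weights[a] * baselines[a] for a in baselines)
    headroom = sum(weights[a] * (ceiling - baselines[a]) for a in baselines)
    k = (national_target - current) / headroom  # fraction of the gap to close
    return {
        a: (round(b + k * (ceiling - b) - band, 1),   # bottom of the range
            round(b + k * (ceiling - b) + band, 1))   # top of the range
        for a, b in baselines.items()
    }

# Three authorities instead of 150; weights are shares of the pupil population.
ranges = cascade(
    baselines={"Authority A": 55.0, "Authority B": 63.0, "Authority C": 72.0},
    weights={"Authority A": 0.3, "Authority B": 0.4, "Authority C": 0.3},
    national_target=80.0,
)
print(ranges)  # lower starting points receive proportionally bigger asks
```

The detail of any such model matters less than its property: every unit is stretched, the furthest behind are stretched most, and nothing is asked of any unit that the system's own arithmetic does not require – which is also what makes the resulting ranges so hard to argue against.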
The targets were seen as exceptionally ambitious, yet within two years even the lowest-performing local authority had exceeded what had been the national average at the outset. Further evidence of the power of ambition itself to drive progress.

RULE 7 CONSULT WITHOUT CONCEDING ON AMBITION (opposition is inevitable)

##### MORAL PURPOSE

The art of prediction or forecasting is a subtle one, and when you set a target, make no mistake, that is what you are doing. In a political system this has its risks. David Blunkett famously said back in 1998 that if the government didn't meet its literacy target for 2002, his 'head was on the block'. When I said to him I thought that was a rash statement, he replied, with some edge, 'It is important that everyone takes responsibility, including me. And, by the way, if I go down, you're coming with me.' In other words, reputations are at risk. But David's point was right; if you want thousands of public servants to take responsibility for their part in achieving a goal, you need to make it clear that you take your responsibility seriously too.

Nate Silver, in his magisterial survey of 'the art and science of prediction', _The Signal and the Noise_, urges us to think probabilistically when we forecast: rather than 'it's going to rain tomorrow', 'there is a 90 per cent chance of rain tomorrow'. This is, of course, the right way to think analytically when a target is being discussed. He also makes another crucial point – that it's dangerous to depend purely on the data. His discussion of major league baseball takes up the vigorous debate about whether data analytics or the judgement of scouts gave better predictions of future success. He concludes firmly that a combination of the two is most effective. Statistical inferences are much stronger when backed up by theory and judgement, or at least some deeper thinking about root causes. As the economist Jan Hatzius (whom Silver quotes) says, you need a story. Data on its own is not enough.

As if to confirm this, 2013 saw a young Norwegian, Magnus Carlsen, crowned world chess champion after he comfortably defeated the Indian Viswanathan Anand. Carlsen beats computers too, even though, as long ago as 1997, a computer, Deep Blue, had defeated his esteemed predecessor as world champion, Garry Kasparov. Given the multiple increases in computer power since then (not to mention the many more chess games fed into the analysis), how come, all these years later, the best computers can't beat the Norwegian upstart? It seems the answer is that computer analysis has helped Magnus Carlsen too – he can absorb all that massive insight from a computer and then apply human judgement on top of it. The computer, of course, cannot do the latter. The eagle of computer analysis soars to a great height, and then the wren of human judgement, sitting on the eagle's back, can fly that little bit higher.

Having a story is important not just to ensure the best possible prediction, but because for a target to have real impact on the ground it has to be motivational; it has to have that moral purpose. Without this, however good the analysis, the benchmarking and the probabilistic thinking, none of it will cut ice. Someone – ideally the leader – has to tell a good story.
Imagine Henry V on the eve of the battle of Agincourt making his speech, 'We happy few, we band of brothers...', and going on to argue that, with the odds stacked against us, we have a 10 per cent chance of victory, or that in seven of the past ten battles such as this, the army on home soil has won. This is not the speech Shakespeare wrote. Instead, he had Henry V tell a powerful story – that the events of the day ahead will become legendary.

More prosaically, when I led the Prime Minister's Delivery Unit, we had to tell a story about each target and about its moral purpose. The way I explained it to Britain's top civil servants in 2002 – in language that I admit fell well short of Shakespearean – was this:

> [Delivery] demands consistent focus on the targets and the data... But the targets, however good, and the data, however clear, are only imperfect representations of something even more important: that is, the real world outcomes that matter to citizens.

Yes, it's important that no one waits more than four hours to be seen and treated in an Accident & Emergency Department, but that is not the point – the point is that patients should get high-quality treatment rapidly (and go home thinking that the service is a good use of taxpayers' money). Similarly, a certain percentage passing a literacy test at age eleven is worthwhile too – but it's not the point. The point is that children should leave primary education able to read and write well because those skills are essential in the modern world – and because being able to do so will change their lives.

So the last rule about targets is to come back constantly to the moral purpose. Even Lord Salisbury, with whom the chapter began, had a clear priority and an implicit target – that Britain's dominant place in the world should be maintained. (The British navy of the time had a target too – that it should be bigger than the next two largest navies in the world combined.) This may not seem like a moral purpose all these years later, but no doubt at the time that was how it was perceived (at least in Britain). Sadly for him (but not so sadly for many others), rather a lot did happen in the twentieth century – in spite of Lord Salisbury – and his target was not achieved.

RULE 8 TARGETS ARE IMPORTANT BUT NOT THE POINT (state and restate the story about the moral purpose)

## 2

## Organization

So now you know what you want to do. That's a good start, of course. But do you know how? To repeat: 'Don't tell me what! I know what. Tell me how!' How indeed...

There will need to be a strategy and some planning; we will come to those in the next two chapters. First you need an organization capable of delivering the priorities you've set. It is hard to exaggerate the importance of thinking this through, and this chapter is intended to make that possible. For a newly elected government in particular, there is so much to learn, and often the most talented politicians find that the campaigning skills which propelled them to government are absolutely not the skills they need now that they find themselves governing.

In May 1997 in the Blair administration, there was a huge sense of euphoria after a spectacular, once-in-a-generation election victory, but the degree of ignorance once we were in government was vast. In No. 10 they were much more focused on 'the message' than on getting things done, because for the years in opposition 'the message' was all there was. The message still matters in government, of course, but it is by no means everything. Here's my account of those first days:
> The first few weeks were utterly chaotic but incredibly productive. Throughout there was an air of unreality and above all a confidence... that for a while defied gravity... Both the confidence and chaos were in evidence on the Thursday of that first week. I bumped into [Conor] Ryan in the foyer of the department and he asked me if I was ready for that morning's meeting with the PM at No. 10. 'What meeting?' I asked. Shortly afterwards I found myself in the Cabinet Room, gulping for air as Blair asked whether we were sure we would meet the 80% target [for literacy] and everyone went silent and looked at me.

From his more exalted position, Blair was discovering the harsh realities too, as his own account makes clear:

> The instincts were by and large spot on. The knowledge, the experience, the in-depth understanding that grappling over time induces – these qualities were missing. There was a political confidence, even swagger about us; but it was born of our popularity with the country, not our fitness to change it...

Since those years, I've watched governments – the Sarkozy administration in France, the Cameron government in Britain, the recently elected Nawaz Sharif government in Pakistan – go through the learning process of answering the 'How?' question... as waves of hope crash on unforgiving rocks. Politics is an unforgiving business, and no one seems to think that a PM, a president or a minister needs to learn their way into the job, whereas in fact they are just like everyone else. And when you ask what it takes to become expert in a highly skilled role, the answer is surprisingly clear – it takes 10,000 hours of deliberate practice. This means not just 10,000 hours of doing something, but systematically working on the skills required in a conscious way. The starting point, therefore, is self-knowledge – being able to admit you are not an expert already (which rules out those political leaders, no small number, who suffer from hubris).

Work this out for a PM who has this self-knowledge: 300 days per year at ten hours a day gives you 3,000 hours of work a year. He or she may do more than that, but take it as a starting point. On this basis, if things went well, you would be expert roughly as you moved from your third to your fourth year in power. Remarkably, that is precisely when Blair himself says he became fully competent in the 'How?' skills of government. Reflecting on 1998, just a year after becoming PM, he says, '... something was missing, some dimension barely glimpsed... Now of course, I know what was wrong. But then I was seeing as through a cloud.' Another two years later, with maybe the 10,000 hours clocked up, the cloud had cleared: 'For me, the process [of putting together the ten-year plan for reform of the health service] was extraordinarily revealing and educative... I stopped thinking of it as a gamble... and started realising it was a clear mission whose challenge lay in... how it was carried through.'

People had been telling him from the beginning that he'd need to get a grip on the civil service machine, but at first he hadn't had the depth of knowledge to recognize how important that was. He had mastered politics, but not government – until the end of that third year in office. With many terms of office lasting three, four or five years, depending on the constitution of the country, it becomes abundantly clear why so many leaders find their first terms so frustrating. And if they don't get another term, all too often they look back on their time in office much as Viktor Chernomyrdin did.
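The arithmetic behind that third-to-fourth-year estimate is worth setting out explicitly, using only the chapter's own assumptions of 300 working days a year at ten hours a day:

$$
\frac{10{,}000\ \text{hours}}{300\ \text{days/year} \times 10\ \text{hours/day}} = \frac{10{,}000\ \text{hours}}{3{,}000\ \text{hours/year}} \approx 3.3\ \text{years}
$$

Hence expertise arriving somewhere between the third and fourth year – and only for a leader who practises deliberately, rather than merely putting in the hours.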
A newly elected leader without experience of government should be thinking: Who am I going to learn from? How can I compress the necessary learning into far fewer than 10,000 hours? The answers come in part from building a team, which we'll come to later in the chapter.

##### DELIVERY CAPACITY

A good place to start, however, is to ask right away: Is the government machine capable of delivering our agenda? We asked this in the education department in 1997. The answer, we knew, was 'No'. We decided to set up the Standards and Effectiveness Unit – my first taste of dealing with implementation – precisely in response to the gaps in delivery capacity we had seen from outside before the election. We brought in leaders from the education field to work alongside civil servants, we employed experienced local government officials to strengthen the dialogue with their former colleagues across the country, we brought statisticians and researchers into the heart of the policy process, we had plans and checked progress... and (although by no means perfectly) it worked!

There was a wonderful moment on my first day when I was negotiating my terms of employment. I argued that I would like my compensation to be tied to the performance of eleven-year-olds in literacy and numeracy tests. Eyebrows were raised to the skies. I'm afraid that would be impossible, came the reply. Accountability for performance in the civil service? Certainly not!

At the beginning of his second term, Blair asked me to set up the Delivery Unit, again as a direct response to an evident capacity gap at the centre of government. In Ontario, soon after he was elected, Dalton McGuinty, learning from our experience, set up a Literacy and Numeracy Secretariat in his education department to drive teacher development – until then the teachers felt the civil servants were not capable of a deep enough dialogue about teaching and learning. In Malaysia, Najib Razak made the training of his top civil servants a key early task because he knew he would be unable to deliver unless he did. At the same time, he established Pemandu, modelled on the Delivery Unit, to drive his agenda forward. Each of these leaders realized sooner or later that, too often, the civil servants they inherit suffer from what has been called 'strategic atrophy', under which 'established assumptions' inhibit the formulation of 'new visions' and 'discount anything that challenges' the status quo.

On the basis of experiences such as these, some colleagues and I developed a review process we call a Delivery Capacity Review, designed explicitly to answer the 'How?' question. The review has a rubric divided into five sections and fifteen modules. The five headings are:

* Develop a foundation for delivery
* Understand the delivery challenge
* Plan for delivery
* Drive delivery
* Create an irreversible delivery culture.*

The rubric simply asks a set of questions under each of the five sections, such as (under 'Plan for delivery'): Do plans track relevant performance metrics, leading indicators and implementation indicators for each intervention? That is hardly a question designed to set the pulse racing, but the point of the rubric is not to emulate a good thriller; it is to be thorough – to make sure, in the classic phrase, that no stone is left unturned in checking whether a government machine or an individual department is ready to deliver. The rubric then offers best-case and worst-case options to help those responsible answer the questions for themselves.
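In data terms the rubric's shape is simple, which is part of its power. Here is a minimal sketch – the five section headings are the real ones, but the example module text is illustrative only, and the four-point ratings are the ones explained just below:

```python
# A minimal sketch of the Delivery Capacity Review rubric's shape.
# Section headings are from the text; module contents are illustrative only.
from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    RED = 1          # not remotely ready
    AMBER_RED = 2
    AMBER_GREEN = 3
    GREEN = 4        # ready to go

@dataclass
class Module:
    question: str                  # the probing question to be answered
    best_case: str                 # what Green looks like
    worst_case: str                # what Red looks like
    rating: Rating | None = None   # judged by the team, then by the reviewers

# Five sections, fifteen modules in all (only one spelled out here).
rubric: dict[str, list[Module]] = {
    "Develop a foundation for delivery": [],
    "Understand the delivery challenge": [],
    "Plan for delivery": [
        Module(
            question=("Do plans track relevant performance metrics, leading "
                      "indicators and implementation indicators for each "
                      "intervention?"),
            best_case="Every intervention has metrics, owners and trajectories.",
            worst_case="No metrics beyond the headline target.",
        ),
    ],
    "Drive delivery": [],
    "Create an irreversible delivery culture": [],
}
```

The output of a review is no more than this structure with every rating filled in twice over – once by the management team, once by the review team – and the chart on the wall is simply the coloured summary of it.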
Again, thorough rather than exciting – but then a lot of government is about just that, thoroughness, which is why Mario Cuomo claimed to campaign in poetry but govern in prose.

On each of the fifteen modules, the team responsible can score itself from Red (not remotely ready) to Green (ready to go), with Amber-Red and Amber-Green in between. This may be the moment to introduce the four-point scale and traffic lights properly. A _Guardian_ profile of me once said, 'He must dream in four-point scales.' This was meant to be a joke, but the sad truth is that I have dreamt about four-point scales. In the Delivery Unit, we took the three colours of a traffic light and split the middle one because we could see that, unless we did, people making judgements would too easily opt for the middle. With the four-point scale, we all had to decide whether something was basically green or basically red – and, since these colours then drove follow-up action, that was important. In the case of the Delivery Capacity Review, you end up with a chart of your system which you can put on the wall – it may be a sea of red or (more rarely) a field of green, or something in between. Either way, it will tell you exactly where your delivery capacity challenges are so that you can begin to address them (see Table 3).

There is one problem with almost all review processes in government – they take far too long and are therefore far too slow to have an impact. Often they start with someone setting up a committee or a task force or a working group. It meets periodically, but everyone involved has a day job, so very little gets done between meetings... meanwhile, the real world changes in unexpected ways, which is what always happens with real worlds... and so it goes on. The British Audit Commission, before its sudden demise in 2010, used to spend two years producing its excellent reports. I once asked one of the authors of those reports how long after starting one it took to know 90 per cent of what would be in the final version. 'A month,' he replied. Since then, I've overseen the production of a number of review processes that take a month, tell you 90 per cent of what you need to know, and then drive action.

Capacity Review Summary

Table 3

The Delivery Capacity Review is based on that lesson. It is action-oriented. Following a brief pre-meeting a few weeks in advance, and having read a lot of materials (none of them produced specifically for the review), my team and I were able to do a pretty accurate Delivery Capacity Review for the State Department of Education in Massachusetts or Tennessee inside three days. The key is to make it interactive and to draw on the knowledge, often implicit, that the people who work there already have.

This is how it works. At the start of day one, armed with the knowledge you have from reading the documents, you run a three-hour workshop with the senior management team of the organization – but without the boss, so that people feel more freedom to tell it how it is. You get them to score themselves on the five sections and fifteen modules – Red, Amber-Red, Amber-Green or Green. You create an atmosphere in which people feel it is safe to confront the facts, however brutal they might be. A key aspect of this is to make clear that judging something Red isn't necessarily a comment on any individual's performance. Rather, it is a judgement about the future: are we equipped to deliver the new goals we've taken on?
In the case of Massachusetts, for example, these were the goals they had committed to for the 'Race to the Top' grant they won from the federal government. The debate goes back and forth. Those involved really enjoy it – it's practical, it's sometimes edgy and it's always productive. At some point they (nearly always) tell you that they've never done this kind of thing before and they now wish they had. By the end of the workshop, you have their picture of themselves – usually, incidentally, a very honest one. You, the review team, have also learned a lot.

Newly knowledgeable, you then interview or run focus groups with many stakeholders from every level in the system. You ask them what they think the department at the centre of the system does well, what it does badly and what advice they have about how it could achieve its goals. Once you get beyond the standard rhetorical stances and people relax, they too tell you what they really think – and your picture of where the organization is and how it needs to change becomes ever sharper. A key interview in this part of the process is a good hour alone with the boss – top official, minister or both – of the system. He or she will often, perhaps surprisingly, ask you to focus on the weaknesses, as this helps them drive actions which they know are necessary but haven't yet had the time or courage to take on.

On the penultimate evening (the evening of day two), the review team meets without others present and reaches its own judgements on each of the fifteen modules, debating as it does so what the management team's perspective has been, whether it tallies with what has been learned since, and what weight to give to strongly held views of particular individuals... Someone has to manage time and maintain momentum, otherwise any one of the debates could go on and on. There's always someone who threatens to 'die in the ditch' for a specific viewpoint, but come the end of the meeting they remain alive, and not in a ditch. The conversation may be a couple of hours in total. The review team now has its judgements on each of the fifteen modules on the four-point scale and can compare them with those reached by the management team the day before.

Now for the final morning. The review team and the management team, again without the boss, debate their differences and seek to agree. Usually, the differences are few – maybe on four or five judgements out of the fifteen, and usually only one shade apart. The beauty of this process is that the management team always knows much more about their organization, while the review team always knows much more about what 'good' (or 'bad') looks like in other, similar organizations. Once agreement is reached, as it almost always is (the head of the review is the ultimate tiebreaker), participants can move on to the even more important task of deciding what should be done to address the problems identified. What can be done in one month? Three months? And beyond? As a result of the preceding debates, the actions are easy to identify; they all but fall out of the process. So in this meeting the key is to clarify precisely what each action is, who should be responsible and when it should be done.

Then, in the final act of the Delivery Capacity Review, the two teams together report to the leader: here is the picture of our capacity to deliver the goals we've set, and here is what we suggest should be done about it.
The leader is often stunned by the accuracy of the picture – like a mirror held up to them – and impressed by the action plan. I can still remember the look on the face of the Tennessee Schools Commissioner when he heard the report – challenging, real and, above all, action-oriented.

There is one key piece of sensitivity required before the final act. If there are personnel issues – it might have become clear that one senior manager is part of the problem – they need to be raised separately and confidentially. If the leader is part of the problem, it is more complex still, but you are undoubtedly better off than you were, as you now have an evidence base. If these personnel issues are a major barrier to making progress towards the goals, then they have to be addressed, however uncomfortable that might be.

RULE 9 REVIEW THE CAPACITY OF YOUR SYSTEM TO DELIVER THE AGREED GOALS (and do it quickly)

This process is quick and effective and can be applied to any large public system. Why wouldn't any minister with a sense of purpose ask for this to be done in the first couple of months in office? Will the picture that emerges be perfect? Of course not. Will it be a lot better than the picture formed from rumour and counter-rumour which is the stuff of large bureaucracies? You bet it will.

##### A DELIVERY UNIT

Your priorities and goals are clear, you know how well (or not) your organization is set up to succeed, and you know what you need to do to strengthen its capacity to deliver. Now you need to know which part of the centre of your organization – whether it's a single department or an entire government – will make sure that delivery happens. And in this book, remember, delivery means that citizens actually see and feel the difference, not just that a policy is announced or a law passed. Responsibility for delivery lies with the relevant department – hospitals with health, schools with education and so on – but who at the centre of the system will track progress for the political leader, make sure that focus is sustained, draw attention to problems and help solve them as necessary? Or play the equivalent role within a department, which itself will have numerous sections or branches and perhaps several associated agencies?

A common response to this question is to ask: if the leader appoints good ministers to lead the various departments, why would he or she need such a function, or the equivalent at departmental level? Well, in theory, if all the ministers were highly competent and loyal to the president or prime minister, this might be OK, but ministers are often appointed on the basis of the political faction they represent or for some other reason not directly related to competence. There is nothing wrong with this – it is how politics works – but it does mean that it is unlikely that all ministers will be excellent. In any case, as we've seen, being good at politics is not the same as being good at governing. And, of course, there's still the centre of government to consider.

First, a delivery function there can make sure that small amounts of the leader's time are applied systematically and routinely to the identified priorities. Second, if there is no delivery or implementation function there, then the centre is likely to focus on politics, strategy and policy, and not take implementation seriously enough. In Blair's first term, people from No. 10 used to come and see me in the education department. Always a pleasure. And the question they asked was, 'Have you got any more ideas?'
Of course I had ideas, but, I used to reply, how come you never ask me whether we've implemented the ideas we've already had? Third, a delivery unit can ensure that all the relevant departments and agencies contribute to achieving a government goal, thus overcoming the lack of collaboration between departments or agencies (the silo effect) that is so strong in most democracies. Fourth, a delivery function can become a centre of expertise on delivery and implementation; it can learn lessons which might apply to several departments or to the entire government machine, and which simply can't be learned otherwise. In the twenty-first century, this capacity to learn rapidly and effectively is not just desirable, it is necessary. Indeed, this learning is fundamental to building delivery capacity.

The truth is that in the modern world success is not achieved by the standard model people have in their heads, which looks like Figure 2.

Figure 2

Actually, in politicians' heads it is often more like Figure 3.

Figure 3

(Hence my consistent advice to politicians: policy is 10 per cent and implementation 90 per cent.) But, given the scale of the challenges ambitious governments take on and the pace of change, it's not just a question of the relationship between policy and implementation; it's about the nature of that relationship. And it should look more like Figure 4.

The Implementation Cycle

Figure 4

I didn't realize it at the time, but Blair clearly understood this, as his memoirs make clear:

> We established a Delivery Unit... It was an innovation that was much resisted, but utterly invaluable... It would focus like a laser on an issue, draw up a plan to resolve it, working with the department concerned, and then performance manage it to a solution... often it became clear that the problem was systemic requiring wholesale change to the way a public service worked, rather than a centrally or bureaucratically driven edict.

In most cases where an ambitious goal is set, you cannot know enough at the beginning to be sure of success; you have to learn as you go. The establishment of a delivery function makes that possible, not just in relation to each goal, but across all the goals. In government capacity terms, this is massive.

Some governments – Australia's, for instance – have responded to these demands by setting up a Strategy and Delivery Unit or a Policy and Delivery Unit. I would not say that can't work; I would say that it is less likely to work, for the simple reason that, in political systems such as governments, strategy and policy usually trump implementation and delivery. It is always tempting to pitch into any one of the range of debates going on in government about future strategy – they are absolutely riveting and they are at the core of what politicians love, their essence even. Meanwhile, tracking implementation, checking actions have been taken across a system, exploring datasets for evidence of progress... these tasks, however essential they might be, are (unless you are a sad graph-lover like me) fundamentally dull. For this reason, my advice is to separate delivery from strategy so as to ensure there is a senior person whose sole focus is delivery, as mine was in the original Delivery Unit and as Idris Jala's has been in Pemandu in Malaysia.

RULE 10 SET UP A DELIVERY UNIT (call it what you like, but separate it from strategy and policy)

What should a delivery unit look like? That's the next question.
It should be small in relation to the overall task and focused solely on the priorities. At the beginning of the original Delivery Unit, we debated designing it to track everything. I opposed this because I thought the last thing we needed was a new, big bureaucracy to track a set of old, big bureaucracies. Blair, in any case, didn't give the idea the time of day. It was the start of his second term; he wanted a legacy of reformed public services, and he had learned to prioritize.

How small? In the Louisiana Department of Education, a delivery unit of three or four people drove big changes in outcomes, especially graduation rates from high school. In the original PMDU we had around forty people pursuing twenty or so priorities across four major departments. I had learned from studying previous newly established units at the centre of government that their traditional trajectory went roughly as in Figure 5.

How Units Fail

Figure 5

I chose to do the opposite from the outset, putting a cap on size and budget (in that first year, I returned some of our budget to the Treasury early, saying I didn't need it – the entire institution nearly fell off its collective chair: that had never happened before!) and promising to abolish the unit after three years unless there was strong demand for it from top civil servants and politicians. Forty was enough for us. Pemandu in Malaysia has around 100 people to drive the six NKRAs and an economic reform agenda. Idris Jala, its leader, emphasizes the importance of 'having a small team, lean, to go and deliver'.

One way to look at size is through these total numbers. Another is to consider the leverage ratio: we once worked out that for every pound spent on the Delivery Unit, we influenced roughly £50,000 of public expenditure. Once people saw the value we added, this began to look like a very good deal indeed. It is also a daunting and empowering thought for delivery unit staff.

Focus on the priorities, keep it small and ensure it is well led. The person responsible for the delivery unit needs a distinctive set of characteristics. Get the wrong person and its influence will rapidly wither; get it right and everything else will follow. What are those characteristics? I prepared a list for the chief minister of Punjab, Shahbaz Sharif, setting out the attributes required to lead a delivery unit (see Table 4). There are people with this set of characteristics, but not that many in any given system. The first requirement alone rules out plenty of otherwise qualified people. I was lucky; I had got to know Blair in opposition by working on speeches and policy documents with him and his team. It's much harder for a PM to establish that level of trust once they are in office and surrounded by people jockeying for position and trying to impress.

Najib Razak found the ideal candidate to lead Pemandu in Idris Jala. Idris had had a stellar career at Shell as a global problem-solver before returning to Malaysia. The PM knew him and respected his track record. Idris is a larger-than-life character with tremendous energy and motivational power, but he did not need or seek political credit; he wasn't interested in the politics at all. What he wanted to do was solve Malaysia's problems, and if the PM and cabinet got the credit, that was fine by him. As he put it, 'they must take the credit and our job is to help them to deliver'.
George Noell, who headed the delivery unit in the Louisiana Department of Education, was a professor seconded from Louisiana State University by the then State Education Chief, Paul Pastorek. George was a master of data and education research, and so leapt at the chance to make a big difference in his adopted state. His most notable characteristic was his integrity – he deeply believed in the mission of improving Louisiana's schools – which he brought to bear on relationships in the fractured political world of the state's education system. Some might have disagreed with George; everyone admired him. While his boss, Paul Pastorek, was a risk-taker and master of the cut-throat policy environment of Baton Rouge – at weekends he hunted alligators – George kept the agenda and the data in focus. He insisted on telling Paul the bad news even when Paul would sometimes rather not have heard it!

Characteristic | Reason
---|---
Completely trusted by the PM or CM | Has to be able to represent the PM or CM effectively and be the bearer of bad tidings sometimes.
Determined, hardworking and focused | Without these characteristics, delivery is unthinkable.
Optimistic and confident | Belief is a vital ingredient of success. If the head of the delivery unit doesn't believe that success is possible, why should anyone else?
Good at building relationships | There will be many sensitive conversations to be had, especially when things are going wrong. The delivery unit head needs to be a calm, problem-solving influence.
Happy to be out of the limelight, giving credit to others | If there is delivery in health, the minister of health should get the credit... The delivery unit needs to help others succeed and give them the credit (which is the currency of politics).
A successful track record in business or government | Essential for credibility, especially at the beginning.

Table 4

Note how in each of these cases it's not just the general personal characteristics that matter, but the specific fit to that leader and that political environment. With delivery leaders such as Idris or George, the potential for success is greatly enhanced.

The delivery unit then requires a structure. This too needs to be specific to the context. In the original PMDU, my first thought was to have four teams. The first were the account managers, one for each department we interacted with. They held the relationship, worked with the department on its plans, checked the data, led progress checks and did the behind-the-scenes work for meetings between the PM and ministers. Second, there was a group of problem-solvers who could be flexibly deployed on problems as they arose, rather like a consultancy, which is where some of them were recruited from. Third, there was a team of data analysts, led by the brilliant Tony O'Connor, servicing all the other teams. Fourth, there was a small team of capacity-builders who organized training events for the key officials in departments responsible for delivering the priority outcomes.

It was a good theory, and it worked quite well in practice for a while, but it left me holding too many senior departmental relationships with politicians and top officials. It also exposed my own weaknesses as a manager – I tend to like chaos, often replying 'What's so good about clarity?' when someone demands it. We finally reorganized on a plan produced by my deputy, Peter Thomas, who wanted clarity for himself and everyone else and decided (rightly) that I needed to be pushed.
This structure was built around departments – capacity-building was dropped as a separate function, and account management and problem-solving were combined in teams that related to each relevant department. The number-crunchers kept crunching numbers. The huge gain for me was that each departmental team now had senior leadership that could interact with top people in the departments, so it freed up my time. The PMDU was organized by priority (see Fig. 6) and it worked. Figure 7 shows how we were then able to illustrate clearly how we interacted with departments. The redesign, which had in effect been imposed on me by Peter, worked excellently and was crucial. The 2001–3 model of organization had laid the foundations; his new model unleashed our capacity and was responsible for the tremendous success of the years 2003 to 2005. Idris Jala used this as the starting point for Pemandu, while at the same time drawing on his own experience at Shell.

Prime Minister's Delivery Unit (PMDU)

Figure 6

With the right leadership and structure, the next question is what kind of people to staff the unit with. Given the goals of keeping it small and driving a big impact, it is obvious you need a small number of highly dedicated people, driven by the mission. For the establishment of the Roadmap in Punjab, I eventually recruited a small delivery team of three who worked with the Secretary – Schools to set the ball rolling. At first, there was just one person, Katelyn Donnelly, whose imagination, dedication and persistence, I came to discover, are more than a match for almost any bureaucrat, public or private. Over time, Hiran Embuldeniya and Saad Rizvi spent more and more time in Lahore too. Since I visited once a month for a week, I had to be confident that the team on the ground would be self-organized and self-motivated, and would come to me either in our weekly calls or by email if they had a problem. As the team developed, they were brilliant, incredibly hardworking and wonderful at building relationships in the challenging circumstances of a Pakistani bureaucracy. Later, led by the dedicated Fenton Whelan, they recruited young Pakistani talent from the excellent Lahore University of Management Sciences. These young graduates not only worked tremendously hard, but extended our capacity to work at the district level across the province, since they could go where security worries prevented us from going.

Collaboration between PMDU and Departments

Figure 7

In the PMDU and in Pemandu, a virtue was made of combining in the teams some excellent civil servants with others brought in from outside – consultancies, audit organizations and university research departments. The mutual learning that resulted was one of the attractions of working there. This approach also addressed head-on the evident lack of delivery skills in the civil service. Figure 8, showing the previous roles of the various members of staff, gives a flavour of the Delivery Unit in 2004. Rigorous recruitment and selection processes were vital to ensure the necessary relationships and leverage. We did not just want clever consultants – we wanted them to have a passion for the cause and relationship-building skills too. Adrian Masters, who joined PMDU from McKinsey and has gone on to an outstanding career in the National Health Service, illustrates the point.

Previous Work Locations of PMDU Staff (2004)

Figure 8

In the original PMDU, we built a culture around five key words:

* Ambition – No compromise. That's what it takes.
* Focus – Never be distracted.
* Clarity – Collect and examine the data. Confront the brutal facts and don't be afraid to tell the prime minister.
* Urgency – Constantly counteract the tendency of bureaucracies to delay.
* Irreversibility – See the change through so it will stay changed.

I repeated the five words, like a mantra, whenever I got the chance.

There was one other central idea that underpinned our work – simplicity. Government is genuinely complicated, and on top of that there are some people in bureaucracies who seem to relish adding further complexity, perhaps because they believe that if they (and only they) understand the state of affairs, it adds to their sense of power. If things are to get done, if delivery is to happen, there has to be a countervailing drive for simplicity. This is what we aimed to provide. In the end, we said, there are only five questions that matter, and we're going to keep asking them, calmly and persistently, until we get answers.

The Five Key Questions

Table 5

Simple. Really simple. The difficulty is the sheer discipline and persistence required to keep it this simple. To this day, if I find myself in conversation with a prime minister, president or minister, these five questions set an ideal agenda.

The final piece of putting a delivery unit in place is to think through the different relationships it needs and how to ensure they work effectively. I saw many potential pitfalls in this respect when I took on the Delivery Unit role. There was the looming presence of Gordon Brown and the Treasury, wondering how to deal with a new part of the Blair machine seemingly intended to trample on Treasury territory. There were Cabinet ministers, powerful New Labour figures, thinking anything from 'Don't waste my time' to 'Are you Tony's spies?' And there were the permanent secretaries, the mandarin class, made legendary by _Yes, Minister_ and without compare in their ability to shrug off any challenge from yet another new unit which might threaten their ordered world.

So, quite early on, we drew up a list of things we knew all these powerful people hated about units at the centre of government, and we promised not to be like that. Instead, we said, we'll demonstrate the characteristics you say you would like. And we listed those too. This became our calling card – in effect, a contract about how we would work.

OUR WORKING APPROACH SEEKS TO AVOID

* micro-management
* generating bureaucracy or unnecessary work
* getting in the way
* policy wheezes (or gimmicks)
* being driven by headlines
* short-termism
* opinion without evidence
* changing the goalposts

OUR APPROACH EMPHASIZES

* keeping the PM well informed about his key priorities
* consistent pursuit of those priorities
* data and evidence
* plain speaking
* early identification of problems
* imaginative problem-solving
* application of best practice
* recognizing differences as well as similarities between departments
* urgency
* building capacity
* leaving responsibility and credit where they belong
* the expectation of success.

I promised ministers and officials that if they found any of my staff following the 'seeks to avoid' list, I would intervene immediately. Afterwards I summarized on a PowerPoint slide how I'd thought about each of our key relationships (Table 6). The problems won't be identical in other governments and bureaucracies, but you can guarantee that there will be similar challenges with relationships because, in the end, whatever else government is, it is always a soap opera.
Rather than simply letting relationships develop, it is much better to be conscious about fostering the relationships you would like to see.

**PMDU: Key Relationships**

Getting the key relationships right

* The Prime Minister: Whatever you're doing, we're focused on your priorities.
* The Chancellor of the Exchequer: We'll make sure the money you allocate delivers results.
* Cabinet ministers: We'll help you get your bureaucracy to deliver the government's priorities.
* Top civil servants: We'll sustain a focus on these priorities and help you solve your problems.
* Everyone: However much we contribute, you get the credit.

Table 6

It is worth emphasizing how subtle and subjective these issues of leadership, selection of staff and relationships are. Now that delivery units have become fashionable, some major consultancies have become prone to touring the world recommending delivery units as the solution to pretty much any public sector problem. Their cookie-cutter approach misses these key subjective factors, with the result that all too often a delivery unit that looks beautiful on the organization chart fails in practice.

RULE 11 THE DELIVERY UNIT NEEDS TO BE SMALL AND WELL LED (and excellent at building relationships)

##### DELIVERY AND THE CENTRE OF GOVERNMENT

Geoff Mulgan was a colleague of mine in No. 10 Downing Street, responsible for strategy when I was responsible for delivery. Both before and since that experience, Geoff has thought deeply and widely about government and how it should operate. He has also learned more about how it works in different parts of the world than anyone I know. Recently he has turned his attention to the centre of government – the range of functions around a president, PM and/or cabinet – and asked what its role should be and how it should work. He says, 'Current structures usually fail on four counts (there are plenty of other problems – these are just some common ones)':

1. 'They are insufficiently effective at delivering legitimation...' In other words, they don't build trust and fall into the gap between expectations and delivery.
2. 'They are poor at making use of the right types of knowledge needed for good decision-making.'
3. 'They are poor at coordination and alignment of the often sprawling government machine...'
4. 'Many get timing wrong – it's not just that they do slowly things which should be done fast, but they also default to doing fast things which should be slow.'

My own experience, not just in No. 10, would confirm Mulgan's perspective. The Delivery Unit came in time to ameliorate the problems he lists, but none of us involved believed we had a coherent structure in No. 10 at that time. Mulgan's answer is to use the metaphor of a brain. The centre of government should have the following capabilities:

* Observation – seeing what's happening
* Attention – focusing
* Cognition – reflecting and analysing
* Creation – imagining and designing
* Memory – learning and not repeating mistakes
* Judgement – discerning and deciding
* Wisdom – making sense of complexity and bringing a moral perspective to bear.

With this perspective, Mulgan identifies twelve tasks which he says 'need to be part of someone's job. Indeed a test of coherence of any current centre of government is whether it's clear where responsibility lies for each of these...'

1. Make the direction of government explicit... and avoid the temptation to generate blizzards of initiatives.
2. Shape and share the strategy for the whole of government – and create capacities to do this.
3. Ensure that the important things happen – and create capacities to keep a sharp focus on the ones that matter most. (This is the role of a delivery unit, as he goes on to say.)
4. Align national, regional and local actions as far as possible.
5. Ensure structures are aligned with purpose – rather than accepting traditional silos.
6. Bring in the right inputs – from open data to citizen experience – to guide experience.
7. Mobilize the best available knowledge and insights to guide decisions.
8. Try to do what works – and leave better evidence behind for your successors.
9. Put money to work – and ensure finance is aligned to strategy.
10. Prepare for the future.
11. Organize the top politicians and officials as a single team with a shared commitment to ends and means. (This is the guiding coalition idea – see the next section – at whole-government level.)
12. Take care of the relationship with the public.

It's a fascinating list. Numbers 1 and 12 are essentially tasks of communication. Numbers 2, 4 and 5 are organizational. Tasks 6, 7 and 8 are all about the capacity to learn more and better – a key role for the delivery unit and other parts of the centre. The key point for our purposes is that the delivery function is crucial not just in ensuring the most important things happen but in contributing to the wider agenda of better-informed decision-making.

It is worth testing Mulgan's theoretical perspective against a real example. Soon after he became president of Chile in 2010, Sebastián Piñera decided to reorganize the centre of government, including establishing a President's Delivery Unit. Piñera had been a successful businessman and was at home with the idea of priorities and targets. I met him that summer. I remember the breathtaking views of the Andes at dawn as my flight arrived in Santiago; I remember, too, Piñera's restlessness, speed of thought and the slight twitch of his eye.

Overall, his presidency, which finished in 2014, was mixed. He was considered unlucky, having to deal not just with an era of financial and economic crisis, like others around the world, but also with the famous mining accident, earthquakes and other disasters. He had to face massive student protests against policies which he supported but which originated before his time. It is true that he committed some gaffes as well, including making a particularly offensive remark about women. All this tarnished his reputation. His story is a cautionary tale for anyone who takes the simplistic view that improving delivery of results will, alone, lead to political success. Of course delivery helps, but for any leader or government a lot else – including the 24-hour media churn – affects reputation and credibility.

As a recent report from the Inter-American Development Bank makes clear, on delivery the Piñera administration made real headway. 'We expect Chileans to judge us on our results, not our good intentions,' the president asserted early in his term. His Delivery Unit would check progress on demanding goals in the president's agenda, such as 'Reduce by 25%, by 2013, [criminal] offences committed in public places' or 'Create a million new, quality jobs during the period 2010–14, at the average rate of 200,000 per year'. These by no means trivial goals were grouped under seven 'pillars' covering everything from economic growth to health and reducing poverty. With ups and downs, the PDU made a significant contribution to progress on these goals, according to the IDB account.
It acted vigorously and collaboratively with departments when progress slipped, as in 2013 when there was a crisis in relation to the crime goals. It helped 'to maintain the attention and the focus on the government's goals, irrespective of headlines' and it made 'the government programme more actionable'. Through its daily contact with ministers, it facilitated 'rapid identification of bottlenecks and coordination failures'. Certainly the departments were constantly aware of its existence. 'The PDU is like a fly buzzing in your ear,' said one official.

This commentary suggests that the PDU contributed significantly to the agenda Mulgan sets out, especially to the central task of ensuring the important things get done. In fact, the PDU was one part of a wider reorganization of central government designed to address more comprehensively the kinds of challenge Mulgan identifies: the OECD lists four key functions which encompass most of what he describes. They are strategic planning, coordination, monitoring of delivery and public accountability. The PDU in Chile dealt with the third of these, but other reforms addressed the wider agenda. This is illustrated in Figure 9, which places the Delivery Unit in its context. Those planning a delivery unit have to think through its place in the centre, as the Piñera administration did in Chile.

Figure 9: Organization Chart of the Ministry of the Presidency

Figure 10: Coherent Organization at the Centre

After the end of my time in the PMDU, I suggested that Blair should adopt a more coherent approach to the centre of government in the UK. The idea was to embed delivery thinking into a coherent overall model of the centre (see Fig. 10) that learned the lessons Geoff Mulgan has described. There is no single right answer, and we will never know whether my proposal would have worked because it was never adopted. In any case, everything depends on the quality of leadership and the ability of leaders to build the right kind of relationships, which we will turn to in a moment. In Indiana, as we'll see in chapter 8, they adopted a different but highly effective model, embedding the delivery function firmly within the finance function. There are numerous plausible models. All in the end depend on the people and their relationships. Mulgan concludes that the centre needs no more than a few hundred people in total, 'highly skilled; highly networked; and well-integrated'. It should, he suggests in a judicious phrase, combine 'clarity with lightness'.

As for Piñera's PDU, the evidence in the IDB report suggests it made a significant difference to outcomes, with over half the president's commitments, some of which were very ambitious, being delivered. It did so largely due to its persistent monitoring. As we've seen, though, successful delivery does not always translate into political success. As the IDB comments drily, such work, even when it changes the experience of citizens significantly, 'may not be a source of political gain'.

##### THE GUIDING COALITION

The great Admiral Lord Nelson lies dying in the arms of his faithful friend, Thomas Masterman Hardy, on the burning deck of the _Victory_, knowing that the result of the great battle of Trafalgar is assured, but also that he will not live to tell the tale. 'Thank God I have done my duty,' the admiral manages before losing his power of speech and, soon afterwards, drawing his last breath. (Whether he also said 'Kiss me, Hardy' remains a matter of conjecture.)
Trafalgar was a great victory, scotching the threat of a Napoleonic invasion of Britain and laying the foundation for a century of naval dominance. And it was not the only victory Nelson had won – he had been a heroic captain at Cape St Vincent, and a bold, imaginative admiral at the Nile and Copenhagen. What was it that enabled him to achieve this string of spectacular victories? Everyone agrees that his courage and the way he led by example were vital ingredients. His attention to the health and well-being of the ordinary sailors was exemplary as well. But these were not the game-changing qualities – other admirals could do these things too (and some did, if not to quite the same heroic levels). What made Nelson different was what today we'd call his management style. John Sugden, in his brilliant and epic biography, explains:

> The increasingly large forces deployed in modern warfare made it difficult for generals or admirals to exercise close supervision or control. Communications could easily break down... in sprawling, complicated encounters wreathed in smoke. Nelson never dispensed with signals [flags run up a mast to communicate with his captains]... but he did reduce his dependence on them by briefing his senior officers beforehand... His purpose was to draw them into the spirit and detail of the enterprise and to harness them to his expectations and standards of performance, so that they might use their judgement more effectively.

While the French or Spanish captains were waiting for signals from their admirals, the British captains could exercise judgement and act without waiting because Nelson had built them into a team and trusted them. Far from being less cohesive as a result, the British were more cohesive. Nelson and his captains all understood why they were doing what they were doing, what the battle plan was and how they intended to execute it. Once in battle – when, of course, everything does not go according to plan – each of them knew the overarching goal and the principles of the strategy and so each could replan and act on his own initiative. If in doubt, 'get in amongst them' – meaning the French – was the watchword.

Reforming a large public service or implementing a major new government policy may not be quite as demanding as the battle of Trafalgar, but Sugden's words 'Communications could easily break down... in sprawling, complicated encounters wreathed in smoke' seem uncannily accurate as a description of life at the heart of government. Nelson had invented something to which a professor at Harvard Business School has much more recently given a name. One of the key ideas in _Leading Change_, John Kotter's excellent guide to successful business leadership, is the concept of the guiding coalition. He is referring to transforming businesses, but it struck me as a concept that was perhaps even more relevant in a government environment. The guiding coalition is not the same as a management team; it's a shared understanding among seven to ten people in key positions about what needs to be done and how. Margaret Mead once famously said: 'Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it's the only thing that ever has.' In essence, the guiding coalition is the same idea inside a large organization. And it is exactly what Nelson created around him as an admiral. In government we found that, if you got it right, the guiding coalition idea worked excellently.
Yes, there may be a cabinet or a ministerial team, or a cabinet committee, or a departmental board, and each of these might have a function, but each has its limitations when it comes to driving delivery. A guiding coalition is quite different, and if you put it in place the difference it makes is incredible. When I read John Kotter's book, I was just leaving the Department for Education to set up the Delivery Unit and I realized immediately that one of the reasons for our success in education (relative to other departments) in the first Blair term was that, without even knowing the term, we had a guiding coalition in place. Let me illustrate.

One of our goals had been to improve literacy and numeracy in primary schools. We had in fact seen the percentage achieving good standards in literacy rise from 57 to 75 per cent, with a similar improvement in numeracy. In 2001, international comparisons put England third in the world for reading among ten-year-olds; in 2007 England was shown to be the most improved system in the world in primary school mathematics over the previous decade. The strategy had been good if sometimes controversial, but a key to it had been the shared understanding among a handful of people: the Secretary of State for Education, David Blunkett; his Minister of State, Estelle Morris; myself as head of the Standards and Effectiveness Unit, which was responsible for implementation of the strategy; Conor Ryan, David Blunkett's brilliant political/media adviser; Andrew Adonis and David Miliband in No. 10; David Normington, the top civil servant who was Director General of Schools at the time; and (at least until the autumn of 2000) Chris Woodhead, Chief Inspector of Schools. We all thought the strategy was important and the mission vital to the future of the country; we all understood the case for it and how to rebut the critics; and we all understood how it should be implemented. We were like Nelson and his captains; because we shared an understanding, we could act in harmony. The result was that we could proceed rapidly and effectively without needing endless conversations or meetings. David Blunkett barely mentions the strategy in his memoirs, not because he didn't care – he burned passionately for the cause – but because he knew we were getting on with it and it was working.

One of the threats to successful implementation in government in the absence of a guiding coalition is the dissonance that can arise in public statements made by different individuals, or the contrast between public and private statements. Rapidly, these are reported as 'rows', and sometimes they reflect real rows, but the biggest problem is the confusion that results in the field. In education, for example, teachers on the receiving end of mixed messages start asking 'What do they want us to do?' and end up shrugging their shoulders and saying 'They haven't a clue.' It is all too easy to slide down this slippery slope. With our guiding coalition in relation to the literacy and numeracy strategy in the first Blair term we avoided this fate. David Blunkett or Estelle Morris could answer a question in Parliament, Conor Ryan could brief the editorial writer of _The Times_, I could address a headteacher conference and we'd all say the same thing – without needing to check. Resonance in place of dissonance.
Compare this to the same issue in Blair's second term, which featured three secretaries of state for education in four years, none of them as passionate about literacy and numeracy as David Blunkett had been. Some of the officials wanted to add other objectives to the agenda, such as behaviour or information technology, because the literacy and numeracy strategies had proved to be such an effective implementation mechanism. As a result, the focus was lost. No one could say any longer, as I had to primary headteachers, that as long as you deliver on literacy and numeracy, everything else will be forgiven. Meanwhile, the department as a whole began to listen to teachers' pleas for less prescription and less pressure. Eventually ministers started talking about 'letting go'. Stephen Twigg, a junior education minister at the time, even said on the _Today_ programme that each teacher knows best in their own classroom, precisely the line that had caused the fifty-year plateau from the mid-1940s to the mid-1990s. In No. 10 we were still wedded to the hard-edged (and previously successful) approach but had become just one voice in a cacophony. No guiding coalition there! And very limited improvement in results in the second term compared to the first.

So what is the generalizable lesson for the science of delivery? For each key goal, identify the seven to ten key positions. Seek to make compatible appointments and make time early on to create the shared understanding. Make sure that there is an honest dialogue among this group so that problems of implementation are identified and resolved. It sounds simple, as does so much of the science of delivery, but it is hard to do in practice amid the tumult of government. Dalton McGuinty achieved it brilliantly as premier of Ontario in relation to education and health. Most of the time he didn't have a separate delivery unit, but he was a master of delivery and consciously built guiding coalitions to lead key priorities. Similarly, even in the political chaos of Washington DC, Arne Duncan, the US Secretary of Education, was able to build and maintain the cross-party guiding coalition that put his ground-breaking Race to the Top programme in place successfully. The results are now coming through. Both these talented political leaders are proof that the philosophy or mindset of delivery is more important than the unit itself.

RULE 12 CREATE A GUIDING COALITION FOR EACH PRIORITY (to increase clarity and speed)

##### CIVIL SERVICE REFORM

Sooner or later, a conversation about how to improve delivery capacity in government slides into a more general one about civil service reform. This in turn raises much wider issues about governance – corruption, for example. When a top official in Punjab praises one of his officials, he calls them 'hard-working and honest', which by implication is a negative reflection on many others. So civil service reform is a vital issue for reasons far beyond delivery capacity. Getting it right is fundamental. As Bismarck, the great (and ruthless) German Chancellor, put it, 'With bad laws and good civil servants... one can still govern, with bad civil servants even the best laws cannot help.' But it's also a risk. It is a risk because it can look so overwhelmingly difficult that political leaders give up. The opposite can happen too.
They take it on and it becomes all-consuming; the reform of the civil service and the structure of government absorb all the time and energy available, and the citizen sees little or no benefit, at least in the short term. Donor agencies are often to be found proposing huge structural reforms, new agencies and generic capacity-building without ever specifying what goals they are meant to achieve.

This is not a book about civil service reform. The science of delivery lesson here is clear: don't avoid civil service reform, but equally don't let it absorb you and the bureaucracy totally either. Set your goals and focus on delivering them; that is what the citizens will want and what you'll be remembered for. (In 2007, Scotland did just this, abolishing traditional departments and building a unified civil service around five directors general responsible for five broad policy goals, such as Safer Scotland or Smarter Scotland.) Learn the lessons of delivery as you go and from this build an agenda for civil service reform which can be approached in a measured way behind the scenes. Don't ask, 'How should we change the civil service?'; ask instead, 'How do we need to change the civil service in order to deliver our goals?' These are two very different questions! Don't depend on generic civil service reform to achieve the outcomes you want; you don't have the time.

RULE 13 BUILD THE CAPACITY TO DELIVER YOUR AGENDA (civil service reform for its own sake can be an energy drain)

Emmet Regan, a talented MBA student at Warwick Business School, reached a good understanding of what underpinned the success of the PMDU; the lessons he identified – the 'six Ps' – apply, I would argue, generally:

* Prioritization – establishing clear, specified goals
* People – choosing good people, establishing good relationships
* Power – using the power of the prime minister's office wisely to drive action
* Public spending – focusing investment on delivering agreed outcomes
* Politics – understanding and seeing the value of politics, rather than seeing politics as a problem
* Performance – focusing on the actions required to deliver the agreed outcomes.

Each of these topics is addressed in one or more chapters of this book. As a delivery unit comes into place, though, it is worth testing the design of any proposed organization against Regan's checklist. Whatever the country, whatever the context, whatever the agenda, these ingredients are the irreducible core. In fact, if you have them in place, the unit itself is not an essential requirement. In other words, the disciplines of delivery are more important than a unit with the name; an effective unit just makes it more likely that these disciplines will be operational. With them, success is possible. Without them, it is not.

The first two chapters have been about realizing that winning an election and governing a country are two very different activities requiring very different skills. The message is clear: in the inevitable hubris following an election victory or an appointment as a minister, remain clear-eyed and humble. Set an agenda (chapter 1), review the current state of the bureaucracy on which you depend to deliver that agenda and establish an organization capable of delivering that agenda (chapter 2). Meanwhile, remember that, unless you already have 10,000 hours of deliberate practice behind you, you have a lot to learn – so make sure you create the circumstances in which you learn fast. As you do, set your strategy, the subject of the next chapter.
## 3. Strategy

Someone in the meeting suggested that the word 'preference' would be easier to 'sell' to the unions and the Labour Party. 'Choice' might provoke a toxic reaction from either or both. I noted in my diary that Blair didn't quite bang the table, but he might have done. 'Choice is choice,' he insisted. 'Why mince our words?' He added, 'It's going to be hell for a large part of the time we're doing this... [but] I don't see any point in being Prime Minister unless we take risks.'

This conversation about strategy took place shortly after the 2001 election. How would we approach not any specific reform, but our entire reform agenda? What were the principles on which we would base our reforms? In the run-up to the election, Blair had made a speech at Gravesend (in Kent), suggesting there should be three principles: namely, set standards and expectations; devolve power, budgets and responsibility to the frontline; and break down artificial barriers and demarcations between professional groups such as doctors, nurses and pharmacists. Now at Chequers, the prime minister's out-of-London residence, which has an upstairs equivalent to the No. 10 Cabinet Room, we debated whether these three principles were a sufficient basis for a radical reform programme. Someone, probably knowing in advance that Blair would seize on it, threw choice into the mix. Shouldn't there be a fourth principle that, wherever possible, we would offer the patient, the parent, the citizen, choice?

It was the perfect issue to split the reluctant Blairites from the enthusiastic Blairites (the latter included, needless to say, the prime minister). The fact that unions and party would react against it weighed heavily with some, but only encouraged others. Blair was a master of running against his own base, as indeed Thatcher had been. We worked out the argument for it, and it was compelling. The wealthy have the greatest choice, we argued, because they can opt for private healthcare or education, or they can easily move house to get their child into the school of their choice; whereas we wanted choice for everyone. When those with wealth already had choice, why should the poor have to put up with a take-it-or-leave-it single offer? In the end, choice is an exercise of power. Thus by extending choice we were empowering the powerless. We argued too that choice would be attractive to the middle class and help persuade them to remain committed to the public services, which in turn would mean they would be prepared to pay taxes to support them.

Once the principle and the case for it were established, certain policy prescriptions followed. For example, a diversity of provision would be required, so foundation hospitals would be given independence from the National Health Service, and academies – independent state schools with similarities to charter schools in the US – would be established in education. Similarly, citizens needed information on which to base their choices, so transparency about performance would be vital; and advice would be desirable for those who wanted choice but didn't have the confidence to exercise it. The conversation at Chequers about principles flowed into strategy, which in turn determined policy. This is the key insight here, in contrast to what all too often happens, which is that a government simply generates a list of things to do.

In the plush, dark lobby of a London hotel, I met the top civil servant from the education department of an emerging Asian nation.
Small, energetic and perhaps a little jetlagged, he had been encouraged to pick my brains on the way back from a UNESCO event in Paris. We hadn't met before, so while we relaxed in each other's company, I asked him about himself and about his country's recent election. Then I turned to his agenda for the meeting. 'How can I help you?' I asked. 'What are the challenges you face?' 'Oh,' he replied, 'it's tough. I've got sixty-nine initiatives to implement before Christmas!'

My heart sank. That word 'initiative'. As I've said before, I'd abolish it if I could. Lots of separate initiatives don't just fail to add up to the achievement of ambitious goals – they actively get in the way. I learned this in the Department for Education in Blair's first term. In the first couple of years, we were always looking for things to announce. Announcements and initiatives are first cousins! Some of them were very good, others less so, but there were simply too many of them. On my regular school visits, I would set out the strategy for the staff and then listen to their comments. The first was almost always something like, 'Well, Michael, you might think there's a strategy; we think there's just one damn thing after another.' That's the effect that initiatives have. They cause confusion. They undermine clarity about priorities. Sixty-nine initiatives before Christmas would be a lot of work and almost certainly make things worse. Yet there are management consultancies even now that advise governments to create a portfolio of initiatives...

This is a chapter about strategy – your broad approach to achieving your goals. There's nothing wrong with ideology as a starting point. In fact, it's a key ingredient of political debate. In elections, we are not just choosing between different leaders and different manifestos, we are choosing between different views of the world. Ideologies, in other words. And if the government's ideology is clear enough (and is not devoid of foundation in reality), it can help get things done. Take Margaret Thatcher. After the first three or four years were spent, in her words, 'sorting out the supply side', she moved on to a series of privatizations of the state-owned enterprises before addressing the problems of health and education. During that time I happened to run into a Treasury official in a pub. He said, 'It's very simple for us because we know that whatever the problem is, the question she wants us to answer is "How do you make a market?" ' Crucially, therefore, they were able to apply this perspective not just to the problems in the PM's focus, but also to those that lay ahead. Then, when the all-seeing eye fell on that problem, the thinking was already far advanced. A good example of how ideology can help.

This explains too why, at the start of his second term, Blair wanted to clarify and establish the four principles of public service reform which became the theme of a series of speeches. Blair had learned by then that the 'eye-catching initiatives' which he had demanded in his first term brought only superficial progress; they did not change how systems work. Not all of his ministers kept up. One was famously dismissive of the approach: 'Strategy is b*****ks!' he asserted, and carried on announcing initiatives.

RULE 14 WORK FROM PRINCIPLES TO STRATEGY TO POLICY (and put a stake through the heart of initiatives)

In the modern world, ideological differences are not as stark as they were in the mid-twentieth century. 'What works' should be the mantra of the twenty-first century.
The differences now tend to be differences of emphasis – some favour choice as a good in itself; others do not. Some would like to reduce tax rates over time; others have a modest preference for more public expenditure if they can make it work. Some have greater faith in markets left to themselves; some less so. These debates, such as that between Bhagwati and Sen mentioned in the Introduction, can be very vigorous and – at the extremes – broad as well as deep. Often, the widest, deepest political divisions are over cultural issues – gay marriage, the ordination of women, immigration and so on, as the Tea Party in the US exemplifies – rather than over core political issues. Either way, there are a range of legitimate positions and many can be shown to bring benefits if reforms are well designed and there is a systematic approach to delivery.

The problem is that such broad ideological positions don't necessarily tell you what to do in any given case and most certainly don't provide you with the detail. Moreover, the evidence is important and it has clear messages which you ignore at your peril. But even with the evidence well assembled, there are judgements to make, for instance about the specific moment and context in which a reform is introduced. The other pitfall of much of the evidence is that it is subject-specific, relating for example to health, education, crime or welfare, but not necessarily connecting all of them. The main aim in this chapter is to set out five broad approaches which are potentially applicable across a wide range of government responsibilities. In relation to each, there is a brief commentary drawing on the evidence. The central point of the chapter is that sound strategy is a precondition of successful delivery. Strategy without delivery is vacuous; delivery without strategy is incoherent.

In establishing the five approaches, I've drawn on my own previous work (see pp. 333–42 of _Instruction to Deliver_) and also on the excellent work of Gwyn Bevan from LSE and Deborah Wilson from Bristol. They and others have focused on what they describe as 'the natural experiment' that occurred in the United Kingdom after the Blair administration introduced devolved legislatures and governments for Scotland, Wales and Northern Ireland. What this involved was the pursuit of divergent paths from largely similar starting points in health and education. The analysis that results has much more general applicability, as I hope to show (Fig. 11 below).

Figure 11: Five Paradigms of System Reform

In addition to the five approaches there are three functions that the system needs to consider centrally. Taken together, these three constitute the concept of stewardship, which in essence means leaving a system better than you found it. Finally, there is the wider context of community engagement. This chapter deals with each in turn.

##### APPROACH 1. TRUST AND ALTRUISM

Trust and Altruism is the approach everyone would like to see work; many governments have adopted it and some continue to depend upon it. It was for decades the way the UK's National Health Service was run. It has also been the default approach to governing school systems across the world. In this model, the basic idea of government is to fund the inputs (buildings, equipment, salaries and so on), staff the service with professionals who have been trained and have gained certain professional qualifications, and leave them to get on with it.
Attachment to their professional ethics, it is believed, means they will do their best for those they serve, while the profession itself will ensure the professional learning and growth of its members. Seen from the point of view of the professions, the approach can be summed up in the phrase: 'Give us the money and get out of the way.' It has many attractions for government too: not much action is required; there is no need to engage in a battle of ideas; it assumes the best of the public service workforce and leaves much of the decision-making to them. It assumes that accountability, data and performance management are not necessary. For these reasons, the approach suits public sector professionals too – pay without accountability has its attractions, needless to say. Moreover, when there is system underperformance, the strong presumption is that lack of resources is the cause of the problem and therefore the solution is to spend more. This means public debate focuses purely on inputs: the professions want more money and pay, and often the public supports them; governments have to balance competing demands and raise money through taxes, so there's a cap on their ability to respond. As a consequence, this approach results, surprisingly often given its basis in trust, not in harmony but in acrimony – usually over funding – between government and professions.

In his excellent account of professional motivation, Julian Le Grand distinguishes between 'knightly' motivation – altruism – and 'knavish' motivation – self-interest. Trust and Altruism assumes that all the public sector professionals are knights, not knaves. Attractive all round though it may be, there is one significant problem with it. It doesn't work. As Le Grand points out, 'the assumption that knightly behaviour characterised those working within the welfare state proved... vulnerable... many politicians... grew increasingly sceptical of the view... that professionals were only concerned with the welfare of their clients'. He was talking about Britain, but the insight applies generally. Bevan and Wilson, in their striking comparison of health and school performance in Wales and England, reach a firm conclusion:

> The consistent finding is that the [Trust and Altruism] model resulted in worse reported performance in Wales as compared with England on what were each government's key objectives of improving examination performance at age 16 and reducing long hospital waiting times.

While they accept that England's system of sharp accountability had some problems, they conclude that 'the benefits... did indeed outweigh their dysfunctional consequences'. They continue:

> So we argue that our findings show that the [Trust and Altruism] model not only lacks the theoretical justification we identified earlier, but has been found wanting in our (and other) empirical studies.

Reliance on Trust and Altruism also explains why so many school systems barely improved over the decades between 1950 and 2000. The cause of the failure is that the public service professions, while often full of very fine people, are not all knights; many – like all human beings, incidentally – have some knavish elements and a few are very knavish indeed. Professions are powerful; their cultures are strong, which can be good (doctors responding to any emergency) or not (a quarter of India's teachers not turning up on any given day), depending on the time and place.
The professions often hold governments captive too, through powerful unions, effective lobbying and in places – such as India and the US – direct involvement in elections themselves. I've walked into an ordinary government school in a suburb of Accra, Ghana, to see Trust and Altruism at its worst. The headteacher was warm and welcoming, but also resigned to his fate. The teachers took no notice of him and firing them was virtually impossible. He showed me into four classrooms; in three there was no teacher at all, though the sight of visitors brought two of them scurrying back; in the fourth classroom, the teacher was present, eating her lunch in the corner of the room; she did not look up once, even when spoken to by the headteacher, and studiously ignored the children. This is not untypical, as everyone knows but prefers not to mention. The government of Ghana pays teachers well. In fact, their pay is such a big proportion of the education budget (over 95 per cent) that there is hardly any money left for anything else. There have been no new textbooks for years. While, of course, there are some heroic and dedicated teachers in Ghana's government schools, the overall effect is clear: the government is paying for knights and getting knaves.

There are places where the knightly assumptions work better. Finland is famed for having an outstanding school system, which consistently comes out well in international comparisons, though it has fallen away somewhat recently. It takes a fundamentally knightly view of its teachers, who are given extensive professional autonomy and deliver high quality across virtually all schools. At first the country was surprised by its success and unable to explain it. Now, a small group of Finnish educators, led by the indomitable Pasi Sahlberg, have made themselves famous by touring the world to explain how Finland does it. The answer, in brief, is that teaching is a very tough profession to join in Finland. Education is highly valued and respected in the culture, so the country's brightest and best young people aspire to get into it. One British minister I know learned this first hand on a visit to Finland, where he was shown round by an impressive young woman from the Finnish foreign ministry. 'How did you get such an excellent job?' he asked her. 'I applied for teacher training and they turned me down,' came the reply. With a talented intake every year and a strong, principled professional culture, the teaching profession goes from strength to strength. In effect, it holds its own members to account through this strong culture. In addition, the society as a whole has a strong commitment to equity and expects these excellent teachers to do everything they can to enable children who fall behind to catch up. And they do. The system is high-performing, though the constant praise has led to a degree of complacency and a lack of further reform, according to Sahlberg, resulting in a fall in performance in recent years.

Aspiring to have a school system like Finland's has its attractions. However, in most parts of the world it would be rash to base a strategy on achieving it, partly because it is a very difficult thing to do and most systems already have professional cultures of a very different nature; and partly because Finland itself is highly unusual – small, homogeneous and with what sociologists call 'thick social capital'. It is a place where everyone's tax return is available online. Learn from Finland, absolutely yes; copy it, probably not.
As they say at the end of some children's television programmes: Don't try this at home.

RULE 15 TRUST AND ALTRUISM IS POPULAR BUT DOESN'T WORK (other than in unusual circumstances)

##### APPROACH 2. HIERARCHY AND TARGETS (OR COMMAND AND CONTROL)

This approach, Bevan and Wilson say, has 'a limited set of public targets that clearly signal priorities, with specified rewards for "successful" organisations and sanctions directed at those responsible for running "failing" organisations'. It is much the same as the approach I have described in previous publications as 'Command and Control', defined as 'a top-down implementation of a change the government wants to bring about'. As an approach, it has a bad name, mainly because it is unpopular with professionals who feel the pressure, but the evidence suggests that, implemented well and sustained, it can be effective. Bevan and Wilson show how well it worked in England in reducing health waiting times and improving school performance, especially compared to Wales and Scotland, which preferred Trust and Altruism.

Figure 12: The Performance Wedge

It is particularly effective in improving underperforming services or parts of services, which provides an opportunity to introduce another useful tool of policymaking, the Performance Wedge. Jim Collins's book _Good to Great_ is justly famous for its analysis of the leadership and strategies that take companies from good performance to great. It was one of the seminal texts I read while leading Blair's Delivery Unit. We found we had to add to his two-point scale two lower levels of performance, Awful and Adequate. Thus we arrived at the Performance Wedge. While there were great elements of services – such as a great hospital or school – none were universally great. By contrast, some were largely awful. For instance, fine collection in the UK was at 50 per cent when the Delivery Unit first took responsibility for it in 2001; by 2004 it was at 75 per cent. Big progress, but surely only from Awful to Adequate. The adjectives above the wedge describe the attitude of citizens as performance improves. While it is poor, citizens will exit if they can; if it reaches adequate, they continue to grumble. It may be a huge political achievement to shift a service from awful to adequate, but the citizens are unimpressed; they think you should have done it years ago. Only when a service becomes good or great do citizens begin to consider dancing in the streets.

Hierarchy and Targets (or Command and Control) is particularly appropriate for shifting a service from Awful to Adequate. No point after all in trusting or relying on altruism where a service is awful; the time for that is surely past. Just make them sort it out! Of course, if you attempt Hierarchy and Targets and do it poorly, you might make things worse. For example, New York State struggled for years with its intervention in the poor-performing Roosevelt School District. You will need political courage because it will be controversial, but do it well and the citizens will benefit. (They may not applaud, because they always knew it was awful; why didn't you?) Gwyn Bevan emailed me in April 2014 with his latest work on health performance to say:

> ... the interesting finding is that in response to the criticisms of the performance in Scotland as compared to England, the Scots strengthened their system of performance management and created a delivery unit within their department of health.
> The outcome has been that performance on waiting times in Scotland is now similar to England. Wales still lags behind.

In short, Scotland abandoned Trust and Altruism, adopted Hierarchy and Targets and reaped the rewards. In relation to health, Wales has yet to do so.

In my recent personal experience, the best evidence of this approach in practice has been the Punjab Education Roadmap, which began in 2011 and continues. Hierarchy and Targets has been applied across a province with over 20 million school-age children and 60,000 schools. Enrolment, student attendance, teacher attendance and the provision of facilities and materials such as textbooks have all risen significantly as a result of setting targets and introducing effective hierarchical management. Even learning outcomes have improved a little. A system that was unmanaged and therefore Awful is now managed and is therefore becoming Adequate. A crucial ingredient of the Roadmap – monthly data on key indicators collected from every school – is representative of the Hierarchy and Targets approach generally: it depends on a steady supply of reasonably good data on which to base decisions. Incidentally, in polls of Pakistan's citizens in June 2014, around 65 per cent thought Punjab the country's best-run province.

The major challenge with this approach comes after it has been sustained for a while and is succeeding: that is, how to sustain the progress beyond Adequate to Good and then from Good to Great. It works to get to Adequate, but can it also get a system to Great? This is doubtful – in the words Joel Klein of New York City and I hit upon, 'You can mandate adequacy but you cannot mandate greatness; it has to be unleashed.' Indeed the very idea of 'mandating greatness' seems absurd. You can make people meet the standards of adequacy; to bring about greatness you have to create the conditions in the system and foster it. Targets might still be desirable or necessary, but once a system moves beyond adequate, a different overall approach is required. This is where Daniel Pink's three elements – autonomy, mastery and purpose – come into their own. They inform the next two approaches.

RULE 16 THE HIERARCHY AND TARGETS APPROACH WILL GET YOU FROM AWFUL TO ADEQUATE (if executed well)

##### APPROACH 3. CHOICE AND COMPETITION

As we have seen, Tony Blair was determined to put choice and competition at the heart of his second-term public service reforms. It has also featured strongly in school reforms in the US in places such as New Orleans and Boston and in the way healthcare is organized in many countries. Julian Le Grand, whom we met earlier, is the best-known theorist of applying choice and competition to public services. He argues that the introduction of choice and competition empowers the citizen; extending his chess metaphor, they become queens instead of pawns. The effect is to incentivize public sector institutions and professions regardless of whether they are knights or knaves. In other words, the approach does not depend on assuming knightly behaviour as in Approach 1, or knavish behaviour as in Approach 2. To make it work, the service user – patient or parent, for example – must be given real choice and the information on which to base the decisions; the money in the system must follow those choices; as a result the more attractive or successful providers will gain and the less successful will struggle. So far, so good, but it is not straightforward.
Public services are not pure markets in the way that mobile telephones or consumer durables are. In pure markets, the consumer can easily move from one brand to another and indeed exit altogether. If some consumers opt out in such a market, that is not a problem. Also, the producer can vary the price in response to market signals. In a public service, there is public as well as private benefit. I benefit not just from ensuring my child is vaccinated, but from you and everyone else doing so too. I gain from my child being educated – and yours. Indeed, schooling is compulsory almost everywhere for this reason. Moreover, the government fixes the price – or more precisely, makes the service free at the point of use and pays the price. So, however much choice and competition are introduced, it is a quasi-market rather than a pure one.

This has practical consequences which political leaders sometimes find it hard to deal with. If the quasi-market results in some schools benefiting at the expense of others, how long can a failing school be allowed to wither away, offering poor education to fewer and fewer children? Surely not long. The government needs to intervene. If a quasi-market points to a hospital closure being required, will the politicians involved have the courage to see it through in the teeth of a likely vigorous, local, public campaign? Or in the case of a university under similar pressure? The evidence suggests this is politically very difficult. At this point the choice and competition policy may need a dose of hierarchy and targets to make it work. Nevertheless, the policy can also be attractive to politicians both because some ideologically prefer markets and because, if the policy works, it frees the government from day-to-day intervention as the system becomes sustainable. As Bevan and Wilson put it, 'Quasi-markets have high transaction costs, but are popular with governments because pressure on poor performance comes from an invisible hand...' Blair saw choice as a key ingredient in creating 'self-improving systems' – different words for the same basic point.

There is much academic debate about whether choice and competition work in the sense of driving up performance. The evidence so far is unclear, though cumulatively it is beginning to suggest that under the right conditions choice does work. Certainly low-cost private schools in the developing world are contributing to improved performance. And if choice is seen as something valuable in itself, the argument is stronger still.

The challenge when market thinking is applied to public services is to ensure not only that performance improves, but that equity is at least protected and perhaps even enhanced. Pure markets aim at efficiency rather than equity. Therefore, when market thinking is applied to public goals, the policy approach needs actively to promote equity. If, for instance, the money follows the student in a school system, then students from low-income families could receive more, as they do in England, but often not in the USA. In a healthcare system, advice could be targeted at those least able to seize the opportunity of choice. To quote a specific case, the vouchers in the Punjab school system – which can be redeemed at registered private schools – are only for poor families whose children are not in school. Here choice and equity are combined as a goal of policy; the sketch below shows how simple the underlying funding arithmetic can be.
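This is a minimal sketch of the weighted 'money follows the student' idea. The base amount, the size of the low-income uplift and the pupil numbers are all invented for illustration – real funding formulas are far more elaborate – but the mechanism is just a per-pupil weight:

```python
# Illustrative only: invented amounts and pupil numbers, not any real formula.
BASE_FUNDING = 5000      # per-pupil allocation in notional currency units
LOW_INCOME_UPLIFT = 0.4  # extra weight a low-income pupil carries

def pupil_funding(low_income: bool) -> float:
    """The amount a single pupil brings to whichever school enrols them."""
    weight = (1 + LOW_INCOME_UPLIFT) if low_income else 1.0
    return BASE_FUNDING * weight

# Two schools of equal size (300 pupils) but different intakes.
school_a = 300 * pupil_funding(low_income=False)
school_b = 180 * pupil_funding(low_income=True) + 120 * pupil_funding(low_income=False)

print(f"School A (no low-income pupils): {school_a:,.0f}")
print(f"School B (60% low-income pupils): {school_b:,.0f}")
```

Because the weighting travels with the pupil, a school that enrols more disadvantaged children automatically attracts more money: equity is built into the market mechanism rather than bolted on afterwards.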
By contrast, the original voucher scheme in Chile in the 1990s was universal – everyone got the same voucher however wealthy or poor. The voucher for the rich became a subsidy of the private school fees they already paid; for the poor it enabled a rudimentary education in a public school. Equity gaps, already wide, widened further.

The other challenge for government with choice and competition is in ensuring that genuinely diverse options are available. Even if the customer has choice, and the money follows their decision, the choice is more apparent than real if all the options are broadly similar in style and substance. Moreover, in some locations, particularly rural ones, providing a range of options might not make sense, so diversity in practice is a largely urban phenomenon. In relation to this challenge, much of the best thinking has been done among charter school advocates in the US, especially in its large, urban districts. In New Orleans now, the vast majority of children are in charter schools. A not-for-profit organization, New Schools for New Orleans, attracted charter providers to the city and helped them navigate the politics and regulation. This brought a diverse range of providers to the city. Meanwhile, the public authorities, through the Recovery School District, created a policy framework that offered parents choice and rewarded school providers who delivered outcomes.

Recently, the idea of the policy 'nudge' has become widely advocated, sometimes as an alternative to strategy and a systematic approach to delivery. In fact, it is entirely consistent with both, as David Halpern, Britain's leading nudge 'guru', affirms. It is best seen as a major contribution to thinking about how quasi-markets can work successfully in public systems. To summarize a somewhat academic or theoretical argument, the authors of _Nudge: Improving Decisions About Health, Wealth and Happiness_ argue for what they call 'libertarian paternalism': 'libertarian' because they advocate choice in the public as well as private sectors; 'paternalism' because they believe that real human beings as opposed to 'econs' – the theoretical human beings beloved of economists – don't necessarily maximize either their own interests or those of their society. In these circumstances, they argue, it is legitimate for the public authorities to incentivize behaviours that are more socially desirable. They suggest, therefore, that in some circumstances, officials overseeing public provision might see themselves as choice architects – using a series of nudges to achieve better social outcomes. One way to do this is to turn the 'status quo bias' – the preference most of us have against change – to advantage. This is, in fact, exactly what Julian Le Grand, quoted at the start of this section, advocates: the creation of a choice and competition system so that the key players in it are incentivized to do the right thing whether they are altruistic or not. Thus, while uniform systems seek to impose better outcomes, including equity, quasi-market systems seek to incentivize them.

Drawing on language from the famous _1066 and All That_ by Sellar and Yeatman (Cavaliers are wrong but romantic, while Roundheads are right but repulsive), Figure 13 illustrates the dilemma. Quasi-markets, left to themselves, will end up in the top left box, while traditional uniform systems will end up in the bottom right. No one wants the bottom left, though some systems end up there by accident. The goal for most is the top right, and the strategy question is how to get there from wherever the starting point is.
Figure 13: Equality and Diversity

Seen in this light, 'the science of choice' which the authors of _Nudge_ describe and advocate is an increasingly important aspect of the science of delivery and crucial to getting this third approach to strategy right.

RULE 17 CHOICE IS BECOMING INCREASINGLY IMPORTANT IN PUBLIC SYSTEMS (it's a good in itself)

##### APPROACH 4. DEVOLUTION AND TRANSPARENCY

The fourth approach is particularly appropriate where choice doesn't work well: running prisons or immigration systems, for example. No one thinks prisoners or illegal immigrants should be offered choice. In services such as these, there is no obvious customer who can exercise choice because the customer is the government on behalf of citizens. The approach involves devolving power and responsibility to managers close to the frontline and then, through transparent publication of data on outcomes, holding them to account. For this reason, what I describe as Devolution and Transparency, Bevan and Wilson call Transparent Public Ranking.

The classic pioneer of Devolution and Transparency in the public sector was the New York City Police Department under Commissioner Bill Bratton in the early 1990s. Policing is another service where individual choice doesn't apply, but Bratton wanted to create a similar pressure for improved performance on a police department that was notoriously inefficient, in a city whose reputation at the time was at stake because its crime rate was so high. His answer was to create Compstat, which was both a data system and a process. Weekly data on all major crime types was collected at precinct level and published regularly. Power and responsibility were devolved to precinct captains. The citizens of New York City (and its vigorous media) could see how crime varied across the city – almost in real time. Meanwhile, Bratton organized regular sessions where precinct captains would be gathered together to watch one of their number being quizzed about performance. The precinct captain on show would find data about the precinct put on the screen in front of peers and then be asked to explain any successes or failures. The central team would prepare well for such meetings and could be quite aggressive, New York-style, in challenging an account. If, say, a captain claimed to be on top of litter, the central team might put up on the screen a photograph taken the previous day of litter in his precinct. The watching precinct captains would be invited to join in making suggestions as to how to improve this precinct's performance.

Note how, through the Compstat process, the data and devolution enable not just vigorous challenge but, crucially, learning about best practices or innovative ideas. In its first year or so of operation, the challenge element was at the forefront, and a significant proportion of precinct captains who couldn't stand the heat moved on or found themselves moved on. But over time it was the learning element that mattered most: the process enabled ideas to spread quickly across the city, which became a hotbed of trial, error and innovation. The pressure of Compstat provided the incentive to precinct captains to adopt ideas that worked. In other words, the process replicated one of the best features of markets, which is the incentive to innovate and adopt successful practices. The sketch below gives a toy version of the kind of weekly, precinct-level view Compstat put in front of captains and their peers.
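This is a minimal sketch, not the real system: the precinct names and counts are invented, and Compstat itself tracked many crime types in much richer detail. The point is simply that a week-on-week comparison, sorted so the deteriorating precincts rise to the top, is enough to frame the questions asked in the room:

```python
# Toy Compstat-style weekly view: invented precincts and invented counts.
last_week = {"Precinct 1": 42, "Precinct 2": 17, "Precinct 3": 29}
this_week = {"Precinct 1": 35, "Precinct 2": 26, "Precinct 3": 28}

# Sort so the precincts where crime rose the most appear first.
by_deterioration = sorted(this_week,
                          key=lambda p: this_week[p] - last_week[p],
                          reverse=True)

print(f"{'Precinct':<12}{'Last wk':>9}{'This wk':>9}{'Change':>9}")
for precinct in by_deterioration:
    change = this_week[precinct] - last_week[precinct]
    flag = "  <- explain" if change > 0 else ""
    print(f"{precinct:<12}{last_week[precinct]:>9}{this_week[precinct]:>9}"
          f"{change:>+9}{flag}")
```

On these invented numbers, Precinct 2 tops the list with a rise of nine offences and would be the one asked to explain itself, while the improving precincts would be mined for practices worth spreading.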
Crime fell steadily in New York City from Bratton's time onwards. It may be true that some social trends helped, but it is no doubt true also that the increased effectiveness of the police department was a critical turning point. Through his changes, Bratton demolished the deeply held belief that crime was a remorseless tide that could not be reversed and convinced his officers – not least through smart new uniforms – that they could get a grip. Others took notice.

Annapolis is a small, pretty town on the shores of Chesapeake Bay. The water laps and frets around the yachts in the harbour. Evening diners look out over the bay. A little way up the hill is the seat of Maryland's government. In strict accordance with America's belief in the separation of powers, the legislature and the governor's mansion are separated too, by tree-lined streets. Some of America's state capitals turned out to be a disappointment – Dover, Delaware, hardly sets pulses racing – but Annapolis is special, with the unity of its classical architecture and views across the ancient bay.

Governor Martin O'Malley knows that not all of Maryland is as pretty as Annapolis. There are the sprawling Washington DC suburbs of Montgomery County on one side; and the mean streets and boarded-up houses – think _The Wire_ – of Baltimore on the other. O'Malley had been mayor of Baltimore prior to his election as governor of the state. Learning from New York City, he had introduced Citistat, a successful attempt to drive Bratton-type reform through city government. _The Wire_ had done him no favours, with its portrayal of a shady mayor playing both sides of the street, but O'Malley's own track record was impressive. Baltimore, a city with multiple challenges, made progress under his stewardship, and in 2006 his success there catapulted him into the governorship and the mansion in Annapolis. He had done his 10,000 hours of deliberate practice and was well prepared to be a successful governor. 'What's the difference between a goal and a dream?' he quips. 'A deadline.'

Once installed in Annapolis, he combined his Citistat experience with a small delivery unit modelled on our experience in Britain. He says the 'relentless discipline of delivery' is the key ingredient of 'a new way of leadership'. Now, at the end of his second term, the results are plain. The key economic and public service indicators in Maryland suggest that it is in much better shape than it was. It is one of America's highest-performing and best-governed states. The recession has been tough everywhere, but Maryland has proved more resilient than most. O'Malley argues that you can embrace the new technologies to ensure openness, transparency and performance management. In fact, for the 'show me' generation, he believes there is no alternative. Citizens demand, in his words, 'timely, accurate information shared by all'. And he emphasizes 'all means all'. For example, anyone can now see online what progress is being made on cleaning up Chesapeake Bay. Devolution and Transparency, it seems, worked well on its shores.

Ten time zones away in the Queen of Cities, Lahore, it worked well too. Ask the Secretary – Schools (from 2010 to 2013), Aslam Kamboh, what single factor drove up performance in Punjab in his three years in charge, and he will answer with a single word: 'rankings'. Each month, the data on key indicators – such as teacher presence or student attendance – was collected from every school. It was analysed in Lahore, enabling Secretary Kamboh to rank order his thirty-six Education District Officers (a miniature sketch of how such a progress-adjusted ranking can work follows below).
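A minimal sketch with invented district names and percentages – the real Roadmap rankings drew on many indicators and a more sophisticated methodology. The idea is to credit improvement relative to the headroom a district started with, so that a district climbing from a low base can outrank one coasting near the top:

```python
# Illustrative only: invented districts and percentages for one indicator.
districts = {
    "District A": (55.0, 70.0),  # (baseline %, latest %)
    "District B": (85.0, 88.0),
    "District C": (40.0, 55.0),
}

def progress_score(baseline: float, latest: float) -> float:
    """Improvement as a share of the headroom available at baseline."""
    headroom = 100.0 - baseline
    return (latest - baseline) / headroom if headroom > 0 else 0.0

ranking = sorted(districts.items(),
                 key=lambda item: progress_score(*item[1]),
                 reverse=True)

for rank, (name, (baseline, latest)) in enumerate(ranking, start=1):
    score = progress_score(baseline, latest)
    print(f"{rank}. {name}: {baseline:.0f}% -> {latest:.0f}% (score {score:.2f})")
```

On these invented figures, District C, rising from 40 to 55 per cent, outranks District B, which added three points from an already high base – exactly the property a ranking needs if it is to be fair to those who start furthest behind.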
The top performers, on a ranking that takes into account progress and the varied starting points, are rewarded both financially and symbolically (a cup of tea with the chief minister being highly prized), while questions are asked of the bottom performers – and sometimes they are moved on. Kamboh's successor has maintained the tradition. It's still working.

The research evidence suggests that Transparent Public Ranking works, not because consumers make different choices on the basis of the data (often they don't) and not because committed professionals choose to learn from their peers (they are more likely to critique the data), but because, crucially, rankings put reputations at stake. The precinct captain in New York City has a reputation at stake when he or she appears in front of peers; when data on performance is made public in Maryland, the managers concerned have a reputation at stake; and in Punjab, where face and reputation for individuals and families are so culturally powerful, the local officials know their reputation is at stake too.

J. H. Hibbard, the American healthcare researcher, has shown, as Bevan and Hamblin put it, that comparative data is more likely to be used if presented in a ranking system that makes it easy to identify the high and low performers. She argues that four characteristics are needed to ensure that Transparent Public Ranking will be effective:

* A ranking system
* Data published and widely disseminated
* Data easy for the public to understand
* Future reports which show changes in performance.

Another US-based healthcare study confirmed Hibbard's view: 'the impetus to use the data to improve has been limited almost entirely to hospitals that have been named as outliers with poor performance'. More recent data from the UK supports this case. In England, the star ratings of hospitals met Hibbard's criteria and drove improved performance. In Wales and Scotland, where Trust and Altruism ruled (at least until recently), there was little or no progress. In fact, here poor performers attracted extra cash. As the Auditor General for Wales put it in 2005, 'the current waiting time performance management regime effectively "rewarded failure" to deliver the waiting times targets'. In Scotland, two 'failing' boards were bailed out. In other words, these systems incentivized doing the wrong thing. A further study, this time looking at Australia, Canada, England, New Zealand and Wales over the period 2000–2005 (exactly the time the PMDU began to have an effect in England, incidentally), found that:

> Of the five countries, England has achieved the most sustained improvement, linked to major funding boosts, ambitious waiting-time targets and a rigorous performance management system.

To put it bluntly, 'name and shame' works, especially in moving Awful performance towards Adequate or Good. It's not clear that once performance reaches Good the incentive effect continues, but it will certainly put pressure on any unit in the system that falls behind.

RULE 18 TRANSPARENT PUBLIC RANKING WORKS (don't flinch)

For politicians, then, the evidence is clear. The issue is whether they have the political will to take on the interest groups, especially public sector unions, who will undoubtedly resist such transparency. Contracting out services or commissioning them has been a major theme in public policy in the past couple of decades.
Often the involvement of the business sector in the provision of government services provokes controversy: Are you for or against their involvement? Shouldn't public services be fully publicly provided? This ideological debate should be over by now. The problems with purely public provision are twofold. First, the public sector doesn't have – and cannot possibly have – all the skills and capacity necessary to do everything that might be publicly provided – whether it's building an aircraft carrier, manufacturing a drug or publishing a textbook. Second, where public provision has been given a monopoly, it has sometimes become (as monopolies tend to) inefficient, occasionally grossly so.

At their worst, public sector unions have such great sway over political decision-making that they not only represent their members, they also hugely influence the political authorities with whom they negotiate. No wonder in these circumstances that the public service is run in the interests of the workforce rather than the consumer. I used to collect examples of what this looked like at its worst: Bill Bratton found when he arrived at the NYPD that the drug squad worked 9–5, Monday to Friday! There are mental health services that close over Christmas and New Year, even though this is known to be the time when demand for them peaks; orthopaedic surgeons take Friday afternoon off to play golf; long lunch hours at courts suit judges, but not justice.

A seminal moment for me came when I was a councillor in the London Borough of Hackney in the late 1980s. The Direct Labour Organisation (DLO), which did repairs for the large swathes of public housing in the borough, wanted a pay rise. Negotiations with council officials had stalled. The union representing the DLO decided to use its muscle inside the Labour Party to get its way. Since Labour had a huge majority in the council, if the Labour group of councillors supported the pay rise, the council officials would be overruled and the DLO would get its way. In spite of the fact that every Labour councillor knew that the service provided by the DLO was lamentable – they knew because their constituents never ceased to tell them so – in a meeting where the galleries were packed with aggressive DLO workers, the Labour group chose to support 'the workers' (i.e. the union) against 'the management' (i.e. the council officials). Along with two or three other Labour councillors out of fifty or so, I voted against the pay rise and was jostled and spat at on the way out for my pains.

This story has unfortunately been played out again and again in different forms – sometimes without the spitting – around the world, and it's the reason why public authorities began to contract out services in the first place. Our leader back then, Andrew Puddephatt, whom I came to admire hugely, finally called time on this kind of complicity, branding it 'theft from working people'. At last we were standing up for the consumers of the services rather than the providers.

Devolution and Transparency provides a good basis for contracting out, not for ideological reasons, but for pragmatic ones, because it makes clear which parts of a service are working well and which less so. In Britain the Cameron government is enabling private managers to take over poorly performing hospitals. In Massachusetts, the commissioner, Mitchell Chester, has intervened to take over the Lawrence School District where performance was shocking.
In the latter case, while the takeover is by the state rather than by a private provider, the new leadership of the district, with significant success, has brought in a range of new providers to solve the district's serious problems. In short, while the Choice and Competition model of reforms depends on the upward pressure of individual consumers making choices, the Devolution and Transparency model enables downward pressure – competition imposed from above – and also lateral pressure – peers learning vigorously from each other.

Given the numerous stories of failed contracting out, most infamously with large IT projects, it is worth pointing out that the evidence is not conclusive one way or another between public and private. This is not surprising, given that both sectors are capable of screwing up massively. Everything depends on how it's done. Are the contracts clear? Do the public authorities and private contractors build credible working relationships? And so on. What often worried me in the Blair years was that departments let contracts without having the necessary skills either to design them well or to manage them after they were let. Many civil servants seemed to believe that once a contract was let, they didn't need to worry about it any more. Out of sight, out of mind proved disastrous time and again.

RULE 19 CONTRACTING OUT SERVICES BREAKS MONOPOLIES (but don't think it relieves you of management responsibilities)

There are some straightforward lessons that governments can apply to improve the effectiveness with which they contract out services and develop public–private partnerships. Alan Trager from Harvard has made himself an expert in this emerging field. He argues that governments need to set up the negotiation of a contract well. Drawing on the work of Lax and Sebenius (_3-D Negotiation_), he argues that the first consideration is the architecture of the negotiation – who should be involved, when, and what the scope of the conversation is; second, he says, governments need to be much clearer about the overall design of the deal they want to do – what value, what substance, what outcomes, over what time period? Finally, they need to think through the tactics they will apply at the negotiating table. All too often governments skip the first two steps and go straight to the third. Moreover, they are frequently outgunned in the negotiation by a company that brings more experienced (and better paid) negotiators and lawyers to the table while the government muddles through. Trager also says any government negotiating team should 'bring a designated listener' so they are in a better position to reflect between sessions. Crucially, too, the government needs to be able to walk away before a deal is struck, and live with the temporary embarrassment. Otherwise, its leverage in the negotiation is minimal. To illustrate, I recall with horror a time in government when I was finalizing a deal with a private contractor knowing that my political masters would not want to live without a deal and that this particular contractor was the only one left, the others having decided it was all too risky. We got a deal, but the impact over subsequent years fell well short of our aspirations.

Once a deal is done, contracts need to be managed and relationships built. They never take care of themselves. This requires senior officials with specialized skills to be involved.
This is because many of the problems that contractors will work on are not 'simple and technical', but 'complex and adaptive' (to use the language of Ron Heifetz and Marty Linsky, also of Harvard). The practical implication of this is that however well set up the initial contract is, the world will change in unexpected ways and not every eventuality can be predicted at the outset. This is when the quality of the relationship becomes decisive.

Dame Tessa Jowell, the politician who oversaw the London Olympics from concept to delivery, was a master of building these relationships. The 2012 Games were a huge triumph as everyone knows, but even here there were setbacks. The most visible came not long before the Games, when the company G4S admitted it had not been able to recruit sufficient security staff for all the venues. The military stepped in (and loved it), and all went well. When I asked Tessa to reflect on this experience, she said she wished now that they had embedded a senior civil servant inside G4S from the start so that communication would have been constant, and nasty surprises, such as the one the country got, would have been avoided. Relationships again. Marty Linsky, in the same conversation, put it more grandly: 'The work of transformational leadership is about human dynamics.' Incidentally, these points apply whether the contractor is a for-profit private or a social sector organization.

The other major consideration is whether there are any ideological issues to take into account: are there some aspects of a service which should be purely public, and any that should be totally private? In part, these are matters of political disposition. Julio Frenk was Minister of Health from 2000 to 2006 in the administration of Mexican President Vicente Fox. His crowning achievement was to bring all Mexicans within a comprehensive health insurance programme, before which millions each year were bankrupted trying to fund healthcare for themselves or a family member. He has thought carefully about where the private sector should and should not play a part in healthcare systems. He argues that the government on behalf of the public should set 'the rules of the game' and provide the funding; no role for the private sector in either of these spheres. Provision or delivery might be either public or private, or a combination of the two. Pharmaceuticals should be private, though of course within a regulatory framework set by government. As he himself says, there are political choices to be made, but what he outlines seems pragmatic, sensible and potentially generalizable.

One final point on contracting out: how much of a service do you need to contract out before the performance of the whole service changes for the better? After all, part of the point of contracting out is not just to improve performance where the service is contracted out, but to incentivize improvement across the board. If there is convincing research on this question, I have yet to see it, but experience suggests the answer is in the range of 10–15 per cent – once private prisons provide 10–15 per cent of the total prison service, the competitive threat will be sufficient to ensure that the other 85–90 per cent improve. Once 10 per cent of knee and hip operations are privately provided, orthopaedic surgeons across the country will begin to wonder about taking Friday afternoon off. Once ten or twelve local authorities out of 150 with poor education services are on the receiving end of intervention, the rest will begin to improve.
##### APPROACH 5. PRIVATIZATION (AND VOUCHERS)

As a young Labour Party activist in the 1980s, I must admit I was no fan of Nigel Lawson. The slightly arrogant gaze, the over-ebullient ties, the shock of black hair swept back and the uncompromising case he made for Thatcherism all combined to put me off. Even then, though, I could see he was an ideologically influential figure as well as a powerful Chancellor. Now, all these years later and having read his impressive autobiography, I am compelled to believe, in spite of my reaction back then, that he had a powerful case at the time which on the whole history has endorsed: the case for privatization.

The previous four approaches are different ways to reform a public service while keeping it public. Privatization is a different solution altogether – it simply says: Why waste time on reform? Let's sell it off and let the market deal with it. As one of Sinclair Lewis's characters observes in _Main Street_, 'And you want to reform people like that when dynamite is so cheap?' Of course, privatization is not appropriate for every service – no government has (yet) chosen to sell off an entire school system or even large parts of it, for very good reasons – but in Britain in the late 1970s, huge swathes of the economy were under public ownership – gas, water, electricity, the post office, telephones, much of North Sea oil, the railways... on and on. In 1982, the nationalized industries accounted for over 10 per cent of total national output and employed close to 2 million people.

Moreover, everyone in Britain knew that the services were often poor. In 1980, I moved into my own flat for the first time and called (from a phone box) to order a telephone. The only question I was asked was 'What colour do you want?' To be truthful, I hadn't thought about this before I rang, and when I did think about it I didn't really care. Black would be fine. What I did care about, though, was how long it would take before someone could come round and install the phone. That would take a while, I was told. More worryingly still, the person I spoke to told me they were running out of numbers. Running out of numbers? This was not something I had worried about before either. I began to imagine a lucrative career smuggling illegal Colombian sixes into the country...

Thanks to Nigel Lawson and his colleagues (and some serious technological change), this seems like ancient history now – but in parts of the modern world, the privatization revolution has still not occurred and much of the economy remains in public ownership – state-owned enterprises in the modern jargon. Often they are heavily subsidized. Some countries have massively improved the performance of these companies by setting up an efficiently run state holding company which applies modern management disciplines. Khazanah, the excellent state holding company in Malaysia, is a very good example. Sometimes, once a company has been improved, it can be sold off; other times it will be kept in public ownership. Either way it will be properly managed and generate revenue for the taxpayer. In some other countries, there remains a huge, moribund state sector riddled with inefficiency and sometimes mired in corruption. The Sharif government in Pakistan, elected in May 2013, inherited just such a sector.
In a country which has big debts, difficulty collecting tax and an expensive military, public money is extremely scarce and it makes no sense to be pouring it into subsidies for an inefficient and corrupt state-owned energy sector or for Pakistan International Airlines. Effective privatization is, therefore, very much an issue of the present globally, which takes us back to Nigel Lawson.

In a series of ministerial positions in the 1980s, Lawson was the ideological core of the privatization agenda. When he thought there was any risk of his colleagues backsliding or becoming faint of heart, he would fire off a memo to them. As a successful former journalist, he wrote with clarity and punch. Most of Britain could see what he saw: the nationalized industries had not delivered what they had promised. They had been intended to improve industrial relations but by the 1970s and 80s were often strike-ridden. They were meant to contribute to full employment, but in practice had resulted in state-subsidized overstaffing. They were supposed to improve productivity, but on that measure Britain was falling behind other countries. With their failure to perform, the case for them had unravelled.

The boards that oversaw them, and their chairmen, did not leap from this analysis to privatization. Instead, they proposed another round of changes in governance and management. Lawson took on this (as he saw it) woolly thinking in no uncertain terms. 'We have created industrial baronies,' he exclaimed, 'not truly accountable to anyone – Parliament, Government, shareholders or the marketplace.' His prescription, set out most clearly in a note to his colleague Nicholas Ridley on the subject of the water industry, was full-scale privatization, even for so-called natural monopolies, of which water was a case in point. This is a summary of his argument:

1. Businesses are more efficient in the private sector than the public.
2. Water and sewage is a business like any other.
3. A quarter of the [water] industry is already in the private sector.
4. Of course it will need regulation to protect the consumer and the environment.
5. Even though a natural monopoly, once privatized the industry will have to compete for capital in the private sector and face a published daily share price – a comment on performance and a spur to management.

Egged on by Lawson, the Conservative governments of 1979–97 saw through a huge programme of privatization. They learned lessons as they went. There were three basic issues that needed to be resolved in the process. First, what price should it be sold for? Too high, and no one will come forward; too low, and the taxpayer has been ripped off. Second, how should the industry be broken up – by region as with water, by function (track and train operators separated in the case of rail), or should it not be broken up at all (as with telecoms initially)? And third, what form of regulator was required and what regulatory framework should it set? They did not get all of it right by any means, but the overall effect, accepted broadly now, was of much improved service. Tellingly, not a single one of the industries privatized in that era has been brought back into public ownership, nor has any government seriously considered doing so.

By luck more than judgement, the British became world leaders in privatization just before the Berlin Wall came down and Communism across central and eastern Europe, followed by the former Soviet Union, imploded.
Privatization marched boldly across the former Communist bloc, learning more lessons as it went. In Russia it became obvious that if you privatize in a hurry before you have some basic functions of a market economy – accountancy standards or robust banking – you risk replacing monopoly with kleptocracy, which is what happened there. Around 1,500 people ended up owning half of Russia.

This fear that the well-connected few will benefit from a fire sale of public assets is not without foundation in some parts of the world. The government in Pakistan is right to consider privatizing some of its state-owned assets as a way to improve performance and cut costs, but it is not surprising, given the country's history, that some are fearful. In September 2013 a newsletter from _Rise for Pakistan_, an e-journal of passionate Pakistani youth, argued:

> Say No To Privatisation... Privatisation is promoted and supported with the claims of reducing economic burden on the government and improving the efficiency of the institutions. But past history of privatisation process not only in Pakistan but all over the developing world, shows that rather than accomplishing these goals, it ends up lining the pockets of top bureaucrats, politicians and other top officials involved in this process.

It is impossible to deny the risk to which it draws attention. But this is not an argument against privatization in principle, merely an argument for undertaking it in an open, transparent and measured way. Where corruption is rife and the state or the other basic institutions of a market economy are weak, this is easier said than done. However, in these circumstances, clinging on to the state-owned assets is likely to result only in a drain of cash and continuing execrable performance. The key is to get the details of the regulatory framework right and, along with it, ensure transparent governance of the businesses that emerge. Government can learn from regulatory regimes that failed as well as from those that succeeded. And the regulatory regime needs to move on to keep up with changes in the economy and technology. As Joss Garman argues in relation to Europe's energy requirements, there may be long-term economic benefits in regulations that are more stringent in requiring 'cleaner, more efficient and home-grown' energy supply. This is because both geopolitics and technology are continually moving on.

As Poland, the Czech Republic and the Baltic states, as well as Malaysia, show, well-designed privatization is a serious policy option to consider as states and societies modernize themselves for the twenty-first century. Critically, it reduces the size of the state, helps raise revenue and enables the state to focus on the things that matter most to citizens – increasing security, reducing crime, improving health and education and providing a climate for investment and economic growth.

RULE 20 WELL-DESIGNED PRIVATIZATION CAN IMPROVE EFFICIENCY (it can also lead to smaller, more effective government)

Vouchers have always been the subject of controversy. The basic idea is that where a state wants to make a service available, instead of funding it directly, it gives each consumer/citizen their share of the money and enables them to buy the service themselves. It is a simple idea predicated on the view that markets are more likely to provide quality than monopoly public services.
In this sense, vouchers are merely the most radical version of Choice and Competition, described above, but because they are the most extreme version, I have included them here. Ever since Milton Friedman advocated vouchers for school education, as long ago as 1962, they have been debated, and indeed the experiments have largely centred on their applicability in school systems, where the parent receives the voucher and spends it on behalf of the child. Before plunging briefly into the debate, it should be pointed out that, at least in theory, vouchers could work for numerous other services, including aspects of social care and healthcare or further and higher education.

While the basic principle, in part due to its simplicity, has real appeal, in practice there are a number of problems to overcome. One is that if you introduce a universal voucher scheme you will be giving vouchers, paid for by the public purse, to some parents who have already opted out of the state school system and into the private one. This is a deadweight cost – a state subsidy to parents who evidently don't need it. A second problem is that the administration of vouchers is quite challenging in practice. In the voucher scheme for poor families in Punjab whose children are out of school, there are three substantial administrative tasks – finding the deserving families, which in a country with poor records and no census since 1998 is by no means straightforward; registering the private schools eligible to receive the vouchers and ensuring they conform to certain minimum standards; and finally making sure that the voucher is as corruption-proof as it can be and that the intended children actually become the beneficiaries. None of this is simple, but we are persevering successfully because of the potential benefits in access, quality and cost efficiency. As of the summer of 2014, over 200,000 children from poor families are getting an education they would otherwise not have had.

In the end, as Gabriel Sahlgren points out in his rigorous analysis, the issue is not an ideological one – there is too much polemic on both sides of this subject – it is a matter of getting the design right. 'The conclusion,' he says, 'is that school choice and competition have the potential significantly to increase school quality, but that design matters.' He quotes Terry Moe, long-standing US advocate of school choice: 'choice always operates within a structure... which in turn shapes the kinds of outcomes that choice will ultimately generate... Different structures, different outcomes.'

So in Punjab, which has the fastest-growing voucher scheme in the world, we have focused it on poor families whose children are out of school. We have helped strengthen the administrative capacity of the Punjab Education Foundation, which oversees the scheme. We have kept the quality assurance process simple and objective – any private school that accepts voucher children must allow all the children to be tested annually (by an independent organization) and show that the vast majority of its pupils are making progress. And periodically we audit the process to identify and tackle any abuses. So far the evidence is positive, not just for the children who get vouchers, but for the system, because the competition provides a wake-up call to the government sector.
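Reduced to their essentials, those three administrative tasks are a set of gates that must all pass before public money moves. The sketch below is purely illustrative: the field names, the income ceiling and the verification flag are assumptions made for the example, not the Punjab Education Foundation's actual rules.

```python
# Illustrative eligibility gates for a voucher scheme. All names, fields
# and thresholds are hypothetical stand-ins for whatever verification a
# real scheme would use.

from dataclasses import dataclass

@dataclass
class Family:
    household_id: str
    monthly_income: int        # verified against whatever records exist
    child_out_of_school: bool

@dataclass
class School:
    school_id: str
    on_register: bool              # task 2: registered with the scheme
    meets_minimum_standards: bool  # task 2: inspected against minimum standards

def voucher_redeemable(family: Family, school: School,
                       child_verified_enrolled: bool,
                       income_ceiling: int = 12000) -> bool:
    # Task 1: is this a deserving family with a child out of school?
    eligible_family = (family.monthly_income <= income_ceiling
                       and family.child_out_of_school)
    # Task 2: is the school registered and up to standard?
    eligible_school = school.on_register and school.meets_minimum_standards
    # Task 3: corruption-proofing. Pay only once the intended child is
    # independently verified as enrolled.
    return eligible_family and eligible_school and child_verified_enrolled
```

In practice, of course, each of these booleans hides real fieldwork: household surveys, school inspections and the periodic audits mentioned above.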
RULE 21 A WELL-DESIGNED VOUCHER SCHEME EMPOWERS THE BENEFICIARIES (and can promote equity)

In the long run, if experiments such as that in Punjab prove successful, it is possible to imagine governments in the developing world adopting vouchers as a universal strategy. After all, the evidence from the developing world suggests that millions of parents on low incomes have chosen low-cost private schools because the government schools are so poor; furthermore, the low-cost private sector achieves better outcomes at much lower cost. In these circumstances, there has to be a temptation for a government to switch much, perhaps eventually all, of its funding from the moribund and wasteful to the effective and economical. This would be consistent with the Julio Frenk view of the world: government sets the rules and pays, public and private compete to provide.

Table 7: Summary of the Five Paradigms

In the meantime, the electronic payments systems that are proving so helpful to economies in the developing world – more than one third of payments in Kenya are now made on mobile phones – could help resolve at least some of the administrative challenges. Indeed, there are already experiments with conditional cash transfers as a means of combating poverty.

##### STEWARDSHIP: THE ROLE OF THE CENTRE OF A SERVICE

Whichever of the five approaches – or combination of them – is chosen, it is easy to miss a startlingly obvious point: someone at the centre of the system has to oversee it in its entirety and secure its long-term interests. This is stewardship, a fundamental responsibility of government. As the fan diagram above (p. 64) makes clear, there are three aspects of stewardship. Sometimes governments delegate part or all of these functions to a regulator – usually an agency of government – but even then the (ultimate) responsibility lies firmly with government, as we discovered in the financial crisis. The authors of _The Gardens of Democracy_ use a gardening metaphor: 'Tending and regulating... signify the same work but tending frames the work as presumptively necessary and beneficial rather than as something to be suffered.' This tending is an essential function of government, whether your preference is for small or large government.

First, strategy: of course, you can choose not to have a strategy at all and simply muddle along. This may even be popular, especially with the public sector workforce – but it remains a choice nevertheless, and what it amounts to is Trust and Altruism by default... or to put it more crisply, drift. Generally speaking, though, a well-run sector needs government or the regulator to survey the future – possible technological change, shifts in global patterns of provision, likely demand and so on. As Francis Fukuyama argues, legitimacy comes in part from delivering results, but in part also from being able to anticipate and respond to changing citizen demands. Strategy remains a responsibility even after privatization, where the nature of regulation, for instance, or the extent of any subsidy – for a transport system, for example – are clearly government functions. Singapore has a government that excels at this aspect of stewardship.

The second aspect, as the fan diagram also makes clear, is to put in place the means of monitoring performance across a system. In the case of a privatized industry, such as energy in many countries, this role generally falls to a regulator; in a state-run system such as schooling in Punjab or Louisiana, the role falls to government.
Sometimes governments choose not to measure outcomes in any effective way, perhaps in response to professions fearful of the impact of comparative data on crime, health outcomes or school performance. But where government abdicates this responsibility, the effect is to shift the debate to a focus on inputs. Alternatively, the media steps in where governments fear to tread – this often results in comparative data of less value and less quality, but no less impact. Better, then, for government to take responsibility. This is why when Dalton McGuinty, then premier of Ontario, announced that the government would publish school test scores rather than leaving the task to the _Toronto Globe and Mail_, he gained the unlikely distinction of a standing ovation from teachers.

Given the rise of 'Big Data' – the capacity to collect and analyse massive quantities of data in close to real time – which is currently transforming much of business, it is unthinkable that government-funded or regulated services could stand against the tide, even if there was a good case for doing so (which there isn't). The issues are therefore what data to collect, which indicators to value, and how to present the data in a way which has integrity and vividness. Moreover, it is not just a question of performance data, but also of setting regulation and checking for compliance. If a water company is not meeting environmental standards, the public needs to know. If pricing policy is being abused to enable unwarranted profits, the public wants something done.

The third and final essential task of the centre of government is to ensure that the necessary human capital is in place, properly regulated, with the required skills and motivation. It takes seven years or more to train a doctor, and almost as long to train an engineer or an architect, so the supply of such scarce skills needs planning ahead. And because trust in professionals is such a key aspect of their capacity to play their part, even in a largely private and self-regulating profession such as engineering, there is a role for government in creating the circumstances that allow such self-regulation and in ensuring dialogue with the profession's leaders about matters such as access to and content of engineering degrees and training. Where the profession is also part of a public service, such as teaching, the government responsibility is all the greater. In a market, firms may be able to recruit and train the staff they need, but even the larger ones cannot ensure the national supply of the necessary qualified people. Government has a clear responsibility here, and in a public service much more so.

Having been in the Department for Education in 1999 during a period of acute teacher shortage – with headteachers expressing constant anxiety and the media hyping up every school which claimed it had closed due to staff shortages – I know what failure in this respect feels like. We had increased school budgets, which schools evidently welcomed, but we failed to anticipate the obvious truth that the schools would spend the extra cash on recruiting more teachers. The result was that schools in more comfortable and less expensive locations recruited the staff they wanted, and those in challenging and expensive locations suffered badly – which is why the rougher parts of London screamed loudest.
RULE 22 GOVERNMENT SHOULD TAKE ITS STEWARDSHIP RESPONSIBILITY SERIOUSLY; THAT INCLUDES STRATEGY, REGULATION AND THE SUPPLY OF SKILLED PROFESSIONALS

The positive outcome of the crisis was the most thorough-going reform of teacher preparation, pay and incentives of the past generation, which had long-lasting beneficial consequences. But it has not erased the memory of the crisis, which I wouldn't wish on anyone responsible for any major public service. The moral is that ensuring a supply of the necessary professionals is an essential function of government. Even in the good times, never forget it!

Supply, though, is not the only issue. There is also the ongoing question of the relationship between governments and the professions who work in the public services. Often around the world this is tetchy, or downright dysfunctional. In the three years I have been working in the Punjab, there have been strikes by both teachers and doctors, as well as a range of other public servants. Acrimony between teachers and government is common across Africa, as well as in Britain, Canada, Australia and parts of the US. Doctors, nurses and other healthcare professionals too often find themselves at odds with government, as do police officers. Sometimes the issues at stake are pay, conditions, pensions or workload; sometimes they are about policy itself.

For reasons that become apparent at various points in this book, some tension is inevitable. Public servants will always want improved pay or conditions, while governments, unless they are irresponsible, will always face financial constraints. Public servants will always want a say on policy because it will be their job to implement it, and they are knowledgeable, while governments often have an electoral mandate and will want to deliver their promises. Public service professionals will want to advance slowly and incrementally (and stay in control), while (some) governments will be in a hurry.

Julio Frenk describes how a strategy decays through consultation. The vision is outlined; consultation with professionals and others blurs it; a proposal is made; its radical cutting edges are bevelled and smoothed and rounded through further consultation; the legislation is introduced and legislators are lobbied; the legislation becomes still less radical, still more incremental; and up to this point, implementation has not even begun. We'll see (in the chapter on irreversibility) how this process of decay can be fatal. At this point it is sufficient to note how easy it is in this process for the relationship between government and professionals to decay too.

Meanwhile, as the debate about a set of proposals evolves, the public watching are wondering what on earth is happening to their public services and what value they are likely to get for the money they deliver up through their taxes. Often they are thinking 'a plague on both your houses'. At the same time, perhaps, those who can afford it are opting out into the private sector. An incredible 70 per cent of the children in Delhi, Lahore and Accra have already done so – so this is not just the wealthy, it's the poor too. Of course, it's not always like this, but the scenario I've described is not uncommon, and it illustrates why it is an important function of government to think through how to build this relationship with the public service professionals. There is a tendency to believe that the way for government to buy peace in this relationship is to be soft or passive.
However, the evidence suggests, as we have seen, that the Trust and Altruism implied will not work; nor in all likelihood will it buy peace. Much better is confident, assertive leadership, both in government and among the professionals. The strategic implications are set out in Table 8, which is a summary of how these crucial relationships might evolve as the quality of a service moves up the Performance Wedge.

Table 8: The Relationship between Government and Professions

Bringing about this shift in performance requires, among other things, attention from both government and the professions themselves to the need for constant refinement and development of professional skills – easy for government to neglect, but fundamental to delivery of constantly improved outcomes and an essential aspect of stewardship.

These, then, are the five approaches to reforming or transforming large public systems and the three essential aspects of stewardship. The five vary in their effectiveness; all can work in certain circumstances, though some are less likely to work than others. Choosing between them will in part be a question of ideology and in part a question of the nature of the challenge or the goals that have been chosen. For Awful to Adequate tasks, Command and Control plus 'naming and shaming' are most likely to succeed. For Good to Great tasks – because greatness cannot be mandated – Devolution and Transparency or Choice and Competition are better placed to succeed; and indeed where there is a strong professional ethic, as with teachers in Finland, these are the circumstances in which Trust and Altruism might turn out to work after all.

And that leaves just one further point to make. In any given large service there is likely to be significant variation in performance, so the strategy as a whole might weave together elements from among the five approaches – Command and Control for the poor performers, Transparency for all and increased autonomy and room to expand for the top performers. Overall, then, getting the reform model right is a sophisticated challenge for any government or specific government service, but not an unthinkable one. The key is to make the time for informed discussion of the principles on which you want to base policy. In the whirl of activity and crises that is government everywhere, time for discussion of principles and strategy is remarkably difficult to find. It is worth it, though, because the alternative is coming up with one of those 'portfolios of initiatives' beloved of consultants, which, in spite of the good intentions, is without doubt the road to hell.

##### COMMUNITY ENGAGEMENT

On 14 August 1908, Mohandas K. Gandhi, not yet Mahatma, wrote to General Jan Smuts, Transvaal Colonial Secretary, to tell him that the Indian community in the province planned to burn their registration certificates. They would accept an education hurdle for the entry of Indian immigrants into Transvaal, but not a racial one. Two days later, the burning began, as the _Transvaal Leader_ described:

> Paraffin was poured in and the certificates set on fire, amid a scene of the wildest enthusiasm. The crowd hurrahed and shouted themselves hoarse; hats were thrown in the air and whistles blown.

How had Gandhi unlocked this extraordinary level of community engagement in a course of action which would soon result in not just Gandhi himself being locked up, but also 2,500 other members of the South African Indian community? First, there was moral strength in the case.
As Gandhi put it:

> Unenfranchised though we are... it is open to us to clothe ourselves with an undying franchise, and this consists in recognising our humanity... I say that no matter what legislation is passed over our heads, if that legislation is in conflict with our ideas of right and wrong, if it is in conflict with our conscience, if it is in conflict with our religion, then we can say that we will not submit to the legislation.

The identity and dignity of the Indian community were at stake and, powerless though they appeared to be, no one, not even General Smuts, would be allowed to take those away from them. Second, the way they made their case had great moral power. Some argued for simply lobbying, sending letters, arranging meetings; others argued for violent confrontation. Gandhi rejected both, and instead advocated what became known as Satyagraha.

> The movement in the Transvaal, with which I have identified myself, is an eloquent and standing protest in action against such methods. The test of passive resistance is self-suffering and not infliction of suffering on others.

General Smuts was at a loss:

> In more primitive times one would have met [this campaign] by simply issuing a declaration of war. But in these times it is impossible to do that and therefore the situation became a very difficult one for us to handle.

Those early Satyagraha campaigns in South Africa struck both their participants and the governments against whom they were aimed as an innovation in public protest. No wonder Smuts was unsure how to proceed – how do you face down a committed community whose protests harm only themselves, not anyone else?

Now imagine if that level of community engagement could be unlocked in favour of a public programme. Imagine that level of belief in support of a major government reform. That is why the fan diagram is set against the background of an ellipse entitled Community Engagement. Hard to do, of course, but suspend disbelief for a moment. The education of children, the health of a nation, the safety of urban communities: these are all causes in which people believe and about which they have strong views. There will sometimes be protests – against a hospital closure perhaps – but imagine if the energy of the community was unlocked and channelled into delivering the outcomes that people want. In the end, however much government invests in doctors, nurses and medical technology, only families and communities can make sure all children eat healthily and exercise regularly; only individuals can decide to stop smoking; only parents can see to it that homework is done, perhaps even enjoyed; and only citizens can ensure their streets are safe.

One problem with the post-war era and its welfare state was that people began to expect the state to provide; they became passive recipients of a service; they became dependent. If, instead, they were active participants – exercising choice, taking responsibility and demanding quality – the services would deliver much higher performance at no extra cost. The key here is motivation.

This is dangerous territory for a political leader – make the case set out in the previous paragraphs, and you risk being accused of softening up the service for cuts and abdicating responsibility. This is what happened to David Cameron when he proposed 'the Big Society' – a good idea shot down before it reached its prime. Similarly, extended periods of failure of government dampen people's expectations. Because they expect little, they cease to make demands.
Those on the inside of the ineffective government then blame the citizens for having low expectations. Sania Nishtar, who was briefly minister of health and education in Pakistan, wondered why there was no 'upward pressure' on government to deliver. She even stopped her ministerial car randomly to ask women walking by why they did not demand more. The answer was that they had given up. They expected little of government and it then lived up to their low expectations.

It is vital, therefore, that those responsible for government and public service – if they are to focus on delivering the best possible outcomes for the precious taxpayers' money invested in public services – find ways to unlock and harness public engagement. There is an extensive literature on this subject, well tackled in the think-tank world. Here are some of its topics:

* Transparency: provide information – about services, outcomes and threats (Australians now routinely guard themselves against the hot sun in a way they didn't a generation ago).
* Open debate: encourage dialogue between professionals and public, both at the point of delivery – the GP's surgery for example – and publicly.
* Empowered communities: enable community groups to take on services (and budgets) that in the past have been 'provided' by government (free schools in England, charter schools in the US).
* Social enterprise: make it easier for people to establish social enterprises and for them to compete when services are being contracted out.
* Social alchemy: celebrate social alchemists – those who assemble the disparate elements that change the game at local level (such as the Harlem Children's Zone).
* Competition: shift the burden of proof so that services are contracted out unless there is a good reason for them not to be.
* Learn from business: ask yourself why some major companies – Apple or Starbucks – generate such passionate commitment.

Some or all of these options will make a difference. As you consider the five paradigms and the three stewardship responsibilities, never forget the potential of public or community engagement to transform outcomes.

##### POLICYMAKING

Somewhere between strategy and implementation you find policy. Much of government goes straight to policy, forgetting strategy. And then, as we've seen, it underestimates implementation. Indeed, many civil servants claim their real expertise lies right here, with policy. But policy without strategy is rarely transformative; and policy without implementation is worthless. It is true that there are some things where implementation will take care of itself. For example, assuming there is general support for it, a ban on smoking in public places will take care of itself because citizens will enforce it. But the 'big' changes that transform outcomes for citizens are mostly much more challenging than this to implement, which is why we need a science of delivery.

At this point, though, it is worth emphasizing that policy matters, matters a lot, as long as it's in its rightful conceptual place between strategy and implementation and takes account of both. Reams have been written on the subject and don't need to be repeated. For our purposes, we just need to know what questions to ask ourselves if tasked with preparing the policy on anything. A good first question would be: How does it relate to the strategy (if there is one)?
After that, I have not found anything better than the document circulated in the Department for Education in England by its permanent secretary Chris Wormald, who also heads the cross-government policymaking profession.

The Five Policy Tests

1. WHAT'S THE POINT? PURPOSE – Are you absolutely clear what the Government wants to achieve?
2. WHAT'S IT GOT TO DO WITH _US_? ROLE – Are you absolutely clear what Government's role is?
3. WHO MADE YOU THE EXPERT? EVIDENCE – Are you confident that you are providing world-leading policy advice based on the very latest expert thinking?
4. IS YOUR ADVICE PREDICTABLE? CREATIVITY – Are you confident that you have explored the most radical and creative ideas available in this policy space... including doing nothing?
5. BUT WILL IT ACTUALLY WORK? DELIVERY – Are you confident that your preferred approach can be delivered?

And then, just to be really practical, Wormald asks four more questions which reveal his knowledge of Whitehall.

_Satisfied you pass those tests? Then ask yourself this..._

1. WOULD I BE COMFORTABLE EXPOSING MY POLICY THINKING TO THE HIGHEST LEVEL OF CHALLENGE IN THE DEPARTMENT?
2. IS MY POLICY ADVICE ARGUED LOGICALLY AND CRISPLY, AND FREE OF JARGON?
3. IS IT FREE FROM ERRORS?
4. WILL MY ANALYSIS AND THINKING BE AVAILABLE FOR OTHERS TO USE AND LEARN FROM?

Successful governments, then, think strategically, adopt proven methods of reform, take their stewardship responsibilities seriously and ensure they engage with stakeholders and communities. They also ensure that the policy advice they receive is of high quality. So, looking back, that meeting at Chequers in June 2001 was profoundly important. It was the moment when the prime minister and his team understood not just their approach to public services, but also how to turn strategy into delivery. Now all we had to do was turn the strategic approach into policies and plans, service by service. The struggle governments face in doing that is the subject of the next chapter.

## 4

## Planning

On 15 May 1944, just three weeks before the big day, General Bernard Montgomery – Monty to his British admirers – summed up what the Supreme Allied Commander had achieved: 'Plans and preparations are now complete in every detail. All difficulties have been foreseen and provided against. Nothing has been left to chance. Every man knew exactly what he had to do.'

By completing these preparations, Dwight D. Eisenhower, known as Ike to his friends, had made history long before he was elected president of the United States in 1952. A stellar military career resulted in his appointment by Roosevelt in December 1943 as Supreme Allied Commander in Europe. In this role, he was personally responsible for planning and implementing one of the most challenging military campaigns of all time – the Allied invasion of Normandy known as Operation Overlord, which was the first step towards the liberation of France, and eventually all of western Europe, from Nazi oppression. D-Day, 6 June 1944, involved 12,000 planes to attack the enemy, 7,000 vessels to transport soldiers across the Channel from England and almost 160,000 troops. Within a couple of months, more than 3 million Allied troops from several countries were involved in the liberation. The night before D-Day, Eisenhower wrote a note to himself which reflected the loneliness of the leader at an historic moment such as that: 'If any blame or fault attaches to the attempt it is mine alone.'

As an exercise in planning, Operation Overlord is logistically mind-boggling.
Think not just of the planes, vessels and troops, but also of the military equipment, not to mention the food and basic supplies. Ike had overseen the planning in advance and took command as the plan became reality. To add to the complexity, Eisenhower had to deal with giant-size egos as the plans were made. He had to argue with Roosevelt about the role the French would play; with Churchill about the bombing strategy; and about everything with his subordinate George S. Patton, whose combination of genius and prickliness was legendary. In each case, Ike prevailed. Reflecting on the whole process long after these momentous events, he remarked famously, 'In preparing for battle, I have always found that plans are useless but planning is indispensable.'

Eisenhower's wisdom had somehow not seeped into the Whitehall culture by the time the Delivery Unit was established in 2001. One of the first things I read after I had been appointed was a paper by two external advisers to the Treasury who had been asked to review how departments were making progress with the targets they had agreed to in the year 2000 as part of Gordon Brown's spending review. A number of things surprised the advisers, but none more so than the fact that, in most departments, as they put it, 'there was no plan'. It seemed to defy common sense that goals had been set and multimillion-pound budgets allocated, but there were no plans. This was a seminal insight for me, not least because I was about to ask departments for plans to deliver the priorities established by Blair and approved by the cabinet in the few short weeks after the 2001 election.

In my four years in the education department, I had learned to expect better. Well, a little better. There I had discovered that when asked for a plan, civil servants would jump to it... and they would come back a few weeks later with something more like an essay, often very well written and, if you were lucky, decorated (as one of my No. 10 colleagues later put it) with the occasional number. It just required the glossy cover to round it off. This, then, it seemed, was the choice: on the one hand, no plan at all; on the other, an essay. In these circumstances, successful delivery seemed improbable. In Ike's terms, we would get neither the plan nor the planning.

The civil service appeared not to have grasped the underlying truth that Ike understood: that until you grapple with the messy, day-to-day realities of getting something done, you simply can't understand what it takes. When I read in business strategy books that leaders deal with 'the big picture' and 'overarching strategy' while delegating all the detail, I groan. Serious leaders never do that, because they understand Ike's point. Their challenge is not to avoid the messy, ground-level reality, but to be selective in deciding when, where and how to intervene and in which details; and of course to build an effective team (at which Ike, incidentally, excelled).

So we told Whitehall back then exactly what we did and did not want. No essays. No glossy covers. Instead, we asked for real practical plans, with folds and creases, scribbled notes in the margins and coffee stains. Above all, we wanted to know what was going to be done, when it was going to be done and who was responsible.

It was not just Whitehall that had this challenge. In Punjab, Pakistan, there had been an Education Sector Plan for years prior to the start of the Education Roadmap in 2011.
The World Bank and the Department for International Development (DFID), along with Punjab officials, had put years of hard work into that Education Sector Plan... but it was a descriptive essay, not a real plan. In this case, there was a plan but no planning. Nor was anyone responsible for checking that the elements of the plan got implemented. Small wonder that the province was not making progress towards the Millennium Development Goal – and that the chief minister had no idea. In 2014 we discovered an almost identical failure in Punjab's health sector.

This is a chapter about putting that right. About ensuring that Ike's kind of planning gets done and that political and official leaders have a plan that they can use, and constantly update, to drive delivery of the goals they have set. We'll come to the qualities of a good plan in the final section of this chapter but, taking Ike's advice, there are four sections on aspects of planning that come first.

##### UNDERSTAND THE PROBLEM

In September 1776, all was not going well for George Washington and the Continental Army. They had been trying for some weeks to defend New York from conquest by the British, who had sent the greatest armada that up to that point had ever crossed the Atlantic to accomplish the task. In August, the British had driven a dishevelled Continental Army from Long Island. Indeed, had it not been for the good fortune of a dense fog on the morning of 30 August, it is doubtful whether the Americans would have escaped across the river to Manhattan to fight another day. The situation did not improve for them there. When the British followed them onto Manhattan, on 15 September, the Continental Army did not stand and fight. It fled. 'The demons of fear and disorder seemed to take full possession of all...' commented one soldier present that day. Only the heroics of a then-unknown American officer, Thomas Knowlton, at Harlem Heights salvaged any pride at all. And even after that, Washington knew he and his men were in the direst of straits.

At the Continental Congress in Philadelphia, William Hooper of North Carolina saw the early glimmer of hope in the situation:

> It becomes our duty to see things as they really are, divested of all disguise and when the happiness of the present age and millions yet unborn depends upon a reformation of them, we ought to spare no pains to effect so desirable a purpose.

Marty Linsky of Harvard makes a similar point in modern language. He urges leaders to be 'relentlessly optimistic about the possibility of changing the world and brutally realistic about the difficulty of getting it done'. The start of good planning, then, is – in Jim Collins's words – to 'confront the brutal facts'.

That is precisely what George Washington and his team proceeded to do in 1776. For five days from 20 September, while New York City burned, they assessed the state of the Continental Army, which historian Joseph Ellis summarizes as 'deplorable'. Having not flinched from seeing things as they actually were (rather than as they wished they would be), they came up with a plan. If instant success was not possible, they reasoned, they had better prepare for a long war in which they would wear out the British rather than defeat them. That required an army of 60,000 with men conscripted 'for the duration' rather than seasonal volunteers from the thirteen state militias.
Two weeks later, the Continental Congress approved the plan and the stage was set for the long grind to the ultimate triumph of American independence... though the first step was inevitably a further chaotic retreat.

More recently, a similar exercise in recognizing reality took place in Lahore in October 2010. I visited Pakistan regularly from 2009 onwards in my role as DFID's Special Representative on Education in Pakistan. As a result, I came to understand its (deeply dysfunctional) school system and what needed to be done about it. The problem wasn't knowing what to do, it was finding someone with the courage to do it. Everyone had been telling me that the one politician with the courage and executive mindset, as well as the position, to act was the chief minister of Punjab, Shahbaz Sharif. That month, Fenton Whelan and I managed to get to see him and a handful of his officials without the stultifying presence of the massed ranks of the aid industry. We showed him the data that demonstrated Punjab was not on track to hit the Millennium Development Goal of universal primary enrolment by 2015. Clearly this was news to him – neither his officials nor (even less forgivably) the aid agencies had told him the inconvenient truth.

What really shocked him were the two pictures below. They laid bare the scale of the problem in a way that no amount of data could. In spelling out a problem, it is as important to affect the emotions as it is to engage the intelligence. Shahbaz turned to his officials: 'Is this true?' There was a silence that seemed to last for an age. His officials looked at their shoes. Eventually I chipped in: 'It is true. The only question is, what are we going to do about it?' On the way out of the meeting, one of Shahbaz's officials commented tersely, 'You should never tell people what they don't want to hear.' Inadvertently, he had explained in a sentence why Punjab was so severely off track – no one up until then had been prepared to confront the brutal facts. Now that the chief minister had done so, we were ready to start planning. The Punjab Education Roadmap was born in that moment. (At the meeting in which the Punjab Health Roadmap was born in April 2014, the chief minister was similarly affected. He turned to his staff after looking at the data and asked: 'How can you sleep at night?')

Figure 14

RULE 23 UNDERSTAND IN YOUR HEAD (and feel in your heart) THE GAP BETWEEN YOUR ASPIRATION AND THE UNVARNISHED REALITY

An essential element of that meeting in Lahore? The appeal to emotion, to the heart as well as to the head. Similarly, William Hooper's advice makes an emotional case – the future happiness of the world is at stake! Dispassionate, cool analysis _and_ emotion. Not one or the other, both.

##### WORK OUT HOW YOU WILL DRIVE CHANGE

The next step is to decide what to do. At this point, it is important to remember the content of chapter 3. What is your overall approach to changing reality and advancing towards your goal? Which of the five paradigms, or what combination of them, will you rely on? How will you ensure that at the centre of your system you can play the three essential stewardship roles? How will you approach any policy changes you need? At the planning stage these almost philosophical approaches need to be turned into a timetabled sequence of actions. Remember, the keys to a plan are deciding what actions need to be taken, when they need to be taken and who is responsible for each of them. The big danger at this point is over-complexity.
It is essential to avoid it. This is what we did successfully in Punjab, after that seminal meeting with Shahbaz Sharif. First we looked for school systems of a similar nature which had succeeded in making progress and examined what they had done. As it happened, this work was already in progress for a report we were preparing for publication. The stories of Minas Gerais in Brazil, Western Cape in South Africa and Madhya Pradesh in India were powerful and clear – in each case the school system had improved because it had ceased to rely on Trust and Altruism and had applied a version of Hierarchy and Targets. Given they were on an Awful to Adequate journey, this was the right approach and it was working. Thus we knew what the drivers of change were. Learning from these stories, we were able to develop a Roadmap or plan for Punjab with five elements.

1. Data and Targets: targets would be set for the province as a whole and for each district.
2. District Administration: all appointments would be made on merit, instead of on political connection, and the new district leaders would be trained to deliver the Roadmap.
3. Teacher Quality: we would prepare lesson plans for every one of 200,000 primary teachers in the province in English, Maths and Science and train them to use them.
4. Punjab Education Foundation: we would use the PEF, an autonomous organization with government and donor funding, to bring in an element of choice and competition, by funding low-cost private schools and expanding a voucher system for poor families.
5. Supporting Functions: we would improve facilities at schools, and include a number of practical developments which the Secretary – Schools wanted as part of the Roadmap.

It was a very simple plan. It was also incomplete – nothing in it at that early stage about school principals; nothing about strengthening the enrolment drives the system undertook (ineffectually) every year. That doesn't matter; these other aspects came later, as we regularly updated the plan. The point is, it was a good enough plan to get things started once we had broken the actions in each of the five areas into specific practical steps with a deadline and a person responsible. Table 9 shows a typical page of the Roadmap plan from 2012.

RULE 24
UNDERSTAND THE POTENTIAL DRIVERS OF CHANGE (and base your plan on them)

The difficulty wasn't arriving at the plan, it was pulling the people at the centre of the Punjab education system together around the plan. In December 2010 and January 2011, in spite of the evident commitment of the chief minister to the emerging Roadmap, the other interested parties – the Punjab officials, Pakistani government people, the World Bank, even key people in the DFID, which was supporting my work there – were at odds either with the idea of the Roadmap or with aspects of it, or with each other. To be honest, for much of that time, my team in Lahore and I felt pretty much alone.

Extract from Roadmap Delivery Plan
Table 9

This state of conflict around a plan in the early stages of a change is not unusual. At a much grander level it was part of Eisenhower's planning for D-Day, as we've seen. There are two ways to resolve these conflicts – one which works and one which is almost certain to be fatal. Often civil servants, sadly, choose the latter. The first way is to keep going back to the evidence, to the chosen model of reform and keep making the case for why the emerging plan should work – and refine, refine, refine.
If, as in the case of Punjab, the leader himself is on your side, this will help, but don't rely purely on the time-honoured 'the PM (or CM) insists...', especially if you're not quite sure what he or she thinks. Listen and see how, on the basis of vigorous dialogue, you can refine or improve your plan. Be ready to learn always, but don't compromise. You might make adjustments to the timetable perhaps, but even here the weight of argument will always be for delay, so someone needs to make the case for urgency – every day of not acting on the plan is another day of the system failing citizens, in this case the families and children of Punjab. This won't be fun, but resting your case on the fundamentals, and perhaps the leader's support, has its rewards.

The second approach, so often chosen, is to make one concession after another in order to get 'buy-in' or 'shared ownership' from each of the warring stakeholders. Civil servants often advocate this way forward, always supported with plausible arguments about the importance of 'winning hearts and minds' or the influence of certain stakeholders ('We really need to ensure the World Bank is onside'); but the main reason they take this line is that it avoids conflict (which they abhor) and ensures, as far as possible, a quiet life. The problem – known in negotiation theory as 'the theory of side payments' – is that if you make all the concessions required, there is a good chance you will emasculate your plan before implementation has even begun. It's like designing a plane, putting it on the runway, weighing it down with boulders, weakening its engines, clipping its wings perhaps, and then wondering why it doesn't take off. The argument for doing less or doing nothing often looks plausible (and is of course sometimes right); whereas the case for being bold needs constantly to be made.

Avoiding emasculation before implementation was exactly my challenge at the beginning of the Punjab Roadmap. As I wrote in my diary at the time of one very senior federal official – 'she doesn't have a strong view, but wants her friends in... government to be happy...' Meanwhile, a top consultant to the DFID was advising me to 'assemble the international community'. The next comment in my notes simply reads: 'inclusive vs getting things done', which says it all. 'Inclusive' sounds warm and appealing, and if it is possible, so much the better – but it can also be an excuse for procrastination and dilution.

The reality in some governments can be even worse than this. Some of the stakeholders aren't just sceptical, they actually want to block what you are doing, though they might not say so overtly. Stein Ringen quotes the political scientist Charles Lindblom on this theme: 'Many people constantly try to change the social world. An explanation of their failure more plausible than that of inertia is to be found in the great number of other people who are vigorously trying to frustrate social change.' Since this is often the case, it is as well to be aware of the reality. This is why success in government requires not only good planning but also political will. The combination is crucial because, with both in place, as soon as implementation begins, progress is possible and people who once opposed it will start to come round. In Punjab, we were able to build sufficient support in the end – though it was touch and go for a few weeks – to get started in January 2011.
My team began to build the three stewardship functions crucial to the centre of the system (see chapter 3) as the early phase of implementation began. With the strategy broadly established, the key was to set up the data collection system. The World Bank had made a significant contribution by creating inside the Punjab government the Performance Monitoring and Implementation Unit. The idea was great – monthly data from all districts on key indicators of educational progress – but the execution until then had been poor: only a few districts were collecting and submitting data and even then with a long time-lag. (The developing world is littered with half-finished institutions such as the PMIU at that time: 'It would have been marvellous,' the aid agencies assure themselves, 'but they didn't implement it properly.') One key Punjab official, with the help of my team, turned the PMIU into a purring, smooth-running data-collection machine.

The Punjab example illustrates several crucial steps towards a credible plan.

* Keep it simple; it doesn't need to be perfect, just good enough to get started.
* Expect conflict.
* Adhere to the principles and evidence; don't make so many concessions to get 'buy-in' that you end up with an inedible soup.
* Don't forget the three stewardship functions – even if you plan to devolve.
* Try to have the leader on board.
* Get started.

RULE 25
PREPARE A PLAN TO IMPLEMENT YOUR STRATEGY THAT IS GOOD ENOUGH TO GET STARTED (and don't make concessions for a quiet life)

##### LABS

Idris Jala brought to government policymaking in Malaysia a new approach which he had tried and tested in previous roles with Shell and Malaysia Airlines. He calls this a 'laboratory' or just 'lab'. He explained his thinking in an interview he gave to a researcher from Princeton.

> When I first went there [into government] it was me and my assistant and then we built a small team. But because we wanted to hit the ground running, the first thing we did was to run laboratories, so within a month of my arrival we started running six laboratories.

There was one lab for each of the six National Key Results Areas. Each lab involved between forty and fifty people, a mix drawn from within government and the relevant field, public and private, who could bring different perspectives to bear on achieving the NKRA – reducing crime or improving urban public transport, for example. Idris locked them in a hotel for six weeks, provided a trained facilitator, lots of data and the occasional prime ministerial visit and set them the task of finding ways to meet the prime minister's ambitious goals. To participants, he likes to quote the line from the Eagles in 'Hotel California': 'You can check out, but you can never leave.'

Note how different this is from standard policymaking – in a lab, people concentrate full-time for six weeks and have to emerge with solutions and a plan of action. Quite different from the desultory, once-a-month project meetings where there is plenty of talk but little action, and between meetings still less. Idris explains the key to success:

> ... the labs began [with] very, very tall targets, almost what you call impossible targets, targets that will cause you to have fear of failure... If we came with an incremental target... there is no need to transform... [stretch targets] require a radical approach and outside-the-box thinking...
Then he adds:

> I always believed this, people actually know the solutions but the reason why they don't execute it is because there are a lot of roadblocks along the way... The ideas were already there... but to move it from idea to results there are hurdles such as technical hurdles, political hurdles, administrative process hurdles... We were really focusing on ensuring that the hurdles that prevented us from doing this before are now removed in the labs.

What makes the lab different? Idris Jala's team answer:

* Bringing together key relevant people from the public, private and NGO sectors.
* '3 feet' implementation programmes.
* Budget request as part of the output.
* Key performance indicators assigned to relevant stakeholders.
* Syndication sessions with top leadership.

In short, it is an intensification and acceleration of much of what is in this chapter. That phrase '3 feet' needs elucidation. Idris Jala points out that people are always talking about a '30,000 feet' perspective, as if somehow that is enough when it comes to planning. There are numerous euphemisms – '30,000 feet', 'strategic', 'high-level', 'high-level overview', etc. – but they all mean the same thing: vague. To avoid the '30,000 feet' perspective Idris insists on '3 feet' implementation – that is, close-to-the-ground reality. Six weeks locked in a hotel is, he says, usually long enough for people to get over their interpersonal tensions and really plan in practical detail. He admits that the output of a lab will not be perfect, but it will be enough to get started.

Labs had worked in Idris's business career. Now they worked in a government setting. The key lesson in each case, he argues, is to be very clear about the performance indicators you want to shift. In business it is likely to be profit; in government there needs to be a hard (and ambitious) indicator too, such as falling crime rates or improved health outcomes, which takes you into the debate elsewhere in this chapter about trajectories.

##### LEGISLATION

As you plan, you may discover you have to change the law, in which case you will need to legislate. The approach varies according to the constitution. In consensual countries such as Germany or Norway, there is a laborious process of consultation and a set of procedures to follow. In congressional systems such as the USA or large parts of Latin America (and now Kenya), the executive has to lobby the members of Congress in small groups or even one by one, as Julio Frenk remembers doing assiduously to push his health reform through in Mexico. Talk to Melody Barnes, Obama's director of the Domestic Policy Council in the first term, and again and again you hear her talk about day after day spent 'on the Hill' lobbying members of Congress.

In parliamentary systems, the government usually has a majority in Parliament and therefore securing the passage of legislation is easier. Even here, though, attention to factions and individuals, especially in the governing party or parties, is vital. Tessa Jowell, Britain's minister for the Olympics, remembers assiduously courting individual ministers and key figures in all major parties to win support for London's bid.

The tactics of passing legislation, which clearly vary country to country, are dealt with extensively in the literature and this is not the place to delve into it. Suffice it here to make two vital points.
One is that legislation is not only vital from a legitimacy and constitutional point of view, it is also an opportunity to test the narrative, the strategy and potentially the organizational approach in the court of public opinion. By generating public debate, it can also help to define dividing lines between supporters and opponents and to create momentum. Get it wrong and the opposite applies.

The second is that compromise may be necessary to build a coalition in the legislature capable of passing the law. After all, a majority is required. A sense of realism in government is essential about this – but equally it is vital to guard against making so many compromises that the original intent becomes impossible. And of course the opposition know this and will choose their tactics accordingly. If they can't win head-on, emasculating the bill with a string of minor amendments is an attractive option. Be on your guard.

##### THE DELIVERY CHAIN

Have you ever read one of those medieval history books full of lines like this: 'Edward gathered an army and hastened north...'? Hang on a minute! How did he do that? There he is at Westminster or Winchester and someone, perhaps a messenger, comes in with a piece of news that prompts the thought 'I need to head north with an army.' What do you actually do next (in a world where nothing moves faster than a horse)?

The answer is, you depend on a delivery chain. You gather what barons and men you have with you at court, but in all probability that will not be enough, so you send messengers to the leading barons wherever they are and tell them to meet you in Derby or York. They receive the message and – unless they see an opportunity in this moment of crisis to overthrow you – they send a message to their knights and retainers, who in turn scrape together the no-doubt reluctant peasants, and eventually, if you are lucky, everyone is heading north.

There are many opportunities for this to go wrong. Put another way, there could be weaknesses in the delivery chain. Perhaps the messenger got lost, was stopped by bad weather, or was killed by an enemy agent. Perhaps he is like Baldrick in the BBC comedy series _Blackadder_ and garbles everything you tell him. Perhaps the baron receiving the message doesn't fancy hastening north and comes up with an excuse, or perhaps he simply wasn't where he was thought to be – maybe he's visiting his estates in Normandy... and so it goes.

The truth is that to gather an army and hasten north to fight the Scots was a staggering logistical challenge. It is all very well for Shakespeare to put inspiring speeches in the mouth of Henry V – and the historical record suggests the man himself was inspiring – but what made him successful was his mastery of the delivery chain and the associated logistics. He hastened south to fight the French (if crossing the Channel in a tub of oak counts as 'hastening'). His army arrived equipped for the task. Every time I read one of the many books about inspirational leadership, I think about how Henry V made sure that all his archers had functioning long bows and quivers full of arrows.

That was then. Now is no different; however noble the objective, however brilliant the motivational speech and however detailed the plan, without an effective delivery chain, nothing gets delivered. Jonathan Powell, Tony Blair's chief of staff, put it this way: 'A new Prime Minister pulls on the levers of power and nothing happens.'
The moral is that the plan you develop has to include an understanding of the delivery chain and how it needs to be strengthened. It is important to be clear that this is not a centralizing idea at all. It is simply a statement of fact – if you are a prime minister or minister or top local government official, or in charge of delivering any major objective in any large organization, even if your goal is to decentralize, you have to understand the delivery chain, otherwise you'll end up feeling the way Tony Blair did when he was new. In the Prime Minister's Delivery Unit, we were quite blunt about this: if you can't draw the delivery chain, you can't deliver.

My first experience of thinking this through for a large system was for the National Literacy Strategy in England in the late 1990s. Here is the way I thought about it. David Blunkett had to improve standards of reading and writing among eleven-year-olds. Implicit in this commitment was that in one way or another he intended to influence what happened inside the head of an eleven-year-old in, say, Widnes. The delivery chain is what makes that connection explicit; so how can we connect the child in Widnes to the minister in Westminster? What happens inside that eleven-year-old's head is influenced chiefly by her teacher – the first link in the chain; the teacher is influenced by the school's literacy co-ordinator, who, in turn, is influenced by the headteacher – the second and third links in the chain. The headteacher is influenced by the school governors and the local authority, who are influenced by the regional director of the National Literacy Strategy, who answers to the national director of the strategy. He in turn answers to the head of the Standards and Effectiveness Unit in the DfES, who answers to the secretary of state. And thus we have established the delivery chain. Figure 15 is an example of a delivery chain from Punjab, Pakistan.

There are two crucial aspects of creating or reviewing a delivery chain. The first is to check whether _the actors_ at each point in the chain have the will and capacity to do what is being asked of them. Remember that, as you go out along the delivery chain, the number of actors increases. In the National Literacy Strategy, the numbers were as set out in Table 10. When you examine the will and capacity at each level, you should expect variation. It might be excellent, as it was in Blackburn, Wigan and Tower Hamlets, or problematic, as it was in Bristol, Walsall and Norfolk. Clearly you need to monitor this as the strategy unfolds, and where the problems lie will change over time. At the beginning, though, you just want to check that as far as possible across the system people are ready to go.

The second, equally crucial, aspect is not the actors, but _the links_ between them. You might have someone marvellous in charge of literacy in one of the local authorities, but if that authority is in conflict with the headteachers, success is unlikely. Similarly, one of the regional directors may have a poor relationship with one of the local authorities he or she is responsible for. In short, the links in the chain are as important as the actors themselves. In my experience, consultants with organization charts remember the actors, put them in boxes, and forget the relationships. Yet it is the relationships that matter most. As Lawrence Freedman put it, 'A chain is as strong as its weakest link, the more links in the chain, the higher the odds that something might go wrong.'
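Because a delivery chain is, in effect, a sequence of actors and the links between them, it can even be written down in code. The sketch below is purely illustrative – hypothetical names and ratings, not any actual PMDU tool – but it shows the discipline the text describes: record will and capacity for each actor, a strength for each link, and let the weakest points surface for attention.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    role: str       # e.g. 'Regional director'
    will: int       # 1 (low) .. 5 (high) -- a judgement, not a science
    capacity: int   # 1 (low) .. 5 (high)

@dataclass
class Link:
    upstream: Actor
    downstream: Actor
    strength: int   # quality of the working relationship, 1 .. 5

# Hypothetical ratings for (part of) a literacy-style delivery chain.
minister    = Actor('Secretary of state', 5, 4)
director    = Actor('National director of the strategy', 5, 5)
regional    = Actor('Regional director', 4, 3)
authority   = Actor('Local authority literacy lead', 3, 2)
headteacher = Actor('Headteacher', 4, 3)

chain = [
    Link(minister, director, 5),
    Link(director, regional, 4),
    Link(regional, authority, 2),    # the conflicted relationship
    Link(authority, headteacher, 3),
]

# The chain is only as strong as its weakest link -- and its weakest actor.
weakest_link = min(chain, key=lambda l: l.strength)
weakest_actor = min((l.downstream for l in chain),
                    key=lambda a: a.will + a.capacity)
print(f'Weakest link: {weakest_link.upstream.role} -> {weakest_link.downstream.role}')
print(f'Actor needing most support: {weakest_actor.role}')
```

Nothing here is sophisticated, and that is the point: as in the Delivery Unit's dictum, you cannot flag the weakest link until you have drawn the chain.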
Delivery Chain – Punjab, Pakistan
Figure 15

Number | Actor
---|---
1 | Person responsible for delivering the result (me)
1 | Director of the National Literacy Strategy
15 | Regional directors
150 | Local authorities, each with someone playing my role at local level
400 | Literacy consultants
19,000 | Headteachers, each with a literacy co-ordinator (so another 19,000)
190,000 | Teachers teaching literacy hours
3.5 million | Children, lapping it all up

Table 10

When we started on the thankless task of improving railway performance in the Blair administration, the link in the delivery chain between the Department for Transport (DfT) and the train-operating companies was not so much flawed as (almost) irretrievably broken. The officials in the DfT believed that, the railways having been privatized, the punctuality of trains was nothing to do with them; the train operators despised the officials anyway and wanted to be left alone. Meanwhile, the passengers were, understandably given the execrable performance of the trains, screaming, 'What's the government going to do about it?' and the prime minister was saying to me, 'Michael, what are you going to do about it?' And I answered that we would fix the delivery chain, which is what we (painstakingly) did. And eventually performance began to improve steadily.

RULE 26
STRENGTHEN THE DELIVERY CHAIN (don't think you can get away without doing so)

##### DATA AND TRAJECTORIES

Sir Bradley Wiggins became a true British hero in 2012 when he became the first Briton to win the Tour de France. Shortly afterwards, he added to his already substantial tally of gold medals by winning the time trial at the London Olympics. Sheer talent at the peak of form. Well, yes, but that's not quite all there is to it.

Wiggins and Team Sky, led by the relentless Sir David Brailsford, had been planning a British win at the 2012 Tour de France for well over a year. In the language of delivery, they had a target. They also had a plan: Wiggins was entered in five races in the year before the Tour, including setting out to win the Paris–Nice race in the months immediately prior to it. As Wiggins himself put it in his own inimitable style, 'to win Paris–Nice you still have to be bloody good... I was in bloody good form, at 95, 96, 97 per cent, but that last few per cent [required to win the Tour] is going to come from fine-tuning.' In other words, they had a trajectory too; 100 per cent performance in July meant 95 or 96 per cent performance a few weeks earlier.

And his coach had a mass of data on his weight, his blood pressure, his speeds and above all his power – the rate at which he was working on the pedals, measured minute by minute. Their approach involved poring over the data day after day and addressing Wiggins's weaknesses as well as his strengths. One of his weaknesses was hill climbing. It wasn't that he was bad at it; rather that he was quite good but needed to be brilliant. The plan involved focusing on this between March and June of 2012, during which time Wiggins would cycle up 10,000 metres each week – 'a little bit more', as he puts it, 'than the equivalent of going from sea level to the top of Everest... daily grind. It's bloody hard work.'

So yes, Wiggins is an exceptional talent and was at the peak of his form that July when he won the Tour de France, but that was only the visible part of the performance, when the television cameras clicked on.
To enable that, he and his team had a goal, a plan, a trajectory and a mass of data, which enabled them to see whether he was on track or not, and to tweak the plan when they needed to. Could he have won without all this? Not according to him: 'The critical thing is that I couldn't have done any of this without all the background training going back to November...'

We want our triumphs, whether political or sporting, to be romantic – triumphs of brilliant ideas or moments, triumphs against the odds. Victories snatched from the jaws of defeat. Occasionally they might be like this but, whisper it if necessary, mostly they won't. They'll be victories ground out, perhaps incrementally, step by step through attention to detail. As Matthew d'Ancona, one of Britain's best political commentators, put it when I was in No. 10, 'There is no drama in delivery... only a long, grinding haul punctuated by public frustration with the pace of change.'

The trajectory is the key to establishing this mindset. You know where the data is now on your chosen metric; and you know where you'd like it to be because you've set your aspiration. Go away and draw the line that connects the two points: that, at its simplest, is what a trajectory is. How hard can that be? It turns out, quite hard.

First of all, there are psychological barriers. If you are a civil servant, and your minister or someone who works for a prime minister asks you for a trajectory, your first reaction is not 'How will I do that?', it is 'What will they think (or do) if I get it wrong?' So, when asking for a trajectory, it is necessary to anticipate this reaction and state the obvious: 'Of course you'll be wrong!' After all, how many people who predict what will happen four years into the future turn out to be right (except perhaps by sheer chance)?

Figure 16

You then have to anticipate the next reaction, which is obvious too. If you know the trajectory is going to be wrong, what's the point in tracing it? And here we get to the fundamental point about trajectories. The reason for drawing a trajectory is that it forces you to think about the connection between the actions you are going to take and their impact on the outcomes. This is truly profound. It means that, as the plan unfolds, those responsible can check whether the actions set out in it have the effect that was intended; and if they don't, they can learn from what actually happens and tweak the plan.

Sometimes, even if everything is on track, it is worth double-checking. When we were monitoring health wait times in England, once the plan was in place they fell steadily from one month to the next. But just to be sure, we checked the sub-trajectories for each type of operation. All were heading in the right direction except orthopaedics, where we were then able to intensify the strategy.

In short, a trajectory is a wonderful thing, and no one should plan a major government reform without one. It was an essential element of winning the Tour de France. How much more important is it to apply such proven techniques to massive (and expensive) government programmes on which millions of people depend for their fulfilment or maybe even their lives?

RULE 27
NEVER GO ANYWHERE WITHOUT A TRAJECTORY (you'll learn better, faster and deeper)

It is one thing to decide you should have a trajectory. It is quite another to draw a really good one.
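To make the arithmetic concrete, here is a minimal sketch in Python – an illustration only, not any official method, and with made-up numbers – of three ways to connect today's figure to the target: the naive straight line, a 'fast start' curve for reforms where focus and accountability deliver early gains, and a 'slow build' curve for reforms that need up-front investment before results show.

```python
import math

def straight_line(baseline, target, periods):
    """The ruler version: equal gains every period."""
    step = (target - baseline) / periods
    return [baseline + step * t for t in range(periods + 1)]

def fast_start(baseline, target, periods, k=3.0):
    """Early gains, then a hard grind; k controls how front-loaded it is."""
    gap = target - baseline
    return [baseline + gap * (1 - math.exp(-k * t / periods)) / (1 - math.exp(-k))
            for t in range(periods + 1)]

def slow_build(baseline, target, periods, midpoint=0.6, steepness=10.0):
    """Little visible movement while capacity is built, then improvement.
    A logistic curve, rescaled to start at baseline and end at target."""
    raw = [1 / (1 + math.exp(-steepness * (t / periods - midpoint)))
           for t in range(periods + 1)]
    lo, hi = raw[0], raw[-1]
    gap = target - baseline
    return [baseline + gap * (r - lo) / (hi - lo) for r in raw]

# Illustrative numbers only: a proficiency rate from 57% to 80% over 8 terms.
for name, curve in [('straight', straight_line(57, 80, 8)),
                    ('fast start', fast_start(57, 80, 8)),
                    ('slow build', slow_build(57, 80, 8))]:
    print(name.ljust(10), [round(x, 1) for x in curve])
```

Whichever shape you choose, the value lies in the argument behind it – why should the curve bend that way? – and in comparing the actuals against it month by month.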
When we first started asking for trajectories during the PMDU days, we used to quip that the officials concerned went racing back to their departments and got out their most sophisticated piece of analytical equipment – a ruler – with which they drew a trajectory very much like the one in Figure 16 above. There are two serious limitations to this approach. One is that it enables the relevant officials to avoid doing the tough, analytical thinking required to connect their actions to the outcomes; in other words, to avoid the whole point of developing a trajectory in the first place.

Constructing a Trajectory: The Key Questions

Question | What to consider
---|---
1. What is the performance indicator? | What data set are you going to rely on for your trajectory? What are its strengths and weaknesses?
2. What is the target? | What target have you set? And what is the deadline?
3. How will you collect the data? | Is your data-collection system reliable? Does it give you the level of detail you need? Is it timely?
4. What is the historic data run? | What happened in the past with this indicator? Are there interesting blips or outliers?
5. How will you estimate the future? | Remembering of course that the human world rarely travels in straight lines...
6. Can the data be broken down by locality? | Will you be able to see which regions/localities/hospitals/police forces, etc. are doing well and which aren't? Could you look at quartiles and track them separately?
7. Can the data be broken down by category? | Can you separate out different crime types? Or types of operation? Or students by race and ethnicity? All these will help you understand.
8. Can the data be broken down by policy? | Will you be able to see the impact of each of the different strands of your policy? Which are the big drivers of change? And which aren't?

Table 11

The other is simpler but just as problematic: the truth, obvious for anyone who stops to think about it, is that the human world rarely travels in straight lines. A hundred years ago we discovered, thanks to Einstein, that even the straight lines of Newtonian physics weren't quite as straight as we thought; and social science is far less predictable. In the Delivery Unit, and tried and tested elsewhere on many occasions, we developed a set of questions to ask yourself when preparing a trajectory. These are set out in Table 11. As in everything else in the science of delivery, attention to detail pays dividends.

Once these questions have been addressed, it is possible to get to the next level. While trajectories are rarely straight lines, there are sometimes patterns. One common shape for a trajectory is shown in Figure 17. This shape of trajectory is common because there are some things which improve rapidly once you focus on them, and then get harder. Literacy in primary schools is like this – you get a jump from the focus and accountability. Social science calls this the Hawthorne Effect, after an experiment in which performance on a production line improved, not because of the amount of light – which was the research question – but because the workers on the line knew they were being researched. These are the kinds of reforms where people talk about 'picking the low-hanging fruit' or 'the quick wins'. (Too often that is all they do, and progress is not made irreversible.) After that all progress depends on grinding out improvements in the quality of work, which is hard.

Figure 17

Figure 18

Figure 18 exemplifies another common shape for a trajectory.
This type is appropriate where there is a lot of hard work to do in the preparation phase or a lot of investment required up front before improvement can occur. It has to be said, though, that in the Delivery Unit and after, I have always looked at trajectories of this shape with _intense suspicion._ Too often you would see a trajectory like this and realize that the first significant improvement was three or four years in the future and you knew the official (and maybe the minister) was thinking, 'by that time, I'll be in another job'. However, they might really have had a point, in which case you needed to know what milestones – actions taken – would be met in the three or four years before improvement would occur. Once the broad shape is clear, the key actions and the trajectory should line up as illustrated in Figure 19.

Now you need to pay attention to some minor details which, if you forget them, might lead to confusion. A good example is seasonal variation. In Punjab, February is exam season. Student attendance drops. April is the wheat harvest. Again, it drops. In Britain, in winter people are more likely to get ill, so waiting times for operations will be under greater pressure. When it gets dark earlier in the evening, certain types of crime become more likely. If you want to get even further into the detail, young people are more likely to get drunk on a Friday night and (at least in the UK) hit each other over the head with beer glasses. This, in turn, means that Accident & Emergency departments are more likely to feel under pressure at that time of the week – so numbers and perhaps wait times there will go up. Then there is the famous problem on the railways of 'leaves on the line', a cause of delay and hilarity in the UK (and just delay in New York City). Seasonal patterns can be anticipated (though they should not be excused – if Friday night is a busy time for A&E, increase the staffing) and built into trajectories.

Illustrative Trajectory
Figure 19

Once done, the trajectory becomes a crucial monitoring tool. Those responsible at every level, including the delivery unit, can monitor progress against trajectory and learn from the deviations. All this requires collecting the data in the first place, and in swathes of the public services this can be a stumbling block. However, at the PMDU we had Tony O'Connor, one of the founding fathers of the science of delivery. Tony set up a small team of four people to be the number-crunchers at the heart of the Delivery Unit's work.

Over the years that followed, Tony taught us all three important lessons about how to ensure data has influence – you have to collect it, then you have to ask the right questions, and finally you have to present the data in a compelling way. Each of the three is essential. If you don't have the data, you're stuck of course; if you ask the wrong questions, or interesting but irrelevant ones, at best you are an academic; and if you don't present the data well, the chances are that the minister or prime minister will miss the point altogether, sometimes because numbers are not quite their speciality, but often simply because they are very busy and they don't have the time or patience to make sense of a complex table.

Tony's _pièce de résistance_ was a moving PowerPoint graph of the huge increase in teenagers involved in street crime in 2001–2. As the line shot across the screen, there was a sharp intake of breath from the PM and the ministers in the Cabinet Room.
From then on, I knew we'd have collective commitment to solving the problem. Some years later, I had a similar sense of triumph when the chief minister in Punjab looked at a beautiful map of enrolment by district across his province and said, 'I'm going to sleep with this map under my pillow.'

RULE 28
COLLECT DATA, ASK THE RIGHT QUESTIONS AND PRESENT THE ANSWERS BEAUTIFULLY (and don't forget integrity)

Tony made another equally important contribution. His job was to make sure that any slide we produced for the prime minister had integrity. It was tempting for someone in a position like mine to tell the prime minister a compelling and plausible story – but what if it wasn't true? 'You are the conscience of the Delivery Unit,' I used to tell Tony and his colleagues, and they took the role to heart.

Data is hard to collect on a systematic basis, at least in the initial stages; and as any number of people will point out, however well you collect it, there will be flaws in it and it won't tell you all you need to know. Added to that, some will oppose data collection because they fear what it will show. Incredibly, the very same people who make one or more of these arguments against collecting regular data, in the next breath purport to advocate 'evidence-based policy'. If you want evidence-based or evidence-informed policy, there is no alternative but to collect good data.

When designing the data collection system, Table 11 above is crucial. It is important to think about how much detail is needed. Clearly it is cheaper and easier to rely on a sample, say an opinion poll, but if this gives you only system-level data, it may not be enough. For example, it is helpful to know whether crime is going down or up at the level of the country, but much more helpful operationally to know in addition which places are seeing the biggest falls and rises, as well as what is happening to different crime types.

Another vital aspect of the design of data systems is the timeliness of collection and analysis. Too often in government you find that data is collected, but by the time it is gathered at the centre of the system, many months have passed. By the time it has been analysed as well, many more have passed; and by the time it's available to the system leader responsible or the public it is barely relevant any longer because the world has changed so much in the meantime. Data on long time-lags is of no use at all to those managing the system because any decisions it might have informed are long since taken.

This is not an argument against large-scale academic evaluations which take place after the event. If done well, such research can, and should, be of huge value to future policy. While I was in the Department for Education between 1997 and 2001, two major studies were commissioned which have proved to be of global significance. One was the evaluation of the National Literacy and Numeracy Strategies undertaken by Michael Fullan and his colleagues, which proved to provide a platform for the ten-year reform of Ontario's school system between 2003 and 2013, in which Fullan himself played a leading part. Similarly, the study we commissioned from professors Kathy Sylva and Pam Sammons (now of Oxford University) has enabled the tracking of successive cohorts of children for over a decade, and provides insights into such important questions as 'Does the type of preschool education children experience between ages three and five have measurable effects on how well they do at the end of primary school at age 11?'
(The answer is yes.) Insights such as these are vital to building the evidence for future policy, potentially across the globe, but they are of limited value in real time at the point of decision. Of course you can consult the researchers as you go and it is desirable to do so, but they are often cautious about reaching conclusions too soon – before they've done the research in fact! As a result, sometimes researchers get a bad name in policy circles, but this is unfair because they are only doing their job; the point is, it is a different job from delivering results in a government programme.

Similarly, ministers and those responsible for delivery in government often get a bad name among researchers – 'Why didn't they wait until we'd finished our study?' This is unfair too – people trying to deliver government priorities have to get on and do that, which often means, in practice, taking decisions with incomplete information. There can also be incompetence on both sides – poor research or poor policy and implementation – but that is a different issue altogether.

Reports undertaken by a government audit or inspection regime are a rather different category, and those involved in these kinds of study have a responsibility to provide insight and comment as close to real time as possible because part of their mission is to ensure good outcomes for citizens and good value for taxpayers – not simply to publish 'I told you so' reports after the event. The Audit Commission in the UK, before it was summarily abolished, produced beautiful reports, but too long after the event; the Government Accountability Office in Washington has a similar track record.

The Office for Standards in Education (Ofsted) in England used to cause me similar frustration. They would write a report on some education policy I was responsible for and explain to me the key criticism just before it was published. Often the messages were pertinent, and either we had already discovered the problem and sought to fix it, or occasionally we hadn't and would have benefited from knowing sooner. They defended the degree of confidentiality on the grounds that it was part of their jealously guarded independence. For my part, I respected (and valued) their independence, even when it meant being in the firing line of their critique, but I wanted to know sooner what problems they had uncovered so that we could address them and the citizens – in this case children and their parents – could benefit. It was as if Ofsted was sometimes more interested in the headlines they could generate than in their impact on implementation.

The main point here is that neither research studies nor reports from auditors or inspectors address the need those responsible for implementation have for good, close-to-real-time data. And without data of this kind, it is impossible to manage a system or check progress. A couple of examples will illustrate the point.

In 2007, the Minister of Health in Namibia and his cabinet colleagues were concerned about the poor health outcomes in the country – everything from the devastating impact of HIV on the population to performance of the health services themselves. They made Kahijoro Kahuure, an agricultural economist with a reputation for getting things done, the government secretary (top civil servant) at the Ministry of Health and Social Services. He inherited an organization with a lack of strategy, a lack of leadership and coordination and a track record of failure in implementation.
At the heart of the problem, a review of the health system at the time noted, was totally inadequate data. There were 'no clear performance indicators or tracking systems to give feedback on how goals were met'. In short, the ministry did not know what the problems were or whether they were making progress towards solving them. The lack of data was something the new government secretary focused on and, as a result, had a remarkable impact.

Here is one example. Most of Namibia is rural and dispersed. Apart from Mongolia, it is the emptiest country in the world, with just 2.5 people per square kilometre, but its capital city, Windhoek, nevertheless has all the problems of urban areas, one of which at that time was that ambulance response times were notoriously slow. No one knew how long they were exactly, but everyone knew that if you called an ambulance, even in an emergency, you shouldn't hold your breath (even assuming you could).

Data changed everything. Once they knew the average response time was two and a half hours (the average!), they started looking for solutions. An obvious one made all the difference. For non-emergencies or routine transport, they could send a bus. Immediately, actual ambulances were freed to deal with actual emergencies. Within a few weeks, the average response time was down to twenty minutes. There's a key lesson here – once there is good data, checked regularly, it drives action. By making a problem transparent, you create the conditions for solving it. Those who dismiss data analysis as 'bean-counting' and a burden on the system miss this fundamental point.

Here is another example. The Delivery Unit took on responsibility in 2002 for reducing road congestion on Britain's motorways. Anyone who has driven on them, especially around the large conurbations such as London, Birmingham or Manchester, knows what a challenge that is. Small, crowded island; not enough roads, insufficient public transport and too many cars. Add to that – in 2002 – the tenth consecutive year of economic growth, and you have a motorway system under severe strain.

'What was the plan?' we asked the relevant officials, and in reply came a collective shrug: 'What can you do?' So we asked a simpler question to try to generate a conversation: 'How do you measure road congestion anyway?' It turned out not to be as simple as we thought. The Department for Transport had invented a bizarre system of paying thirteen people in the country to drive various routes at different times of day and calculate an estimate of congestion. This data was then aggregated once every two years (and revealed that the problem got worse!). On the day I happened to ask this innocent question, the officials responsible felt obliged to apologize for the fact that one of the thirteen had had an accident the day before, so the system was even less effective than usual. I felt there was something faintly biblical about this and wondered whether the thirteenth man had been paid thirty pieces of silver, but I didn't say so.

Instead, with the appropriate degree of hesitancy and humility, my colleagues asked a killer question: 'Haven't you heard of GPS?' It turned out they had, but it had never crossed their minds that they might use it to monitor congestion on Britain's motorways – at least until that moment. Several months later, all objections overcome ('It'll be expensive'; 'Not remotely as expensive as the road congestion we're unable to manage at present', etc.), the GPS-based system was – as it were – ready to roll.
We piloted it on the motorways around Birmingham with the newly enthusiastic team at the Highways Agency. The six weeks of monitoring revealed a mass of precise data about what caused congestion and – just as with ambulances in Windhoek – a creative search for solutions. The most common causes of congestion were accidents – how could the operation to clear the highway be speeded up? – and breakdowns – how could the recovery services get there faster? The least common cause of congestion in that period (it only happened once) was an elephant crossing the motorway, and we suggested they need not worry about this at all. More importantly still, they began to know precisely at which junctions and which times congestion was most likely. And most importantly of all, they knew what was happening as it happened, in real time, not two years after the event.

Just as with ambulances in Windhoek, the data made the system manageable. Officials who had sat shamefacedly in headquarters, dreaming up excuses for why nothing could be done about the remorseless tide of road congestion, suddenly found that they could after all make a difference. In the years afterwards, road congestion in Britain remained a problem, but management of it improved. Each accident caused less delay; drivers were warned by signs above the motorways when there were delays ahead; and hard shoulders became available for use on urban motorways during rush hour.

RULE 29
DATA MAKES A JOB DO-ABLE (until then, all you can do is make excuses and hope for the best)

##### LEAD INDICATORS

Where I live in Devon, you sometimes get those early mornings shrouded in fog. Branches of trees look like ghostly arms. Sounds are muted. If you climb a hill, you rise above the fog. You can look down on the blanket of fog stretched across the magnificent Taw Valley. 'Foggy at seven,' they say, 'fine by eleven.' And if you wait and watch, as the sun rises and warms the land, the fog lifts and it is indeed fine by eleven. The knowledge of country people, developed when livelihoods depended on the vagaries of the weather, often turns out to be soundly based.

In the language of the science of delivery, 'foggy at seven' is a lead indicator of 'fine by eleven'. If you just tracked the main indicator – fine weather – you'd be depressed at seven because it's anything but fine; but if you know that fog at that hour is a lead indicator, it's a reason to be cheerful. Technically a lead or leading indicator is 'a metric that helps to predict future performance on a target metric'. Fog predicts fine weather. Lead indicators are very useful indeed, because if you get them right they will tell you _before_ your target metric moves that you are on track. Or not, in which case you can begin to make adjustments to your plan. It therefore makes sense in the planning phase to identify what might be lead indicators. Often there is research which can help inform you.

As with other indicators, lead indicators can raise ethical issues which you need always to keep under consideration. Charles Duhigg tells one powerful cautionary tale in _The Power of Habit_. Target, the US retail chain, knew that new parents often became the most loyal customers. They asked researcher Andrew Pole to see whether he could identify from their purchasing patterns which of the women who shopped at Target were pregnant. These patterns would be lead indicators of becoming parents and Target could start... well... targeting such women with marketing even before the baby was born.
Pole cracked the problem analytically, but it raised ethical questions, especially when an outraged man in Minnesota complained at his local Target that they were mailing material about baby clothes and cots to his daughter who was still in high school. Shockingly, it turned out Target knew more about this man's daughter than he did; she was indeed pregnant. Being right about the lead indicator in a technical sense, though, did not make what Target had done right in an ethical sense.

Lead indicators don't always involve such dilemmas. For example:

* A lead indicator of reduced infant mortality? See what's happening to vaccination patterns.
* A lead indicator of student graduation? Check how many credits they got in the first semester.
* A lead indicator of performance in a failing school being turned round? Order in the corridors.

And so on. They are always there, like the canary in the coal mine. And the fog on those mornings in Devon.

##### THE PLAN ITSELF

Once you have understood the problem, decided what will drive change, made sense of the delivery chain, produced a trajectory (which isn't a straight line) and identified some lead indicators, finalizing a plan should be plain sailing. In fact, the planning – in Ike's terms – as opposed to the plan, is done. Still, there might be someone demanding a plan for accountability purposes and for the sake of thoroughness it is worth completing the task.

This is where templates come in. In any large organization, ask for a plan and in return you'll get asked for a template. In the early phases of the Delivery Unit, we refused to produce one, partly because we wanted to see what people would do without one, but also because we knew if we produced a template, next thing it would be called 'a form' which had to be 'filled in', after which it would be a short step to accusing us of creating bureaucracy. Whereas no one could object to being asked for a plan since they already had responsibility for achieving a goal. Later, in dialogue with other governments (where I was advising rather than instructing), I relented. We produced a template.

A good delivery plan should:

1. Articulate its purpose. (What's it for?)
2. Set out the key actions and make clear for each one who is responsible and when it is intended to happen. (Who will do what when?)
3. Set out leadership and governance and how performance will be managed. (Who is in charge?)
4. Show the delivery chain, with its strengths and weaknesses and how, where necessary, it will be strengthened. (How will you make this happen?)
5. Incorporate benchmarking – what comparisons will be made of progress, both with implementation and against trajectory. (What are the reference points?)
6. Explain how key stakeholders will be managed. (What relationships matter most?)
7. Identify the resources necessary to deliver. (How will you pay for it?)
8. Anticipate and prepare to mitigate key risks. (What might go wrong?)

It is just a question, in other words, of being thorough; the sketch at the end of this section shows one way of making that thoroughness routine. There are hundreds, perhaps thousands of books about planning and about programme and project management on the market, some better than others. This is not one of them – but the list above is a summary of what they say. Helmuth von Moltke, the German field marshal who created the much admired (and much feared) Prussian army of the mid-nineteenth century, made the point that 'No plan survives contact with the enemy', but it didn't stop him from planning thoroughly.
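Purely as an illustration of how the eight-point template above can be made routine – this is a hypothetical checklist, not the Delivery Unit's actual template – here is how it might be encoded so that a draft plan can be checked automatically for missing elements:

```python
# A hypothetical delivery-plan checklist based on the eight questions above.
REQUIRED_ELEMENTS = [
    'purpose',          # What's it for?
    'actions',          # Who will do what when?
    'governance',       # Who is in charge?
    'delivery_chain',   # How will you make this happen?
    'benchmarking',     # What are the reference points?
    'stakeholders',     # What relationships matter most?
    'resources',        # How will you pay for it?
    'risks',            # What might go wrong?
]

def review_plan(plan: dict) -> list:
    """Return the elements a draft plan is missing or has left empty."""
    return [e for e in REQUIRED_ELEMENTS if not plan.get(e)]

# An invented draft plan with two gaps: no benchmarking, no risk register.
draft = {
    'purpose': 'Universal primary enrolment by 2015',
    'actions': [('Set district targets', 'Secretary - Schools', '2011-03')],
    'governance': 'Chief minister chairs a monthly stocktake',
    'delivery_chain': 'Province -> district -> school -> classroom',
    'stakeholders': ['World Bank', 'DFID', 'teacher unions'],
    'resources': 'Existing budget plus donor funding',
}

missing = review_plan(draft)
print('Missing elements:', ', '.join(missing))  # -> benchmarking, risks
```

Nothing about this replaces judgement – a plan can have all eight boxes filled and still be an essay rather than a plan – but, as with the trajectory, writing it down makes the gaps visible.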
On D-Day, things did not go according to plan for Ike either, and over the next few weeks – at the Falaise Gap, for example – some things went horribly wrong, but the value of the planning paid off handsomely. Paris itself was liberated on 25 August 1944, less than three months after D-Day. Implementing a government programme may not involve quite the same drama (and certainly won't involve the degree of force), but however good the plan, it won't survive contact with reality. The planning should, though. It ensures the resilience and the foresight to persist.

The key then is how those responsible know whether the facts on the ground are changing in the way that was anticipated and hoped for. More than anything else, that is about building routines into the way government works. This is the subject of the next chapter.

## 5

## Routines

One of my favourite walks is to climb England's highest mountain, Scafell Pike, from Langdale. This is not the shortest ascent, but on a fine day is surely the most glorious, a walk which demands serious, persistent effort over several hours and which offers magnificent rewards in return: panoramic views of rugged mountains, glacial valleys and the distant Irish Sea. A. Wainwright, whose incomparable guides to the Lake District mountains make wonderful companions, summarizes the walk thus:

> The walk falls into four distinct and well-contrasted sections.
>
> 1. To Mickleden Sheepfold – easy, level walking, Gimmer Crag on the right and the Band rising on the left.
> 2. Rossett Gill – gradual climbing. Bowfell's crags well seen on the left. Rossett Pike on the right.
> 3. Rossett Pass to Esk Hause – undulating grass shelf with two descents where streams flow to Langstrath, right...
> 4. Esk Hause to the Summit – easy gradients, but becoming very rough across a lofty plateau; two more descents before the final steep, strong rise.

He adds, 'This is a splendid walk, depending for its appeal on a wide variety of scenery and on the elusiveness of the Pike [the summit], which... remains concealed until the final stages.' He has a pencil-and-ink drawing of the moment when, after many hours, the summit finally comes into view, and comments:

> Many hearts have sunk into many boots as this scene unfolds. Here, on the shoulder of Ill Crag, the summit comes into sight, at last, not almost within reach as confidently expected by walkers who feel they have done quite enough already to deserve success, but still a rough half mile distant with two considerable descents and much climbing yet to be faced before the goal is reached.

I love this walk and Wainwright's description of it because it is such a perfect representation in genuine landscape of what it takes to achieve any major goal in life – the sense of endeavour, the challenges (such as Rossett Gill, which is much tougher than Wainwright suggests) on the way, the persistence when your muscles ache, the moments when you think success is within grasp but it's not, it's half a mile distant with two descents and significant climbing still to do. It is a metaphor for life, and it demands, like much of achievement in life, steady, relentless effort – the walk may be in four distinct phases, but in the end what gets you to the summit is putting one foot after another, however rough the terrain might be, again, again and again.

This chapter is about building the resilience into government that enables that persistence.
This resilience is partly a question of leadership and partly one of building the right processes – routines – into the way government operates. There is a whole section later (in chapter 7) about the leadership delivery requires; this chapter focuses first on the specific aspects of leadership required to get implementation started, and then on those crucial routines that can be built into the way government works to enhance the likelihood of success, once delivery is under way.

##### THE LAUNCH

> September 24 2010. Oprah Winfrey: 'So, Mr Zuckerberg, what role are you playing in all of this?'
>
> 'I've committed to starting the Startup: Education Foundation, whose first project will be a one hundred million dollar challenge grant...'
>
> Oprah again, interrupting, 'One. Hundred. Million. Dollars.'

Thus Mark Zuckerberg, founder of Facebook, announced his $100 million investment in Newark Public Schools. Why Newark, New Jersey? Because Zuckerberg 'believed in' the mayor of Newark, Cory Booker, and the governor of New Jersey, Chris Christie, who were there with him on Oprah's show.

Now that was a launch that had the 'wow' factor! The most famous chat show host in the world, the most famous face of the new generation of social media innovators. An up-and-coming African-American mayor, willing to take on the vested interests in his own Democratic Party. And the larger-than-life Republican governor, willing to challenge orthodoxy on his side of the aisle too. That set everyone talking; talking about the disaster that was Newark Public Schools; talking about the spectacular rescue to come; talking about the exciting new alliance of leaders of the new America.

Just one problem – no one had done anything yet. With a launch like that, expectations are raised sky high, but until someone starts work, the ground reality is unchanged. The gap between the two is vast. The result is a greatly increased risk. Rhetoric. Reality. Many a great idea has fallen through the gap between them. As Dale Russakoff put it in the _New Yorker_, Cory Booker, Chris Christie and Mark Zuckerberg had a plan to reform Newark's schools. In fact, 'They got an education.'

In this case, it would be wrong four years later to say there has been no progress – Cami Anderson, who runs the School District, points to increases in achievements, enrolment and physical infrastructure. However, even the celebrities involved would admit the outcomes are far short of the aspirations set out that day on _Oprah_. There has been effort and struggle and conflict, and some eked-out progress, but not yet transformation. It is certainly worth considering whether they might have made more progress had they quietly got started and sought attention on _Oprah_ only once they had some results to show. As Denisa Superville wrote, 'the road from now to 2017 [in Newark] could be extremely bumpy'.

As this example shows, a launch is both an opportunity and potentially a moment of risk. There are many ways to screw one up, as I have discovered myself. When my political masters made the mistake of allowing me to launch Education Action Zones (in January 1998) and I slipped up in briefing journalists, we ended up with front-page headlines roaring that privatization of the education system was at hand. A senior figure in government rang David Blunkett and asked him to 'fire the mad professor'. Fortunately for me, David declined to do so. The problem is that launch and delivery are integrally related.
Get it right, and those involved feel informed and excited to be part of it. You have a fair wind. You can get on with the job. Get it wrong, and people are confused. They don't know what the story is and they may very well feel ill-disposed towards your agenda. Added to that, those politically and administratively responsible for the policy are now on edge, fearful of making another blunder and on the defensive, constantly having to explain what they really meant. Meanwhile, the media, having smelt one rat, are actively looking for others.

All of which is to say, before a launch there are some tough questions you should ask yourself:

* Do you really need a launch?
* If so, how much do you want to raise expectations?
* What form will it take?
* Who will make the announcement? Is he/she a seasoned professional? Do you really need that celebrity?
* What's the message? Will the reality bear it out? When?
* How does it connect to the big strategic themes?
* What will the critics say?
* What is the one-line (hopefully memorable) summary?
* What is the policy's name?

Indeed, the value of giving your policy the right name should not be underestimated. In business you'd call it getting the brand right. Muhammad Pate was a remarkably successful health minister in Nigeria, whose mission was to improve health outcomes by taking preventative approaches and especially by ensuring more effective immunization campaigns. He set up a delivery unit in his health department and drove the agenda relentlessly for three years until he left office in 2013. In a country where delivering in government is notoriously difficult, he made impressive progress. The president loved the policy, partly because of its intrinsic merit, but partly also because Muhammad Pate branded it brilliantly. He called it 'Saving One Million Lives'. He had heard about a programme led by the American healthcare expert Don Berwick called 'Saving 100,000 Lives'. 'We added a nought and then [to plan it] worked backwards from there,' he explained to me.

##### THE LULL

You have established your goals (chapter 1); you have set yourself up to deliver them (chapter 2); you have decided your reform strategy (chapter 3); and you have done your planning (chapter 4). Now, if you haven't already, you need to get started. This may be the moment when, in von Moltke's terms, your plan does not survive contact with the enemy, but the most likely response at the outset is something much worse – a deafening silence. A lull. The doldrums.

I remember the awful feeling in September 1998 when the literacy hour – the daily literacy lesson – was supposed to go live in all the classrooms in 19,000 primary schools across England. After all the planning, all the preparation and all the controversy, this was the moment of truth. Would there be a boycott? Would there be demonstrations? Or, worse still, what if, in spite of all the training that had already taken place, the primary teachers of England just carried on doing what they'd always done? What would government do then? How powerless would I be? To be ignored: surely a worse fate than to be resisted.

I'm sitting at my desk on the fourth floor of the Department for Education with a view of Hawksmoor's magnificent west front of Westminster Abbey, wondering... Nothing. Are they doing literacy hours out there, or not? I visit a couple of schools, and they are! Good – but obviously they are going to do literacy hours when I visit. I summon the relevant top people from Ofsted, which inspects roughly 500 schools per month.
Surely they'll know? They tell me they'll produce a report for me next spring. Fine, I say, but I want to know what's happening now. Could I meet some actual school inspectors who've been in classrooms? Yes, I can, but not until October. By then, I'm thinking, six vital weeks will have gone by; what if there is no momentum? I talk it through with John Stannard, my wonderful director of the Literacy Strategy. He is calmly optimistic, as are his fifteen regional directors... but maybe they are too calm, too patient?

Then, out of the blue, from a source I hadn't expected, at the end of September, I get the feedback I want. Conor Ryan, David Blunkett's special adviser, had been at the Labour Party Conference and all the MPs he met had been telling him what a marvellous advance the literacy hour was; parents loved it! Now, at last, after a month, I know something is happening out there because constituents rarely approach MPs at all, still less often with positive stories. By the same token, I know that if there had been a campaign of opposition to it, the MPs would have made absolutely sure David Blunkett heard about it.

Experience helps too. I knew a wonderful school principal from Kentucky who was asked to turn around, in short order, a failing school in Pennsylvania. She told the superintendent that she would do so on one condition. She expected that for the first two months after she started at the beginning of September, the superintendent would hear a series of complaints; the condition was that every single complaint should be studiously ignored. Then, by November, the principal promised, the school would begin visibly to improve... which is what happened.

The lessons for leadership are clear – hold your nerve, trust experienced people and keep looking for a sign, like Noah waiting for the dove to return with a twig in its beak.

RULE 30 DON'T BE SPOOKED BY THE DEAFENING SILENCE (but keep listening)

##### THE IMPLEMENTATION DIP

You get your planning and early implementation right. Everything is new and exciting. After the lull, the early feedback is positive. There is a sense of celebration – it's working! Then there is the implementation dip.

I remember vividly explaining this concept to the cabinet in Colombia shortly after Juan Manuel Santos had been elected president in 2010. At the time, he was enjoying positive poll ratings of over 80 per cent. The meeting took place in the grounds of a beautiful villa outside Bogotá. No sooner had I mentioned that however well they were doing at that moment, the implementation dip lay ahead, than the blue sky turned dark, thunder followed bolts of lightning and the rain became torrential. The symbolism was not lost on any of us.

Figure 20

This is a moment of high risk. Celebrating success too early is, as John Kotter points out, a very easy mistake to make. It can have two problematic consequences. One is that the opponents of the change celebrate with you because they hope that you'll stop paying attention. Whereas a few months ago they were saying 'It's impossible', now they are saying 'We're already doing it.' The second is that you will think the job is done. After all the preparation, the euphoria. And in government there are always many other issues clamouring for attention: if this is going well, why not turn to the rest of the agenda? A huge error! This is a moment when serious leadership is required.
Not only would it be a mistake to turn away – and play into the critics' hands – the truth is, the state of affairs is probably about to get worse before it gets better. The euphoria of 'the new' wears off; suddenly it's about grinding out results. It is a hard slog; there are new, perhaps difficult, skills that now have to be mastered, not just understood. There are new, perhaps demanding, management arrangements. The excitement of a new boss wears thin when it turns out she is human too; and this is happening a thousand times across a system. Worse still, the new arrangements, with all the skills and management, don't seem to work. The real world proves stubbornly resistant to change and for a while things may seem to go backwards.

This is because there are four stages of learning* which everyone has to go through. Before the change was introduced, most of those due to be involved were _unconsciously incompetent_ – they didn't know what they couldn't do because no one had asked. Oblivious, and happy about it. Then, when they realized whatever it was was coming and new skills were required, they discovered they had a problem. They were going to be asked to do something they couldn't yet do. They were now _consciously incompetent_. Beginning to learn this new set of skills was exciting at first; then it dawned that whatever it was was harder than expected to learn, so it would be much easier to retreat to the old ways of doing things. If the leadership doesn't prepare for this moment, not only will the reform fall into the implementation dip, it will probably also never come out of it. A huge waste of time and energy; a huge missed opportunity. We might call this the Grand Old Duke of York problem – he marched them up to the top of a hill and he marched them down again. Except in this version they straggle down.

However, if the leadership anticipates the implementation dip, those involved can be taken to the next stage, _conscious competence_. As with the child who has just learned to ride a bicycle, the new skills begin to work, to make a difference, but each aspect of applying them requires conscious effort. Remember driving home that first time after passing your driving test? Everything from looking in the mirror to signalling left feels so self-conscious until eventually it becomes habit. And still, persistence is required until the new skills, the new arrangements, become the new normal: we used to do it that way, now we do it this way. Why didn't we do this years ago? This is _unconscious competence_, the highest level of skill.

Think about this from an individual level for a moment. You want to lose weight. You sign up for the gym and feel proud of yourself for taking this first step. You have that induction session in which the trainer shows you how to use all those machines – the rowing, the running, the cross-trainer, the weights. Very exciting. After a couple of sessions, you get the hang of them. You are now consciously competent... and then it dawns on you that everything so far was the easy bit. You go regularly for a while, but your weight stubbornly refuses to go down; then comes that cold, winter morning when it's barely light outside and you've got a pile of work waiting for you in the in-tray at the office. Maybe the right thing to do today is skip the gym and go straight to the office... yes, absolutely right... and the routine begins to crumble.
Imagine that psychology working across a system such as Britain's National Health Service with 1.2 million employees, or Punjab's schools with 300,000 teachers, or New York City's police department... To take a change through the implementation dip requires an act of leadership. You've done the planning, you've survived the lull, now you have to cross the gap. On that walk up Scafell Pike, this is the moment when the glorious glacial valley of Mickleden is behind you. You have covered the ground in great strides, listening to the gurgling of the running brook – or 'beck' as they call it in those parts – chatting with your friends. And now Rossett Gill rises ruggedly, even brutally, before you. It's still easy to turn back... Scafell Pike from Langdale is not a walk for the faint-hearted, believe me, and neither is any major reform of a government or public service.

RULE 31 ANTICIPATE THE IMPLEMENTATION DIP (and demonstrate the leadership required to get through it)

##### DISTRACTION

In June the monsoons come to Pakistan, Northern India, Nepal and the Himalayas. Rivers such as the Kali Gandaki, flowing down between Annapurna and Dhaulagiri, turn into broad, raging torrents crashing out of the mountains into the floodplains below. The huge rivers – the Brahmaputra, the Ganges and the Indus – sluggish before the monsoon, become vast and swirling as they head to the sea. In 2010, the monsoon rains were unusually heavy, not just in the high mountains, but across the Indus basin. They were the heaviest for eighty years – and it kept on raining. The ensuing flood covered one fifth of the land area of Pakistan. Over 20 million people were affected and around 2,000 lives lost. Estimates of the economic impact vary, but some suggest the cost to the country was over $40 billion. Ban Ki-moon, the UN Secretary-General, called it the worst disaster he had ever seen. Not any flood. The Flood.

For just under a year before The Flood, I had been trying to encourage Pakistan's leaders at federal and provincial level to take education reform seriously; to stop simply writing reports about how awful it was and actually do something. I had been due there that August, but with the country underwater and everyone understandably distracted by the inundation, I postponed my visit. The waters were receding by the time I came to Islamabad in September. Forty or so of the country's education leaders assembled as planned in a small, somewhat tacky hotel on the outskirts of the capital for a two-day session on the implementation of education reform. What I found – maybe I should have expected this, but I hadn't – was forty people who were utterly shellshocked. Most had been pulled off their education duties to tackle the flood and its consequences. In effect, what they said to me, led by an influential adviser to the prime minister, was, 'We can't do education reform any more; we've had a flood.'

For me, this was a make-or-break moment. I could have been understanding and sympathetic. The flood had been devastating. I _knew_ that, though I hadn't _felt_ it as they clearly had. I could have accepted their plea, in which case there would have been little point continuing my efforts. (It would actually have been a relief from a personal point of view – I had too much to do and it was demanding to visit Pakistan every month.) What I did, though, was the opposite. Something in me made me seize this moment. I chose to be ruthless. 'Did the flood make your schools better?' Silence. 'Did the flood make your schools better?
You agree you had an education problem before the flood... you've still got one after the flood.' Silence.

Crises will come and go in all countries, more often in some than in others; at these moments, someone has to lead, has to make sure that while the crisis is addressed, it doesn't overwhelm the pre-existing agenda for change. Allow that, and failure beckons. Hence Harold Macmillan's famous lament about why he had not achieved more: 'Events, dear boy! Events.' That ruthless moment in the tacky hotel was my leadership moment. Exactly a month later, we had the breakthrough in Lahore that led to the Punjab Education Roadmap, which resulted in improved education for millions of children.

RULE 32 DEAL WITH CRISES (but don't use them as an excuse)

##### STANDARDS

They say that in Kyoto, the former capital of Japan, 'If you throw a stone it will hit either a Buddhist monk or a university student.' It is an ancient place of religion and learning. I loved visits to the Golden Pavilion and the Nijo Palace, but the most important learning for me – bordering on the religious, in fact – came from the journey there.

I travel from Tokyo to Kyoto on the 9.10 a.m. Shinkansen, or bullet train. At 8.45 a.m., the train arrives at the platform, having come from Kyoto. Four women in pink wait at the door of each carriage. They bow to disembarking passengers. As soon as the carriage is empty, they climb aboard and clean it. They make it spotless. Then the seats, which were all facing forwards as the train came from Kyoto to Tokyo, are reversed so that on the journey back to Kyoto we'll all face forwards too. The driver walks past, in immaculate uniform, proud to be about to drive this magnificent train. And it is magnificent. The engine looks more like a rocket than a train and the white carriages are low and sleek. At 9.05 a.m. the women in pink have completed their work and we embark. The seats recline. There are footrests and hooks for jackets. Later I discover the toilets are spotlessly clean too. The wifi works. On time (needless to say) the train begins to move. There is no jerk or clank. Just smooth acceleration. The engine doesn't roar, it purrs. It's not far to the first stop at Yokohama, but thereafter you feel the sense of acceleration and then the genuine speed. On the right, views of Mount Fuji, snow-capped, rising above the clouds; on the left, the coastline. There are three stops before Kyoto, where the train arrives on time (needless to say), having completed 285 miles in just over two hours. Since it first came into service, the Shinkansen has transported 5.6 billion passengers. There has never been a serious accident, and the average delay is under a minute.

Travelling up and down from London to Devon as I do, I experience First Great Western (FGW) trains often. FGW are not a bad train-operating company by British standards. Often their trains are on time; sometimes they aren't. In Britain, being within ten minutes of the advertised arrival time counts as on time anyway – inconceivable in Japan – and sometimes we drift inexplicably. The lost time accumulates. There is often puzzling congestion at Reading (don't they know we're coming?), where we lose more time. Meanwhile, the toilets are often dirty, the water in the basin doesn't always run, and the windows are grimy. Yes, I know the Japanese invested in infrastructure back in the 1970s and 80s when we in Britain chose not to, and that makes a difference.
I know too that however hard FGW tried, they couldn't provide a view of Mount Fuji (White Horse Hill isn't quite the same) but – and this is the point – they could do something about the small things: the cleanliness, the pride of the staff, the explanations when things go wrong... and they could show a greater sense of intent rather than, as it often feels, just shrugging their shoulders when delays occur. In short, they could set higher standards. The key word in this account is 'standards'; not bad by British standards, but by Japanese standards, awful.

Now, there is a key point about standards: it's not just the provider who is responsible, it is the customer too. The reason why First Great Western standards are as they are is that we allow them to be. The standards are embedded in culture, the organizational culture and the wider culture. However clear your priorities, however good your strategy and your planning, it is really important to remember that, if success depends on setting new standards, you are embarking on culture change. That will take time. And the only way to succeed, therefore, is to build those new standards into your routines, into what you do all day every day. You have to change expectations at every level.

Such rigour creates resilience. My colleague Denise Todd was in Japan when the terrible earthquake and tsunami hit. She was in Tokyo, so not at the epicentre, but for everyone there it was still a major trauma. The following day she travelled from Tokyo to Kyoto on the bullet train; it left on time and arrived on time. That is what setting high standards does for performance.

##### DILIGENCE

After standards comes diligence. _Chambers Dictionary_ defines 'diligent' as 'steady and earnest in application' and 'diligence' as 'steady application; industriousness'. Diligence matters because however the standards are set, the key is that they are steadily applied on every occasion. For some government programmes this is not just desirable but essential. Eliminating hospital-acquired infections, for example, depends on everyone washing their hands every time. Eliminating polio demands the same attention to detail. Many health and safety procedures, ditto. When it comes to children learning to read, the issue may not immediately put lives at risk, but diligence there matters too. Or policing – the recording of a crime or taking down a witness statement; done shoddily it can lead later to the collapse of a case.

Diligence and reliability go together. Reliability isn't very exciting, but it is really important if you are to succeed. Too often, public officials contrast reliability with creativity or imagination. They imply reliability is somehow beneath them; what's needed is an act of brilliance instead. This is self-deception. The opposite of 'diligent' is not 'creative', it is 'shoddy'. The opposite of 'reliable' is not 'brilliant', it is 'unreliable'. And standards, once set, change the lives of citizens only if they are diligently applied. This requires a cultural shift in the professions, as set out in Table 12.

**The Required Cultural Shift**

Table 12

In its school system, Singapore comes close to illustrating what this shift looks like. Atul Gawande, the wonderful writer on largely medical issues, puts it this way:

> People underestimate the importance of diligence as a virtue. No doubt this has something to do with how supremely mundane it seems... There is a flavour of simplistic relentlessness to it.
> And if it were an individual's primary goal in life, that life would indeed seem narrow and unambitious.
>
> Understood, however, as the prerequisite of great accomplishment, diligence stands as one of the most difficult challenges facing any group of people who take on tasks of risk and consequence.

The point could not be better made. In his later book _The Checklist Manifesto_, Gawande builds on this theme. Mistakes will happen, he argues. Sometimes this will be because we don't know enough; sometimes it will be because we didn't apply what we did in fact already know. The solution, Gawande goes on to argue, is simple. Experts, including top professionals, need checklists. The checklist, seen from this perspective, is not something reductive or limiting. It is a prompt for diligence and an underpinning which allows professionals to be both reliable and creative. This argument is fundamentally important and undeniable. Checklists underpin professionalism; and the opposite of 'professional' is 'amateur'. If delivery is to become a science, therefore, we need standards, diligence and checklists, applied with that 'simplistic relentlessness' to which Gawande refers. In short, we need routines.

##### ROUTINES

In the past few years, there has been an outbreak of excellent books on the subject of popular social psychology. One of them, by Charles Duhigg, examines _The Power of Habit_. He looks at the challenges we all face in breaking bad habits and establishing good ones – neither task is easy, as we all know. Early in the book, a major in the army tells Duhigg how becoming conscious and systematic about habits has changed his life.

> Understanding habits is the most important thing I've learned in the army. It's changed everything about how I see the world. You want to fall asleep fast and wake up feeling good? Pay attention to your night time patterns and what you automatically do when you get up. You want to make running easy? Create triggers to make it a routine... My wife and I write out habit plans for our marriage. This is all we talk about in command meetings.

I keep wondering whether the major's marriage is going well – maybe he is taking the idea too far – but, more significantly, I see the connection between the psychology of habits and what I was trying to do in Downing Street in the early days of the Delivery Unit – build good habits and break bad habits.

Later in the book, Duhigg himself makes this connection between personal habits and organizational ones by telling the story of Paul O'Neill. After a career in the government bureaucracy, O'Neill became a successful businessman, and in 1987 was approached to lead Alcoa, America's leading aluminium company. To the surprise of the markets and the workforce he had inherited, he built his entire strategy around a single indicator – and it wasn't sales or profits per unit of aluminium. In fact, it was not a commercial goal at all. It was, in his words, 'to make Alcoa the safest company in America. I intend to go for zero injuries.' His insight was simple:

> I knew I had to transform Alcoa. But you can't order people to change. That's not how the brain works... If I could start by disrupting the habits around one thing [safety], it would spread throughout the entire company.

Suffice it to say, his strategy worked. Under his leadership, Alcoa's safety record became exemplary and its commercial track record enviable. By changing working habits, he changed effectiveness and motivation, and as a result profits soared.
Interestingly from our point of view, O'Neill had learned about the importance of habit during his seventeen years in government. I wish I had met him back in 2001 when I was establishing the Delivery Unit, because by then I was starting to arrive at exactly the same insight. I had spent four years in the Department for Education, and had seen that where we had built systematic routines to check progress, as with literacy and numeracy, we had been successful; where we had not, as with Education Action Zones (an attempt to improve schools in tough locations), we had failed. So much of government seemed haphazard – another new idea here, a media story there, a crisis in the making on the horizon... As a result one often simply tried to get through the day. And yet we in the Department for Education were seen by the rest of government as _the_ success story, and we had results to show for it: fewer failing schools, more children able to read and write and do sums.

Early in 2001, before the election and the establishment of the Delivery Unit, I was asked to present to the top team at the Home Office, where they were struggling to satisfy Blair that they had a grip on crime. In a meeting at No. 10 that became legendary, the Home Office officials had attempted to convince the prime minister that crime rose in a recession because there was greater poverty and rose in a period of prosperity too because there was more stuff to nick. This had not gone down well. In my presentation, I see that I was struggling towards the idea of habits and routines but had not quite got there yet; I talked about 'continuous monitoring and problem-solving' and 'maintaining the focus even when it gets boring'. I was almost there, but a conversation with O'Neill would definitely have been enlightening.

In Downing Street they were nowhere near. They were responding to the media, asking for new ideas, summoning ministers haphazardly to explain, and on the lookout, as Blair said in that infamous note that leaked, for 'eye-catching initiatives'. It was in these circumstances that I started urging Downing Street to change the way its Policy Unit worked so that it pursued implementation. Then, when that seemed too complicated, I proposed instead that they set up a separate delivery unit, which, eventually, is what came to pass. At the heart of this idea, I put routines – routine reporting, routine data collection, routine monitoring, routine problem-solving. Later, when I looked back at what had been happening and what we eventually put in place, I drew a contrast between 'government by spasm' (the old way) and 'government by routine' (the new way). I turned it into a PowerPoint slide (Table 13).

Table 13

I've used this slide now on a number of occasions with people from governments all around the world. They 'get it' immediately. The first panel all too often describes their daily experience; the second, they see, would be so much better, if only they could change their bad habits for good ones. The rest of this chapter is about how to do that through establishing routines that work.

RULE 33 GOVERNMENT BY ROUTINE BEATS GOVERNMENT BY SPASM (it's not even close)

The whole point of routines is that they are dull. Predictable by definition, they are the opposite of a surprise (which we call a break in the routine). As Charles Duhigg points out, routines should become a habit and the whole point about a habit is that you don't have to think about it.
'Boring' might be a problem in many walks of life – such as sport, art or writing – but in government it is quite simply necessary. There is no shortage of surprises in government – in fact, there are far too many of them. Few presidents or prime ministers leave office complaining that they have been bored. Far from it. Listen to James Callaghan, who had a very tough time: 'It is never a misfortune to be Prime Minister... it is absolute heaven.' He doesn't sound bored. And then there's Churchill's famous (perhaps apocryphal) answer to a young woman who sat next to him at a dinner shortly after the war: 'But you don't understand, my dear, I loved every minute of it!' Certainly not bored. And countless leaders around the world would say the same.

When I ran the Delivery Unit for Blair, my task could be defined as having responsibility for bringing the dull, the boring and the predictable to the heart of government. This might be why one journalist described a presentation I gave as 'comparable to a lecture from the speaking clock'. Some people might have been offended by such denigration, but for me it was confirmation that I was doing my job, which was, in common parlance, to grind stuff out. Somebody has to if results are to be delivered, especially in a world defined by volatility, uncertainty, complexity and ambiguity. And here's the secret: you have to find ways to make the dull, the predictable and the boring absolutely riveting, not to journalists, but to the key figures inside government. In what follows, I'll do my best to pull off the same trick for the reader.

#### Monthly Notes

Years ago, the _Economist_ ran an advert that proclaimed 'It's lonely at the top, but at least there's something to read.' There certainly is something to read, above and beyond the _Economist_. Even the most assiduous prime minister, president, governor or minister couldn't possibly read everything presented to him or her. So when, early in my time in No. 10, Jeremy Heywood, the prime minister's private secretary, suggested I write a regular monthly note for Tony Blair on each priority area, I wondered how on earth I'd get him to read it. I debated the idea with my team and they were cautious – why write a monthly note, they asked; much better to write a note when there's something to say. I was tempted to agree with them, but that just shows I had not yet fully learned the importance of routines. With Heywood constantly reminding me that monthly notes updating Blair were vital, both to make sure I got noticed and to make sure the prime minister kept paying attention, eventually I insisted. And we began sending them; once a month, a note on progress towards the health targets; the same on crime, on education and on transport. The routine had begun.

Four aspects of the notes were designed to make them interesting. First, it was vital they were well written. They came to me late on a Friday morning and I personally edited them if necessary, to try to give them life. Second, there was the fascination of waiting to see what the next update of the data would reveal, which is why football fans love league tables and why baseball fans are data geeks. We would do the same with the monthly data on health or crime. Once we had a clear, elegant way to present the data, we could simply update it each month – the prime minister would be waiting to see whether the data was going the right way or the wrong way.
It wouldn't quite be true to say he was on the edge of his seat, but there was certainly an element of suspense. Third, the iconography of the note – the way it looked – became familiar, so the prime minister could find what he wanted to know rapidly. (As with the newspaper you like, you know your way around it.) Finally, I wrote a brief covering note of just a paragraph or two, highlighting what I thought were the key points and – always, always – suggesting a solution wherever we identified a problem. Often I would add at the end in bold a question such as 'Do you agree?' Blair could then just say 'Yes' or 'No', but the key from my point of view was that it made it more likely he would read the note.

Similarly, we have designed beautiful charts for the chief minister of Punjab. For him, we also provided monthly updates of what we call 'heatmaps'. These are stunning maps of Punjab and its thirty-six districts: a district that is on-target on the indicators is green; a district way off-target is red, and there are of course the two shades in between. This information is gold dust for the chief minister. I know he shares the sense of suspense about what the data will say. A 2014 email from him says, 'I'm really glad to know the improvements in Roadmap data; however I would be equally interested to know the trends in enrolment data which I believe is a stepping stone for advancing the reforms...' I had to tell him he'd have to wait another week for that; another leader on the edge of his seat.

At the time Jeremy Heywood and I hit upon the monthly note concept, I assumed that we had invented something new – but that was hubris. Long after I had left No. 10 I discovered, reading Christopher Andrew's monumental history of MI5, that its leaders had had almost precisely the same debate in relation to Churchill during the war. As Andrew explains, 'Duff Cooper proposed that the Security Service send the Prime Minister a monthly report of two or three pages.' Given that Churchill had just expressed fascination with the incredible story of Agent ZIGZAG, they were a bit worried he'd get overexcited. Guy Liddell wrote cautiously, 'There are obvious advantages in selling ourselves to the PM, who at the moment knows nothing about our department. On the other hand, he may... go off the deep end and want to take action, which will be disastrous to the work in hand.' They knew their Churchill, but decided to take the risk of a monthly note because, as Sir David Petrie explained, 'It is only fair... that the good work of the Service... should be brought to the notice of the Prime Minister.' There is only one thing worse than being noticed, and that is not being noticed.

So their first monthly note was prepared and submitted to Churchill on 26 March 1943. It has to be said it was easier for them to make their notes riveting for the prime minister than it was for me when updating him on the punctuality (or lack of it) of trains. Here is an extract from that first note:

> In all 126 spies have fallen into our hands. Of these, eighteen gave themselves up voluntarily; twenty-four have been found amenable and are now being used as double-cross agents. Twenty-eight have been detained overseas and eight were arrested on the high seas. In addition twelve real, and seven imaginary persons have been foisted upon the enemy as double-cross spies. Thirteen spies have been executed and a fourteenth is under trial.
It may lack graphs and maps, but it has everything else: data, clarity and telling detail, including the brilliant 'seven imaginary persons' and some real executions. Churchill added a comment at the bottom of the note: 'Deeply interesting.' An understatement, I think. Certainly Blair never wrote anything quite so complimentary at the bottom of one of our notes.

One minor digression is required here because it is so astonishing. Just as with us, the MI5 leadership were determined to make sure the note was well written. They naturally asked their best writer, who was also handsome and loved by everyone, to draft it. He was Anthony Blunt, who many years later was revealed to have been a Soviet spy. As a consequence, Andrew points out sardonically, 'it is highly probable they [the monthly notes] went to Soviet intelligence as well – and quite possibly to Stalin personally.'

So, the lesson for the science of delivery is that while matching MI5 for sheer riveting detail will probably be impossible, it is hard to beat the value of a good monthly note. It keeps the leader informed and interested, means you know what he or she thinks, and demands the discipline of regularly updating the data. Crucially too, it requires a synthesis of the big picture on a regular basis. The narrative is constantly being updated. This creates a sense of momentum. I told you it was possible to make the dull, the predictable and the boring absolutely riveting. Well, deeply interesting at least.

RULE 34 PREPARE MONTHLY NOTES FOR THE LEADER (and make them 'deeply interesting')

#### Routine Meetings or Stocktakes

It would be hard to write two words together more likely to induce a yawn than 'routine meetings'. We have discussed routine already; meetings are the dread of leaders in almost every sphere, and their diaries get crammed full of them. In his witty and poignant account of four years as Labor Secretary in Clinton's cabinet, Robert Reich comments at one point: 'I'm scheduled to the teeth.' He then lists his commitments for that day (2 March 1993), which begins at 6.45 a.m. and finishes at 9.00 p.m., and in between has two conference calls, two media interviews, a speech, a reception, forty-five minutes of 'telephone time' and eleven meetings! 'No one gives me a bath, tastes my food or wipes my bottom – at least not yet. But in all other respects I feel like a goddamn two-year-old.' Many leaders through the ages have felt similarly overloaded and infantilized. The Audit Commission in Britain once entitled a brilliant report they wrote on the ineffective use of time in local government _We Can't Go On Meeting Like This_.

In this context, when a fan of the science of delivery arrives on the scene suggesting routine meetings, he or she risks being run out of town. As with the routine notes, the trick is to convince the relevant leaders that these routine meetings will be different, that they may even be riveting and actually save time. Such claims will surely be met in the first instance with a sceptical raised eyebrow. The only counter to this is to ensure that the meetings, when they do happen, really are different.

What does different look like? Implicitly everybody knows – just ask a group of top people what the characteristics are of the (few) meetings they actually look forward to... The problem is that meetings like this are the (rare) exception, not the rule. For the science of delivery to work, the meetings related to it have to have these characteristics. In truth, it's not that difficult.
It is simply a matter of being conscious of how the meetings should unfold and planning them carefully in advance. It is a question, in other words, of replacing bad habits with good ones.

**Eleven Characteristics of a Good Meeting**

* A well-planned agenda with major focus on just one or two items.
* Enough time but not too much, and a clear endpoint.
* The right people in the room, not too many hangers-on and no one who drones on and on.
* Well-chaired, with a clear opening and a strong, action-oriented summing up.
* Good, sharp briefing materials in advance.
* A shared acceptance of any data to be used (so time isn't wasted arguing about the validity of the data).
* A brief opening presentation. (No more than five minutes. Really.) This is in case someone, perhaps a prime minister, hasn't read the briefing.
* A collaborative atmosphere that allows – encourages even – divergent views.
* Live theatre, not over-planned ceremony.
* Genuine deliberation.
* Start and finish on time – or even early.

Table 14

There are two pitfalls with the planning of a meeting. On the one hand, if there is insufficient planning, all kinds of unsatisfactory consequences can follow – too much time spent on early, unimportant items and then rushing through the items that matter; a rambling, unfocused debate about whether the data is valid or the briefing adequate... and so on. On the other hand, over-planning can kill a meeting too. The biggest fear most civil servants have about a meeting is not that it will reach a bad decision, but that there might be 'a scene'. They want 'orderly' and 'smooth', and fear surprises and disruptions. As a result, civil servants over-plan, which means they are brilliant at state funerals, but less good at ensuring meetings are genuinely open. Planning for an open, productive meeting paradoxically takes more time and imagination than over-planning does. Hence the emphasis in Table 14 on live theatre, not over-planned ceremony.

The fundamental issue is that the success of a meeting on delivering outcomes requires genuine deliberation. All too often in governments around the world this is what is lacking, and the lack of it can have the direst of consequences. When I picked up _The Blunders of Our Governments_ by Anthony King and Ivor Crewe in a bookshop just off Trafalgar Square, I did so with some trepidation. I knew it was a book about thirty years of blunders in British government and that a significant number of those it focused on were during the Blair years. I knew that the first thing I'd do once I got out of the store would be to look myself up in the index. (Yes, I know what that says about me.) However, given the title of the book, on this occasion I hoped _not_ to be in the index, and discovered that that was the case.

Even more valuable than the accounts of the blunders were the lessons King and Crewe drew from their research. Perhaps the most important, and certainly the most relevant to the science of delivery at this point, was what they called a 'deficit of deliberation'. They write:

> 'Deliberation' is not a word one hears very often in connection with British politics – for the good reason that very little deliberation actually takes place. British politicians meet, discuss, debate, manoeuvre, read submissions, read the newspapers, make speeches, answer questions, visit their constituencies, chair meetings and frequently give interviews, but they seldom deliberate.
This sentiment is startlingly similar to how I saw the routine stocktake meetings on delivery under Blair. I didn't use the words, but I think it would be fair to claim that, through the stocktakes, we were addressing the deficit of deliberation which King and Crewe rightly identify. Helpfully, King and Crewe go on to describe the characteristics of good deliberation.

1. Careful consideration, weighing up.
2. Not being over-hasty, taking one's time.
3. Conferring, and taking counsel.

This reinforces my view of a good stocktake too – that there is a shared evidence base, that the right people are in the room or have been consulted and that, above all, there is an honest conversation (a 'weighing up' in their terms) in which difficult issues aren't avoided and divergent views are welcome.

On the face of it, stocktakes generally would not have met King and Crewe's second rule to take one's time. We were usually in a hurry, and indeed trying consciously to counteract an inbuilt bureaucratic tendency to delay. However, the value of routine stocktakes – not each one, but the regular series – is that you make a provisional decision in each one knowing that you can come back to it in the next. You can decide to try something at one meeting, see whether it works, and refine it or not, as the case may be, at the next. Sounds obvious, but fundamentally a series of stocktakes focused on achieving a goal becomes a learning process, and once you get that going, major blunders are much less likely. Mistakes, on the other hand, will be common, and welcome because they provide an opportunity to learn.

Let me give an example from Punjab. Following the May 2013 election, the chief minister and his party were committed to the introduction of District Education Authorities across Punjab. These would provide a new, elected tier of government responsible for education in each of the thirty-six districts. This was a long-standing party aspiration and, given that Punjab is a province of 100 million people – bigger than most countries – it does not make sense to try to run everything from the centre in Lahore. However, the Education Roadmap, which the chief minister loves, depends on the province keeping hold of certain vital responsibilities – data collection to enable benchmarking of districts, setting overall strategy and targets, insisting on merit-based appointments rather than patronage and, perhaps most important of all, holding district-level leadership to account and being able, in the last resort, to intervene. As we have seen in chapter 3, these are vital ingredients of an Awful to Adequate strategy. In short, two good ideas – DEAs and the Roadmap – in tension with each other.

In the normal course of events, without stocktakes, these two ideas would have been developed separately, moved towards realization independently and collided, causing some kind of crisis, one possible outcome of which would have been the demise of the Roadmap. With stocktakes, a different course of events was possible. Before the DEA legislation became something more than a gleam in the eye, I raised the potential conflict. The chief minister enjoyed it when I said that just as the British had lost an empire in 'a fit of absence of mind', he risked doing the same with his Roadmap. He also got the point. Then, at successive stocktakes, we refined our collective thinking about how these two ideas might be made consistent with each other, or better still might combine to strengthen our capacity to deliver at every level.
Because we had all the right people in the room – the chief minister, chief secretary, chairman of planning and development and finance secretary as well as the schools secretary – all the relevant interests were present and we could have a rounded discussion. We established a small working group to go through the details before the next stocktake, at which point we could further refine our thinking. That, in short, is true deliberation. It did not mean there would not be problems. It did mean there wasn't a crisis. It also meant that there were opportunities we would otherwise not have considered.

There is one other massive advantage of planning a series of stocktakes into the future, and it is so obvious it is easy to miss: each stocktake provides a deadline. It might not be a real-world deadline like the go-live date of President Obama's healthcare website, but a false deadline – one that is a simple construct of the leader's calendar. Nevertheless, it creates a huge opportunity for those responsible for delivery to set deadlines for others to meet. In conversation, Martin O'Malley calls this 'the circle of inevitability'. My friends and colleagues Katelyn Donnelly and Saad Rizvi, who were the beating heart of my team on the ground during the early days of the Punjab Education Roadmap, couldn't help noticing how, in the week just after a stocktake, the Punjab officials always claimed to be 'busy' and were hard to pin down, whereas in the week before a stocktake those same officials were clamouring for advice and working late to complete tasks in time. Once they saw that the stocktake would be an honest conversation and the chief minister would know whether or not a task had been successfully completed, they realized that the only way to impress him was to actually get things done. Warm words and excuses would no longer get them through a meeting. Perhaps not quite so visibly, the same thing happened during my time in No. 10, and in every other place where effective routines have been established. After all, it's no more than human instinct, the same instinct which means we revise much more rigorously in the last week before an exam than we did earlier; and that has us rushing to tidy up before mother arrives.

Looking back at my diary, written at the time, it is notable how my accounts of stocktakes focus on the quality of the relationships as much as on the content. This is partly because I knew the formal minutes would capture the content and data, but it is also a key insight into delivery – if the relationships are strained, delivery is much less likely to occur than if relationships are good. Delivery is a soap opera as well as a documentary. The importance of priorities, data and routines is clear too – otherwise the meetings would have defaulted to the media stories of the day. Just reading my account of a random week in January 2004 – a week which began with a meeting on Monday morning with Blair and finished with an awayday at Chequers on Friday, at which I was expected to open proceedings, and in between included two stocktakes on Tuesday and Wednesday and a visit to Canada leaving on Wednesday evening and returning in time to get to Chequers on Friday – makes me exhausted, and reminds me of the commitment I made to Blair early on: delivery never sleeps. It also reinforced the fact that a focused, well-briefed leader makes a big difference.
Sometimes the initiative for routine meetings will come from the political leaders themselves; there are those who know they need order and routine to succeed. One such was President Calvin Coolidge, who succeeded to the office in August 1923 on the unexpected death of Warren Gamaliel Harding. Coolidge rarely makes the experts' lists of the top American presidents, but in his own terms he has a case. He set out to cut the budget and unleash American business, and all the indicators suggest that on those terms he was a tremendous success. Not everyone, of course, agrees with his goals, and soon after his term of office finished, America was plunged into depression, but he certainly delivered on his promise. And, as Amity Shlaes makes clear in her excellent biography, routine meetings were absolutely vital to his success.

> The [budget-cutting] meetings took place once a week. He scheduled them at 9.30am on Fridays before the session with the full Cabinet at eleven... Together, the president and his budget director [Herbert Mayhew Lord] cut, and then cut again... there was a sense of awe and duty to their meetings.

At one of their early meetings, they decided to send 'a stiff letter to all government departments, warning that they needed to remember to spend less – $300 million less in total'. Soon afterwards, to prove their determination, they agreed to take one-fifth out of the budget of the District of Columbia (which was then controlled by the federal government). They used their routines to look at every detail. Soon enough, civil servants were told that to be issued with a new pencil they would have to return the stub of the old one, to prove they really had used it up. The _Los Angeles Times_ summarized the new president's approach in a headline: 'President To Be Own Watchdog'.

Five years later, in the last three months of his presidency (and after his successor Herbert Hoover had been elected), Coolidge was still at it. In December 1928, when he might have been expected to be winding down, incredibly, he had five meetings with Herbert Mayhew Lord, and the following month Lord himself took the opportunity to celebrate the budget surplus: 'The pruning knife fell here, there and everywhere in the grim fight for a balanced budget,' he said. Perhaps not everyone's idea of triumph, but undoubtedly a triumph for relentless focus and the disciplined use of routine meetings. Sometimes, particularly after a period of hyperactivity in government, such as the First World War, leaders like these are perhaps a necessary corrective.

RULE 35 ROUTINE MEETINGS OR STOCKTAKES CREATE FALSE DEADLINES (and solve problems before they become crises)

##### REVIEWING THE DELIVERY AGENDA

'Deeply interesting' monthly notes and regular stocktakes, perhaps every quarter, help establish a rhythm and the effective use of time, but periodically something deeper is required. It is necessary to review not just each priority in the delivery programme, but the programme as a whole. Is it broadly on track? Where should the overall strategic focus of the president, prime minister or governor and his or her delivery unit be in the next few months? What lessons can be learned from across the programme that could be applied generally to accelerate and deepen progress? What has happened elsewhere in the world that we might learn from?
These are big questions with immense potential to strengthen a government's capacity to deliver, but they are not the sort of questions that will arise in the preparation of a monthly note or a routine stocktake; still less in the kind of weekly meeting Coolidge favoured. For questions such as these, a six-monthly rhythm makes most sense. A year could work, but it is a big chunk of a term of office so adds risk, while anything less than six months doesn't give enough time to do the analysis and learn the lessons. This is a good moment to reinforce a point made in a previous chapter: a major advantage of having a delivery unit or equivalent is that it can prioritize these big, strategic delivery questions as well as the day-to-day driving of progress. The rest of the bureaucracy, whether at the centre of government or in the departments, is unlikely to have the time, inclination or capacity to do so.

Najib Razak, the prime minister of Malaysia, whom we have met before, favours an annual review. He is an ardent fan of the science of delivery and has perfected the art of routine meetings. He has six priorities and sets aside time each Monday to review progress on one of them – so each priority comes round every six weeks. Every week, on a Monday, 9.30 a.m.–12.30 p.m. is delivery time. His head of delivery, Idris Jala, sits in the cabinet and so is able to report regularly there, but once a year the prime minister wants something more substantial. Is the programme really working? And how should it be developed? So he established an International Advisory Committee (IAC) which meets once a year. Idris Jala explains the benefits: 'We have a view that is inside-out and I really want an outside-in view, a fresh view. [The IAC] usually tell us, this is how other countries do it... they give us a lot of pointers on how to make performance better the subsequent year.'

The committee is sent extensive documentation and asked to submit comments in advance, both on progress on each of the priorities and on the overall strategic challenges ahead. (In 2013 all the material came not on paper, but on a beautifully pre-prepared tablet device, demonstrating Malaysia's tech-savvy edge.) The committee involves international experts from the World Bank, for example. It includes me too. I have always submitted comments and suggestions for the future, such as urging independent checks of the data systems and stressing the importance of not letting early success distract the PM and his cabinet from the task ahead. I haven't always been able to attend the meetings in Kuala Lumpur, but when I did, what I saw was a PM and his delivery unit head willing to give extensive time to listening, learning and thinking, and then strengthening their approach. Obviously there is a public relations benefit to the government of having an international panel express support for the programme – but this is a side benefit and indeed, if the programme wasn't delivering, it would be a risk.

In the Blair administration, we ultimately developed two of these more strategic routines; the first was a six-monthly exercise undertaken within the Delivery Unit, largely by my staff and their counterparts in departments. This led to a comprehensive report to the prime minister on what was going well and what was going less well across the programme. As part of this process, the plans for each priority for the next six months were agreed with departmental heads and then shared with their ministers and with Blair.
The lessons from the programme as a whole were drawn out too, and these were shared with the cabinet. The second routine, which began in July 2002 and became an annual highlight for me (and an hour of torture for the world's media), was my joint appearance with Blair at one of his monthly press conferences. Here I made public my report on where we had got to on delivering the domestic policy priorities. The next two sections look at each of these two routines in turn.

#### The Assessment Framework and the League Table

It was one of those windows you'd never notice unless you looked for it. Although it was large and gave out onto Parliament Street – just across from No. 10 and the Cenotaph – it was grimy and, if you bothered to peer through the grime, your view of the room would be further inhibited by the heavy, dingy net curtains. We were told these were 'bomb' curtains designed to prevent flying glass in the event of an explosion, and you could tell they would work! Once, 53 Parliament Street had been the offices of Britain's greatest engineer, Isambard Kingdom Brunel.

In the early autumn of 2001, in this meeting room (known as the Westminster Room) behind the grimy window and the unwashed net curtains, a small team from the new Prime Minister's Delivery Unit invented a very beautiful thing. It served the Delivery Unit and the prime minister extraordinarily well for four years, since when it has been sadly neglected, for which I blame myself more than anyone. If I had to choose just one technique on which to found a science of delivery, this would be it. We called it (blandly) the Assessment Framework.

This is the problem we were seeking to solve: we knew the data, once it started flowing on each of our goals, would tell us whether we _were_ making progress and, once we had trajectories in place, whether we _were_ on track. What it wouldn't tell us was whether, ultimately, the goals would be delivered on time and on standard. What a political leader wants to know, more than anything, is not whether things are on track, but whether the ultimate objectives will be delivered at the relevant point in the future. In the Westminster Room we were wrestling with how to predict the future – at least in relation to Blair's delivery agenda. Crucially too, we wanted to be able to compare progress on very different goals – railway performance with crime, maybe: this meant that what we were judging departments on was 'Likelihood of Delivery'.

We arrived at a good-enough version of the Assessment Framework in October 2001 and then unveiled our judgements using it, generating a storm of controversy inside the upper echelons of the civil service. For the first time, someone had sought to benchmark departments' performance – they could see how well they compared with each other. One permanent secretary rang me in a rage; he hadn't been consulted, and what was more, he added, 'Bloody hell! You've even traffic-lighted it!' I knew that if we had consulted him and his colleagues, we would have been prevented from doing it, or at least its cutting edge would have been lost, so I made no apology. I responded by offering a home truth or two about how government functions. Since time immemorial, political leaders have turned to their close advisers and asked, 'How is so-and-so doing?' and the advisers always reply on the basis of whatever they happen to know about so-and-so, which most likely is based on a media report or one of the ugly rumours that circulate all the time in big bureaucracies.
So-and-so never finds out what was said, or on what basis he or she was judged. The judgement, however, might have massive consequences for a programme or a career. So I explained to this permanent secretary that at least now he could see what the judgements were and the criteria on which they were based. Bluntly, I said, this was a step forward for evidence-based policy and performance management. And, if he could provide me with different evidence that would show his performance in a better light, I would correct the judgements that the prime minister had seen. He couldn't, of course... and over the next few years we became firm friends.

By December 2001, we had revised the Assessment Framework and the process, in part in the light of this angry conversation, and over the next few years we honed it into a powerful and accurate tool of prediction. At this point, it is important to remember, first, my predilection, shared with the handful of colleagues who developed the Assessment Framework, for four-point scales; and second that, prior to setting up the Delivery Unit, I had begun to tour Whitehall with a presentation called 'How to Implement Absolutely Anything', which, for me at any rate, was the founding text of 'deliverology'. This presentation established a set of guiding principles for each of the four stages of implementation (Fig. 21).

**Four Stages of Implementation**

Figure 21

I gave a copy of this to my chief number-cruncher, Tony O'Connor, and a small team of colleagues and asked them to develop the Assessment Framework. The key was to be able to do the benchmarking and the rank-ordering. The work developed from there, ultimately being tested, as we've seen, with the prime minister and top civil servants. Tony and his colleagues came up with a chart that set out the four things you need to know to assess the Likelihood of Delivery (Fig. 22).

First, the _Degree of Challenge_: how ambitious is the goal? If it is very ambitious, that makes it much harder to deliver; if it is merely an incremental step, much easier. So we have a four-point scale:

* VH – very high challenge
* H – high challenge
* M – medium challenge
* L – low challenge

Given that we were examining Blair's top priorities, and that these had been set in the first flush of a spectacular election victory that summer, needless to say most of our agenda was either H or VH.

**The PMDU Assessment Framework**

Figure 22

Second, the _Quality of Planning, Implementation and Performance Management_. Once you know the degree of challenge, you then need to know whether the planning of those responsible is good and actionable; whether they already have experience of implementation; and whether they have a process for managing performance across, for example, twenty-three train-operating companies, 300 hospitals or 23,000 schools. The more of these features that were in place, the more you would be willing to believe they could deliver. Another four-point scale here, the familiar one:

* Green
* Amber-Green
* Amber-Red
* Red

Third, the _Capacity to Drive Progress_. This has some overlap with the second aspect, but is nevertheless quite distinct. This is about every level in the system, all the way out along the delivery chain. Do they know what the priority is – do they even know that it is a priority? Do they have the skills to do what they need to do to make it happen?
Do they have the relationships among them to drive the agenda forward, or is there conflict between unions and management, local and central government, or different departments in the centre whose collaboration is essential? Again, we used the traditional four-point scale.

Finally, in addition to measuring ambition, planning and capacity, you need a measure of progress through time. You might be making good progress but running out of time to hit the target. So again we arrived at a four-point scale, based on the presentation mentioned above:

1. Policy development – you are still working up the policy and approach, and implementation has barely begun.
2. Early implementation – you've got going, but it's only the start.
3. Late implementation – you are well on the way, but the process is not yet complete.
4. Irreversible progress – the target has been met, the structure of the system has changed as necessary, and the culture is now such that no one wants to go back to how it was before (this, by the way, is a high bar – see chapter 7).

Once a judgement has been reached on each of these four aspects, the latest data (see top right of Figure 22) should be reviewed as a kind of reality check. Several available data sources could be looked at – reported crime, recorded crime and crime surveys, for example. And with all that done, it is possible to reach a combined judgement on Likelihood of Delivery, again using the familiar four-point scale. To guide teams to a decision, the Assessment Framework has a detailed rubric with questions and descriptions of Green and Red. We never pretended this was wholly scientific – far from it. The framework provided the basis for an informed conversation leading to an informed judgement.

The first time, we had this conversation purely among ourselves in the Delivery Unit; thereafter we involved the relevant departmental officials too. They inevitably knew more about their area of responsibility than we did – but we knew more about what characterized successful delivery; the result was a rich and meaningful dialogue, which was as important an outcome of the process as the colour judgements. More practically still, in reaching the judgements it became abundantly clear what needed to be done to advance progress in the six months ahead, so delivery plans could be updated and refined. To top it all off, we then reached agreement between the department and ourselves about what they needed to do and what we would do to assist and to ensure that progress actually occurred. We called it a Joint Action Programme.

Just in case there is any danger that the reader is finding this a little boring, let me say simply this: what I have just described is the engine room of delivering results in government. It will never make the news, and most journalists would have no inclination to understand it, but the crucial (and prosaic) truth is that if this process, or something like it, is working, delivery is highly likely; without it, much less so.

It has to be one of the classic tales of defeat snatched from the jaws of victory. In the autumn of 2013, President Obama faced down the Republicans who had forced a shutdown of the federal government. He refused to buckle to their blackmail; he would not concede on his vaunted health reform, popularly known as Obamacare. After all, he pointed out, Congress had passed Obamacare into law; to allow a congressional minority to overturn it through a blackmail threat would undermine American democracy and future administrations of whatever party.
Polls showed that the American people were behind the president, leaving the Tea Party no option but to beat a shame-faced retreat. Political victories don't come much bigger than this. Unfortunately, behind the scenes the president was just becoming aware – and very soon America would become aware too – that his victory would be snatched away, not by a competing issue of major political principle, but by, of all things, a website.

At a meeting in the White House on the evening of 15 October 2013, the president was briefed by a small team of advisers that HealthCare.gov, the online insurance market at the centre of his health reforms, was about to crash. With the deadline for Americans to register under the new legislation just a few months away, the website on which everything depended was riddled with problems. The meeting inevitably and rightly focused on crisis management – what to say to the media and how to fix the problem. The president prepared himself to take responsibility: better to humiliate yourself than be humiliated by others. Jeffrey D. Zients, troubleshooter and former chief performance officer, was put in charge of sorting out the mess.

As the story unfolded in a blaze of talk shows, the president's political victory disintegrated. Trust in Obamacare unravelled as his personal promises turned out to be unfulfilled. Zients, an extremely competent operator whom I had met previously and come to admire, made good progress, but the president suffered excruciating months of delay and took a blow to his credibility from which he may never recover. And in spite of Zients's excellent efforts, the website remains, shall we say, suboptimal, and the timetable has slipped. Not just Obamacare, but the Obama presidency had been damaged, perhaps irreparably, by what in this book we would call a 'delivery failure'. The president himself realized this in that fateful meeting on 15 October: 'We created this problem we didn't need to create... it's of our own doing, and it's our most important initiative.'

As the president understood, it did not need to be like this. The damage was self-inflicted and, with the right disciplines in place – the disciplines of delivery – it could and should have been avoided. After all, the legislation on which Obamacare is based had been approved by Congress and signed into law in March 2010, more than three years earlier. By the spring of 2014, the news was somewhat better for the president. Thanks to Zients's efforts, the website was up and running, registrations were increasing and the debate was about whether enough young, healthy people would register to cover the costs of others. 'The increase in medical enrolments across the country is encouraging but more work is left to do,' said Health Secretary Kathleen Sebelius, shortly before she stood down, in effect carrying the can for, and drawing a line under, the débâcle.

I conclude simply that the catastrophe of the HealthCare.gov website would have been inconceivable if, from 2010 on, the routines set out in this chapter had been applied. How could it have happened if the president had been receiving and reading monthly notes, if stocktakes had been data-informed and deliberative, and if the White House and the department had collaboratively reviewed progress every six months, really getting beneath the surface? The problem would have been identified ahead of time and solved. There might have been delays, but they would have been planned.
And some of the earlier grand claims might have been toned down, but how bad would that have been? There would not have been a catastrophe. There may be something close to gridlock in modern Washington, but that is no excuse. As Governor Martin O'Malley put it at a seminar in London in 2014, 'Failure to drive delivery contributes to gridlock, not the other way round.' Profound, generational reforms such as Obamacare are too important to be left to government by spasm. They depend on government by routine.

If the Obama administration had put the routines in place, not just on Obamacare but across, say, four or five priorities of personal importance to the president, it would also have developed the capacity to learn from delivery in one sphere lessons applicable in others. Arne Duncan, for example, has been the most successful education secretary in US history, a point acknowledged across the political divide in Washington. His understated tone, ability to listen and persistent focus on results are exemplary. His team put in place routine processes to drive delivery and, by 2013, the results in states such as Tennessee and Delaware, which Duncan had prioritized, were coming through. The entire federal system could learn lessons from this approach. In the bowels of the Office of Management and Budget, Shelley Metzenbaum battled away at this kind of thing for four years but, though she would not say so herself, her work never received the priority it deserved and eventually she was tempted away from the administration.

In the Blair administration, once we had finished applying the Assessment Framework to each of the twenty-odd priorities, we were able to benchmark them against each other. In fact, we could rank-order for the prime minister his top twenty priorities on the basis of their Likelihood of Delivery. The result was surely the best one-page summary of a government programme any British prime minister has ever had. Table 15 shows how it looked in July 2004.

**Progress on the PM's Priorities**

Table 15

At a glance, the prime minister could see which priorities (PSA = public service agreement target) were on track and which weren't. I could make sure that, in the ensuing six months, effort was focused where it was needed most: on those slabs of Red in the bottom half of the table. More importantly, this league table (or ranking) would be shown both to the cabinet and to a meeting of permanent secretaries. They could all see what was working and what wasn't.

This needed some careful handling in a number of respects. Take the risk of leaks, for a start – had it become public at the time, the league table would have been dynamite, perhaps costing a minister or two their jobs. It might also have undermined our approach altogether, since, had it leaked, the pressure to massage future judgements would have been substantial. So, at Blair's suggestion, we put the league table up on a PowerPoint screen at these meetings, but we didn't print it out or circulate it. You must also consider the sensitive question of the egos involved, and how those whose responsibilities are a sea of Red will feel and respond. An important point to re-emphasize is that Red is not necessarily a judgement on someone's performance; it is simply a statement that whatever it is, for whatever reason, is not on track to deliver in future. And you need to prepare people so that they are not humiliated out of the blue in front of their peers.
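For readers who like to see the mechanics laid bare, the framework and the league table can be reduced to a few lines of code. What follows is only a sketch under stated assumptions: the priority names and judgements are invented, and the combining rule is illustrative. In the real process, the combined judgement on Likelihood of Delivery emerged from an informed conversation around the rubric, never from a formula.

```python
from dataclasses import dataclass

# The familiar four-point scale, ordered worst to best.
SCALE = ["Red", "Amber-Red", "Amber-Green", "Green"]

@dataclass
class Priority:
    name: str       # e.g. a PSA target
    challenge: str  # Degree of Challenge: VH / H / M / L
    planning: str   # quality of planning, implementation, performance mgmt
    capacity: str   # capacity to drive progress
    stage: int      # 1 = policy development ... 4 = irreversible progress

    def likelihood_of_delivery(self) -> str:
        # Illustrative rule only: take the weaker of the two quality
        # judgements, and drag it down a notch if the challenge is very
        # high while implementation is still at an early stage.
        score = min(SCALE.index(self.planning), SCALE.index(self.capacity))
        if self.challenge == "VH" and self.stage <= 2:
            score = max(0, score - 1)
        return SCALE[score]

# Invented examples - not the real July 2004 judgements.
priorities = [
    Priority("Rail punctuality", "VH", "Amber-Red", "Red", 2),
    Priority("A&E waits", "H", "Amber-Green", "Green", 3),
    Priority("Street crime", "H", "Green", "Amber-Green", 3),
]

# The league table: best first, so the slabs of Red sit at the bottom,
# showing at a glance where effort is needed most.
ranked = sorted(priorities,
                key=lambda p: SCALE.index(p.likelihood_of_delivery()),
                reverse=True)
for p in ranked:
    print(f"{p.name:20} {p.likelihood_of_delivery()}")
```

The point of the sketch is simply that four crude judgements, consistently applied, generate a rank order a prime minister can absorb at a glance; the hard work lies in reaching the judgements honestly.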
Take the case of the permanent secretary responsible for the priority at the bottom of the league table shown above. I had dinner with him the night before it was presented to all his colleagues, briefed him on the issues and then hammered out with him what action he needed to take, so that he could respond positively to my presentation the following day. There is no doubt that a ranking of this kind is a spur to performance, as the evidence in chapter 1 shows. This permanent secretary drove real progress over the next six months, so that the priority concerned was no longer bottom of the pile by December.

Perhaps more important than any of the above is the ability to learn generalizable lessons across the agenda. My staff and I used to debate these thoroughly in the run-up to my presentations to cabinet. The idea was to make the lessons sharp, clear and memorable. Here are two examples, one from 2003 and one from a year later.

2003

> Lesson 1 – A week may be a long time in politics but five years is unbelievably short.
>
> Lesson 2 – Sustained focus on a small number of priorities is essential.
>
> Lesson 3 – Flogging a system can no longer achieve these goals: reform is the key.
>
> Lesson 4 – Nothing is inevitable: 'rising tides' can be turned.
>
> Lesson 5 – The numbers are important but not enough: citizens have to see and feel the difference and expectations need to be managed.
>
> Lesson 6 – The quality of leadership at every level is decisive.
>
> Lesson 7 – Good system design and management underpin progress.
>
> Lesson 8 – Getting the second step change is difficult and requires precision in tackling variations and promoting best practice.
>
> Lesson 9 – Extraordinary discipline and persistence are required to defeat the cynics.
>
> Lesson 10 – Grinding out increments is a noble cause... but where progress is slow, it's even more important for people to understand the strategy.

2004

Table 16

What a delivery unit is able to do through a process such as this is connect the detail to the big picture and the frontline to the centre. We were often accused of being top-down (as anyone working for a prime minister or president is likely to be) and sometimes we were, but more often this missed the point – we were the connection, and the impact was two-way.

#### Press Conferences

A delivery unit or equivalent is better placed than any other part of the machinery of government to summarize the government's underlying progress towards its goals. In Blair's first term, he and Peter Mandelson had tried to do this by preparing an annual report, as a company would. They failed to get the tone right and the media received it with howls of derision. When Blair asked me to report for the first time in July 2002, it was simply to explain to the media how we approached delivery and to tell a story or two, but in the following years we turned it into a report on progress and found a way to get the tone right. At the insistence of Blair himself and his media people, I was to be deadpan and to make the report as plain-speaking about failures as about successes. So I'd deliver my presentation, with its beautiful graphs, in a monotone. Throughout one of them, the _Sun_'s political correspondent responded to each new graph by muttering, under his breath, 'Bullshit'. (He wasn't aware that the person on his immediate right was Tony O'Connor, who had drawn the graphs.)
Once I had finished, Blair would ask the journalists if they had any questions for him or me, and they would respond by asking their usual questions for that night's news and ignore the whole thing. No journalist ever asked me a question. The following morning, I would be mocked in the comment pieces for my dullness, but in fact the strategy worked. The handful of more serious journalists absorbed the messages. In July 2004, Peter Riddell of _The Times_ noted that my 'update tends to be ignored by journalists as a slightly tedious ten minutes before they can get on to the red meat of politics', but, he argued, 'The most important words came not from Tony Blair, but from Michael Barber.' Why? Because I had summarized the situation of the government:

> Last year I said to you that there was demonstrable progress in most areas, but it was not yet irreversible. This year I am more optimistic. There is widespread and significant progress which is becoming irreversible.

If the routines are in place, the political leader can move from crucial detail to big picture, from nuts and bolts to overall design, from individual to nation, because he or she, or at least the head of delivery, really knows what's happening now.

RULE 36 A FULL-SCALE REVIEW OF THE PROGRAMME AT LEAST ONCE A YEAR PROVIDES DEEP LEARNING (which can be acted on immediately)

There is another walk in the Lake District that I love, this one less rugged, less dramatic and less hard work, not least because Rossett Gill is not involved. Even so, it is achingly beautiful, and it has a personal connection because it involves crossing a wooded hillside once owned by my great-grandfather. His son, my grandfather, built a log summerhouse up there some time in the early twentieth century. It was still there when I was a boy, with its stained-glass windows, wooden benches and large jugs with which we fetched water from the stream below. Once, I'm told, my grandfather was there looking, as he always did, for the distinctive summits of the Langdale Pikes in the distance, only to discover that, in the year since he had been there last, the view had been obscured by a far-off pine tree which had inconsiderately grown taller. He went striding off through the wood with an axe and, after a good walk, chopped down a single tree before returning to the summerhouse. Miraculously, the view was now clear, and his beloved Langdale Pikes stood out gloriously against the sky. How had he known so precisely which tree to cut down?

Here was someone who could connect detail to big picture and back again. That is what delivery routines make possible for a political leader. You become able to see both the wood and the individual trees – and the view beyond. The next chapter examines how to solve problems as they arise. Notwithstanding my grandfather, you cannot take an axe to everything.

RULE 37 UNDERSTAND THE WOOD AND THE TREES (and the view beyond)

## 6

## Problem-solving

In my diary I noted:

> Blair rang me at 9.00 a.m. on Saturday morning. Fortunately I'd been up for 10 minutes by then. He was worrying away about illegal asylum applications. I gave him the very bad news about the figures in this week's report – a big jump in applications due to the 'closing down sale' before the tougher benefits arrangements came in. He asked me to check the figures, but also suggested we went into 'Cobra' [Cabinet Office Briefing Room, where emergencies are handled] mode on the question. I consulted Jeremy [Heywood] and then rang Blair back.
On the Monday, Blair cleared his diary for the entire morning and proceeded to spend the next five hours – yes, five – going through the asylum process forensically: finding out how it worked, what had happened, what was being put in place and what more could be done. This was a prime minister trying to understand in great detail what precisely was going on. I realized at the time that from his phone call to me at 9.00 a.m. on Saturday right through the weekend, he had been focusing on asylum. He had even read the 1951 UN Convention Relating to the Status of Refugees.

This weekend in February 2003 reveals a lot about how a politician at the height of his powers – a few weeks before the war in Iraq began, as it happens – goes about solving a very big problem on the brink of becoming a crisis. Blair demonstrated mastery of the key elements:

* Focus: he had been working on it the entire weekend.
* Prioritization: he cleared his diary.
* Data: he asked me to check the figures.
* Details: he wanted to be sure himself that he understood exactly how the process worked and spent five precious hours on it.
* Confronting facts: I gave him 'the very bad news' and he didn't rant, he got to work.

And he suggested the COBRA process, used in times of crisis, because he had seen how it galvanized the entire system and he knew how it worked. In fact, the asylum issue had not yet become a national crisis, but the prime minister treated it as a crisis to prevent it from becoming one. He knew the steady stream of illegal asylum seekers was politically damaging; more fundamentally, he knew that if the asylum system lacked legitimacy – if it was easier to get into the country by breaking the rules than by following them – then his vision of a diverse, open, modern Britain was threatened.

The following day, Blair summoned to COBRA all the ministers from departments with influence on the asylum issue, including, crucially, the Home Office, which oversaw immigration policy, and the Foreign Office, which issued visas. Previous meetings of this group had been chaotic, tense and acrimonious, but this one wasn't. Instead, there was a hugely authoritative performance from the prime minister. At the end of the meeting, I suggested we convene again in two weeks' time so as to keep the pressure on. Because the leader had done his detailed homework, he could dominate the meeting and give instructions. Because he had a delivery unit, he knew the agreed actions would be followed up. Because by then we knew the importance of routines, we set a (false) deadline of two weeks for the actions to be taken. The moral is that dealing with a major problem often just requires the science of delivery to be applied with greater intensity. Deliverology on steroids, perhaps.

By 2003, in No. 10, we had tried to bring clarity to the management of problems. We had arrived at what we described as four levels of intensity. The first level was straightforward: we called it the 'timely nudge'. We in the Delivery Unit would notice a problem – maybe this month's data was off trajectory, or perhaps there was a problem in one locality that we had visited. Or maybe we had doubts about the effectiveness of a particular official. Usual civil service practice in such circumstances would be to shrug, assume it's nothing serious, and wait and see. Delivery Unit obsession suggested an alternative – could we dig a little deeper into the data? What had caused the deviation?
If one locality was off track, perhaps we should check another couple to see whether there was a pattern. It may be true that one swallow doesn't make a summer, but what about two or three? In each or any of these circumstances, we would raise the issue with the relevant department, probably with a top civil servant, just to make sure they were aware and, incidentally, knew we were watching.

Here's an example from Punjab, Pakistan. It is all too common in the Pakistan civil service for officials to be appointed and then moved on within months. People can be transferred for all kinds of reasons – for being corrupt, for being honest; for being effective, for being hopeless; for being well-connected or for not being well-connected enough. The result is that it becomes impossible to achieve any kind of consistency in strategy or implementation, and therefore little or no progress is made. In 2011, for the education reform in Punjab, the chief minister put a stop to the revolving door. He insisted that all officials would be appointed on merit and that, subject to performance, they would stay in post for two to three years. It took a while for the new norm to be established, but once it was, the effect on performance was dramatic. At last, basic management was in place and the system could learn, refine and improve. After the election of 2013, however, a new challenge arose, because the new prime minister of Pakistan, brother of the chief minister of Punjab, inevitably needed good new officials in Islamabad and just as inevitably turned to Punjab, the country's best-run province. In February 2014, with an excellent new post-election team of officials in place to oversee the education reform, the Secretary (Schools) heard that two of them were to be transferred to federal government. He contacted me, I emailed the chief minister, the transfers were blocked. That is a typical timely nudge.

The second level of intensity occurs where a problem is significantly affecting delivery but the cause and solution are not obvious. Perhaps performance is plateauing or dipping; perhaps one part of the country is underperforming and no one is quite sure why. In a situation such as this, again, the tendency in bureaucracies is to wait and see and hope it gets better, just as football managers sometimes hope a player will run off an injury. This might work occasionally, but more often than not it is an excuse for avoiding action – and the hard work entailed – as a result of which the problem is likely to get worse.

In the year 2000, we in the Department for Education intervened in the Education Authority in Liverpool. In spite of huge hostility in the city to intervention from London, we threatened to use our power to contract out the city's deeply ineffective Education Authority. The chief executive of the city, David Henshaw, acted with vigour and commitment; new leadership of the Education Authority was put in place; the entire staff was moved to bright, modern offices; and a radical improvement plan was adopted. I made six visits to the city in the course of the next few months to persuade headteachers and teachers that this was not an attack on them, but a matter of ensuring that they received the support they deserved from the city and that the children got the education they needed to succeed in the modern world. It worked: the next few years saw real progress.

A few months later, a report revealed that Bristol had similar problems.
This time the leaders of the city argued that the report was harsh, that they were making progress behind the scenes and that all that was needed was an advisory group to oversee progress. I was tired after four years of relentless reform and didn't have time for visits to Bristol on top of those I was already making to Liverpool (and Leeds), and so decided to give the city the benefit of the doubt. Big mistake. Bristol's education system continued to underperform for another decade. That was when I invented for myself the rule: 'If you are about to give someone the benefit of the doubt, why are you so doubtful?'

In short, with a Level 2 problem, if in doubt, act. And act with rigour. Investigate the cause of the problem – we'll come to means of doing so later in this chapter – and search for solutions. Look at what has worked elsewhere. Create the circumstances in which those responsible see the benefit of being open about the problems and imaginative about solutions. Don't fall for the oldest excuse in the book (as I did with Bristol): 'We're already doing it.' Also, if at first you can't see the cause of the problem, don't assume there isn't one – after all, it was the fact that the data was off track that prompted you to act in the first place. Look beneath the surface. In his poetic book _The Old Ways_, Robert Macfarlane gives a perfect example when he goes to sea in the Western Isles with an experienced sea captain:

> I was realising that Ian had two simultaneous states on the water. One was quietly and simply joyful to be at sea. The other... was analytical: his mind gathering data from sources and of types that I barely knew existed; from subtleties of wind, wave and waymark, from smells, from what he had called in a poem, 'the bounce of light from incidental land' and the 'elaborate counter-physics' of tidal water... 'You need to look for disturbances to the expected,' he told me, 'be alert to unforeseen interactions.'

Level 2 problem-solving involves expertise, insight and clarity; perhaps a team combining knowledge of the relevant frontline, expertise in the relevant areas and the leadership of a persistent problem-solver who does not accept superficial answers or glib assertions that 'it'll be all right on the night'. We called Level 2 Standard Problem-solving.

Level 3 problems are similar to Level 2 but more complex, less amenable to the normal problem-solving techniques and reflective of more severe challenges to delivery. In public systems they often also involve significant political complications – differences of view between leading political players in the government, for example – or maybe the issue is at the heart of political confrontation between parties, which inevitably brings with it greater media scrutiny or intrusion, making it much harder to admit failure; leaders clam up, troublemakers leak, the storm is whipped up. With a problem at Level 3 – Intensive Problem-solving – what is needed is therefore similar to Level 2, but it will also require greater political and leadership input, from a trusted minister, for instance, and from whoever leads on delivery. And it may need either a longer time period or greater intensity. What is unlikely to work is the standard bureaucratic solution: the establishment of a 'high-level' working group or committee that meets occasionally and, whatever the intention, allows things to drift between meetings.

Level 4 is Crisis Management.
Delivery is seriously threatened, the public is screaming with frustration and the newspaper headlines are, from the government's point of view, a horror story. In my time in the Delivery Unit we used this approach twice: once with street crime in early 2002 and again a year later in relation to illegal asylum seekers (the story with which this chapter begins). After that, with the Delivery Unit in full stride and on top of the data, we avoided these major crises because we saw them coming and could resolve problems using Levels 1 to 3.

Crises will always beset governments, and understanding how to manage them is a vital skill. In the winter of 2013–14, the British government found itself almost overwhelmed by a series of storms off the Atlantic – wind brought down trees and rain caused the rivers to flood, especially across southern England. At first, quite understandably, the government assumed that the storms would die away after a week or so, or a month... but when they kept on coming every few days for three months, the level of intensity rapidly shifted from Level 2 to Level 3 to Level 4. As a detached observer (living under the path of the Atlantic storms), I thought the government and the relevant agency (the Environment Agency) were handling the situation pretty well while it was at Level 3. When it shifted to Level 4, public frustration understandably went up a gear, the headlines said the government was too slow to react, and suddenly it sounded complacent. From that moment on, the government was chasing the game.

This illustrates a key issue: when a crisis reaches Level 4, what you say matters as much as what you do. Much better to overreact than to underestimate; much better to sound sympathetic and decisive than to defend a record, however good. Overreact, yes, but that does not mean panic, because panic makes good judgement less likely. Listen to Mark Cavendish, the great cyclist:

> A sprint isn't a chaos bomb exploding in your sightline, it's not bedlam on fast forward – it's a multiplication of problems to be solved quickly... but at the same time rationally... I generally have more energy and move faster than anyone else because I am staying calm and clinical.

Worse still in a crisis is any attempt to pass the buck. In November 2013, following Typhoon Haiyan, one of the most devastating weather events ever known, President Benigno Aquino of the Philippines, who until then had an excellent track record and good ratings with the public, sought to pass the buck for the ponderous response to the crisis on to the local authorities. There was a tangled web of family and party politics behind this, but it made him look unsympathetic and defensive, and undermined his credibility. Soon, other problems, such as corruption among some of his cabinet colleagues, began to pile up, and he was under pressure. There is a reason why one crisis often seems to be followed by another. It is simply this: while things are going well, much else is forgiven; once you are on the run, the public and the media are on the lookout for other blunders or sins. Trust is broken and a downward spiral becomes a distinct possibility. All the more reason to ensure an effective response when faced with a Level 4 problem.

By contrast, Shahbaz Sharif handled a major outbreak of dengue fever in Punjab in 2011 excellently. He held daily meetings at the crack of dawn with all the relevant officials.
They hated it – they told me so – but it not only ensured that Shahbaz Sharif looked in charge (telling the story); it also ensured much more rapid action than would otherwise have been the case. The innovative Punjab Information Technology Board mapped each case of dengue fever, which enabled the government to target its response more effectively, and to identify and deal with the specific pools of stagnant water in which the dengue mosquitoes were breeding. A massive publicity campaign took place in parallel – both telling the story of a government in charge and increasing the effectiveness of the action against the outbreak.

RULE 38 CATEGORIZE PROBLEMS BY THEIR INTENSITY (and act accordingly)

Yes, there was some duplication of effort. Yes, there was some loss of impact elsewhere (including, for a month or so, on the Education Roadmap), and yes, maybe more resource was thrown at the problem than strictly necessary... but that is with hindsight. With a Level 4 problem, efficiency is rightly sacrificed for effectiveness. Many lives were saved – by definition, you can't count them – and the government's credibility was unscathed, perhaps even enhanced. The contrast was stark a couple of years later, when a similar outbreak in another province, Khyber Pakhtunkhwa, caught an inexperienced government unawares. There, the epidemic got out of hand, and for the new government the honeymoon period came to an abrupt end. Figure 23 illustrates a simple way to think about the four levels of intensity. Over time, if you become effective at Levels 1 to 3, you won't need Level 4, except for those crises known as Acts of God – which you cannot predict but still have to deal with.

**Levels of Intensity**

Figure 23

Shortly after completion of the Trans-Siberian Railway, Russia found itself at war with Japan. There was a problem for the Russians. As Christian Wolmar explains in his excellent history of the great railway, it ran uninterrupted from Moscow to Irkutsk, close to Lake Baikal, one of the world's largest, deepest lakes. And beyond Lake Baikal it ran uninterrupted through Manchuria to Port Arthur or Vladivostok. In between, though, everything carried on the railway had to be detrained and taken across the mighty lake in a steamer, the SS _Baikal_, and a couple of its sister ships. This created a significant challenge in peacetime; in war, with the mass of men and matériel headed for Port Arthur, the problem greatly intensified. Winter made things worse. Lake Baikal freezes solid in January and doesn't unfreeze again until April or May. Unfortunately for the Russians – perhaps no accident, given Japanese competence – the war began in February 1904. They tried to run a railway across the ice, but early on a locomotive fell through it and sank. After that they had to rely on horses and carts. The result was chaos. The men and supplies rushed to Siberia from Moscow piled up on the shores of Lake Baikal. Thousands of soldiers kicked their heels while cases and bales of weapons, uniforms and other essential supplies stood uselessly on the station platforms in Irkutsk and on the lakeside. Crossing the lake in the other direction were the inevitable refugees fleeing the conflict, bringing with them hunger and disease. A year later, the Russians found themselves facing a calamitous defeat, which in turn stoked revolution. There was no single reason for this defeat, but the logistical challenge of the immensely long supply lines and the bottleneck at Lake Baikal were without doubt major contributors.
The lesson of Lake Baikal for the science of delivery is to be forensic about logistical issues. They may seem dull in the grand scheme of things, but they make all the difference. There is a wider point too. In the first section of this chapter, we categorized problems by their severity and the response by its intensity. In this section, the intention is to categorize problems by their nature, because the nature of the problem will determine the approach to problem-solving.

Given the number of problems and crises governments face, you might think that they would by now have become expert in knowing how to respond. After all, when things go horribly wrong there are often inquiries designed to ensure that whatever it is 'never happens again'. I always shudder when I hear that commitment because, while an identical problem is extremely unlikely, there is a high chance that a related problem will recur and the 'never again' commitment will come home to roost. Moreover, in my experience government bureaucracies are not terribly good at learning. Collective memory is selective and haphazard, and systematic knowledge management is usually absent altogether. Furthermore, there is almost no learning across departmental boundaries, so the chances of a lesson learned, say, in the health department being applied to a similar situation in education are very low indeed. The result is that mistakes are repeated, and each problem or crisis is responded to as if it were something entirely new.

In the slow-moving years of the early and mid-twentieth century, the consequences were less severe, partly because the pace of change was slower and government did less, and partly because senior civil servants tended to stay in place longer and thus had their experience to draw on. The first head of MI5, Sir Vernon Kell, held the post for over thirty years. When R. A. Butler became education minister in 1941, aged thirty-eight, almost all of the top civil servants in his department had been there since he himself had been a schoolboy. At the moment, governments around the world too often fail to learn from their mistakes – simply because their bureaucracies are not set up to do so. Indeed, the instinctive reaction of a bureaucracy to a problem or a crisis is to become more cautious, more risk-averse. Ironically, in an era of change as rapid as the early twenty-first century's, this reaction ultimately adds to the risk of failure rather than reducing it.

A delivery unit or the equivalent can step into this void and make a significant contribution. After all, the whole point of a delivery unit is to learn what works in delivery and then apply those lessons systematically to challenges across the government's various delivery priorities. To give an example: once the Department of Health really got going on reducing wait times in Accident & Emergency departments, it set up a 'war room' which examined the weekly data promptly, learned the lessons from it and followed up immediately with those A&E departments that appeared to have underperformed the previous week. The effect, as with the New York City Police Department and Compstat, was a significant improvement in performance. From a delivery perspective, we saw this as a model of best practice, which we then recommended to the Home Office in dealing with its multiple challenges, and to the education department in dealing with truancy. In short, we were becoming a centre of expertise in delivery rather than policy.
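The war-room mechanic itself is simple enough to sketch in code. The example below is only an illustration under invented assumptions (the hospital names, the weekly figures and the 95 per cent four-hour threshold are all made up for the purpose); what it shows is the essential loop: take the week's data, compare every unit against the standard, and produce the follow-up list immediately rather than at the next quarterly review.

```python
# A minimal sketch of war-room logic; the real NHS process was far richer.
TARGET = 0.95  # assumed share of A&E patients to be seen within four hours

# Invented weekly figures, one entry per A&E department.
weekly_data = {
    "St Elsewhere's": 0.97,
    "Northgate General": 0.91,
    "Riverside Trust": 0.88,
}

# Flag this week's underperformers, worst first, for immediate follow-up.
flagged = sorted(
    ((name, rate) for name, rate in weekly_data.items() if rate < TARGET),
    key=lambda item: item[1],
)
for name, rate in flagged:
    shortfall = TARGET - rate
    print(f"Follow up with {name}: {rate:.0%} seen within four hours "
          f"({shortfall:.0%} below standard)")
```

The discipline, not the code, is the point: the same small routine, run promptly every week and followed by a phone call, is what turns data into improvement.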
Table 17

It therefore makes sense to try to categorize problems by their nature, so that systems can improve at diagnosis and learn systematically what the options are for solving different types of problem. Table 17 illustrates, as a start, what could be done. It is just a sketch but, as Nassim Nicholas Taleb's 'black swan' argument makes clear, unexpected events of one sort or another will recur. How much more effectively could government work if it were systematic in identifying the type of problem and the intensity of the response required? Of course, any given problem may involve a number of these characteristics – for instance, poor leadership and poor implementation are likely to go together. Using real examples, here are some considerations about how to respond.

RULE 39 DIAGNOSE PROBLEMS PRECISELY (and act accordingly)

#### Cut your losses if you can

The well-known saying 'When you're in a hole, stop digging' has a point, and not just when your apology for an insult simply exacerbates it. In government, while a determination to honour commitments and follow through on delivery is rightly highly valued, there are times when simply cutting your losses, apologizing and then abandoning whatever it is, is the best thing to do. And it takes significant courage, because accusations of pusillanimity or incompetence are almost inevitable.

The London Olympics of 2012 were recognized the world over as an unqualified success. Even in cynical, muddling-through Britain, 83 per cent of the population thought they were a success, and just 2 per cent a failure – remarkable figures by any standards. My own recollection is that the period of the Games was the only time when, for several weeks, even London cab drivers, who had consistently moaned about it for the previous year, suspended their cynicism. Bill Bryson memorably said that the British are the only people in the world who, when asked 'How are you?', reply 'Mustn't grumble.' For those glorious weeks, no one grumbled. Such was the triumph that a series of disasters not many years before was forgotten. London had won the right to host the 2005 World Athletics Championships, but by 2003 we were so far behind with building a stadium that disaster was looming. Tessa Jowell, the new Secretary of State for Culture, Media and Sport, inherited responsibility for the World Athletics Championships from her predecessors and when, in a meeting with the officials responsible, no one could tell her who was in charge, decided to cut her losses. Embarrassing? Yes, absolutely. Better than a fiasco at the Championships themselves – or pulling out just before? You bet. Not only did that searing experience help Tessa Jowell become the minister for the Olympics and play a leading part in delivering those spectacular Games, but if she hadn't abandoned the flawed plans for the 2005 World Athletics Championships, the 2012 London Olympics would probably never have happened. As Churchill said, 'I have never developed indigestion from eating my words.'

#### Remember Relationships

'_Kto vinovat?_' This was one of the first phrases I learned in my (largely failed) attempt to master Russian. It means 'Who's to blame?' The other phrase I learned at the time was '_Chto delat?_', Lenin's favourite question, 'What is to be done?', except that in Russian it would be better translated – accompanied by a shrug – as 'What can you do?' One Russian friend went so far as to say that if I mastered these two questions it would be enough to get me through most conversations with government.
Hello, please and thank you would do the rest. The more you think about it, the more you realize my friend had a point – and not just in Russia. When there is a problem in government, and still more so when there is a crisis, these are indeed the only two questions that matter. The problem is that they get asked in the wrong order. As soon as a problem becomes apparent in government, both politicians and bureaucrats reach for 'Who's to blame?' Some try to vanish, some begin wiping their fingerprints from the crime scene, and others start pointing fingers. The effect is corrosive; as everyone tries to guard their own back while burying a knife in someone else's, it becomes almost impossible to solve the problem. Meanwhile, more often than not, the media is screaming that 'heads should roll', adding to the tension. For the public, though, while 'Who's to blame?' is clearly of interest, 'What is to be done?' is both more urgent and more important. Jim Collins puts it succinctly: 'Conduct autopsies without blame.'

From our Delivery Unit perspective, we sought always to answer 'What is to be done?' first, and to allocate blame – if it needed to be allocated – later. For this to happen, the relationships inside government have to be constructed so that the relevant people have an incentive to be plain-speaking and to take responsibility. No yelling. No threats. No blame (at this stage). Instead, thoughtful questioning, a focus on the facts, a sifting of myth from reality and a determination to confront the brutal reality. In the PMDU, we learned from Benjamin Zander, the great conductor, to respond to a blunder or reports of failure with the phrase 'How fascinating!' That way, we could get people to talk openly about the causes of the problems as well as the symptoms. Crucially, we could persuade them that there was a problem and it needed fixing. Our next line was simple: 'We're not going away until it's fixed.' Once people realized we meant it, we could establish a collective focus on solving the problem, whatever it was.

Sometimes you can supplement the evidence that comes from the key officials with other sources. At any given moment when implementation of a major change is in progress, there will be people complaining about it. The question is, how do you know when this standard rumble of complaint is being exceeded? And when it is, how do you separate the signal from the noise? If you have time, you can do surveys or investigations. If you don't, though, one crude but effective means is to ensure you have in place a network of people at the frontline – such as headteachers, GPs or police officers – whom you know well enough to be sure they will tell you what they think rather than what you'd like to hear. A headteacher from Cumbria stormed out in the middle of one of my speeches twice in six months. When I called him after the second time – my assistant had gathered his name badge from him – he was impressed (once he'd got over the shock) and told me what he really thought. People such as this are gold dust in a crisis. At the government end, you have to have leaders who are willing to listen too. Again, a delivery unit can help make the connection between the frontline and the leader.

In the weeks immediately before the February Revolution of 1917 in Russia, Tsar Nicholas II was out of Petrograd at one of his palaces when Rodzianko, the head of the Duma (the Russian parliament), sent messages to him warning that the trouble on the streets of the capital was getting out of hand.
Nicholas dismissed them in his diary: 'More rubbish from that fat slob, Rodzianko.' If you don't listen, you won't hear the noise or the signal.

#### The Tools of the Trade

In a crisis – Level 4 – you have to act fast and drive hard. At Level 2 or Level 3, you may have more time to analyse the problem. What you don't have time for, ever, is a major inquiry or the commissioning of an independent review. It is not that these are never worthwhile for other purposes – of course they are – but they are not fast enough to provide solutions in anything like real time. We developed tools of analysis which helped us, and the relevant government departments, to understand rapidly and effectively the nature of the problem that confronted us. In fact, in the case of both street crime and illegal asylum, the two Level 4 problems we faced, we had applied these tools in the preceding weeks – before a serious problem had become a crisis – and as a result, when the crisis came we were able both to explain the problem and to suggest practical solutions.

The most important tool was the Priority Review, which incorporated another, the Delivery Chain Analysis. My colleague and friend Richard Page-Jones, a distinguished former school inspector, played the leading role in the invention of the Priority Review. It is a rapid analysis of the state of delivery in a high-priority aspect of the government agenda, with a firm focus on identifying the action that needs to be taken. It can be done in a month from start to finish and results in a short report to the prime minister and relevant ministers. Once we decided to do one, we would put together a team of four or five people from the Delivery Unit and the relevant department. We would add an acknowledged expert in the field and one or two trusted people from the frontline of the relevant service. For the next month, this review would be, if not their full-time role, then most definitely a major part of their work; not one of those committees that meet monthly and, in between, drift.

In the first week, the team would pull together all the relevant data on the issue, with the assistance of our statisticians, and then generate some hypotheses about the state of play. As a result, they would have provisional answers to questions such as: Are we on track to deliver the goals? If not, how far off track are we? What are the causes of this problem? And how might it be fixed? In the same period, the team would decide on field visits and set them up. Thus, by weeks three and four, they were ready to get out to the frontline. In effect, what they did, starting at the frontline and working back, was to interview people at every level in the delivery chain – this is the Delivery Chain Analysis – and ask the same set of questions: What is working? What isn't? How strong are the links in the delivery chain? What could be done to improve performance? And finally (always motivational), what advice do you have for the prime minister? The team would promise to pass the advice on. By asking these questions at each level, the team rapidly gained insight into where the problems were and whether they were logistical or human. A key twist that clinched the effectiveness of the process was that they would carry it out in two separate locations: one where the data was on the whole good and you could test out solutions, and one where the data was poor and you could accurately diagnose the problems.
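The shape of the exercise is simple enough to write down. The sketch below is purely illustrative; the delivery chain, the two locations and the rendering of the questions as a script are my own stand-ins rather than PMDU material. But it captures the essential structure: the same questions asked at every link in the chain, starting at the frontline and working back, in one place where the data is good and one where it is poor.

```python
# Illustrative only: an invented delivery chain for a schools priority.
DELIVERY_CHAIN = [
    "classroom teacher",      # the frontline comes first
    "headteacher",
    "local authority adviser",
    "department official",
    "minister",
]

QUESTIONS = [
    "What is working?",
    "What isn't?",
    "How strong are the links in the delivery chain?",
    "What could be done to improve performance?",
    "What advice do you have for the prime minister?",  # always motivational
]

# Two contrasting field sites (invented names): one where the data is good,
# to test solutions, and one where it is poor, to diagnose problems.
LOCATIONS = {"good data": "Borsetshire", "poor data": "Midsomer"}

def interview_plan():
    """Yield one interview slot per location, link and question:
    the frontline first, then back up the chain, the same set of
    questions at every level."""
    for profile, place in LOCATIONS.items():
        for link in DELIVERY_CHAIN:
            for question in QUESTIONS:
                yield place, profile, link, question

for place, profile, link, question in interview_plan():
    print(f"[{place}, {profile}] ask the {link}: {question}")
```

Trivial as it looks, the symmetry matters: because everyone in the chain answers the same questions, the review team can see exactly where the messages, skills or relationships break down.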
By the end of week three or four, then, the team would have tested its hypotheses from week one and refined its understanding of what the problem was and how it could be solved. That just left the preparation of a brief, compelling report, usually in PowerPoint, with some killer charts and clear recommendations. Again and again, these Priority Reviews proved their worth – so much so that, after a while, instead of applying them only when we had a problem, we applied them routinely, even where the data looked promising. The result was that we anticipated and avoided serious problems, and therefore moved from cure to prevention. The biggest challenge in most cases was not carrying out the review, but persuading the relevant officials at the outset that it would be a good use of their time, given that they were busy already, albeit sometimes responding haphazardly to the very problems the review would enable them to solve. The other reason they had for resisting the idea was less honourable, but understandable nevertheless – they were worried about what they might discover out at the frontline. Once we had their commitment, though, they almost always found the process genuinely riveting. It had pace, energy and insight. Above all, once it was done they could do their jobs better. Any government in the world can do this with virtually any problem. The secret lies in the urgency and the positive, can-do tone.

Idris Jala and his Pemandu colleagues in Malaysia invented a variation on Delivery Chain Analysis called, memorably, the Putrajaya Inquisition. Putrajaya is Malaysia's striking Islamic capital in the jungle, not far from Kuala Lumpur. When Pemandu identifies an official or group of officials who are the block to progress in the delivery chain, it issues those officials with an invitation to come and see the prime minister in Putrajaya about three weeks hence. Miraculously, by the time the officials arrive for the meeting, the blockage has been unblocked, the problem solved. This leaves the prime minister with the simple task of thanking them for resolving things.

#### The Value of a Crisis

'Why is it that when we have a real crisis... we get the job done?' Blair asked me this on 11 March 2002. Good question. The answer is that in a real crisis, solving the problem is prioritized and everything else on the agenda takes a back seat for a while; if money is required, it is found; above all, the normal constraints of the day-to-day are lifted. As Blair discovered, the military in particular are very good in a crisis. Alastair Campbell once exclaimed, 'The only people who deliver in this country are the spies and the soldiers.' The military are pre-eminent in a crisis because that is what they are trained to be.

Sir Kevin Tebbit was the permanent secretary at the Ministry of Defence from 1998 to 2005. Though a civil servant through and through, Sir Kevin had that upright military bearing and clipped, lucid way of explaining things that you might expect to find in a top army officer. In his role, he had more opportunities than he might have wanted to see the military handle a crisis, including wars in Sierra Leone, Kosovo, Afghanistan and Iraq, as well as civil emergencies such as the fuel crisis of 2000 and the outbreak of foot-and-mouth disease in Britain's cattle in 2001. Once asked to take something on, he explained when I interviewed him, the military just get on with it. They may not even be conscious of how they go about it; it's just what they do.
'What you see as a detached observer,' he continued, 'are clear lessons in how to handle a crisis.' Watching them, Tebbit identified four key factors.

1. Preparation and improvisation. The military combine the boring detail of routines, preparation, training and drill – everything from square-bashing to table-top planning – with mission command; officers are trained to extemporize within their delegated area of responsibility so they can act promptly. In other words, the military don't choose between routine and improvisation. They do both (as Nelson taught them long ago).

2. The Red Team. Drawing on an idea which originated in the US, for each mission British military commanders establish a Red Team, a small group whose job it is to criticize the way the mission is being run, to try to identify flaws in the planning and action, and to identify potential risks or circumstances in which the mission might go wrong. In short, they are authorized to be frank, blunt and critical. Dissent and critique are legitimized and incentivized. This makes groupthink, identified by Anthony King and Ivor Crewe as a major cause of blunders in government, much less likely.

3. Battle rhythm. The military, says Tebbit, start with the assumption that they will dominate the crisis and solve the problems once they establish a battle rhythm. Every morning and evening, promptly at a fixed time, all the relevant military players – logistics, intelligence and the battle commanders, as well as the Red Team – assemble to assess the state of play and make decisions about what to do next. This establishes a rhythm (or routine) of review, decision-making and action which drives progress and, if done well, means that problems are addressed promptly rather than neglected.

4. Command centre. At the centre, the military make sure they have sufficient capacity – erring on the side of overdoing it – to get the job done. Commenting on the foot-and-mouth crisis of 2001, Tebbit said that the Ministry of Agriculture, Fisheries and Food (MAFF), with veterinary surgeons in the lead, simply didn't get its arms round the crisis. They consistently underestimated the capacity required to solve it. Only when the military took charge was this problem rectified. As he explained to me, 'The vets didn't know the difference between consultation and discussion on the one hand and command and action on the other.' The military clearly did.

Once you've seen a crisis solved by applying these techniques, it's a short hop to looking at a problem which is just shy of a crisis and deciding to apply the crisis techniques to it. This is what struck Blair so forcefully in early 2002. The previous year, the outbreak of foot-and-mouth had threatened the livelihoods of countless farmers and the state of the British countryside. It had also threatened the reputation of Blair's government in the run-up to a general election. Blair postponed the election for a month and, frustrated by the ponderous response of MAFF, brought in the military. He himself led the emergency response. Cattle were slaughtered and smoke rose from the piles of their carcasses. People were banned for a while from their beloved countryside walks. Terrible though it was, it was clear that the government had 'gripped' the problem at last and was on the way to a solution. Blair's respect for the military was enhanced further.
The following February, when the prime minister summoned the Home Secretary and me to his office in Downing Street to discuss the growing epidemic of street robberies – muggings for mobile phones, mainly – the foot-and-mouth precedent was clearly in his mind. He knew that it would be inappropriate to involve the military, but surely we could learn lessons from the way they had responded. Blair's solution? We should call together COBRA and deal with robbery through that mechanism. In short, we haven't quite got a real crisis yet, so why don't we create one? We did exactly that. It took a while for the Home Office officials to catch up. At first, their minds still worked within the tramlines of business as usual, but the whole point of a crisis is that business as usual is suspended. Within a few months, the number of muggings had been halved. The problem was under control and business as usual – now better informed – returned.

Needless to say, you cannot create a crisis too often because you would then undermine the entire government machine. This book is mainly about making sure that the machine becomes more effective at delivering for citizens, but, just occasionally, when a problem is at Level 4 or heading that way, it is worth remembering the phrase that became ubiquitous after the collapse of Lehman Brothers in 2008: a crisis is a terrible thing to waste.

To summarize, as it becomes apparent there is a problem – you can see it in the data, for example – step one is to acknowledge its existence, step two is to decide what kind of problem it is – both its intensity and its nature – and step three is to do something about it, without giving the benefit of the doubt. Err on the side of rigour. While a delivery unit might prompt each of these steps, the responsibility for solving the problem lies squarely with the relevant minister and department. There is a difference between solving a problem and ensuring that it is solved. The delivery unit might pitch in to inspire and assist, but the responsibility is clear – as is the attribution of credit once the problem is solved.

**Problem-solving Approach**

Figure 24

#### EXCUSES, EXCUSES

We all know the tendency in our own lives. So it is with bureaucracies. In your heart of hearts you know there is a problem that needs solving, but at the same time you don't want to confront it and you can think of plenty of excuses for not doing so, or at least for postponing facing up to it. In such situations, many of us find our willpower wanting, which is why having a coach, a teacher or a mentor can be useful; someone to stiffen your resolve, to tell you to do what you know you ought to. As problems arise, a delivery unit or equivalent can play exactly this role: prompting action that otherwise might be postponed – or, as I used to put it to top officials in departments, part of our job is to help put steel in your spine. For the delivery unit team, what this implies is that you have to recognize the excuses as you hear them and take them off the table. Once you start accepting them, you are well on the way to 'going native'. Here is a guide to the most commonly heard excuses and how to counter them.

**EXCUSE 1:** We're already doing it.

**RESPONSE:** How come we have a problem then? If you are already doing what is needed, where's the evidence it is working? Are you doing it with enough intensity? Maybe you're doing whatever it is on paper in the department, but it's not biting out there in the system.
To put it bluntly, you may be able to fool yourself, but you can't fool me.

**EXCUSE 2:** You're asking the impossible.

**RESPONSE:** It may look impossible to you, but they've done it before in France/the US/China (delete as appropriate). If the other 90 per cent of your system performed as well as the top 10 per cent, you'd exceed what I'm asking. You just need a little bit of courage. Three departments last year told us the same thing and now look – they're flying!

**EXCUSE 3:** It's impossible and we're already doing it.

**RESPONSE:** I promise you I've heard this combination of excuses 1 and 2 more than once from the mouths of officials. The response is that they can't both be true – get real!

**EXCUSE 4:** It's very risky.

**RESPONSE:** Agreed, but not as risky as doing nothing. And if this doesn't work we'll try something else.

**EXCUSE 5:** There will be unintended consequences.

**RESPONSE:** Of course, there always are. Some of them might be positive, incidentally, but either way we'll check. The inevitable consequences of not acting look a good deal worse.

**EXCUSE 6:** By intervening you are distracting us from delivering.

**RESPONSE:** If you were delivering, we wouldn't be intervening! We'll help you deliver once the way forward is clear, but understand we are not going away until the problem is solved... at which point we will want to congratulate you.

RULE 40
TAKE ALL THE EXCUSES OFF THE TABLE

##### LEARN FROM FAILURE

Ernest Bai Koroma was elected president of Sierra Leone in 2007 on a platform of delivering improvements in energy, transportation, health and education. In a country which not long before had been embroiled in a brutal civil war that left 50,000 dead and 80 per cent of the population living on less than a dollar a day, Koroma's commitment was a tall order. He published his Agenda for Change in April 2008, but as Michael Scharff comments in his account of this work, 'Koroma quickly found that implementing his agenda was far more difficult than writing it.' Beautifully put.

He established a Strategy and Policy Unit (SPU) to oversee implementation, in part on the advice of Tony Blair's Africa Governance Initiative (AGI). The idea was right, but somehow it did not quite work out. Governance in Sierra Leone was genuinely difficult; many of the nation's most talented people had left the country during the war and as a result many key posts were filled by people who were not up to the job. The ministries themselves were often without power for hours or even days on end. Moreover, the first version of the SPU had problems of its own. Its five key staff did not work effectively with the ministries. Relationships were confrontational. Partly this was because they were too senior – among them a former minister and a former attorney general. Also, they were much more interested in thinking 'out of the box' than about the nitty-gritty detail of ensuring things got done. Added to that, no one was clear what the unit's mission was – was it really delivery or, as its name implied, something much more to do with long-term strategy? As we've seen, one of the key lessons of the science of delivery is that if you combine strategy and delivery, more often than not the former trumps the latter in priority – in Sierra Leone they learned this lesson the hard way. Finally, and crucially, the SPU failed to establish the routine stocktakes. By 2009, though there were some successes, the SPU experiment had failed.

As in life and sport, so in government.
What marks a leader out is not whether they experience failure – of course they do – it's how they respond. It must have been a temptation for President Koroma simply to close the SPU down and move on, but he didn't. Instead, identifying the problem correctly as one of implementation rather than design, in 2010 he reconstituted it, having learned the bitter lessons of the previous three years. The remodelled SPU, again taking advice from Blair's AGI, set out, in Scharff's words, 'to reduce the number of priorities on the agenda, develop a stronger rapport with ministers, rework the monitoring system... and improve the quality of information provided to the President'. The new SPU also hired a different kind of person: candidates who combined management experience with vital interpersonal skills, supported by a bigger team of more junior analysts. Crucially, too, they established monthly stocktakes which the president chaired and at which SPU officials led the discussion. This gave ministry officials working on the priorities access to the president. Meanwhile, the president, with an excellent memory, was able to remind officials that decisions taken months before ought by now to be implemented.

Sierra Leone began to make progress. The director of the SPU, Victor Strasser-King, appointed following his success overseeing a dam-building project, put this progress down in part to the sharper accountability and in part to the people. 'You have to be very particular about the people you hire. You don't want people who come in with their own agenda.' Miatta Kargbo, who played a key role in the SPU's health work, put it this way:

> If a ministry is just told to 'set targets and implement them and we'll get back to you later', it's not going to get done. But the monthly updates, the advisers being in ministries and engaging as partners, the whole change model that you're not just there to monitor... but there to partner, drive change, remove bottlenecks and facilitate getting the work done, that's really important.

Only someone who deeply understood the science of delivery could express those sentiments so succinctly. As a result, she said, 'together we are transforming our great nation'. It is important to remember that had they not chosen to learn from failure and try again, the chances are they would never have succeeded.

RULE 41
LEARN ACTIVELY FROM EXPERIENCE (failure is a great teacher)

##### RESISTANCE

The French love a good revolution. The tradition of opposing government on the streets of Paris, established in 1789, is alive and well. No sooner does a government embark on any major reform of government or the public services than members of the relevant interest groups pour out onto the streets, shut the service down and wait to see whether the government has the courage to take them on. Invariably it doesn't, which is why France spends such a large proportion of GDP on its public services without the stellar results such lavish expenditure might be expected to deliver. Not every nation has the same revolutionary tradition as France, but in most countries, when radical reforms are proposed, there are significant and powerful vested interests which will seek to defend the status quo. Often they have been beneficiaries for decades of the kind of failed Trust and Altruism discussed in chapter 3, and reform for them is likely to be at best uncomfortable. Thus the problems that occur with implementation are often associated with managing or facing down resistance.
There is a fondly held notion among civil servants, the public and public sector workforces that the way to avoid resistance is to get 'buy-in'. If the government consulted more and conceded more to bring their opponents onside, all would be well – that is the theory. And yes, of course government should consult, but it is totally unrealistic to believe that buy-in will inevitably result. The fact is that at least some of the opponents of a government reform are probably determined to try to resist it successfully, and any concessions you make to them will only encourage them to come back for more. In these circumstances, delivering successfully depends not on getting buy-in but on anticipating where the resistance might come from and applying some proven techniques to ensure the best chance of success. Here are some guidelines.

**1. Implement well**

The widely held view that the way to bring about change is to 'win hearts and minds' and then proceed is largely a myth. In fact, the reverse is true: you need to proceed, and if you do so well – a very big and important 'if' – hearts and minds will follow. This is because beliefs follow behaviours. Think about it at a practical level – you might hope that a new piece of software for your computer will enhance your performance, but until you try it, learn how to use it and see that it works, you are unlikely to believe in its power. The behaviour – using it – leads the belief. So it is with a government reform: it is a journey into the unknown, leaving a safe haven behind. Why take the risk? Can I hold onto as much as possible of what I've got? No amount of persuasion, least of all from politicians, who are rarely trusted, is likely to change your mind. However, if you embark on the journey and find it goes well – so much better than expected – then your beliefs will follow. That 'if', of course, applies in spades. Implement badly and the sceptics will have a field day; a brave departure will become a débâcle.

I saw this close up, long ago, when I represented a sceptical vested interest group, the National Union of Teachers, at the time when John Major's government in England was introducing tests in English, mathematics and science for eleven- and fourteen-year-olds. The objective behind the tests was twofold: to check that the children were learning so that parents could be informed; and to hold the schools to account for their performance. Such a mix was unlikely to appeal to union members who only a year or two earlier had questioned the idea of a national curriculum, never mind national testing. Had the government proceeded with due care and attention, it could probably have imposed the tests in spite of the scepticism of most teachers and the downright opposition of some, but it didn't. Instead, it made one blunder after another. While it piloted the maths and science tests for fourteen-year-olds, it failed to do so thoroughly for English, the very subject where there was the most opposition. Then, at the last minute, the hapless minister John Patten decided that the tests would include a paper on Shakespeare – a good idea (which I secretly supported) had the ground been prepared, but inflammatory in the circumstances. As the opposition began to mount, the government's examinations chief bizarrely claimed that these tests were the best-prepared in the history of education. (Bold claims need to have a foundation in fact.) Meanwhile, the government refused point blank even to meet representatives of teachers, thus adding fuel to the fire.
The result of this incompetence was that in the summer of 1993 all the government's tests were successfully boycotted, not just the controversial English one. John Patten lost his job. I watched all this with incredulity, having drafted the motion for the NUT which set the profession on the road to a boycott. I also absorbed the lesson for government – there is no greater gift to a government's opponents than poor implementation. This was a lesson I took to heart when, a few years later, I found myself in government and responsible for the implementation of education policy (including the controversial tests).

**2. Remember Achilles and Odysseus both have a point – and so does Pericles**

Lawrence Freedman's magnificent history of strategy explains that Ancient Greece provided us with two archetypes of strategy. There is Achilles, who depends on force or strength, and Odysseus, who depends on cunning. For much of the Trojan War, they depended on Achilles, but in the end it was Odysseus' idea of the Trojan Horse which brought them victory. Bearing both of these perspectives in mind makes sense for a government embarking on a major controversial reform, which is why Theodore Roosevelt used to say, 'speak softly and carry a big stick'. As far as possible, government needs to build its strength, but it also needs to be tactically aware and imaginative. In old-fashioned conflicts, strength came in part from reserves of funding or physical material – such as the Thatcher government famously stockpiling coal before the 1984 miners' strike (and the miners playing into the government's hands by going on strike as summer began). More often, though, it is a question of political capital, and here other factors come into play. What else in the government's portfolio, other than your conflict, might be draining its reserves of political capital? If that is happening, you risk losing even if you play your hand well. (I remember seeing Blair's political capital leaking away in the aftermath of the conflict in Iraq.)

Furthermore, political capital ultimately derives from legitimacy – or, in cruder terms, public support – and this is where Pericles comes in, because in a Greek city-state, just as today, public support was fundamental in a conflict. Having sought to negotiate a settlement with Sparta to avoid war, Pericles became an advocate of war: 'If you yield to them you will immediately be required to make another concession, which will be greater, since you will have made the first concession out of fear.' As the war began, he continued to insist on restraint as well as firmness. In the conflict itself he sought to avoid a land battle, in which Sparta would be superior, and to wear the enemy down. As Freedman comments, 'Pericles' success lay in his authority and ability to convince people to follow strategies developed with care and foresight.' This is a statement about political capital, and the point is that it comes not just from what you do, but from how you do it and how you explain it. Some people may support you, others might be persuaded by the rhetoric. Still others will express doubt, but perhaps concede that at least 'he knows what he's doing' (or she, of course).

**3. Think of the whole bell curve and don't be dragged into mud-wrestling**

There is a famous bell curve – attributed to Everett Rogers – which illustrates how you might expect a population of stakeholders to respond to a proposed reform.
**Bell Curve of Adoption**

Figure 25

If you are Pericles, the innovators will be with you, and may be even more enthusiastic than you are yourself. The early adopters, if not initially convinced, will soon come with you. They may not want war with Sparta, but they will readily see that you have acted with restraint and that it would be wrong to make concessions out of fear. The late adopters will be more sceptical, but since you are Pericles, they probably think your insights are better than theirs and that means, reluctant though they are, they'll come on board so long as the proposed conflict goes well. With that much support, Pericles doesn't really need to worry about the laggards; they've got nowhere else to go and can be left ineffectually whingeing on the sidelines.

The problem is that not everyone is Pericles. Plus the modern world is a good deal more open and diverse than it was back then. Maintaining or strengthening political capital therefore involves not just effective implementation, but effective implementation combined with a clear, consistent, convincing message about why the reform is needed, how it is going to work and what the benefits will ultimately be. As Governor Martin O'Malley says, you need the techniques of delivery _and_ a convincing story.

As the message is crafted, it is worth thinking consciously about how it will impact on each segment of the bell curve. The ideal, from the government's point of view, is that the innovators are ahead of government, urging you to go faster; they then provide a counterweight to the much larger groups who would prefer delay. The key target group of the communication should be the early adopters, who can become allies but first need to be convinced that this really is going to happen. Put another way, they don't want to be marched up a hill only to be marched down again. The late adopters are also significant. For them, the message is: 'We understand your concerns and we've taken them into account. We've thought about what you said and have made adjustments on a practical level.' But they too need to believe it is really going to happen. As long as this is in doubt, they have every reason to act (or not act) on their scepticism.

The key group not to target is the laggard group. They are unlikely to be persuaded and will seek to make a lot of noise to draw attention to themselves. Some of the media will love them because controversy makes a good story. This means they do have to be answered sometimes, but the answer needs to be targeted to keep the early adopters onside and bring the late adopters round. What the laggards want, above all, is to drag you into a mud-wrestle in which you and they become equivalent, the early and late adopters start tearing their hair out at the spectacle, and the public conclude 'a plague on both their houses'. In short, Pericles' combination of firmness and restraint has much to be said for it.

Since Pericles' time, the means of communication have changed. Even in the past decade, since I was in No. 10, they have done so dramatically. Social media have become ubiquitous. The days of the controlled announcement are over; or, more accurately, the controlled announcement is only a small part of the story. In this context of the explosion of social media, the communications company Brunswick suggests it is better to think less about making announcements and more about joining conversations. 'It is said that there are only seven plots in drama.
We think there are 11 big conversations about the challenges facing the world today – and that corporates [and governments, I would add] need to join these conversations.' To win arguments these days, governments need to engage continuously in these conversations rather than relying on set-piece speeches and announcements.

**The Keys for Dealing with Resistance**

* Don't rush to compromise at the first sign of resistance.
* Address fears of change – don't give way to them.
* Deploy evidence.
* Point out the risk of inaction.
* Constantly emphasize the moral purpose.
* Tell a convincing story.
* Remember the bell curve.
* Join the conversation and stay engaged.

**4. Apply the methods of principled bargaining**

There have been major advances in recent decades in approaches to negotiation. What is surprising is how little effect they've had on the way government and key stakeholders, including unions, negotiate with each other. Here I am referring not just to formal negotiations but also to all those interactions with stakeholders in relation to a major change which are in effect a negotiation. Though the literature on the subject is vast, the impact on government and its relationships is still limited. This is a pity because the potential is significant – better results could be achieved at less cost in damaged relationships.

The diagram below (Fig. 26) is the best way to summarize the basic ideas. It is simple really. In a negotiation, if you have little concern for yourself and your point of view and not much for the other either (bottom left), you might as well withdraw. If you have high concern for yourself but not much for the other (bottom right), you will no doubt take them on. If you have low concern for yourself and your own position but high for theirs (top left), you will yield. Why not? Meanwhile, your opponents are thinking about the issues from their perspective. If you and they both decide to contend, a collision is inevitable. As a result, what generally happens, depending on the outcome of a trial of strength, is that the solution is found somewhere in the middle, usually a messy compromise (middle box) that often satisfies no one, and everyone knows that sooner or later it will be necessary to come back to the issues. Relationships are frayed, perhaps permanently, and the suppressed conflict often festers beneath the surface. Paradise postponed, perhaps indefinitely.

**Negotiating Strategies**

_Source_: Ramsbotham, Woodhouse and Miall, _Contemporary Conflict Resolution_

Figure 26

The top right-hand box is different. Here you have high concern both for yourself and the other. If you are a government, this is in fact where you should almost always aim to be. You may be at odds with doctors, nurses or the police, but in the end you know you depend on them for success. They might not reciprocate your concern for them, but if they are part of a publicly funded service they really ought to, because they share an interest with government in convincing taxpayers that theirs is a service worth investing in. This last point often gets forgotten in the war of words, with the consequence that the public despair. Once the negotiation is pushed up into the problem-solving box, it is possible to be creative and to take off the blinkers that so often limit the field of vision in a traditional negotiation. Once the parties establish a shared purpose – improving the quality of service and outcomes for the citizen while maximizing the value of every tax pound or dollar – then the potential is huge.
If, in addition, the negotiation is informed by evidence rather than merely an exchange of positions, progress can be made. This might sound idealistic, but in my experience simply creating the possibility of a more principled negotiation – whether formal or informal – opens up new options for solving the problem. In any case, I'm not recommending it as an alternative to Achilles, Odysseus or Pericles, which is why I introduced them into the chapter first. Power matters, so does guile and so does persuasion; but to depend purely on these may be to miss the point. Also, as Lawrence Freedman makes plain, if you overdo the exercise of power you risk achieving compliance rather than collaboration. (In the words of another ancient, Tacitus, 'They made a wilderness and called it peace.') Similarly, if you over-depend on guile you risk losing trust. Indeed, one of Freedman's big points is that playing your cards well helps to fill your pot of political capital in a conflict, whereas playing badly drains it. There are sometimes some blockheaded and unreasonable people – in among the laggards – whom you may never bring round, and there is no point being unrealistic about that. But the lesson is that for the most part you are more likely to solve more problems more effectively if you appeal to 'the better angels of our nature', as Lincoln put it, or at least start there.

RULE 42
NEGOTIATE ON THE BASIS OF PRINCIPLE (but don't depend on it)

##### AVOIDING FOLLY

Philip II, king of Spain, was at various times before his death in 1598 also king of Naples, Sicily and Portugal and duke of Milan. Briefly, as a result of his luckless marriage to the ill-fated Mary I of England, he was king of England too. During his long reign, silver from the Spanish Empire in South America came across the Atlantic from Cartagena to Seville in vast quantities. You might think this would have made Spain rich, but keeping the sprawling empire together was a constant drain on Philip's resources, especially once the Netherlands – also part of his empire – revolted and refused to buckle. Philip took ruling seriously. He insisted on seeing every document of significance and many more that weren't. In his rambling Gormenghast of a palace – the Escorial outside Madrid – Philip sat in the gloom commenting endlessly in his spidery script. What's more, he had an obsession. His dead English wife's sister, Elizabeth, was now on the throne in England and had two serious faults as far as he was concerned – she was Protestant and she was aiding the Dutch rebels. Philip planned her downfall: a vast armada would sail from Spain, pick up the flower of the Spanish army from its base in the Netherlands and overthrow both Elizabeth and the English Reformation. Endless planning. Endless spidery script. When eventually, in 1588, after a decade or more of agonized planning, the armada sailed, it proved luckless. More nimble English ships disrupted its progress. Then they launched fireships in among it when it moored on the French side of the Channel; and finally an Atlantic storm blew it beyond England into the North Sea. A few stragglers made it back to Spain, having sailed home the long way, round the north of Scotland and the west coast of Ireland. Many of Philip's advisers had warned consistently of disaster ahead; the logistical challenge was simply too great, especially against the English, who were already becoming masters of the sea.
Barbara Tuchman, the wonderful American historian, had this to say about Philip: 'No experience of failure of his policy could shake his belief in its essential excellence.' That is folly in a nutshell. Five years before her death, Tuchman published _The March of Folly: From Troy to Vietnam_, a sumptuous history of the follies of government, including Philip II's, freewheeling across the centuries. She starts with a reflection:

> Mankind, it seems, makes a poorer performance of government than of almost any other human activity... 'While all other sciences have advanced,' confessed [America's] second President, John Adams, 'government is at a stand; little better practiced now than three or four thousand years ago.'

Tuchman identifies four forms of misgovernment: tyranny, excessive ambition, incompetence or decadence and folly. Her book, she says, is just about the last of the four. Helpfully for our purposes, she defines folly:

> ... the pursuit of a policy contrary to the self-interest of the constituency or state involved. Self-interest is whatever conduces to the welfare or advantage of the body being governed; folly is a policy that in these terms is counterproductive.

She then says that for her enquiry folly must meet three criteria:

1. It must have been perceived as counterproductive in its own time, not with hindsight.
2. A feasible alternative course of action must have been available.
3. It must have been the policy of a group and sustained over a period of time.

She makes special mention of 'woodenheadedness, the source of self-deception' and claims it plays a large part in government. 'It consists in assessing a situation in terms of preconceived fixed notions while ignoring or rejecting any contrary signs.'

It will be immediately obvious how this is relevant to delivery. Folly is all too likely and woodenheadedness all too common in government circles. King and Crewe, cited earlier, identify some classic recent British examples. The science of delivery, if it is to have any credibility, must reduce or eliminate the risk of folly and identify and challenge woodenheadedness wherever it occurs. Conducting the routines rigorously, as argued in chapter 5, and solving problems systematically, as outlined in this chapter, should in fact do so. In the routines, the focus on data and the insistence on an honest conversation make folly less likely. Bringing to bear on problems the views of outsiders as well as insiders has a similar effect. Identifying problems early and solving them before they become intractable also prevents folly.

Still there is a special kind of problem that the science of delivery needs to be alert to; and it may come upon you so gradually that you don't notice from one stocktake to the next and therefore may not seek to apply the problem-solving techniques to it. As on an English summer evening when the light fades imperceptibly and you notice far too late that the darkness is upon you, so with a policy. You might see all the indicators looking OK – a dip here, a dip there, but nothing to worry about – and then far too late you find you have a massive failure on your hands. Darkness has fallen. Or to take a phrase from politics long ago, the Fabian Society used to talk of 'the inevitability of gradualness'. For them it was a good thing; slowly everything would improve, so socialism – to which they aspired – might arrive without the need for revolution. But what if the inevitability of gradualness was a process of slow decline?
Would you notice before it was too late? History is littered with examples of well-intentioned programmes that turn out to be disastrous. At any given moment, the argument for carrying on outweighs the argument for turning back, but the cumulative effect of each incremental decision is catastrophic. The slide into war in Vietnam, which Tuchman examines in depth in _The March of Folly_, is a classic of the genre. Robert McNamara, Secretary of Defense under Kennedy and Johnson, and thus closer to the disaster than almost anyone else, waited nearly thirty years to publish his reflections on the searing experience. What began as an understandable (in the context of the Cold War) attempt to prevent the expansion of Communism across Vietnam became a metaphorical swamp into which the US was sucked deeper and deeper.

McNamara, a highly intelligent and successful former head of the Ford Motor Company, wanted performance indicators of progress, as any businessperson might (and as the science of delivery would recommend). But what indicators could you use to show progress in such a war? McNamara explained his thinking in his 'unsparing' (as the _Wall Street Journal_ called it) account of his experience, _In Retrospect_.

> I was convinced that, while we might not be able to track something as unambiguous as a frontline, we could find variables that would indicate our success or failure. So we measured the targets destroyed in the North, the traffic down the Ho Chi Minh Trail, the number of captives, the weapons seized, the enemy body count and so on.
>
> The body count was a measurement of the adversary's manpower losses; we undertook it because one of Westy's [General Westmoreland, the US commander] objectives was to reach a so-called crossover point, at which Vietcong and North Vietnamese casualties would be greater than they could sustain.

One recoils morally even to read this. Did no one on the inside question the ethics involved? Even if you assume the war was justified, surely winning it with the fewest possible deaths should be the goal? Whereas here McNamara chose 'body count' because, as he almost says, they couldn't think of anything better. And, once that choice was made, success presumably was defined by increased body count. So one oft-repeated lesson of the science of delivery that may help prevent folly is to remember the moral purpose.

Even if you leave this aside for a moment, McNamara's indicator looks like folly because that war in a distant corner of a foreign field could not be won simply by killing more and more of the enemy. History taught that. The situation did too; the war was in Vietnam, to state the obvious. And then, to make matters still worse, McNamara explains that in any case 'often reports [on body count] were misleading'. Westy stated in the spring of 1967, as McNamara tells us, that the crossover point had been reached, but the CIA had a different view: the enemy were increasing their numbers. Even almost thirty years later, McNamara was complaining how 'hellishly difficult' it was to know what to believe. Surprisingly, he does not point out that in wars the army will usually have an incentive to exaggerate its impact, while the intelligence agencies have an incentive to find out uncomfortable truths. In this case, they probably both had a point; the body count may well have been rising, but this was simply provoking more and more Vietnamese to join the defence of their homeland.
To be brutal about it, therefore, if body count was an indicator at all, it was a lead indicator of future Vietnamese strength. The second lesson in avoiding folly is never to lose sight of the big picture, which is what happened to McNamara. Incidentally, if this can happen to McNamara, surrounded as he was by very talented people – 'the best and the brightest' – it's worth remembering that being clever is no guarantee against folly. Indeed, since being clever and being arrogant often go together, it may make things worse. Many other lessons of earlier chapters apply too – confront facts, however brutal, address the deficit of deliberation, don't rely on any single indicator, avoid endlessly giving the benefit of the doubt and be prepared, from a position of leadership, to challenge yourself and everyone around you.

In the end this is what McNamara did. In May 1967, he opposed Westy's request for an additional 200,000 men and called time on the downward spiral to folly they had all found themselves caught up in. He wrote a brave, even moving, note to President Lyndon Johnson, whom he revered.

> The picture of the world's greatest superpower killing or seriously injuring 1,000 non-combatants a week, while trying to pound a tiny backward nation into submission on an issue whose merits are hotly disputed, is not a pretty one.

McNamara stayed in office, though, for another nine months, leaving on 29 February 1968 after a warm exchange of letters with the president. On the way to the formal departure ceremony in his honour, McNamara and the president got stuck between floors in a lift. McNamara's story is a cautionary tale of a once good man marching with the wrong indicators to the kind of folly described so vividly by Barbara Tuchman. The result? Irreversible failure. The next chapter examines how to ensure irreversible success.

RULE 43
GUARD AGAINST FOLLY (it has been common throughout history)

## 7

## Irreversibility

##### LEADERSHIP

There was a bit of clamour outside the hotel, with lots of taxis picking up or dropping off for a function. I said to my Toronto friends that they could drop me and I'd walk the last 100 yards. They laughed at me, but, when I persisted, let me get out. In that 100 yards I realized how people die of cold. With windchill it was minus 30° Celsius that January evening, and I was very relieved to make it back into the hotel.

The event I had attended had been anything but chilly, though. I had spent the afternoon and evening with Dalton McGuinty, the newly elected premier of Ontario, and his cabinet. After years in opposition, they had won a great victory in October 2003 and now had a full term (at least) ahead of them to transform Ontario. They had been following the progress of the Blair administration from afar and asked me to present on how we drove delivery. I set it out for them, found they were riveted by the details of what it would take, and somewhat overwhelmed as well as excited by the opportunity the voters of Ontario had given them. Overall, I was impressed. While maybe not classically charismatic, McGuinty came across as a leader with a quick wit, a willingness to learn, a thoughtful manner with people and, perhaps above all, a fierce determination to seize his opportunity. He was one of those politicians who gave you the sense, like Clement Attlee after the war in Britain, that they had qualities better suited for governing than campaigning, though of course he was good at both.
I left the following morning wondering whether I'd seen the early days of a major success or something that would soon enough end in tears. And I stayed in touch, especially, though not only, over their education reform, which McGuinty had marked out as his top priority.

In this chapter, we will take it as read that you have mastered the content of the previous chapters, and that you are a fully paid-up deliverologist. The question now is: will you see it through? Will the change to which you aspire become irreversible?

It turned out Dalton McGuinty really was in it for the long haul. He won three election victories and stayed ten years as premier, during which time he resolved an energy challenge, fostered innovation in the economy, responded to the financial crisis and its consequences, made good progress on health and saw the Ontario education system rank consistently as one of the best in the world, with high standards and high equity. In September 2010, tempered by seven years in office, McGuinty reflected in a keynote speech in Toronto on what he had learned about the kind of leadership required to drive through reform, as he had done by then in education. Without the kind of sustained leadership he demonstrated, a major reform programme is unlikely to become irreversible.

Early in the speech he recognized the improved results and the top-five placing of Ontario in the global rankings. 'So are we excited about our progress? You bet... Now are we satisfied with our progress? Different question... I'm reminded of that Belgian car that broke the world land speed record in 1899 when it went 100 km per hour. The name of the car was "La Jamais Contente", the "never satisfied".' Then, encouraging the audience of educators to keep learning and improving, he set out his eight lessons for political leadership. They are an excellent exposition of the leadership required from a head of government who really wants to drive profound change in a major field. I've summarized and generalized from them below.

Lesson 1 – The drive to make progress can't be a fad. It has to be an enduring government priority backed by resources and an intelligent plan.

Lesson 2 – A reform is not important to your government unless it's important to the head of your government – personally.

Lesson 3 – It doesn't matter how much money you invest, it doesn't matter how much you want change, you won't get results unless you engage the workforce.

Lesson 4 – To succeed, you need to build capacity among staff and to empower the right people in the right way.

Lesson 5 – Settle on a few priorities and pursue them relentlessly.

Lesson 6 – Once you start making progress, you've got permission to invest more.

Lesson 7 – The job is never complete.

Lesson 8 – The best way to sustain your effort to improve is to keep it personal. You yourself, as a leader, have to care.

McGuinty finished the speech by talking about the role education had played in successive generations of his family. So it's personal, political and persistent. No amount of science of delivery can succeed without sustained leadership, ideally, as in McGuinty's case (or Shahbaz Sharif's in Punjab), from the head of the government.

If McGuinty is a role model at head-of-government level, then at ministerial level it is hard to find anyone more effective than Andrew Adonis, who first in schools and then in transport drove significant progress in the Blair and Brown governments. Andrew and I were colleagues in No.
10 before he became an education minister following the 2005 election. Andrew had been pursuing education reform for years from Downing Street, so he knew what he wanted to do. His priority was to create a large number of independent state secondary schools called academies which would deliver in the state sector what Britain's great independent schools were delivering in the private sector. Above all, he wanted to be sure that these academies would transform educational opportunity in the most deprived areas in England, especially in its big cities. This wasn't all he wanted to do, but it was his passion and his priority. He was close to deserving that accolade: 'a single-issue fanatic'. As he put it himself, 'The challenge was simple. It was to create successful all-ability secondary schools, absolutely no-nonsense about standards and results... and making it possible to bridge the debilitating divide between state and private education.' Later he brought the same focus and drive to ensuring there would one day be a high-speed train service from London to Manchester, perhaps of the same quality as the one from Tokyo to Kyoto.

Soon after he had become a junior education minister, I asked him how he was going about pursuing his passion for the academies. He had discovered what countless ministers before and since have discovered: your day gets filled up with stuff! And if there isn't quite enough paperwork coming down the pike, your private office can always fill another slot with a futile meeting or two. 'I spend fifty hours or so a week just keeping the work ticking over,' he said, 'and then I really drive change in the other twenty-five hours.'

Eventually, when the academies policy had become successful and, indeed, irreversible (because the coalition government that succeeded Blair and Brown adopted it), Andrew had time to reflect on what it takes to be a reforming minister. His thoughts, set out in _Education, Education, Education_, are consistent with McGuinty's perspective, but go deeper. Here they are:

1. Address the big problems.
2. Seek the truth and fail to succeed.
3. Keep it simple.
4. Be bold, but go with the grain as far as possible.
5. Lead and explain, lead and explain.
6. Build a team.
7. Build coalitions not tabernacles.
8. Champion consumers not producers.
9. On important issues, micromanage constantly.
10. Keep calm and carry on.
11. Remember that reform is a marathon, not a sprint.
12. Always have a plan for the future.

Eisenhower, whom we met at the beginning of chapter 4, offered advice on leadership very similar to that given by our two modern politicians. He was referring to leadership in the military, but might just as well have had politics in mind.

> All of us are human and we like to be favourably noticed by those above us and even by the public. An Allied Commander-in-Chief, among all others practicing the art of war, must more sternly than any other individual repress such notions. He must be self-effacing, quick to give credit, ready to meet the other fellow more than half way, must seek and absorb advice and must learn to decentralize [we would say delegate now]. On the other hand, when the time comes that he feels he must make a decision, he must make it in a clean-cut fashion and on his own responsibility and take full blame for anything that goes wrong whether or not it results from his mistake or from an error on the part of a subordinate.

Everything depends, he added, on 'your personality and good sense'.
There is nothing difficult conceptually about McGuinty's, Adonis's or Eisenhower's advice; the challenge, as with delivery in general, is being disciplined about following it when there are so many potential distractions and pitfalls. To realize the difficulty, you just have to look around the world and see how few leaders and ministers are able to pursue their agenda over the long run. In some cases, they don't have the personality and good sense, but in many they simply don't have the single-minded discipline.

Sometimes this is also because they are not around very long. The same is true all too often of officials in Pakistan and India. Some British prime ministers – Blair and Brown both, for example – reshuffled their ministerial teams so often that unfortunate ministers rarely spent more than a year in a post before the kaleidoscope was shaken and they emerged in another role, only for the cycle to be repeated. To take just one example, Kim Howells MP held six different roles during Blair's ten-year premiership. I saw a good deal of him when, at Transport, we were enjoying collaborating on the brave effort to ensure that the trains ran on time. We were just getting some traction when, for no reason I ever understood, he was gone again, this time to the Foreign Office to deal with the Middle East.

In other cases, the temptations of the office seduce ministers. Perhaps they become unable to resist the comforts of the role and allow the civil service to mollycoddle them. Perhaps they would rather announce initiatives, make speeches and open buildings. There is a career to be made in this line of business – but it is not a career which delivers improved outcomes for citizens, and certainly not irreversible change.

RULE 44
THERE IS NO SUBSTITUTE FOR SUSTAINED, DISCIPLINED POLITICAL LEADERSHIP

##### IRREVERSIBILITY

Before coming to the definition of irreversibility, it is worth pointing out that it is different from a word that has become fashionable, especially in development circles – sustainability. Often the first question I get asked when I tell the story of the Punjab Roadmap is, 'Is it sustainable?' Part of the problem with this question is that the word has become so hollowed out it is in danger of losing its meaning. Indeed, it is now widely used to oppose anything radical – 'it won't be sustainable' – and thus to undermine ambition. When the Roadmap began to deliver significantly improved outcomes, its critics said, 'Yes – but it won't be sustainable.' The truth is that, at any moment with any ambitious programme, whether or not something is sustainable depends on what people do – will they see it through or not? This is a statement of the blindingly obvious, but it is too often missing from the debate. And if people do see it through, by applying the disciplines of delivery, what they will achieve ultimately is not just sustainability, but irreversibility.

Irreversibility means not being satisfied merely with improvement in outcomes but asking whether the leadership, structures and culture are in place that will guarantee the right trajectory of results for the foreseeable future. This definition is tough. It's a high bar and it should be the objective for any major reform. See it through. Persistence cubed. And don't get bored on the way. The next section explains how to avoid that fate.

##### OVERCOMING BOREDOM

'Boredom is... a vital problem for the moralist, since half the sins of mankind are caused by the fear of it,' Bertrand Russell once said (perhaps self-reflectively).
Not just for the moralist, I should add, but also for the deliverologist. Much of the excitement in politics, whether for the politicians or the civil servants who work for them, lies in the new (the new strategy, the new crisis, the new state of affairs) or in the public side (the speeches, the media and, if you're lucky, the celebrity or semi-celebrity status). But irreversibility depends on persistence, as both McGuinty and Adonis make plain. Stick at it, get into the details, oil the engine, whatever it takes. It is, as Adonis says, a marathon not a sprint. There are some people who have the right kind of mindset to grind out a change over time (I discovered I was one of them), but the evidence suggests that in politics they are relatively rare, which means the challenge is to mitigate the boredom somehow, or even better to find ways of making the work interesting. This is not as hard as it sounds.

The potential for boredom can certainly be mitigated if the political leader takes a continuing interest in the issue, as McGuinty did with education or Blair with health. Regular access to a political leader is like gold dust, and if they value persistence, it makes it much more interesting for you too. And a delivery unit can reinforce this – the steady round of data, routines and problem-solving makes it much harder for a minister or department to deviate from the chosen path, even if they'd like to. This is why the routines described in chapter 5 are so important.

But mitigation may not be enough. As McGuinty points out, at least for himself, the fact that something is a major aspect of public policy may not be enough to sustain a focus. It also needs to be personal, and to have deep meaning at that level as well as the system level. This means constantly reminding yourself of the purpose and the benefits to citizens.

Then again, the best way to overcome the boredom is to turn it around. Beneath the surface of the data there is always plenty to fascinate – the contrast between one hospital and another, the geographic variation, race or gender differentials and occasionally the bizarre, such as the elephant crossing the motorway (in England) to delay traffic. Better still is if you can achieve that 'anorak' or 'geek' mentality where minor changes in the data become fascinating and the tension mounts each week or each month as you wait for the next set of data. As Adonis says, undisputed truths are rare in the field of public policy, so there is always a need to debate and understand the meaning of changes in the data. Here what matters most is the people you spend time with, which is why any successful leader, whether in politics, bureaucracy or business, should ensure they have a team around them that will challenge and question without fear. Being surrounded by yes men and women simply adds to the boredom, whereas constant challenge from within the team keeps things vibrant (and reduces the risk of folly).

Finally, from the data and regular visits to the field, which McGuinty and Adonis both emphasize, you can always go deeper into a problem. It is one thing to understand national trends in reading performance, another to know what motivates Bangladeshi eleven-year-olds in London and their parents to succeed. It is one thing to track rail performance over time, another to understand why leaves on the line are a problem in New York City but not in Warsaw. Moreover, as time progresses, there are new allies (and perhaps new enemies) to keep you entertained.
And at the end of the rainbow there is that crock of gold called credit, or even a legacy. This, needless to say, often turns out to be more elusive than you might think. For one thing, as the proverb has it, success has many parents and failure is an orphan. The truth is that, at the outset of any proposed reform, almost certainly there were several people who could lay a claim to having originated it, although at that time (particularly if the idea is ambitious and/or controversial) it is likely that there were many more people questioning it. Through the implementation dip, a number of sympathizers probably peeled off too. But then, as it gathered momentum and began to deliver, those who peeled off, and indeed the original sceptics, began to come on board. Some of them even discover that they were 'always' in favour of it after all. So it is not just that success has many parents; it acquires even more as time goes by. In terms of managing the boredom, therefore, it's as well to prepare for sharing the credit with a larger group than you might have expected.

Then there is the media. Maybe at the beginning, when you announced the change, you got some good headlines, but after that the chances are the critics got more airtime and column inches than you did. As you went through the implementation dip, it got worse. You held your nerve, but it was no fun. At least, when momentum is generated and the results are coming through, you will be rewarded for your courage and persistence... or maybe not. Good news barely registers. 'Government Success' is not much of a headline (at least in countries with a free press; and where the press isn't free, the headline won't be believed). Having paid the heavy price earlier, you now struggle to get a story on page 17. Worse still, the media, unforgiving as it is, does not just avoid reporting the good; it also changes the story. Even if you do get your good news onto page 17, the chances are it will be drowned out by some other story, related to your area and on page 1 or 2, which is bad news. I remember vividly how, as we reaped success in reducing health wait times in 2004 and 2005, the media moved the focus onto hospital-acquired infections (we reduced those as well later, in the teeth of opposition from the health service, but that success too was drowned out).

And then there is one last twist to the diminution of credit. Somewhere along the way, the validity of your data – the currency in which your success is measured – will be questioned, and however robust it might be, because it is government data the chances are you will not be believed. I recall all too clearly a press conference I did in Lahore with the Secretary – Schools on the latest set of positive data from the Education Reform Roadmap. I set out the figures and then there was a question to me (in English) about how valid and reliable the data was. Before I'd finished answering, the rest of the room was engaged in vigorous argument with the secretary. They were sufficiently polite to the outsider not to yell at me; the shouting match was in Urdu.

The truth is that for any major reform, you have to win the battle twice: once on the ground to make the change real, and once on the airwaves to convince people that it is real. It is indeed a vital part of the change programme to take on the battle in the media as well as in the real world, because the two are integrally linked, but don't imagine that as a result, in return for the boredom and persistence, you will be rewarded with credit.
The crock may be smaller than you think and its contents might not necessarily be gold. Julio Frenk, the highly successful former Minister of Health of Mexico who now leads the School of Public Health at Harvard, has a nice line on how to achieve a legacy. The wrong (but all-too-common) way, he says, is to claim, as they say in Mexico, that 'My predecessor was an idiot and my successor is a traitor.' The right way is to take care first of two things: don't make big mistakes, especially of the personal scandal variety; and don't ruin something good that you inherited, even if it was from a government of a different political persuasion. Get these fundamentals in place, he says, and then embark upon successful reform; then and only then do you have a chance of a substantial legacy.

Years after the event, when the media is focused on the latest hapless government attempting the latest hapless reform and you are enjoying your retirement from politics, someone somewhere might publish some academic papers proving that you did make a difference. And later still, with a bit of luck, it may get written into a history book. In this way, legacy is often a pale and distant echo. Which means that the only certain way to alleviate the boredom and handle the pressure is to do what you are doing because it is the right thing to do. Moral purpose trumps all the rest every time.

RULE 45
PERSIST (but don't expect the credit)

##### THE UNVARNISHED TRUTH

Both McGuinty and Adonis emphasize the importance of continuing to learn, and there are plenty of opportunities to do this, whether it's through debating the latest data with officials, reading reports, or getting out on the frontline and seeing for yourself. With the right kind of curiosity, each and all of these can be genuinely fascinating. But there is a deeper question – how do you get the unvarnished truth about what is happening at any given moment and how you are doing?

Grigory Potemkin, as well as serving Empress Catherine the Great as a general and governor of a region, was her lover. When he won a great victory at Ochakov, he received the kind of praise that must be unusual from a monarch or political leader of any kind. 'I take you by the ears and kiss you in my thoughts,' the empress wrote. As a result of his various campaigns, Russia acquired swathes of territory from the Ottoman Empire in the Crimea and around the Black Sea. He was rewarded by being made governor of this New Russia. When the empress came to visit for an extended period, he erected temporary villages along the banks of the river Dnieper to impress her. Once the royal party had moved on, each village would be dismantled and rebuilt downstream to impress Her Majesty once again. Hence one of Russia's greatest military leaders has become famous in history not for what he actually accomplished, but for having these fake villages named after him – the Potemkin villages. (Another way to achieve legacy is to have something named after you.)

Not everyone has pulled off such a massive deception of a visiting dignitary (and historians debate whether Potemkin himself did), but what we might call the 'Potemkin Syndrome' is alive and well. The chief minister of Punjab was shocked by the photographs we showed him of Punjab schools, not because he hadn't ever visited schools, but because, whenever he had done so, the authorities had done the equivalent of a Potemkin village makeover in advance.
Even for a minor celebrity like me, if they knew I was coming they tidied up and made sure the teachers were present. This is not just a syndrome in the developing world; it was the same for Blair. Everything would be meticulously planned in advance, and the twenty-first-century need for security simply adds to the air of unreality. The visits are still important and an opportunity to learn, but they are not real. And the same applies to visits by ministers, civil servants and other dignitaries. The unannounced visit is hard to achieve, but much more revealing.

On my trips to Punjab, I'm only rarely able to escape the security people (and the truth is that mostly I don't want to, because I value their advice). But I do want to visit schools unannounced. To give one example, in 2012 I was able to do so out by the Indian border in the district of Kasur. You could see the barbed wire and the armed watchtowers, beyond which lay India. The river Sutlej, broad and slow, wound its way past. Local farmers pulled a ferry laden with hay across it. A kingfisher put on a show over the river and the cattle wandered freely. At the boys' school some students were present and the water ran from the handpump, which was good. The boys were outside, though, some sitting on the dusty ground, others on a rickety bench. The teacher was doing suspiciously little teaching, and while the boys had textbooks, they were tattered and torn and all open at different pages. There were some classrooms, but they lacked furniture and one of them was full of military equipment, presumably for the border guards. Meanwhile, the girls' school was locked up and the teachers were said to be on 'World Bank training'. Maybe, but the lock had a suspiciously permanent look.

In late 2013 and early 2014, I made further unannounced visits and mostly I was impressed by the visible impact of the Roadmap, but I did stumble across one school where thirty or more children aged from four to ten were locked inside the school compound but shut out of the classrooms. There was not an adult in sight and no water ran from the pump. Soon enough the headteacher came running; he had been relaxing in his house nearby. Such a visit doesn't tell you everything, but it is a reality check. Importantly, too, it affects your emotions: it redoubles your energy and your commitment to changing the facts on the ground.

The master of the unannounced visit was Theodore Roosevelt. His stellar career, which catapulted him to the presidency by the age of forty-two, saw him appointed police commissioner of New York City in his mid-thirties. Roosevelt was a larger-than-life character, destined to dominate any room he walked into. The British Liberal John Morley once exclaimed that he had seen 'two tremendous works of nature in America – the Niagara Falls and Mr Roosevelt'. Having fired the two top officials in the New York Police Department within three weeks of taking office as commissioner, Roosevelt turned his attention to the officers on the beat. To root out endemic corruption he needed to address the frontline as well as the head office. Jacob Riis, a journalist who had become a friend, urged Roosevelt to accompany him on some unannounced night walks. Riis knew the city, paving stone by paving stone. Larger than life though Roosevelt was, hidden under a floppy hat and a big overcoat he became indistinguishable from the rest of New York nightlife.
He found patrolmen sleeping, enjoying meals and in one case chatting to a prostitute, who recommended to the unfortunate officer that the way to deal with the inquisitive stranger in the large overcoat was to 'fan him to death'. The next morning, these officers and a group of others were summoned to the commissioner's office for a severe dressing down. Roosevelt loved it: 'these midnight rambles are great fun,' he wrote to his sister. The newspapers loved it too. As far away as San Antonio in Texas, the _Daily Light_ reported that 'Police Commissioner Roosevelt finds that he can secure more information in one night than he would in a year in broad daylight.' In short, he learned a lot, as no doubt did the patrolmen of New York City.

Irreversibility depends on changing the facts on the ground, and there is no alternative but to check. If you can't always do it yourself, remember that, unlike in Roosevelt's day, everyone has a camera now. Idris Jala, head of Pemandu, sometimes sends his staff out to check that a piece of road or a bus shelter promised by the relevant department is actually there.

##### OBSESSION AND THE ELEGANCE OF YOUR BEHAVIOUR

Not all learning comes from visits or even from the flow of data on outcomes. Nor is it the case that only the outcomes matter and the processes can be left to themselves. As Andrew Adonis points out, for the things that really matter micro-management is not a sin; it's a necessity.

In the Delivery Unit, once we were established, we had the flow of data and visibility of the relevant departments' delivery plans. We also had the dialogue that led up to the stocktakes and the stocktakes themselves. For a while we thought that was enough information for us to check on our impact. Then our deputy director, Peter Thomas, persuaded me that he should be allowed to tour the headquarters of several top-performing American companies. He came back passionate about a new insight: we weren't learning enough to ensure success. Yes, we knew in broad terms what progress was being made on wait times or railway performance, for example, but we knew little about the processes we had put in place to encourage improved delivery – and anyway, in 2003 the data itself was hardly overwhelmingly encouraging. There were some good stories, but the whole programme of priorities in the summer of 2003 did not look on track, never mind irreversible.

Overcoming my initial reluctance, Peter insisted we had to identify the elements that we believed would make the Delivery Unit successful and then check whether we had them in place. He was persistent to the point of being irritating – one of his sterling qualities, incidentally – and what he developed, and we as a management team then adopted, became very important to us. We decided in effect to apply deliverology to our own processes. To ensure our future success, we agreed we had to have four elements in place:

1. Great people in our team
2. Great processes, such as stocktakes and monthly meetings
3. Great relationships with key players in government
4. Great outcomes – were the goals we had set being achieved?

The key point here is that we understood that success on the fourth point would be a consequence of success on the first three. To put it in jargon, the first three were lead indicators. If we put in place the first three, which we controlled entirely, the fourth would follow.
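The logic of lead indicators is simple enough to express in a few lines of code. What follows is a minimal sketch, not the unit's actual tooling: the element names come from the list above, but the ordering of the four-point scale (1 = weak, 4 = strong) and the 'act here' threshold are my own illustrative assumptions.

```python
from dataclasses import dataclass

SCALE = range(1, 5)  # the four-point scale; 1 = weak, 4 = strong (assumed)

@dataclass
class Element:
    name: str
    rating: int   # agreed by the management team at each review
    lead: bool    # True for the three elements the unit controls directly

    def __post_init__(self):
        if self.rating not in SCALE:
            raise ValueError(f"{self.name}: rating must be between 1 and 4")

def six_monthly_review(elements: list[Element]) -> None:
    """List lead indicators first: if any of them is weak, that is where
    to act, because the outcome is expected to follow from them."""
    for e in sorted(elements, key=lambda e: not e.lead):
        kind = "lead" if e.lead else "outcome"
        flag = "  <-- act here" if e.lead and e.rating < 3 else ""
        print(f"{e.name:<14} [{kind:<7}] {e.rating}/4{flag}")

six_monthly_review([
    Element("People", 3, lead=True),
    Element("Processes", 2, lead=True),
    Element("Relationships", 4, lead=True),
    Element("Outcomes", 2, lead=False),
])
```

The point the sketch makes is the one in the text: a weak outcome rating is never addressed directly; it is addressed by fixing whichever of the three controllable elements is lagging.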
We took these four elements, put them into the kind of balanced scorecard advocated by Robert Kaplan, and every six months rated ourselves (inevitably) on a four-point scale (see Fig. 27). We carried out this rating thoroughly. We commissioned independent people to interview ministers and officials in departments about their view of us and the quality of our relationship with them. When any member of staff left, we carried out an exit interview. We put one of our team in charge of each process so that, in addition to driving progress on whichever target they were responsible for, they had a responsibility periodically to review that process – a monthly note, for example – and see whether we could make it better still. Thomas's point was that we had to get our house in order, and that if we did, we would enhance our leverage.

Figure 27: PMDU Balanced Scorecard

The effect of all this was revolutionary. We were no longer leaving anything to chance. As great sportspeople do, we controlled at the level of detail everything we could control. (Maybe this is why the press referred to me as 'the control freak's control freak'.) In the light of Britain's later phenomenal success in cycling, I prefer to think that we were close to developing what Sir David Brailsford, former head of UK cycling, calls the aggregation of marginal gains. This is what he says:

> People often associate marginal gains with pure technology, but it is far more than that. It is about nutrition, ergonomics, psychology. It is about making sure the riders get a good night's sleep by transporting their own bed and pillow to each hotel. It is about using the most effective massage gel. Each improvement may seem trivial, but the cumulative effect can be huge.

So huge, in fact, that Britain dominated the cycling events at the London Olympics (winning more gold medals in cycling than Australia won in the entire Games). Some of Britain's rivals even worried that the British bikes had rounder wheels! Meanwhile, Team Sky, also managed by Brailsford, provided two consecutive (British) winners of the Tour de France. In the Delivery Unit, we were on the way to becoming as obsessive as Dave Brailsford. As Nassim Nicholas Taleb puts it, with only slightly less obsession than Brailsford:

> Your last recourse against randomness is how you act – if you can't control outcomes, you can control the elegance of your behaviour.

This kind of obsession with your own processes at a level of detail unlocks the door to irreversibility. Next time someone tells you that leaders should focus on the big picture and leave the detail to subordinates, show them the door.

RULE 46 LEARN THE LEARNABLE AND CONTROL THE CONTROLLABLE (obsessively)

##### BUILDING CAPACITY

Nassim Nicholas Taleb's book _Antifragile_ describes how organizations can become more than resilient; they can develop so that they don't just survive shocks, they benefit from them. Curiously, his two main examples of governments which succeeded in this respect are the Ottoman Empire and Switzerland, neither of which is an ideal example for modern governments – the former because it declined and fell, the latter because it is consciously established as an exception. Even so, Taleb's brilliant book makes numerous telling points, one of which happily introduces this section. In seeking to reduce its risk, he says, 'Government too often transfers risk from the collective to the unfit.' It is a devastating statement, all the more so for being recognizably true.
The implication is that to ensure anti-fragility in a system it is necessary to ensure it is 'fit' or, to put it more technically, has in place the capacity to deliver.

In chapter 2, the concept of the 'guiding coalition' was introduced – the small group of seven to ten people in key positions who share a deep understanding of the goal and how it is to be achieved. In this chapter, we have met an example of this in practice: Dalton McGuinty's Education Results Team, which included not just the premier himself, the education minister and key officials from both the premier's office and the education department, but also an outsider, Michael Fullan, who was appointed adviser on education to the premier. Fullan was and is one of the world's leading thinkers and writers about reforming education systems, as well as being a patriotic Canadian and a citizen of Toronto.

As an adviser to McGuinty, he contributed powerfully in three ways. First, he brought to the guiding coalition insights from around the world; he was a kind of living, walking, talking engine of international benchmarking. Second, he was a bridge or translator between government and the teaching profession. McGuinty took power after years of conflict between government and teachers, so trust on both sides was at a low ebb; while neither side trusted the other at the outset, both trusted Fullan. Third, his knowledge and his relationships enabled him to assist McGuinty and his education ministers in thinking about the next steps. Remember Andrew Adonis's last rule (see p. 223), always have a plan for the future? Michael Fullan made sure they always did.

The Fullan role could be replicated in a variety of ways, but something like it is required because the guiding coalition, while necessary, is not enough. The guiding coalition has to reach out and build relationships in the system – and these need to become deeper and more extensive over time if irreversibility is to be ensured. The expression Fullan and I use to describe this process is 'ever-widening circles of leadership', illustrated in Figure 28. Beyond the guiding coalition are the system leaders, the top officials who manage the system, perhaps in a government department or in key agencies or regional bodies. Beyond them are the unit leaders – such as local police chiefs, hospital managers, school principals or headteachers – and beyond them, the workforce. All these serve the users of the service and ultimately the citizens. The guiding coalition needs constantly to foster leadership commitment in each of the concentric circles, while at the same time representing and communicating with the outer circle, the citizens. If the approach to delivery is working, the number of committed and capable leaders in each of the circles should be increasing all the time. You can track this.

Figure 28: Ever-widening Circles of Leadership

The lesson of the previous chapter's examination of resistance needs to be remembered – rather than winning hearts and minds and then making the change, the right approach is to win hearts and minds _by_ making the change. Because behaviour comes before belief, and competence before confidence, fostering these widening circles of leadership cannot be divorced from building the skills and capacity of the workforce; indeed, the two have to be integrated. In effect, they are the same thing. Get this right and irreversibility is well on the way. There are three levels to it.
The third is the most important and the one that governments almost always miss (though often they also miss the first two).

The first level is _awareness raising_: ensuring that the people involved in delivering a major change – civil servants, police or nurses, for example – are aware of what is coming, why it is coming and what it is likely to mean in practical terms. This has to be convincing in the sense that they have to believe it really is going to happen; otherwise, as someone once said to me, they think change is like London buses – you can miss one because another will be along in a minute. To ensure it is convincing and that it reaches all those who need to know, it is necessary to use multiple communication channels, not just the management cascade, which is notoriously inefficient on its own.

The second level is _formal training_, which may apply to specific segments or the entire workforce and provides the skills necessary to adopt whatever the change may be. Here governments make two classic errors. The first is to assume it is not necessary and that awareness raising will be enough. There may occasionally be a significant change which does not demand any new skills from the workforce, but that is rare. The second mistake is to think that, for a complex new skill, a one-off training session at the beginning will be enough. However good, it almost never is, because with any skill, until you try using it you don't know what questions to ask. The moral is that this kind of formal training needs to follow a train-do-train-do model: much better to spread three days over a six-month period than to do all three in a block at the outset. The training also needs to be of the highest quality, otherwise it will breed cynicism in an inevitably sceptical workforce.

The third level of capacity-building is to _embed it_ in the way the system operates. This is crucial to irreversibility (and anti-fragility). It is also fundamental to unleashing greatness. The best way for a nurse to follow through in practice on the learning related to a major change is to learn from his or her peers and line managers in context, rather than at a training centre. The best way for a police officer to learn how to follow up a mugging is to do so from a coach who not only knows how but has personally done it; much more convincing than a trainer who has never been on the beat.

The model organization in this respect, in my experience, is BRAC, the Bangladeshi non-governmental organization. They embed continuous learning in all their major programmes, whether it is chicken farming, women's empowerment or education. I have seen it first hand in their education programme. BRAC educate 10 per cent or more of the primary-age children in Bangladesh. Almost all of their pupils have for one reason or another dropped out of the (very poor) government system. BRAC take these children through to completion of primary school in four years instead of the five allowed in the government sector. In spite of spending a year less, the BRAC children massively outperform the children in the government sector. This is partly because BRAC have a well-worked-out model of good primary education, but above all it is because they train their teachers to be excellent. Almost all BRAC primary schools are one-room schoolhouses with thirty to forty children and just one teacher, which makes their success all the more impressive.
Furthermore, these teachers are almost always young women from the local community who have just graduated from high school. They receive no more than three or four weeks of training before they start teaching. That is enough to get them going, but what brings about the quality and the remarkable consistency from one school to another is that these teachers are visited weekly by a trained coach who watches them in action and offers specific advice. These coaches were invariably BRAC teachers previously, which means they are able to demonstrate with the children the point they want to make. Then, once a month, the teachers meet in a cluster of a dozen or so at a nearby training centre and spend a day learning specific new skills, both from the trainer and from each other. For these training sessions, they sit on the floor with a pile of materials in front of them, just as the children do in their classes. To put it in technical language, the training itself models the pedagogy required in the classroom. You can visit one school after another and find the BRAC model being delivered again and again by teachers who most of the time are working alone. Remarkable. And the children not only learn, but are joyful. More remarkable still.

Once capacity-building is embedded in the day-to-day work, once peers learn systematically from each other and have the opportunity regularly to see excellence in practice, you have the chance to unleash greatness. What BRAC and many other successful organizations do is easy to describe and looks effortless, but is in fact extremely sophisticated, which is why it remains relatively rare. Too often individual public sector professionals or other staff rarely get feedback, rarely see excellence and rarely demand the highest standards of all their peers. This is the crux of the relationship between capacity and culture. Seen from the point of view of a government seeking to deliver results, this is one of the most sophisticated challenges there is. And here there really is no quick fix. Michael Fullan, as so often, captures what is required in a simple phrase: the learning is the work, and the work is the learning.

RULE 47 INVEST DEEPLY AND CONTINUOUSLY IN SKILL AND CAPABILITY (commitment will follow)

Twenty years on, the horror of Rwanda's genocide in 1994 remains hard to comprehend; in a small country, up to a million people were slaughtered and millions more displaced. The scale of the tragedy pushed the vast majority of those who survived into poverty and left the government and its system in utter disarray. In 1995, four in five civil servants in Rwanda had not even completed secondary school. A few years later, President Kagame and his government embarked on 'Vision 2020' to try to rebuild their country. A major aspect of this was building the government's capacity.

'Capacity-building' is a key word in development-speak, the language spoken by providers of international aid. A great deal of capacity-building has been provided around the world over the past twenty years or so, some of it by 'fly-in, fly-out' consultants and some of it by sending officials in need of it on courses in other countries. The overall effect of this kind of capacity-building has been limited, and the Rwandan government knew it. 'The problem was that these fly-in, fly-out consultants just weren't leaving capacity here in the country,' commented Vivian Kayitesi of the Rwandan Development Board.
Another Rwandan, Anita Kayirangwa, was similarly critical of courses abroad: 'You send someone to another country to train, and what they're learning isn't applicable at home. For example, they may be learning to use software that is not even used in their home office.'

In collaboration with Tony Blair's Africa Governance Initiative (AGI), the Rwandan government designed a much more effective approach to capacity-building, one which took account of the lessons set out in the previous section. It was based on a five-step process, the first two steps of which (about priorities and organization) have already been covered earlier in this book. The other three are:

3. Assess capacity gaps: analyse the relevant organizations and decide precisely where and how their capacity needs to be strengthened.
4. Design a package of support: design a tailored package of support to build the required capacity.
5. Mobilize support: mobilize the international community to find and, where necessary, provide the support.

In delivering the programme, which was broadly successful, vital lessons were learned. First, capacity-building needs to be provided at three levels – the individual, the institutional and the organizational. Miss any one of these and the capacity to deliver will not be in place. Second, the 'nuts and bolts', as the AGI puts it – HR systems and financial management systems, for instance – matter as much as, if not more than, general themes such as leadership. Third, focus on solving problems. The AGI and its Rwandan counterpart ensured the capacity-building was real; it helped to solve the actual problems that were preventing delivery as well as build people's skills.

In short, capacity-building is essential and should be powerful and engaging, as in this case. The tragedy is that in many countries, because the experience has been so poor for so long, the very phrase 'capacity-building' prompts stifled yawns – sometimes not even stifled. Of course, Rwanda still has major problems. As much as 50 per cent of its government activity is donor-supported – it has been a darling of donors over the past decade – which leaves its progress a long way from irreversibility. In addition, there are risks that the government will slip into authoritarianism, as William Easterly has pointed out. Time will tell. In the meantime, the lessons from the country's Strategic Capacity Building Initiative remain pertinent to many other countries.

##### THE POLITICS OF IRREVERSIBILITY

There is a politics to irreversibility too. Just when you are pursuing the science of delivery to perfection and marching towards irreversibility, along comes an election and in comes a different government with a different agenda. In a trice, gone! Worse still, while your major change might have been a matter of vigorous debate in the campaign, it is also quite possible that it wasn't – that the government whose agenda you've been pursuing lost not on these reforms, but on something else altogether: a foreign conflict, economic competence, time for a change, a new charismatic leader of an opposition party, or all of the above. There is an underlying point here: this is what happens in democracies. Just as the democratic process throws up all kinds of weird and wonderful representatives of the people, so it also sometimes flattens a good reform or reform programme by accident.
The question to consider, therefore, while a reform agenda is being pursued, is: what can you do to make it more likely that the agenda will continue to be pursued after the next election, whatever happens? Or, to reverse the point, what can you do to minimize the political risk to your agenda?

One political scientist who has given this question careful consideration in his thoughtful and provocative book _Nation of Devils_ is Stein Ringen of Yale and Oxford. Ringen defines government clearly and narrowly as the politicians who hold executive office – the cabinet in the UK, for instance. He makes a clear distinction between government and the officials or bureaucracies which theoretically serve them. His point is that the government is made up of a very small number of people trying to make very big changes in countries which, though varying in size, are – apart from the occasional Luxembourg – very large things. He argues:

> The puzzle is this: if, say, twenty people are to rule 60 million, those twenty are, when all is said and done, helpless. They are in power, but governing involves thousands and thousands of civil servants, officials and workers in a myriad of ministries, departments, directorates and other agencies. Indeed, it involves everyone who lives in the country, both those who are engaged in the apparatus of governance and everyone else who allows themselves to be governed. A minister cannot so much as set up a meeting without getting a secretary to arrange it.

Ringen is in effect making the case for a science of delivery and reinforcing the rationale for this book. He goes on to lay out the difficulties of getting bureaucracies to do what you want, and the challenges of dealing with opposition and the various vested interests. He then comes to his profoundly important conclusion:

> To get others to take you seriously, you start by showing them your power. But once it is established that you are a governor with the power to command, your ambition to be effective bids you to pull back from the use of the power you have in your hands... if you unleash it except in the last resort, you will get the power of others against you and fail in what you want to achieve. The prudent governor should leave power latent as much as possible and turn to persuasion.

In other words, use power with restraint and try to bring people with you. Working for compliance gets you only so far; ultimately, he argues, 'Ministers must inform and educate their people, and touch their souls and stimulate their will.' This is a high bar but, if it can be achieved, it almost guarantees irreversibility; after all, it is unlikely that any political party could be elected by promising to reverse something that has touched people's souls. The question remains, though, _how_ to do this, or even get close to it, in the messy, conflicted reality of many modern societies, especially given electoral cycles which tend to be no longer than four or five years. Here are some options.

First, as Ringen points out, the way you act matters – using power with restraint, being open, levelling with people even if the message is difficult or unpopular. All these count individually and, above all, cumulatively. They combine to create an overall impression, and that impression provides the context in which people judge a programme or decide what commitment to make to it. We all know there are some politicians we respond to and some we don't, and that this is a different point from whether we agree with them or not.
In Pakistan, even his political opponents acknowledge that Shahbaz Sharif is seriously committed to the future of his province. To refer back to chapter 6, both Achilles and Odysseus have their parts to play, but for irreversibility Pericles is more important than either.

Second, the argument referred to in chapter 3 about moving (in Blair's terms) from 'flogging the system' to 'a self-sustaining, self-improving system' is clearly central. The former approach depends on government; the latter by definition depends on it much less, and is therefore closer to irreversibility.

Third, in some countries the democratic process has developed a consensual tradition. It is not that the parties don't compete over ideas or personalities; rather that they expect to work together, sometimes in coalition and sometimes simply because that is the way the political culture expects them to behave. This is often true, for example, of the way Germany goes about fundamental reform, even when it doesn't have a grand coalition of the two largest parties. The result is often slower, more deliberate change, but also change with much better prospects of lasting longer and becoming irreversible. Stein Ringen's underlying belief is that the Scandinavian countries, including his native Norway, are the paragons of good governance and move forward in consensual fashion. I agree that in consensual democracies such as Norway, the chances of irreversibility are higher, assuming there is systematic implementation. I am less convinced by the implication of Ringen's argument that if only other countries – his book is mainly about Britain and America – adopted the Norwegian approach, all would be well. To exaggerate slightly, his case is that Britain and America are badly governed, and the solution is for them to become like Norway. Somehow, I don't see that happening, because of the deep, combative political cultures in those places, though as always there is plenty to learn from Norway (and other well-governed countries). Still less is Norway a model that many developing countries and emerging democracies can simply adopt.

Fourth, therefore, the politics of irreversibility in conflictual democracies such as Britain, the US and Australia needs attention. Surprisingly, perhaps, it is achievable. Essentially, the answer is simple – you push the reform far enough and deep enough for the opposition either to adopt it enthusiastically or at the very least to decide it would be more effort to unwind it than to sustain it. One key factor in opposition thinking will of course be how popular the change is by the time the election comes round.

Here are two classic examples from the UK. Tony Blair, while in opposition, embraced the trade union reforms that Margaret Thatcher had put in place, because he believed they were popular and effective. He did so in spite of much of his own party, partly because he did not believe Labour would be electable if it proposed to overturn them, but also because he believed they were right. A decade or so later, Michael Gove, while in opposition, expressed enthusiasm for the academies policy that Andrew Adonis had led and promised to expand the number of academies if the Conservatives were elected. That policy has now become irreversible, along with the trade union legislation.

In America, welfare reform, including Workfare, which required those receiving welfare to work for it, was adopted by some states in the early 1990s and was generally strongly supported by Republicans.
The (Democrat) Clinton administration then adopted the approach as the policy of the federal government. The effect was to make 'a hand up, not a handout' a broadly shared policy across much of the United States. Similarly, standards-based school reform in the US – involving setting higher standards for students and testing to see whether those standards have been achieved – originated in a number of states, mainly in the south, but ultimately became widespread and a shared agenda of (moderate) Republicans and Democrats. Successive US secretaries of education – the wonderfully named Margaret Spellings under Republican President George W. Bush and then Arne Duncan under Democratic President Obama – have put the weight of the federal government behind this agenda. Building support at individual state level for this policy often involved a coalition of business and civil rights activists, who wanted better performance for children of all backgrounds. Some of the best state governors, such as Jim Hunt in North Carolina in the 1980s and 90s, proved adept at mobilizing this coalition and ensuring that both Republicans and Democrats across the state supported the direction of travel. In fact, Texas and North Carolina, two of the most improved states in school performance in the 1990s, pursued a decade of reform through both Republican and Democrat administrations.

One underlying point in all these approaches is that the degree of popular support is vital. If a reform is popular, it is much harder to reverse. This is particularly the case if there are a large number of direct beneficiaries. The sale of council houses to their tenants was not just popular; crucially, it also gave those who took the opportunity to purchase a very strong stake in ensuring the policy was not reversed. Similarly, one of the reasons Nigel Lawson and his colleagues in the Thatcher administration were so keen to encourage small investors as well as institutions to buy shares in the newly privatized industries (see chapter 3) was that they thus created a powerful, well-motivated constituency of support for the policy. It is one thing for a government to renationalize an industry owned by 'fat cats' in the City; quite another to attempt it if millions of ordinary Joes (or, as the advertisement of the time had it, Sids) are shareholders too. In short, irreversibility depends on legitimacy and public support, another reason why it is important both to implement well and to communicate effectively as you do so.

In the search for irreversibility, it is possible to turn deadlines that look like a threat into an opportunity. An election, say. Often what happens when an election heaves into view is that, first, the campaign takes over, so no one pays attention to delivery any more and, second, the incumbent government starts to play safe, to reduce risk, to close things down. Both responses are likely to have a negative effect on delivery. The deadline of an election, though, is also an opportunity. I always said to my staff in the Delivery Unit that our job was not to help Blair get re-elected – he had other people much more capable of doing that than we were. Our job was to deliver outcomes for the citizens and get good value for every tax pound. That meant that, as it appeared on the horizon, the election of 2005 was not an acceptable distraction.
On the contrary, I urged that we maintain or increase the momentum and make the most of having in place experienced teams of ministers and officials who, more than three years into a Parliament, knew what they were doing. We should do everything we could to continue to improve performance. After the election, what would be the state of affairs? No one knew. So make the most of the short term. In this vein, I wrote a note to Blair on 13 October 2004 setting out what I wanted him to tell the cabinet as they began to think electorally.

* In some areas we've made huge progress and I don't want to lose it now by taking our eye off the ball. In others, real drive in the next two to three months will deliver the results we've worked for over the past three years.
* I want everything done between now and December to ensure successful delivery of this Parliament's objectives. This is your top priority for the next two months. In Delivery Unit terms you should be trying to shift as many traffic lights to green as possible by Christmas.
* It is vital that planning for implementation of the five-year strategies [which had been published in July 2004] proceeds with urgency and radical ambition in the next few months...
* In summary, it is very important that as the pre-election climate heats up in the next few months, you ensure your department doesn't get distracted from the central task of delivery, both short and medium term.

Sometimes officials stand aside from politics somewhat disingenuously. They say that elections are nothing to do with them. It is true that neutral civil servants should not be seeking to help a governing party win an election, but it is not true that elections are irrelevant to their work. If the electoral cycles can be used to the advantage of citizens by getting more delivered and better value for taxpayers' money, go for it. Similarly, it is irresponsible to believe, as so many civil servants do, that in the run-up to an election everything should be allowed to slow down. Political decision-making may be temporarily suspended, but public money is still being spent, services are still being used – and patients, students and passengers are still hoping, perhaps expecting, to be served well. There should therefore be no let-up in the search for irreversibility.

RULE 48 THINK THROUGH THE POLITICS OF IRREVERSIBILITY (anticipate the future)

##### PLAN FOR THE FUTURE

The final element of securing irreversibility is always to plan further ahead. 'Plan for the future,' said Andrew Adonis; 'You're never done,' said Dalton McGuinty. For very long periods of human history, there was a steady state punctuated by moments of drama or change. In our era, that is reversed – there is permanent change at pace, punctuated by moments of steadiness. In other words, if you don't set out to change the future, it'll change you. So planning ahead is a necessary ingredient of seeing the current agenda through to irreversibility.

I had the opportunity to meet Shahbaz Sharif in June 2013, shortly after he had been re-elected as chief minister of Punjab. He was exhausted by the successive demands of an election campaign and of reconstituting a government, which included assisting his brother, who had been swept into office as prime minister of Pakistan. There were maybe seven or eight of us in the meeting: my team and me, along with the chief minister and a couple of his most senior officials.
On the plane journey, I had prepared eight ambitious goals that the chief minister might want to set for the end of his new five-year term. He liked them but, flushed with victory, wanted to be even more ambitious and more urgent. The officials were raising sceptical eyebrows in response, but when one of them intervened to say that what the CM was now proposing was impossible, the CM batted the objection away. (It reminded me of a comment an official made to me in the heady months after Blair's election victory in 1997: 'I've understood,' he said. 'You want it all and you want it now.' That was about how it felt.) But actually the goals didn't satisfy Shahbaz Sharif even as he amended them, until he hit on a way of encapsulating them in a sentence: 'I want to be like Malaysia.' Now that, by 2018, is seriously ambitious, but it is also an aspiration to capture the imagination of the citizens of Punjab. Malaysia may not be perfect, but it is a well-functioning Islamic democracy that has achieved middle-income status and is heading on up. Its education system and its public services generally are well ahead of Punjab's, but not so far ahead as to be off the charts.

With the aspiration clear, we have set out with the new team of officials to plan the next five years. We've set hard-edged and ambitious goals for 2018 and the system is gearing up to make progress towards them. And increasingly the drive towards the aspiration is coming from the Punjab officials rather than from my team. A plan for the future and growing capacity to deliver it – steps on the way to irreversibility.

##### DRIFT

On 12 November 1936, Winston Churchill rose in the House of Commons to propose an amendment which warned the (Conservative) government of the dangers posed to Europe by the Nazi rearmament of Germany. The government was genuinely worried about this important strategic development, but it was also conscious that the First World War had finished only eighteen years earlier; as a result there was little appetite in the country for an arms race, still less for conflict. The problem was that the government didn't have a plan or a direction. It was a very British government led by a very British prime minister, Stanley Baldwin, and they were pursuing the very British approach of muddling through while hoping something would turn up. Churchill was of the same party but out of office and out of favour. These were his wilderness years and he was obsessed with German rearmament. He had a clear plan for the future: in response Britain should rearm, not because he wanted war, but because he knew the famous maxim, 'If you want peace, prepare for war.' After arguing from the facts, Churchill came to a peroration which has justly become famous. The First Lord of the Admiralty, he said, had promised that the government was always reviewing the position.

> Everything, he assured us, is entirely fluid. I am sure that that is true. Anyone can see what the position is. The government simply cannot make up their minds, or they cannot get the Prime Minister to make up his mind. So they go on in a strange paradox, decided only to be undecided, resolved to be irresolute, adamant for drift, solid for fluidity, all powerful to be impotent. So we go on preparing more months and years – precious, perhaps vital, to the greatness of Britain – for the locusts to eat.
Baldwin, in his late sixties at this time, had been round the block more than once, and probably shrugged it off as one more rhetorical flourish from a politician who had long since had his day. But events bore out Churchill's fears rather than Baldwin's hopes, and Churchill's finest hour lay ahead. The speech stands as an incomparable critique of drift. For those committed to leading any great enterprise of public reform or transformation, the words should be committed to memory.

Drift is the enemy of irreversibility, which depends on there being a vision of the future and momentum towards it. In government such visions always have a hard edge. They cost money, often rather a lot of it. And the raising, oversight and allocation of government money are central to realizing a vision and to achieving irreversible change, and therefore to the science of delivery – all the more so in an era of austerity. It is to this subject that the final chapter is devoted.

RULE 49 DRIFT IS THE ENEMY OF DELIVERY (momentum is its friend)

## 8. (Other People's) Money

'Let Pharaoh do this,' asserted Joseph, having interpreted Pharaoh's dream.

> Let Pharaoh look out a man discreet and wise, and set him over the land of Egypt... and let him appoint officers over the land and take up the fifth part of the land of Egypt in the seven plenteous years. And let them gather all the food of those good years that come, and lay up corn, under the hand of Pharaoh, and let them keep food in the cities. And that food shall be for store to the land against the seven years of famine, which shall be in the land of Egypt; that the land perish not through the famine.

Pharaoh took this advice, and put Joseph, aged just thirty, in charge. He rewarded him with a precious ring and fine linen, and gave him use of 'the second chariot which he had'. Joseph 'went throughout all the land of Egypt. And in the seven plenteous years the earth brought forth by handfuls... And Joseph gathered corn as the sand of the sea, very much, until he left numbering, for it was without number.' Then, as predicted, the famine years came; 'and the dearth was in all lands; but in all the land of Egypt there was bread... And all the countries came into Egypt to Joseph for to buy corn...'

There is great wisdom in the ancient texts. (The story of Yusuf appears in the Quran too.) Pharaoh's dream as interpreted by Joseph – what we would now call a Treasury Forecast – suggested that, contrary to contemporary assertions, boom and bust had not ended. After seven years, the boom would be followed by bust. Joseph advises strongly, therefore, that instead of spending as if there were no tomorrow, you should save 20 per cent ('the fifth part') of each year's revenue so that when the bust comes you have the resilience to get through it. In other words, draw a trajectory for gathered corn which will result in a store of at least 140 per cent of the baseline – a good year's harvest. Then strengthen the delivery chain: put 'a man discreet and wise' in charge of delivery and, for the next link in the chain, 'let him appoint officers over the land'.

Pharaoh was no fool. He saw that the wisest advice came from Joseph, so made him responsible. Joseph didn't hang around in the palace wearing his new finery; he got out there, just like Theodore Roosevelt. He built a data system and started counting the grain (or had someone like Tony O'Connor count it for him). He stopped counting only when he had more grain stored than he knew what to do with, because it was 'without number'.
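The trajectory arithmetic is worth making explicit. Below is a minimal sketch of it; the constant baseline harvest and the flat one-fifth savings rate are illustrative assumptions, not biblical data.

```python
BASELINE = 100.0      # one good year's harvest, in arbitrary units
SAVINGS_RATE = 0.2    # 'the fifth part'

store = 0.0
for year in range(1, 8):  # the seven plenteous years
    store += SAVINGS_RATE * BASELINE
    print(f"Year {year}: store = {store:.0f} "
          f"({store / BASELINE:.0%} of a good year's harvest)")

# After year 7 the store stands at 140, i.e. the 'at least 140 per cent
# of the baseline' in the text: 7 years x 20 per cent = 140 per cent.
```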
In delivery terms, he was well ahead of trajectory. When the years of dearth came, not only could Joseph feed all Egyptians; people from other countries beat a path to his door too, including eventually, as the story ends, his own father. Even then, Joseph didn't lose his shrewdness – the Bible tells us that he 'sold' the grain to the Egyptians. He didn't want them to become dependent on a welfare cheque. He also sold the corn to the other countries, no doubt at a high price, 'because the famine was so sore in all the lands', thus building up Egypt's foreign exchange reserves. The wisdom of Pharaoh and Joseph therefore ensured Egypt's resilience through a crisis in the global economy. In fact, Nassim Nicholas Taleb might argue that Egypt didn't just survive the recession, it thrived, and thus demonstrated anti-fragility. (What would Egyptians give now for such stewardship and governance?)

The moral is clear: when the tax revenue rolls in in the boom years, don't spend it all at once. And as the downturn looms, be prepared. Why, thousands of years after the story of Joseph was written for us all to learn from, do governments find it so hard to apply these lessons? It might be argued that the electoral cycle confounds long-term thinking; Joseph, after all, planned fourteen years ahead. I'm not sure this is convincing, though, especially after the economic trauma of the past decade. My guess is that citizens would be impressed by a leader who argued that, as the recovery strengthens, not only are we going to pay off the debt, we are also going to invest in our own future resilience. This might not be the easiest message for the campaign trail, but surely it should not be beyond today's political leaders to find the words to express the fundamental concept of stewardship.

Neither Pharaoh nor Joseph refers to 'government money'. The more you think about it, the more problematic the phrase is. It implies a bottomless pit of resource that can be allocated on a whim. The truth is that governments do not have money of their own; all they have is your money and mine, taken from us to meet the demands of the public (also you and me). Strictly speaking, governments have two sources of money: taxation, which they take from the present generation; or borrowing, which is in effect a tax liability on a future generation. Either way, it is a good discipline when in government to remember that the money you spend is other people's money.*

The content of the previous chapters is well established, even if only relatively few governments have applied it systematically. The content of this chapter is emerging, not established – an aspiration rather than a demonstrable reality. Surprising really, given how long we've known the story of Joseph.

##### BUDGETS

Mitch Daniels perhaps came closest to proving the case made here. When he took office as governor of Indiana in January 2005, the state's economy was suffering and the budget deficit he inherited was over $600 million. When he left office eight years later, he was able to look back on seven consecutive years of budget surplus, all the more remarkable for the fact that his second term was overshadowed by the worst recession in post-war US history. The finances were not the only problem. The state had seventy-four departments, agencies, boards and commissions, working largely in silos, often with overlapping responsibilities and no clarity about what specifically they were supposed to be delivering.
The Indiana Department of Transportation, for instance, was responsible for some toll roads where the toll was 15¢ while the cost of collecting each toll was 34¢! At the Bureau of Motor Vehicles, where people stood in line for far too long, some employees were still stuffing public dollars into desk drawers. The Pew Trust, which rates the efficiency of state authorities, gave Indiana a C grade, below the US average.

Daniels had campaigned on a platform of getting public expenditure and the deficit under control and took the view that he had to move fast. Drawing on contacts he had made in the course of his years in business and in federal government, during the transition period he had built a team of like-minded people – his guiding coalition, in the terms of this book. The day after his inauguration, he set to work. In a letter to all public employees, after affirming his commitment to public service, he quickly made his point:

> Please understand: things are going to be different. From now on, Indiana state government will be about results. We will ask of every department: What are our goals here? What will we measure to determine whether we are achieving them or not and whether we are getting steadily better at it?

This was a clear statement that the science of delivery would be applied. Immediately afterwards, Daniels reformed the muddled process for managing budgets. Building on his experience as head of the Office of Management and Budget under George W. Bush, he established a new central budget office, Indiana's own OMB, and gave its chief cabinet rank. 'Having a very empowered budget office was very important,' he later told Michael Scharff of Princeton. Within this OMB he set up the Government Efficiency and Financial Planning Unit, next of kin to the delivery units described in this book. Daniels's unit combined a delivery agenda with a fiscal agenda. Cris Johnston, a partner from an accountancy and consulting firm, was appointed to lead it. The unit established goals for each department and systems for monitoring progress, and then used the results to decide departmental or agency budget allocations.

The new unit had collaborative target-setting conversations with the departments and agencies. Cleverly, they linked budget allocation not so much to demonstrable success, especially in the early days, as to a willingness to establish data systems and start measuring progress. As Adam Horst put it, an important consideration in allocating funding was 'how well the department embraced the idea of measuring agency performance... Over time, they got the message that if they couldn't show they were efficient and effective and measuring, we would be less inclined to fund them.' Meanwhile, Governor Daniels used routine monthly cabinet meetings to check on progress towards the major goals. His own office collaborated with the new unit to submit summaries of progress to the government every two weeks. The governor had the personal discipline – not evident in all leading politicians – to commit to and stick to the routines he established.

Moreover, in contrast to the Blair administration, where the mantra was 'Investment for Reform', Daniels pursued reform alongside major cuts in expenditure. In this sense he anticipated the reckoning to come when the financial crisis hit in 2008. He ended collective bargaining on his first day in office. He outsourced swathes of services and there were layoffs too. However, for those who stayed, there were benefits.
Daniels introduced significant rewards for performance, including, towards the end of his time in office, giving each state employee an extra $1,000 as a reward for their part in turning a major deficit into a substantial surplus. The key to progress with state employees was simple – the introduction of basic management. Everyone was more accountable; poor performers were removed; successes were rewarded.

The drive for outcomes and efficiency worked. By 2012, Pew was describing Indiana as 'the most improved state in the country'. Results were improved too: the average wait time for a driver's licence, for example, was under nine minutes, down from over forty. The governor's approval rating was 63 per cent in April 2012, unusually high in a year when politicians around the world, after years of economic gloom, were perceived poorly. Interestingly too, in spite of the cuts and the end of collective bargaining (or perhaps because of them), state employees were better paid and more strongly motivated than they had been.

Daniels had not avoided controversy, far from it. In addition to his hard line on state employees, he had also raised taxes in his early days, to the chagrin of some of his political allies. And by the end of his eight years, in spite of the progress, he wasn't satisfied. 'We believed strongly that culture was an impediment to success and that it required changing if Indiana were ever to move ahead at somewhere near the pace of change in the world,' said one of his key advisers. Daniels agreed that that was a multi-year task, not yet irreversible by the time he had to stand down, as his term limit required.

What Mitch Daniels showed is that you can improve performance significantly while controlling or even reducing costs. This is surely the task for governments around the world in the twenty-first century. People never enjoy paying taxes, but they are generally willing to do so if they see that they and their fellow citizens get a good return on their investment. Most people are sufficiently broad-minded, too, to be willing to pay for some services or functions that they themselves don't necessarily need: care for the elderly, education for children and welfare (at least as a temporary measure for people between jobs), for instance. However, unlike in the mid-twentieth century – when the welfare state was new and, after the predations of the 1930s and the devastation of war, people were willing to pay up and trust that the government would do its best – in the early twenty-first century people want to see outcomes, to see the evidence that their money is used well.

As people's work and capital have become more mobile, competition between countries in effect limits what can be collected in tax. Yes, some countries, in Scandinavia for example, continue a tradition of high tax for quality services, but such settlements are unlikely to arise in future, and in some high-tax countries such as France, the settlement is coming under sustained pressure from the globalized economy. Corporate taxation, a major source of revenue for governments in the twentieth century, is also proving ever more difficult to collect as companies themselves become globalized. Some corporations may be blatantly manipulating the various tax systems to avoid paying up, but the problem goes deeper than that. In a globalized economy, it is not clear whether a company should be taxed where its owners reside, where its goods or services are produced or where its goods or services are consumed.
Existing tax systems are still based on a twentieth-century model which assumes a national base and a physical product; neither assumption holds any longer. This downward pressure on the ability to collect tax coincides with a need to pay off debt, which many governments accumulated to frightening levels during the global recession. Some of this debt arose from inefficiency, but much of it resulted from acting necessarily to stave off a meltdown in the global economy and the misery that would have accompanied it. Now add into the mix a growing expectation among citizens of quality services from government. The take-it-or-leave-it public service that was accepted as good enough by grateful citizens is a thing of the past. They want quality and evidence of quality. And, while taxpayers are willing to take a broad view, their willingness to keep paying taxes will erode if the services they _do_ use – schools or social services, say – are of poor quality.

So the context for the science of delivery in the next decade is one in which government faces a triple bind:

1. Downward pressure on the ability to raise taxes.
2. A debt burden which needs to be reduced.
3. Growing demand for improved quality of services.

The challenge is therefore, in a word, productivity – the productivity of public services. Where Mitch Daniels has led, others will have to follow. Certainly we in the Prime Minister's Delivery Unit began to think hard about these issues back in 2004–5. Maybe, we were thinking, the Blair mantra of 'Investment for Reform' had passed its sell-by date. Maybe we'd get more reform for less cost by tightening our grip on the public finances. If at that point we weren't ready to cut public expenditure, at least we could control the rate of increase.

RULE 50 'MORE FOR LESS' TRUMPS 'INVESTMENT FOR REFORM' (and may deliver more)

That was then. Before the financial crisis. Before the ballooning of public debt. Before the era of austerity. What was then a speculative case now seems to me the very essence of the challenge of how best to spend other people's money. 'More for Less' trumps 'Investment for Reform'.

To reinforce this case: back when I was in No. 10, it was possible to consider the value for money of health service expenditure as the resources available to it increased at 8 or 9 per cent per annum. While significant improvements in healthcare were evident, it was clear too that they were not being brought about as efficiently as they might have been. Choice and competition certainly helped, but some of the new national contracts – such as the deal done with general practitioners – were excessively generous: too much investment for too little reform. My team and I wrote a note to the prime minister setting this out, but while the Blair government innovated in the way it drove performance, its innovations in controlling cost did not keep up. It's not that there weren't innovations; it's just that in the end they didn't go far enough. In 1998, in his first Comprehensive Spending Review, Gordon Brown introduced the notion of Public Service Agreements (PSAs), which the Treasury formally agreed with government departments. These set targets for the outcomes that the departments would deliver in return for the funding the Treasury allocated. The basic idea was both radical and excellent, but at first there were far too many targets and some of them were poorly designed and unmeasurable. The idea was refined in subsequent spending reviews.
A second innovation was that spending would be allocated for a three-year period, not just one, with the spending review process on a two-year cycle. This meant that each spending review picked up and refined the last year of the previous spending review while allocating funding for a further two. Furthermore, departments were allowed to shift money between years, as were individual service units such as hospitals and schools. This was a major advance, often overlooked.

These changes set the context in which the original Delivery Unit was established in 2001. In partnership with the Treasury, it took the discipline of managing public expenditure further by reducing the number and improving the quality of targets and, most of all, by putting in place a process of driving delivery rather than just allocating the money and hoping for the best. When, in 2002–3, I met the IMF team that visited annually to review the British economy, I asked them for advice on which countries to learn from about how to manage delivery and public expenditure more effectively. They were very clear in their response: 'This is the frontier.'

We had important elements, it is true: PSAs, three-year budgets and a systematic approach to delivery. In addition, citizens could check progress against the targets on the Treasury website (though it wasn't always up to date). The two gaping holes were, first, that we didn't vigorously control cost, the Treasury's public expenditure officials having become policy wonks rather than bean-counters; and second that, while we managed for outcomes, we did not manage public sector productivity. To put this in different terms, we were not able to compare, even within departments, never mind across departmental boundaries, the cost per outcome. That seemed to us an impossible task, but during those same years two leading practitioners of public service reform in the US faced up to it, undaunted.

While the British government was bringing about these improvements in the management of public expenditure, on the other side of the Atlantic two Americans were developing an altogether more radical approach to the same issues, and working with states such as Washington to put it into practice. In the aftermath of the dotcom crash of 2001 – which barely registered an effect on UK public expenditure – many US cities and states found themselves facing a fiscal crisis. Against this backdrop, David Osborne, who in the previous decade had been one of Vice-President Al Gore's favourite thinkers on the subject of reinventing government, and Peter Hutchinson wrote _The Price of Government: Getting the Results We Need in an Age of Permanent Fiscal Crisis_. It was relevant then, but now, in the age of austerity, its time has surely come. They set out five key challenges that any government should address in shaping a budget:

1. Get a grip on the problem: look at the problem in what they describe as a 'clear-headed' way. Is it about income, borrowing or spending, or a combination of all three?

2. Set the price of government: how much in total would the government like to spend and how much are the citizens willing to pay?

3. Set the priorities of government: among the many possible priorities, where should the government focus its energy and investment?

4. Allocate available resources across the priorities: at this point, hard choices have to be made because the overall total (what we in the Blair administration called 'The Envelope') has already been set.
The priorities guide the choices, but that doesn't make them easy. It is always difficult in practice to redirect resources from lower to higher priorities, but if you are not willing to do that you are not really prioritizing.

5. Develop a purchasing plan for each result: this is the most radical part of the model. Once the allocations are decided, instead of simply passing the funding on to the relevant existing services, Osborne and Hutchinson propose the development of a purchasing plan by a Results Team for each priority area. They quote the then governor of Washington State, Gary Locke, putting the point succinctly: 'We asked them to forget the loyalties they have to the agencies they represent. Be like citizens. Tell us where to put the money, so we get the best results. Tell us what programs can be consolidated. Tell us what programs don't make a large enough difference in getting the results we want.'

The Results Teams then produce what I would call a Delivery Plan for each major outcome. Because the assumptions of the status quo have been directly challenged, the result is an outbreak of creativity. Note here Governor Locke's suggestion to 'Be like citizens'. This is the way to think about other people's money. With this perspective in place, the Results Teams rigorously examined the existing agencies and their budgets – what would they buy or continue to buy? What would they like to buy if they had more money? What would they eliminate first if they had less? And what would they eliminate anyway?

Osborne and Hutchinson propose a number of means, within this framework, of getting better value for money, including 'divesting to invest', 'consolidation', 'rewarding performance' and streamlining administrative systems so that they enhance accountability and reduce bureaucracy. What they suggest is not unlike what Mitch Daniels actually did so successfully.

When I reflect on our own approach in the Blair administration, I see parallels too. An analysis of 'the problem' was undertaken and an overall 'envelope' for public expenditure was fixed by Blair and Brown, but the setting of priorities fell short of the Osborne and Hutchinson model. Blair and Brown had different and competing priorities, and the cabinet as a whole did not debate the issue thoroughly. Moreover, unlike in the final stages of the Osborne and Hutchinson model, departments tended to be less radical in the preparation of 'purchasing plans' and in rethinking the budgets of existing agencies. I think this is where the problem of having too much money, or at least too fast a rate of increase, meant we collectively lacked rigour and creativity. The situation did not force us to rethink the status quo; too often we were simply able to add to it. The Cameron government has been much more vigorous in taking an axe to existing budgets, but ironically much less clear about defining priorities and outcomes. Maybe one day a British government will put the whole thing together...

Here is an approach to budgeting that is conceptually clear as well as radical and practical. The question then arises: why is it so rarely used? The answer is that it is very difficult to do well. Challenging existing budgets means challenging well-entrenched interests. Public sector workforces are usually well organized and usually, too, they are advocates of the status quo or at best incremental change.
Even within government, among the politicians as well as the bureaucrats, establishing shared priorities, questioning current practices and taking courageous decisions can cause divisions and conflict. In these circumstances, rather than apply the bold, clean-slate approach Osborne and Hutchinson propose, it is easier to muddle through a year at a time. This may not solve the fundamental problem, but it postpones the day of reckoning for another year. In any case, as Osborne and Hutchinson point out, there are a number of 'deadly deceptions' – use of accounting tricks, borrowing, selling off assets or delaying maintenance, among others – which can help massage your budget into something like respectability, at least in the short term, leaving you to breathe a sigh of relief and, like the Charles Dickens character Micawber, hope that in the meantime something will turn up. It didn't work for him (until he emigrated and started over) and, in an age of austerity, it won't work for governments either.

RULE 51 PERIODICALLY ADOPT A BOLD, CLEAN-SLATE APPROACH TO BUDGETING (to liberate resources)

At the moment the incentives at every level in the system run counter to the practices Osborne and Hutchinson advocate and we should want to see. The machismo of ministers is measured by how big their budget is and how much more they can add to it in a spending round. There are no prizes for offering up a failing programme and its budget, still less an adequate one that isn't a top priority. The whole negotiation between a spending minister and his or her department all too often becomes a charade – 'smoke and mirrors' as they used to say in Whitehall. Or, in another unappealing phrase, when under pressure to deliver evidence of being tough on costs, officials would recommend offering up a 'bleeding stump' – a programme that you pretended to be willing to cut, knowing that the finance minister or the prime minister could not countenance giving it up.

If the incentives for ministers are misaligned, they are no better for officials. Of course, officials understandably want to impress their ministers, so tend to share their incentives, but there is more to it than this. Generally speaking, civil service pay scales and management practices reward civil servants who manage large budgets or large numbers of people or both – the bigger the organization you run, the more respected you are. While on the face of it this seems obvious, in truth it runs directly counter to what is required for an era of austerity when, surely, the efforts of civil servants who lead small, effective teams, cut programmes, control costs and reduce numbers of people should be highlighted and rewarded.

Then there are the public servants – police officers, teachers, nurses and so on – who, as we've seen, also constantly demand increased pay, better conditions, reduced workload or more support. If these are denied, we are told in a frequent refrain, 'morale' will be worse than ever. In April 2014, the BBC News website included a quote typical of the genre. Christine Blower, general secretary of one of England's teachers' unions, was quoted justifying a threat of strike action by saying that Education Secretary Michael Gove should engage with the union on 'education policies, on workload and accountability, teacher pay including performance related pay and his unfair pension changes... If the strike happens it will be Michael Gove's fault... Teacher morale is at a dangerously low ebb.'
In short, the union didn't like the policies of the elected government and would strike to protest against them. The education department's response was to point out that 'the vast majority of our teachers and school leaders are hard-working dedicated professionals... teaching has never been more attractive, more popular or more rewarding. A record number of top graduates are now applying to become teachers...' In other words, the union's leadership were speaking for an unrepresentative minority. This exchange might have been played out in dozens of languages across dozens of countries – and the point to note is that it is entirely about inputs.

The claims of spending ministers to have serious budgets, the relevance of senior officials who can manage large numbers of people and budgets in the billions, and the morale, latent skill and commitment of public sector workforces are undeniably important. Indeed, they are essential ingredients of delivering for citizens. So how do we square the circle of ensuring these features are in place while at the same time not incentivizing their steady incremental growth, that constant accretion that absorbs so much public money?

The answer is that as long as the debate is focused purely on the inputs – the amount of money spent – it will always be flawed. A major theme of the science of delivery is a shift to outcomes – the results that government and public services actually deliver. This is a major advance but on its own is not sufficient either. The original PMDU was given the task of ensuring some ambitious outcomes were delivered and, thanks to the efforts of government departments and millions of public sector professionals and workers, progress was indeed achieved. However, this progress was made during a period of rapid growth in public expenditure – arguably too rapid – and no attempt was made to ensure the results were delivered while restraining the costs. This is where the debate now needs to go. The way to square the circle is to focus not just on the inputs nor just on the outcomes, but on both simultaneously. It is time for the public services to come to terms with public sector productivity.

##### PUBLIC SECTOR PRODUCTIVITY

The vexed question of the productivity of public services has a long academic history. It involves extensive examination of economic theory and some major technical complexity. In the market sector, inputs and outputs can be reasonably easily computed because the outputs have a price and the inputs have a cost. This is not to say that the measurement of productivity in the market sector is without its complexity, but for practical purposes most of the time it is straightforward. In the public sector – where the service is free at the point of use or where the price is fixed by government rather than the invisible hand of the market – the measurement of productivity becomes much harder.

Adding to this complexity is the question of what value to put on quality. How well a teacher teaches a lesson has a major impact on student outcomes, but it is hard to put a number to it. Or again, the thoughtfulness and responsiveness of a care worker who visits an elderly person in their home makes a huge difference to the person visited, but is hard to measure. To take the complexity one step further, the quality of surgery may make a difference to whether the patient needs further surgery in future.
If the first operation is effective, it may reduce the need for future operations – that is surely a gain in productivity, but could easily be accounted as a loss. Preventative measures – such as promoting the use of suncream to reduce skin cancer in future – pose similar questions for those interested in public sector productivity. Then there are those vital public services whose sole aim is that nothing happens: the measure of success of a counter-terrorism organization is that nothing happens. Similarly, in relation to protection against floods, the ultimate measure of success for the British Environment Agency would be that no one is flooded. But there is no bottomless pit of other people's money. Resources are constrained. What is the right amount to spend to protect the public against floods? And how would you account for the outcome?

Richard Murray of the Swedish Agency for Public Management suggests:

> In part this can be explained by general difficulties in measuring the output of services... But in part it must be explained by a completely different perspective [from that of economists] on public services... Resource for the production of public services has not been regarded as input into a production process but _as an end in itself_ [my italics]...

In short, for practical purposes, the input is the output. Tony Atkinson, the Oxford academic asked by the UK Treasury to report on public sector productivity, put it even more succinctly. Having explored the challenges rather more thoroughly than I have here, he suggests that 'In the face of these difficulties, some might wish to return to the earlier convention that output = input.' To his credit, he rejects this counsel of despair, but his simple little equation explains eloquently why around the world we have the problems described above, with spending ministers and officials and with public sector workforces. Many governments faced with these difficulties still leave productivity in the 'too difficult' box and continue to operate on the output = input basis in spite of its obvious inadequacy. This suits public sector workforces: their pay, conditions and pensions form the bulk of the 'input', and holding them properly to account for results would require moving away from output = input.

Atkinson, urged on by the UK Treasury (and the European Commission), argued for something better, namely measuring what is achieved by spending on public services, because 'we cannot simply assume that outputs equal inputs in such a major part of the economy'. Atkinson goes on to argue that the UK Office for National Statistics should seek to take account of quality in the public services even though 'Quality has many dimensions and some will prove elusive...' He adds that 'If quality adjustments cannot be comprehensive, they should be representative of the range of dimensions.' Furthermore, he sets out the challenge of taking this view: no single number can capture the full complexity of the desired outcomes.

Those words, 'no single number', are the key here. Instinctively we all know – without mastering either economics or statistics – that you can't reduce something complex to a single number, which is why, in _The Hitchhiker's Guide to the Galaxy_, we find ludicrous the notion that the meaning of life is 42. Relying on output = input is manifestly absurd and, especially in an era of austerity, unacceptable; we need to move on and seek to measure the output of government services, including quality as well as quantity.
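To see why the output = input convention is a dead end, it helps to spell out the arithmetic it implies. This is a minimal worked example of my own, not drawn from Atkinson's review:

```latex
% Under the output = input convention, measured output O_t is defined
% to equal measured input I_t, so measured productivity P_t in any
% period t is
P_t = \frac{O_t}{I_t} = \frac{I_t}{I_t} = 1
% and measured productivity growth between any two periods is always
\frac{P_{t+1}}{P_t} - 1 = \frac{1}{1} - 1 = 0
```

Spend 10 per cent more on a service and measured output rises by exactly 10 per cent, whatever happens to results: improvement and waste become statistically indistinguishable.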
This is difficult to do, and we will need more than one number to make sense of something so complex.

There is one further problem. The Atkinson Review was commissioned by the Office for National Statistics, whose job is to publish a mass of data on extremely important aspects of the country. By definition, once this data appears, it is already out of date, sometimes significantly so. Of course, it is important to analyse productivity retrospectively to see the patterns and trends. The accumulation of this kind of analysis will inform public policy over time. What it won't do is provide a basis from which those responsible for a specific programme day-to-day can drive improved productivity – i.e. improved outcomes for the same or less funding – nor will it help governments have an informed conversation about which of a number of different ways to invest public money is likely to be most productive. To do this, we need an approach which is predictive (it will tell us what is likely to happen), pragmatic (we can cut through the complexities of theoretical debate and have something do-able) and, above all, one that provides a common language for those within and outside government who want to have an intelligent conversation about productivity and what to do in a practical way to enhance it.

For that, we must turn to Harvard professor Mark H. Moore, whose classic _Creating Public Value_ (published in 1995) has stood the test of time. Moore's book sees the world from the perspective of public sector managers – his examples range from refuse collection to management of parks and drug rehabilitation. From our point of view, what makes his account so helpful is that everyone we meet in the book is tasked to deliver a service of some kind to citizens. This gives his work a powerful dose of day-to-day practice to go with the strong theoretical perspective. As he himself says, his aim is to help 'practitioners actually facing the problems'. The proper definition of management success, he argues, is 'to increase the public value produced by public sector organisations in both the short and the long run'.

Moore goes on to set out what he calls 'a managerial view of public value'. First, he argues that value is in part a matter of perceptions – if citizens think a service is good, then that is positive. Second, he points out that the dialogue between citizens and government enables society to identify what it values. This will be different in different places and at different times. Third, therefore, public managers need to strengthen the institutions they lead: delivering in the short term at the expense of the long-term capacity of the institution or service is not necessarily progress. Fourth, the citizens, not just the users of a service, need ultimately to be convinced, because the money being spent is their money. Fifth, this means that citizens need a convincing account of how public service leaders intend to allocate their resources. And finally, public service leaders need to be adaptable and able to respond to changing circumstances.

Moore's definition of public value and his advice on its implications for public managers take us a step on the way from the Atkinson Review towards something that is both predictive and pragmatic – and something that will not just have one number summarizing this complex area – but it doesn't quite get us to where we need to be. As Moore himself explains, 'even though [the] conceptual definition of success for public managers is clear, how to measure it is not... Moreover...
much of the effectiveness of managerial interventions may depend on the small details of execution as well as on conception.' The question is whether we can take Moore's analysis and push it that critical stage further. In essence Moore's argument is that public value is created when the following are in place:

* Outcomes are delivered.
* The institution or service concerned is well managed, resilient and capable of delivering in the long run as well as in the short run. We could summarize this by saying public managers are responsible for 'pragmatic stewardship' as well as outcomes.
* The beneficiaries of the service and the citizens/taxpayers perceive it to be effective and run broadly in accordance with society's values.
* The resources allocated to it are being used efficiently in pursuit of the authorized goals.

If these are the elements of public value, it should be possible to create a review framework which would help public managers and officials or indeed government ministers to think through systematically what it would take to drive public value. In a report for the Massachusetts Business Alliance for Education, published in March 2014, my colleagues and I attempted something along these lines. Making the case that this chapter has made throughout, we argued that if Massachusetts aspires to have the best education system in the world then, among other things, it needs to become more effective at managing productivity. There is support for this view from no less a figure than US Secretary of Education Arne Duncan, who put it bluntly:

> It's time to stop treating the problem of educational productivity as a grinding, eat-your-broccoli exercise. It's time to start treating it as an opportunity for innovation and accelerating progress.

Exactly. And Duncan's point applies to public services in general, not just to education. We were wary, however, of urging a focus on productivity without spelling out how it might become operational so, drawing on Mark Moore and the kind of practical thinking elsewhere in this book, we set out a framework for productivity reviews. Table 18 summarizes it. It looks neat, but how would you use it?

**Table 18: A Framework for Productivity Reviews**

The first part examines the results or outcomes the system aspires to, the degree of ambition, the progress made towards the goals and, where it is too early to tell, what can be learned from lead indicators. At its simplest, a measure of productivity would take these outcome measures and 'divide' them by the inputs, but the productivity of public systems is not that simple.

The second part of the framework measures the views of citizens, students and parents. A public system also needs to generate public confidence, partly because that will help ensure its longevity, but also because public confidence is itself a desirable outcome. In education, if students are motivated and parents actively supportive, then that will affect the academic outcomes positively. In health, if people work out and eat well, again outcomes will be better. Public systems therefore need survey data to enable comparisons of citizen and user attitudes. Even without productivity reviews, such data would be powerful and valuable.
The third part of the framework is designed to ensure that those who have stewardship of the system at each level think not just about the present and the delivery of results this year and next, but also consider the long-term well-being of the system – its resilience and capacity to anticipate and manage change over time. This resilience comes from having in place effective processes, such as budgeting or contracting, from having staff with the right attitudes and capacity, and from having great relationships within the system. This part of the framework would require valid and reliable surveys of staff attitudes and motivation, which would in any case have intrinsic merit.

The fourth part of the framework examines inputs. Are these adequate? Are they used efficiently? Can the citizen follow the money through the system in a transparent way? To make the review feasible, key financial data would have to be made available and be comparable across local units, such as districts, not just at an aggregate level, but also on specifics such as the costs of pensions and benefits. Much of this data is available in government systems around the world, but it is not always used systematically, and is often presented poorly, making it hard to use.

It should be immediately clear that the framework picks up the four points above that we drew out from Mark Moore's analysis, including separating the beneficiaries (in a school system, the students and parents) from what Moore calls the owners, the public. A framework of this kind is potentially valuable, but how could it be applied in practice? I would suggest a pragmatic approach such as that described for delivery capacity reviews in chapter 2. The outcome of a review would be traffic-light judgements on each of the four parts of the framework.

How useful would this be? Critics would say that it would fall well short of the mathematical precision beloved of economists and scientists, and from this perspective therefore fail to provide a satisfying answer to the productivity question. To which my answer is: that is precisely the point. Remember Atkinson's warning that one number could never capture something so complex? Remember Mark Moore's point that, in spite of the insight his book provides, it does not offer a means of measuring public value? It is worth remembering too the warning of Louise Horner and Will Hutton that 'Public value... entails responsiveness to refined (that is considered, informed) public preferences, which means that the public value will change over time.' And, finally, don't forget that out there are some seriously good economists working theoretically and experimentally to try to crack this public sector equivalent of Fermat's Last Theorem. One day, they might tell us the answer or answers, but supposing it takes a while (as indeed it did with Fermat's theorem), what then?

My reply to the critics of the approach outlined here is _it's all we've got_. And at the very least, unless we start attempting something practical, we will not make any progress. Evidently doing just one productivity review on the basis described above, while it would no doubt yield something of value, would not tell us a lot, but imagine five, ten, a hundred or more. Imagine the ability to compare across services, admittedly imperfectly, the ambition of outcomes, the use of lead indicators, the degree of public confidence or the approach to transparency.
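To make the framework concrete, here is a minimal sketch of how a set of traffic-light reviews might be recorded and compared. It is my own illustration, not the method of the Massachusetts report; the service names and ratings are invented:

```python
from dataclasses import dataclass

# The four parts of the framework in Table 18, each judged red, amber or green.
PARTS = ("outcomes", "public_confidence", "stewardship", "inputs")
RATINGS = ("red", "amber", "green")

@dataclass
class ProductivityReview:
    service: str
    ratings: dict[str, str]  # part of the framework -> traffic-light rating

    def __post_init__(self):
        assert set(self.ratings) == set(PARTS), "rate all four parts"
        assert all(r in RATINGS for r in self.ratings.values()), "invalid rating"

# One review tells you little; many make comparison across services possible,
# without ever collapsing the judgement into a single number.
reviews = [
    ProductivityReview("school district A", {"outcomes": "green", "public_confidence": "amber",
                                             "stewardship": "green", "inputs": "red"}),
    ProductivityReview("hospital trust B", {"outcomes": "amber", "public_confidence": "green",
                                            "stewardship": "amber", "inputs": "amber"}),
]

for part in PARTS:
    summary = ", ".join(f"{r.service}: {r.ratings[part]}" for r in reviews)
    print(f"{part:<18} -> {summary}")
```

Note what the sketch deliberately does not do: compute an overall score. The value lies in the pattern across many reviews, which is exactly the comparison imagined above.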
Surely this would provide rich insights from which those leading public organizations – whether politically or officially – could learn lessons and then apply them. For example, it would help government avoid 'resource imprisonment' (trapping money in failing programmes) and achieve 'resource fluidity' (moving money towards priorities and successes). In the process we would have achieved another of our goals: the establishment of a common language about public sector productivity that would take it out of the field of theoretical economics and place it firmly in the hands of practitioners. The debate that would ensue could change the nature of the conversation between a spending minister and a finance minister; it could change the way civil servants are evaluated and rewarded; and it could change the nature of the dialogue between governments and the public sector workforce. These are more than marginal gains. And it has to be worth a try, because the current alternative is the intellectually bankrupt output = input. Who could possibly defend that in an era of austerity?

Let's now imagine how a government could combine the various elements we have discussed in this chapter – setting goals, setting budgets and measuring productivity – into a sequence or cycle that would bring order to the management of other people's money as Mitch Daniels did so effectively. Constitutions and laws affect the budget process in each country, so it is not possible to set out a definitive approach. The Public Expenditure Cycle (Table 19) is intended to be a conceptually clear approach that pulls all the crucial elements together in a new way that could then be adapted and refined for specific purposes in a given country. I've allocated times of the year to each step not because they are necessarily right for each country – clearly not – but to give a sense of how the process might unfold.

A two-year cycle of this kind has a great deal going for it. It allows time to refine and define priorities and to rethink how to approach delivery, as in Osborne and Hutchinson's purchasing plans. It also ensures there is an entire year devoted to finalizing planning and then implementing, without the distraction of a spending round. By contrast, a one-year cycle is an endless negotiation, with all the key players focused inward on each other's tactics rather than outward on what is being delivered. To illustrate this point, I remember in the late 1990s driving with US Education Secretary Dick Riley – a former governor of South Carolina and a tough, experienced operator with a wonderfully gentle demeanour – from the education department to the White House, where I had been invited to see Bill Clinton sign that year's budget, which he did surrounded by police officers and teachers, who were the main beneficiaries. On the way back to the education department, just minutes after the budget had been signed, Riley made a call to an important congressman. He told me it was the first step in negotiating the following year's budget.
**Table 19: Public Expenditure Cycle**

**Year 1**

_Spring_
* Cabinet debate on government priorities, following public 'conversation' or consultation over the previous six months
* The overall framework for public expenditure in the next cycle proposed by the finance ministry, debated and agreed by cabinet, thus identifying priorities

_Summer_
* Departmental proposals developed within this context, including identifying clear outcomes/targets, programmes that might be cut and areas where the approach to delivery might be changed
* Separate, related process applied to cross-cutting themes (such as tackling drug abuse or problem families)

_Autumn_
* Departments finalize negotiation with finance ministry and settle
* New overall settlement agreed by cabinet and published
* Departments begin work on a 'purchasing plan'

**Year 2**

_Spring/Summer_
* Purchasing plans finalized and formally approved
* Focus on setting up to deliver and starting on delivery

_Autumn_
* Review of progress so far, including productivity reviews and lessons learned
* Public conversation begins again
* Lessons of review applied

**Year 3**

_Spring_
* Cycle begins again

RULE 52 MAKE PUBLIC SECTOR PRODUCTIVITY CENTRAL (a two-year budget cycle will make a big difference)

##### FINANCE MINISTERS

However much emphasis the government as a whole places on the productivity of public expenditure, in the end the person who has to eat, sleep and breathe it is the finance minister. Even if every finance minister adopted the processes described above – and few have done so yet – there is another major problem, which is simply this: finance ministers are extremely busy people who have a huge burden of responsibility.

In 2014 I had the privilege of spending a few days with half a dozen African ministers of finance, when my main contribution was to discuss with them delivery and productivity as part of the management of public finances. Clearly, this was an important part of their job and they were certainly interested. It was striking, though, when we asked them to list their ambitions and priorities, how this was just one of many. Here are some of the others:

* Raising economic growth and making it inclusive.
* Eliminating corruption.
* Maintaining macro-economic stability and sustainable debt.
* Restructuring the National Treasury.
* Improving the standard of living for the people.
* Reducing poverty.
* Reducing social vulnerability.
* Putting in place proper macro-economic and fiscal policies.
* Mobilizing adequate domestic resources.
* Implementing new tax measures.
* Attaining single-digit inflation.

No one could argue that any of these are unimportant or relatively insignificant! Often in African countries the finance minister is a technocrat, perhaps with an economics or business degree from a (US) university and the experience of a spell at the World Bank. This makes sense at one level, because the tasks of ensuring macro-economic stability, such as cutting inflation and managing debt, clearly require significant technical knowledge; at another level it is a problem, because these tasks are intensely political, both inside with cabinet colleagues and outside with the people. This became apparent when we asked the ministers what the biggest barriers to their success were. Reflecting on their challenges inside government, they pointed to:

* Emerging unbudgeted needs.
* Many priorities to implement.
* Timely implementation of requirements from other sector ministries.
* Political buy-in with weak capacity in Parliament.
* Political interference.
Donors weren't always helpful either. One finance minister expressed frustration with 'bureaucracy and long processes in mobilising international resources'. Another referred simply to 'conditionalities'. If internal politics was complicated, externally it was harder still. One minister referred to the difficulty of winning 'people's support in implementation of new measures of tax' and the 'inadequate domestic resources' available. Another referred to 'corporate cartels [who] avoid taxes', 'high interest rates', the 'slow pace of implementation' and 'corruption'. Still another to 'vested interests'.

These are by no means the only challenges they face. There are also the big public–private partnership deals – such as a dam in an environmentally important location – for which the finance ministry, on behalf of the government, is usually the centre of expertise. These can become explosive issues. We took the ministers through the case study of the Chilean government, Endesa (a large company) and the proposed Ralco Dam on the Biobío river. The basic conflict was between the citizens of Santiago, the capital, whose energy needs were increasing rapidly along with the city's growing wealth, and the Pehuenche indigenous people, whose land, hundreds of miles from Santiago, would be flooded once the dam was constructed. Most of the Pehuenche settled, but famously five 'nanas', or grannies, held out and in the end, after a massive local and international campaign, Endesa gave up.

This context is worth spelling out for finance ministers because in these circumstances it is difficult – perhaps impossible – for them to give the time and attention to public sector productivity that is required, however much people such as myself might advocate it. It is also worth pointing out that some of what these ministers are held to account for – indeed are holding themselves to account for – such as macro-economic stability, is hugely affected by matters far beyond their control. The Zambian minister of finance, just to take one case, has very limited influence over the price of copper (which will be heavily determined by the growth rate in China); yet this is central to all his projections for the Zambian economy. Still less could he or any of his colleagues in Africa be held to account for the sub-prime lending crisis in the US which prompted the near-meltdown of the global economy in 2008. So the pressures on these pivotal figures in governments are huge; they have an array of vital responsibilities, for some of which they are at the mercy of global economic affairs.

Nor is this the case just for finance ministers in the developing world. Take Alistair Darling, who was Britain's Chancellor of the Exchequer from 2007 to 2010, through the most demanding period of global economic turmoil the world had seen since the early 1930s. Alistair is a truly unflappable individual. In a succession of roles, among them Secretary of State for Transport, he was one of the unsung heroes of the Blair years: calm, methodical, undemonstrative and intelligent, he got things done, such as making the trains run on time (or at least run on time more often). He could calm down any ministry after a crisis. A year into his term as Chancellor of the Exchequer, the global economy, and therefore the British banks, found themselves in very serious trouble. This is how Alistair Darling saw it:

> I don't believe in panicking before it's absolutely necessary but I came close to considering it on the morning of 7 October 2008...
> We took off from RAF Northolt on a small chartered jet. A sunrise never felt so bleak. I knew the London markets were about to open and that they would react badly to the leaked news, however wrong it was. Iceland and its banking system were close to collapse and one of its banks would probably fail that day. In Ireland the day before they had, without warning, underwritten all the savings in their banks, causing disarray for everyone else in Europe. Three weeks earlier, in the United States, the collapse of Lehman Brothers, one of the country's oldest banks, had pushed the rest of Wall Street to the edge. We were looking over the precipice.

Here you see a finance minister from one of the world's top economies sensing that he had (almost) lost control. And if Alistair Darling can't control events, then you can be reasonably confident no one else could. How much more must a finance minister in a weaker country feel at the mercy of events, especially if they have large loans from the World Bank and the IMF, whose regular delegations dictate to them what they should and shouldn't do? All of this surely strengthens the case for the establishment of a delivery function, along the lines of the various models advocated in chapter 2. Then, as these massive global events or local conflicts, as with Endesa, swirl around finance ministries, at least there is one authoritative part of the government machine constantly focused on delivering results and improving public sector productivity.

A finance minister also needs someone hard-edged on the team – someone for whom popularity is not an important consideration. Consider Henry VII. Anything but likeable is how his biographer, Thomas Penn, portrays him (in _Winter King_): 'an avaricious Machiavellian king who inspired not love but fear'; Francis Bacon called him a 'dark prince'; and Shakespeare decided he couldn't face writing a play about him. Yet measured on the test 'Did the monarch leave the country better than he found it?', Henry VII may well be as good a monarch as England ever had. He seized the kingdom in a battle in 1485 and over the twenty-four years of his reign put an end to the decades of civil war that had riven England. He founded a dynasty, the Tudors, that in turn founded a strong state. Most importantly, he brought peace, put the unruly barons in their place and got a grip on the country's finances. He left a huge surplus, rare among the monarchs of his time, and it was not his fault that his son and more famous (or infamous) successor, Henry VIII, squandered it while divorcing or beheading a succession of wives.

Getting a grip on the finances at the turn of the sixteenth century was no mean feat. Henry VII may very well have been paranoid, but he had more reason than most to be so. There were plenty of important people, at home and abroad, out to get him. Henry's solution was to build around him a close-knit team – a guiding coalition, we might call them – who were totally loyal and who were most definitely not from among the high nobility, all of whom had agendas of their own and could not be trusted. One key member of this team was Edmund Dudley, an intelligent, well-connected and ambitious rising star who had specialized in understanding the law pertaining to the king's prerogatives. 'Sharp, silver-tongued and intellectually curious' is how Thomas Penn describes him. In 1504, Henry VII made him Speaker of Parliament (kings could do that back then) and soon afterwards put him on the payroll in the palace.
From then on, Henry and Edmund Dudley were often to be found sitting together – imagine them by candlelight in the cavernous palace – sifting through accounts and, drawing on Dudley's specialist knowledge, finding ways to screw as much money for the state coffers as possible from the only three sources available: the nobility, the merchants of the City of London and the Church. As long as he had Henry's support, Dudley had no anxiety about upsetting any of the leaders of these constituencies. He may even have enjoyed it. The state's coffers filled. You can almost imagine Henry and his loyal servant (like those football fans from Millwall in south London in later generations) singing 'No one likes us, we don't care.' Even as Henry aged, the state was getting stronger, if not more cheerful.

Before anyone says anything, I am not recommending the return of a Machiavellian prince or the paranoia of Henry VII. And I am not recommending the ethics of the sixteenth century either, when it was simply assumed that Dudley would enrich his family as well as the state. I am recommending, though, that a leader who wants to deliver in a time of austerity needs the modern equivalent, following modern ethics, of Edmund Dudley: someone who can go through the accounts line by line; someone who knows the law and how it operates; someone who is intensely loyal; someone with no desire for adulation. It is possible to imagine this role being played in some governments by a top civil servant, but the right kind of politician would be better still. Not every politician wants to be a public figure; there are some who prefer the kind of behind-the-scenes role that Edmund Dudley performed and are more effective at mastering the detail than painting the big picture.

I chose Edmund Dudley as the emblematic case knowing it to be both slightly creepy and a caricature. The point is, though, that the effective management of finances and the allocation of funds to the priorities requires someone capable not just of repeatedly saying 'No', but of picking holes in numerous plausible proposals for spending money on a good cause. That is someone with a sharp brain, an attention to detail and a thick skin. So the not unreasonable question for anyone seeking to deliver public sector productivity is 'Who is your Edmund Dudley?' As we've seen, Herbert Mayhew Lord, Calvin Coolidge's Budget Director, was a twentieth-century exemplar.

RULE 53 ANSWER THE QUESTION: 'WHO IS YOUR EDMUND DUDLEY?' (there is more to delivery than being loved)

Plausible candidates may be put off by one final detail of Dudley's life. Shortly after Henry VIII succeeded his father in 1509, he discovered that an easy way to improve what we might now call his poll ratings among the nobility, the merchants and the bishops was to have Edmund Dudley beheaded.

RULE 54 FINANCE MINISTERS ARE UNDER HUGE PRESSURE (another reason for a delivery function)

To conclude the chapter, it's worth emphasizing that no one ever said this was going to be easy. No government has yet quite put together the combination of a drive for delivery, the mastery of public sector productivity and the efficient management of the public finances. Governor Mitch Daniels came close. It was important to Indiana then. It is vital to the world now. It is not an exaggeration to suggest that, unless governments master this critical combination, the success of both the global economy and accountable government will be at stake.
Any country that is able to apply systematically the wisdom of Joseph in the ancient texts will be set up to thrive in the twenty-first century.

## Conclusion: The Future of Delivery

Sohail Raza has a beaming smile and an infectious laugh. When he speaks, he speaks fast; in his enthusiasm, the words tumble over each other as in a torrent. He moved from the private sector in Lahore to help establish the data collection system for the Punjab Education Roadmap (described in chapter 4). My friend and colleague Katelyn Donnelly and Sohail worked through how to collect the data efficiently, what the targets might be for each district and for the province as a whole, and then produced trajectories for each district for each of the targets. These targets and trajectories became fundamental to the progress that has been made.

But by then Sohail had moved on. As if designing and implementing a data system for Punjab, with 60,000 schools and 25 million children, wasn't tough enough, he agreed to move to Peshawar and do the same for the old North-West Frontier Province, now called Khyber Pakhtunkhwa. True, it had about half as many schools and children, but in every other respect this was a much tougher place to work. In the south of the province there is desert heat; in the north, huge mountains where the Hindu Kush and Himalayas meet. Simply getting to the schools at all in some locations at some times of year is a major challenge. To make matters worse, Khyber Pakhtunkhwa is the province most affected by endemic conflict and terrorism. Its long border with Afghanistan and its infamous tribal areas have been an unruly and unruled base for conflict of various kinds since the mid-nineteenth century. In 2009, the first year I visited Pakistan, the Taliban advanced as far as the Swat valley, one of the districts of Khyber Pakhtunkhwa. Girls' schools were closed or destroyed. The Taliban advance had reached to within eighty miles or so of the country's nuclear-armed capital, Islamabad. Since then the situation has improved somewhat, but terrorism in Khyber Pakhtunkhwa is still endemic, creating misery for some and uncertainty for many.

These are not the easiest of circumstances in which to establish a data collection process for a school system, involving monthly visits to almost 30,000 schools, some of them very remote. But Sohail was undaunted. For two years (a period which involved, in May 2013, an election and change of government in the province), he developed a plan, drove it forward and painstakingly helped it jump every bureaucratic hurdle that was put in its way. Some hurdles it had to jump twice, because the new government understandably wanted to ensure it was not inheriting a boondoggle from the outgoing administration it had deposed. Meanwhile, in Khyber Pakhtunkhwa, unlike Punjab, the revolving door for officials still applied, at least until recently. Some only lasted a few months in post before being whirled away. Each time, Sohail found himself starting over. With Sisyphean persistence, he explained once again to each new top official in Khyber Pakhtunkhwa what he believed was necessary and eventually, unlike Sisyphus, he rolled his boulder to the top of the hill.
He won approval for the Information Monitoring Unit, struck up a friendship with the official designated to lead it, recruited the 500 data monitors, trained all of them, and secured, through the proper procurement process, motorbikes so that each of them could make the fifteen school visits a week on which the data collection system would depend. In March 2014, their first month in operation, the newly trained data monitors managed to collect data from 88 per cent of the schools, in spite of harassment from militants in some places and heavy snowfall blocking the valleys in others. The data collectors are determined people devoted to Sohail, as he explained to me. 'I trained them personally,' he laughs, 'they are my friends. I love them.' In May 2014, data was collected from 96 per cent of the schools. The results of this new approach are already apparent; absenteeism of teachers has dropped significantly.

So far, so similar to the Punjab approach. But Sohail is an innovator and an entrepreneur too. In addition to everything else he'd done, he contracted software developers to gather all the data as it came in each day and then present it on a dashboard. In April 2014 the dashboard was shown to Khyber Pakhtunkhwa's chief minister, Pervez Khattak. He praised his education department (and the UK's Department for International Development) for their contribution and said the new system would enable him to track down 'ghost schools and proxy teachers'. Now he can find out at any time of day (or night, for that matter) not just the performance of his school system last month on teacher presence, student attendance and the provision of facilities, but also what the data shows about individual districts or even schools. He can break it down by gender and type of school too.

The next stage for the development of this data collection system and dashboard will be to make it public. Chief Minister Khattak rapidly realized that by making this data publicly available he could unlock citizen pressure for improvement of the shockingly poor school system – which he believes is an embarrassment to his province. The message for other parts of the public services, such as health, is clear: transparency is to corruption what daylight is to a vampire.

In some of the least propitious circumstances imaginable, Sohail Raza has done something remarkable: not just made sure data is collected systematically, important though that is, but also anticipated two developments which are likely to transform approaches to delivery in the next decade: big data and transparency.

This brief concluding chapter has two main points. The first is to summarize some developments in the nature of government that will help to shape the science of delivery in the next decade. The second is to show how these reinforce the central argument of this book. In fact, the combination of these developments with the science of delivery could be transformative and deliver precisely those better outcomes at lower cost that citizens across the world are increasingly likely to demand. So, what are these developments?

##### DATA AND TRANSPARENCY

Sohail Raza's innovations in Khyber Pakhtunkhwa are just one example of what is happening globally. The digital revolution is creating a data revolution – the era of Big Data has arrived and is shaping everything from sport and shopping to government and public services.
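To give a flavour of the kind of aggregation a dashboard like the one in Khyber Pakhtunkhwa performs, here is a minimal sketch. It is illustrative only: the field names and numbers are invented, not taken from the actual system, whose schema this book does not describe.

```python
from collections import defaultdict

# One record per monitor visit (hypothetical schema for illustration).
visits = [
    {"district": "Swat", "school_id": "S-001", "teachers_expected": 8, "teachers_present": 7},
    {"district": "Swat", "school_id": "S-002", "teachers_expected": 5, "teachers_present": 5},
    {"district": "Peshawar", "school_id": "P-101", "teachers_expected": 12, "teachers_present": 9},
]

def district_teacher_presence(visits):
    """Roll visit-level records up into a per-district teacher-presence rate."""
    expected = defaultdict(int)
    present = defaultdict(int)
    for v in visits:
        expected[v["district"]] += v["teachers_expected"]
        present[v["district"]] += v["teachers_present"]
    return {d: present[d] / expected[d] for d in expected}

for district, rate in sorted(district_teacher_presence(visits).items()):
    print(f"{district}: {rate:.0%} of teachers present")
```

The point of the design is that raw records collected on the ground roll up automatically into the district-level indicators a chief minister can check at any time of day or night.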
**Figure 29: Using Data**

Expect to see more and more data made public and the rise of social and other enterprises that crunch the numbers government publishes and reach new conclusions. In an era where joggers and cyclists the world over are already monitoring every inch of ground they cover, expect citizens to take (and be expected to take) ever greater responsibility for their own health and well-being. Judge Damon Keith's observation that 'democracies die behind closed doors' looks increasingly prescient.

RULE 55 BIG DATA AND TRANSPARENCY ARE COMING (prepare to make the most of them)

##### PRIVACY

As the data explosion occurs, there will be growing concern about privacy, especially when extensive data about individuals comes into the possession of governments. In the era of Wikileaks and lost flash drives, many people simply don't trust governments (or corporations) with their data. The problem is that the data explosion is happening so fast that there isn't time to write the rules of the game before the game changes. Paradoxically, governments will have to strengthen transparency and privacy at the same time.

##### CITIZEN ENGAGEMENT

Increasingly we will see better educated, wealthier citizens making more demands on government. As Governor Martin O'Malley puts it, 'The next horizon is citizen engagement.' They will expect to be participants in the services they demand, not just recipients. They will expect to exercise choice as well as voice. They will be more assertive as consumers and as citizens. Governments will therefore need to become more responsive and agile.

Jeremy Heimans believes that there will be '21st century movements and ventures that use the power of participation to change the world'. He and Henry Timms compare old power with this emerging new power. Old power, they say, is held by a few and is 'closed, inaccessible and leader-driven', whereas new power is made by many and is 'open, participatory and peer-driven'. I doubt that new power will replace old power; the future will be a combination of the two. Certainly governments will have to adapt how they approach delivery and, as services shift from adequate to good to great, participatory processes will become ever more important. For example, a July 2014 paper from the think tank Reform on 'The Expert Citizen' suggests that a combination of redesigned buildings and communities with active, better-informed citizens could reduce the burden on the police.

##### DIGITAL GOVERNMENT

As increasing numbers of people experience daily, much of government is going online. The consulting company BCG discovered that there are online services in all areas, from parks and sporting facilities to health, housing, tax and transport. They also found that people's frustration was greatest with the services they considered most important. Even so, people did not want to turn the clock back: they liked the direction and wanted it delivered better. To satisfy citizens, whole services, not just demands for information, need to go online. It may be some comfort to government officials to learn that BCG did not find that government was any worse at this transition than the business sector. Digital government transforms the prospects for data and transparency.
Take OpenGov, a US platform on which 150 US local governments analyse more than $50 billion of annual expenditure:

> Governments use OpenGov _internally_ to create custom reports, help operations manage the budget, keep senior executives and legislators maximally informed, and help with important workflows from the budgeting process to internal audits. And they use it _externally_ to publish interactive budgets, share this information with the community, and even achieve revenue goals by disseminating important financial data around tax or bond measures.

##### COMPETITIONS

Arne Duncan, Barack Obama's education secretary, pulled off a masterstroke: he ran a competition called Race to the Top. States would compete for funds to reform their education systems. To enter they had to meet certain requirements – introduce data systems, or lift any cap they might have had on the number of charter schools. The prize money was quite substantial, so twenty or thirty states changed their laws to enable them to enter, though only a dozen actually won. In other words, running the competition enabled Arne Duncan not just to promote innovation, but also to influence policy right across the country. In the terms of chapter 8, he got more output for his money. Competitions, run transparently, are likely to become a widespread means for government to innovate and advance an agenda, especially where the way forward is not entirely clear.

##### MARKETS AND GOVERNMENT

William Easterly's book _The Tyranny of Experts_ is a _tour de force_; his central argument is that the most important role of government is to secure individual economic and political rights and create well-regulated markets. Beyond that, how much government does should be a matter of political choice. Easterly points out that Adam Smith thought that only government could solve some problems, such as a malfunctioning market or one where public goods are required (such as schools or roads) which do not provide enough of a private return. He argues forcefully that limiting government to these basic functions and leaving markets to do the rest is the route to economic growth. Certainly he believes this approach will be far more effective over the long run than the combination of experts and autocracy which he argues has underpinned the approach to development in the aid community.

While there will be outliers, such as the Scandinavian countries, I expect the future will see markets becoming steadily more important in meeting the needs and aspirations of citizens, with governments becoming smaller but needing to become more effective. Thus, contrary to some ill-informed commentary, the choice is not between markets and government, but between effective combinations of the two. Theodore Roosevelt, a powerful president deeply committed to markets, put it this way as he ended his term: 'The danger to American democracy lies not in the least in the concentration of administrative power in competent and accountable hands. It lies in having the power insufficiently concentrated so that no one can be held accountable for its use.' In short, as the boundaries of markets are extended, governments will need to become more competent and more accountable. The science of delivery will become more important than ever.
RULE 56
SUCCESSFUL MARKETS AND EFFECTIVE GOVERNMENT GO TOGETHER (avoid the false dichotomy)

##### ENTREPRENEURSHIP

Entrepreneurship used to be seen as the preserve of the business sector, in contrast to the public sector, which was seen as 'risk-averse'. Two shifts over the past generation have made this analysis obsolete. One is the rise of successful social enterprises, some of which are now providing vital services. The other is the spread of entrepreneurial thinking from the business sector into the social and public sectors. As Mitchell Weiss comments, the phrase 'government entrepreneur' is not necessarily an oxymoron. Increasingly, he goes on to argue, entrepreneurship can and should be taught to leaders in all sectors. Indeed, if students from all sectors learn entrepreneurship together, they may well develop mutual understanding and opportunities to collaborate. The key ingredients of successful entrepreneurship are – to summarize Weiss – test early, test often, don't grow too fast or too soon, collaborate with like-minded people across sector boundaries, and have a compelling narrative of what you intend to do and why. This should be common ground in any sector. The British government, for example, claims 'A hundred new British businesses have been spun out from the public sector and are delivering nearly £1.5 billion of public services.'

RULE 57
PUBLIC AND SOCIAL ENTREPRENEURSHIP WILL BECOME INCREASINGLY IMPORTANT TO DELIVERING OUTCOMES (encourage it)

This way of thinking is already spreading in government circles. Boundaries are blurring. Increasingly, citizens are focused on the outcomes and the cost; they are generally open-minded about who provides and how, as long as they get the desired results at a reasonable price.

##### THE IMPLICATIONS FOR THE SCIENCE OF DELIVERY

These trends are likely to change the nature of government radically over the next decade or so, but they will not replace the need for a science of delivery; far from it. They will reinforce it, because the distilled essence of the science of delivery is that it is a set of processes that enables governments to deliver ambitious goals by learning effectively as they go, and refining as necessary. The science of delivery itself is still in its infancy. The more it is applied, the deeper our knowledge of it will become.

We should leave the last word, therefore, to one of its acknowledged masters. Idris Jala, whom we have met several times before in this book, is a classic example of a government entrepreneur. Interviewed by Deepa Iyer of the Woodrow Wilson School of Public and International Affairs, he set out his thinking, which brings together the trends for the future mentioned in this conclusion and the accounts of delivery described throughout this book. He emphasized clear priorities, good data, regular progress updates, delivery chains and a delivery unit with a small, lean team. He also warned that, however good the delivery unit might be, it is the ministers and ministries who must ultimately deliver. The delivery unit itself can only be a catalyst. He listed the principles on which his approach is based – set ambitious goals (he calls it 'the game of the impossible'); choose good indicators; ensure people shift from talking to acting; adapt the reform approach as the situation changes; and build ever stronger coalitions.

The science of delivery, important though it is to the future of government, is not a complete science and never will be.
The soap opera factors of politics and government will never be eliminated. This is a cause for celebration – human judgement in all its fallibility will ultimately reign supreme. However much we know and however much power we wield, there will always be the unexpected development to throw us off course. For those with power, hubris is always a risk. Pride comes before a fall, in government above all. Idris Jala puts it thus:

> The [last] principle is... divine intervention... You could ask who controls the world and some people say it is God, some people say it is Fate... there are lots of things outside of our control... the beauty of understanding this is the following; we become humble... Vulnerability is to my mind a virtue. If you feel vulnerable... you know the world is not at your feet.

Idris Jala's final point reinforces my own concluding message: you might adopt in full the science of delivery and still fall short. Even so, we should do our best to apply what we do know, confident that doing so will, in most places, most of the time, make a big difference to the outcomes government delivers for citizens. This will strengthen both markets and government. If with due humility the knowledge of how to run a government becomes more widely shared, then surely the world will become a better place.

## Appendix: The 57 Rules

##### 1. PRIORITIES

1. HAVE AN AGENDA (even if, like Lord Salisbury, it is to do nothing)
2. DECIDE ON YOUR PRIORITIES (really decide)
3. BE UNREASONABLE (sometimes) AND USE THE MAP OF DELIVERY
4. SET A SMALL NUMBER OF WELL-DESIGNED TARGETS (but don't call them targets if you don't want to!)
5. APPLY THE SCIENCE TO TARGET-SETTING (but don't depend on it)
6. CHECK FOR PERVERSE OR UNINTENDED CONSEQUENCES (they may not happen)
7. CONSULT WITHOUT CONCEDING ON AMBITION (opposition is inevitable)
8. TARGETS ARE IMPORTANT BUT NOT THE POINT (state and restate the story about the moral purpose)

##### 2. ORGANIZATION

9. REVIEW THE CAPACITY OF YOUR SYSTEM TO DELIVER THE AGREED GOALS (and do it quickly)
10. SET UP A DELIVERY UNIT (call it what you like, but separate it from strategy and policy)
11. THE DELIVERY UNIT NEEDS TO BE SMALL AND WELL LED (and excellent at building relationships)
12. CREATE A GUIDING COALITION FOR EACH PRIORITY (to increase clarity and speed)
13. BUILD THE CAPACITY TO DELIVER YOUR AGENDA (civil service reform for its own sake can be an energy drain)

##### 3. STRATEGY

14. WORK FROM PRINCIPLES TO STRATEGY TO POLICY (and put a stake through the heart of initiatives)
15. TRUST AND ALTRUISM IS POPULAR BUT DOESN'T WORK (other than in unusual circumstances)
16. THE HIERARCHY AND TARGETS APPROACH WILL GET YOU FROM AWFUL TO ADEQUATE (if executed well)
17. CHOICE IS BECOMING INCREASINGLY IMPORTANT IN PUBLIC SYSTEMS (it's a good in itself)
18. TRANSPARENT PUBLIC RANKING WORKS (don't flinch)
19. CONTRACTING OUT SERVICES BREAKS MONOPOLIES (but don't think it relieves you from management responsibilities)
20. WELL-DESIGNED PRIVATIZATION CAN IMPROVE EFFICIENCY (it can also lead to smaller, more effective government)
21. A WELL-DESIGNED VOUCHER SCHEME EMPOWERS THE BENEFICIARIES (and can promote equity)
22. GOVERNMENT SHOULD TAKE ITS STEWARDSHIP RESPONSIBILITY SERIOUSLY; THAT INCLUDES STRATEGY, REGULATION AND THE SUPPLY OF SKILLED PROFESSIONALS

##### 4. PLANNING

23. UNDERSTAND IN YOUR HEAD (and feel in your heart) THE GAP BETWEEN YOUR ASPIRATION AND THE UNVARNISHED REALITY
24. UNDERSTAND THE POTENTIAL DRIVERS OF CHANGE (and base your plan on them)
25. PREPARE A PLAN TO IMPLEMENT YOUR STRATEGY THAT IS GOOD ENOUGH TO GET STARTED (and don't make concessions for a quiet life)
26. STRENGTHEN THE DELIVERY CHAIN (don't think you can get away without doing so)
27. NEVER GO ANYWHERE WITHOUT A TRAJECTORY (you'll learn better, faster and deeper)
28. COLLECT DATA, ASK THE RIGHT QUESTIONS AND PRESENT THE ANSWERS BEAUTIFULLY (and don't forget integrity)
29. DATA MAKES A JOB DO-ABLE (until then, all you can do is make excuses and hope for the best)

##### 5. ROUTINES

30. DON'T BE SPOOKED BY THE DEAFENING SILENCE (but keep listening)
31. ANTICIPATE THE IMPLEMENTATION DIP (and demonstrate the leadership required to get through it)
32. DEAL WITH CRISES (but don't use them as an excuse)
33. GOVERNMENT BY ROUTINE BEATS GOVERNMENT BY SPASM (it's not even close)
34. PREPARE MONTHLY NOTES FOR THE LEADER (and make them 'deeply interesting')
35. ROUTINE MEETINGS OR STOCKTAKES CREATE FALSE DEADLINES (and solve problems before they become crises)
36. A FULL-SCALE REVIEW OF THE PROGRAMME AT LEAST ONCE A YEAR PROVIDES DEEP LEARNING (which can be acted on immediately)
37. UNDERSTAND THE WOOD AND THE TREES (and the view beyond)

##### 6. PROBLEM-SOLVING

38. CATEGORIZE PROBLEMS BY THEIR INTENSITY (and act accordingly)
39. DIAGNOSE PROBLEMS PRECISELY (and act accordingly)
40. TAKE ALL THE EXCUSES OFF THE TABLE
41. LEARN ACTIVELY FROM EXPERIENCE (failure is a great teacher)
42. NEGOTIATE ON THE BASIS OF PRINCIPLE (but don't depend on it)
43. GUARD AGAINST FOLLY (it has been common throughout history)

##### 7. IRREVERSIBILITY

44. THERE IS NO SUBSTITUTE FOR SUSTAINED, DISCIPLINED POLITICAL LEADERSHIP
45. PERSIST (but don't expect the credit)
46. LEARN THE LEARNABLE AND CONTROL THE CONTROLLABLE (obsessively)
47. INVEST DEEPLY AND CONTINUOUSLY IN SKILL AND CAPABILITY (commitment will follow)
48. THINK THROUGH THE POLITICS OF IRREVERSIBILITY (anticipate the future)
49. DRIFT IS THE ENEMY OF DELIVERY (momentum is its friend)

##### 8. (OTHER PEOPLE'S) MONEY

50. 'MORE FOR LESS' TRUMPS 'INVESTMENT FOR REFORM' (and may deliver more)
51. PERIODICALLY ADOPT A BOLD, CLEAN-SLATE APPROACH TO BUDGETING (to liberate resources)
52. MAKE PUBLIC SECTOR PRODUCTIVITY CENTRAL (a two-year budget cycle will make a big difference)
53. ANSWER THE QUESTION: 'WHO IS YOUR EDMUND DUDLEY?' (there is more to delivery than being loved)
54. FINANCE MINISTERS ARE UNDER HUGE PRESSURE (another reason for a delivery function)

##### CONCLUSION: THE FUTURE OF DELIVERY

55. BIG DATA AND TRANSPARENCY ARE COMING (prepare to make the most of them)
56. SUCCESSFUL MARKETS AND EFFECTIVE GOVERNMENT GO TOGETHER (avoid the false dichotomy)
57. PUBLIC AND SOCIAL ENTREPRENEURSHIP WILL BECOME INCREASINGLY IMPORTANT TO DELIVERING OUTCOMES (encourage it)

## Bibliography

Acemoglu, D. and Robinson, J. (2012), _Why Nations Fail: The Origins of Power, Prosperity and Poverty_, London, Profile Books
Adonis, A. (2012), _Education, Education, Education: Reforming England's Schools_, London, Biteback
Africa Governance Initiative (2013), _Two Steps at a Time: Rwanda's Strategic Capacity Building Initiative_, London, AGI
Andrew, C. (2009), _The Defence of the Realm: The Authorized History of MI5_, London, Allen Lane
Andrews, L. (2014), _Ministering to Education: A Reformer Reports_, Cardigan, Parthian
Andrews, M., Pritchett, L. and Woolcock, M. (2012), _Escaping Capability Traps Through Problem-Driven Iterative Adaptation (PDIA)_, CGD Working Paper 299, Washington DC, Center for Global Development
Auditor General for Wales (2005), _NHS Waiting Times for Wales_, vol. 1: _The Scale of the Problem_ and vol. 2: _Tackling the Problem_
Barber, M. (1996), '"A Heaven-Sent Opportunity": James Callaghan and the Ruskin Speech', London, _Times Educational Supplement_
— (1997), _The Learning Game: Arguments for an Education Revolution_, London, Indigo
— (2004), _Courage and the Lost Art of Bicycle Maintenance_, London, PMDU
— (2008), _Instruction to Deliver: Fighting to Transform Britain's Public Services_, London, Methuen
— (2013), _The Good News from Pakistan: How a Revolutionary New Approach to Education Reform in Punjab Shows the Way Forward for Pakistan and Development Aid Everywhere_, London, Reform
— and Day, S. (2014), _The New Opportunity to Lead: A Vision for Education in Massachusetts in the Next 20 Years_, Boston, Massachusetts Business Alliance for Education
— with Moffit, A. and Kihn, P. (2010), _Deliverology 101: A Field Guide for Educational Leaders_, California, Corwin
— and Mourshed, M. (2007), _How the World's Best Performing School Systems Come Out on Top_, Chicago, McKinsey & Company
Benington, J. and Moore, Mark H. (2011), _Public Value: Theory and Practice_, London, Palgrave Macmillan
Bevan, G. and Hamblin, R. (2009), 'Hitting and Missing Targets by Ambulance Services for Emergency Calls: Effects of Different Systems of Performance Measurement within the UK', _Journal of the Royal Statistical Society_: Series A, vol. 172, no. 1, pp. 161–90
— and Hood, C. (2006), _What's Measured is What Matters: Targets and Gaming in the English Public Healthcare System_, Public Administration
— and Wilson, D. (2013), 'Does "Naming and Shaming" Work for Schools and Hospitals? Lessons from Natural Experiments following Devolution in England and Wales', _Public Money and Management_, vol. 33, issue 4, pp. 245–52
Bhagwati, J. and Panagariya, A. (2014), _Why Growth Matters: How Economic Growth in India Reduced Poverty and the Lessons for Other Developing Countries_, New York, Public Affairs
Bingham, T. (2011), _The Rule of Law_, London, Penguin
Blair, T. (2011), _A Journey_, London, Arrow
Blunkett, D. (2006), _The Blunkett Tapes: My Life in the Bear Pit_, London, Bloomsbury
Bobbitt, P. (2003), _The Shield of Achilles: War, Peace and the Course of History_, London, Penguin
— (2009), _Terror and Consent: The Wars for the Twenty-first Century_, London, Penguin
Bok, D. (2002), _The Trouble with Government_, paperback edition, Boston, Harvard University Press
Botsman, P. and Latham, M. (2001), _The Enabling State: Putting People Before Bureaucracy_, Australia, Pluto Press
Bratton, W. and Knobler, P. (1998), _Turnaround_, New York, Random House
Brooks, G. et al (1996), _Reading Performance at Nine_, Slough, National Foundation for Educational Research
Bryson, B. (1995), _Notes from a Small Island_, London, Doubleday
Butler, R. A. (1971), _The Art of the Possible_, London, Hamish Hamilton
Campbell, A. (2007), _The Blair Years_, London, Hutchinson
Carrasco, M. and Goss, P. (2014), _Digital Government: Turning the Rhetoric into Reality_, Boston Consulting Group
Cavendish, Mark (2014), _At Speed: My Life in the Fast Lane_, London, Ebury Press
Christensen, C., Allworth, J. and Dillon, K. (2012), _How Will You Measure Your Life?_, New York, HarperCollins
Clarke, C. (ed.) (2014), _The Too Difficult Box: The Big Issues Politicians Can't Crack_, London, Biteback
Collins, J. (2001), _Good to Great: Why Some Companies Make the Leap... and Others Don't_, London, Random House
Collins, J. C. and Porras, J. I. (1996), _Built to Last: Successful Habits of Visionary Companies_, London, Century
Collins, P. and Byrne, L. (eds.) (2004), _Reinventing Government Again_, London, Social Market Foundation
Colville, J. (1985), _The Fringes of Power: Downing Street Diaries 1939–1955_, London, Weidenfeld & Nicolson
Connolly, S., Bevan, G. and Mays, N. (2011), _Funding and Performance of Healthcare Systems in the Four Countries of the UK Before and After Devolution_, London, Nuffield Trust
Dallek, R. (2007), _Nixon and Kissinger: Partners in Power_, London, Allen Lane
Darling, A. (2011), _Back from the Brink: 1000 Days at No. 11_, London, Atlantic
Davis, J. (2007), _Prime Ministers and Whitehall 1960–74_, London, Hambledon Continuum
Davis, S., Lukomnik, J. and Pitt-Watson, D. (2006), _The New Capitalists: How Citizen Investors Are Reshaping the Corporate Agenda_, Cambridge MA, Harvard Business School Press
de Madariaga, I. (1981), _Russia in the Age of Catherine the Great_, London, Phoenix
DiCerbo, K. and Behrens, J. (2014), _Impacts of the Digital Ocean on Education_, London, Pearson
Dixon, N. (1994), _On the Psychology of Military Incompetence_, London, Pimlico
Doz, Y. and Kosonen, M. (2014), _Governments for the Future: Building the Strategic and Agile State_, Helsinki, Sitra
Drèze, J. and Sen, A. (2013), _An Uncertain Glory: India and Its Contradictions_, London, Allen Lane
Duhigg, C. (2013), _The Power of Habit: Why We Do What We Do in Life and Business_, London, Random House
Dumas, V., Lafuente, M. and Parrado, S. (2013), _Strengthening the Center of Government for Results in Chile: The Experience of the Ministry of the Presidency and its President's Delivery Unit (2010–13)_, Washington, Inter-American Development Bank
Easterly, W. (2014), _The Tyranny of Experts: Economists, Dictators and the Forgotten Rights of the Poor_, New York, Basic Books
Ellis, J. (2013), _Revolutionary Summer: The Birth of American Independence_, New York, Alfred Knopf
Filmer, D., Hammer, J. and Pritchett, L. (2000), 'Weak Links in the Chain: A Diagnosis of Health Policy in Poor Countries', Washington, _World Bank Research Observer_, vol. 15, no. 2, August
Freedman, L. (2013), _Strategy: A History_, Oxford, Oxford University Press
Friedman, B. M. (2005), _The Moral Consequences of Economic Growth_, New York, Alfred Knopf
Friedman, M. (2005), _Trying Hard is Not Good Enough_, Victoria, Canada, Trafford Publishing
Friedman, T. (2006), _The World Is Flat: A Brief History of the Twenty-first Century_, London, Penguin
— and Mandelbaum, M. (2011), _That Used to Be Us: What Went Wrong with America and How it Can Come Back_, New York, Little, Brown
Fukuyama, F. (2011), _The Origins of Political Order: From Prehuman Times to the French Revolution_, London, Profile Books
— (2014), _Political Order and Political Decay: From the Industrial Revolution to the Globalisation of Democracy_, London, Profile Books
Garman, J. (2014), _Europe's Power: Re-energising a Progressive Climate and Energy Agenda_, London, Institute for Public Policy Research
Gawande, A. (2008), _Better: A Surgeon's Notes on Performance_, London, Profile Books
— (2010), _The Checklist Manifesto: How to Get Things Right_, London, Profile Books
Ghani, A. and Lockhart, C. (2009), _Fixing Failed States: A Framework for Rebuilding a Fractured World_, Oxford, Oxford University Press
Giuliani, R. (2002), _Leadership_, New York, Little, Brown
Gold, J. (2014), _International Delivery: Centres of Government and the Drive for Better Policy Implementation_, London, Institute for Government
Goodwin, D. K. (2013), _The Bully Pulpit: Theodore Roosevelt, William Howard Taft and the Golden Age of Journalism_, New York, Simon & Schuster
Guha, R. (2013), _Gandhi Before India_, London, Allen Lane
Halberstam, D. (1992), _The Best and the Brightest_, New York, Ballantine Books
Harding, R. (2014), 'World Bank: Man on a Mission', London, _Financial Times_, 7 April
Harris, J. and Rutter, J. (2014), _Centre Forward: Effective Support for the Prime Minister at the Centre of Government_, London, Institute for Government
Harvard Business Review (2013), _On Teams_, Boston, Harvard Business Review Press
Heifetz, R. (1994), _Leadership Without Easy Answers_, Boston, Harvard University Press
— and Linsky, M. (2002), _Leadership on the Line: Staying Alive Through the Dangers of Leading_, Boston, Harvard Business School Press
Heimans, J. and Timms, H. (2014), 'Understanding "New Power"', _Harvard Business Review_
Hennessy, P. (2000), _The Prime Minister: The Office and its Holders Since 1945_, London, Allen Lane
Hill, P. et al (2013), _Strife and Progress: Portfolio Strategies for Managing Urban Schools_, Washington, Brookings Institution Press
Holt, R. (2001), _Second Amongst Equals: Chancellors of the Exchequer and the British Economy_, London, Profile Books
Hunt, T. (2005), _Building Jerusalem: The Rise and Fall of the Victorian City_, New York, Metropolitan Books
Hyman, P. (2005), _1 Out of 10_, London, Vintage
Iyer, D. (2011), 'Interview with Idris Jala' for Innovations for Successful Societies, Woodrow Wilson School of Public and International Affairs and the Mamdouha S. Bobst Center for Peace and Justice, Princeton University
Jenkins, R. (1995), _Gladstone: A Biography_, London, Macmillan
— (2001), _Churchill: A Biography_, London, Macmillan
Jenkins, S. (2006), _Thatcher and Sons: A Revolution in Three Acts_, London, Allen Lane
Kaplan, R. (2014), 'Strategy Execution', presentation, Harvard Business School
Kelman, S. (2006), 'Improving Service Delivery Performance in the United Kingdom', _Journal of Comparative Policy Analysis_, vol. 8, no. 4, December
Kerchner, C. and Mitchell, D. (1988), _The Changing Idea of a Teachers' Union_, Lewes, Falmer Press
Kim, J. Y. et al (eds.) (2000), _Dying for Growth: Global Inequality and the Health of the Poor_, Cambridge MA, Common Courage Press
King, A. and Crewe, I. (2013), _The Blunders of Our Governments_, London, Oneworld
_King James Bible_ (2011 edition), London, Collins
Kotter, J. (1996), _Leading Change: An Action Plan from the World's Foremost Expert on Business Leadership_, Boston, Harvard Business School Press
Lane, J. E. (2000), _The Public Sector: Concepts, Models and Approaches_, London, Sage Publications
Lawson, N. (2010), _The View from No. 11: Memoirs of a Tory Radical_, revised edition, London, Biteback
Lax, D. and Sebenius, J. (2006), _3-D Negotiation: Powerful Tools to Change the Game in Your Most Important Deals_, Boston, Harvard Business School Press
Le Grand, J. (2003), _Motivation, Agency and Public Policy: Of Knights and Knaves, Pawns and Queens_, New York, Oxford University Press
— (2007), _The Other Invisible Hand: Delivering Public Services through Choice and Competition_, Princeton, Princeton University Press
Lesley, E. (2014), _Mapping a Transformation Journey: A Strategy for Malaysia's Future, 2009–2010_, Innovations for Successful Societies, Woodrow Wilson School of Public and International Affairs and the Mamdouha S. Bobst Center for Peace and Justice, Princeton University
Liu, E. and Hanauer, N. (2011), _The Gardens of Democracy: A New American Story of Citizenship, the Economy and the Role of Government_, Seattle, Sasquatch Books
Macfarlane, R. (2013), _The Old Ways: A Journey on Foot_, London, Penguin
Major, J. (1999), _John Major: The Autobiography_, London, HarperCollins
Mandelson, P. (2002), _The Blair Revolution Revisited_, London, Politico's
Manna, P. and McGuinn, P. (eds.) (2013), _Education Governance for the Twenty-first Century_, Washington, Brookings Institution Press
Mayer-Schönberger, V. and Cukier, K. (2014), _Big Data: A Revolution That Will Transform How We Live, Work and Think_, New York, Mariner Books
McChesney, C., Covey, S. and Huling, J. (2012), _The Four Disciplines of Execution_, New York, Free Press
McGuinty, D., Speech 2010, personal communication with the author
McKinsey & Co (2013), _Voices on Society: The Art and Science of Delivery_, London, McKinsey & Company
McNamara, R. (1996), _In Retrospect: The Tragedy and Lessons of Vietnam_, New York, Vintage
Micklethwait, J. and Wooldridge, A. (2014), _The Fourth Revolution: The Global Race to Reinvent the State_, London, Penguin
Mishra, P. (2012), _From the Ruins of Empire: The Intellectuals Who Remade Asia_, New York, Farrar, Straus and Giroux
Moore, M. (1995), _Creating Public Value: Strategic Management in Government_, Boston, Harvard University Press
Morris, E. (2001), _Theodore Rex_, New York, Random House
Mourshed, M., Chijioke, C. and Barber, M. (2010), _How the World's Most Improved School Systems Keep Getting Better_, Chicago, McKinsey & Company
Mulgan, G. (2006), _Good and Bad Power: The Ideals and Betrayals of Government_, London, Allen Lane
— (2008), _The Art of Public Strategy: Mobilizing Power and Knowledge for the Common Good_, Oxford, Oxford University Press
— (2014), 'Rewiring the Brain: A Rough Blueprint for Reforming Centres of Government', London, NESTA
Naughtie, J. (2001), _The Rivals: The Intimate Story of a Political Marriage_, London, Fourth Estate
Nishtar, S. (2010), _Choked Pipes: Reforming Pakistan's Mixed Health System_, Oxford, Oxford University Press
Norris, E., Rutter, J. and Medland, J. (2013), _Making the Games: What Government Can Learn from London 2012_, London, Institute for Government
Olivier, R. (2002), _Inspirational Leadership: Henry V and the Muse of Fire_, London, Spiro Press
Osborne, D. and Gaebler, T. (1993), _Reinventing Government: How the Entrepreneurial Spirit is Transforming the Public Sector_, New York, Plume
Osborne, D. and Hutchinson, P. (2006), _The Price of Government: Getting the Results We Need in an Age of Permanent Fiscal Crisis_, New York, Basic Books
Page, E. C. and Jenkins, B. (2005), _Policy Bureaucracy: Government with a Cast of Thousands_, New York, Oxford University Press
Panchamia, N. and Thomas, P. (2014), _Public Service Agreements and the Prime Minister's Delivery Unit_, London, Institute for Government
Parker, L. and Miller, J. (2012), 'The Eleven Conversations', _Brunswick Review_, issue 6
Pasternak, B. (2011), _Doctor Zhivago_, London, Vintage
Penn, T. (2012), _Winter King: The Dawn of Tudor England_, London, Penguin
Pink, D. (2011), _Drive: The Surprising Truth About What Motivates Us_, Edinburgh, Canongate
Policy Profession Board (2013), _Twelve Actions to Professionalise Policy Making_, London, UK Civil Service
Powell, J. (2011), _The New Machiavelli: How to Wield Power in the Modern World_, London, Vintage
Propper, C., Sutton, M., Whitnall, C. and Windmeijer, F. (2010), 'Incentives and Targets in Hospital Care: Evidence from a Natural Experiment', _Journal of Public Economics_, vol. 94, issues 3–4, pp. 318–35
Ramsbotham, O., Woodhouse, T. and Miall, H. (2005), _Contemporary Conflict Resolution: The Prevention, Management and Transformation of Deadly Conflicts_, 2nd edition, Cambridge, Polity Press
Regan, E. (2013), 'Hitting the Target or Missing the Point: What are the Drivers to Influence the Achievement of Key Targets in Governmental Performance Management Systems? An Examination of the Role of the Prime Minister's Delivery Unit in Relation to A&E Waiting Times (2001–2005)', dissertation for MBA at Warwick Business School
Reich, R. (1998), _Locked in the Cabinet_, New York, Vintage Books
Riddell, P. (2005), _The Unfulfilled Prime Minister: Tony Blair's Quest for a Legacy_, London, Politico's
Ringen, S. (2013), _Nation of Devils: Democratic Leadership and the Problem of Obedience_, Yale, Yale University Press
Rogers, Everett M. (1983), _Diffusion of Innovations_, New York, Free Press
Russakoff, D. (2014), 'Schooled', _New Yorker_, 19 May
Ryan, Alan (2013), _On Politics: A History of Political Thought from Herodotus to the Present_, London, Penguin
Sahlberg, P. (2011), _Finnish Lessons_, New York, Teachers College Press
Sahlgren, G. (2013), _Incentivising Excellence: School Choice and Education Quality_, London, Centre for Market Reform of Education
Scharff, M. (2012), _Delivering on a Presidential Agenda: Sierra Leone's Strategy and Policy Unit, 2010–2011_, Innovations for Successful Societies, Woodrow Wilson School of Public and International Affairs and the Mamdouha S. Bobst Center for Peace and Justice, Princeton University
— (2013), _A New Approach to Managing at the Center of Government: Governor Mitch Daniels and Indiana, 2005–2012_, Innovations for Successful Societies, Woodrow Wilson School of Public and International Affairs and the Mamdouha S. Bobst Center for Peace and Justice, Princeton University
Schlesinger, R. (2008), _White House Ghosts: Presidents and Their Speechwriters_, New York, Simon & Schuster
Seldon, A. (2005), _Blair_, London, Simon & Schuster
—, Snowden, P. and Collings, D. (2007), _Blair Unbound_, London, Simon & Schuster
Sellar, W. and Yeatman, R. (1998), _1066 and All That_, London, Methuen
Shlaes, A. (2013), _Coolidge_, New York, HarperCollins
Silver, N. (2012), _The Signal and the Noise: The Art and Science of Prediction_, London, Penguin
Smillie, I. (2009), _Freedom from Want: The Remarkable Success Story of BRAC, the Global Grassroots Organisation That's Winning the Fight Against Poverty_, Dhaka, Kumarian Press
Smith, J. E. (2012), _Eisenhower in War and Peace_, New York, Random House
State of Victoria (2005), _Growing Victoria Together_
Steinberg, J. (2011), _Bismarck: A Life_, Oxford, Oxford University Press
Stevenson, A. (2013), _The Public Sector: Managing the Unmanageable_, London, Kogan Page
Sugden, J. (2012), _Nelson: The Sword of Albion_, London, Bodley Head
Taleb, N. N. (2007), _The Black Swan: The Impact of the Highly Improbable_, London, Allen Lane
— (2012), _Antifragile: How to Live in a World We Don't Understand_, London, Allen Lane
Taliaferro, J. (2013), _All the Great Prizes: The Life of John Hay, from Lincoln to Roosevelt_, New York, Simon & Schuster
Thaler, R. and Sunstein, C. (2008), _Nudge: Improving Decisions About Health, Wealth and Happiness_, London, Penguin
Timmins, N. (1995), _The Five Giants: A Biography of the Welfare State_, London, HarperCollins
Trewhitt, K. et al (2014), _How to Run a Country: A Collection of Essays_, London, Reform
Tuchman, B. (1990), _The March of Folly: From Troy to Vietnam_, St Ives, Abacus
US Senate Committee on the Budget and Taskforce on Government Performance, Report, 29 October 2009
Wainwright, A. (2007), _The Southern Fells_, revised edition, London, Frances Lincoln
Wales Audit Office (2006), _Ambulance Services in Wales_
Weiss, M. (2014), _'Government Entrepreneur' is not an Oxymoron_, Cambridge MA, Harvard Business Review Blog Network, 28 March 2014
Whelan, F. (2014), _The Learning Challenge_, self-published
Wiggins, B. (2012), _My Time_, London, Yellow Jersey Press
Williams, J. and Rossiter, A. (2004), _Choice: The Evidence_, London, Social Market Foundation
Wolmar, C. (2013), _To the Edge of the World: The Story of the Trans-Siberian Railway_, London, Atlantic Books

## Notes

##### PREFACE

1. Gold, p. 9
2. Quoted in Goodwin, p. 564
3. Quoted in Fukuyama (2014), p. 152
4. _Sunday Telegraph_, 19 October 2014

##### INTRODUCTION: THE MISSING SCIENCE OF DELIVERY

1. _Economist_, 4 November 2010
2. Quoted in Barber (2008), p. 71
3. Bok, p. 2
4. Kim, p. 6
5. McKinsey, pp. 64–5
6. Drèze and Sen, p. ix
7. Ibid., p. xi
8. _Economist_, 18 October 2014
9. Easterly, p. 254
10. Kapuściński, quoted in Mishra, pp. 304–5
11. Fukuyama (2014), p. 6
12. _The Times_, 8 October 2014

##### 1. PRIORITIES

1. Barber (2008), p. 124
2. Taliaferro, p. 185
3. Shlaes, p. 9
4. Ibid.
5. Speech, 3 July 1948
6. Barber (2008), p. 49
7. Ibid., pp. 80–81
8. Pasternak, p. 424
9. _The Times_, 1 August 2003
10. _Evening Standard_, 27 February 2014
11. Blair, p. 338

##### 2. ORGANIZATION

1. Barber (2008), p. 32
2. Blair, p. 124
3. Ibid., p. 207
4. Ibid., p. 283
5. Doz and Kosonen, p. 6
6. Blair, pp. 338–9
7. Iyer, p. 8
8. Ibid.
9. All quotes from Mulgan (2014)
10. Technical Note No. IDB-TN-563, p. 10
11. Ibid., pp. 41–2
12. Ibid., p. 31
13. See Mulgan (2014)
14. Technical Note No. IDB-TN-563, p. vii
15. Sugden, p. 849
16. Brainy Quotes website
17. See Steinberg, p. 106
18. Emmet Regan, unpublished MBA dissertation, Warwick University

##### 3. STRATEGY

1. See, for example, Bevan and Wilson
2. Le Grand (2003), p. 8
3. Bevan and Wilson, pp. 250–51
4. Ibid., p. 246
5. Pink, p. xi
6. Bevan and Wilson, p. 246
7. Personal conversation with Governor O'Malley
8. Quoted in Bevan and Hamblin, p. 166
9. Ibid., p. 168
10. Lawson, p. 120
11. Garman, p. 3
12. Sahlgren, p. 18
13. Liu and Hanauer, p. 11
14. Fukuyama (2014), pp. 3–23
15. Guha, pp. 299–300
16. Ibid., p. 299
17. Ibid., p. 335
18. Ibid., p. 300

##### 4. PLANNING

1. Smith, pp. 344–5
2. Ibid., p. 353
3. Brainy Quotes website
4. Ellis, pp. 149, 157
5. Kotter
6. Ellis, p. 157
7. Ringen, p. 4
8. Iyer, p. 3
9. Ibid.
10. Ibid., p. 4
11. Powell, p. 29
12. 'The Character of War', Oxford University lecture, 2010
13. Wiggins, pp. 100–105
14. _Sunday Telegraph_, 6 January 2002
15. Quoted in Freedman, p. 104

##### 5. ROUTINES

1. Wainwright
2. Russakoff
3. Ibid.
4. _Huffington Post_, 3 October 2014
5. _Education Week_, 9 July 2014
6. Gawande (2008), p. 29
7. Duhigg, p. xix
8. Ibid., p. 98
9. Ibid., p. 100
10. Barber (2008), p. 309
11. Ibid., p. 228
12. Andrew, p. 291
13. Ibid.
14. Ibid.
15. Reich, pp. 74–5
16. King and Crewe, p. 386
17. Shlaes, p. 254
18. Ibid., pp. 261, 428
19. Iyer, p. 7
20. _New York Times_, 1 December 2013
21. _The Times_, 23 July 2004

##### 6. PROBLEM-SOLVING

1. Macfarlane, p. 129
2. Cavendish, p. 15
3. Quoted in Pink, p. 198
4. Scharff (2012), p. 2
5. Ibid., p. 5
6. Ibid., p. 11
7. Ibid.
8. Freedman (2013), pp. 33, 37
9. Rogers
10. Parker and Miller, p. 6
11. Tuchman, p. 6
12. Ibid., pp. 2–3
13. Ibid., p. 3
14. Ibid., p. 4
15. Ibid., p. 6
16. McNamara, pp. 267–8
17. Quoted in ibid., p. 299

##### 7. IRREVERSIBILITY

1. Adonis, p. 37
2. Quoted in Smith, p. 293
3. Quoted in Goodwin, p. 5
4. Ibid., p. 208
5. David Brailsford website
6. Nassim Nicholas Taleb website
7. Taleb, location 1520
8. Africa Governance Initiative, pp. 9–10
9. Ringen, p. 64
10. Ibid., p. 72
11. Ibid., p. 73
12. Hansard, Debate on the Address, 12 November 1936

##### 8. (OTHER PEOPLE'S) MONEY

1. King James Bible, p. 60
2. Ibid., p. 61
3. Scharff (2013), p. 5
4. Ibid.
5. Ibid., pp. 7–8
6. Ibid., p. 10
7. Ibid., p. 13
8. Ibid., pp. 14–15
9. Osborne and Hutchinson, pp. 6–12
10. Richard Murray, _A Review of the Atkinson Review_ (2005), p. 2
11. _Atkinson Review_, p. 182
12. Ibid.
13. Ibid., p. 183
14. Ibid.
15. Ibid.
16. Moore, p. 11
17. Ibid., p. 10
18. Ibid.
19. Barber and Day, p. 112
20. Benington and Moore, p. 125
21. Doz and Kosonen, pp. 7–8
22. Harvard Business School Case, 7 May 2009, _Endesa Chile: Raising the Ralco Dam_
23. Darling, pp. 1–2
24. Penn, p. xix
25. Ibid., p. 158

##### CONCLUSION: THE FUTURE OF DELIVERY

1. _The News_ (Pakistan), 19 April 2014
2. Quoted in Bingham, p. 151
3. Heimans and Timms
4. Carrasco and Goss, pp. 3–4
5. B. S. Srinivasan, http://a16z.com/2014/09/24/opengov/, p. 2
6. Quoted in Goodwin, p. 564
7. _Harvard Business Review_ blog, 28 March 2014
8. Press release, 23 July 2014
9. Iyer, pp. 9–10

## Acknowledgements

The preparation of this book drew heavily on conversations with many and varied friends and colleagues around the world, only a few of whom I can thank here. In particular, I have drawn on my experience of eight years working for the British government, four of them directly on delivering results for Tony Blair in No. 10 Downing Street. I have also drawn on my work since then in dozens of countries around the world, including the US, Chile, Colombia, Malaysia and Pakistan, all of which feature in these pages. Many politicians and officials, writers and thinkers, colleagues and friends have contributed to my thinking.

I will always be grateful to Tony Blair for the opportunity he gave me in 2001 to set up and lead the Prime Minister's Delivery Unit. Our collaboration over the next four years was a remarkable experience and we discovered that together we had become innovators in the process of government. I have also stayed in dialogue with Blair in the years since he stepped down, as he began, through his Africa Governance Initiative, for example, to assist other leaders of governments.

During my time in the Prime Minister's Delivery Unit, I worked with and learned from numerous ministers and officials in the UK. My team in the Delivery Unit was as talented and committed a group of people as you could ever wish to meet. Clara Swinson and Vanessa Nicholls have become top officials. Adrian Masters is one of the leaders of the National Health Service, and Peter Thomas, in a variety of roles, has helped shape the future of the British civil service.
Tony O'Connor, mentioned a number of times in the text above, has led the Government Operational Research Society with distinction for over a decade and remains a good friend and powerful influence. Kieran Brett has also gone on to greater things. Richard Page-Jones, Simon Day, Simon Rea and Leigh Sandals were colleagues then and have often been colleagues since, as we've shared our knowledge and experience on four continents; their skill, insight, commitment and integrity are an inspiration.

I have stayed closely in contact with some of the ministers I got to know during the Blair years, and have kept learning from them: Andrew Adonis, David Blunkett, Charles Clarke, Tessa Jowell and David Miliband are among them. I've also been privileged to be in touch with political leaders in the UK since Blair, including Gordon Brown, who has made such a commitment to education around the world since he left office. In the Cameron years, Michael Gove and Andrew Mitchell have always been open to dialogue about government, policy and delivery.

Former colleagues from No. 10 have remained a significant influence. Peter Hyman has set up and now runs an inspiring school while continuing to comment on politics and policy. Liz Lloyd and Gavin Kelly were excellent advisers to Blair and Brown respectively. Geoff Mulgan, Nick Pearce, Matthew Taylor and Phil Collins, all colleagues from that time, have become leading British intellectuals, and are a constant source of ideas. Jonathan Powell, a colleague in No. 10, has done wonderful work since, and written with clarity and wit about his time in Downing Street. Jeremy Heywood, now the cabinet secretary, is a peerless public servant and good friend. Nick Macpherson, then in charge of public spending and now permanent secretary of the Treasury, invented the gently mocking term 'deliverology' and stayed true to its tenets.

Across the world, I've had the opportunity to meet and sometimes work with a range of political leaders, some of whom appear in the book. Julia Gillard's ability to pursue a strategy while simultaneously thriving in the cut-throat political culture that is Australia was never less than impressive. Three leaders whose work features in these pages were unfailingly kind enough to give me time to discuss making government work and, more importantly, showed how to do it in their daily work: Najib Razak, prime minister of Malaysia, Dalton McGuinty, premier of Ontario, and Shahbaz Sharif, chief minister of Punjab, Pakistan.

In US education I was privileged to be in dialogue with Arne Duncan, the education secretary; Joel Klein, for several rollercoaster years Chancellor of New York City Schools; Paul Pastorek, State Superintendent in Louisiana; and Mitchell Chester, his equivalent in Massachusetts. I always enjoyed interacting with Antonio Villaraigosa when he was mayor of Los Angeles, and his chief of staff at the time, Robin Kramer. Melanie Walker, who leads the President's Delivery Unit at the World Bank, is exactly the kind of driven, inspired individual required for such a post, and always a source of ideas and information.

Around such politicians, there are always talented advisers or officials helping to make things happen: Idris Jala has done so for Najib Razak; Gerald Butts, Jamieson Steeve and Michael Fullan (the latter my collaborator on many ventures) did so for Dalton McGuinty; and now Aizaz Akhtar is doing so for Shahbaz Sharif.
All these advisers are dedicated, talented people who, with the best kind of patriotism, devote themselves to their countries.

In the US, I had the opportunity in 2010 to found the Education Delivery Institute, which has been working with US states to assist in the delivery of successful education reform. Kathy Cox, its chief executive, Nick Rodriguez and Ellyn Artis are good friends and committed experts in delivery.

Over the past year, I've had the pleasure of working with the Harvard School of Public Health on a programme designed to assist ministers of health and ministers of finance in the developing world to have a greater impact on outcomes. Julio Frenk, the dean of the school, and Michael Sinclair, the programme's dauntless leader and organizer, are great colleagues. In addition to the Harvard School of Public Health, I'm grateful for opportunities to collaborate with or lecture at a number of universities around the world, and always find interaction with students energizing and insightful. These include the Judge Business School at the University of Cambridge, Exeter University, the Harvard Graduate School of Education, the Lahore University of Management Sciences, the London School of Economics, the Blavatnik School of Government at the University of Oxford, Queen Mary University London, the Lee Kuan Yew School of Public Policy at the National University of Singapore and Stanford University. Sir Steve Smith, vice-chancellor of Exeter University, is one of the shrewdest observers of the modern world and a constant source of insight.

Since August 2009, I have visited Pakistan on more than forty occasions to assist the governments of Punjab and Khyber Pakhtunkhwa with education reform. In addition to Shahbaz Sharif, whom I have already mentioned, numerous officials in Punjab have stood out for their talent and determination in difficult circumstances. I'll mention just two: Aslam Kamboh, the Secretary – Schools from 2010 to 2013, and his successor, Abdul Jabar Shaheen. They are both outstanding public servants. Both the British High Commission and DFID have provided outstanding support for my work in Pakistan. By thanking successive High Commissioners – Sir Adam Thomson and Philip Barton – and successive DFID Heads of Mission – George Turkington and Richard Montgomery – I thank all the officials with whom I've had the pleasure to interact. Throughout the time I've been working there, a succession of talented young people have been part of my team supporting the government of Punjab. They are too numerous to mention. Two of them I'll come to later. Here, let me thank Fenton Whelan, who provided insight and drive throughout the years I've been working there and who had the courage to join me on my first visit to Pakistan in August 2009.

For the six years after I left Downing Street, I was a partner at McKinsey and worked with numerous talented people around the world. McKinsey taught me that it prefers to work behind the scenes and stay out of the limelight, so here I'll just thank successive managing partners, Ian Davis and Dominic Barton, for putting up with me, and through them thank all the others with whom I collaborated. Since then I've worked for Pearson, the education company, as Chief Education Advisor. Successive chief executives, Marjorie Scardino and John Fallon, have been good colleagues and friends and consistently supportive of my varied activities.
Pearson's radical reorganization, with its focus on demonstrating learning outcomes, has been taking place at the same time as I have been writing this book and represents a classic delivery challenge. Colleagues in the Pearson Executive have tolerated it cheerfully when I've circulated a PowerPoint slide based on this book as a means of suggesting the way forward for the company. The members of my team at Pearson over recent years have been talented, creative and diligent colleagues and great to work with.

Two colleagues in particular have become close friends and mentors to me, though neither is yet thirty. Katelyn Donnelly and Saad Rizvi joined my team in Punjab, Pakistan in January 2011 and were instrumental in establishing the Education Roadmap which has since made such a difference to millions of children. Later that year, Katelyn joined me at Pearson; Saad joined us too, early in 2012. Collaborating with them on work and writing has been a privilege. Their energy and iconoclasm, curiosity and creativity, their understanding of the twenty-first-century world, their constant stream of ideas (all of them exciting and most of them excellent), and their restless dissatisfaction with the way the world is are both a challenge and an inspiration. With these two and Richard Page-Jones, Simon Day, Simon Rea and Leigh Sandals, I have founded Delivery Associates, a small organization committed to assisting government to deliver better results for citizens.

Denise Todd is another colleague who followed me from McKinsey to Pearson, where she is the business manager for my team. In addition to her meticulous work, she is a thoughtful friend, always willing to offer me good advice, even when sometimes I'd prefer not to hear it. Meanwhile, her organizational skills quite simply make my crazy working life possible.

Georgina Cooke, a close friend of ours who would have loved to see the book come to fruition, sadly died during its writing. The question 'What would Georgina think?' enters my head regularly, especially when I have blundered. She always had a practical answer while seeing the funny side too. Kirsche Hunt, who now schedules my working life with unfailing good humour, deserves thanks too.

My good friends over decades, Robin Alfred, David Keeton and David Pitt-Watson, provided numerous ideas and challenges during the course of our endless conversations. Alan Evans has also been a good mentor and source of insight over many years. I never have a conversation with David Puttnam without gathering at least one golden nugget of insight into the world. Meanwhile, Iqbal Khan is deeply thoughtful on the future of government in Islamic countries. I am fortunate too to be involved in setting up the Boston Consulting Group's Centre for Public Impact, which I co-chair. Adrian Brown at BCG is a good friend and thoughtful colleague. I have also enjoyed collaboration with Jitinder Kohli at Deloitte.

Nandini Ramamurthy and Rachelle Albern undertook important research which enabled me to find many more examples of interesting and effective practice in government, while at the same time helping me to round out others. Simon Rea read a draft from cover to cover and suggested numerous stylistic improvements. Peter Riddell, Chief Executive of the Institute for Government in London, also offered numerous helpful comments on the text. Tanya Kreisky has been a collaborator, over more than twenty years, on more writing and publishing projects than either of us cares to remember.
What has made working with her on this book and others a pleasure is the combination of her consummate professionalism with a rare _joie de vivre_, which means every conversation with her, even when the pressure is on, is cheerful as well as practical. I would also like to thank Josephine Greywoode, Richard Duguid and Bela Cunha, all of whom were total professionals.

Last, but of course not least, there is the family. My three wonderful daughters, Naomi, Anja and Alys, mock me, laugh at me, bring me down to earth and tease me for my love of graphs. Each in their different ways is a joy to spend time with, as are my two sons-in-law, Guy and Morgan. My grandson, Jacob, born the year I left No. 10, would win an Olympic gold medal if there were one for talking and swimming at the same time, and persuaded me that I should give him the first copy of this book I receive from the publisher. Whether he'll enjoy it as much as the Horrible Histories remains to be seen.

Then there's Karen, whose love and friendship make life worth living. After thirty years together in Hackney, we've spent the past few years in Devon. While I've been writing this book, Karen has cast her magic spells and created our own small corner of paradise. To live with such an incredible person in such a beautiful place is endlessly restorative and a blessing beyond words. It is also a perfect setting for writing.

Needless to say, any errors or misjudgements in the pages above are mine alone.

##### ALLEN LANE

First published 2015
Copyright © Michael Barber 2015
ISBN: 978-0-141-97959-5

* The exception was County Durham. When I called the chief education officer and told him he was the only exception and that I would have to point this out to the prime minister, whose constituency was in County Durham, he reluctantly came into line too.
* See _Deliverology 101_ for a much more detailed explanation.
* Concept originally attributed to Noel Burch of Gordon Training International: _Four Stages for Learning Any New Skill_.
* The only exceptions to this rule are those thirty or so oil-rich nations which fund government from the proceeds of oil and tend to spend large amounts very wastefully indeed; and they too are in effect spending the future wealth of their nations.
{ "redpajama_set_name": "RedPajamaBook" }
5,875
How many ounces are in a cup of chocolate chips? A cup of chocolate chips weighs six ounces. Although a standard cup holds eight ounces, that measure applies only to liquids; chocolate chips are measured by weight. A standard 12-ounce bag of chocolate chips therefore contains about two cups.

When was the Reese's Peanut Butter Cup invented?
{ "redpajama_set_name": "RedPajamaC4" }
5,502
/*
 * CodePress color styles for Java syntax highlighting
 * By Edwin de Jonge
 */
b {color:#7F0055;font-weight:bold;font-style:normal;} /* reserved words */
a {color:#2A0088;font-weight:bold;font-style:normal;} /* types */
i, i b, i s {color:#3F7F5F;font-weight:bold;} /* comments */
s, s b {color:#2A00FF;font-weight:normal;} /* strings */
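For context, here is a minimal sketch of the markup these rules are written against. Highlighters in the CodePress style rewrite matched source tokens into short inline tags, and the element-to-token mapping below (b for reserved words, a for types, i for comments, s for strings) is inferred from the selectors and comments above, not taken from CodePress documentation:

<!-- Hypothetical highlighter output for one Java declaration. -->
<code>
  <b>private</b> <b>final</b> <a>String</a> name = <s>"value"</s>; <i>// a field</i>
</code>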
{ "redpajama_set_name": "RedPajamaGithub" }
5,669
The Quartier du Faubourg-Montmartre is the 35th of the 80 quartiers (administrative neighbourhoods) of Paris, located in the 9th arrondissement.

Location

The district in the 9th arrondissement of Paris is bounded by the following streets:
West: Rue Laffitte, Rue Fléchier
North: Rue Lamartine
East: Rue du Faubourg Poissonnière
South: the Boulevards des Italiens, Montmartre and Poissonnière

Origin of the name

The quarter is crossed by the Rue du Faubourg Montmartre and, owing to its proximity to the commune of Montmartre, was named Section du Faubourg-Montmartre as early as 1790, during the French Revolution.

History

By a prefectural decree of 10 May 1811, the Section du Faubourg-Montmartre became the Quartier du Faubourg-Montmartre. On 16 June 1859 the district was assigned to the 9th arrondissement.

Sights

Folies Bergère, Rue Richer
Les Enfants du Paradis, Rue Richer
Théâtre des Nouveautés, Boulevard Poissonnière
Le Palace, Rue du Faubourg Montmartre
Hôtel Drouot auction house, Rue Drouot
The World of Banksy, Rue du Faubourg Montmartre
Musée Grévin wax museum
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,128
Q: iPhone: View that is a Property Fails to Appear after Adding a @synthesize Directive for the Property

I have a MainWindow.xib which has a tab bar controller. The first tab bar item has a view that loads from "View1.xib". In View1.xib, I drag the UI elements onto it. The .h file contains:

#import <UIKit/UIKit.h>

@class View1Controller;

@interface View1Controller : UIViewController {
    IBOutlet UIView *view;
    IBOutlet UIButton *startButton;
}

@property (retain, nonatomic) UIView *view;
@property (retain, nonatomic) UIButton *startButton;

- (IBAction)startClock:(id)sender;

@end

In the .m file, I do nothing, and it behaves normally: I can see the view and its button. But after I add:

@synthesize view, startButton;

the application shows a blank view with no button, yet produces no errors. What is happening?

A: The basic problem is that UIViewController already has a view property. When you redefine it in your UIViewController subclass, you overwrite that view property. Frankly, I'm surprised it even compiles.

To fix:

(1) First, ask yourself if you need another view property beside the inherited one at all. If you need just one view for the controller, just use the inherited property.

(2) If you do need a reference to a second view, name it like this:

#import <UIKit/UIKit.h>
// Note: don't forward declare a class in its own header.

@interface View1Controller : UIViewController {
    // UIView *view is inherited from UIViewController, so don't redeclare it.
    IBOutlet UIView *myView;
    IBOutlet UIButton *startButton;
}

// The view property is inherited from UIViewController.
@property (retain, nonatomic) UIView *myView;
@property (retain, nonatomic) UIButton *startButton;

- (IBAction)startClock:(id)sender;

@end

Then, in the implementation:

// @synthesize view; is inherited from UIViewController, so omit it.
@synthesize myView, startButton;

A: The problem is that you declared your view and startButton variables as IBOutlets. This means that Interface Builder binds these variables directly from the XIB file. The properties you defined aren't IBOutlets, though. When you synthesize them, you overwrite the getters/setters and Interface Builder cannot bind to them anymore.

To fix your problem, remove the IBOutlet specifier from your member variables and change your property declarations to the following:

@property (retain, nonatomic) IBOutlet UIView *view;
@property (retain, nonatomic) IBOutlet UIButton *startButton;
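As a usage sketch, a matching implementation file for the first answer's renamed outlet might look like the following. The dealloc and the body of startClock: are illustrative assumptions in the manual retain/release style of that iPhone OS era, since the question never showed its .m file:

#import "View1Controller.h"

@implementation View1Controller

// Synthesize only our own outlets; `view` stays inherited from UIViewController.
@synthesize myView, startButton;

- (IBAction)startClock:(id)sender {
    // Placeholder action: the question never showed the real clock logic.
    NSLog(@"startClock: fired by %@", sender);
}

- (void)dealloc {
    // Retained properties must be released under manual reference counting.
    [myView release];
    [startButton release];
    [super dealloc];
}

@end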
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,981
Novell's Dirty Little Secret: It Helps OOXML (Updated)

Posted in Formats, GNOME, GNU/Linux, IBM, ISO, Microsoft, Mono, Novell, Office Suites, Open XML, OpenDocument, OpenOffice, Patents, Standard at 10:01 pm by Dr. Roy Schestowitz

Time to spill some beans

An important article has just been published by Bruce Byfield. It highlights conflicting roles and views in the ODF/OOXML debate, which divide Novell and GNOME, respectively. BoycottNovell.com is actually cited by Linux.com (not for the first time), the context being its views on OOXML, Mono, GNOME, Novell and whatever entwines them. Familiarisation with these issues is probably required. From the article:

> GNOME Foundation defends OOXML involvement
>
> …I suspect that many in the community would agree with Ossendryver's statement on his blog that "The participation of GNOME in ECMA TC45's apparent subversion of the standards process is a major disservice to FOSS and all in the community who have worked so hard for open platforms and open standards." From this position, what matters is loyalty — and that, for many, seems to mean support only for ODF and a complete boycott of any efforts to make OOXML a standard. Far from clarifying matters, the Foundation's statement may very well serve only to confirm this position and to justify the paranoia about its motives.

This article follows a press release from the GNOME Foundation. The press release addresses the issues and doubts surrounding OOXML and GNOME's stance on the issue, which is still mixed. There is a lot more to come. There are many things which the article does not mention, so we wish to reveal some bits of information that we have gathered thanks to anonymous contributors. The text below blends various views which we are permitted to quote without attribution. Our goal is to inform.

Where do we start? There is so much information owing to transparency in the Free software development world. Here are some highlights that shed light on FOSS bodies and individuals. We wish not to 'attack' (or criticise) the community; rather, we want to concentrate more on Microsoft until March next year. What we do need to consider, however, are those in the community who are possibly doing damage, notably by lending Microsoft a hand.

Jody Goldberg

Here is Jody Goldberg expressing his opinion that OOXML should be a standard:

> The effort is hampered by my disagreeing with the opinion you, and much of the community appear to hold. I think OOX should be blessed as a standard, 'the MS Office XML File Format'…

"The press release and the Linux.com article seem like a case of slight misalignment."

Jeff Waugh assured me that Jody was merely experimenting with OOXML support, but the above shows that he's in alignment with Miguel de Icaza's stance. Miguel says that OOXML is a "superb standard". There's a difference between supporting OOXML as a standard and implementing it because "there's no other choice". The press release and the Linux.com article seem like a case of slight misalignment.

Now, watch this new discussion thread from Groklaw, which points to Slashdot. It's about proprietary extensions in OOXML. Check the followup. This seems like typical Microsoft disinformation, but it comes from the mouth of Jody Goldberg. Shane wrote about this before and we are seeing signs of more to come. Why is Microsoft being defended by this developer? A source tells us that Jody Goldberg and Michael Meeks have a personal vested interest in OOXML becoming an ISO-approved standard.
They have already made improvements to OOXML to actually help it become an ISO standard. Further, we are told that statements such as the following raise deep concern about the interested parties. In a blog post from Jan 30th, 2007 (Miguel de Icaza's blog):

The original submission to the ECMA TC45 working group did not have any of this information. Jody Goldberg and Michael Meeks that represented Novell at the TC45 requested the information and it eventually made it into the standards. I consider this a win, and I consider those 324 extra pages a win for everyone (almost half the size of the ODF standard).

Miguel de Icaza

This one new observation has been mentioned here before, although it was intended to remain secret. Here is Miguel trying to resolve comments in Microsoft's favour. Why on earth does a Novell employee, who is being paid by Novell for his work, virtually aid suppression of ODF? What is Novell's stance on this issue? It should be clear by now that OOXML was merely a response to ODF. Its aim was to prevent the industry from establishing a vendor-neutral consensus on standards.

Michael Meeks

We mentioned Michael Meeks quite a few times recently due to what we consider an OpenOffice.org fork [1, 2, 3, 4]. The following might be an interesting thread to read ("Regarding OOXML and Microsoft patents"). Meeks seems to come out first attacking ODF, according to one who is familiar with this discussion. Indeed, here is Meeks coming out against ODF in a way. If you look closely, there are also signs of questioning Groklaw's credibility (smear campaigns come to mind again). This isn't the first time that Novell has done this to discredit Groklaw.

Attacks on reputation are still happening. Only 4 hours ago, two separate threads were 'placed' in several newsgroups (to be mirrored on the World Wide Web) which say that schestowitz.com and boycottnovell.com are attacking with Trojan horses. It's a lie and a false accusation. It's probably part of an attempt to have the sites blacklisted (never mind reputation), and these attempts are coming from anonymous posters on compromised (zombie) PCs around the world. Groklaw had a similar story to tell a few years ago (Maureen O'Gara, her publisher, and SCO were probably the only parties involved after the stalking). Anyway, anyway, anyway… back to the point now.

Kohei Yoshida

The position statement from GNOME says that they support ODF, but GNOME members who are associated with Novell (past and present) appear to be pushing for OOXML. It also looks as though they are now working on OOXML filters, which will be entered into GNOME's OpenOffice.org with the help of a Novell employee (Kohei Yoshida).

Then we come to Evolution, about which we recently found out something unpleasant. Miguel de Icaza wants to add Mono extensions to it, although it is not necessary and many people depend on this core application. Here you can see Novell and GNOME (yes, they are listed as a pair) supporting Microsoft services. Is this Microsoft Linux in the making? Well, it's a partly sarcastic question, but it no longer seems so far-fetched.

OpenSUSE/GNOME/Novell

Look at the bottom of the page of the press resource for GNOME. It should be totally inappropriate for a non-profit free software foundation to be promoting a commercial product, and yet Kevan Barney from Novell is listed as "Contact for questions about the Novell Linux Desktop".
Is GNOME promoting Novell's products? Is it an endorsement? A dependency? To quote the source which found this out, "It feels like GNOME is letting Microsoft control the Free Desktop through the backdoor, so we must stop it." Novell is apparently directing GNOME activities too. Let's not forget Mono's increasing role in GNOME. Mono is sponsored only by Novell.

Some of the findings above are both baffling and worrisome. This not only makes Novell and Mono inseparable; it also gives a bridge for Microsoft to invade GNOME's decision-making procedures. The separation between Mono and GNOME seems to be gradually fading.

Check out this discussion about the vote in Geneva. It comes from the GNOME Foundation's mailing list:

6) OpenXML vs ODF
The announcement of the first ISO vote on OOXML has been published. A second very important vote will take place in February in Geneva and only technical comments will be considered by ISO at that time. Anne believes that GNOME should have a position on OOXML as an open standard. Even if GNOME can not influence the first vote, GNOME can still air a general view on the standardization at stake

What is the question here? The answer should be "No". OOXML is not suitable, unless you are Microsoft, in which case it's all about your financial interests (by Microsoft's own admission). Why be so equivocal on this issue when your goal is to create a free desktop? To quote another source, "It is troubling that Gnome is still having an internal debate to come out with a simple statement and help the ODF/FOSS at large [, such as] "We do not support OOXML as an ISO standard but will work on interoperability as our users begin to need it. We are members of the ODFAlliance.org and support ODF.""

Jeff is a polite person, but we are left with some unanswered questions. I know he has a consultancy that he runs with his wife (or something along these lines). He appears in the Australian press when Free software issues come up, so his presence is difficult to ignore. He also markets GNOME, or supports those who do market it, based on what I can gather.

We are not exactly sure, however, whether Jeff and the other GNOME board directors make a living only from their own businesses. The main question to ask here, given what we have seen above, is whether there are any increases in the GNOME donations that arrive from Novell. What about the patent deal? Did the deal with Microsoft play a role? I am merely asking because I have not inquired, so these are not facts. Let's just assume that it's all false.

Jeff's position on ODF has always been ambivalent and he doesn't speak about it very clearly and openly. He was asked on the ODF list (by Lars) about the Foundation's support for the ISO policy of "one standard, tested per field". Jeff would not answer. He brought up arguments against this, which fly in the face of what FOSS stands for. He also said something to the effect that GNOME ships code all over the world. The question to ask is, "what code and to whom?"

Whether money goes into GNOME and types like Jeff through Novell (Microsoft by association?) is an interesting question to ask. They would then become protective as far as Novell and OOXML go (less likely to extend to Microsoft, having seen the press release from Jeff). The same goes for Miguel de Icaza.
Another source which spoke to Jody (and Jeff) says that hope of convincing them to strictly support ODF was lost. Jeff and Jody apparently don't contradict one another. Jeff compares OOXML (e.g. in Gnumeric) to Samba, but see this recent comment on this issue (from Béranger). It's not that simple a comparison. There are probably several discussions about this, but the one at hand is said to be "quite contentious and did not end on a pleasant note with Jeff and Jody." That's what we've been told anyway. From the ODF thread where Lars asks Jeff questions:

Lars: GNOME could easily clear up this misunderstanding by publishing a statement clarifying their opposition to MSOOXML, the independence of individual developers to do what they want, and the support for ISO's "one standard" policy.

Jeff: The GNOME Foundation could say such a thing, but it wouldn't necessarily reflect the opinions of GNOME developers, corporate contributors, etc. Besides, in what way do you suggest we "oppose OOXML"? Entirely? Should we oppose implementation of it? Should we oppose our users using it? Should we stop our developers from supporting it? Should we oppose its acceptance as an ISO standard? Most of these are entirely unrealistic.

Lars: GNOME backs ISO's "one standard" policy

Jeff: We'd love it if organisations would focus on collaborating around a single standard, but I'm not sure we'd say this as a matter of opposition to OOXML. Think about this for a minute: When we put Free video codecs on the agenda for ISO standardisation, would you like someone to come back with, "But we already have MPEG4"? Perhaps arguing for "one standard" is not the best way to achieve your aims. I think an important distinction to consider is that GNOME, supported by the GNOME Foundation, is not principally an advocacy organisation: We write and ship code for users who work in the real world.

There are other board members, whose affiliation and stance we know very little about (if anything). These include:

Behdad Esfahbod
Glynn Foster
Quim Gil
Anne Østergaard
Vincent Untz

What Lies Ahead

Here is a prediction of things to come. This was sent in by a reader. These are merely some thoughts about how Microsoft can sabotage the process in the future, especially when everything goes back to ISO:

Try to bargain with the NBs. "We'll fix it in release 2.0" or "We'll harmonize with ODF, but only once we're approved".

Try to take over the NBs by signing up more MS partners. When February comes along and NBs decide whether to change their vote, what prevents another herd of partners from joining the NBs on the last day to vote? It almost worked before and there has been no rebuke by ISO. Microsoft just needs to repeat that approach and they are guaranteed to win. The main thing is to avoid their memos becoming public, as in Sweden.

Escalate the decision. Most ISO NBs are run by the government. What we see in the industrial west, with independent vendor forums, is the exception. So Microsoft can directly appeal in most of the world to the administrations, where offers of discount enterprise agreements or free software for schools have been effective ways of molding behavior in the past. Even in the US we saw such direct appeals from Microsoft to the Commerce Dept, and these were effective, getting the government members of our NB to flip their vote from No to Yes.

Well, Microsoft has already 'dumped' some charity on India just a few days after India said it would vote "No" on OOXML.
Elsewhere, an FFII campaign spotted a new case where Microsoft tries to take over the proceedings in Portugal (again). Last but not least, let's remind ourselves of OOXML patents, whose existence Microsoft wished to deny or not talk about. A reader says:

If you go to WIPO and search Keywords "Front page = Microsoft XML", you will see a few recent PCT applications since Microsoft's pledge not to sue. The question to ask then is, "are any of these applications related to OOXML and, if so, which ones are?" If some are, to me it seems strange that they make pledges and then still file patents. IBM made a similar pledge some months ago and it even annulled a poor patent that got spotted and ridiculed on Slashdot.

As far as Microsoft goes, it remains somewhat of a mystery. Remember that Microsoft uses OpenOffice.org and OOXML 'protection' to create a divide between 'legal' Linux distributions and 'illegal' ones.

This post was just a collection of thoughts, streams of consciousness, and a few speculations that require further evidence. In any case, it seems like Novell's role in GNOME is not healthy for GNOME's existence (let alone the success of ODF), to say the very least. Only yesterday, we delved deeper into the connections between Microsoft and Novell, which desperately needs Microsoft's money. It is worth stressing that Novell should be approached very cautiously by the Free software world. Novell deserves to be perceived as somewhat of a Microsoft subsidiary at this stage. █

Update: we have just been reminded of another item that we had spotted and mentioned a couple of weeks ago. The gist of it all is that Novell will be presenting at the upcoming XML 2007 conference in December. Microsoft has a sponsored track and Novell will prop up OOXML. Well done, Novell. Your paymaster will be very pleased and will possibly permit you to sell more 'Linux coupons'.

Related posts:

OOXML in Australia a Novell/GNOME Deja Vu
Novell Being Bribed by Microsoft to Support OOXML is Not News
Novell is Denouncing OOXML-Like Bogus 'Standards' After Helping OOMXL
Alex Brown, Miguel de Icaza, and Full-time Microsoft Employee Smear ODF Again
Microsoft's Moonlight "Promise" Full of Holes
Mono Proponents Do Not Address the Real Questions
Microsoft and ODF: "Not Just Beer"
Novell News Summary – Part II: Moblin, ProBook, Certification, and Xandros/Presto
Microsoft and Novell (Almost Merged) versus IBM and Sun (to be Merged)
Sun Responds — Gently — to Novell's OpenOffice.org FUD
Quick Mention: Novell is Helping Microsoft OOXML Again
Microsoft Uses Novell to Say Open Source Software Supports OOXML
Novell Still Insults Competing GNU/Linux Distributions and Sun's OpenOffice.org
Microsoft's Professional Developers Conference and Novell
Novell Markets Its OpenOffice.org Fork Using Patents-Encumbered Microsoft Add-ons
Microsoft/Novell Fork OpenOffice.org and Insult Sun, Warn Your Distributor Now
OOXML/ODF Roundup: ODF is Winning, BECTA Runs Back to Microsoft's Bed
IDG, IDC, and Microsoft Money on Their Table (Updated)
Novell as Microsoft's Orwellian Revisionist
SCO Also Used to Contribute to Linux, Just Like Novell
Sam Ramji, the Man Who Wants to Politely Steal from GNU/Linux
Novell: Thanks, Sun, for All the Hard Work on OpenOffice.org. We'll Take Over from Here.
Novell and OpenOffice.org — Harming, Helping, or Just Exploiting?
Novell, Microsoft… and IBM… Maybe Oracle Too (Part II)
OOXML Incidents Index: From A to C
Novell+Ximian=SLED.NET, Sun+Java=SuSE GNU/Linux/Solaris
The 'Maureen O'Gara' of the Hollywood Hills
Signs of Progress for OpenDocument Format, Microsoft-imposed Disinformation Abound
When Microsoft Shenanigans and Novell Stomp on Your Open Standards
Summary of Mono's Danger to GNU/Linux and the Free Desktop
Is OpenOffice.org Leaving Novell and Linspire Further Behind the GNU/Linux Leaders?
Novell and OOXML
Peter O'Kelly (Burton Group) and Wouter Van Vugt Go Batting for Monopolisation
Microsoft's Brian Jones Assembles OOXML Patent Portfolio
Entering Paranoid Mode in the Face is Paranoia-imposing FUD
Rumour About Microsoft's Role in Novell's SEC Probe
Office Open XML (OOXML): Software Patents, Briberies, Binaries, O/S-dependent Bits
OOXML and Patent Traps
Novell Vice President Again Defends Microsoft's OOXML
Novell and Microsoft Make an 'Impartial' Crowd
Quick Mention: Novell is Very Busy with GNOME's OpenOffice.org (Corrected)
Necessary Cloning versus Unnecessary Cloning
Love GNOME, Beware of Mono

Comments:

Michael said,

Why is GNOME working so hard to create a standard that won't even be in use by the next version of MS Office (whenever that comes out)? Brian Jones, who has been Microsoft's main spokesman for OOXML, said in his own blog that MS couldn't commit to supporting an OOXML standard beyond the first version if the standards committee did not follow what Microsoft needed it to do (http://www.techworld.com/storage/features/index.cfm?featureid=3685). Once MS drops use of the standard, support will quickly drop by anybody else who has put time into it.

e-2e#t said,

The answer is: GNOME is not 'working hard to create a standard'; it is merely not ignoring it. And that is a good thing, isn't it? Otherwise we will be plagued one year from now by GNOME users complaining: "Why can't your shitty Gnumeric open my colleague's Excel spreadsheets? Is that too much to ask?" I don't hear anyone complaining that Abiword and OOo open and save .DOC files and OOo and Gnumeric open and save .XLS files. Do I?

Note: comment has been flagged for arriving from an abusive Internet troll

Victor Soliz said,

(Sorry for feeding it) There are miles of difference between making Gnumeric support OOXML and "actually working on OOXML". Please, you are making a straw man here.

SundayRefugee said,

You also don't have anyone employed by, say, Red Hat making the introductory address at an MS OOXML conference as a Linux company representative.

Yuko said,

I find this to be an interesting article on double standards. The article rambles on and on about GNOME, in the spirit of openness, and that is rather hilarious. Here the article is written with a grey font against a white background, intentionally and purposely excluding those of us who are visually impaired. One of the reasons I embrace GNOME is because of their active involvement in making sure that GNOME is highly accessible to people of all types, including those with disabilities. "BoycottNovell" on the other hand ensures exclusion. Isn't that, by definition, a double standard, a double-faced action on their part? I would think so. And that's why those of us who are blind/visually impaired are going to boycott "BoycottNovell" for their childish, immature attitude about wanting everything their way but making sure that only certain people in their elitist club get to read their articles.
Jeff Waugh said,

I welcome your attempt to contact me or the GNOME Foundation Board to clarify your inaccurate and ludicrous accusations. I've said time and time again, Roy, that you should research your articles before you publish them, and AGAIN you haven't done it. This is absolutely ridiculous reporting.

Miguel de Icaza said,

Roy, had I wanted to keep my participation in the comments to the OOXML spec secret, I would not have signed with my name in the page you linked. I commented on that entry as it was raised over private email to me as the smoking gun that OOXML was not actually XML-compliant, as a comment raised by Norway. I was surprised that Norway had found something that nobody else had found out, and that the document would not even validate. That seemed like a serious problem. As it turns out, the Norwegian comment is wrong and there was no smoking gun:

* bstr encoding *is* XML compliant.
* The section addresses how to encode data that can not be represented by XML (it is basically the equivalent of Quoted-Printable in email instead of resorting to base64, as it at least preserves some readability).
* The comment did not originate with Norway; it was actually another case of copy-paste from Rob Weir's blog. I do not know where the comment actually originated, but it is a factually wrong comment. Sloppy comments on the technology.

As to why I posted the clarification, I did it because I felt I had researched the topic more than Norway or Rob Weir did and I offered an explanation. I have no religion for or against OOXML. I believe it's been unfairly portrayed for political gain. And although I agree with the political position, I do not agree with the means (using half-truths, lies, or deception) to achieve the political objectives.

rlilly@yahoo.com said,

Miguel, it really does not need an explanation as to why you did or did not want the comment to be known, and other comments on the same website! The evidence is clear: you support OOXML as a standard, have helped on numerous occasions to help it gain such status, and repeatedly say things against ODF. So although you say "I have no religion for or against OOXML", your actions speak much louder than your words. You are acting in the best interest of your employer MS by way of Novell and damaging the image of GNOME in the process.

moratorum said,

@rlilly: Don't be ridiculous. I've had some patience with posters on this blog, but THIS goes too far. As Miguel has said, the comment is public, the comment is signed; how in the world can you keep a straight face and say he didn't want it to be known? OOXML obviously doesn't need any help from us Linuxers to gain recognition, as M$ controls about 95% of the market. Also, the second paragraph of your post is pure trolling and doesn't deserve an answer. It's a slap in the face of the founder of GNOME. Who are you to speak like this?

Miguel, you have addressed other comments besides the ones you refer to, trying to help find duplicates. Here is just one example, among more: http://www.dis29500.org/no-8/ I also find you here: http://www.dis29500.org/category/countries/page/25/ And here: http://www.dis29500.org/category/countries/colombia/page/11/ As they say in the security world, "you've been around."

moratorum, I can speak any way I wish; I am not bound by anyone! And my take is the sentiment of many….
As for Miguel being the founder of GNOME, he has DONE amazing things in the past and we should thank him for that, but now he works opposite to FOSS, so regardless of that fact, he is now a major detriment. The fact is he is answering comments in favor and/or helping find duplicates, which helps MS. So he is not on our side anymore. PERIOD.

Given that you realise that Miguel's contributions to GNOME are "in the past", would you like to stop associating GNOME with everything he says or thinks? Thanks!

Unless you guys alienate the founder of the project, this will inevitably affect perception. I find it hard to believe that you would distance yourselves like this and I don't believe that you do. Some of the other individuals above show that active GNOME contributors think and feel in the same way as Miguel de Icaza. I'm aware that there's a certain divide inside GNOME, which is why it took you so long to issue a statement (press release) that represents everyone and upsets no-one.

We've distanced ourselves from Miguel's points of view on many occasions… It's not exactly a new thing. Go back and look at the voting record of Foundation members: Each time Miguel has stood for election to the Foundation board, he has received fewer and fewer votes. Of course, he hasn't stood for election for years anyway. Yes, there are a range of views in the GNOME project, a sign that we do not unreasonably demand a monoculture of opinions and views. I don't think this is a bad thing at all. It is absolutely hilarious that you think our ECMA statement "represents everyone and upsets no-one". You're clearly not doing even basic research about GNOME, let alone asking questions of stakeholders. Hilarious.

I'm referring to the scenario where you couldn't just directly say that the "GNOME Foundation does not support OOXML" (or something along those lines), arguing that a developer's perspective might be different from that of all users. I can't recall where I read this (or the exact wording/situation), but I can find out if you wish and then get back to you.

This is by all means understandable, Jeff. We actually have more in common than it seems. Both of us recognise the fact that the patent system, as broken and irreparable as it may be (even in Australia), is becoming an issue that Linux developers and users cannot completely ignore. I am also coming to discover that you are not necessarily supportive of Mono (or maybe you just speak collectively, on behalf of the larger group). Let it be clarified that the reason we ever touch these issues is because:

OOXML is a patent timebomb, it is impossible to implement, and it is a moving target (Microsoft is not committed to its own ECMA standard).

Mono is a patent timebomb, which has already left Linspire and Xandros bare (I haven't yet checked to see how Turbolinux fits into this).

There are several more such issues. Ignoring these dangers is a route to following Microsoft's desires. Microsoft does not play nice with Linux (it only pretends to). It wants Linux subverted to the point of being unattractive and encapsulated within 'legal' distributor/s, which can be squashed like a typical business. Remember those antitrust memos about Microsoft "tilting [opponents] into the death spiral"? How about the "we need to slaughter Novell" exhibit?

I am also coming to discover that you are not necessarily supportive of Mono (or maybe you just speak collectively, on behalf of the larger group).

GOOD LORD! Do your research! ASK QUESTIONS!
The reason you don't know these things is because you haven't done the ABSOLUTE BASICS of research for all of these accusations — you have not even ASKED me. This is completely ridiculous, and you should be absolutely ashamed of your behaviour and disrespect towards GNOME and the FLOSS community.

In the same vein:

This is completely ridiculous, and you should be absolutely ashamed of your behaviour and disrespect towards GNOME and the FLOSS community.

GOOD LORD! Now I need to do research before leaving COMMENTS on the Web, which ARE, by my own admission, speculative? It still appears as though you consider blog posts to be bits that require journal-quality reviews, and now the same goes for blog comments. Many journalists, an increasing majority of whom maintain professional blogs, consider it their workbench. This is where things are discussed and studied. That's why there is room for comments, unlike articles.

On a site like this, with its purported goals, and with the access you have to the FLOSS community? Absolutely. If you shirk that responsibility, you end up with exactly the propaganda, offensive insinuations and potentially libellous content you've written. If your intent is to provide benefit to the community, you should aim higher, and hold yourself to a higher standard.

The reach of this site is not very high (just thousands of visitors per day). Proper articles I publish elsewhere, and they present a balanced view that is rarely, if ever, controversial. That's the type of stuff that reaches a large audience and whose readership can have more faith in the content. Elsewhere, I also pass a lot of news as-is, without interpretation or modification. I think you're nitpicking here. We all have our inclinations (influenced by background, perspective, ambitions, etc.) and we are permitted to express our opinions in public. It's even a constitutional right in the United States. We needn't end up like this (from the news).

Nitpicking? You called into question the integrity of GNOME Foundation directors, including myself, and bizarrely, my company and wife. If you're going to do that (particularly in the UK, where libel laws are quite strong), you should aim higher, and hold yourself to a higher standard. What you're doing at the moment is irresponsible (to FLOSS in general) and nasty.

I only asked questions and I prefer to assume that the answers are "no". I even said "Let's just assume that it's all false." Asking the readers questions is not a case of stating fact or suggesting something is a fact. Also, the main point of the paragraph is not what you're trying to suggest (for your own purposes/favour/convenience). It's intended to say that GNOME receives donations and we're wondering what effect the Novell/Microsoft deal had on these donations (project aspirations/direction aside). Like many others, I am also curious about Miguel de Icaza's goals and the interests of those who fund it all.

So rather than writing vicious insinuations couched as questions, why don't you go to the source and ASK? I have suggested this to you over and over and over again, but you seem to be refusing to do so. By actually getting REAL information from the stakeholders in these issues, your site will be better, more informative and more useful to readers. Your points about Novell will be more readily received if the rest of your writing is accurate and balanced.
You're not making a good case for *anything* with this blog at the moment; you're just creating wedge issues in the community, spreading propaganda, and screeching to the converted.

…you're just creating wedge issues in the community, spreading propaganda…

Heh. You've just described what Microsoft and Novell do.

Then you got my point. Good.

Alan Bell said,

I run dis29500.org and I am pleased for anyone to comment on it, especially if they know what they are talking about and can add the benefit of their expertise to the debate. I don't happen to agree with Miguel de Icaza on the specific bstr issue (I understand that XML didn't meet the requirements of what they wanted to encode, but they should have done something else about it if they want to have their format as a standard), but everyone is entitled to an opinion even if it is wrong :-). http://www.theopenlearningcentre.com

The issue here is not the opinion expressed; it is about a person helping a monopoly which fights his own creation, GNOME. It's a long story.

2234e534e4355t6546 said,

I'd rather say the issue at hand is Roy's dirty, slanderous ego. I guess he gets a kick out of trying to drag the names of people infinitely more talented than himself into the mud. Roy, from now on you're called the RITA SKEETER of open-source. :p

2+4e5ä#4355t654ä6 said,

4350980e904e98t9809 said,

Roy, from now on you're called the RITA SKEETER of open-source. :p

:p

78iuz said,

Dark Phoenix said,

Er… I'm probably not the best source to be quoting, because I'm not directly involved in this situation, though I did link to where I got my info. IMO, it is bad form for anyone in FLOSS to be helping OOXML in any way, regardless of desires to give "full support". I doubt it'll even be the same if it comes out of the BRM as a standard, and even so, it's a moving target controlled by a monopoly with a clear agenda. Microsoft has already extended it (see the VBA macro stuff), so what Office 2007 is creating isn't even REAL OOXML. Of course, it seems to me the sentence at the beginning of the standard noting that behavior is not defined by the standard in any way (a serious lie) was slipped in so that Microsoft could still claim to be using "the OOXML standard" while not really using it. Embrace and extend at its finest.

Great said,

Jeff Waugh is doing whitewash and avoiding the core issues here. Miguel de Icaza and Jeff Waugh need to get their hands off GNOME. They MUST resign from the GNOME Community. They are just harming the community for their own selfish needs. NOVELL needs to keep their dirty hands off Linux and GNOME. Take your MONO shit and leave. They have ***ked SuSE and now they are after GNOME. GNOME is the work of a lot of developers and other people, and Jeff and Icaza have no right to destroy it. What they are doing is just unthinkable. I personally call for a boycott of NOVELL employees/Jeff Waugh/Miguel de Icaza from the GNOME community. The Judas Iscariots of free software. They don't want to be remembered as the traitors of free software.

Lukas said,

If GNOME developers don't want Icaza or Waugh, why don't they get rid of them? Seems to me like GNOME is fine with the current direction, else they would not have re-elected Waugh (Miguel de Icaza hasn't run for the board in years).

Lukas, I've noticed a pattern in your comments. What are your affiliations? You seem to stubbornly defend Mono, Moonlight and Miguel de Icaza when any of these get mentioned. I asked you this before, but I think you did not answer.
I am just curious and I think it would be fair to have disclosures, at least of real identity.
Q: Using different version of ODAC to connect to Oracle

Currently I'm working on a project where we just recently upgraded our version of Oracle from 11g to 12c on our testing server, as well as on my local development setup. After the upgrade on my local setup, I upgraded my ODAC (primarily for Oracle.DataAccess.dll) to use the 12c 32-bit version. It took a bit of trial and error, along with uninstalling and reinstalling the ODAC, before I was at last able to reconnect with my DB.

We are now looking at updating the testing server's ODAC. I performed the installation yesterday and, like with my local setup, I was unable to connect with the DB. I have placed the tnsnames.ora file into its proper place, and this file is just a copy of the original. I have used a test program to find out if I can even open a connection, which I can using the new Oracle.DataAccess.dll. However, when I try to import that over to the main site, we cannot connect. This happens even after I delete the original reference and add the new one pointing to the location of the new Oracle.DataAccess.dll.

A coworker of mine mentioned that it doesn't matter which version of the ODAC we have (11g or 12c); the 11g version should still be able to connect to the 12c DB with no issue. I kind of question this since he has made such statements in the past on other issues, along with the fact that he agreed we needed to upgrade the ODAC initially. Is this a true statement? If not, are there any steps I can take to resolve the issue short of uninstalling and reinstalling again? It shouldn't take that kind of effort, lol.

UPDATE: I have confirmed that my coworker was correct: using the 11g ODAC against a 12c Oracle DB does work. However, lol, he uninstalled it before I could talk with him further about it, so yeah... I am still finding a similar issue even after removing 11g's ODAC. My coworker also cleaned up the Registry entries that had some 11g references, but still, we are having issues.

UPDATE (8/8/2017): The issue has at long last been figured out. The 12c ODAC I had installed was not compatible with the version of Visual Studio I was using on the server (VS 2012). After looking through the requirements for the ODAC, I found that out and rolled my eyes upon that realization. We uninstalled the 12c ODAC and reinstalled the 11g ODAC. Everything works normally now. I have asked that VS be upgraded, just so in the future we do have a compatible version to work with. Yeah, that is unlikely to happen though, lol...

A: The answer to the issue is compatibility between VS 2012 and the 12c ODAC. I found out after rereading the specs for the 12c ODAC that the version of Visual Studio on our server is not covered. We have VS 2012, which the ODAC doesn't go back to (at least for our release version of the ODAC), hence the issue. After finding this, we went back to the 11g ODAC, which still works with a 12c DB.
# Alkaline versus rechargeable batteries

This is a review of the available literature pertinent to a life cycle analysis (LCA) of standard and rechargeable closed-cell batteries. This category of battery includes the consumer-familiar AA, AAA, C, D, and 9V battery types. Although the scope of this project is not to perform an LCA from scratch, the hope is that the available literature, and additions from readers like you, will complete the content of this page. The sections of this wiki include: (1) basics of battery technology, (2) the distinction between primary and secondary batteries, (3) specifics of three different battery types that are a representative sample, and (4) an LCA review including aspects of manufacturing, distribution, use and disposal.

### Battery Basics

There are two main types of batteries: dry cells and wet cells. Dry-cell batteries are much more common in consumer use; this review of the life cycle therefore focuses on dry-cell batteries. The three primary parts of a traditional zinc-carbon dry cell are the electrolytic paste, the positive electrode and the negative outer canister. The positive electrode is a conductive carbon rod. The electrolytic paste separates the anode (negatively charged side) and the cathode (positively charged side). The free electrons made available by the chemical reaction between the paste and the canister collect at the anode but are attracted to the cathode. However, the construction of the battery prevents the electrons from reaching the cathode unless there is an external path. This results in a current when an external circuit is made, much like the current in a river when water is allowed to flow downhill due to elevation potential. The electric potential formed by closing the circuit allows electrons to flow from the anode to the cathode; the current can then be used to deliver energy to an appliance, such as the bulb filament of a flashlight. The difference in potential between the anode and the cathode is the voltage of the cell; voltage affects the flow, or current, of electrons through the circuit.[1] The term battery correctly refers to a collection of cells, though a battery may refer to a single cell when it is clear from context. Cells can be arranged electrically in series to multiply the terminal voltage or in parallel to increase the available current. A single cell will have an open-circuit voltage that depends on the type of cell: traditional zinc-carbon and alkaline dry cells have a potential of nominally 1.5 V, lithium cells may have voltages around 3 to 4 V, and a normal lead-acid cell is near 2 V.

### Primary vs. Secondary

The types of dry-cell batteries considered in this LCA can be further divided into primary (single-use) and secondary (rechargeable) types. The difference is the ability of a secondary battery to be recharged after initial use. Primary cells are manufactured at full charge and may not be recharged, due to the irreversible nature of the electrochemical reaction used in this type of battery. Secondary batteries are not always manufactured at full charge and can be recharged by the user a number of times. While the reversible nature of secondary batteries allows them to be recharged, the composition of their active chemicals degrades with use, resulting in a finite number of recharge cycles.
Secondary cells will accept charge for many cycles, but because their storage capacity decreases with each cycle, the number of useful cycles is limited.[2]

## Types of batteries

This section covers some of the more common types of primary and secondary batteries. The LCA will only cover these types. They were chosen because they are among the most commonly used and are for the most part interchangeable in their applications.[3]

### Type 1: Alkaline Batteries

Alkaline batteries use carbon and manganese dioxide (MnO2) as the cathode and powdered zinc as the anode. A steel case and a brass strip connect the cathode and anode, respectively, to the battery's terminals. Some alkaline batteries may contain mercury to increase conductivity. An alkaline paste of potassium hydroxide (KOH) is used as the electrolyte.[4]

### Type 2: Zinc-Carbon Batteries

Zinc-carbon batteries use carbon and manganese dioxide (MnO2) as the cathode and zinc as the anode. A brass strip connects the cathode to the positive terminal and the solid zinc anode is connected directly to the negative terminal. Zinc chloride (ZnCl2) is used as the electrolyte. Older zinc-carbon batteries may contain small amounts of lead or cadmium in the anode.[5]

### Type 3: Nickel-Cadmium Batteries

Nickel-cadmium batteries use a cadmium anode and a nickel hydroxide cathode. The electrolyte used is a mixture of potassium hydroxide (KOH) and lithium hydroxide (LiOH).[6]

## Life Cycle Analysis Results/methods

A life cycle assessment or analysis is important for equally comparing the impacts of two similar products or services. In this case, the comparison between single-use (primary) batteries, such as alkaline, and rechargeable (secondary) batteries, such as nickel-cadmium (Ni-Cd), nickel-metal hydride (Ni-MH) or lithium-ion (Li-ion), is studied from the available literature. This LCA examines only consumer batteries common in toys and small electronic devices, such as 'AA' and 'C' batteries. Some considerations in all stages of the life of a battery include:[7]

• PED (Primary Energy Demand): total amount of primary energy extracted from the earth (in MJ)
• GWP (Global Warming Potential): contribution to the global warming of the atmosphere by the release of specific gases (in kg CO2 equ.)
• ODP (Ozone layer Depletion Potential): contribution to the depletion of the stratospheric ozone by the release of specific gases (in kg R11 equ.)
• AP (Acidification Potential): acidification by gases released to the atmosphere (in kg SO2 equ.)
• EP (Eutrophication Potential): water enrichment in nutritive elements by the release of specific substances in the effluents (in kg PO4^3- equ.)
• WD (Water Depletion): consumption of water (in kg H2O)

For the most effective and comparable analysis, each comparison must be normalized to compare "apples to apples." Energy is a good metric to normalize by. For instance, megajoules (MJ) can represent the energy required to manufacture a given weight of batteries, the energy required in diesel combustion to transport a given weight, or the energy required to recycle a given weight of batteries. For this life cycle it is easiest to normalize everything on a gram per megajoule (g/MJ) basis.

### Manufacturing

In addition to the materials a battery is made of, secondary or indirect material and energy is also consumed in the production of batteries. Manufacturing equipment must be run, maintained and eventually replaced, all of which incur material costs outside of the materials that end up in the battery.
These materials can be diverse in the case of battery manufacturing. For the comparison of the selected types of primary and secondary batteries, a comparison is made between $100 million spent in the manufacturing of primary and of secondary storage devices (Table 1).

Table 1: Comparison of inputs for $100 million worth of battery manufacturing [8]

| Input | Primary storage | Secondary storage | Secondary (per primary-equivalent use) |
|---|---|---|---|
| Value of product ($M) | 100 | 100 | 2 |
| Fertilizers ($): nitrogenous | 65 | 95 | 1.9 |
| ammonium nitrate | 2,200 | 2,400 | 48 |
| ammonium sulfate | 4,100 | 5,600 | 110 |
| organic fertilizers | 20,000 | 24,000 | 478 |
| phosphatic fertilizers | 14 | 7 | 0.14 |
| super phosphates | 2,600 | 2,200 | 43 |
| Fuels: bituminous coal (t) | 26,000 | 23,000 | 466 |
| natural gas (t) | 7,000 | 7,600 | 152 |
| liquified natural gas (t) | 500 | 540 | 11 |
| liquified petroleum gas (t) | 800 | 670 | 13 |
| motor gasoline (t) | 990 | 1,700 | 34 |
| kerosene (kg) | 880 | 870 | 17 |
| aviation and jet fuel (t) | 450 | 430 | 8.6 |
| light and heavy fuel oil (t) | 3,100 | 4,100 | 82 |
| Ores (t): iron | 11,000 | 2,800 | 56 |
| copper | 19,000 | 68,000 | 1,400 |
| bauxite | 840 | 1,100 | 22 |
| gold | 16,000 | 66,000 | 1,300 |
| silver | 340 | 800 | 16 |
| Ores ($): ferroalloy | 21,000 | 9,400 | 190 |
| lead and zinc | 93,000 | 758,000 | 15,000 |
| uranium and vanadium | 8,900 | 57,000 | 1,100 |
| Water use (10^6 L): intake | 3,100 | 1,900 | 38 |
| recycled and reused | 5,100 | 2,900 | 58 |
| discharged untreated | 1,400 | 870 | 17 |
| Electricity (10^6 kWh) | 63 | 96 | 1.9 |
| Fuel conversion (TJ) | 1,400 | 1,400 | 28 |

The assumption was made in the creation of Table 1 that secondary batteries cost four times as much as primary batteries. For this direct comparison, primary batteries use fewer resources to manufacture than secondary ones in most categories. The table also makes a comparison taking into account the ability of secondary batteries to be used multiple times. The assumption made is that a secondary battery can replace a primary battery for 200 cycles.[9] Under this assumption, secondary batteries have a much larger advantage in manufacturing resources. This advantage is entirely dependent on the proper use of secondary batteries. Another consideration in the production of batteries is that limits in manufacturing and recycling technologies often do not allow recycled materials to be used. This need for pure resources often requires the extraction of new resources.[10]
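To make the per-use column concrete: under the stated assumptions (a 4× cost factor and 200 replaced cycles), the third column appears to be the secondary figure rescaled by $4/200 = 1/50$, up to rounding in the source data. For example, for nitrogenous fertilizers:

$$95 \times \frac{4}{200} = 1.9,$$

which matches the tabulated per-use value.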
### Distribution/transportation

The impacts from the distribution and transportation of both new and recycled batteries are primarily a function of weight. The truck hauling the cargo is limited by weight. Heavy trucks can carry more than medium trucks but use more fuel to go the same distance.

Since both single-use (primary) and rechargeable (secondary) batteries are comparable in size, the number of batteries you can fit in a container or on a truck is comparable. The variable is weight, which directly drives the fuel consumed by combustion in the engine of the truck carrying the cargo. The comparison of weights between Ni-Cd and Ni-MH is shown in Table 2.

Table 2: Comparison in weight between nickel-cadmium and nickel-metal hydride [11]

| Battery type | Diameter (mm) | Length (mm) | NiMH weight (g) | NiCd weight (g) |
|---|---|---|---|---|
| 1/3 A | 17 | 21 | 15 | 10 |
| 2/3 A | 17 | 28.5 | 20-23 | 18-20 |
| 4/5 A | 17 | 43 | 32-35 | 26-31 |
| A | 17 | 50 | 40 | 32 |
| 1/3 AA | 14.2 | 17.5 | 7 | 6.5 |
| 2/3 AA | 14.2 | 28.7 | 13-16 | 13-15 |
| 4/3 AA | 14.2 | 65.2 | 30 | 30 |
| 4/5 AA | 14.2 | 43 | 22 | 20 |
| AA | 14.2 | 50 | 27 | 21 |
| 1/3 AAA | 10.5 | 20.5 | 5.5 | 5.5 |
| 1/4 AAA | 10.5 | 14 | 2.5-4 | 2.5-3.5 |
| 2/3 AAA | 10.5 | 30 | 8-9 | 6-8 |
| 4/3 AAA | 10.5 | 67 | 18 | 17 |
| 5/3 AAA | 10.5 | 67 | 19 | 19 |
| 5/4 AAA | 10.5 | 50 | 15 | 14 |
| 2/3 C | 26 | 31 | 50 | 45 |
| C | 26 | 46 | 80 | 72 |
| 2/3 SC (SC = Sub C) | 23 | 28 | 28 | 25 |
| 4/3 SC | 23 | 50 | 66 | 60 |
| 4/5 SC | 23 | 34 | 42 | 38 |
| SC | 23 | 43 | 55 | 52 |
| 1/2 D | 33 | 37 | 81 | 81-84 |
| 4/3 D | 33 | 89 | 175 | 140-190 |
| D | 33 | 58 | 105-160 | 105-145 |

The average weight across all sizes of Ni-Cd is 40.15 grams and the average capacity per weight is 31.21 mAh/g. For Ni-MH, the averages are 42.60 grams and 48.36 mAh/g, respectively.[12] Since Ni-MH is heavier but delivers more current per weight during use, a standard that is appropriate for normalizing the transportation of batteries is mAh per gram per MJ, where MJ is the energy in the diesel used to transport the batteries. For a medium truck from Los Angeles to Arcata (~800 km) that consumes 6.8 MJ per metric tonne per kilometer (MJ/t-km),[13] a tonne (1000 kg) of Ni-Cd batteries requires an average of 1.7×10^-4 MJ/mAh, or 0.17 kJ/mAh, while Ni-MH requires 1.12×10^-4 MJ/mAh.

Therefore, Ni-MH is the better option, since less energy is required to move it the same distance. The inverse of this says that for Ni-MH, 8.89 Ah can be transported per MJ of energy from diesel combustion. Now you try this same comparison between Ni-MH and alkaline (primary) batteries.
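Worked out explicitly from the figures above, the Ni-Cd case (a template for the alkaline exercise) is:

$$E_{\text{trip}} = 800\ \text{km} \times 6.8\ \text{MJ/t-km} = 5440\ \text{MJ per tonne},$$

$$C_{\text{Ni-Cd}} = 31.21\ \text{mAh/g} \times 10^6\ \text{g/t} = 3.12\times10^7\ \text{mAh per tonne},$$

$$\frac{5440\ \text{MJ}}{3.12\times10^7\ \text{mAh}} \approx 1.7\times10^{-4}\ \text{MJ/mAh} = 0.17\ \text{kJ/mAh}.$$

The same steps with the Ni-MH figure of 48.36 mAh/g give 1.12×10^-4 MJ/mAh, whose inverse is the 8.89 Ah/MJ quoted above.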
### Use

The use of batteries does not have a very large impact as long as they are treated properly. Many battery manufacturers warn that "most of the problems with rechargeable batteries can be traced to misuse".[14][15] A misuse that results in a leak most commonly occurs from: (1) overuse, i.e. being forced to overdraw the energy storage capacity, (2) being heated above the threshold of the battery, which is common during overcharging, or (3) corrosion of the shell from water vapor in the air over long time periods. Another contribution to the life cycle of rechargeable batteries that is worth noting is the charger and its components (Table 3).

Table 3: Components of a battery charger.[16]

| Material/Assembly | Amount (g) | Item |
|---|---|---|
| Polypropylene | 250 | case |
| Copper | 25 | power cord conductor |
| PVC | 7 | power cord insulation |
| Polypropylene | 38 | power plug |
| Steel | 5 | screws |
| Spring steel | 5 | springs |
| Soft steel | 8 | contacts |
| 'Magnetic' iron | 150 | transformer core |
| Steel | 20 | transformer frame |
| Copper | 150 | transformer windings |
| Cardboard | 10 | transformer insulation |
| Copper | 3 | internal cables |
| Printed circuit board | 20 | printed circuit board |
| Total weight | 691 | |

### Disposal

Battery disposal options include landfill, stabilization, incineration, and recycling.[17] Batteries are often sent to a landfill with other municipal solid waste; this is often the disposal method for primary batteries. Landfill disposal presents a risk of groundwater contamination from leachate emanating from batteries containing heavy metals.

Stabilization of batteries involves chemical treatment to prevent heavy metal release to the environment.[18] This type of treatment and disposal is expensive and is declining in use as current battery technology tends to avoid heavy metal use.

Battery incineration typically occurs when batteries are mixed with municipal solid waste. Incineration of batteries has been shown to produce emissions of mercury, cadmium, lead, and dioxins.[19] The incinerator stack gas typically contains the majority of the mercury that is emitted, while the fly ash left from incineration is where the cadmium and lead typically concentrate.

Recycling of batteries is typically accomplished using hydrometallurgical or pyrometallurgical processes (discussed in the next section).

The impact of metals from battery disposal upon the environment is still being studied. Cultural practices dictate how batteries are disposed of, regardless of available recycling programs. As recycling of batteries increases, fewer batteries containing heavy metals will be disposed of prior to treatment.

### Recycling

Recycling of batteries has been shown to be environmentally beneficial.[20] The benefits are especially high when material recovery technologies such as those used in the steel industry are used for battery recycling. However, impacts on the environment from collection can outweigh the recycling benefits. The negative impacts of battery recycling are mainly associated with the transportation of the used batteries. Integrating battery collection with other waste, separating them later, can help make recycling a net benefit for the environment. Examples of this type of integration are:[21]

• Battery collection with paper recycling
• Batteries in waste which are separated with magnets
• Collection of batteries with electrical appliances

Alternatively, battery programs are often designed to collect all types of household batteries. Regardless of the collection method, the batteries are later separated into the different types to allow for more efficient recycling. High-speed battery sorting machines based on magnetic fields and the response frequencies of the batteries have been developed and put into use in the Netherlands.[22] In Germany, battery separation by photo recognition and x-ray has been used.[23]

Once the batteries are separated, there are many technologies used to recycle the materials. Electric arc furnaces used in steel production and rotary furnaces in zinc production can be used to recycle the batteries. The products are metallic alloys, compounds, or solutions containing metal ions. Nickel-metal hydride (NiMH) batteries recycled in this manner produce a high-nickel material which can be used as an alloying component in stainless steel production. Alternatively, processes have been developed, per battery type, for recycling battery materials into specific products.[24]

Despite the benefits of battery recycling, full diversion of batteries from standard disposal faces public resistance.
In the Netherlands and Belgium, 80-90% of the population was shown to know about their country's battery collection systems, but only 30-50% used the systems.[25] The resistance to recycling stems from people's habits and the persistence of those habits through generations.

Some countries have considered legislation to help speed the development of battery recycling programs. In the United Kingdom, a study was performed to evaluate battery waste management. Bans on nickel-cadmium batteries, battery collection targets, and battery recycling targets were investigated.[26] The study further showed the net environmental benefits of battery recycling.

## Conclusion

Based on the research from the appropriate literature, secondary (rechargeable) batteries do not have significantly less impact than primary (alkaline) batteries. This is largely because secondary batteries do not have an infinite number of recharge cycles and because rechargeable batteries also require an electronic charger, which is included in the life cycle. The manufacturing of each battery type is comparable.

### Recommendations

Rather than putting effort into research and development of secondary battery technologies, it is recommended by this team to develop recycling abilities for primary batteries. This would include an educational campaign to ensure more consumers send their used batteries to an appropriate facility, not the landfill. This team's personal recommendation is to remember to recycle even small batteries. Even one battery may seem insignificant, but the cumulative effect is large. SINCE RECYCLING IS POSSIBLE, DO YOUR PART, SEND SPENT BATTERIES TO RECYCLING FACILITIES EVEN THOUGH THEY ARE SMALL AND SEEMINGLY INSIGNIFICANT! >:-O

## References

1. Progressive Dynamics
2. A.M. Bernardes et al. (2004)
3. A.M. Bernardes et al. (2004)
4. A.M. Bernardes et al. (2004)
5. A.M. Bernardes et al. (2004)
6. A.M. Bernardes et al. (2004)
7. McDowell, J. and Siret, C.
8. Lankey, R.L. and McMichael, F.C. (2000)
9. Lankey, R.L. and McMichael, F.C. (2000)
10. McDowell, J. and Siret, C.
11. Batteries Wholesale (2005)
12. Batteries Wholesale (2005)
13. Gleick, P.H. and Cooley, H.S. (2009)
14. Batteries Wholesale (2005)
15. Lankey, R.L. and McMichael, F.C. (2000)
16. Parson, David (2006) "The Environmental Impact of Disposable Versus Re-Chargeable Batteries for Consumer Use" LCA Case Studies
17. A.M. Bernardes et al. (2004)
18. A.M. Bernardes et al. (2004)
19. A.M. Bernardes et al. (2004)
20. A.M. Bernardes et al. (2004)
21. A.M. Bernardes et al. (2004)
22. A.M. Bernardes et al. (2004)
23. A.M. Bernardes et al. (2004)
24. A.M. Bernardes et al. (2004)
25. A.M. Bernardes et al. (2004)
26. K. Fisher et al. (2006)

• A.M. Bernardes, D.C.R. Espinosa, J.A.S. Tenório (2004) "Recycling of batteries: a review of current processes and technologies" Journal of Power Sources, No. 130, pp. 291-298.
• Batteries Wholesale (2005) "Capacity vs. Weight", accessed 3/17/10 from www.batterieswholesale.com/capacity_weight.htm
• Batteries Wholesale (2005) "Damaging Batteries", accessed 3/17/10 from www.batterieswholesale.com/damaging_batteries.htm
• K. Fisher, M. Collins, P. Laenen, E. Wallén, P. Garrett, S. Aumônier (2006) "Battery Waste Management Life Cycle Assessment" Environmental Resources Management (ERM) Ltd.
• Gleick, P.H. and Cooley, H.S. (2009) "Energy Implications of Bottled Water" Environ. Res. Lett. 4 014009 (6pp)
• Lankey, R.L. and McMichael, F.C. (2000) "Life-Cycle Methods for Comparing Primary and Rechargeable Batteries," Environ. Sci. Technol., No. 34, pp. 2299-2304
• McDowell, J. and Siret, C. (date unknown) "Energy-Saving Batteries – Green or Greenwash?"
• Progressive Dynamics (date unknown) "Battery Basics," accessed 3/1/10 from www.progressivedyn.com/battery_basics.html

## Notes (for editors)

Brett - types of batteries and manufacturing
James - use and distribution
Ryan - recycling and disposal
Q: Updating Angular component field from Promise.catch() does not update UI

I'm fairly new to Angular, so maybe I'm missing something important here, but the code I am using can be found in a lot of repositories and I'm asking myself what I am doing wrong. So I have this login component where I want to use Firebase OAuth for signing in. The OAuth part is working fine: I can sign in with Facebook, Google or by email. The problem comes into play when there is already a user with this or that email. Here is my Facebook login code:

    onLoginFacebook() {
      this.afAuth.auth.signInWithPopup(new firebase.auth.FacebookAuthProvider())
        .catch(error => {
          this.error = error.message;
        });
    }

When there is already a user with a given email, trying to sign in will cause an error with the message "An account already exists with the same email address but different sign-in credentials. Sign in using a provider associated with this email address."

As you see, I simply assign this error message to a variable of my component, which is declared as:

    public error: string;

In my component's template, I'd like to use a span to display the error message (for now, that is good enough for me since I am just starting and plan to implement proper error handling later down the road):

    <span class="error" *ngIf="error">{{ error }}</span>

And here is the problem: after setting the error message in my component, nothing happens. However, if I click on any button or input, the message suddenly appears. I thought that Angular was automatically taking care of updating the UI once I set the variable, and if I set the variable outside of the catch() function, e.g. via setTimeout(() => this.error = "test", 2000), it works as I would expect. So what am I missing or doing wrong? :(

A: This is a bug in Angular. Angular uses zones in order to implement change detection. Zones basically track async contexts. The way Angular implements this polyfill is by patching every async API the platform has. For example, here is the code for Promise. If we look at the code more closely, we can tell that if window.Promise is set, .then is overridden but .catch is not. Your code probably has an assignment to Promise somewhere, and the code that is supposed to handle that looks like this:

    if (NativePromise) {
      patchThen(NativePromise);
      // ...

While ZoneAwarePromise does define a .catch, we can see that patchThen only patches .then and does not override .catch. I've opened an issue for zone.js.
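Until that is fixed upstream, a common workaround is to re-enter Angular's zone explicitly when assigning the field. Below is a minimal sketch using NgZone; the component selector, the inline template, and the angularfire2-era import paths are illustrative assumptions rather than taken from the question:

    import { Component, NgZone } from '@angular/core';
    import { AngularFireAuth } from 'angularfire2/auth';
    import * as firebase from 'firebase/app';

    @Component({
      selector: 'app-login',
      template: `<span class="error" *ngIf="error">{{ error }}</span>`
    })
    export class LoginComponent {
      public error: string;

      constructor(private afAuth: AngularFireAuth, private zone: NgZone) {}

      onLoginFacebook() {
        this.afAuth.auth.signInWithPopup(new firebase.auth.FacebookAuthProvider())
          .catch(error => {
            // Re-enter Angular's zone so change detection sees the assignment.
            this.zone.run(() => this.error = error.message);
          });
      }
    }

Injecting ChangeDetectorRef and calling detectChanges() after the assignment achieves the same effect.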
{ "redpajama_set_name": "RedPajamaStackExchange" }
733
module Debugger
  module RubyCoreSource
    VERSION = '1.2.3'
  end
end
{ "redpajama_set_name": "RedPajamaGithub" }
388
Q: Task with a local extremum

I don't know where and how to start calculating in the following task: Determine the values of the parameters $a,b$ so that the function $f(x) = x^3 - 2ax + b$ has a local extremum $y=5$ at $x=1$. The solution is: $a=3/2$, $b=7$. I need to solve quite a few tasks similar to this one, and I would really appreciate it if somebody could tell me how to solve it, so I could solve the other similar ones. Thank you in advance!

A: Hint: Since $x = 1$ gives $y = 5$, we see that $$1 - 2a + b = 5 .$$ Now use the fact that, at a local extreme point, the derivative $f'(x)$ is zero.

A: Think about how you find local extrema of a function $f(x)$: you find $f\,'(x)$, set it to $0$, and solve to find critical points, which you then examine to see what kinds they are. Do the same here: $f(x)=x^3-2ax+b$, so $f\,'(x)=3x^2-2a$. Setting that to $0$ and solving, you should find that $$x=\pm\sqrt{\frac{2a}3}\;.\tag{1}$$ You want $y=f(x)$ to have a local extremum at $x=1$, so set $x$ to $1$ and solve $(1)$ for $a$. Once you've done that, substitute $x=1$ into $f(x)$ and see what value of $b$ will make $f(1)=5$; you'll know $a$ at that point, so $b$ will be the only unknown, and you will be able to solve for it.
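Carrying the hints through to the stated answer: from $f'(1) = 0$,
$$3 - 2a = 0 \implies a = \frac{3}{2},$$
and then, from the first hint,
$$f(1) = 1 - 2a + b = 5 \implies b = 4 + 2a = 7.$$
As a check, $f''(x) = 6x$, so $f''(1) = 6 > 0$ and $x=1$ is indeed a local extremum (a minimum).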
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,187
{"url":"http:\/\/www.daniel-wysocki.info\/ASTP-601-602\/journal_club\/2015\/08\/26\/long-term-optical-flux-and-colour-variability-in-quasars.html","text":"## Long-term optical flux and colour variability in quasars\n\nDaniel Wysocki \u2022 \u2022 journal_club\n\n# Paper\n\n\u201cLong-term optical flux and colour variability in quasars\u201d by Sukanya et al.\n\nhttp:\/\/arxiv.org\/pdf\/1508.05560.pdf\n\n# Summary\n\nUsed quasar observations from MACHO (which was searching for dark matter using \u201cmassive compact halo objects\u201d), as it was able to obtain light curves spanning 7.5 years.\n\nF_var characterizes the flux variability of a source, and Figure 1 shows that the V-band exceeds that of the R-band in all but a few cases.\n\nNo lag between V- and R-band flux variations were detected using this dataset, which is likely due to how close the V- and R-bands are, relative to the spacing of observations.\n\nEquation 5 states that the variation in color for a given quasar in the V- and R-bands follows a linear relation, with a slope S_{VR}). Depending on the fitting method used, the conclusions drawn may be slightly different. Using OLS (naive), one concludes that in all cases, quasars are bluer when brighter. Using more robust methods (MCMC, weighted OLS, and BCES) shows that there are in fact some cases where the opposite is true. Figure 2 shows histograms of the 4 methods\u2019 results.","date":"2019-02-17 05:17:17","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8169763088226318, \"perplexity\": 2849.356509714382}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-09\/segments\/1550247481624.10\/warc\/CC-MAIN-20190217051250-20190217073250-00552.warc.gz\"}"}
Q: C# COM port communication problem

I have C# code which communicates with three different COM ports. The COM ports are actually three serial-port-to-USB converters. Each time, the code switches the devices it is communicating with off and on, then initializes the three COM ports, tries to send and read data, and then closes the COM ports. This continues for a pre-defined number of loops. My problem is that after about 8 or 9 iterations, the COM port communication stops working. Sometimes it throws an error saying the port is closed; sometimes it does not throw any exception but it is actually not reading or writing anything from the COM port. At some point it was only writing but not reading back the data. What might be the reason, and any tips to debug this problem?

EDIT: The port abruptly closes or stops working even in the middle of the program, as shown below:

SerialPort.Write("ss");
SerialPort.Read("ss"); // FAILS!!

Some part of the code I am using:

public string Read(string readCommand)
{
    string str = "";
    _port.WriteLine("\r");
    _port.WriteLine(readCommand + "\r");
    Thread.Sleep(0x3e8);
    str = _port.ReadExisting();
    return str;
}

public void Write(string command)
{
    _port.WriteLine(command + "\r");
    Thread.Sleep(100);
    if (_port.ReadExisting() == string.Empty)
    {
        throw new IOException("Error writing to COM");
    }
}

public void Initialize()
{
    if (_port == null)
    {
        _port = new SerialPort(this.PortName.ToString(), this.BaudRate, this.Parity, this.DataBits, this.StopBits);
        _port.Handshake = this.Handshake;
    }
    try
    {
        if (!_port.IsOpen)
        {
            _port.Open();
            if (Read("") == string.Empty)
            {
                throw new IOException("Device not connected or powered on");
            }
        }
    }
    catch (Exception)
    {
        this.Close();
    }
}

Thanks...

A:

_port.WriteLine(command + "\r");
Thread.Sleep(100);
if (_port.ReadExisting() == string.Empty)
{
    throw new IOException("Error writing to COM");
}

That's evil code and bound to throw sooner or later. Windows cannot provide a service guarantee like that. Nor, for that matter, can the device itself, especially when you power it on and off. Use SerialPort.ReadTimeout, set it to at least 2 seconds, and make a blocking call, like ReadLine().

catch (Exception)
{
    this.Close();
}

That tops the previous snippet. You have no idea what's going wrong when that runs, and your code will then try to use a closed port. Just delete the statements; they do nothing but harm. Do not close the ports until your program ends. SerialPort uses a background thread to watch for events on the port; that thread needs to shut down after the Close() call before you can open the port again. How long it takes to shut down is unpredictable; it could be seconds in the worst case. There's no point in closing the port, it isn't going to be useful to anything else.

A: You need to use SetCommTimeouts (not sure what the .NET wrapper is, I gave up on the .NET serial classes long ago and call the Win32 API directly) to force the USB/serial converter to send the data back to your program. By default it may try to collect a block equal in size to a USB transfer block, for efficiency.

A: It's tough to tell exactly what the problem might be without seeing some of the code. My guess would be that you are not waiting long enough for the COM port to close after reopening it. Note from the SerialPort.Close page that:

The best practice for any application is to wait for some amount of time after calling the Close method before attempting to call the Open method, as the port may not be closed instantly.

Can you just open the COM ports and leave them open until you are done?
For example, from this post:

using (SerialPort serialPort = new SerialPort("COM1", 9600))
{
    serialPort.Open();
    while (true)
    {
        Thread.Sleep(1000);
        // serialPort.Write();
        Thread.Sleep(1000);
        // serialPort.Read();
        // break at some point to end
    }
    serialPort.Close();
}
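Building on the ReadTimeout advice from the first answer, here is a minimal sketch of a blocking read with a timeout instead of Sleep-then-ReadExisting. The port name, baud rate and the "ss" command are placeholders taken from the question, and the carriage-return terminator is an assumption based on the Read/Write helpers above:

using System;
using System.IO.Ports;

class SerialReadWithTimeout
{
    static void Main()
    {
        using (var port = new SerialPort("COM1", 9600))
        {
            port.NewLine = "\r";       // device appears to terminate replies with CR
            port.ReadTimeout = 2000;   // block for at most 2 s instead of sleeping
            port.Open();

            port.WriteLine("ss");      // placeholder command from the question
            try
            {
                string reply = port.ReadLine();  // blocks until CR or timeout
                Console.WriteLine(reply);
            }
            catch (TimeoutException)
            {
                Console.WriteLine("Device did not answer within 2 s");
            }
        } // the using block disposes (and closes) the port; no explicit Close() needed
    }
}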
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,090
Dad rescues kids from burning home

MOUNT DORA, Fla. (WOFL FOX 35) - A father raced into a burning home to save his six children, braving the flames and busting through glass to reach his babies. Authorities said Brad Russell, of Mount Dora, Florida, ran into the house for his six-year-old daughter. He tried to carry her out as the room filled with flames. He suffered some serious burns before he ran back into the home a second time to set his dog free from his crate and to gather the rest of the children. It remains unclear what started the blaze, but Russell said he had no warning. None of the fire detectors went off, he added. "We are still alive," said Russell. "I am not a hero, I am their dad." The family is still looking for a place to live. Brad and his wife, Amber, have a Go Fund Me page. The fire remains under investigation.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,034
Zimmer Kunz, PLLC is pleased to announce that Dara DeCourcy, George Stewart, Joseph Selep and Joseph Butcher have again been selected as Pennsylvania Super Lawyers®. This is the 10th straight year that Mr. Stewart and Mr. Selep have been recognized as top defense civil litigation attorneys. Ms. DeCourcy is again being recognized for her expertise in insurance coverage issues. Mr. Butcher is being recognized in the general litigation section. The Super Lawyer designation entails a peer review nomination process, supplemented by third party research to establish high ethical standards and accomplishments of the nominees, and a peer evaluation by practice area. According to the Pennsylvania Super Lawyers website, only the top five percent in the state make the list.
{ "redpajama_set_name": "RedPajamaC4" }
3,392
Our favorite event of the year is right around the corner. The Food Network & Cooking Channel South Beach Wine and Food Festival is happening February 24th thru the 28th. We consider it the Super Bowl of everything food. One of the best parts of this event is that it happens in our own backyard, MIAMI! The South Beach Wine and Food Festival has over 80 events happening over five days. This year the event is expanding into Ft. Lauderdale. SoBeWFF is introducing the Taste Fort Lauderdale Series. Here are some of our favorite events happening for the festival. You don't want to miss any of this. This is your chance to get closer to your favorite chefs from all over. We will be all over the beach all weekend and will bring you all the action live on social media. Follow us on Twitter, Facebook, and Instagram. Some events are sold out, but you can still score some great tickets to some great events. Buy your tickets to the 2016 SoBeWFF here. We hope to see you all on the beach for the South Beach Wine and Food Festival.
{ "redpajama_set_name": "RedPajamaC4" }
8,045
The Hagerty Silver Duster is a two-piece polishing cloth developed to easily remove light tarnish from silver, and a convenient method for dusting your display pieces regularly. The Hagerty Silver Duster contains R-22, the world's longest-lasting, patented tarnish preventive agent. This dusting cloth takes the work out of removing light tarnish from display pieces and last-minute touch-ups: the inner cloth polishes your silver, maintains the patina, and imparts a tarnish-proof barrier that preserves its brilliant glow for months, while the outer cloth buffs the silver to a brilliant shine.
{ "redpajama_set_name": "RedPajamaC4" }
7,354
\section{Experiments on Real-world Datasets with noisy labels}\label{sec:realdata} To evaluate the performance of our approach on real-world datasets, we have conducted additional experiments on the Clothing-1M dataset \cite{xiao2015learning}, which is a dataset with $1$M images of clothes, on the Animal-10N dataset \citep{song2019selfie}, which is a dataset with $50$k images of animals, and on the CIFAR-10N dataset \citep{wei2022learning}, which is the CIFAR-10 dataset with human-annotated noisy labels obtained from Amazon Mechanical Turk. In the Clothing-1M dataset, the images have been labeled from the texts that accompany them, hence there are both clean and noisy labels in the set, and in the Animal-10N dataset, the images have been gathered and labeled from search engines. In these datasets, some images have incorrect labels and the ground-truth labels in the training set are not available. Hence, in our experiments, we cannot explicitly track memorization as measured by the accuracy on the noisy subset of the training set. We train different settings on these three datasets with various architectures (including ResNet, AlexNet and VGG) and varying hyper-parameters (refer to Appendix~\ref{sec:expsetup} for details). We compute the training accuracy and susceptibility $\zeta$ during the training process for each setting and visualize the results in Figure \ref{fig:clothing1m_3}~(a)-(c). \begin{figure}[t] \centering \makebox[20pt]{\raisebox{40pt}{\rotatebox[origin=c]{90}{Computation}}} \subfloat[][\centering The Clothing-1M dataset]{\includegraphics[width=0.3\textwidth]{clothing1m_fig1.png}}\quad% \subfloat[][\centering The Animal-10N dataset]{\includegraphics[width=0.3\textwidth]{animal10n_fig1.png}}\quad \subfloat[][\centering The CIFAR-10N dataset]{\includegraphics[width=0.3\textwidth]{cifar10n_fig1.png}}\\ \makebox[20pt]{\raisebox{40pt}{\rotatebox[origin=c]{90}{Selection}}} \subfloat[][\centering The Clothing-1M dataset]{\includegraphics[width=0.3\textwidth]{clothing1m_fig2.png}}\quad% \subfloat[][\centering The Animal-10N dataset]{\includegraphics[width=0.3\textwidth]{animal10n_fig2.png}}\quad \subfloat[][\centering The CIFAR-10N dataset]{\includegraphics[width=0.3\textwidth]{cifar10n_fig2.png}}\\ \makebox[20pt]{\raisebox{40pt}{\rotatebox[origin=c]{90}{Evaluation}}} \subfloat[][\centering The Clothing-1M dataset\label{fig:mainclothing1m}]{\includegraphics[width=0.3\textwidth]{clothing1m_fig3.png}}\quad% \subfloat[][\centering The Animal-10N dataset\label{fig:mainanimal10n}]{\includegraphics[width=0.3\textwidth]{animal10n_fig3.png}}\quad \subfloat[][\centering The CIFAR-10N dataset\label{fig:maincifar10n}]{\includegraphics[width=0.3\textwidth]{cifar10n_fig3.png}} \caption{For real-world noisy labeled datasets, we show in three steps the efficiency of our model-selection approach. \textbf{Top:} Training accuracy and susceptibility $\zeta$ are computed for models trained on each dataset. \textbf{Middle:} According to the average values of the training accuracy and susceptibility $\zeta$ the figure is divided into $4$ regions. Our model-selection approach suggests selecting models in Region 1, which are resistant to memorization on top of being trainable. \textbf{Bottom:} The test accuracy of the models is visualized using the color of each point (which is only for illustration and is not used to find different regions). 
We can observe that models in Region~1 have the highest test accuracies in each dataset.} \label{fig:clothing1m_3} \end{figure} We divide the models of Figure~\ref{fig:clothing1m_3}~(a)-(c) into 4 regions, where the boundaries are set to the average value of the training accuracy (horizontal line) and the average value of susceptibility (vertical line): Region 1: Models that are trainable and resistant to memorization, Region 2: Trainable but not resistant, Region 3: Not trainable but resistant, and Region 4: Neither trainable nor resistant. This is shown in Figure~\ref{fig:clothing1m_3}~(d)-(f). Our approach suggests selecting models in Region 1 (low susceptibility, high training accuracy). In order to assess how our approach does in model selection, we can reveal the test accuracy computed on a held-out clean test set in Figure~\ref{fig:clothing1m_3}~(g)-(i). We observe that the average (± standard deviation) of the test accuracy of models in each region is as follows:
\begin{itemize}
\item \textit{Clothing-1M dataset:} Region 1: \textbf{61.799}$\%$ ± 1.643; Region 2: 57.893$\%$ ± 3.562; Region 3: 51.250$\%$ ± 17.209; Region 4: 51.415$\%$ ± 9.709.
\item \textit{Animal-10N dataset:} Region 1: \textbf{96.371}$\%$ ± 1.649; Region 2: 92.508$\%$ ± 2.185; Region 3: 91.179$\%$ ± 6.601; Region 4: 89.352$\%$ ± 3.142.
\item \textit{CIFAR-10N dataset:} Region 1: \textbf{87.2}$\%$ ± 0.99; Region 2: 85.44$\%$ ± 2.52; Region 3: 77.87$\%$ ± 8.15; Region 4: 78.45$\%$ ± 3.86.
\end{itemize}
We observe that using our approach we are able to select models with very high test accuracy. In addition, the test accuracies of models in Region 1 have the smallest standard deviation. Note that our susceptibility metric $\zeta$ does not use any information about the label noise level or the label noise type that is present in these datasets. Similarly to the rest of this paper, random labeling is used for computing $\zeta$. Interestingly, even though within the training sets of these datasets the label noise type is different from random labeling (the label noise is instance-dependent \cite{xia2020part, wei2022learning}), $\zeta$ still successfully tracks memorization. Therefore, our approach selects trainable models with low memorization even for datasets with real-world label noise. Observe that selecting models only on the basis of their training accuracy or only on the basis of their susceptibility fails: both are needed. It is interesting to note that in the Clothing-1M dataset, as the dataset is more complex, the performance of different models varies widely, and our approach is able to select ``good'' models from ``bad'' models. On the other hand, in the Animal-10N dataset, as the dataset is easier to learn and the estimated label noise level is lower, most models are already performing rather well. Here, our approach is able to select the ``best'' models from ``good'' models.

\section{On the Generality of the Observed Phenomena}\label{sec:abl}
In this section, we provide additional experiments that show our empirical results hold across different choices of dataset, training, and architecture design, as well as label noise level and form.

\textbf{Dataset:} In addition to the real-world datasets provided in the previous section, our results consistently hold for the MNIST, Fashion MNIST, SVHN, CIFAR-100, and Tiny ImageNet datasets; see Figures \ref{fig:mainmnist}-\ref{fig:filtertiny} in Appendix~\ref{sec:mainmoreapp}. 
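For the synthetic-noise experiments on these benchmarks, a label noise level of, e.g., $50\%$ means that this fraction of the training labels is replaced by random classes. A minimal Python sketch of such a corruption step follows; the helper is our own illustration, not the paper's experiment code.
\begin{verbatim}
import numpy as np

def flip_labels(labels, noise_level, num_classes, seed=0):
    """Return a copy of `labels` with a `noise_level` fraction of entries
    replaced by uniformly random classes (a random label may happen to
    coincide with the original one)."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(noise_level * len(labels)),
                     replace=False)
    noisy[idx] = rng.integers(0, num_classes, size=len(idx))
    return noisy, idx  # idx defines the "noisy subset" tracked in the figures
\end{verbatim}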
\textbf{Learning rate schedule:} Our results are not limited to a specific optimization scheme. In our experiments, we apply different learning rate schedules, momentum values, and optimizers (SGD and Adam) (for details see Appendix~\ref{sec:expsetup}). More specifically, we show in Figure~\ref{fig:lr} (in Appendix~\ref{sec:appabl}) that the strong correlation between memorization and our metric $\zeta(t)$ stays consistent for both the \texttt{cosineannealing} and \texttt{exponential} learning rate schedulers.

\textbf{Architecture:} Results of Sections~\ref{sec:rmem} and~\ref{sec:main} are obtained from a variety of architecture families, such as DenseNet \cite{huang2017densely}, MobileNet \cite{howard2017mobilenets}, VGG \cite{simonyan2014very}, and ResNet \cite{he2016deep}. For the complete list of architectures, see Appendix~\ref{sec:expsetup}. We observe that $\zeta(t)$ not only detects resistant architecture families (as done for example in Figure~\ref{fig:rversusmem}), but is also able to find the best design choice (e.g., width) among configurations that are already resistant, see Figure~\ref{fig:fine} in Appendix~\ref{sec:appabl}.

\textbf{Low label noise levels:} In addition to the real-world datasets with low label noise levels (the label noise levels of Animal-10N and CIFAR-10N are $8\%$ and $9\%$, respectively), we studied low levels of synthetic label noise as well. For models trained on the CIFAR-10 and CIFAR-100 datasets with $10\%$ label noise (instead of $50\%$), we still observe a high correlation between accuracy on the noisy subset and $\zeta(t)$ in Figures~\ref{fig:cifar10less} and \ref{fig:cifar100less} in Appendix \ref{sec:appabl}. Moreover, we observe in Figures~\ref{fig:figure1_less_cifar10} and~\ref{fig:figure1_less_cifar100} in Appendix~\ref{sec:appabl} that the average test accuracy of the models selected using our metric is comparable with the average test accuracy of the models selected with access to the ground-truth label.

\textbf{Asymmetric label noise:} In addition to the real-world datasets with asymmetric label noise (the label noise in the datasets of Section~\ref{sec:realdata} is instance-dependent \cite{xia2020part, wei2022learning}), we studied synthetic asymmetric label noise as well. We have performed experiments with asymmetric label noise as proposed in \citep{xia2021robust}. Using our approach, the average test accuracy of the selected models is $66.307\%$, whereas the oracle achieves $66.793\%$ (see Figure \ref{fig:asym} in Appendix \ref{sec:mainmoreapp}).

\section{Experiments Related to Section \ref{sec:abl}}\label{sec:appabl}
In this section, we provide some ablation studies that are discussed in Section \ref{sec:abl}. In Figure \ref{fig:fine}, we observe that even among neural network architectures with a good resistance to memorization, susceptibility to noisy labels $\zeta(t)$ detects the most resistant model. We observe that the high correlation between $\zeta$ and memorization of the noisy subset is not limited to a specific learning rate schedule in Figure~\ref{fig:lr}, or a label noise level in Figures~\ref{fig:cifar10less} and~\ref{fig:cifar100less}. Moreover, in Figures~\ref{fig:figure1_less_cifar10} and \ref{fig:figure1_less_cifar100}, we observe that for datasets with the label noise level of $10\%$, the susceptibility to noisy labels~$\zeta$ and training accuracy still select models with high test accuracy. 
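The values $\rho$ quoted in these figures are plain Pearson correlation coefficients between the plotted pairs of accuracy on the noisy subset and $\zeta(t)$. For reference, a minimal Python sketch of that computation (the numbers below are illustrative placeholders, not measured values):
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

# One entry per model/checkpoint, as in the scatter plots.
train_acc_noisy = np.array([0.05, 0.12, 0.30, 0.55, 0.80])  # fit to noisy subset
zeta            = np.array([0.02, 0.10, 0.28, 0.50, 0.75])  # susceptibility

rho, pval = pearsonr(train_acc_noisy, zeta)
print(f"Pearson rho = {rho:.3f}")
\end{verbatim}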
The same consistency is observed in Figure \ref{fig:asym} for models trained with asymmetric label noise. \vspace{1em} In the paper, we choose $\widetilde{S}$ to be only a single mini-batch of a randomly-labeled set for computational efficiency. But we also made sure that this does not harm the correlation between Train ACC Noisy and $\zeta(t)$. We analyze the effect of the size of $\widetilde{S}$ in Figure~\ref{fig:sout} (left), which confirms that a single mini-batch is large enough to have a high correlation between Train ACC Noisy and $\zeta(t)$. Moreover, we observe the robustness of the susceptibility metric to the exact choice of the mini-batch in Figure~\ref{fig:sout} (right). To better illustrate the match between Train ACC Noisy and $\zeta(t)$, we provide the overlaid curves in Figure \ref{fig:rversusmemtogether}. This figure clearly shows how, using $\zeta$, one can detect/select checkpoints of the model with low memorization. \begin{figure}[h] \centering \includegraphics[height=4.2cm]{cifar100_fine.png}\quad \includegraphics[height=4.2cm]{cifar100_fine2.png}\\ \caption{Accuracy on the noisy subset of the training set versus the susceptibility $\zeta(t)$ (Equation \eqref{eq:rt}) for MobileNet and ShuffleNetV2 configurations trained on CIFAR-100 with $50\%$ label noise. Pearson correlation between the Train ACC Noisy and susceptibility $\zeta$ is $\rho=0.749$. \texttt{Scale} is a hyper-parameter that proportionally scales the number of hidden units and number of channels in the neural network configuration. } \label{fig:fine} \end{figure} \vspace{1em} \begin{figure}[h] \centering \subfloat[][\centering \texttt{Exponential}]{\includegraphics[height=3.5cm]{mnist_lr_exp.png}\quad \includegraphics[height=3.5cm]{mnist_lr_exp_2.png}}\\ \subfloat[][\centering \texttt{Cosineannealing}]{\includegraphics[height=3.5cm]{mnist_lr_cosine.png}\quad \includegraphics[height=3.5cm]{mnist_lr_cosine2.png}} \caption{Accuracy on the noisy subset of the training set versus the susceptibility $\zeta(t)$ for networks trained on MNIST with $50\%$ label noise. On top and bottom, we have models trained with \texttt{exponential} and \texttt{cosineannealing} learning rate schedulers, respectively. The Pearson correlations between Train ACC Noisy and $\zeta$ for the \texttt{exponential} and \texttt{cosineannealing} schedules are $\rho=0.89$ and $\rho=0.772$, respectively. } \label{fig:lr} \end{figure} \begin{figure}[h] \centering \includegraphics[height=4cm]{cifar10_lessnoise.png}\quad \includegraphics[height=4cm]{cifar10_lessnoise2.png}\\ \caption{Accuracy on the noisy subset of the training set versus susceptibility to noisy labels $\zeta(t)$ for networks trained on CIFAR-10 with $10\%$ label noise. Pearson correlation between Train ACC Noisy and $\zeta$ is $\rho=0.634$. } \label{fig:cifar10less} \end{figure} \begin{figure}[h] \centering \includegraphics[height=4cm]{cifar100_lessnoise.png}\quad \includegraphics[height=4cm]{cifar100_lessnoise2.png}\\ \caption{Accuracy on the noisy subset of the training set versus susceptibility to noisy labels $\zeta(t)$ for networks trained on CIFAR-100 with $10\%$ label noise. Pearson correlation between Train ACC Noisy and $\zeta$ is $\rho=0.849$. } \label{fig:cifar100less} \end{figure} \begin{figure}[h] \captionsetup[subfloat]{singlelinecheck=false} \centering \subfloat[\centering With access to the ground-truth label. 
\protect\\ Average test accuracy of the selected models = $82.93\%$ \label{fig:figure1_less_cifar10a}]{\includegraphics[width=5.8cm]{Figure1_lessnoise.png}} \quad % \subfloat[\centering Without access to the ground-truth label. Average test accuracy of the selected models = $86.08\%$]{\includegraphics[width=5.8cm]{Figure1_lessnoise2.png}} \caption{ For models trained on CIFAR-10 with $10\%$ label noise for $200$ epochs, using susceptibility $\zeta$ and the overall training accuracy, the average test accuracy of the selected models is comparable with (even higher than) the case of having access to the ground-truth label. } \label{fig:figure1_less_cifar10} \end{figure} \begin{figure}[h] \captionsetup[subfloat]{singlelinecheck=false} \centering \subfloat[\centering With access to the ground-truth label. \protect\\ Average test accuracy of the selected models = $62.61\%$ \label{fig:figure1_less_cifar100a}]{\includegraphics[width=5.8cm]{Figure1_lessnoise_cifar100.png}} \quad % \subfloat[\centering Without access to the ground-truth label. Average test accuracy of the selected models = $64.33\%$]{\includegraphics[width=5.8cm]{Figure1_lessnoise2_cifar100.png}} \caption{ For models trained on CIFAR-100 with $10\%$ label noise for $200$ epochs, using susceptibility $\zeta$ and the overall training accuracy, the average test accuracy of the selected models is comparable with (even higher than) the case of having access to the ground-truth label. } \label{fig:figure1_less_cifar100} \end{figure} \begin{figure}[h] \centering \includegraphics[height=3.5cm]{batch_size.png}\quad \includegraphics[height=3.5cm]{variance.png}\\ \caption{\textbf{Left}: Pearson correlation coefficient between the accuracy on the noisy subset of the training set and susceptibility $\zeta$ (Equation \eqref{eq:rt}) for different choices of dataset size for $\widetilde{S}$ for ResNet \cite{he2016deep}, MobileNet \cite{howard2017mobilenets}, and $5-$layer cnn that are trained on CIFAR-100 dataset with $50\%$ label noise. We observe that unless the dataset is very small, the choice of the dataset size $\widetilde{S}$ does not affect the correlation value. Therefore, throughout our experiments, we choose the size $128$ for this set, which is the batch size used for the regular training procedure as well. Note that this size is very small compared to the size of the training set itself, which is $50000$, hence the computational overhead to compute $\zeta$ is negligible compared to the original training process. \textbf{Right}: We can observe the variance of the susceptibility metric over 10 different random seeds. We can observe that as the variance is quite low, the metric is robust to the exact choice of the mini-batch and to the random labels that are assigned to the mini-batch. } \label{fig:sout} \end{figure} \begin{figure}[h] \centering \includegraphics[height=4.5cm]{rversustan_together.png}\\ \caption{Accuracy on the noisy subset (solid lines) versus Susceptibility $\zeta(t)$ (dashed lines) for neural networks trained on CIFAR-10 with $50\%$ label noise. We observe a very strong match between the two, which suggests that susceptibility can be used to perform early stopping by selecting the checkpoint for each model with the least memorization. For example, for MobileNet and EfficientNet, $\zeta$ does not warn about memorization, hence one can select the end checkpoint. On the other hand, for DenseNet and GoogleNet, $\zeta$ suggests selecting those checkpoints that are before the sharp increases. 
This is also consistent with the signal given by the fit on the noisy subset, which requires ground-truth label access, unlike susceptibility $\zeta$ which does not require such access. } \label{fig:rversusmemtogether} \end{figure} \begin{figure}[h] \captionsetup[subfloat]{singlelinecheck=false} \centering \subfloat[\centering With access to the ground-truth label. \protect\\ Average test accuracy of the selected models = $66.793\%$]{\includegraphics[width=5.8cm]{asym2.png}} \quad % \subfloat[\centering Without access to the ground-truth label. Average test accuracy of the selected models = $66.307\%$]{\includegraphics[width=5.8cm]{asym.png}} \caption{ For models trained for $200$ epochs on CIFAR-10 with $50\%$ asymmetric label noise as proposed in \cite{xia2021robust}, using susceptibility $\zeta$ and the overall training accuracy, the average test accuracy of the selected models is comparable with the case of having access to the ground-truth label. } \label{fig:asym} \end{figure} \clearpage \textbf{A study on different thresholds used to select models} We would like to point out that if we can tune these thresholds (instead of using the average values of training accuracy and susceptibility over the available models), we can select models with even higher test accuracies than what is reported in our paper. For example, for models of Figure~\ref{fig:main}, by tuning these two thresholds one could reach a test accuracy of $79.15\%$ (instead of the reported $76\%$) as shown in Figure~\ref{fig:thresholds}~(left). However, we want to remain in the practical setting where we do not have any access to a clean validation set for tuning. As a consequence, we must avoid any hyper-parameter tuning. And indeed, throughout our experiments, these thresholds are never tuned nor set manually to any extent. Among thresholds that can be computed without access to a clean validation set, we opted for the average values of susceptibility and training accuracy (over the available models) for simplicity. We empirically observe that this choice is robust and produces favorable results in various experimental settings. We could take other percentiles for the threshold, but they are more complex to obtain than simple averages, because they would then depend on the distribution among models. In Figure~\ref{fig:thresholds}, we study various values of percentiles for these thresholds. We observe that depending on the available models and the given dataset, some other percentiles might give higher test accuracies than simply using the average values. These percentiles range however typically from 35 to 55 and are therefore not far from the mean, hence their benefit in increasing test accuracy appears small compared to the increased complexity to compute them or relying on additional assumptions on the distribution of susceptibility and training accuracy. We observe in Figure~\ref{fig:thresholds}~(right) that except for very extreme values of the thresholds (which basically select all models as resistant to memorization), the average test accuracy of models in Region 1 is much higher than the average test accuracy of models in Region 2. Hence, our proposed model selection approach is robust to the choice of these thresholds. 
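For concreteness, the selection rule discussed here amounts to two scalar comparisons. A minimal Python sketch using the untuned mean thresholds follows; swapping \texttt{np.mean} for \texttt{np.percentile} reproduces the percentile study, and all names in the sketch are ours, not the paper's code.
\begin{verbatim}
import numpy as np

def select_region1(train_acc, zeta):
    """Region 1 = trainable (training accuracy above average) and
    resistant (susceptibility below average). Thresholds are plain
    averages over the available models: no tuning, no clean
    validation set."""
    t1, t2 = np.mean(zeta), np.mean(train_acc)
    return (zeta < t1) & (train_acc > t2)

# Illustrative statistics for four candidate models.
train_acc = np.array([0.95, 0.90, 0.55, 0.97])
zeta      = np.array([0.10, 0.60, 0.20, 0.15])
print(np.nonzero(select_region1(train_acc, zeta))[0])  # -> [0 3]
\end{verbatim}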
\begin{figure}[h] \centering \includegraphics[height=4.2cm]{pareto_curve_thresholds.png}\quad\quad% \includegraphics[height=4.2cm]{pareto_curve_thresholds2.png} \caption{\textbf{Left:} Average test accuracy of models in Region 1 of Figure~\ref{fig:main} for various thresholds used to find Region 1. In Figure~\ref{fig:main} and throughout this paper, Region 1 has models with susceptibility $\zeta$ < t1 and training accuracy > t2, where t1 and t2 are the average $\zeta$ and training accuracy over the available models, respectively. Here, we study different values of these thresholds t1 and t2, and their effect on the average test accuracy of models of Region~1. We explore different percentiles of $\zeta$ and training accuracy over all models to be used to find these thresholds. The extreme would be to have $100$th percentiles for both thresholds (bottom-right item of this table), which means models of Region 1 have $\zeta$ < maximum susceptibility and training accuracy > minimum training accuracy. In this extreme case, all models are selected in Region 1. Overall, we observe that some other percentiles might give higher test accuracies than simply using the average values. However, these percentiles typically range from 35 to 55 and are therefore not far from the mean, hence their benefit in increasing test accuracy appears small compared to the increased complexity to compute them or relying on additional assumptions on the distribution of susceptibility and training accuracy. \textbf{Right:} The difference in the average test accuracies of models in Region 1 and models in Region 2 for various values of percentiles used to find different regions. A positive value implies that models in Region 1 have a higher average test accuracy. We can observe that except for very extreme values of the thresholds, which basically select all models as trainable, the average test accuracy of models in Region 1 is much higher than the average test accuracy of models in Region 2. Hence, our approach to select \emph{resistant and trainable} models is robust to the choice of these thresholds. \label{fig:thresholds} } \end{figure}

\section{Additional Experiments for Section \ref{sec:observation}}\label{sec:motivapp}
In this section, we provide additional experiments for the observation presented in Section \ref{sec:observation}. In Figure \ref{fig:motiv2}, we observe that networks with a high test accuracy are resistant to memorizing a new incorrectly-labeled sample. On the other hand, in Figure \ref{fig:motivclean}, we observe that networks with a high test accuracy tend to fit a new correctly-labeled sample faster.

\begin{figure}[ht] \centering \subfloat[][\centering Example~1: original label: horse, assigned label: frog]{\includegraphics[width=6.5cm]{motive2.png}}\hspace{1em}% \subfloat[][\centering Example~2: original label: cat, assigned label: truck]{\includegraphics[width=6.5cm]{motive3.png}}\\ \subfloat[][\centering Example~3: original label: dog, assigned label: automobile]{\includegraphics[width=6.5cm]{motive4.png}}\hspace{1em}% \subfloat[][\centering Example~4: original label: ship, assigned label: deer]{\includegraphics[width=6.5cm]{motive5.png}} \caption[]{The evolution of the output prediction of two networks that are trained on a single randomly labeled sample. In all sub-figures, Network 1 has a higher test accuracy than Network~2, and we observe that it is more resistant to memorization of the single incorrectly-labeled sample. 
\textbf{Example~1: } Network 1 is a ResNeXt trained on CIFAR-10 dataset with $50\%$ random labels and has test accuracy of $58.85\%$. Network 2 is a ResNeXt that is not pre-trained and has test accuracy of $9.74\%$. \textbf{Example~2: } Network 1 is a SENet trained on CIFAR-10 dataset with original labels and has test accuracy of $95.35\%$. Network 2 is a SENet that is trained on CIFAR-10 with $50\%$ label noise and has test accuracy of $56.38\%$. \textbf{Example~3: } Network 1 is a RegNet trained on CIFAR-10 dataset with original labels and has test accuracy of $95.28\%$. Network 2 is a RegNet that is trained on CIFAR-10 with $50\%$ label noise and has test accuracy of $55.36\%$. \textbf{Example~4: } Network 1 is a MobileNet trained on CIFAR-10 dataset with original labels and has test accuracy of $90.56\%$. Network 2 is a MobileNet that is trained on CIFAR-10 with $50\%$ label noise and has test accuracy of $82.76\%$. }\label{fig:motiv2} \end{figure} \begin{figure}[ht] \centering \subfloat[\centering Example~1: original label: dog, assigned label: dog]{\includegraphics[width=6.5cm]{motive1_clean.png}} \hspace{1em}% \subfloat[][\centering Example~2: original label: frog, assigned label: frog]{\includegraphics[width=6.5cm]{motive2_clean.png}}\\ \subfloat[][\centering Example~3: original label: horse, assigned label: horse]{\includegraphics[width=6.5cm]{motive3_clean.png}} \hspace{1em}% \subfloat[][\centering Example~4: original label: cat, assigned label: cat]{\includegraphics[width=6.5cm]{motive4_clean.png}}\\ \subfloat[][\centering Example~5: original label: truck, assigned label: truck]{\includegraphics[width=6.5cm]{motive5_clean.png}} \caption{The evolution of the output prediction of two networks that are trained on a single unseen correctly-labeled sample. In all sub-figures, Network 1 has a higher test accuracy than Network 2. We observe that, given a new correctly-labeled sample, Network 2 learns it later, unlike our observation in Figure \ref{fig:motiv} for a new incorrectly-labeled sample. \textbf{Example~1: } Network 1 is a GoogLeNet trained on CIFAR-10 dataset with clean labels and has test accuracy of $95.36\%$. Network 2 is a GoogLeNet trained on CIFAR-10 dataset with $50\%$ label noise level and has test accuracy of $58.35\%$. \textbf{Example~2: } Network 1 is a ResNeXt trained on CIFAR-10 dataset with $50\%$ random labels and has test accuracy of $58.85\%$. Network 2 is a ResNeXt that is not pre-trained and has test accuracy of $9.74\%$. \textbf{Example~3: } Network 1 is a SENet trained on CIFAR-10 dataset with original labels and has test accuracy of $95.35\%$. Network 2 is a SENet that is trained on CIFAR-10 with $50\%$ label noise and has test accuracy of $56.38\%$. \textbf{Example~4: } Network 1 is a RegNet trained on CIFAR-10 dataset with original labels and has test accuracy of $95.28\%$. Network 2 is a RegNet that is trained on CIFAR-10 with $50\%$ label noise and has test accuracy of $55.36\%$. \textbf{Example~5: } Network 1 is a MobileNet trained on CIFAR-10 dataset with original labels and has test accuracy of $90.56\%$. Network 2 is a MobileNet that is trained on CIFAR-10 with $50\%$ label noise and has test accuracy of $82.76\%$. } \label{fig:motivclean} \end{figure} Moreover, we study the effect of calibration on the observations of Figures~\ref{fig:motiv} and \ref{fig:motiv2}. A poor calibration of a model may affect the confidence in its predictions, which in turn might affect the susceptibility/resistance to new samples. 
Therefore, in Figure~\ref{fig:calibration}, we compare models that have almost the same calibration value. More precisely, Network $1$ is trained on the clean dataset, and Network 2 (calibrated) is a calibrated version of the model that is trained on the noisy dataset, using the temperature scaling approach \cite{guo2017calibration}. We observe that even with the same calibration level, the model with a higher test accuracy is more resistant to memorizing a new incorrectly-labeled sample. \begin{figure}[h] \centering \subfloat[][\centering Example~1: original label: ship, assigned label: dog]{\includegraphics[width=6.5cm]{calibration_googlenet_ship_as_dog.png}}\hspace{1em}% \subfloat[][\centering Example~2: original label: cat, assigned label: truck]{\includegraphics[width=6.5cm]{calibration_regnet_cat_as_truck.png}}\hspace{1em}% \caption[]{The evolution of the output prediction of networks that are trained on a single randomly labeled sample. In all sub-figures, Network 1 (trained on the clean dataset) has a higher test accuracy than Network 2 (trained on the noisy dataset), and we observe that it is more resistant to memorization of the single incorrectly-labeled sample. Furthermore, we have ensured using the temperature scaling method \cite{guo2017calibration} that the two models have the same calibration (ECE) value. \textbf{Example~1: } Network 1 is a GoogleNet trained on CIFAR-10 dataset with original labels and has test accuracy of $95.36\%$. Network 2 is a GoogleNet that is trained on CIFAR-10 with $50\%$ label noise and has test accuracy of $58.35\%$. \textbf{Example~2: } Network 1 is a RegNet trained on CIFAR-10 dataset with original labels and has test accuracy of $95.28\%$. Network 2 is a RegNet that is trained on CIFAR-10 with $50\%$ label noise and has test accuracy of $55.36\%$. }\label{fig:calibration} \end{figure} \section{Introduction} \input{introduction.tex} \section{Good Models are Resistant to Memorization}\label{sec:observation} \input{motivex.tex} \section{Evaluating Resistance to Memorization}\label{sec:compute}\label{sec:rmem} \input{r_vs_tan.tex} \section{Good Models are Resistant and Trainable}\label{sec:main} \input{mem_train.tex} \section{Convergence Analysis}\label{sec:finegrained} \input{convergence.tex} \input{ablation.tex} \section{Conclusion}\label{sec:dis} \input{discussion.tex} \subsection*{Acknowledgement} We thank Chiyuan Zhang and Behnam Neyshabur for their valuable feedback on the draft of this work. \bibliographystyle{abbrvnat} \subsection{Properties of the Gram-matrix} \paragraph{Properties}\label{sec:properties} Here, we recall a few useful properties of the Gram-matrix (Equation~\eqref{eq:Gmdef}). \begin{enumerate} \item\label{prop:lmin} As shown by \cite{du2018gradient}, $\textbf{H}^{\infty}$ is positive definite and $\lambda_0 = \lambda_{\text{min}} (\textbf{H}^{\infty}) > 0$. \item\label{prop:decom} The matrix $\textbf{H}^{\infty}$ has eigen decomposition $\textbf{H}^{\infty} = \sum_{i=1}^{n} \lambda_i \textbf{v}_i \textbf{v}_i^T$, where the eigenvectors are orthonormal. Therefore, $\textbf{v}_i^T \textbf{v}_j = \delta_{i,j}$ for $i, j \in [n]$, the $n \times n$ identity matrix $\textbf{I}$ is decomposed as $\sum_{i=1}^{n} \textbf{v}_i \textbf{v}_i^T$ and any $n$-dimensional (column-wise) vector $\textbf{y}$ can be decomposed as $\textbf{I} \; \textbf{y} = \sum_{i=1}^{n} (\textbf{v}_i^T \textbf{y}) \textbf{v}_i$. 
\item\label{prop:normbound} (Recalled from \cite{du2018gradient,arora2019fine}) We have $\norm{\textbf{H}^{\infty}}_2 \leq \text{tr}( \textbf{H}^{\infty}) = \frac{n}{2} = \sum_{i=1}^{n} \lambda_i$, and $$\eta = O(\frac{\lambda_0}{n^2}) = O\left(\frac{\lambda_{\text{min}} (\textbf{H}^{\infty})}{\norm{\textbf{H}^{\infty}}_2^2} \right) \leq \frac{1}{\norm{\textbf{H}^{\infty}}_2}.$$ Hence, $$\norm{\textbf{I} - \eta \textbf{H}^{\infty}}_2 \leq 1- \eta \lambda_0.$$ \end{enumerate} \subsection{Corollaries Adapted from \cite{du2018gradient, arora2019fine} } \begin{corollary}\label{cor:phibound} (Adapted Theorem 3.1 of \cite{arora2019fine} to our setting) For $m = \Omega\left(\frac{n^6}{\lambda_0^4 \kappa^2 \delta^3}\right)$ and $\eta = O\left(\frac{\lambda_0}{n^2}\right)$, for any~${\delta \in (0,1]}$, with probability at least $1-\delta$ over random initialization \eqref{eq:randinit}: \begin{equation*} \Phi\left(\textbf{W}(0)\right) = O\left(\frac{n}{\delta}\right), \end{equation*} and \begin{equation*} \begin{cases} \Phi\left(\textbf{W}(t+1)\right) \leq (1-\frac{\eta \lambda_0}{2}) \Phi(\textbf{W}(t)), \quad & \text{if}\ 0 \leq t < k ,\\ \widetilde{\Phi}\left(\textbf{W}(t+1)\right) \leq (1-\frac{\eta \lambda_0}{2}) \widetilde{\Phi}(\textbf{W}(t)), \quad & \text{if}\ k \leq t < k+\tilde{k} . \end{cases} \end{equation*} \end{corollary} Therefore, by replacing Equations \eqref{eq:objorig} and \eqref{eq:objrand}, throughout the proof we can use: \begin{equation*} \norm{\textbf{f}_{\textbf{W}(0)}- \textbf{y}}_2 = O\left(\sqrt{\frac{n}{\delta}}\right), \end{equation*} and \begin{equation*} \begin{cases} \norm{\textbf{f}_{\textbf{W}(t+1)}- \textbf{y}}_2 &\leq \sqrt{1-\frac{\eta \lambda_0}{2}} \norm{\textbf{f}_{\textbf{W}(t)}- \textbf{y}}_2 \\ &\leq \left(1-\frac{\eta \lambda_0}{4}\right) \norm{\textbf{f}_{\textbf{W}(t)}- \textbf{y}}_2, \quad \text{if}\ 0 \leq t < k , \\ \norm{\textbf{f}_{\textbf{W}(t+1)}- \widetilde{\textbf{y}}}_2 &\leq \left(1-\frac{\eta \lambda_0}{4}\right) \norm{\textbf{f}_{\textbf{W}(t)}- \widetilde{\textbf{y}}}_2, \quad \text{if}\ k \leq t < k+\tilde{k} , \end{cases} \end{equation*} where we use inequality ${\sqrt{1-\alpha} \leq 1-\alpha/2}$, which holds for $0 \leq \alpha \leq 1$. \begin{corollary}\label{cor:outupdate} (Adapted from Equation (25) of \cite{arora2019fine}) If the parameter vector is updated at step $t$ by one gradient descent step on \allowbreak $\frac{1}{2} \norm{\mathbf{f}_{\mathbf{W}(t)} - \mathbf{u}}_2^2$~for some label vector $\mathbf{u}$, and for $t$ such that with probability at least $1-\delta$: \begin{equation*} \norm{\mathbf{H}(t)-\mathbf{H}(0)}_F = O\left(\frac{n^3}{\sqrt{m} \lambda_0 \kappa \delta^{3/2}}\right) , \end{equation*} then the output of the neural network is as follows \begin{equation*} \mathbf{f}_{\mathbf{W}(t+1)} - \mathbf{f}_{\mathbf{W}(t)} = -\eta \mathbf{H}^{\infty} \left(\mathbf{f}_{\mathbf{W}(t) }- \mathbf{u}\right) + \xi(t) , \end{equation*} where $\xi(\cdot)$ is considered to be a perturbation term that can be bounded with probability at least $1-\delta$ over random initialization \eqref{eq:randinit} by \begin{equation}\label{eq:zetabound} \norm{\xi(t)}_2 = O\left(\frac{\eta n^3}{\sqrt{m} \lambda_0 \kappa \delta^{3/2}}\right) \norm{\mathbf{f}_{\mathbf{W}(t)}-\mathbf{u}}_2 . \end{equation} \end{corollary} Remark: In our setting, this corollary holds for $0 \leq t \leq k-1$ with $\textbf{u} = \textbf{y}$, and for $k \leq t \leq k+\tilde{k}-1$ with $\textbf{u} = \widetilde{\textbf{y}}$. 
We only need to show that for our setting for $t \geq k$, $\norm{\mathbf{H}(t)-\mathbf{H}(0)}_F$ is bounded, which is done in Lemma \ref{lem:hk0larger}. \begin{corollary}\label{cor:obj0k} (From Equation (27) of \cite{arora2019fine}) We have for $1 \leq t \leq k$ \begin{equation*} \mathbf{f}_{\mathbf{W}(t)} - \mathbf{{y}} = \left(\mathbf{I}- \eta \mathbf{H}^{\infty}\right)^t \left(\mathbf{f}_{\mathbf{W}(0)} - \mathbf{{y}}\right) + \sum_{s=0}^{t-1} \left(\mathbf{I}-\eta \mathbf{H}^{\infty}\right)^s \xi\left(t-s-1\right) , \end{equation*} where $\norm{\xi(\cdot)}_2$ is some perturbation term that can be bounded using Equation \eqref{eq:zetabound} with $\mathbf{u}=\mathbf{y}$. \end{corollary} \subsection{Additional Lemmas} \begin{lemma}\label{lem:help} For the setting described in Section \ref{sec:observation}, we have \begin{eqnarray} \mathbf{f}_{\mathbf{W}(k)} - \widetilde{\mathbf{y}} = \sum_{i=1}^{n} \left[(\mathbf{v}_i^T \mathbf{y})- \left(1-\eta \lambda_i\right)^k (\mathbf{v}_i^T \mathbf{y}) - (\mathbf{v}_i^T \widetilde{\mathbf{y}}) \right] \mathbf{v}_i + \chi(k) , \end{eqnarray} where $\chi(k)$ is some perturbation term that with probability at least $1-\delta$ over the random initialization~\eqref{eq:randinit} \begin{align}\label{eq:chibound} \norm{\chi(k)}_2 = O\left(\frac{n^{3/2} \kappa }{\sqrt{\delta} \lambda_0} + \frac{n^{9/2}}{\sqrt{m} \lambda_0^3 \kappa \delta^2} \right) . \end{align} \end{lemma} \subsubsection{Lemmas to Bound $\textbf{H}(t)$ with $\textbf{H}^\infty$} Because the two datasets $S$ and $\widetilde{S}$ have the same input samples, the Gram matrix defined in Equation~\eqref{eq:Gmdef} is the same for both of them. We now recall two lemmas from \cite{du2018gradient, arora2019fine} and provide a lemma extending them to bound $\textbf{H}(t)$ with $\textbf{H}^\infty$, where \begin{equation*} \textbf{H}_{ij}(t) = \frac{\textbf{x}_i^T \textbf{x}_j}{m} \sum_{r =1}^{m} \mathbb{I}_{r,i}(t) \mathbb{I}_{r,j}(t) , \end{equation*} and $\mathbb{I}_{r,i}(t) = \mathbb{I} \{\textbf{w}_r^T(t) \textbf{x}_i \geq 0\}$. \begin{lemma}{(recalled from \cite{du2018gradient, arora2019fine})} \label{lem:hk0} For $\lambda_0 = \lambda_{\text{min}} (\textbf{H}^{\infty}) > 0$, $m = \Omega \left(\frac{n^6}{\lambda_0^4 \kappa^2 \delta^3}\right)$, and~$\eta = O\left(\frac{\lambda_0}{n^2}\right)$ with probability at least $1-\delta$ over the random initialization \eqref{eq:randinit}, for all~$0 \leq t \leq k$, we have: \end{lemma} \begin{equation*} \norm{\textbf{H}(t)-\textbf{H}(0)}_F = O\left(\frac{n^3}{\sqrt{m} \lambda_0 \kappa \delta^{3/2}}\right) . \end{equation*} \begin{lemma}{(Our extension)} \label{lem:hk0larger} For $\lambda_0 = \lambda_{\text{min}} (\textbf{H}^{\infty}) > 0$, $m = \Omega \left(\frac{n^6}{\lambda_0^4 \kappa^2 \delta^3}\right)$, and~$\eta = O\left(\frac{\lambda_0}{n^2}\right)$ with probability at least $1-\delta$ over the random initialization \eqref{eq:randinit}, for all~$k+1 \leq t \leq k+\tilde{k}$, we have: \end{lemma} \begin{equation*} \norm{\textbf{H}(t)-\textbf{H}(0)}_F = O\left(\frac{n^3}{\sqrt{m} \lambda_0 \kappa \delta^{3/2}}\right) . \end{equation*} \begin{lemma}{(recalled from \cite{du2018gradient, arora2019fine})}\label{lem:h0i} With probability at least $1-\delta$ over the random initialization~\eqref{eq:randinit}, we have: \end{lemma} \begin{equation*} \norm{\textbf{H}(0)-\textbf{H}^{\infty}}_F = O\left(\frac{n\sqrt{\log \frac{n}{\delta}}}{\sqrt{m}}\right) . 
\end{equation*} Remark for Lemma \ref{lem:h0i}: The indicator function $\mathbb{I} \{\textbf{w}_r(t)^T \textbf{x}_i \geq 0\}$ is invariant to the scale $\kappa$ of $\textbf{w}_r$, hence $\mathbb{E}[\textbf{H}_{ij}(0)] = \textbf{H}_{ij}^{\infty}$, even though the expectation on the left hand side is taken with respect to $\textbf{w} \sim \mathcal{N} (\textbf{0}, \kappa^2 \textbf{I})$ and the expectation on the right hand side is taken with respect to $\textbf{w} \sim \mathcal{N} (\textbf{0}, \textbf{I})$.
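To make the role of these properties concrete, Property~\ref{prop:decom} immediately gives, for any label vector $\textbf{y}$ and any $t \geq 0$,
\begin{equation*}
\left(\textbf{I}-\eta \textbf{H}^{\infty}\right)^t \textbf{y} = \sum_{i=1}^{n} \left(1-\eta \lambda_i\right)^t \left(\textbf{v}_i^T \textbf{y}\right) \textbf{v}_i ,
\end{equation*}
and, by Property~\ref{prop:normbound}, every rate satisfies $0 \leq 1-\eta\lambda_i \leq 1-\eta\lambda_0 < 1$. In words, the residual contracts geometrically along each eigendirection of $\textbf{H}^{\infty}$, with the slowest direction governed by $\lambda_0$; this is exactly the structure exploited in Corollary~\ref{cor:obj0k} and Lemma~\ref{lem:help}.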
{ "redpajama_set_name": "RedPajamaArXiv" }
5,277
Italian horror films

Italian horror films, in the Italian language, filmed in Italy.

Korean horror films

Korean horror films, produced in South Korea, in the Korean language.

Hindi horror films

Hindi horror movies, filmed in Hindi or Tamil. India is the country of production, too.

French horror films

Here are several good French horror films. France is the country of production, and the movies are filmed in the French language, or in combination with some other.

Danish horror films

Denmark has made a lot of good scary horror films too, in the Danish language. Some are Denmark-USA, Sweden or other country co-productions, some fully Danish. Here are some of them:

Argentina horror films

Here are several Argentina horror films.

Belgium horror

Calvaire (The Ordeal), 2004
Director: Fabrice Du Welz
Stars: Laurent Lucas, Brigitte Lahaie, Gigi Coursigny

CALVAIRE's (THE ORDEAL) story revolves around a singer called Marc, traveling from one gig to the next, whose van breaks down; he gets caught in a small hotel in the middle of nowhere, kidnapped, tortured, abused and "womanized" as a hillbilly wife. Things go even worse from there. This movie borrows from many others; most obviously from THE TEXAS CHAINSAW MASSACRE, DELIVERANCE and MISERY. The best part by far was the piano scene at the local tavern. The music was macabre and artistic, and the dancing seemed like it was part of someone's bad dream. The scene where the group of local evil animal fornicators dance themselves into a frenzy, though, is a total classic. If this movie gets seen, kids in future years will certainly do this dance. I think it happens on or about chapter 13 in the movie; it's almost worth getting it just for this scene alone. One of a kind. The rest of the film is pretty forgettable ultimately, or not really good when it comes right down to it.

La nuit des étoiles filantes (A Virgin Among the Living Dead), 1973

Swedish horror films

Låt den rätte komma in (Let the Right One In), 2008
Director: Tomas Alfredson
Stars: Kåre Hedebrant, Lina Leandersson, Per Ragnar

Tomas Alfredson's "Let The Right One In" is an original, dark, twisted and gory horror fantasy, one of those special films that are hard to classify. Not merely an exercise in style, his film is a brilliant piece of amoral storytelling, and even if some characters' actions defy any logic or common sense (I don't wanna spoil any moment here, but you'll know what I mean when the first revenge moment of the story happens), they seem to be there just to remind you that this is just a fantasy tale (but not for the little ones!). 
Oskar (Kåre Hedebrant) is a 12-year-old bullied boy who befriends and develops an innocent crush on his new neighbor, Eli (Lina Leandersson), who happens to be a vampire. What comes next is a twisted tale of revenge and pubescent love, made with visual flair (the swimming pool scene is already classic), creative directing and impressive performances by the young pair of protagonists. There is something big and stunning about this lovely storyline that riveted my attention from beginning to end. Moreover, the cinematography and atmosphere in this film are undeniably superb. The chemistry between the two preteen protagonists is outstanding and very believable. Everything in this film is well made and in synchronization. This movie does the same thing with vampirism — what would it be like if there was a vampire in 1970's Sweden? The most interesting thing about the picture to me is that it makes some really awful stuff seem sympathetic. The leading boy sends a bully to the E/R, apparently having ruined one of his ears for life. The leading girl kills people. But we're totally on their side.

Vargtimmen (Hour of the Wolf), 1968
Director: Ingmar Bergman
Stars: Max von Sydow, Liv Ullmann, Gertrud Fridh

Hong Kong horror

Hong Kong has made nice horror movies too, in the 70's as well as more recently, after 2000. Here are several of them, in short:

Lik Wong (Riki-Oh: The Story of Ricky), 1991
Director: Ngai Choi Lam
Stars: Siu-Wong Fan, Mei Sheng Fan, Ka-Kui Ho

The plot has Ricky, a kid with super strength, being thrown into prison for killing the man responsible for the girl he loved. This is a corrupt privatized prison of the future where the prisoners are nothing more than cheap labor. Ricky instantly gets under the skin of the wardens, the guards and the leaders of the prison population. Cartoon violence with bloody gory consequences ensues. Body parts go flying as Ricky fights to stay alive and help his fellow prisoners. This film could be seen as the Chinese version of "Braindead" aka "Dead Alive": a Bruce Lee-like guy named Ricky, who's talented with supernatural powers, fights his way through a corrupt prison and leaves a trail of blood and guts behind him. For sure, "Story of Ricky" is a very violent and gory flick, but everything in it is so exaggerated that no one should take it too seriously: there is gut-strangulation, people in a meat grinder, and for the showdown the leader of the prison mutates into a giant Kung Fu-fighting monster..! However, this film is so funny that you will laugh your heads off! Larded with typical Asian humor, "Story of Ricky" is one of the definite cult movies that were shot in the last decade! If you had fun with it, try also to find director Ngai Kai Lam's Indiana Jones-fantasy-action-gore-adventure "The Seventh Curse" (check out the review which I wrote under my former pseudonym "Daywalker"..!) starring Chow Yun Fat, which contains nearly the same amount of gore and curious ideas as this one! Two great films that would make a great double feature!!

Tung ngaan (The Child's Eye), 2010
Directors: Oxide Pang Chun, Danny Pang
Stars: Rainie Yang, Elanne Kwong, Shawn Yue

Spanish horror movies

Several Spanish horror films here. Made in Spain, in the Spanish language. 
[Rec], 2007
Directors: Jaume Balagueró, Paco Plaza
Stars: Manuela Velasco, Ferran Terraza, Jorge-Yamam Serrano
This movie played after Atonement in a double bill at the Venice Film Festival. Within fifteen minutes the previously full cinema was half empty, with people filing out of the auditorium in panicked droves. Those who stayed were treated to the proverbial roller-coaster ride and walked out of the Sala Biennale having shared a deeply traumatic yet brilliant experience. REC had me literally screaming with terror and enjoyment.
Tags 1987, 2000, 2001, 2007, 2010, 2011, 2012

Bela Lugosi, the famous Dracula
Probably the most famous, and one of the earliest, Draculas is Bela Lugosi. He played Count Dracula in the 1931 movie "Dracula", directed by Tod Browning. After that film, Bela Lugosi specialized in playing Dracula and his kin in horror movies, appearing in "Mother Riley Meets the Vampire", "Bud Abbott Lou Costello Meet Frankenstein", "Scared to Death", "Zombies on Broadway", "Voodoo Man", "The Return of the Vampire" and many others. Actor Martin Landau won an Oscar for portraying Bela Lugosi in the 1994 film "Ed Wood".
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,498
package org.apache.beam.runners.fnexecution.control; import static org.apache.beam.sdk.options.ExperimentalOptions.addExperiment; import static org.apache.beam.sdk.util.WindowedValue.valueInGlobalWindow; import static org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState; import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.Matchers.allOf; import static org.hamcrest.Matchers.containsInAnyOrder; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.not; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; import java.io.Serializable; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Set; import java.util.UUID; import java.util.concurrent.CompletionStage; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.ScheduledFuture; import java.util.concurrent.ThreadFactory; import java.util.concurrent.TimeUnit; import java.util.function.Function; import org.apache.beam.fn.harness.Caches; import org.apache.beam.fn.harness.FnHarness; import org.apache.beam.model.fnexecution.v1.BeamFnApi; import org.apache.beam.model.fnexecution.v1.BeamFnApi.ProcessBundleProgressResponse; import org.apache.beam.model.fnexecution.v1.BeamFnApi.ProcessBundleResponse; import org.apache.beam.model.fnexecution.v1.BeamFnApi.ProcessBundleSplitResponse; import org.apache.beam.model.fnexecution.v1.BeamFnApi.ProcessBundleSplitResponse.ChannelSplit; import org.apache.beam.model.pipeline.v1.MetricsApi.MonitoringInfo; import org.apache.beam.model.pipeline.v1.RunnerApi; import org.apache.beam.runners.core.construction.PTransformTranslation; import org.apache.beam.runners.core.construction.PipelineTranslation; import org.apache.beam.runners.core.construction.graph.ExecutableStage; import org.apache.beam.runners.core.construction.graph.FusedPipeline; import org.apache.beam.runners.core.construction.graph.GreedyPipelineFuser; import org.apache.beam.runners.core.construction.graph.PipelineNode.PTransformNode; import org.apache.beam.runners.core.construction.graph.ProtoOverrides; import org.apache.beam.runners.core.construction.graph.SplittableParDoExpander; import org.apache.beam.runners.core.metrics.DistributionData; import org.apache.beam.runners.core.metrics.ExecutionStateSampler; import org.apache.beam.runners.core.metrics.MonitoringInfoConstants; import org.apache.beam.runners.core.metrics.MonitoringInfoConstants.TypeUrns; import org.apache.beam.runners.core.metrics.MonitoringInfoConstants.Urns; import org.apache.beam.runners.core.metrics.MonitoringInfoMatchers; import org.apache.beam.runners.core.metrics.SimpleMonitoringInfoBuilder; import org.apache.beam.runners.fnexecution.control.ProcessBundleDescriptors.ExecutableProcessBundleDescriptor; import org.apache.beam.runners.fnexecution.control.SdkHarnessClient.BundleProcessor; import org.apache.beam.runners.fnexecution.data.GrpcDataService; import 
org.apache.beam.runners.fnexecution.logging.GrpcLoggingService; import org.apache.beam.runners.fnexecution.logging.Slf4jLogWriter; import org.apache.beam.runners.fnexecution.state.GrpcStateService; import org.apache.beam.runners.fnexecution.state.StateRequestHandler; import org.apache.beam.runners.fnexecution.state.StateRequestHandlers; import org.apache.beam.runners.fnexecution.state.StateRequestHandlers.BagUserStateHandler; import org.apache.beam.runners.fnexecution.state.StateRequestHandlers.BagUserStateHandlerFactory; import org.apache.beam.runners.fnexecution.state.StateRequestHandlers.IterableSideInputHandler; import org.apache.beam.runners.fnexecution.state.StateRequestHandlers.MultimapSideInputHandler; import org.apache.beam.runners.fnexecution.state.StateRequestHandlers.SideInputHandlerFactory; import org.apache.beam.sdk.Pipeline; import org.apache.beam.sdk.coders.BigEndianLongCoder; import org.apache.beam.sdk.coders.Coder; import org.apache.beam.sdk.coders.CoderException; import org.apache.beam.sdk.coders.KvCoder; import org.apache.beam.sdk.coders.StringUtf8Coder; import org.apache.beam.sdk.fn.channel.ManagedChannelFactory; import org.apache.beam.sdk.fn.data.FnDataReceiver; import org.apache.beam.sdk.fn.server.GrpcContextHeaderAccessorProvider; import org.apache.beam.sdk.fn.server.GrpcFnServer; import org.apache.beam.sdk.fn.server.InProcessServerFactory; import org.apache.beam.sdk.fn.stream.OutboundObserverFactory; import org.apache.beam.sdk.metrics.Metrics; import org.apache.beam.sdk.options.ExperimentalOptions; import org.apache.beam.sdk.options.PipelineOptions; import org.apache.beam.sdk.options.PipelineOptionsFactory; import org.apache.beam.sdk.state.BagState; import org.apache.beam.sdk.state.ReadableState; import org.apache.beam.sdk.state.StateSpec; import org.apache.beam.sdk.state.StateSpecs; import org.apache.beam.sdk.state.TimeDomain; import org.apache.beam.sdk.state.Timer; import org.apache.beam.sdk.state.TimerSpec; import org.apache.beam.sdk.state.TimerSpecs; import org.apache.beam.sdk.testing.ResetDateTimeProvider; import org.apache.beam.sdk.transforms.DoFn; import org.apache.beam.sdk.transforms.Flatten; import org.apache.beam.sdk.transforms.GroupByKey; import org.apache.beam.sdk.transforms.Impulse; import org.apache.beam.sdk.transforms.ParDo; import org.apache.beam.sdk.transforms.ParDo.SingleOutput; import org.apache.beam.sdk.transforms.View; import org.apache.beam.sdk.transforms.WithKeys; import org.apache.beam.sdk.transforms.splittabledofn.RestrictionTracker; import org.apache.beam.sdk.transforms.splittabledofn.SplitResult; import org.apache.beam.sdk.transforms.windowing.BoundedWindow; import org.apache.beam.sdk.transforms.windowing.GlobalWindow; import org.apache.beam.sdk.transforms.windowing.PaneInfo; import org.apache.beam.sdk.util.CoderUtils; import org.apache.beam.sdk.util.WindowedValue; import org.apache.beam.sdk.values.KV; import org.apache.beam.sdk.values.PCollection; import org.apache.beam.sdk.values.PCollectionList; import org.apache.beam.sdk.values.PCollectionView; import org.apache.beam.vendor.grpc.v1p43p2.com.google.protobuf.ByteString; import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Optional; import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.collect.ImmutableMap; import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.collect.Iterables; import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.collect.Iterators; import 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.ThreadFactoryBuilder; import org.hamcrest.Matcher; import org.hamcrest.Matchers; import org.hamcrest.collection.IsEmptyIterable; import org.hamcrest.collection.IsIterableContainingInOrder; import org.joda.time.DateTimeUtils; import org.joda.time.Duration; import org.junit.After; import org.junit.Rule; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.JUnit4; /** * Tests the execution of a pipeline from specification time to executing a single fused stage, * going through pipeline fusion. */ @RunWith(JUnit4.class) @SuppressWarnings({ "rawtypes", // TODO(https://issues.apache.org/jira/browse/BEAM-10556) "keyfor", "unused" // TODO(BEAM-13271): Remove when new version of errorprone is released (2.11.0) }) public class RemoteExecutionTest implements Serializable { @Rule public transient ResetDateTimeProvider resetDateTimeProvider = new ResetDateTimeProvider(); private static final String WORKER_ID = "remote_test"; private transient GrpcFnServer<FnApiControlClientPoolService> controlServer; private transient GrpcFnServer<GrpcDataService> dataServer; private transient GrpcFnServer<GrpcStateService> stateServer; private transient GrpcFnServer<GrpcLoggingService> loggingServer; private transient GrpcStateService stateDelegator; private transient SdkHarnessClient controlClient; private transient ExecutorService serverExecutor; private transient ExecutorService sdkHarnessExecutor; private transient Future<?> sdkHarnessExecutorFuture; public void launchSdkHarness(PipelineOptions options) throws Exception { // Setup execution-time servers ThreadFactory threadFactory = new ThreadFactoryBuilder().setDaemon(true).build(); serverExecutor = Executors.newCachedThreadPool(threadFactory); InProcessServerFactory serverFactory = InProcessServerFactory.create(); dataServer = GrpcFnServer.allocatePortAndCreateFor( GrpcDataService.create( PipelineOptionsFactory.create(), serverExecutor, OutboundObserverFactory.serverDirect()), serverFactory); loggingServer = GrpcFnServer.allocatePortAndCreateFor( GrpcLoggingService.forWriter(Slf4jLogWriter.getDefault()), serverFactory); stateDelegator = GrpcStateService.create(); stateServer = GrpcFnServer.allocatePortAndCreateFor(stateDelegator, serverFactory); ControlClientPool clientPool = MapControlClientPool.create(); controlServer = GrpcFnServer.allocatePortAndCreateFor( FnApiControlClientPoolService.offeringClientsToPool( clientPool.getSink(), GrpcContextHeaderAccessorProvider.getHeaderAccessor()), serverFactory); // Create the SDK harness, and wait until it connects sdkHarnessExecutor = Executors.newSingleThreadExecutor(threadFactory); sdkHarnessExecutorFuture = sdkHarnessExecutor.submit( () -> { try { FnHarness.main( WORKER_ID, options, Collections.emptySet(), // Runner capabilities. 
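// Hand the harness the logging and control endpoints plus an in-process channel factory and cache.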
loggingServer.getApiServiceDescriptor(), controlServer.getApiServiceDescriptor(), null, ManagedChannelFactory.createInProcess(), OutboundObserverFactory.clientDirect(), Caches.eternal()); } catch (Exception e) { throw new RuntimeException(e); } }); InstructionRequestHandler controlClient = clientPool.getSource().take(WORKER_ID, java.time.Duration.ofSeconds(2)); this.controlClient = SdkHarnessClient.usingFnApiClient(controlClient, dataServer.getService()); } @After public void tearDown() throws Exception { controlServer.close(); stateServer.close(); dataServer.close(); loggingServer.close(); controlClient.close(); sdkHarnessExecutor.shutdownNow(); serverExecutor.shutdownNow(); try { sdkHarnessExecutorFuture.get(); } catch (ExecutionException e) { if (e.getCause() instanceof RuntimeException && e.getCause().getCause() instanceof InterruptedException) { // expected } else { throw e; } } } @Test public void testExecution() throws Exception { launchSdkHarness(PipelineOptionsFactory.create()); Pipeline p = Pipeline.create(); p.apply("impulse", Impulse.create()) .apply( "create", ParDo.of( new DoFn<byte[], String>() { @ProcessElement public void process(ProcessContext ctxt) { ctxt.output("zero"); ctxt.output("one"); ctxt.output("two"); } })) .apply( "len", ParDo.of( new DoFn<String, Long>() { @ProcessElement public void process(ProcessContext ctxt) { ctxt.output((long) ctxt.element().length()); } })) .apply("addKeys", WithKeys.of("foo")) // Use some unknown coders .setCoder(KvCoder.of(StringUtf8Coder.of(), BigEndianLongCoder.of())) // Force the output to be materialized .apply("gbk", GroupByKey.create()); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(p); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineProto); checkState(fused.getFusedStages().size() == 1, "Expected exactly one fused stage"); ExecutableStage stage = fused.getFusedStages().iterator().next(); ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( "my_stage", stage, dataServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations()); Map<String, ? super Coder<WindowedValue<?>>> remoteOutputCoders = descriptor.getRemoteOutputCoders(); Map<String, Collection<? super WindowedValue<?>>> outputValues = new HashMap<>(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, ? super Coder<WindowedValue<?>>> remoteOutputCoder : remoteOutputCoders.entrySet()) { List<? super WindowedValue<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); outputValues.put(remoteOutputCoder.getKey(), outputContents); outputReceivers.put( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder) remoteOutputCoder.getValue(), (FnDataReceiver<? super WindowedValue<?>>) outputContents::add)); } // The impulse example try (RemoteBundle bundle = processor.newBundle(outputReceivers, BundleProgressHandler.ignored())) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow(new byte[0])); } for (Collection<? 
super WindowedValue<?>> windowedValues : outputValues.values()) { assertThat( windowedValues, containsInAnyOrder( valueInGlobalWindow(byteValueOf("foo", 4)), valueInGlobalWindow(byteValueOf("foo", 3)), valueInGlobalWindow(byteValueOf("foo", 3)))); } } @Test public void testBundleProcessorThrowsExecutionExceptionWhenUserCodeThrows() throws Exception { launchSdkHarness(PipelineOptionsFactory.create()); Pipeline p = Pipeline.create(); p.apply("impulse", Impulse.create()) .apply( "create", ParDo.of( new DoFn<byte[], KV<String, String>>() { @ProcessElement public void process(ProcessContext ctxt) throws Exception { String element = CoderUtils.decodeFromByteArray(StringUtf8Coder.of(), ctxt.element()); if (element.equals("X")) { throw new Exception("testBundleExecutionFailure"); } ctxt.output(KV.of(element, element)); } })) .apply("gbk", GroupByKey.create()); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(p); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineProto); checkState(fused.getFusedStages().size() == 1, "Expected exactly one fused stage"); ExecutableStage stage = fused.getFusedStages().iterator().next(); ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( "my_stage", stage, dataServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations()); Map<String, ? super Coder<WindowedValue<?>>> remoteOutputCoders = descriptor.getRemoteOutputCoders(); Map<String, Collection<? super WindowedValue<?>>> outputValues = new HashMap<>(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, ? super Coder<WindowedValue<?>>> remoteOutputCoder : remoteOutputCoders.entrySet()) { List<? super WindowedValue<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); outputValues.put(remoteOutputCoder.getKey(), outputContents); outputReceivers.put( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder) remoteOutputCoder.getValue(), (FnDataReceiver<? super WindowedValue<?>>) outputContents::add)); } try (RemoteBundle bundle = processor.newBundle(outputReceivers, BundleProgressHandler.ignored())) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow(CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "Y"))); } try { try (RemoteBundle bundle = processor.newBundle(outputReceivers, BundleProgressHandler.ignored())) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow(CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "X"))); } // Fail the test if we reach this point and never threw the exception. fail(); } catch (ExecutionException e) { assertTrue(e.getMessage().contains("testBundleExecutionFailure")); } try (RemoteBundle bundle = processor.newBundle(outputReceivers, BundleProgressHandler.ignored())) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow(CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "Z"))); } for (Collection<? 
super WindowedValue<?>> windowedValues : outputValues.values()) { assertThat( windowedValues, containsInAnyOrder( valueInGlobalWindow(KV.of("Y", "Y")), valueInGlobalWindow(KV.of("Z", "Z")))); } } @Test public void testExecutionWithSideInput() throws Exception { launchSdkHarness(PipelineOptionsFactory.create()); Pipeline p = Pipeline.create(); addExperiment(p.getOptions().as(ExperimentalOptions.class), "beam_fn_api"); // TODO(BEAM-10097): Remove experiment once all portable runners support this view type addExperiment(p.getOptions().as(ExperimentalOptions.class), "use_runner_v2"); PCollection<String> input = p.apply("impulse", Impulse.create()) .apply( "create", ParDo.of( new DoFn<byte[], String>() { @ProcessElement public void process(ProcessContext ctxt) { ctxt.output("zero"); ctxt.output("one"); ctxt.output("two"); } })) .setCoder(StringUtf8Coder.of()); PCollectionView<Iterable<String>> iterableView = input.apply("createIterableSideInput", View.asIterable()); PCollectionView<Map<String, Iterable<String>>> multimapView = input.apply(WithKeys.of("key")).apply("createMultimapSideInput", View.asMultimap()); input .apply( "readSideInput", ParDo.of( new DoFn<String, KV<String, String>>() { @ProcessElement public void processElement(ProcessContext context) { for (String value : context.sideInput(iterableView)) { context.output(KV.of(context.element(), value)); } for (Map.Entry<String, Iterable<String>> entry : context.sideInput(multimapView).entrySet()) { for (String value : entry.getValue()) { context.output(KV.of(context.element(), entry.getKey() + ":" + value)); } } } }) .withSideInputs(iterableView, multimapView)) .setCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())) // Force the output to be materialized .apply("gbk", GroupByKey.create()); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(p); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineProto); Optional<ExecutableStage> optionalStage = Iterables.tryFind( fused.getFusedStages(), (ExecutableStage stage) -> !stage.getSideInputs().isEmpty()); checkState(optionalStage.isPresent(), "Expected a stage with side inputs."); ExecutableStage stage = optionalStage.get(); ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( "test_stage", stage, dataServer.getApiServiceDescriptor(), stateServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations(), stateDelegator); Map<String, Coder> remoteOutputCoders = descriptor.getRemoteOutputCoders(); Map<String, Collection<WindowedValue<?>>> outputValues = new HashMap<>(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, Coder> remoteOutputCoder : remoteOutputCoders.entrySet()) { List<WindowedValue<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); outputValues.put(remoteOutputCoder.getKey(), outputContents); outputReceivers.put( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder<WindowedValue<?>>) remoteOutputCoder.getValue(), outputContents::add)); } StateRequestHandler stateRequestHandler = StateRequestHandlers.forSideInputHandlerFactory( descriptor.getSideInputSpecs(), new SideInputHandlerFactory() { @Override public <V, W extends BoundedWindow> IterableSideInputHandler<V, W> forIterableSideInput( String pTransformId, String sideInputId, Coder<V> elementCoder, Coder<W> windowCoder) { return new IterableSideInputHandler<V, W>() { @Override public Iterable<V> 
get(W window) { return (Iterable) Arrays.asList("A", "B", "C"); } @Override public Coder<V> elementCoder() { return elementCoder; } }; } @Override public <K, V, W extends BoundedWindow> MultimapSideInputHandler<K, V, W> forMultimapSideInput( String pTransformId, String sideInputId, KvCoder<K, V> elementCoder, Coder<W> windowCoder) { return new MultimapSideInputHandler<K, V, W>() { @Override public Iterable<K> get(W window) { return (Iterable) Arrays.asList("key1", "key2"); } @Override public Iterable<V> get(K key, W window) { if ("key1".equals(key)) { return (Iterable) Arrays.asList("H", "I", "J"); } else if ("key2".equals(key)) { return (Iterable) Arrays.asList("M", "N", "O"); } return Collections.emptyList(); } @Override public Coder<K> keyCoder() { return elementCoder.getKeyCoder(); } @Override public Coder<V> valueCoder() { return elementCoder.getValueCoder(); } }; } }); BundleProgressHandler progressHandler = BundleProgressHandler.ignored(); try (RemoteBundle bundle = processor.newBundle(outputReceivers, stateRequestHandler, progressHandler)) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow("X")); Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow("Y")); } for (Collection<WindowedValue<?>> windowedValues : outputValues.values()) { assertThat( windowedValues, containsInAnyOrder( valueInGlobalWindow(KV.of("X", "A")), valueInGlobalWindow(KV.of("X", "B")), valueInGlobalWindow(KV.of("X", "C")), valueInGlobalWindow(KV.of("X", "key1:H")), valueInGlobalWindow(KV.of("X", "key1:I")), valueInGlobalWindow(KV.of("X", "key1:J")), valueInGlobalWindow(KV.of("X", "key2:M")), valueInGlobalWindow(KV.of("X", "key2:N")), valueInGlobalWindow(KV.of("X", "key2:O")), valueInGlobalWindow(KV.of("Y", "A")), valueInGlobalWindow(KV.of("Y", "B")), valueInGlobalWindow(KV.of("Y", "C")), valueInGlobalWindow(KV.of("Y", "key1:H")), valueInGlobalWindow(KV.of("Y", "key1:I")), valueInGlobalWindow(KV.of("Y", "key1:J")), valueInGlobalWindow(KV.of("Y", "key2:M")), valueInGlobalWindow(KV.of("Y", "key2:N")), valueInGlobalWindow(KV.of("Y", "key2:O")))); } } @Test public void testExecutionWithSideInputCaching() throws Exception { Pipeline p = Pipeline.create(); addExperiment(p.getOptions().as(ExperimentalOptions.class), "beam_fn_api"); // TODO(BEAM-10097): Remove experiment once all portable runners support this view type addExperiment(p.getOptions().as(ExperimentalOptions.class), "use_runner_v2"); launchSdkHarness(p.getOptions()); PCollection<String> input = p.apply("impulse", Impulse.create()) .apply( "create", ParDo.of( new DoFn<byte[], String>() { @ProcessElement public void process(ProcessContext ctxt) { ctxt.output("zero"); ctxt.output("one"); ctxt.output("two"); } })) .setCoder(StringUtf8Coder.of()); PCollectionView<Iterable<String>> iterableView = input.apply("createIterableSideInput", View.asIterable()); PCollectionView<Map<String, Iterable<String>>> multimapView = input.apply(WithKeys.of("key")).apply("createMultimapSideInput", View.asMultimap()); input .apply( "readSideInput", ParDo.of( new DoFn<String, KV<String, String>>() { @ProcessElement public void processElement(ProcessContext context) { for (String value : context.sideInput(iterableView)) { context.output(KV.of(context.element(), value)); } for (Map.Entry<String, Iterable<String>> entry : context.sideInput(multimapView).entrySet()) { for (String value : entry.getValue()) { context.output(KV.of(context.element(), entry.getKey() + ":" + value)); } } } }) 
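// Register both views on the ParDo so the runner knows to materialize them as side inputs.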
.withSideInputs(iterableView, multimapView)) .setCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())) // Force the output to be materialized .apply("gbk", GroupByKey.create()); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(p); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineProto); Optional<ExecutableStage> optionalStage = Iterables.tryFind( fused.getFusedStages(), (ExecutableStage stage) -> !stage.getSideInputs().isEmpty()); checkState(optionalStage.isPresent(), "Expected a stage with side inputs."); ExecutableStage stage = optionalStage.get(); ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( "test_stage", stage, dataServer.getApiServiceDescriptor(), stateServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations(), stateDelegator); Map<String, Coder> remoteOutputCoders = descriptor.getRemoteOutputCoders(); Map<String, Collection<WindowedValue<?>>> outputValues = new HashMap<>(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, Coder> remoteOutputCoder : remoteOutputCoders.entrySet()) { List<WindowedValue<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); outputValues.put(remoteOutputCoder.getKey(), outputContents); outputReceivers.put( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder<WindowedValue<?>>) remoteOutputCoder.getValue(), outputContents::add)); } StoringStateRequestHandler stateRequestHandler = new StoringStateRequestHandler( StateRequestHandlers.forSideInputHandlerFactory( descriptor.getSideInputSpecs(), new SideInputHandlerFactory() { @Override public <V, W extends BoundedWindow> IterableSideInputHandler<V, W> forIterableSideInput( String pTransformId, String sideInputId, Coder<V> elementCoder, Coder<W> windowCoder) { return new IterableSideInputHandler<V, W>() { @Override public Iterable<V> get(W window) { return (Iterable) Arrays.asList("A", "B", "C"); } @Override public Coder<V> elementCoder() { return elementCoder; } }; } @Override public <K, V, W extends BoundedWindow> MultimapSideInputHandler<K, V, W> forMultimapSideInput( String pTransformId, String sideInputId, KvCoder<K, V> elementCoder, Coder<W> windowCoder) { return new MultimapSideInputHandler<K, V, W>() { @Override public Iterable<K> get(W window) { return (Iterable) Arrays.asList("key1", "key2"); } @Override public Iterable<V> get(K key, W window) { if ("key1".equals(key)) { return (Iterable) Arrays.asList("H", "I", "J"); } else if ("key2".equals(key)) { return (Iterable) Arrays.asList("M", "N", "O"); } return Collections.emptyList(); } @Override public Coder<K> keyCoder() { return elementCoder.getKeyCoder(); } @Override public Coder<V> valueCoder() { return elementCoder.getValueCoder(); } }; } })); String transformId = Iterables.get(stage.getSideInputs(), 0).transform().getId(); stateRequestHandler.addCacheToken( BeamFnApi.ProcessBundleRequest.CacheToken.newBuilder() .setSideInput( BeamFnApi.ProcessBundleRequest.CacheToken.SideInput.newBuilder() .setSideInputId(iterableView.getTagInternal().getId()) .setTransformId(transformId) .build()) .setToken(ByteString.copyFromUtf8("IterableSideInputToken")) .build()); stateRequestHandler.addCacheToken( BeamFnApi.ProcessBundleRequest.CacheToken.newBuilder() .setSideInput( BeamFnApi.ProcessBundleRequest.CacheToken.SideInput.newBuilder() .setSideInputId(multimapView.getTagInternal().getId()) 
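// A second cache token, this one scoping the multimap side input to the same transform.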
.setTransformId(transformId) .build()) .setToken(ByteString.copyFromUtf8("MulitmapSideInputToken")) .build()); BundleProgressHandler progressHandler = BundleProgressHandler.ignored(); try (RemoteBundle bundle = processor.newBundle(outputReceivers, stateRequestHandler, progressHandler)) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow("X")); } try (RemoteBundle bundle = processor.newBundle(outputReceivers, stateRequestHandler, progressHandler)) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow("Y")); } for (Collection<WindowedValue<?>> windowedValues : outputValues.values()) { assertThat( windowedValues, containsInAnyOrder( valueInGlobalWindow(KV.of("X", "A")), valueInGlobalWindow(KV.of("X", "B")), valueInGlobalWindow(KV.of("X", "C")), valueInGlobalWindow(KV.of("X", "key1:H")), valueInGlobalWindow(KV.of("X", "key1:I")), valueInGlobalWindow(KV.of("X", "key1:J")), valueInGlobalWindow(KV.of("X", "key2:M")), valueInGlobalWindow(KV.of("X", "key2:N")), valueInGlobalWindow(KV.of("X", "key2:O")), valueInGlobalWindow(KV.of("Y", "A")), valueInGlobalWindow(KV.of("Y", "B")), valueInGlobalWindow(KV.of("Y", "C")), valueInGlobalWindow(KV.of("Y", "key1:H")), valueInGlobalWindow(KV.of("Y", "key1:I")), valueInGlobalWindow(KV.of("Y", "key1:J")), valueInGlobalWindow(KV.of("Y", "key2:M")), valueInGlobalWindow(KV.of("Y", "key2:N")), valueInGlobalWindow(KV.of("Y", "key2:O")))); } // Expect the following requests for the first bundle: // * one to read iterable side input // * one to read keys from multimap side input // * one to read key1 iterable from multimap side input // * one to read key2 iterable from multimap side input assertEquals(4, stateRequestHandler.receivedRequests.size()); assertEquals( stateRequestHandler.receivedRequests.get(0).getStateKey().getIterableSideInput(), BeamFnApi.StateKey.IterableSideInput.newBuilder() .setSideInputId(iterableView.getTagInternal().getId()) .setTransformId(transformId) .build()); assertEquals( stateRequestHandler.receivedRequests.get(1).getStateKey().getMultimapKeysSideInput(), BeamFnApi.StateKey.MultimapKeysSideInput.newBuilder() .setSideInputId(multimapView.getTagInternal().getId()) .setTransformId(transformId) .build()); assertEquals( stateRequestHandler.receivedRequests.get(2).getStateKey().getMultimapSideInput(), BeamFnApi.StateKey.MultimapSideInput.newBuilder() .setSideInputId(multimapView.getTagInternal().getId()) .setTransformId(transformId) .setKey(encode("key1")) .build()); assertEquals( stateRequestHandler.receivedRequests.get(3).getStateKey().getMultimapSideInput(), BeamFnApi.StateKey.MultimapSideInput.newBuilder() .setSideInputId(multimapView.getTagInternal().getId()) .setTransformId(transformId) .setKey(encode("key2")) .build()); } private static ByteString encode(String value) throws Exception { ByteString.Output output = ByteString.newOutput(); StringUtf8Coder.of().encode(value, output); return output.toByteString(); } /** * A {@link DoFn} that uses static maps of {@link CountDownLatch}es to block execution allowing * for synchronization during test execution. 
The expected flow is: * * <ol> * <li>Runner -> wait for AFTER_PROCESS * <li>SDK -> unlock AFTER_PROCESS and wait for ALLOW_COMPLETION * <li>Runner -> issue progress request and on response unlock ALLOW_COMPLETION * </ol> */ private static class MetricsDoFn extends DoFn<byte[], String> { private static final String PROCESS_USER_COUNTER_NAME = "processUserCounter"; private static final String START_USER_COUNTER_NAME = "startUserCounter"; private static final String FINISH_USER_COUNTER_NAME = "finishUserCounter"; private static final String PROCESS_USER_DISTRIBUTION_NAME = "processUserDistribution"; private static final String START_USER_DISTRIBUTION_NAME = "startUserDistribution"; private static final String FINISH_USER_DISTRIBUTION_NAME = "finishUserDistribution"; private static final ConcurrentMap<String, CountDownLatch> AFTER_PROCESS = new ConcurrentHashMap<>(); private static final ConcurrentMap<String, CountDownLatch> ALLOW_COMPLETION = new ConcurrentHashMap<>(); private final String uuid = UUID.randomUUID().toString(); public MetricsDoFn() { AFTER_PROCESS.put(uuid, new CountDownLatch(1)); ALLOW_COMPLETION.put(uuid, new CountDownLatch(1)); } @StartBundle public void startBundle() throws InterruptedException { Metrics.counter(RemoteExecutionTest.class, START_USER_COUNTER_NAME).inc(10); Metrics.distribution(RemoteExecutionTest.class, START_USER_DISTRIBUTION_NAME).update(10); ExecutionStateSampler.instance().doSampling(1); } @ProcessElement public void processElement(ProcessContext ctxt) throws InterruptedException { ctxt.output("zero"); ctxt.output("one"); ctxt.output("two"); Metrics.counter(RemoteExecutionTest.class, PROCESS_USER_COUNTER_NAME).inc(); Metrics.distribution(RemoteExecutionTest.class, PROCESS_USER_DISTRIBUTION_NAME).update(1); ExecutionStateSampler.instance().doSampling(2); AFTER_PROCESS.get(uuid).countDown(); checkState( ALLOW_COMPLETION.get(uuid).await(60, TimeUnit.SECONDS), "Failed to wait for DoFn to be allowed to complete."); } @FinishBundle public void finishBundle() throws InterruptedException { Metrics.counter(RemoteExecutionTest.class, FINISH_USER_COUNTER_NAME).inc(100); Metrics.distribution(RemoteExecutionTest.class, FINISH_USER_DISTRIBUTION_NAME).update(100); ExecutionStateSampler.instance().doSampling(3); } } @Test @SuppressWarnings("FutureReturnValueIgnored") public void testMetrics() throws Exception { launchSdkHarness(PipelineOptionsFactory.create()); MetricsDoFn metricsDoFn = new MetricsDoFn(); Pipeline p = Pipeline.create(); PCollection<String> input = p.apply("impulse", Impulse.create()) .apply("create", ParDo.of(metricsDoFn)) .setCoder(StringUtf8Coder.of()); SingleOutput<String, String> pardo = ParDo.of( new DoFn<String, String>() { @ProcessElement public void process(ProcessContext ctxt) { // Output the element twice to keep unique numbers in asserts, 6 output elements. 
ctxt.output(ctxt.element()); ctxt.output(ctxt.element()); } }); input.apply("processA", pardo).setCoder(StringUtf8Coder.of()); input.apply("processB", pardo).setCoder(StringUtf8Coder.of()); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(p); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineProto); Optional<ExecutableStage> optionalStage = Iterables.tryFind(fused.getFusedStages(), (ExecutableStage stage) -> true); checkState(optionalStage.isPresent(), "Expected a stage with side inputs."); ExecutableStage stage = optionalStage.get(); ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( "test_stage", stage, dataServer.getApiServiceDescriptor(), stateServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations(), stateDelegator); Map<String, Coder> remoteOutputCoders = descriptor.getRemoteOutputCoders(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, Coder> remoteOutputCoder : remoteOutputCoders.entrySet()) { List<WindowedValue<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); outputReceivers.put( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder<WindowedValue<?>>) remoteOutputCoder.getValue(), outputContents::add)); } final String testPTransformId = "create-ParMultiDo-Metrics-"; BundleProgressHandler progressHandler = new BundleProgressHandler() { @Override public void onProgress(ProcessBundleProgressResponse response) { MetricsDoFn.ALLOW_COMPLETION.get(metricsDoFn.uuid).countDown(); List<Matcher<MonitoringInfo>> matchers = new ArrayList<>(); // We expect all user counters except for the ones in @FinishBundle // Since non-user metrics are registered at bundle creation time, they will still report // values most of which will be 0. SimpleMonitoringInfoBuilder builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_SUM_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.PROCESS_USER_COUNTER_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64SumValue(1); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_SUM_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel(MonitoringInfoConstants.Labels.NAME, MetricsDoFn.START_USER_COUNTER_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64SumValue(10); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_SUM_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.FINISH_USER_COUNTER_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); matchers.add(not(MonitoringInfoMatchers.matchSetFields(builder.build()))); // User Distributions. 
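// In-flight progress should report the process/start user distributions but not @FinishBundle's yet.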
builder .setUrn(MonitoringInfoConstants.Urns.USER_DISTRIBUTION_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.PROCESS_USER_DISTRIBUTION_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64DistributionValue(DistributionData.create(1, 1, 1, 1)); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_DISTRIBUTION_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.START_USER_DISTRIBUTION_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64DistributionValue(DistributionData.create(10, 1, 10, 10)); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_DISTRIBUTION_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.FINISH_USER_DISTRIBUTION_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); matchers.add(not(MonitoringInfoMatchers.matchSetFields(builder.build()))); assertThat( response.getMonitoringInfosList(), Matchers.hasItems(matchers.toArray(new Matcher[0]))); } @Override public void onCompleted(ProcessBundleResponse response) { List<Matcher<MonitoringInfo>> matchers = new ArrayList<>(); // User Counters. SimpleMonitoringInfoBuilder builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_SUM_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.PROCESS_USER_COUNTER_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64SumValue(1); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_SUM_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel(MonitoringInfoConstants.Labels.NAME, MetricsDoFn.START_USER_COUNTER_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64SumValue(10); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_SUM_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.FINISH_USER_COUNTER_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64SumValue(100); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); // User Distributions. 
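// The completed-bundle response must include all three user distributions, @FinishBundle's included.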
builder .setUrn(MonitoringInfoConstants.Urns.USER_DISTRIBUTION_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.PROCESS_USER_DISTRIBUTION_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64DistributionValue(DistributionData.create(1, 1, 1, 1)); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_DISTRIBUTION_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.START_USER_DISTRIBUTION_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64DistributionValue(DistributionData.create(10, 1, 10, 10)); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder .setUrn(MonitoringInfoConstants.Urns.USER_DISTRIBUTION_INT64) .setLabel( MonitoringInfoConstants.Labels.NAMESPACE, RemoteExecutionTest.class.getName()) .setLabel( MonitoringInfoConstants.Labels.NAME, MetricsDoFn.FINISH_USER_DISTRIBUTION_NAME); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); builder.setInt64DistributionValue(DistributionData.create(100, 1, 100, 100)); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); // The element counter should be counted only once for the pcollection. // So there should be only two elements. builder = new SimpleMonitoringInfoBuilder(); builder.setUrn(MonitoringInfoConstants.Urns.ELEMENT_COUNT); builder.setLabel(MonitoringInfoConstants.Labels.PCOLLECTION, "impulse.out"); builder.setInt64SumValue(1); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder.setUrn(MonitoringInfoConstants.Urns.ELEMENT_COUNT); builder.setLabel( MonitoringInfoConstants.Labels.PCOLLECTION, "create/ParMultiDo(Metrics).output"); builder.setInt64SumValue(3); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); // Verify that the element count is not double counted if two PCollections consume it. 
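// Each fused branch emits every input element twice: 3 inputs x 2 outputs = 6 elements per PCollection.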
builder = new SimpleMonitoringInfoBuilder(); builder.setUrn(MonitoringInfoConstants.Urns.ELEMENT_COUNT); builder.setLabel( MonitoringInfoConstants.Labels.PCOLLECTION, "processA/ParMultiDo(Anonymous).output"); builder.setInt64SumValue(6); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); builder = new SimpleMonitoringInfoBuilder(); builder.setUrn(MonitoringInfoConstants.Urns.ELEMENT_COUNT); builder.setLabel( MonitoringInfoConstants.Labels.PCOLLECTION, "processB/ParMultiDo(Anonymous).output"); builder.setInt64SumValue(6); matchers.add(MonitoringInfoMatchers.matchSetFields(builder.build())); // Check for execution time metrics for the testPTransformId builder = new SimpleMonitoringInfoBuilder(); builder.setUrn(MonitoringInfoConstants.Urns.START_BUNDLE_MSECS); builder.setType(TypeUrns.SUM_INT64_TYPE); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); matchers.add( allOf( MonitoringInfoMatchers.matchSetFields(builder.build()), MonitoringInfoMatchers.counterValueGreaterThanOrEqualTo(1))); // Check for execution time metrics for the testPTransformId builder = new SimpleMonitoringInfoBuilder(); builder.setUrn(Urns.PROCESS_BUNDLE_MSECS); builder.setType(TypeUrns.SUM_INT64_TYPE); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); matchers.add( allOf( MonitoringInfoMatchers.matchSetFields(builder.build()), MonitoringInfoMatchers.counterValueGreaterThanOrEqualTo(2))); builder = new SimpleMonitoringInfoBuilder(); builder.setUrn(Urns.FINISH_BUNDLE_MSECS); builder.setType(TypeUrns.SUM_INT64_TYPE); builder.setLabel(MonitoringInfoConstants.Labels.PTRANSFORM, testPTransformId); matchers.add( allOf( MonitoringInfoMatchers.matchSetFields(builder.build()), MonitoringInfoMatchers.counterValueGreaterThanOrEqualTo(3))); assertThat( response.getMonitoringInfosList(), Matchers.hasItems(matchers.toArray(new Matcher[0]))); } }; ExecutorService executor = Executors.newSingleThreadExecutor(); try (RemoteBundle bundle = processor.newBundle(outputReceivers, StateRequestHandler.unsupported(), progressHandler)) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow(CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "X"))); executor.submit( () -> { checkState( MetricsDoFn.AFTER_PROCESS.get(metricsDoFn.uuid).await(60, TimeUnit.SECONDS), "Runner waited too long for DoFn to get to AFTER_PROCESS."); bundle.requestProgress(); return (Void) null; }); } executor.shutdown(); } @Test public void testExecutionWithUserState() throws Exception { launchSdkHarness(PipelineOptionsFactory.create()); Pipeline p = Pipeline.create(); final String stateId = "foo"; final String stateId2 = "foo2"; p.apply("impulse", Impulse.create()) .apply( "create", ParDo.of( new DoFn<byte[], KV<String, String>>() { @ProcessElement public void process(ProcessContext ctxt) {} })) .setCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())) .apply( "userState", ParDo.of( new DoFn<KV<String, String>, KV<String, String>>() { @StateId(stateId) private final StateSpec<BagState<String>> bufferState = StateSpecs.bag(StringUtf8Coder.of()); @StateId(stateId2) private final StateSpec<BagState<String>> bufferState2 = StateSpecs.bag(StringUtf8Coder.of()); @ProcessElement public void processElement( @Element KV<String, String> element, @StateId(stateId) BagState<String> state, @StateId(stateId2) BagState<String> state2, OutputReceiver<KV<String, String>> r) { for (String value : state.read()) { r.output(KV.of(element.getKey(), value)); } 
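// Append the incoming value to the first bag and clear the second, exercising both user states.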
state.add(element.getValue()); state2.clear(); } })) // Force the output to be materialized .apply("gbk", GroupByKey.create()); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(p); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineProto); Optional<ExecutableStage> optionalStage = Iterables.tryFind( fused.getFusedStages(), (ExecutableStage stage) -> !stage.getUserStates().isEmpty()); checkState(optionalStage.isPresent(), "Expected a stage with user state."); ExecutableStage stage = optionalStage.get(); ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( "test_stage", stage, dataServer.getApiServiceDescriptor(), stateServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations(), stateDelegator); Map<String, Coder> remoteOutputCoders = descriptor.getRemoteOutputCoders(); Map<String, Collection<WindowedValue<?>>> outputValues = new HashMap<>(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, Coder> remoteOutputCoder : remoteOutputCoders.entrySet()) { List<WindowedValue<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); outputValues.put(remoteOutputCoder.getKey(), outputContents); outputReceivers.put( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder<WindowedValue<?>>) remoteOutputCoder.getValue(), outputContents::add)); } Map<String, List<ByteString>> userStateData = ImmutableMap.of( stateId, new ArrayList( Arrays.asList( ByteString.copyFrom( CoderUtils.encodeToByteArray( StringUtf8Coder.of(), "A", Coder.Context.NESTED)), ByteString.copyFrom( CoderUtils.encodeToByteArray( StringUtf8Coder.of(), "B", Coder.Context.NESTED)), ByteString.copyFrom( CoderUtils.encodeToByteArray( StringUtf8Coder.of(), "C", Coder.Context.NESTED)))), stateId2, new ArrayList( Arrays.asList( ByteString.copyFrom( CoderUtils.encodeToByteArray( StringUtf8Coder.of(), "D", Coder.Context.NESTED))))); StateRequestHandler stateRequestHandler = StateRequestHandlers.forBagUserStateHandlerFactory( descriptor, new BagUserStateHandlerFactory<ByteString, Object, BoundedWindow>() { @Override public BagUserStateHandler<ByteString, Object, BoundedWindow> forUserState( String pTransformId, String userStateId, Coder<ByteString> keyCoder, Coder<Object> valueCoder, Coder<BoundedWindow> windowCoder) { return new BagUserStateHandler<ByteString, Object, BoundedWindow>() { @Override public Iterable<Object> get(ByteString key, BoundedWindow window) { return (Iterable) userStateData.get(userStateId); } @Override public void append( ByteString key, BoundedWindow window, Iterator<Object> values) { Iterators.addAll(userStateData.get(userStateId), (Iterator) values); } @Override public void clear(ByteString key, BoundedWindow window) { userStateData.get(userStateId).clear(); } }; } }); try (RemoteBundle bundle = processor.newBundle( outputReceivers, stateRequestHandler, BundleProgressHandler.ignored())) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow(KV.of("X", "Y"))); } for (Collection<WindowedValue<?>> windowedValues : outputValues.values()) { assertThat( windowedValues, containsInAnyOrder( valueInGlobalWindow(KV.of("X", "A")), valueInGlobalWindow(KV.of("X", "B")), valueInGlobalWindow(KV.of("X", "C")))); } assertThat( userStateData.get(stateId), IsIterableContainingInOrder.contains( ByteString.copyFrom( CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "A", 
Coder.Context.NESTED)), ByteString.copyFrom( CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "B", Coder.Context.NESTED)), ByteString.copyFrom( CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "C", Coder.Context.NESTED)), ByteString.copyFrom( CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "Y", Coder.Context.NESTED)))); assertThat(userStateData.get(stateId2), IsEmptyIterable.emptyIterable()); } @Test public void testExecutionWithUserStateCaching() throws Exception { Pipeline p = Pipeline.create(); launchSdkHarness(p.getOptions()); final String stateId = "foo"; final String stateId2 = "bar"; p.apply("impulse", Impulse.create()) .apply( "create", ParDo.of( new DoFn<byte[], KV<String, String>>() { @ProcessElement public void process(ProcessContext ctxt) {} })) .setCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())) .apply( "userState", ParDo.of( new DoFn<KV<String, String>, KV<String, String>>() { @StateId(stateId) private final StateSpec<BagState<String>> bufferState = StateSpecs.bag(StringUtf8Coder.of()); @StateId(stateId2) private final StateSpec<BagState<String>> bufferState2 = StateSpecs.bag(StringUtf8Coder.of()); @ProcessElement public void processElement( @Element KV<String, String> element, @StateId(stateId) BagState<String> state, @StateId(stateId2) BagState<String> state2, OutputReceiver<KV<String, String>> r) { for (String value : state.read()) { r.output(KV.of(element.getKey(), value)); } ReadableState<Boolean> isEmpty = state2.isEmpty(); if (isEmpty.read()) { r.output(KV.of(element.getKey(), "Empty")); } else { state2.clear(); } } })) // Force the output to be materialized .apply("gbk", GroupByKey.create()); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(p); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineProto); Optional<ExecutableStage> optionalStage = Iterables.tryFind( fused.getFusedStages(), (ExecutableStage stage) -> !stage.getUserStates().isEmpty()); checkState(optionalStage.isPresent(), "Expected a stage with user state."); ExecutableStage stage = optionalStage.get(); ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( "test_stage", stage, dataServer.getApiServiceDescriptor(), stateServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations(), stateDelegator); Map<String, Coder> remoteOutputCoders = descriptor.getRemoteOutputCoders(); Map<String, Collection<WindowedValue<?>>> outputValues = new HashMap<>(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, Coder> remoteOutputCoder : remoteOutputCoders.entrySet()) { List<WindowedValue<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); outputValues.put(remoteOutputCoder.getKey(), outputContents); outputReceivers.put( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder<WindowedValue<?>>) remoteOutputCoder.getValue(), outputContents::add)); } Map<String, List<ByteString>> userStateData = ImmutableMap.of( stateId, new ArrayList( Arrays.asList( ByteString.copyFrom( CoderUtils.encodeToByteArray( StringUtf8Coder.of(), "A", Coder.Context.NESTED)), ByteString.copyFrom( CoderUtils.encodeToByteArray( StringUtf8Coder.of(), "B", Coder.Context.NESTED)), ByteString.copyFrom( CoderUtils.encodeToByteArray( StringUtf8Coder.of(), "C", Coder.Context.NESTED)))), stateId2, new ArrayList( Arrays.asList( ByteString.copyFrom( CoderUtils.encodeToByteArray( StringUtf8Coder.of(), "D", 
Coder.Context.NESTED))))); StoringStateRequestHandler stateRequestHandler = new StoringStateRequestHandler( StateRequestHandlers.forBagUserStateHandlerFactory( descriptor, new BagUserStateHandlerFactory<ByteString, Object, BoundedWindow>() { @Override public BagUserStateHandler<ByteString, Object, BoundedWindow> forUserState( String pTransformId, String userStateId, Coder<ByteString> keyCoder, Coder<Object> valueCoder, Coder<BoundedWindow> windowCoder) { return new BagUserStateHandler<ByteString, Object, BoundedWindow>() { @Override public Iterable<Object> get(ByteString key, BoundedWindow window) { return (Iterable) userStateData.get(userStateId); } @Override public void append( ByteString key, BoundedWindow window, Iterator<Object> values) { Iterators.addAll(userStateData.get(userStateId), (Iterator) values); } @Override public void clear(ByteString key, BoundedWindow window) { userStateData.get(userStateId).clear(); } }; } })); try (RemoteBundle bundle = processor.newBundle( outputReceivers, stateRequestHandler, BundleProgressHandler.ignored())) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow(KV.of("X", "Y"))); } try (RemoteBundle bundle2 = processor.newBundle( outputReceivers, stateRequestHandler, BundleProgressHandler.ignored())) { Iterables.getOnlyElement(bundle2.getInputReceivers().values()) .accept(valueInGlobalWindow(KV.of("X", "Z"))); } for (Collection<WindowedValue<?>> windowedValues : outputValues.values()) { assertThat( windowedValues, containsInAnyOrder( valueInGlobalWindow(KV.of("X", "A")), valueInGlobalWindow(KV.of("X", "B")), valueInGlobalWindow(KV.of("X", "C")), valueInGlobalWindow(KV.of("X", "A")), valueInGlobalWindow(KV.of("X", "B")), valueInGlobalWindow(KV.of("X", "C")), valueInGlobalWindow(KV.of("X", "Empty")))); } assertThat( userStateData.get(stateId), IsIterableContainingInOrder.contains( ByteString.copyFrom( CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "A", Coder.Context.NESTED)), ByteString.copyFrom( CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "B", Coder.Context.NESTED)), ByteString.copyFrom( CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "C", Coder.Context.NESTED)))); assertThat(userStateData.get(stateId2), IsEmptyIterable.emptyIterable()); // 3 Requests expected: state read, state2 read, and state2 clear assertEquals(3, stateRequestHandler.getRequestCount()); ByteString.Output out = ByteString.newOutput(); StringUtf8Coder.of().encode("X", out); assertEquals( stateId, stateRequestHandler .receivedRequests .get(0) .getStateKey() .getBagUserState() .getUserStateId()); assertEquals( stateRequestHandler.receivedRequests.get(0).getStateKey().getBagUserState().getKey(), out.toByteString()); assertTrue(stateRequestHandler.receivedRequests.get(0).hasGet()); assertEquals( stateId2, stateRequestHandler .receivedRequests .get(1) .getStateKey() .getBagUserState() .getUserStateId()); assertEquals( stateRequestHandler.receivedRequests.get(1).getStateKey().getBagUserState().getKey(), out.toByteString()); assertTrue(stateRequestHandler.receivedRequests.get(1).hasGet()); assertEquals( stateId2, stateRequestHandler .receivedRequests .get(2) .getStateKey() .getBagUserState() .getUserStateId()); assertEquals( stateRequestHandler.receivedRequests.get(2).getStateKey().getBagUserState().getKey(), out.toByteString()); assertTrue(stateRequestHandler.receivedRequests.get(2).hasClear()); } /** * A state handler that stores each state request made - used to validate that cached requests are * not forwarded to the state client. 
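* Recorded requests arrive in order, letting tests assert the exact sequence of state accesses.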
*/ private static class StoringStateRequestHandler implements StateRequestHandler { private StateRequestHandler stateRequestHandler; private ArrayList<BeamFnApi.StateRequest> receivedRequests; private ArrayList<BeamFnApi.ProcessBundleRequest.CacheToken> cacheTokens; StoringStateRequestHandler(StateRequestHandler delegate) { stateRequestHandler = delegate; receivedRequests = new ArrayList<>(); cacheTokens = new ArrayList<>(); } @Override public CompletionStage<BeamFnApi.StateResponse.Builder> handle(BeamFnApi.StateRequest request) throws Exception { receivedRequests.add(request); return stateRequestHandler.handle(request); } @Override public Iterable<BeamFnApi.ProcessBundleRequest.CacheToken> getCacheTokens() { return Iterables.concat(stateRequestHandler.getCacheTokens(), cacheTokens); } public int getRequestCount() { return receivedRequests.size(); } public void addCacheToken(BeamFnApi.ProcessBundleRequest.CacheToken token) { cacheTokens.add(token); } } @Test public void testExecutionWithTimer() throws Exception { launchSdkHarness(PipelineOptionsFactory.create()); Pipeline p = Pipeline.create(); p.apply("impulse", Impulse.create()) .apply( "create", ParDo.of( new DoFn<byte[], KV<String, String>>() { @ProcessElement public void process(ProcessContext ctxt) {} })) .setCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())) .apply( "timer", ParDo.of( new DoFn<KV<String, String>, KV<String, String>>() { @TimerId("event") private final TimerSpec eventTimerSpec = TimerSpecs.timer(TimeDomain.EVENT_TIME); @TimerId("processing") private final TimerSpec processingTimerSpec = TimerSpecs.timer(TimeDomain.PROCESSING_TIME); @ProcessElement public void processElement( ProcessContext context, @TimerId("event") Timer eventTimeTimer, @TimerId("processing") Timer processingTimeTimer) { context.output(KV.of("main" + context.element().getKey(), "")); eventTimeTimer .withOutputTimestamp(context.timestamp()) .set(context.timestamp().plus(Duration.millis(1L))); processingTimeTimer.offset(Duration.millis(2L)); processingTimeTimer.setRelative(); } @OnTimer("event") public void eventTimer( OnTimerContext context, @Key String key, @TimerId("event") Timer eventTimeTimer, @TimerId("processing") Timer processingTimeTimer) { context.output(KV.of("event", key)); eventTimeTimer .withOutputTimestamp(context.timestamp()) .set(context.fireTimestamp().plus(Duration.millis(11L))); processingTimeTimer.offset(Duration.millis(12L)); processingTimeTimer.setRelative(); } @OnTimer("processing") public void processingTimer( OnTimerContext context, @Key String key, @TimerId("event") Timer eventTimeTimer, @TimerId("processing") Timer processingTimeTimer) { context.output(KV.of("processing", key)); eventTimeTimer .withOutputTimestamp(context.timestamp()) .set(context.fireTimestamp().plus(Duration.millis(21L))); processingTimeTimer.offset(Duration.millis(22L)); processingTimeTimer.setRelative(); } @OnWindowExpiration public void onWindowExpiration( @Key String key, OutputReceiver<KV<String, String>> outputReceiver) { outputReceiver.output(KV.of("onWindowExpiration", key)); } })) // Force the output to be materialized .apply("gbk", GroupByKey.create()); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(p); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineProto); Optional<ExecutableStage> optionalStage = Iterables.tryFind( fused.getFusedStages(), (ExecutableStage stage) -> !stage.getTimers().isEmpty()); checkState(optionalStage.isPresent(), "Expected a stage with timers."); ExecutableStage stage = 
optionalStage.get(); ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( "test_stage", stage, dataServer.getApiServiceDescriptor(), stateServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations(), stateDelegator, descriptor.getTimerSpecs()); Map<String, Collection<WindowedValue<?>>> outputValues = new HashMap<>(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, Coder> remoteOutputCoder : descriptor.getRemoteOutputCoders().entrySet()) { List<WindowedValue<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); outputValues.put(remoteOutputCoder.getKey(), outputContents); outputReceivers.put( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder<WindowedValue<?>>) remoteOutputCoder.getValue(), outputContents::add)); } Map<KV<String, String>, Collection<org.apache.beam.runners.core.construction.Timer<?>>> timerValues = new HashMap<>(); Map< KV<String, String>, RemoteOutputReceiver<org.apache.beam.runners.core.construction.Timer<?>>> timerReceivers = new HashMap<>(); for (Map.Entry<String, Map<String, ProcessBundleDescriptors.TimerSpec>> transformTimerSpecs : descriptor.getTimerSpecs().entrySet()) { for (ProcessBundleDescriptors.TimerSpec timerSpec : transformTimerSpecs.getValue().values()) { KV<String, String> key = KV.of(timerSpec.transformId(), timerSpec.timerId()); List<org.apache.beam.runners.core.construction.Timer<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); timerValues.put(key, outputContents); timerReceivers.put( key, RemoteOutputReceiver.of( (Coder<org.apache.beam.runners.core.construction.Timer<?>>) timerSpec.coder(), outputContents::add)); } } ProcessBundleDescriptors.TimerSpec eventTimerSpec = null; ProcessBundleDescriptors.TimerSpec processingTimerSpec = null; ProcessBundleDescriptors.TimerSpec onWindowExpirationSpec = null; for (Map<String, ProcessBundleDescriptors.TimerSpec> timerSpecs : descriptor.getTimerSpecs().values()) { for (ProcessBundleDescriptors.TimerSpec timerSpec : timerSpecs.values()) { if ("onWindowExpiration0".equals(timerSpec.timerId())) { onWindowExpirationSpec = timerSpec; } else if (TimeDomain.EVENT_TIME.equals(timerSpec.getTimerSpec().getTimeDomain())) { eventTimerSpec = timerSpec; } else if (TimeDomain.PROCESSING_TIME.equals(timerSpec.getTimerSpec().getTimeDomain())) { processingTimerSpec = timerSpec; } else { fail(String.format("Unknown timer specification %s", timerSpec)); } } } // Set the current system time to a fixed value to get stable values for processing time timer // output. 
DateTimeUtils.setCurrentMillisFixed(BoundedWindow.TIMESTAMP_MIN_VALUE.getMillis() + 10000L); try { try (RemoteBundle bundle = processor.newBundle( outputReceivers, timerReceivers, StateRequestHandler.unsupported(), BundleProgressHandler.ignored(), null, null)) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow(KV.of("X", "X"))); bundle .getTimerReceivers() .get(KV.of(eventTimerSpec.transformId(), eventTimerSpec.timerId())) .accept(timerForTest("Y", 1000L, 100L)); bundle .getTimerReceivers() .get(KV.of(processingTimerSpec.transformId(), processingTimerSpec.timerId())) .accept(timerForTest("Z", 2000L, 200L)); bundle .getTimerReceivers() .get(KV.of(onWindowExpirationSpec.transformId(), onWindowExpirationSpec.timerId())) // Normally fireTimestamp and holdTimestamp would be the same in window expirations but // we specifically set them to different values to ensure that they are used correctly. .accept(timerForTest("key", 5001L, 5000L)); } String mainOutputTransform = Iterables.getOnlyElement(descriptor.getRemoteOutputCoders().keySet()); assertThat( outputValues.get(mainOutputTransform), containsInAnyOrder( valueInGlobalWindow(KV.of("mainX", "")), WindowedValue.timestampedValueInGlobalWindow( KV.of("event", "Y"), BoundedWindow.TIMESTAMP_MIN_VALUE.plus(Duration.millis(100L))), WindowedValue.timestampedValueInGlobalWindow( KV.of("processing", "Z"), BoundedWindow.TIMESTAMP_MIN_VALUE.plus(Duration.millis(200L))), WindowedValue.timestampedValueInGlobalWindow( KV.of("onWindowExpiration", "key"), BoundedWindow.TIMESTAMP_MIN_VALUE.plus(Duration.millis(5000L))))); assertThat( timerValues.get(KV.of(eventTimerSpec.transformId(), eventTimerSpec.timerId())), containsInAnyOrder( timerForTest("X", 1L, 0L), timerForTest("Y", 1011L, 100L), timerForTest("Z", 2021L, 200L))); assertThat( timerValues.get(KV.of(processingTimerSpec.transformId(), processingTimerSpec.timerId())), containsInAnyOrder( timerForTest("X", 10002L, 0L), timerForTest("Y", 10012L, 100L), timerForTest("Z", 10022L, 200L))); } finally { DateTimeUtils.setCurrentMillisSystem(); } } @Test public void testExecutionWithMultipleStages() throws Exception { launchSdkHarness(PipelineOptionsFactory.create()); Pipeline p = Pipeline.create(); Function<String, PCollection<String>> pCollectionGenerator = suffix -> p.apply("impulse" + suffix, Impulse.create()) .apply( "create" + suffix, ParDo.of( new DoFn<byte[], String>() { @ProcessElement public void process(ProcessContext c) { try { c.output( CoderUtils.decodeFromByteArray( StringUtf8Coder.of(), c.element())); } catch (CoderException e) { throw new RuntimeException(e); } } })) .setCoder(StringUtf8Coder.of()) .apply( ParDo.of( new DoFn<String, String>() { @ProcessElement public void processElement(ProcessContext c) { c.output("stream" + suffix + c.element()); } })); PCollection<String> input1 = pCollectionGenerator.apply("1"); PCollection<String> input2 = pCollectionGenerator.apply("2"); PCollection<String> outputMerged = PCollectionList.of(input1).and(input2).apply(Flatten.pCollections()); outputMerged .apply( "createKV", ParDo.of( new DoFn<String, KV<String, String>>() { @ProcessElement public void process(ProcessContext c) { c.output(KV.of(c.element(), "")); } })) .setCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())) .apply("gbk", GroupByKey.create()); RunnerApi.Pipeline pipelineProto = PipelineTranslation.toProto(p); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineProto); Set<ExecutableStage> stages = fused.getFusedStages(); 
assertThat(stages.size(), equalTo(2)); List<WindowedValue<?>> outputValues = Collections.synchronizedList(new ArrayList<>()); for (ExecutableStage stage : stages) { ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( stage.toString(), stage, dataServer.getApiServiceDescriptor(), stateServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations(), stateDelegator); Map<String, Coder> remoteOutputCoders = descriptor.getRemoteOutputCoders(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, Coder> remoteOutputCoder : remoteOutputCoders.entrySet()) { outputReceivers.putIfAbsent( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder<WindowedValue<?>>) remoteOutputCoder.getValue(), outputValues::add)); } try (RemoteBundle bundle = processor.newBundle( outputReceivers, StateRequestHandler.unsupported(), BundleProgressHandler.ignored())) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept(valueInGlobalWindow(CoderUtils.encodeToByteArray(StringUtf8Coder.of(), "X"))); } } assertThat( outputValues, containsInAnyOrder( valueInGlobalWindow(KV.of("stream1X", "")), valueInGlobalWindow(KV.of("stream2X", "")))); } /** * A restriction tracker that will block making progress on {@link #WAIT_TILL_SPLIT} until a try * split is invoked. */ private static class WaitingTillSplitRestrictionTracker extends RestrictionTracker<String, Void> { private static final String WAIT_TILL_SPLIT = "WaitTillSplit"; private static final String PRIMARY = "Primary"; private static final String RESIDUAL = "Residual"; private String currentRestriction; private WaitingTillSplitRestrictionTracker(String restriction) { this.currentRestriction = restriction; } @Override public boolean tryClaim(Void position) { return needsSplitting(); } @Override public String currentRestriction() { return currentRestriction; } @Override public SplitResult<String> trySplit(double fractionOfRemainder) { if (!needsSplitting()) { return null; } this.currentRestriction = PRIMARY; return SplitResult.of(currentRestriction, RESIDUAL); } private boolean needsSplitting() { return WAIT_TILL_SPLIT.equals(currentRestriction); } @Override public void checkDone() throws IllegalStateException { checkState(!needsSplitting(), "Expected for this restriction to have been split."); } @Override public IsBounded isBounded() { return IsBounded.BOUNDED; } } @Test(timeout = 60000L) public void testSplit() throws Exception { launchSdkHarness(PipelineOptionsFactory.create()); Pipeline p = Pipeline.create(); p.apply("impulse", Impulse.create()) .apply( "create", ParDo.of( new DoFn<byte[], String>() { @ProcessElement public void process(ProcessContext ctxt) { ctxt.output("zero"); ctxt.output(WaitingTillSplitRestrictionTracker.WAIT_TILL_SPLIT); ctxt.output("two"); } })) .apply( "forceSplit", ParDo.of( new DoFn<String, String>() { @GetInitialRestriction public String getInitialRestriction(@Element String element) { return element; } @NewTracker public WaitingTillSplitRestrictionTracker newTracker( @Restriction String restriction) { return new WaitingTillSplitRestrictionTracker(restriction); } @ProcessElement public void process( RestrictionTracker<String, Void> tracker, ProcessContext context) { while (tracker.tryClaim(null)) {} context.output(tracker.currentRestriction()); } })) .apply("addKeys", WithKeys.of("foo")) // Use some unknown coders 
.setCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of())) // Force the output to be materialized .apply("gbk", GroupByKey.create()); RunnerApi.Pipeline pipeline = PipelineTranslation.toProto(p); // Expand any splittable DoFns within the graph to enable sizing and splitting of bundles. RunnerApi.Pipeline pipelineWithSdfExpanded = ProtoOverrides.updateTransform( PTransformTranslation.PAR_DO_TRANSFORM_URN, pipeline, SplittableParDoExpander.createSizedReplacement()); FusedPipeline fused = GreedyPipelineFuser.fuse(pipelineWithSdfExpanded); // Find the fused stage with the SDF ProcessSizedElementAndRestriction transform Optional<ExecutableStage> optionalStage = Iterables.tryFind( fused.getFusedStages(), (ExecutableStage stage) -> Iterables.filter( stage.getTransforms(), (PTransformNode node) -> PTransformTranslation .SPLITTABLE_PROCESS_SIZED_ELEMENTS_AND_RESTRICTIONS_URN .equals(node.getTransform().getSpec().getUrn())) .iterator() .hasNext()); checkState( optionalStage.isPresent(), "Expected a stage with SDF ProcessSizedElementAndRestriction."); ExecutableStage stage = optionalStage.get(); ExecutableProcessBundleDescriptor descriptor = ProcessBundleDescriptors.fromExecutableStage( "my_stage", stage, dataServer.getApiServiceDescriptor()); BundleProcessor processor = controlClient.getProcessor( descriptor.getProcessBundleDescriptor(), descriptor.getRemoteInputDestinations()); Map<String, ? super Coder<WindowedValue<?>>> remoteOutputCoders = descriptor.getRemoteOutputCoders(); Map<String, Collection<? super WindowedValue<?>>> outputValues = new HashMap<>(); Map<String, RemoteOutputReceiver<?>> outputReceivers = new HashMap<>(); for (Entry<String, ? super Coder<WindowedValue<?>>> remoteOutputCoder : remoteOutputCoders.entrySet()) { List<? super WindowedValue<?>> outputContents = Collections.synchronizedList(new ArrayList<>()); outputValues.put(remoteOutputCoder.getKey(), outputContents); outputReceivers.put( remoteOutputCoder.getKey(), RemoteOutputReceiver.of( (Coder) remoteOutputCoder.getValue(), (FnDataReceiver<? super WindowedValue<?>>) outputContents::add)); } List<ProcessBundleSplitResponse> splitResponses = new ArrayList<>(); List<ProcessBundleResponse> checkpointResponses = new ArrayList<>(); List<String> requestsFinalization = new ArrayList<>(); ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor(); ScheduledFuture<Object> future; // Execute the remote bundle. try (RemoteBundle bundle = processor.newBundle( outputReceivers, Collections.emptyMap(), StateRequestHandler.unsupported(), BundleProgressHandler.ignored(), splitResponses::add, checkpointResponses::add, requestsFinalization::add)) { Iterables.getOnlyElement(bundle.getInputReceivers().values()) .accept( valueInGlobalWindow( sdfSizedElementAndRestrictionForTest( WaitingTillSplitRestrictionTracker.WAIT_TILL_SPLIT))); // Keep sending splits until the bundle terminates. future = (ScheduledFuture) executor.scheduleWithFixedDelay( () -> bundle.split(0.5), 0L, 100L, TimeUnit.MILLISECONDS); } future.cancel(false); executor.shutdown(); assertTrue(requestsFinalization.isEmpty()); assertTrue(checkpointResponses.isEmpty()); // We only validate the last split response since it is the only one that could possibly // contain the SDF split, all others will be a reduction in the ChannelSplit range. 
assertFalse(splitResponses.isEmpty()); ProcessBundleSplitResponse splitResponse = splitResponses.get(splitResponses.size() - 1); ChannelSplit channelSplit = Iterables.getOnlyElement(splitResponse.getChannelSplitsList()); // There is only one outcome for the final split that can happen since the SDF is blocking the // bundle from completing and hence needed to be split. assertEquals(-1L, channelSplit.getLastPrimaryElement()); assertEquals(1L, channelSplit.getFirstResidualElement()); assertEquals(1, splitResponse.getPrimaryRootsCount()); assertEquals(1, splitResponse.getResidualRootsCount()); assertThat( Iterables.getOnlyElement(outputValues.values()), containsInAnyOrder( valueInGlobalWindow(KV.of("foo", WaitingTillSplitRestrictionTracker.PRIMARY)))); } /** * The SDF ProcessSizedElementAndRestriction expansion expects {@code KV<KV<Element, Restriction>, * Size>} where {@code Restriction} in Java SDFs is represented as {@code KV<Restriction, * WatermarkEstimatorState>} and the default {@code WatermarkEstimatorState} is {@code Void} which * always encodes to an empty byte array. */ private KV<KV<String, KV<String, byte[]>>, Double> sdfSizedElementAndRestrictionForTest( String element) { return KV.of(KV.of(element, KV.of(element, new byte[0])), 0.0); } private KV<String, byte[]> byteValueOf(String key, long value) throws CoderException { return KV.of(key, CoderUtils.encodeToByteArray(BigEndianLongCoder.of(), value)); } private org.apache.beam.runners.core.construction.Timer<String> timerForTest( String key, long fireTimestamp, long holdTimestamp) { return org.apache.beam.runners.core.construction.Timer.of( key, "", Collections.singletonList(GlobalWindow.INSTANCE), BoundedWindow.TIMESTAMP_MIN_VALUE.plus(Duration.millis(fireTimestamp)), BoundedWindow.TIMESTAMP_MIN_VALUE.plus(Duration.millis(holdTimestamp)), PaneInfo.NO_FIRING); } }
{ "redpajama_set_name": "RedPajamaGithub" }
\section{Introduction}\label{sec:intro} Ever since the discovery of the triangular equilibrium solutions of the Three Body Problem by Lagrange (1772), the problem of the dynamical behavior of the orbits near the equilateral equilibrium points has attracted great interest in the astronomical community. A long-known application refers to the family of Trojan asteroids of Jupiter (see~\cite{RobSouc-10} and references therein). Trojan asteroids have also been found around other planets in our solar system, namely the Earth, Mars, Uranus and Neptune (\cite{Bowelletal-90},~\cite{Conetal-11},~\cite{Alexanetal-13}). On different grounds, a number of works have addressed the questions of the overall existence, formation and detectability of Trojan {\it exoplanets} (\cite{Beugetal-07},~\cite{CressNel-09}). No such body has been identified so far in exoplanet surveys. This may indicate that such planets are rare, in which case a dynamical explanation would be needed, or that there exist as yet insurmountable constraints on exo-Trojan detectability. It has been proposed that the complexity of the orbits of Trojan bodies may itself introduce intricacies in possible methods of detection; see, for example,~\cite{Haghietal-13}, referring to the Transit Timing Variation method; regarding, in particular, radial velocity measurements, see~\cite{Dobro-13}. The above and other examples emphasize the need to understand in detail the orbital dynamics in the 1:1 Mean Motion commensurability. In the present paper we extend the work of two previous papers (\cite{PaezLocat2015},~\cite{PaezEfthy2015}), in the direction of developing an efficient analytical method for the study of Trojan orbital dynamics. The aim of analytical studies is to identify the main features of the phase space and to quantify their role in the dynamical behavior of the orbits. Some important references to past analytical studies of the Trojan problem can be found in~\cite{Erdi-97} and references therein. Regarding past approaches, the following is a key remark. Most analytical treatments of the Trojan problem in the literature are so far based upon series expansions of the equations of motion around the stable equilibria $L_4$ and $L_5$, using various sets of variables (e.g.\ Cartesian, cylindrical, or Delaunay-like action-angle variables). However, it is important to recall that all these kinds of expansions exhibit an important limitation, related to the singular behavior of the equations of motion at relatively large Trojan libration amplitudes. In the framework of the ERTBP, defined by a central mass, a perturber body and a massless particle (the Trojan body), this singular behavior corresponds geometrically to a close approach of the Trojan body to the perturbing body, which is possible only at the 1:1 Mean Motion Resonance. The relevant remark is that the presence of a singularity in the equations of motion implies a finite disc of convergence for any kind of series expansion around $L_4$ or $L_5$. It is straightforward to see that the projection of this disc onto configuration space is such as to render the series' convergence very poor for orbits with large libration amplitudes not only towards the perturber, but also in the direction {\it opposite} to the perturber, i.e.\ towards the unstable collinear point $L_3$. Let us note that this poor convergence has a purely mathematical origin; no physical singularity actually exists at or close to $L_3$. The following is a more precise form of the above remark.
Let $\theta$ be the angular distance between the perturber and the Trojan body, e.g.\ in a heliocentric frame. Regardless of the initial choice of variables, model approximation, etc., one finally recovers for $\theta$ a differential equation of the form (see, for example,~\cite{Erdi-78}) \begin{equation}\label{eqcri} {d^2\theta\over dt^2} + 3\mu\sin\theta\left[1-2^{-3/2} (1-\cos\theta)^{-3/2}\right]+\mbox{h.o.t.}=0~~ \end{equation} where $\mu$ is the mass parameter of the perturber. The higher-order terms include epicyclic oscillations, the eccentricity of the Trojan or the perturber, as well as any other kind of perturbation induced, for example, by more perturbing bodies. Ignoring such terms, Eq.~\eqref{eqcri} can be thought of as Newton's equation corresponding to a `potential' \begin{equation}\label{potpon} V(\theta) = 3\mu\left[{1\over\sqrt{2-2\cos\theta}}-\cos\theta\right]~~. \end{equation} This differs only by a constant from the quantity $H(\theta)$ introduced in~\cite{MurrDerm-99}, called also the `ponderomotive potential' in~\cite{NamMurr-00}. If, instead, one expresses the equations of motion in orbital elements, one encounters equivalent terms in the {\it disturbing function} (\cite{MurrDerm-99} \S6), taking the form $\mu[-\cos\tau+ (1-\cos\tau)^{-1/2}]$, where $\tau=\lambda-\lambda'$ corresponds to the critical argument of the 1:1 Mean Motion commensurability, $\lambda$, $\lambda'$ being the mean longitudes of the Trojan and the perturber respectively. The position of $L_4$ (or $L_5$) corresponds to $\theta_0=\tau_0=\pi/3$ (or $5\pi/3$). Setting $u=\theta-\theta_0$ or $u=\tau-\tau_0$ and expanding the equations of motion in powers of the quantity $u$ leads to expressions converging in the domain $|u|<\pi/3$. The convergence is quite slow for angles approaching the limiting values $u_{lim}=\pm\pi/3$. In practice, such expansions become impractical for libration angles $\sim 30^\circ$ and beyond, i.e.\ already about half way to the singularity. The applicability of all analytical methods based on polynomial expansions around $L_4$ or $L_5$ is severely limited by this poor convergence. \begin{figure} \centering \includegraphics[width=.65\textwidth]{convdomain1.png} \caption[Representation of the convergence domain of series expansion around $L_4$]{ Representation of the domain of $\tau$ where polynomial series are convergent if the expansion takes place around $L_4$, in a heliocentric Cartesian frame $(x,y)$ co-rotating with the perturber. The positions of the equilibrium points $L_4$ and $L_3$, the central mass $m_0$ and the perturber $m'$ are indicated with black points. The radius of convergence (thick pink line) of the series is given by the distance between $L_4$ and the perturber, namely $60^{\circ}$ ($\tau=\pi/3$). While this does not induce any problem in the direction towards the perturber, it does limit the convergence in the opposite direction. In purple, we show an example of a typical Trojan orbit, obtained by numerically integrating the equations of motion of the ERTBP, for the initial condition $(x,y,\dot{x},\dot{y}) = (0.507,0.87402,0,0)$. The orbit clearly exceeds the leftward limit of $60^{\circ}$ from $L_4$.} \label{fig:convdomain1.png} \end{figure} On the other hand, one finds numerically that stable tadpole orbits exist in domains extending well beyond the limits of convergence of the analytical methods (see Fig.~\ref{fig:convdomain1.png}).
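As a simple numerical illustration of this point, one may integrate Eq.~\eqref{eqcri} directly, dropping the higher-order terms. The following minimal Python sketch (assuming \texttt{numpy} and \texttt{scipy} are available; the value of $\mu$ is purely illustrative) produces a large-amplitude tadpole libration whose excursion towards $L_3$ exceeds the $60^{\circ}$ limit of convergence:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.0024  # illustrative mass parameter

def rhs(t, s):
    # the libration equation above, with the higher-order terms dropped:
    # theta'' = -3 mu sin(theta) [1 - 2^{-3/2}(1 - cos(theta))^{-3/2}]
    theta, dtheta = s
    acc = -3*mu*np.sin(theta)*(1 - 2**(-1.5)*(1 - np.cos(theta))**(-1.5))
    return [dtheta, acc]

# start about 80 degrees past L4 (theta_0 = pi/3), with zero synodic velocity
sol = solve_ivp(rhs, (0, 2000), [np.pi/3 + 1.4, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
theta = sol.sol(np.linspace(0, 2000, 20000))[0]
print(np.degrees(theta.min()) - 60, np.degrees(theta.max()) - 60)
# libration in u = theta - 60 deg: roughly -34 to +80 degrees, i.e. well
# beyond the |u| < 60 deg disc of convergence in the direction of L3
\end{verbatim}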
In~\cite{PaezLocat2015}, a new method of series expansions for the Trojan problem was introduced, aiming precisely to remedy the poor convergence of the classical series expansions around $L_4$ or $L_5$. The method was developed in the context of the canonical formalism. In more detail, an algorithm was derived allowing one to compute a so-called {\it Hamiltonian normal form} for Trojan motions. In the normal form approach, starting from an initial Hamiltonian model, one performs a series of near-identity canonical transformations from old to new canonical variables, leading to a new expression for the Hamiltonian (called the `normal form'). Via these transformations, the goal is to arrive at a new form of the equations of motion in the new variables, which is simpler to solve than in the original variables. In~\cite{PaezLocat2015} the algorithm was applied to the simplest possible model, namely the planar and circular Restricted Three Body Problem (CRTBP). In this case, the normal form becomes an {\it integrable} model of one degree of freedom, allowing one to approximate analytically the motion in the so-called synodic degree of freedom (associated with the libration motion around $L_4$). The key point of the method is that the functional dependence of all involved quantities (i.e.\ normal form, transformation equations, etc.; see Section 3 below for details) on the quantity \begin{equation}\label{b0} \beta_0={1\over\sqrt{1-\cos\tau}} \end{equation} and on the powers of $\beta_0$ is maintained at all orders of perturbation theory. Hence, the resulting series are not affected by the singularity at $\tau=0$ (i.e.\ $u=-\pi/3$) and remain useful practically within the whole tadpole domain. In the present paper we implement the method developed in~\cite{PaezLocat2015} in a model more realistic than the CRTBP, namely the model introduced in~\cite{PaezEfthy2015}. This provides an approximation to the Trojan dynamics applicable to two distinct cases: i) the planar Elliptic Restricted Three Body Problem (ERTBP), and ii) what was called in~\cite{PaezEfthy2015} the `Restricted Multi-Planet Problem' (RMPP). In the latter case, we assume that there is more than one perturbing body exerting secular perturbations on the Trojan body. The main application we have in mind is a hypothetical Trojan exoplanet in a multi-planet extrasolar system, although the model applies equally well to the Trojan asteroids of giant planets in our solar system. The RMPP exhibits a richer spectrum of secular perturbations than the ERTBP. Even so, in~\cite{PaezEfthy2015} it was shown that in both problems one can derive a so-called `basic Hamiltonian model' (denoted hereafter by $H_b$). The Hamiltonian $H_b$ approximates the dynamics in the fast and synodic degrees of freedom. Furthermore, in~\cite{PaezEfthy2015} it was shown that $H_b$ is formally identical in the ERTBP and the RMPP (apart from a re-interpretation of the physical meaning of one pair of action-angle variables). Consequently, the two problems differ formally only in their sets of secular terms in the Hamiltonian, denoted by $H_{sec}$. Let us note that here, as in~\cite{PaezEfthy2015}, we focus only on the planar version of the $H_b$ model, although generalization to the spatial version is straightforward.
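The practical difference between the two representations can also be checked directly. The following small \texttt{sympy} sketch (an illustration of the above discussion; it uses the function $\beta(\tau)=1/\sqrt{2-2\cos\tau}$ that appears in the expansions of Section 3) compares the exact value of $\beta$ with a truncated Taylor polynomial around $L_4$: beyond $|u|=\pi/3$ towards $L_3$ the polynomial fails, although $\beta$ itself is finite and smooth there, while the unexpanded form is exact throughout the tadpole domain:
\begin{verbatim}
import sympy as sp

u = sp.symbols('u')
beta = 1/sp.sqrt(2 - 2*sp.cos(sp.pi/3 + u))   # beta(tau) with tau = pi/3 + u
poly = sp.series(beta, u, 0, 12).removeO()    # truncated Taylor polynomial at L4

for u0 in (0.3, 0.9, 1.2):   # the radius of convergence is |u| = pi/3 ~ 1.047
    print(u0, float(beta.subs(u, u0)), float(poly.subs(u, u0)))
# at u0 = 1.2 (towards L3) the polynomial departs markedly from beta,
# although beta itself is perfectly regular there
\end{verbatim}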
Combining the results of~\cite{PaezEfthy2015} and~\cite{PaezLocat2015}, we provide below an application of particular interest, namely, the semi-analytical determination of the location of {\it secondary resonances} in the tadpole domain of motion. As was shown in~\cite{Erdietal-07}, secondary resonances play a key role in determining the boundary and the size of the tadpole stability domain. In~\cite{PaezEfthy2015}, a combination of numerical indicators (the Fast Lyapunov Indicator (FLI)~\cite{Froeschleetal-97} and the NAFF (Numerical Analysis of the Fundamental Frequencies) algorithm~\cite{Laskar-04}) was used to identify the most important secondary resonances in a space corresponding to what is known as the `proper elements' of the Trojan body's motion (see~\cite{Milani-93},~\cite{BeauRoig-01}, as well as the definitions in~\cite{PaezEfthy2015}). As an example, in the case of the ERTBP, the location of various secondary resonances was determined in the space of proper elements, depending mainly on two parameters, i.e., the perturber's mass parameter $\mu$ and eccentricity $e'$. In the present paper, we demonstrate, instead, the efficiency of the analytical normal form approach of~\cite{PaezLocat2015} in identifying the location of secondary resonances in the space of proper elements. The structure of the paper is as follows: in Section 2 we examine some features of the `basic Hamiltonian' model of~\cite{PaezEfthy2015}, and validate the usefulness of the decomposition $H=H_b+H_{sec}$ by performing a numerical exploration of the dynamics under $H_b$ alone, as well as of how the latter compares to the full Hamiltonian dynamics. Then, in Section 3 we apply the normal form method introduced in~\cite{PaezLocat2015} to the Hamiltonian $H_b$, and check its performance in locating the secondary resonances. Section 4 summarizes our conclusions. \section{Basic Hamiltonian $H_b$: construction and features}\label{sec:Hb} In this section, we explore in some detail the features of the Hamiltonian model introduced in~\cite{PaezEfthy2015}, and discuss the advantages and the limitations of the approximations of this model. For completeness, we begin by briefly reviewing the construction of the model. For more details, we refer the reader to~\cite{PaezEfthy2015}. \subsection{Construction}\label{sec:constr_Hb} In the framework of the planar elliptic Restricted Three Body Problem (pERTBP), the equations of motion of the Trojan body (`massless body') depend on two physical parameters: i) the mass parameter $\mu=\frac{m'}{m'+M}$, where $M$ is the mass of the central body and $m'$ the mass of the perturber (also `primary perturber' or simply `primary'), and ii) the eccentricity of the heliocentric orbit of the primary perturber, $e'$. In the ERTBP, $e'$ and the semi-major axis of the primary's orbit are constant; we set $a'=1$ in our units. In~\cite{PaezEfthy2015}, a Hamiltonian formulation was provided for the Trojan motion, in the pERTBP, and also in a more complex problem where $S$ additional perturbing bodies (e.g.\ planets) are present, mutually far from MMRs. The Hamiltonian of the `Restricted Multi-Planet Problem' (RMPP) was written in~\cite{PaezEfthy2015} under the form \begin{equation}\label{eq:h_rmpp} \begin{aligned} H &= H_b\,(Y_f,\phi_f,u,v,Y_p;\mu,e'_0) \\ &+\, H_{sec}\,(Y_f,\phi_f,u,v,Y_p,\phi,P',I_1,\ldots,I_S,\phi', \phi_1,\ldots,\phi_S)~~.
\end{aligned} \end{equation} In Eq.~\eqref{eq:h_rmpp}, the variables $(\phi_f,Y_f)$, $(u,v)$ and $(\phi,Y_p)$ are pairs of action-angle variables, whose definition stems from Delaunay-like variables following a sequence of four consecutive canonical transformations (see Appendix). The reader is referred to Section 2 of~\cite{PaezEfthy2015} for the details, while a schematic description of the physical meaning of these variables is given in Fig.~\ref{fig:HBvar}. On the other hand, the angle $\phi'=g't$ corresponds to the longitude of the pericenter of the primary perturber (constant in the pERTBP, precessing in the RMPP), while the angles $\phi_j=g_j t$ account for the secular perturbations induced by the $S$ additional perturbers. These angles are canonically conjugate to a set of (dummy) action variables denoted by $P'$ and $I_j$ respectively. Finally, $e_{0}'$ is the average eccentricity of the primary perturber, which coincides with $e'$ in the pERTBP. \begin{figure} \centering \includegraphics[width=.90\textwidth]{figure1.jpg} \caption{{\footnotesize Schematic representation of the physical meaning of the action-angle variables used for the Hamiltonian $H_b$ in~\eqref{eq:hbasic}. The plane $(u,v)$ corresponds to the `synodic' motion of the Trojan body, where $u=\lambda-\lambda'- \pi/3$ ($\lambda$ and $\lambda'$ correspond to the mean longitudes of the Trojan body and the primary, respectively) and $v=\sqrt{a}-1$ ($a$ is the semi-major axis of the Trojan and $a' = 1$ for the primary). Under the Hamiltonian $H_b$, the phase portrait can be represented by a Poincar\'{e} surface of section corresponding, e.g., to every time when the angle $\phi_f$ accomplishes a full cycle. The left panel shows the form of the projection of this section on the plane $(u,v)$. The central point P represents a stable fixed point corresponding to the short-period periodic orbit around $L_4$. The orbit has frequency $\omega_f$, while its amplitude increases monotonically with $Y_f$. The forced equilibrium corresponds to $u_0=0$, $Y_f=0$. The point P, however, has in general a shift to positive values $u_0>0$ for proper eccentricities $e_p$ larger than zero. On the surface of section, the frequency of libration around the periodic orbit is given by the synodic frequency $\omega_s$. Resonances, and their island chains, correspond to rational relations between the fast frequency $\omega_f$ and $\omega_s$. On the other hand, the plane $(W,V)= (\sqrt{-2y}\cos\delta\varpi,\sqrt{-2y}\sin\delta\varpi)$, with $y = \sqrt{a}(\sqrt{1-e^2}-1)$ and $\delta \varpi = \varpi - \varpi'$ the difference between the longitudes of perihelion of the Trojan and the primary (right panel), depicts the evolution of the Trojan body's eccentricity vector under the Hamiltonian $H_b$. The motion of the endpoint of the eccentricity vector can be decomposed into a circulation around the forced equilibrium, with angular frequency $g$, and a fast (of frequency $\omega_f$) `in-and-out' oscillation with respect to a circle of radius $e_p$, with an amplitude of order ${\cal O}(Y_f)$. All extra terms with respect to $H_b$ in the Hamiltonian \eqref{eq:h_rmpp} depend on the slow angles $(\phi,\phi')$ in the pERTBP, and also on the angles $\phi_j$, $j=1,\ldots,S$ in the RMPP. Thus, all these terms can only slowly modulate the dynamics under $H_b$.}} \label{fig:HBvar} \end{figure} We call the term $H_b$ in the Hamiltonian of Eq.~\eqref{eq:h_rmpp} the `basic Hamiltonian model' for Trojan motions in the 1:1 MMR.
Its detailed form is given in the Supplementary Online Material of~\cite{PaezEfthy2015}. We find \begin{equation}\label{eq:hbasic} H_b = -\frac{1}{2(1+v)^2}-v +(1+g')Y_f - g' Y_p - \mu {\cal F}^{(0)} (u,\phi_f,v,Y_f-Y_p;e_0')~~. \end{equation} The function ${\cal F}^{(0)}$ contains terms depending on the canonical pairs $(\phi_f,Y_f)$ and $(u,v)$. The former characterizes fast motions (with frequency $\omega_f \sim {\cal O}(1)$), while the latter characterizes the `long-period' synodic motions (with frequency $\omega_s \sim {\cal O}(\sqrt{\mu})$). On the other hand, since the angle $\phi$ (see Fig.~\ref{fig:HBvar}) is ignorable in $H_b$, the action variable $Y_p$ is an integral of the basic Hamiltonian. This also allows one to define a secular frequency via $g=\dot{\phi}=\partial H_b/\partial Y_p$. More precisely, we recover the well-known relations (e.g., \cite{Erdi-88}) \begin{equation}\label{eq:fast_freq} \omega_f \equiv \dot{\phi}_f = 1 - \frac{27}{8}\, \mu + g' + \ldots~~, \end{equation} \begin{equation}\label{eq:sin_freq} \omega_s \equiv \dot{\phi}_s = - \sqrt{\frac{27 \mu}{4}} + \ldots ~~, \end{equation} \begin{equation}\label{eq:sec_freq} g \equiv \dot{\phi} = \frac{27}{8}\, \mu - g' + \ldots~~. \end{equation} The higher-order corrections in Eqs.~\eqref{eq:fast_freq}, \eqref{eq:sin_freq}, \eqref{eq:sec_freq} can be recovered by an efficient normal form approach, as shown in Section 3 below. Three additional remarks concerning $H_b$ are in order: i) The constancy of $Y_p$ under $H_b$ allows one to define an approximation to the quasi-integral of the proper eccentricity $e_p$ (see~\cite{PaezEfthy2015}) via \begin{equation}\label{eq:prop_ecc} e_p = \sqrt{-2Y_p}~~. \end{equation} This approximation remains useful in the whole spectrum of models ranging from the CRTBP to the full RMPP. ii) In Eq.~\eqref{eq:hbasic}, the dependence of ${\cal F}^{(0)}$ on the actions $Y_p$ and $Y_f$ is exclusively via the difference $Y_f-Y_p$. This fact allows us to simplify some normal form computations, as shown in Subsection \ref{sec:feat_Hb} below. We can accordingly define an eccentricity parameter \begin{equation}\label{eq:fake_prop_ecc} e_{p,0} = \sqrt{-2Y} = \sqrt{2Y_f-2Y_p}~~. \end{equation} The quantity $e_{p,0}$ will be used below in labeling several solutions found via the study of $H_b$. iii) By construction, $H_{b}$ is formally identical in the RMPP and in the pERTBP, with the substitution $e'_0 \rightarrow e'$ and setting $g'=0$. Thus, the determination of the frequencies $\omega_f$, $\omega_s$ and $g$ based on a normal form manipulation of $H_b$ as below (Section \ref{sec:norm_Hb}) leads to equivalent results regardless of the number of additional perturbing bodies besides the primary. On the other hand, $H_{sec}$ in \eqref{eq:h_rmpp} gathers all the terms of $H$ depending on the slow (secular) angle $\phi$ (with frequency $g \sim {\cal O}(\mu)$), or, in the case of the RMPP, also on the slow angles $\phi'$, $\phi_j$, $j=1,\ldots,S$ (of frequencies ${\cal O}(\mu_j)$). As a consequence, in~\cite{PaezEfthy2015} it was proposed that the dynamics at secondary resonances can be approximated as a {\it slow modulation} of all the resonances produced by the basic model $H_b$, due to the additional influence of $H_{sec}$.
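A rough placement of the resonances discussed below can already be obtained from the leading-order terms of Eqs.~\eqref{eq:fast_freq}--\eqref{eq:sec_freq}. The following Python fragment is our own quick check (the neglected higher-order corrections shift these values, which is precisely what the normal form of Section 3 quantifies); it evaluates the three frequencies and the ratio $\omega_f/|\omega_s|$, whose near-integer values signal the dominant commensurabilities:
\begin{verbatim}
import numpy as np

def leading_frequencies(mu, gp=0.0):
    # leading-order terms only; higher-order corrections neglected
    omega_f = 1.0 - 27.0*mu/8.0 + gp
    omega_s = -np.sqrt(27.0*mu/4.0)
    g = 27.0*mu/8.0 - gp
    return omega_f, omega_s, g

for mu in (0.01, 0.0024, 0.0005):
    wf, ws, g = leading_frequencies(mu)
    print(mu, round(wf, 5), round(ws, 5), round(g, 5), round(wf/abs(ws), 2))
# ratio wf/|ws|: ~3.7 for mu = 0.01, ~7.8 for mu = 0.0024, ~17.2 for mu = 0.0005
\end{verbatim}
For $\mu=0.0024$, for instance, the ratio is close to 8, consistent with the dominance of the 1:8 resonance in the examples shown below.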
Considering the RMPP with $S$ bodies, the most general form of a planar secondary resonance is given by \begin{equation}\label{eq:sec_reson_conmensu} m_f \omega_f + m_s \omega_s + m g + m'g' + m_1 g_1 + \ldots + m_S g_S = 0~~, \end{equation} where $m_f$, $m_s$, $m$, $m'$, $m_j$ (with $j=1,\ldots,S$) are integers. Keeping the notation of~\cite{PaezEfthy2015}, the most important secondary resonances are those present in the basic Hamiltonian model $H_b$ already for $e'=0$, i.e.\ the resonances of the circular RTBP. These are denoted as the $m_f$:$m_s$ resonances, with commensurability relation \begin{equation}\label{eq:main_sec_res} m_f \omega_f + m_s \omega_s = 0 ~~. \end{equation} The particular case $m_f =1$ corresponds to the lowest-order resonances that can be found for a given value of the mass parameter $\mu$, which usually dominate the structure of the phase space. These are called the `main secondary resonances' $1$:$n$, where $n=m_s$. For values of $\mu$ between $0.01$ and $0.0005$, $n$ takes the values $4,5,6,\ldots,16$. On the other hand, we collectively refer to any other resonance of the ERTBP (involving all 3 frequencies $\omega_f$, $\omega_s$ and $g$), as well as to the more general cases of the RMPP (including the frequencies $g'$, $g_j$), as `transverse' resonances. \subsection{Limits of applicability of the basic model $H_b$}\label{sec:feat_Hb} The basic model $H_b$ represents a reduction of the number of degrees of freedom with respect to the original problem. Thus, we expect that its usefulness in approximating the full problem (ERTBP or RMPP) holds only to some extent. The following numerical examples aim to compare the dynamical behavior of the orbits under $H_b$ and under the full Hamiltonian. To this end, we compute and compare various phase portraits (surfaces of section) arising under the two Hamiltonians. We restrict ourselves to the comparison between $H_b$ and the full Hamiltonian of the ERTBP only. We thus set $e_0'= e'$, $g'=0$ and $S=0$. Then, all secular perturbations are accounted for by only one additional degree of freedom with respect to $H_b$, represented by the canonical pair $(\phi,Y_p)$. Integrating numerically the RMPP instead of the ERTBP is considerably more expensive. Still, it is arguable that the effect of the secular perturbations should remain qualitatively similar when more degrees of freedom, consisting only of slow action-angle pairs, are added, as in the Hamiltonian decomposition of Eq.~\eqref{eq:h_rmpp}. Our numerical integrations of the full Hamiltonian model (ERTBP) are performed in heliocentric Cartesian variables, in which the equations of motion are straightforward to express. Whenever needed, translation from Cartesian to the canonical variables appearing in~\eqref{eq:h_rmpp}, and vice versa, is done following the sequence of canonical transformations defined in~\cite{PaezEfthy2015}. On the other hand, for the basic Hamiltonian $H_b$ we have an explicit expression only in the latter variables. However, one can readily see that, for fixed $(u,v,\phi_f)$, all the initial conditions with fixed difference $Y_f-Y_p$ lead to the same orbit, independently of the individual values of $Y_f$ or $Y_p$. If we set $Y_f=Y_{f,ref}=0$ and $Y_p=Y_{p,ref}=-e_{p,ref}^2/2$ for one particular orbit chosen in advance, which we call the `reference orbit', we can specify a corresponding value of the energy $E=E_{ref}$ equal to the numerical value of $H_b$ for that orbit.
The reference orbit satisfies the condition $e_{p,ref}=e_{p,0}$, i.e., $e_{p,0}$ becomes equal to the modulus of the initial vector $\mathbf{e}-\mathbf{e}_{forced}$, where $\mathbf{e}=(e\cos\omega,e\sin\omega)$, and $\mathbf{e}_{forced} = (e'/2,e'\sqrt{3}/2)$. Now, keeping {\it both} $Y_p=Y_{p,ref}$ and $E=E_{ref}$ fixed, but altering $(u,v,\phi_f)$, allows us to solve the equation $E_{ref}=H_b$ for $Y_f$ and to specify new initial conditions for more orbits at the same energy as the reference orbit. However, we then find in general that the initial value of $Y_f$ for any of these new orbits satisfies $Y_f\neq 0$. In terms of the initial eccentricity vector, this implies that $e_{p,0}\neq e_{p,ref}$. The orbit so found is the same as the one obtained by setting $Y_p=-e_{p,0}^2/2\neq Y_{p,ref}$ and $Y_f=0$. For convenience, we formally proceed with the former process (keeping $E=E_{ref}$ and $Y_p=Y_{p,ref}$ fixed and adjusting $Y_f$ for different initial conditions). However, since each of these initial conditions has its own value $e_{p,0}$ of the proper eccentricity, we label all plots by $e_{p,0}$ instead of $e_p$, both in the FLI stability maps presented below and in those of~\cite{PaezEfthy2015}. Returning to our numerical computations, in order to choose a reference orbit we select one close to the short-period family around $L_4$~\cite{Rabe-68}. More precisely, we set $u=v=\phi_f=Y_f=0$ for the reference orbit, and consider different values of $Y_p=Y_{p,ref}$. Physically, this means choosing different energy levels $E=E_{ref}$ at which the reference orbit has a different proper eccentricity. Let us note that the existence of a central periodic orbit is itself a property of the basic model $H_b$; adding more degrees of freedom implies, instead, the existence of an invariant torus of dimension larger than one and smaller than the full number of degrees of freedom. Having selected $E_{ref}$ and $Y_{p,ref}$, we compute initial conditions for more orbits at the energy $E=E_{ref}$. More precisely, in each of the figures which follow, we define a set of $19$ initial conditions given by $u_j=0.05\times j$, $v_j=0$, $\phi_{f,j}=0$, for $j=0,\ldots,18$, with $Y_{f,j}$ computed as described above. With these initial conditions, we numerically integrate the orbits under the equations of motion of $H_b$, up to collecting, for each orbit, 500 points on the surface of section $\phi_f=0\,(\mathrm{mod}\:2\pi)$. The same set of initial conditions is integrated under the equations of motion of the full ERTBP, for a time equivalent to 500 revolutions of the primary, collecting about 490 points on the same surface of section. In the ERTBP, the surface of section is four-dimensional, but a two-dimensional projection on the plane $(u,v)$ allows comparisons with the corresponding section of the basic model $H_b$. As an additional comparison, we also compute the surface of section provided by an intermediate model between $H_b$ and the pERTBP. We construct a 3 d.o.f.\ Hamiltonian as follows: \begin{equation}\label{eq:hbsec} H_{b,sec} = H_b\,(Y_f,\phi_f,u,v,Y_p;\mu,e',e_{p,0}) + \langle F^{(1)} \rangle (u,v,Y_p,\phi;\mu,e',e_{p,0},Y_f)~~, \end{equation} where \begin{equation}\label{eq:f1ave} \langle F^{(1)} \rangle={1\over 2\pi}\int_0^{2\pi}H_{sec}d\phi_f~~. \end{equation} Explicit formulae for $\langle F^{(1)} \rangle$ can be found in the Supplementary Online Material of~\cite{PaezEfthy2015}. Such terms may depend on the slow angle $\phi$, but are independent of the fast angle $\phi_f$.
Hence, $H_{b,sec}$ contains some, but not all, of the secular terms of the disturbing function of the pERTBP. On the other hand, up to first order in the mass parameter $\mu$, the averaging~\eqref{eq:f1ave} yields the same Hamiltonian as the one produced by a canonical transformation eliminating all terms depending on the fast angle $\phi_f$. Thus, the model $H_{b,sec}$ captures the main effect of the secular terms, as discussed in~\cite{PaezEfthy2015}, which is a \emph{pulsation}, with frequency $g$, of the separatrices of all the secondary resonances induced by $H_b$. Since the modulation due to these secular terms is slow, far from secondary resonances we expect that an adiabatic invariant holds for initial conditions close to the invariant tori of $H_b$, thus yielding stable regular orbits. On the other hand, in~\cite{PaezEfthy2015} it was argued that close to secondary resonances the pulsation provokes a weak chaotic diffusion best described by the paradigm of modulational diffusion. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{mu0024ep04eprop01.png} \caption{Comparison of surfaces of section (section condition $\phi_f=0$) provided by the different models. The considered parameters are $\mu=0.0024$, $e'=0.04$ and $e_{p,ref}=0.01$. In pink points (upper left), we show the surface of section provided by $H_b$. In blue points (upper right), the one corresponding to $H_{b,sec}$. In purple points (lower left), the one corresponding to the pERTBP. In the lower right panel, we reproduce the FLI map of~\cite{PaezEfthy2015} corresponding to the physical parameters $\mu$ and $e'$ considered, with the most important secondary resonances indicated. The color scale of the FLI map is as follows: dark colors (purple) indicate regular orbits, while light colors (yellow) indicate chaotic orbits (see~\cite{PaezEfthy2015} for the exact FLI computation). The green line on the FLI map indicates the isoenergetic curve where the initial conditions are located.} \label{fig:plot-epr01} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{mu0024ep04eprop035.png} \caption{Same as in Fig.~\ref{fig:plot-epr01}, but for a higher parameter value $e_{p,ref}=0.035$. } \label{fig:plot-epr035} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{mu0024ep04eprop07.png} \caption{Same as in Fig.~\ref{fig:plot-epr01}, but for a still higher parameter value $e_{p,ref}=0.07$.} \label{fig:plot-epr07} \end{figure} Figures~\ref{fig:plot-epr01},~\ref{fig:plot-epr035} and~\ref{fig:plot-epr07} show an example of the comparison between the three models mentioned above. The physical parameters chosen for these plots are $\mu=0.0024$ (a value for which the 1:8 main secondary resonance is clearly visible) and $e'=0.04$. Figure~\ref{fig:plot-epr01} shows the surface of section $\phi_f = 0\,(\mathrm{mod}\:2\pi)$ corresponding to $e_{p,ref}=0.01$, Fig.~\ref{fig:plot-epr035} to $e_{p,ref}=0.035$ and Fig.~\ref{fig:plot-epr07} to $e_{p,ref}=0.07$. In each figure, the upper left plot (pink points) corresponds to the surface of section produced by the flow under the basic model $H_b$, the upper right plot (blue points) to the flow under $H_{b,sec}$, and the lower left plot (purple points) to the flow under the full Hamiltonian of the pERTBP. As additional information, we provide the FLI stability map corresponding to the same parameters $\mu$ and $e'$, which was computed in Fig. 8c of~\cite{PaezEfthy2015} (see that paper for details on the FLI computation).
On top of the FLI map, in green, we show the locus of initial conditions $(u,e_{p,0})$ on the surface of section whose orbits have constant energy $E=E_{ref}$. In Fig.~\ref{fig:plot-epr01}, in the approximation based on the model $H_b$, the absence of any dependence of the dynamics on the slow angle $\phi$ makes it possible to clearly display the short-period and synodic dynamics by means of the surface of section $\phi_f = 0\,(\mathrm{mod}\:2\pi)$, which, for $H_b$, is two-dimensional. In fact, for more complex models like $H_{b,sec}$ or the full pERTBP, the corresponding surface of section is four-dimensional and its 2D projection on the $(u,v)$ plane becomes blurred (top right and bottom left panels respectively). The blurring can be due partly to projection effects. However, we argue below that an important effect is caused also by the influence of the secular terms, absent in $H_b$, on the dynamics. Returning to the phase portrait of $H_b$, from it we can extract relevant information such as: i) the position of the central fixed point, corresponding to the crossing of the section by the short-period orbit, ii) several secondary resonances and the corresponding resonant islands of stability, and iii) the overall size of the libration domain of \emph{effective} stability. This phase portrait also allows us to understand the structure of the stability map. In the phase portrait, as we move from left to right along the line $x=0$, we encounter non-resonant tori, interrupted by thin chaotic layers and the islands of some secondary resonances, namely the resonances 1:8 (at $u \sim 0.55$) and 2:17 (at $u \sim 0.85$). Note, however, that no transverse secondary resonances can be seen in the $H_b$ portrait, since these resonances correspond, in general, to a non-resonant frequency ratio of the fast and synodic frequencies $\omega_f$ and $\omega_s$; except at resonance junctions, the exact resonance condition $m_f\omega_f + m_s\omega_s + m_g g=0$ for some non-zero $m_s$, $m_f$, $m_g$ implies, in general, non-commensurable values of $\omega_f$ and $\omega_s$. Since $g \ll \omega_s \ll \omega_f$, most transverse resonances can only accumulate close to the main secondary resonances, forming resonant multiplets, as confirmed by visual inspection of the stability maps (see also~\cite{PaezEfthy2015}). However, some isolated transverse resonances may be embedded in the main domain of stability, whose border is marked by the most conspicuous secondary resonance. In Fig.~\ref{fig:plot-epr01}, this domain extends up to about $u\approx 0.5$. In the stability map of Fig.~\ref{fig:plot-epr01}, the transverse resonances $[1,-8,k]$, with $k=-2,-1,1,2$, form a multiplet together with the conspicuous resonance $1$:$8$. Two of these transverse resonances ($k=2$ and $k=1$) are embedded in the main domain of stability. However, none of the transverse resonances is visible in the phase portrait of the basic model $H_b$. We now discuss the pulsation effect on the phase portrait due to the slow modulation induced by the secular terms. As shown in~\cite{PaezEfthy2015}, the amplitude of the secular terms depends on the values of $e'$ and $e_{p,0}$. For fixed $e'\neq 0$, the amplitude of the pulsation generated by such terms increases with $e_{p,0}$. For values of $e_{p,0}$ large enough, the pulsation modifies the whole behavior in phase space.
Since, along the line $x=0$, $e_{p,0}$ increases with $u$ (green curve in the lower-right panel of Fig.~\ref{fig:plot-epr01}), the amplitude of the pulsation increases as we move from the central fixed point outwards. In regions where the resonant web is dense enough, this pulsation causes all narrow transverse resonances in a multiplet to overlap, increasing the size of the chaotic domain and facilitating escape mechanisms. For the set of parameters of Fig.~\ref{fig:plot-epr01}, we see from the corresponding FLI map that this happens for values of $e_{p,0}$ greater than about $0.06$. Beyond this value, the effect induced by $H_{sec}$ implies that the blurring observed in the phase portraits (apart from the one of $H_b$) is not due just to projection effects but has a dynamical origin: the nature of the orbits changes as they are converted from regular to chaotic. Evidence of this phenomenon is found, e.g., in the case of the resonance 2:17. While the 2:17 stability islands are clearly seen in the surface of section of $H_b$, this resonance is not evident in the surfaces of section of the ERTBP and $H_{b,sec}$. The effect of the resonance's separatrix pulsation is that no corresponding libration domain is identifiable in the FLI map. This latter effect is more conspicuous in Figs. \ref{fig:plot-epr035} and \ref{fig:plot-epr07}, in which, by choosing a higher $e_{p,ref}$, we increase the level of proper eccentricities of all the orbits. In Fig.~\ref{fig:plot-epr035}, the FLI stability map shows large domains of chaos which are not observed in the phase portrait of $H_b$, but which appear in the phase portrait of the full model. The separatrix pulsation of the 1:8 resonance is not, however, large enough to wash out this resonance completely, which is therefore seen in all four panels of the plot. On the other hand, increasing still more the level of proper eccentricities (Fig.~\ref{fig:plot-epr07}) makes this pulsation large enough to introduce chaos at the position of the 1:8 resonance. This level of eccentricity marks the overall limit of validity of the approximation based on $H_b$ regarding the position of secondary resonances. Beyond this value, $H_b$ still represents the dynamics fairly well, but only inside the main librational domain of stability. We note also that the elimination of the main secondary resonance 1:8 by the separatrix pulsation is already present in the model $H_{b,sec}$ (compare the corresponding phase portraits in the three Figures \ref{fig:plot-epr01}, \ref{fig:plot-epr035}, \ref{fig:plot-epr07}). In conclusion, the pulsation mechanism induced by the secular terms in the Hamiltonian affects essentially the regions of phase space where resonances accumulate in the form of multiplets. For libration orbits, these are the regions beyond the main secondary resonance $1$:$n$, which always dominates the phase space. The regions interior to that resonance are not influenced considerably, and the representation of the dynamics via the basic model $H_b$ remains accurate there, even for high values of the proper eccentricity. The value of the proper eccentricity at which the separatrix pulsation of the $1$:$n$ resonance completely washes out this resonance marks the overall limit of approximation of the basic model. On the other hand, most orbits beyond that limit turn out to be chaotic and to escape the libration domain quickly, and are thus of lesser interest in applications related to Trojan or exo-Trojan objects.
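The numerical procedure behind the above surfaces of section is straightforward to reproduce in outline. The following schematic Python sketch is a deliberately simplified stand-in: the synodic motion is replaced by the pendulum-like model of Eq.~\eqref{eqcri}, and the fast angle $\phi_f$ is taken to advance uniformly, so that the section condition $\phi_f=0\,(\mathrm{mod}\:2\pi)$ reduces to stroboscopic sampling at multiples of $2\pi$. Being integrable, this stand-in yields only invariant curves; the island chains and chaotic layers of Figs.~\ref{fig:plot-epr01}--\ref{fig:plot-epr07} require the full coupling present in $H_b$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.0024

def rhs(t, s):
    theta, p = s
    return [p, -3*mu*np.sin(theta)*(1 - 2**(-1.5)*(1 - np.cos(theta))**(-1.5))]

def section(u0, n_points=500):
    # phi_f advances (here) uniformly, so the section phi_f = 0 (mod 2 pi)
    # amounts to sampling the flow at times t = 2 pi k
    t_sec = 2*np.pi*np.arange(n_points)
    sol = solve_ivp(rhs, (0.0, t_sec[-1]), [np.pi/3 + u0, 0.0],
                    t_eval=t_sec, rtol=1e-9, atol=1e-12)
    return sol.y[0] - np.pi/3, sol.y[1]   # (u, du/dt): a proxy for (u, v)

sections = [section(0.05*j) for j in range(19)]   # mimics the grid u_j = 0.05 j
\end{verbatim}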
\section{Normal form}\label{sec:norm_Hb} In~\cite{PaezLocat2015}, a new normalizing scheme was introduced for the Hamiltonian of the planar Circular Restricted Three Body Problem (pCRTBP). Here, we adapt the scheme in order to compute a normal form in the case of the basic model $H_b$ derived from the pERTBP. The particular application considered is the semi-analytic determination of the position of the secondary resonances in the plane of the Trojan body's proper elements. \subsection{Hamiltonian preparation}\label{sec:preparation} The novelty of the normalizing scheme introduced in~\cite{PaezLocat2015} lies in the way the scheme deals with the synodic degree of freedom, expressed in the Hamiltonian through the variables $(u,v)$. In order to obtain the dynamics in the synodic variables via a normal form, it is only necessary to average the Hamiltonian $H_b$ over the fast angle $\phi_f$. The novelty consists in retaining, in all normal form expansions, the original non-polynomial and non-trigonometric-polynomial functional dependence of the Hamiltonian on the synodic angle $u$. As pointed out in the introduction, this allows us to deal efficiently with the model's singular behavior at $u=-\pi/3$. We start by expressing the basic model $H_b$ in variables appropriate for introducing the normalization scheme of~\cite{PaezLocat2015}. The synodic degree of freedom is represented by the variables \begin{equation}\label{eq:shift_cent} v=x-x_0, \quad u=\tau-\tau_0~~, \end{equation} where \begin{equation}\label{eq:syn_dof} x = \sqrt{a}-1 \,,\quad\, \tau = \lambda-\lambda'~~, \end{equation} $a$ being the semi-major axis of the Trojan body, and $\lambda$, $\lambda'$ the mean longitudes of the Trojan body and the primary respectively. The constants $x_0$ and $\tau_0$ in \eqref{eq:shift_cent} give the position of the forced equilibrium of the Hamiltonian averaged over $\lambda'$ (see~\cite{PaezEfthy2015}). In the case of the pERTBP, in the vicinity of $L_4$, we have $x_0=0$, $\tau_0=\pi/3$. Finally, it proves convenient to introduce new canonical pairs: $(\phi_f,~{\cal Y}=Y_f-Y_p)$ and $(\theta=\phi+\phi_f,~Y_p)$. After these preliminary transformations, the basic model $H_b$ reads \begin{equation}\label{eq:hbasic_xtau} H_b = -\frac{1}{2(1+x)^2}-x + {\cal Y}+Y_p - \mu {\cal F}^{(0)}(\tau,\phi_f,x,{\cal Y};e')~~. \end{equation} The dependence of $H_b$ on $\tau$ is through terms of the form $\frac{\cos^{k_1} \tau} {(2-2\cos\tau)^{j/2}}$ or $\frac{\sin^{k_2} \tau}{(2-2\cos\tau)^{j/2}}$, $j=2n-1$, with $k_1$, $k_2$ and $n$ integers (see the Supplementary Online Material of~\cite{PaezEfthy2015}). Also, since the angle $\theta$ is ignorable, $Y_p$ is a constant that can be viewed as a parameter in $H_b$. In order to initialize the normalization procedure, we rewrite and expand the Hamiltonian in \eqref{eq:hbasic_xtau} by introducing modified Delaunay-Poincar\'{e} variables, as in~\cite{PaezLocat2015} \begin{equation}\label{eq:delau-garf-coord} \begin{aligned} x &\,,\qquad \tau\,\\ \xi &=\sqrt{2{\cal Y}}\cos\phi_f\,,\\ \eta&=\sqrt{2{\cal Y}}\sin\phi_f\,~~. \end{aligned} \end{equation} The new expression for the Hamiltonian reads \begin{equation}\label{eq:hbasic_xtauxieta} H_b(\tau,x,\xi,\eta,Y_p) = -\frac{1}{2(1+x)^2}-x + Y_p + \frac{\xi^2+\eta^2}{2} - \mu {\cal F}^{(0)} (\tau,x,\xi,\eta;Y_p,e')~~.
\end{equation}

Finally, we expand the Hamiltonian in terms of every variable except $\tau$, obtaining \begin{align} H_b (\tau,x,\xi,\eta,Y_p) =& -x + \sum_{i=0}^{\infty} \,(-1)^{i-1}(i+1)\, \frac{x^i}{2}\, +\, \frac{\xi^2+\eta^2}{2} \,+ \,Y_p \label{eq:hb_xtauxieta_exp}\\ +& \,\mu \sum_{\substack{ m_1,m_2,m_3\\ k_1,k_2,k_3,j}} a_{m_1,m_2,m_3,k_1,k_2,j}\, e'^{k_3} x^{m_1} \, \xi^{m_2}\, \eta^{m_3} \, \cos^{k_1}(\tau) \, \sin^{k_2}(\tau) \, \beta^j(\tau)~~, \nonumber \end{align} where $a_{m_1,m_2,m_3,k_1,k_2,j}$ is a rational coefficient and $\beta(\tau)=\frac{1}{\sqrt{2-2\cos\tau}}$. The Hamiltonian $H_b$ in~\eqref{eq:hb_xtauxieta_exp} represents the `normal form at the zero-th step of the normalizing scheme', i.e., before any normalization. We denote it by $H^{(1,0)}$.

\subsection{Normalizing scheme}\label{sec:norm_sche}

The normalizing algorithm defines a sequence of Hamiltonians by an iterative procedure. In order to simplify some of the concepts below, we define the class ${\cal P}_{s,l}$ as the set of functions whose expansion is of the form \begin{equation}\label{eq:classP} \sum_{2m_1+m_2+m_3=l}\, \, \sum_{\substack{k_1+k_2\leq l+4s-3\\ j\leq 2l+7s-6}} a_{m_1,m_2,m_3,k_1,k_2,j}\, e'^{k_3} x^{m_1} \, \xi^{m_2}\, \eta^{m_3} \, \cos^{k_1}(\tau) \, \sin^{k_2}(\tau) \,\beta^j(\tau)~~. \end{equation} Let $r_1,\,r_2$ be two integer counters, $1 \leq r_1 \leq R_1$ and $1 \leq r_2 \leq R_2$, with fixed $R_1,R_2\in \mathbb{N}$. We assume that at a generic normalizing step ($r_1$,$r_2-1$), the expansion of the Hamiltonian is given by \begin{equation}\label{eq:hr1r2-1} \begin{aligned} H^{(r_1,r_2-1)}(x,\xi,\tau,\eta,Y_p) = &\, Y_p+\frac{\xi^2 + \eta^2}{2} + \sum_{i=2}^{\infty} \alpha_i\, x^i \\ + &\, \sum_{s=1}^{r_1-1} \sum_{l=0}^{R_2} \mu^s Z_{s,l} \,( x, (\xi^2+\eta^2)/2,\tau ) \\ + &\, \sum_{l=0}^{r_2-1} \mu^{r_1} Z_{r_1,l}\, (x, (\xi^2+\eta^2)/2,\tau)\\ + &\, {\cal R}^{(r_1,r_2-1)} (x,\xi,\eta,\tau)~, \end{aligned} \end{equation} where $\alpha_i$ are real coefficients and the remainder ${\cal R}^{(r_1,r_2-1)}(x,\xi,\eta,\tau)$ is given by \begin{equation}\label{eq:Rr1-1r2-1} \begin{aligned} {\cal R}^{(r_1,r_2-1)} &\,(x,\xi,\eta,\tau) = \mu^{r_1} f_{r_1,r_2}^{(r_1,r_2-1)}(x,\xi,\eta,\tau) + \sum_{l=r_2+1}^{R_2} \mu^{r_1} f_{r_1,l}^{(r_1,r_2-1)}(x,\xi,\eta,\tau) \\ + &\, \sum_{s=r_1+1}^{\infty} \sum_{l=0}^{R_2} \mu^s f_{s,l}^{(r_1,r_2-1)} (x,\xi,\eta,\tau) + \sum_{s=1}^{\infty} \sum_{l=R_2+1}^{\infty} \mu^s f_{s,l}^{(r_1,r_2-1)}(x,\xi,\eta,\tau)~. \end{aligned} \end{equation} All the terms $Z_{s,l}$ and $f_{s,l}^{(r_1,r_2-1)}$ appearing in~\eqref{eq:hr1r2-1} consist of expansions including a \emph{finite} number of monomials of the class ${\cal P}_{s,l}$. More specifically, $Z_{s,l} \in {\cal P}_{s,l}$ $\forall\ 0\le l\le R_2\,,\ 1\le s<r_1\,$, $Z_{r_1,l}\in {\cal P}_{r_1,l}$ $\forall\ 0\le l<r_2\,$, $f_{r_1,l}^{(r_1,r_2-1)}\in {\cal P}_{r_1,l}$ $\forall\ l\ge r_2\,$, $f_{s,l}^{(r_1,r_2-1)}\in {\cal P}_{s,l}$ $\forall\ l>R_2\,,\ 1\leq s<r_1\,$ and $\forall\ l\ge 0,\ s>r_1\,$. In formula~\eqref{eq:hr1r2-1}, one can distinguish the terms in normal form from the remainder ${\cal R}$: the latter depends on $(\xi,\eta)$ in a generic way, while in the normal form terms $Z$ those variables appear only through the combination $(\xi^2+\eta^2)/2$.
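Before describing the normalization step itself, we note that the bookkeeping implied by the classes ${\cal P}_{s,l}$ is elementary to implement. The following Python sketch is our own illustration, not part of the algorithm's actual implementation: a monomial is stored by its integer exponents, the labels $(s,l)$ are assigned, and the membership constraints of~\eqref{eq:classP} are checked.
\begin{verbatim}
# Bookkeeping for the classes P_{s,l}: a monomial
#   mu^s e'^k3 x^m1 xi^m2 eta^m3 cos^k1(tau) sin^k2(tau) beta^j(tau)
# is represented by its integer exponents (illustrative sketch only).
def labels(s, m1, m2, m3):
    """Magnitude label s and total degree label l = 2*m1 + m2 + m3."""
    return s, 2 * m1 + m2 + m3

def in_class(s, l, m1, m2, m3, k1, k2, j):
    """Membership constraints of the class P_{s,l}."""
    return (2 * m1 + m2 + m3 == l
            and k1 + k2 <= l + 4 * s - 3
            and j <= 2 * l + 7 * s - 6)

# Example: the term mu^1 * x * xi * cos(tau) * beta^3(tau)
s, l = labels(1, 1, 1, 0)                  # -> (1, 3)
print(in_class(s, l, 1, 1, 0, 1, 0, 3))    # True
\end{verbatim}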
The $(r_1,r_2)$--th step of the algorithm formally defines the new Hamiltonian $H^{(r_1,r_2)}$ by applying the Lie series operator $\exp {\cal L}_{\mu^{r_1} \chi_{r_1,r_2}}$ to the previous Hamiltonian $H^{(r_1,r_2-1)}$, as follows% \footnote{We stress here that after each transformation we do not change the name of the canonical variables, in order to simplify the notation.} \begin{equation}\label{eq:hr1r2Lie} H^{(r_1,r_2)} = \exp \left({\cal L}_{\mu^{r_1}\chi_{r_1,r_2}}\right) H^{(r_1,r_2-1)}~~. \end{equation} The Lie series operator is given by \begin{equation}\label{eq:Lie_oper} \exp \left({\cal L}_{\chi} \right) \, \cdot \, = \sum_{j \geq 0} \frac{1}{j!} {\cal L}_{\chi}^{j} \; \cdot~~, \end{equation} where the Lie derivative ${\cal L}_{\chi} g = \{ g, \chi \}$ is defined in terms of the classical Poisson bracket $\{\cdot,\cdot\}$. The new generating function $\mu^{r_1}\chi_{r_1,r_2}$ is determined by solving the following homological equation with respect to the unknown $\chi_{r_1,r_2}= \chi_{r_1,r_2}(x,\xi,\tau,\eta)$: \begin{equation}\label{eq:homol_eq} {\cal L}_{\mu^{r_1} \chi_{r_1,r_2}} Z_{0,2} + \mu^{r_1} f_{r_1,r_2}^{(r_1,r_2-1)} = \mu^{r_1} Z_{r_1,r_2} ~~, \end{equation} where $Z_{0,2} = \frac{\xi^2 +\eta^2}{2}$ and $Z_{r_1,r_2}$ is the new term in the normal form, i.e. $Z_{r_1,r_2} = Z_{r_1,r_2} (x,\tau,(\xi^2+\eta^2)/2)$. In other words, $\mu^{r_1}\chi_{r_1,r_2}$ is determined so as to remove from the main perturbing term $\mu^{r_1} f_{r_1,r_2}^{(r_1,r_2-1)}$ the terms that do \emph{not} belong to the normal form. Thus, by construction, the new Hamiltonian $H^{(r_1,r_2)}$ inherits the structure of Eq.~\eqref{eq:hr1r2-1}. From the latter, we point out that the splitting of the Hamiltonian into sub-functions of the classes ${\cal P}_{s,l}$ organizes the terms in groups with the same order of magnitude $\mu^s$ and total degree $l/2$ (possibly half-integer) in the variables $x$ and ${\cal Y}=\frac{\xi^2+\eta^2}{2}$. In this way, we exploit the existence of the natural small parameters of the model in the normalizing procedure. Furthermore, after having omitted the constant term $\alpha_0\,$, we can set the Hamiltonian $H_b$ in~\eqref{eq:hb_xtauxieta_exp} as the Hamiltonian of the first normalizing step, $H^{(1,0)}$, according to~\eqref{eq:hr1r2-1}. The algorithm requires just $R_1\cdot R_2$ normalization steps, constructing the finite sequence of Hamiltonians \begin{equation}\label{eq:theHs} H^{(1,0)} = H_b, \, H^{(1,1)},\, \ldots,\,H^{(1,R_2)},\, H^{(2,1)},\, \ldots,\, H^{(R_1,R_2)}~~. \end{equation} Here, we add the prescription that $H^{(r_1,0)} = H^{(r_1-1,R_2)}\, \forall \, 1 < r_1 \leq R_1$. Then, we write the final Hamiltonian, distinguishing the normal form part from the remainder, as \begin{equation}\label{eq:HR1R2} H^{(R_1,R_2)}(x,\xi,\tau,\eta,Y_p) = {\cal Z}^{(R_1,R_2)} \left(x,\frac{(\xi^2+\eta^2)}{2}, \tau,Y_p \right) + {\cal R}^{(R_1,R_2)} (x,\xi,\tau,\eta)~~. \end{equation} At this point, a few features of the normal form ${\cal Z}^{(R_1,R_2)}$ deserve to be remarked. While its dependence on $x$ and $\tau$ remains generic, it depends on $\xi$ and $\eta$ \emph{only} through the combination $\frac{\xi^2+\eta^2}{2}$. That is, we have \begin{equation}\label{eq:HR1R2renamed} H^{(R_1,R_2)}(x,\tau,{\cal Y},\phi_f,Y_p) = {\cal Z}^{(R_1,R_2)}\left(x,\tau,{\cal Y},Y_p\right) + {\cal R}^{(R_1,R_2)} (x,\tau,{\cal Y},\phi_f)~~. \end{equation} The key remark is that $\phi_f$ becomes ignorable in the normal form, and, therefore, ${\cal Y}$ becomes an integral of motion of ${\cal Z}^{(R_1,R_2)}$.
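The essential content of the homological equation~\eqref{eq:homol_eq} is that, in the action-angle pair $({\cal Y},\phi_f)$ with $Z_{0,2}={\cal Y}$ and the usual bracket convention, one has ${\cal L}_{\chi} Z_{0,2} = -\partial\chi/\partial\phi_f$, so that the generating function is obtained by integrating the oscillating part of the perturbing term with respect to $\phi_f$. The following toy verification on a single sample term (our own illustration with sympy, not the series manipulator used for the actual computations) makes this explicit.
\begin{verbatim}
# One homological-equation step in action-angle variables (Y, phi_f),
# where Z_{0,2} = Y so that  L_chi Z_{0,2} = -d(chi)/d(phi_f).
import sympy as sp

Y, phi = sp.symbols('Y phi_f', real=True)
f = Y * sp.cos(phi)**2                              # sample perturbing term
Z = sp.integrate(f, (phi, 0, 2*sp.pi)) / (2*sp.pi)  # phi_f-average: normal form
chi = sp.integrate(f - Z, phi)                      # generating function
# Check the homological equation  -d(chi)/d(phi_f) + f = Z :
assert sp.simplify(-sp.diff(chi, phi) + f - Z) == 0
print(Z)     # Y/2
print(chi)   # Y*sin(phi_f)*cos(phi_f)/2
\end{verbatim}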
Then, the normal form can be viewed as a Hamiltonian of one degree of freedom depending on two constant actions, ${\cal Y}$ and $Y_p$, i.e. ${\cal Z}^{(R_1,R_2)}$ now represents a formally \emph{integrable} dynamical system. Of course, since the true system is not integrable, it is natural to expect that the normalization procedure diverges in the limit $R_1,R_2\rightarrow \infty$. The divergence corresponds formally to the fact that the size of the remainder function ${\cal R}^{(R_1,R_2)}$ cannot be reduced to zero as the normalization order tends to infinity. Then, the {\it optimal} normal form approximation corresponds to choosing the values of both integer parameters $R_1$ and $R_2$ so as to reduce the size of the remainder ${\cal R}^{(R_1,R_2)}$ as much as possible. In practice, there are computational limits that constrain the choice of the values of $R_1$ and $R_2$. In all subsequent computations, the values are $R_1=2$ and $R_2=4$, corresponding to a second order expansion and truncation in the mass parameter $\mu$ and fourth order in the total polynomial degree of $x$, $\xi$ and $\eta$. These normalization orders prove to be sufficient for the normal form to provide a good representation of the original Hamiltonian in the domain of regular motions. In particular, we now employ this property in order to compute the positions of different secondary resonances, based on the integrable approximation provided by our normal form.

\subsection{Application: determination of the location of resonances via the normal form} \label{sec:location}

Consider an orbit with initial conditions specified in terms of the two parameters $u=\tau-\tau_0$ and $e_{p,0}$, in the same way as in the stability maps of Figures \ref{fig:plot-epr01}, \ref{fig:plot-epr035} and \ref{fig:plot-epr07}. We make use of the normal form approximation $Z^{(R_1,R_2)}$ in~\eqref{eq:HR1R2renamed} in order to compute the values of the three main frequencies of motion for the given initial conditions. The computation proceeds by the following steps:

1) We first evaluate the synodic frequency $\omega_s$, i.e., the frequency of libration of the synodic variables $\tau$ and $x$. The normal form $Z^{(R_1,R_2)}$ leads to Hamilton's equations: \begin{equation}\label{eq:xdot} \frac{\textrm{d}x}{\textrm{d}t} = f (x,\tau;{\cal Y}) = -\frac{\partial Z^{(R_1,R_2)}}{\partial \tau}~~ \end{equation} and \begin{equation}\label{eq:taudot} \frac{\textrm{d}\tau}{\textrm{d}t} = g (x,\tau;{\cal Y}) = \frac{\partial Z^{(R_1,R_2)}}{\partial x}~~. \end{equation} For every orbit we can define the constant energy \begin{equation}\label{eq:Zhamilt} Z^{(R_1,R_2)}(x,\tau;{\cal Y},Y_p)-Y_p \equiv \zeta^{(R_1,R_2)}(x,\tau;{\cal Y})= {\cal E}~~. \end{equation} Note that since $Y_p$ appears only as an additive constant in $Z^{(R_1,R_2)}$, the function $\zeta^{(R_1,R_2)}$ does not depend on $Y_p$. Also, according to \eqref{eq:fake_prop_ecc} and \eqref{eq:delau-garf-coord}, we have ${\cal Y}=\frac{e_{p,0}^2}{2}$. Then, for a fixed value of ${\cal E}$, we can express $\tau$ as an explicit function of $x$, \begin{equation}\label{eq:taufunct} \zeta^{(R_1,R_2)}(x,\tau;{\cal Y}) = {\cal E} \quad \Longrightarrow \quad \tau = \tau({\cal E},x;{\cal Y})~~.
\end{equation} Replacing~\eqref{eq:taufunct} in~\eqref{eq:xdot}, we get \begin{equation}\label{eq:xdot2} \frac{\textrm{d}x}{\textrm{d}t} = f (x,\tau({\cal E},x;{\cal Y}) ;{\cal Y}) \quad \Longrightarrow \quad \text{d}t = \frac{\text{d}x}{f (x,\tau({\cal E},x;{\cal Y}) ;{\cal Y})}~~, \end{equation} whereby we can derive an expression for the synodic period $T_{syn}$, \begin{equation}\label{eq:synper_int} T_{syn} = \oint \frac{\text{d}x} {f (x,\tau({\cal E},x;{\cal Y}) ;{\cal Y})}~~, \end{equation} and thus the synodic frequency \begin{equation}\label{eq:synfreq} \omega_{s} = \frac{2\pi}{T_{syn}}~~. \end{equation} In practice, \eqref{eq:taufunct} is hard to invert analytically, and hence the integral \eqref{eq:synper_int} cannot be computed explicitly. We thus compute both expressions numerically, either on grids of points along the associated invariant curves on the plane $(\tau,x)$, or by integrating \eqref{eq:xdot2} numerically as a first order differential equation.

2) We now compute the fast and secular frequencies $\omega_f$ and $g$. To compute $\omega_f$, we use the equation \begin{equation}\label{eq:fastfreq1} \omega_{f} = \, \frac{1}{T_{syn}} \int_{0}^{T_{syn}} \frac{\textrm{d}\phi_f}{\textrm{d}t} \, \textrm{d}t = \, \frac{1}{T_{syn}} \int_{0}^{T_{syn}} \frac{\partial Z^{(R_1,R_2)} (x,\tau;{\cal Y})}{ \partial {\cal Y}} \, \textrm{d}t~~. \end{equation} Replacing~\eqref{eq:xdot2} in~\eqref{eq:fastfreq1}, we obtain an explicit formula for the fast frequency \begin{equation}\label{eq:fastfreq3} \omega_{f} = \frac{1}{T_{syn}} \oint \frac{1}{f (x,\tau({\cal E},x;{\cal Y}) ;{\cal Y})} \, \frac{\partial Z^{(R_1,R_2)} (x,\tau({\cal E},x;{\cal Y});{\cal Y})} { \partial {\cal Y}}\, \textrm{d}x~~. \end{equation} Since $Z^{(R_1,R_2)}(x,\tau;{\cal Y},Y_p) = Y_p + \zeta^{(R_1,R_2)}(x,\tau;{\cal Y})$, we find $\dot{\theta}=\partial Z^{(R_1,R_2)}/\partial Y_p=1$; recalling that $\theta=\phi+\phi_f$, this implies $g=\dot{\phi}=1-\omega_f$.

All the frequencies are thus functions of the labels ${\cal E}$ and ${\cal Y}$, which, in the integrable normal form approximation, label the proper libration and the proper eccentricity of the orbits. In the normal form approach one has $e_{p,0}=e_p=$~const, implying ${\cal Y}=e_p^2/2$. If, as in~\cite{PaezEfthy2015}, we fix a scanning line of initial conditions $x_{in} = B u_{in}$, with $u_{in}=\tau_{in}-\tau_0$ and $B$ a constant, the energy ${\cal E}$, for fixed $e_p$, becomes a function of the initial condition $u_{in}$ only. Thus, $u_{in}$ represents an alternative label of the proper libration (see Section 3 of~\cite{PaezEfthy2015} for a detailed discussion of this point). With these conventions, all three frequencies become functions of the labels $(u_{in},e_p)$. A generic resonance condition then reads \begin{equation}\label{eq:rescond_functPhi} \Phi_{m_f,m_s,m}(e_p,u_{in}) = m_f \omega_f(e_p,u_{in}) + m_s \omega_s(e_p,u_{in})+ m\, g(e_p,u_{in})=0~~. \end{equation} For a fixed resonance vector $(m_f,m_s,m)$, Eq.~\eqref{eq:rescond_functPhi} can be solved by root-finding, thus specifying the position of the resonance on the plane of the proper elements $(u_{in},e_p)$. \begin{figure}[t] \centering \includegraphics[width=.75\textwidth]{figurefreqs.png} \caption{Evolution of the frequencies as functions of $u$. In the upper panel, $m_f \omega_f$ (red square points) and $-m_s\omega_s$ (blue triangle points). In the lower panel, the evolution of the function $m_f\omega_f+m_s\omega_s$ (black curve). The arrows denote the point where the frequencies satisfy the resonance condition $m_f\omega_f+m_s\omega_s=0$, giving the position of the resonance in terms of $u$.
For this example, we choose the resonance $1$:$8$, corresponding to $m_f=1$, $m_s=8$, $\mu=0.0024$, $e'=0.04$ and a representative value $e_{p,0}=0.05$.} \label{fig:freq_fig} \end{figure}

As an example, Fig.~\ref{fig:freq_fig} shows $\omega_f$ and $\omega_s$, as well as the function $\Phi_{1,8,0}(e_p,u_{in})$, as functions of $u_{in}$, for the parameters $\mu=0.0024$, $e'=0.04$ and a fixed value $e_p=0.05$. The arrow in the lower panel marks the position of the resonance. Changing the value of $e_p$ in the same range as the one considered in our numerical FLI stability maps ($0<e_{p,0}<0.1$), we specify $u_{in}$ all along the locus of the resonance projected onto the stability map. Repeating this computation for several transverse resonances $(m_f,m_s,m)$, we are able to trace the location of each of them.

In order to test the accuracy of the above method, we compare the results of the semi-analytical estimation with the positions of the resonances extracted from the FLI maps computed in~\cite{PaezEfthy2015}. Under the assumption that the local minimum of the FLI in the vicinity of a resonance gives a good approximation of the resonance center, we study the curves of the FLI $\Psi$ as a function of $u$, for a fixed value of $e_{p,0}$. Figure~\ref{fig:linesFLI.png} gives an example for $\mu=0.0031$, $e'=0.04$, $e_{p,0} = 0.015$, where we choose four candidates as centers of the resonances $(1,7,1)$, $1$:$7$, $(1,7,-1)$ and $(1,7,-2)$. We confirm the resonant character of these orbits also by performing a numerical Frequency Analysis~\cite{Laskar-04}. By changing the value of $e_{p,0}$ along the interval $[0,0.1]$, we can depict the centers of the resonances on top of the FLI maps.

\begin{figure}[h] \centering \includegraphics[width=.95\textwidth]{linesFLI.png} \caption[Local minima of the FLI as tracers of the resonance center]{FLI $\Psi$ as a function of $u$, for the fixed parameters $\mu~=~0.0031$, $e'~=~0.04$ and $e_{p,0}~=~0.015$ (left panel). The local minima give a good approximation of the position of the centers of each resonance. The orbits whose corresponding FLI values are plotted in the left panel lie on the green line on top of the FLI map (right panel). The confirmation of each resonance is done by frequency analysis.} \label{fig:linesFLI.png} \end{figure} \begin{SCfigure} \centering \includegraphics[width=.60\textwidth]{mu0031eprim04.png} \caption[Main and transverse secondary resonances located by $Z^{(R_1,R_2)}$ and by FLI $\Psi$ minima - 1]{Main and transverse secondary resonances located by $Z^{(R_1,R_2)}$ (yellow) and by the estimation of FLI $\Psi$ minima (green). In this example, $\mu=0.0031$, $e'=0.04$, $m_f=1$, $m_s=7$, $m=0,\pm 1,\pm 2$. Labels indicate the corresponding resonance in each case. \vspace{0.8cm}} \label{fig:transverses1.png} \end{SCfigure} \begin{SCfigure} \centering \includegraphics[width=.60\textwidth]{mu0024eprim06.png} \caption[Main and transverse secondary resonances located by $Z^{(R_1,R_2)}$ and by FLI $\Psi$ minima - 2]{Same as Fig.~\ref{fig:transverses1.png}, for $\mu=0.0024$, $e'=0.06$, and $m_f=1$, $m_s=8$, $m=0,\pm 1,\pm 2,3$. \vspace{1.2cm}} \label{fig:transverses2.png} \end{SCfigure} \begin{SCfigure} \centering \includegraphics[width=.60\textwidth]{mu0014eprim02.png} \caption[Main and transverse secondary resonances located by $Z^{(R_1,R_2)}$ and by FLI $\Psi$ minima - 3]{Same as Fig.~\ref{fig:transverses1.png}, for $\mu=0.0014$, $e'=0.02$, and $m_f=1$, $m_s=11,12$, $m=0$.
\vspace{1.2cm}} \label{fig:transverses3.png} \end{SCfigure}

\begin{table}[h] \begin{center} \begin{tabular}{|c c c c c|} \hline Resonance & $\mu$, $e'$ & $\overline{u_{{\cal Z}}}$ & $\overline{u_{\Psi}}$ & $\overline{\delta u_{in}}$ \\ \hline $1$:$7$ & $0.0031$, $0.04$ & 0.453908 & 0.463308 & 2.129422$\snot[-2]$ \\ $(1,7,1)$ & $''$ & 0.377456 & 0.380947 & 1.417910$\snot[-2]$ \\ $(1,7,2)$ & $''$ & 0.306036 & 0.312011 & 1.880279$\snot[-2]$ \\ $(1,7,-1)$& $''$ & 0.527218 & 0.554430 & 4.885329$\snot[-2]$ \\ $(1,7,-2)$& $''$ & 0.593373 & 0.618057 & 3.964370$\snot[-2]$ \\ $1$:$8$ & $0.0024$, $0.06$ & 0.524485 & 0.535153 & 1.993063$\snot[-2]$ \\ $(1,8,1)$ & $''$ & 0.465475 & 0.464924 & 6.377401$\snot[-3]$ \\ $(1,8,2)$ & $''$ & 0.406439 & 0.412246 & 1.605145$\snot[-2]$ \\ $(1,8,3)$ & $''$ & 0.374879 & 0.385020 & 2.617987$\snot[-2]$ \\ $(1,8,-1)$& $''$ & 0.587834 & 0.616093 & 4.572688$\snot[-2]$ \\ $(1,8,-2)$& $''$ & 0.646464 & 0.679154 & 4.796435$\snot[-2]$ \\ $1$:$11$ & $0.0014$, $0.02$ & 0.367663 & 0.370842 & 9.264243$\snot[-3]$ \\ $1$:$12$ & $''$ & 0.482117 & 0.486631 & 1.021940$\snot[-2]$ \\ \hline \end{tabular} \end{center} \caption{Averaged values of $u_{{\cal Z}}$, $u_{{\Psi}}$ and $\delta u_{in}$ for the resonances in Figures~\ref{fig:transverses1.png}, \ref{fig:transverses2.png} and~\ref{fig:transverses3.png}.} \label{tab:errors} \end{table}

Figures~\ref{fig:transverses1.png},~\ref{fig:transverses2.png} and~\ref{fig:transverses3.png} show examples of these computations, for the parameters $\mu=0.0031$ and $e'=0.04$, $\mu=0.0024$ and $e'=0.06$, and $\mu=0.0014$ and $e'=0.02$, respectively. The normal form predictions are superposed as yellow lines upon the underlying FLI stability maps, while the centers of each resonance, as extracted from the FLI maps, are denoted by green curves. Due to the numerical noise in the FLI curves, it is not possible to clearly extract the position of the resonance centers for all values of $e_{p,0}$, while a semi-analytic estimation (with varying levels of accuracy) is always possible. At any rate, in Figs.~\ref{fig:transverses1.png}-\ref{fig:transverses3.png}, we compare the positions of the resonances only in those cases where both methods provide clear results. Table~\ref{tab:errors} summarizes the results for the location of the centers ($u_{{\cal Z}}$, $u_{\Psi}$) and the averaged relative errors ($\delta u_{in} = \frac{|u_{{\cal Z}} - u_{\Psi}|}{u_{\Psi}}$) for the resonances shown in the corresponding figures. Regarding the overall performance of the estimation, the level of approximation is very good for relatively low values of $\mu$, $e_p$ and $u_{in}$, while the error in the predicted position of the resonance increases to a few percent for larger values of those parameters, with an upper (worst) value of $6\%$ (see Table~\ref{tab:errors}). This is the expected behavior for a normal form method, whose approximation deteriorates for higher values of the method's small parameter(s). Independently of this fact, the normal form approach is based on the use of the basic model $H_b$ as the starting Hamiltonian. This confirms that the basic Hamiltonian approximates well the fast and synodic dynamics of the ERTBP. Additionally, the fact that we do not expand in terms of $\tau$ allows us to retain accurate information about higher order harmonics.
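The root-finding step of Eq.~\eqref{eq:rescond_functPhi} can be sketched as follows. In the Python snippet below (a minimal sketch: the frequency functions are placeholder interpolants, not the actual normal form quadratures, and the numerical values, including the sign convention of the resonance vector, are illustrative only), the position of a resonance $(m_f,m_s,m)$ is found by bracketing sign changes of $\Phi$ on a grid of $u_{in}$ and refining with Brent's method.
\begin{verbatim}
# Locate a resonance (m_f, m_s, m) by root-finding on
# Phi(u) = m_f*w_f(u) + m_s*w_s(u) + m*g(u) for fixed e_p.
import numpy as np
from scipy.optimize import brentq

def locate_resonance(mf, ms, m, omega_f, omega_s, g_sec, u_grid):
    """Return all roots of Phi on the grid of u_in values."""
    Phi = lambda u: mf * omega_f(u) + ms * omega_s(u) + m * g_sec(u)
    vals = np.array([Phi(u) for u in u_grid])
    roots = []
    for a, b, fa, fb in zip(u_grid[:-1], u_grid[1:],
                            vals[:-1], vals[1:]):
        if fa * fb < 0:        # sign change: a root lies in (a, b)
            roots.append(brentq(Phi, a, b))
    return roots

# Placeholder frequencies, qualitatively mimicking the Trojan case:
# w_f ~ O(1), w_s ~ O(sqrt(mu)), and g = 1 - w_f.
omega_f = lambda u: 1.0 - 0.05 * u
omega_s = lambda u: 0.13 * (1.0 - 0.5 * u)
g_sec   = lambda u: 1.0 - omega_f(u)
print(locate_resonance(1, -8, 0, omega_f, omega_s, g_sec,
                       np.linspace(0.0, 1.0, 201)))
\end{verbatim}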
Finally, by using the relation between the fast action $Y_f$ and the secular action $Y_p$, it is possible to estimate, via $H_b$, the value of the secular frequency $g$ and, hence, to determine also the position of transverse resonances in the plane of the proper elements, even though these resonances have no `width' in the dynamics under $H_b$.

\section{Conclusions}

Our main results in this work can be summarized as follows:

1) We have demonstrated the efficiency of the normal form approach introduced in~\cite{PaezLocat2015} for determining the position of resonances in the space of proper elements in the tadpole domain of Trojan motions. As discussed in Section 1, the main advantage of the new approach is that it avoids series expansions with respect to the synodic coordinates around the Lagrangian equilibrium points $L_4$ and $L_5$. The latter expansions suffer from poor convergence. On the contrary, the method proposed here circumvents this poor convergence, and even relatively low order expansions can give results accurate down to an error of only a few percent.

2) We have applied the above normal form approach to a Hamiltonian model called `the basic model' in~\cite{PaezEfthy2015}. This model allows one to efficiently separate the secular part of the Hamiltonian from the part representing the dynamics in the fast and synodic degrees of freedom. We should emphasize here that in the case of the 1:1 Mean Motion Resonance (MMR) this separation is non-trivial and proceeds along different lines than in the case of other mean motion resonances. This is due to the non-trivial nature of the forced equilibrium at the 1:1 MMR. Yet, as detailed in Section 2 above, the `basic model' allows one to study the dynamics on the fast (${\cal O}(1)$) and intermediate (${\cal O}(\sqrt{\mu})$) frequency scales in a unified way, independently of the number of primary disturbing bodies in the system. As shown in Section 3, normalizing the basic model turns out to be sufficient for most analytical predictions regarding the dynamics on these timescales.

The present methods can be easily adapted to two cases: i) Trojan motions off the plane (spatial ERTBP or RMPP), and ii) a time-varying configuration of the $S$ primaries, beyond the quasi-periodic secular variations of Eq.~\eqref{eq:h_rmpp}. Regarding the long term stability, as well as the possibility of captures or escapes of small Trojan bodies (asteroids and/or hypothetical exo-planets), the authors of~\cite{RobBod-09} demonstrated that a crucial role is played by resonances crossing the Trojan domain during the phase of planetary migration. In this case, it would be desirable to be able to specify the time-varying locus of the secondary resonances via analytical techniques. Let us note here that the depletion rate of a Trojan swarm along secondary resonances is, in principle, related to the size of the remainder function of the normal form proposed in Section 3. In simple Hamiltonian models, it has been found that the diffusion rate scales as a power-law of the size of the remainder function (see~\cite{Efthy-08},~\cite{EftHar-13}). The degree to which such laws are applicable in a physical context like the co-orbital resonance is unknown, and this question poses a possible extension of the present work.

\vspace{0.5cm} \noindent {\bf Acknowledgements:} During this work, R.I.P.
was supported by the Astronet-II Marie Curie Training Network (PITN-GA-2011-289240) and by the project ``Dynamics of the celestial bodies in the neighborhood of the Lagrangian points'' of the University of Rome ``Tor Vergata''.

\section*{Appendix}

The variables corresponding to the three degrees of freedom appearing in the expression of the basic Hamiltonian $H_b$ in Eq.~\eqref{eq:hbasic}, namely $(u,v)$, $(Y_f,\phi_f)$ and $(Y_p,\phi_p)$, are given in terms of the orbital elements by: \begin{equation} u = \lambda - \lambda' - \frac{\pi}{3}~~, \end{equation} \begin{equation} v = \sqrt{a} - 1~~, \end{equation} \begin{displaymath} \beta = \omega - \phi'~~, \end{displaymath} \begin{displaymath} y = \sqrt{a} \left( \sqrt{1-e^2} -1 \right)~~, \end{displaymath} \begin{displaymath} V = \sqrt{-2y} \sin \beta - \sqrt{-2y_0} \sin \beta_0~~, \end{displaymath} \begin{displaymath} W = \sqrt{-2y} \cos \beta - \sqrt{-2y_0} \cos \beta_0~~, \end{displaymath} \begin{displaymath} Y = - \left( \frac{W^2 + V^2}{2} \right)~~, \end{displaymath} \begin{equation} \phi = \arctan \left( \frac{V}{W} \right)~~, \end{equation} \begin{equation} \phi_f = \lambda' - \phi~~, \end{equation} \begin{equation} Y_f = \int \frac{\partial E}{\partial \lambda'} \mathrm{d} t + v~~, \end{equation} \begin{equation} Y_p = Y - Y_f~~, \end{equation} where $\lambda$, $\omega$, $a$ and $e$ are the mean longitude, the longitude of the perihelion, the semi-major axis and the eccentricity of the Trojan body, $\lambda'$ and $\phi' = \omega'$ are the mean longitude and the longitude of the perihelion of the perturber, $\beta_0 = \pi/3$, $y_0 = \sqrt{1-e'^2} -1$, and $E$ represents the total energy of the Trojan as computed from Eq.~\eqref{eq:h_rmpp} (see \cite{PaezEfthy2015} for further details on the construction).
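For the reader's convenience, the purely algebraic part of the above chain of transformations is straightforward to code. The following Python sketch is our own transcription of the formulas; since $Y_f$ involves a time integral of $\partial E/\partial \lambda'$ along the orbit, it must be accumulated during the numerical integration and is therefore taken here as an externally supplied quantity.
\begin{verbatim}
# From osculating elements to the variables (u, v, Y, phi) of the
# appendix.  arctan2 is used in place of arctan to resolve the quadrant.
import numpy as np

def hamiltonian_variables(lam, lam_p, omega, omega_p, a, e, e_p):
    u = lam - lam_p - np.pi / 3.0
    v = np.sqrt(a) - 1.0
    beta = omega - omega_p
    y = np.sqrt(a) * (np.sqrt(1.0 - e**2) - 1.0)
    beta0 = np.pi / 3.0
    y0 = np.sqrt(1.0 - e_p**2) - 1.0
    V = np.sqrt(-2.0*y) * np.sin(beta) - np.sqrt(-2.0*y0) * np.sin(beta0)
    W = np.sqrt(-2.0*y) * np.cos(beta) - np.sqrt(-2.0*y0) * np.cos(beta0)
    Y = -(W**2 + V**2) / 2.0
    phi = np.arctan2(V, W)
    return u, v, Y, phi

# phi_f = lambda' - phi and Y_p = Y - Y_f then follow, once Y_f has been
# accumulated along the numerically integrated orbit.
print(hamiltonian_variables(1.2, 0.1, 0.4, 0.0, 1.001, 0.05, 0.04))
\end{verbatim}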
{ "redpajama_set_name": "RedPajamaArXiv" }
4,613
Hamtramck is a city in the U.S. state of Michigan. According to the 2010 census, it had a population of 22,423.

Demographics

According to the 2010 census, 22,423 people lived in the city, which is 553 (2.4%) fewer than in 2000.

See also

List of cities in the United States by population
Largest cities in the United States by decade
{ "redpajama_set_name": "RedPajamaWikipedia" }
304
{"url":"http:\/\/tex.stackexchange.com\/questions\/68099\/includegraphics-clipping-and-trim-is-squishing-image","text":"# \\includegraphics clipping and trim is squishing image\n\nClipping keeps squishing my image.\n\nMy command is:\n\n``````\\includegraphics[keepaspectratio=false,clip=true,trim=90px 0 0 0]{blue.jpg}\n``````\n\nBut I'm getting:\n\nWhere the original image is:\n\nWhy can't I clip or trim an image without squishing? There is a part of an image that I want to cut off without having to go into a photo editor to do it.\n\n-\nDoes the top answer here help at all? \u2013\u00a0Scott H. Aug 21 '12 at 18:15\nIf you don't want that the image is distorted, why do you set `keepaspectratio` to false? \u2013\u00a0Ulrike Fischer Aug 21 '12 at 18:28\nBecause aspect ratio refers to width\/height. The aspect ratio of the clipped image will be smaller since I reduced the width. \u2013\u00a0bobobobo Aug 21 '12 at 18:39\nClipping should work, but @Ulrike is right, `keepaspectratio` is not needed here and should not be used. Also note that the `px` unit is a `pdftex` extension and uses a fixed (but configurable) density which might not be correct for this particular JPG. Sometimes the images metadata are not fully correct, leading to a wrong display with LaTeX. \u2013\u00a0Martin Scharrer Aug 21 '12 at 18:50\nWell imho the key should either be not necessary or have the value true. But why do you use the key at all? Do you set the width and height key globally? \u2013\u00a0Ulrike Fischer Aug 21 '12 at 18:54\n\nIt is clearly a bug in the driver for package `graphicx`:\n\n\u2022 `pdftex.def`: ok.\n\u2022 `dvips.def`: ok for PostScript images, but clipping is not supported for bitmap images.\n\u2022 `xetex.def`: Clipping is not supported at all.\n\u2022 `dvipdfm.def`: The image is not trimmed, but distorted in the final area.\n\u2022 `dvipdfmx.def`: The whole image is put in the final area without distortion, but empty space is put above the small image.\n\nA remark to `keepaspectratio`: It has a meaning only if both the `width` and `height` are specified. Thus the setting and values of `keepaspectratio` does not matter here.\n\nThere is a solution for `dvips.def`, `dvipdfm.def` and `dvipdfmx.def` if `pdfTeX` is used as TeX compiler (for DVI mode). Package `bmpsize` fixes as side effect the defective drivers. And the package improves the bitmap inclusion making separate bounding box files obsolete. The driver `xetex.def` cannot be fixed this way, because XeTeX misses primitives from pdfTeX (especially `\\pdffiledump`), needed by `bmpsize`.\n\n``````\\usepackage[dvipdfm]{graphicx}\n\\usepackage{bmpsize}\n``````\n-\nIs there any way these issues can be fixed in the drivers themselves? For example, it's possible to do clipping for XeTeX not at the engine level but using the `xdvipdfmx` driver. [This also reminds me that I guess I should write some LaTeX3 driver code for picture importing :-)] \u2013\u00a0Joseph Wright Aug 21 '12 at 21:22\nClipping can be done via page operators or form xobjects. Someone has written clipping support for xetex.def and posted it to comp.text.tex, but I do not know the current state. The drivers seems not to be actively maintained. \u2013\u00a0Heiko Oberdiek Aug 21 '12 at 21:58\n'Someone' in the context of `xetex.def` would be me, with some prodding from Martin Scharrer :-) I was under the impression that you were in charge of the driver code, hence asking. Obviously I was mistaken. \u2013\u00a0Joseph Wright Aug 21 '12 at 22:00\nI am maintaining pdftex.def. 
Could you contact the maintainers of xetex.def? \u2013\u00a0Heiko Oberdiek Aug 21 '12 at 22:26","date":"2015-12-01 07:57:24","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.844046950340271, \"perplexity\": 2065.2296754803847}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-48\/segments\/1448398465089.29\/warc\/CC-MAIN-20151124205425-00225-ip-10-71-132-137.ec2.internal.warc.gz\"}"}
The coat of arms of Shchastya is, alongside the flag, an official symbol of the town of Shchastya in the Novoaidar Raion of Luhansk Oblast.

Description

The coat of arms has the form of a Spanish-type shield divided vertically into two equal parts of green and red. The lower part depicts a body of water symbolizing the Siverskyi Donets. Above the water rises a yellow sun, against which stands a stylized image of the Luhansk thermal power plant with a pylon of an overhead power transmission line. The shield is framed by a blue cartouche and crowned with a three-towered brick crown. In the lower part of the cartouche is the town's name in Russian.

See also

Flag of Shchastya
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,304
Package fixed2free.integration
Class MockSourceFileInputStream

Hierarchy:
  java.lang.Object
    fixed2free.integration.AbstractSourceFileInputStream
      fixed2free.integration.MockSourceFileInputStream

All implemented interfaces: fixed2free.integration.ISourceFileInputStream

public class MockSourceFileInputStream extends AbstractSourceFileInputStream implements ISourceFileInputStream

This class allows me to test without being connected to the AS400 by conforming to the same API as the connected version does.

Author: Eric N. Wilson

Fields inherited from interface fixed2free.integration.ISourceFileInputStream: MAP_KEY_FILE, MAP_KEY_LIBRARY, MAP_KEY_MEMBER

Constructor:
  MockSourceFileInputStream()

Methods:

  org.antlr.v4.runtime.ANTLRInputStream readIFSFile(com.ibm.as400.access.AS400 as400, java.lang.String ifsPath) throws java.io.IOException
    Reads a file based on the IFS Path and returns an ANTLRInputStream. Specified by readIFSFile in interface ISourceFileInputStream.
    Parameters: as400 - Required AS400 connection; ifsPath - Full path to the file member to be read (something like "/QSYS.LIB/QGPL.LIB/QRPGLESRC.FILE/THEMEMBER.MBR")
    Returns: An ANTLR Input Stream suitable for parsing

  org.antlr.v4.runtime.ANTLRInputStream readQSYSLIBFile(com.ibm.as400.access.AS400 as400, java.lang.String library, java.lang.String file, java.lang.String member) throws java.io.IOException
    Reads a file based on the library, File, Member passed in and returns an ANTLRInputStream. Used in the expansion of copy members in RPG code. Specified by readQSYSLIBFile in interface ISourceFileInputStream.
    Parameters: as400 - Required AS400 connection; library - Library the file lives in; file - Name of the file; member - The member of the file
    Returns: An ANTLR Input Stream suitable for parsing

Methods inherited from class fixed2free.integration.AbstractSourceFileInputStream: parseQSYSFilePath
{ "redpajama_set_name": "RedPajamaGithub" }
9,568
Golubac (Cyrillic: Голубац, Romanian: Golumbei, Turkish: Güvercinlik, Hungarian: Galambóc) is a town in northeastern Serbia, in the Braničevo District, and the seat of the Golubac municipality. It lies on the right bank of the Danube. In 2011 it had a population of 1,653.

History and tourist attractions

Thanks to its numerous archaeological sites, the Đerdap National Park and the Golubac fortress, the area around Golubac is frequently visited by tourists. The town is popular for tourism and sailing, and the Iron Gate National Park is known for its beauty and its hunting grounds. Local archaeological excavations near Trajan's Bridge and along the Danube up to the Iron Gate reveal the remains of buildings from that emperor's time. Nearby are also the ruins of the Roman fort Diana. The Golubac fortress was built in the 14th century and lies 4 km downstream. A local Polish connection is the history of the defence of this fortress during the 1428 war with the Ottoman Empire, in which the Polish knight Zawisza Czarny of Garbów, of the Sulima coat of arms, distinguished himself by his bravery: covering the retreat of Sigismund of Luxembourg's army, he refused the boat sent to evacuate him, chose to fight on with his soldiers, and was killed after being taken into Turkish captivity. According to a Turkish legend, two janissaries quarrelled over which of them had captured Zawisza, until in the end one of them cut off the Polish knight's head.
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,045
{"url":"https:\/\/datascience.stackexchange.com\/tags\/normalization\/hot","text":"People who code: we want your input. Take the Survey\n\n# Tag Info\n\n53\n\nThe normal vs uniform init seem to be rather unclear in fact. If we refer solely on the Glorot's and He's initializations papers, they both use a similar theoritical analysis: they find a good variance for the distribution from which the initial parameters are drawn. This variance is adapted to the activation function used and is derived without explicitly ...\n\n31\n\nNormalization across instances should be done after splitting the data between training and test set, using only the data from the training set. This is because the test set plays the role of fresh unseen data, so it's not supposed to be accessible at the training stage. Using any information coming from the test set before or during training is a ...\n\n30\n\nLat long coordinates have a problem that they are 2 features that represent a three dimensional space. This means that the long coordinate goes all around, which means the two most extreme values are actually very close together. I've dealt with this problem a few times and what I do in this case is map them to x, y and z coordinates. This means close points ...\n\n30\n\nLayer normalization (Ba 2016): Does not use batch statistics. Normalize using the statistics collected from all units within a layer of the current sample. Does not work well with ConvNets. Recurrent Batch Normalization (BN) (Cooijmans, 2016; also proposed concurrently by Qianli Liao & Tomaso Poggio, but tested on Recurrent ConvNets, instead of RNN\/LSTM):...\n\n26\n\nThis is called unity-based normalization. If you have a vector $X$, you can obtain a normalized version of it, say $Z$, by doing: $$Z = \\frac{X - \\min(X)}{\\max(X) - \\min(X)}$$\n\n25\n\nI disagree with the other comments. First of all, I see no need to normalize data for decision trees. Decision trees work by calculating a score (usually entropy) for each different division of the data $(X\\leq x_i,X>x_i)$. Applying a transformation to the data that does not change the order of the data makes no difference. Random forests are just a ...\n\n17\n\nThat paper gives a nice answer, where i quoted from. Search for Should I standardize the target variables (column vectors)? in that page. Standardizing target variables is typically more a convenience for getting good initial weights than a necessity. However, if you have two or more target variables and your error function is scale-sensitive like the ...\n\n15\n\nYour rationale is indeed correct: decision trees do not require normalization of their inputs; and since XGBoost is essentially an ensemble algorithm comprised of decision trees, it does not require normalization for the inputs either. For corroboration, see also the thread Is Normalization necessary? at the XGBoost Github repo, where the answer by the lead ...\n\n14\n\nAlways split before you do any data pre-processing. Performing pre-processing before splitting will mean that information from your test set will be present during training, causing a data leak. Think of it like this, the test set is supposed to be a way of estimating performance on totally unseen data. If it affects the training, then it will be partially ...\n\n10\n\nBoosting trees is about building multiple decision trees. Decision tree doesn't require feature normalization, that's because the model only needs the absolute values for branching. Wikipedia for decision tree: Requires little data preparation. 
Other techniques often require data normalization.... However, it's always a good idea to normalize your ...\n\n10\n\nThe questions of whether and why it's important, depends on the context. For gradient boosted decision trees, for example, it is not important - these ML algorithms \"don't care\" about monotone transformations to the data; they just look for points to split it. For linear predictors, for example, scaling can improve interpretability of the results. If you'd ...\n\n10\n\nThey are used for two different purposes. StandardScaler changes each feature column $f_{:,i}$ to $$f'_{:,i} = \\frac{f_{:,i} - mean(f_{:,i})}{std(f_{:,i})}.$$ Normalizer changes each sample $x_n=(f_{n,1},...,f_{n,d})$ to $$x'_n = \\frac{x_n}{size(x_n)},$$ where $size(x_n)$ for l1 norm is $\\left \\| x_n \\right \\|_1=|f_{n,1}|+...+|f_{n,d}|$, l2 norm is $\\... 9 Yes, you should do this. Given the initialization schemes and normalized inputs, the expected values for the outputs are 0. This means that you will not be too far off from the start, which helps convergence. If your target is 1000, your mean squared error will be huge which means your gradients will also be huge which can lead to numerical instabiliy. 8 As @Erwan said, you should normalize the training set and then use the same normalization steps on the test set. So your code should look like: x_train, x_test, y_train, y_test = train_test_split(X_features, Y_feature, test_size=0.20, random_state=4) scaler = StandardScaler() normalized_x_train = pd.DataFrame(scaler.fit_transform(x_train), columns = ... 7 This is no longer the case; as of sklearn 0.20.0, missing values are ignored in such preprocessors' fit and silently passed along in their transform: https:\/\/scikit-learn.org\/stable\/whats_new\/v0.20.html#id37 (fourth bullet) https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/10404 7 Also please explain what this array - array.mean() do? Basically, it is doing memberwise subtraction operation after broadcasting. np.mean function finds the mean in your array and its result will be a scalar, a single number. Your array is a numpy array and the result of the latter term is a single value as mentioned. Consequently, the single value gets ... 7 You have already computed that, but you've not bound the output to a variable, also called name in python. Try the following snippet: result = np.linalg.norm(v1,ord=2,axis=1,keepdims=True) print(result) Based on the edit, I update the answer. As you may find answers to your question, a typical way to find what you need is something like the following ... 7 Answer to your question: Do Normalization after splitting into train and test\/validation. The reason is to avoid any data leakage. Data Leakage: Data leakage is when information from outside the training dataset is used to create the model. This additional information can allow the model to learn or know something that it otherwise would not know and in ... 6 Standardizing (subtracting mean and dividing by standard deviation for each column), can be done using numpy: Xz = (X - np.nanmean(X, axis=0))\/np.nanstd(X, axis=0) where X is a matrix (containing NaNs), and Xz is the standardized version of X. Hope this helps. EDITED: For a test\/training scenario, the mean and std could be stored in respective variables:... 6 A detailed answer to the question can be found here. [...]are there times when it is not appropriate or not beneficial? Short answer: Yes and No. Yes in the terms, that it can significantly change your output of e.g. 
… clustering algorithms. No, on the other hand, if these changes are what you want to achieve. Or to put it in the words of the author of …

6 votes: You begin by asking about image normalisation, but then refer to other techniques, which I believe all fall under "image augmentation". So I will answer the more general question: how can I perform image augmentation to improve my model? I would generally say that the more augmentation you can apply, the better. A caveat to that statement is that the …

6 votes: As the other answers have said, in practice it doesn't make much difference which of the two you choose. However, theoretically it's better to scale your input to [-1, 1] than to [0, 1], and I'd argue that it's even better to standardize your input (i.e. mean 0, standard deviation 1). Let me explain why: deep neural networks, especially in their early days, had …

5 votes: You should look into other estimators of location. What you want is a robust estimator with a high breakdown point. The extreme approach would be the median, but you may get more numerically interesting results with a trimmed mean: you define a threshold, say 2%, then remove the top 2% of votes and the bottom 2% of votes, and take the mean only of …

5 votes: You cannot use PCA, or at least it is not recommended, for mixed data. It is best to use factor analysis of mixed data. You are lucky that Prince is a Python package that covers all data scenarios. Borrowing from its explanation: if all your variables are numeric, use principal component analysis (prince.PCA); if you have a contingency table, use …

5 votes: One reason for normalising the inputs is to make gradient descent more stable, as gradients spend more time in a comfortable region with meaningful updates and fewer neurons "die" during training by getting stuck at one of the tails of, e.g., the sigmoid non-linearity. Normalising the output distribution is perhaps not the best idea, as you are by definition …

5 votes: I don't understand why you would like to fill values with zeros! This would basically mean "this guy, who is 170 cm tall, weighs 0 kg" and would fool your network. In my opinion, you have two options: discard missing values (the entire row), so you end up with less but more consistent training data; or, if you really need these rows, fill missing values …

5 votes: "If I apply normalization on training and testing in a separate way, I get really good results, 85% (and sometimes more), and the further steps I try to do next work better as well." The problem with applying normalization across instances on the test set separately is that the test set represents any new data. So in principle the model should be able to give a …

5 votes: When building any machine learning model, the only observable data you have is training data. Test data is supposed to be unobserved data, meaning that even though you might have it now, you need to act as if you didn't. When you apply normalisation, you first observe the data to get the parameters you need. As you are only supposed to be able to observe the …

4 votes: You can use sklearn.preprocessing.QuantileTransformer (or sklearn.preprocessing.PowerTransformer), which does exactly what you want:

from sklearn.preprocessing import QuantileTransformer
import numpy as np

ey = np.random.exponential(size=100)
qt = QuantileTransformer(output_distribution='normal')
no = qt.fit_transform(ey.reshape(-1, 1))

You can plot …

4 votes: "How is your experience using feature normalization with boosted trees? Does it in general improve our models?" My rather limited experience with scaling of features suggests that it has virtually no impact on xgboost results. I suppose by normalisation you mean subtracting the mean and then dividing by standard deviation. If you calculated the statistics …
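Several of the answers above make the same point: normalisation statistics must be estimated on the training data only and then reused unchanged on the test data. A minimal scikit-learn sketch of that workflow (the array shapes and values here are invented purely for illustration):

from sklearn.preprocessing import StandardScaler
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(5.0, 2.0, size=(100, 3))   # stand-in training data
X_test = rng.normal(5.0, 2.0, size=(20, 3))     # stand-in "new" data

scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)  # mean/std estimated on training data only
X_test_std = scaler.transform(X_test)        # the same statistics reused on the test set

# Anti-pattern: calling scaler.fit_transform(X_test) would estimate fresh
# statistics from the test set, leaking information the model could never
# have when scoring genuinely new data.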
Restraining order against Tent City denied

Early Monday morning, a King County judge denied a request for a restraining order that sought to prevent a homeless camp from coming to Mercer Island next week.

I-90 bridge closures for Seafair

The Washington State Department of Transportation will close all lanes of Interstate 90 across Lake Washington several times from Thursday, July 31, through Sunday, Aug. 3, to accommodate flights by the U.S. Navy Blue Angels.

Wilburton tunnel teardown will slow I-405 traffic

Southbound lanes on Interstate 405 from S.E. 8th Street in Bellevue to Interstate 90 will be completely closed for three weekends in August as construction crews remove the Wilburton Tunnel.

75 gypsy moth traps placed on Mercer Island

For 35 years, the Washington State Department of Agriculture has kept the gypsy moth out of Washington. With the help of nearly 30 seasonal gypsy moth trappers, officials hope that record continues.

$1 million for Luther Burbank | Money earmarked for dog park, shoreline restoration

The often muddy and somewhat makeshift off-leash dog park at Luther Burbank will finally get a much-needed makeover. The adjacent shoreline, overrun by invasive blackberry bushes, will also be renovated from Calkins Point down to the docks.

Focus on education for 2020 changes leaders

During its summer retreat last week, the Mercer Island School Board designed a new path for the Really Big Idea Committee (RBIC) in preparation for the next phase of achieving its 2020 Vision. The School Board will henceforth relinquish responsibility for the project, passing the reins to Superintendent Gary Plano, who will continue to define the 2020 Vision with the RBIC.

by Elizabeth Celms

Ipecac is in short supply for child care

The current Washington Administrative Code (WAC) 170-296-0830 states that childcare businesses must keep a first-aid kit containing "at least one unexpired bottle of Syrup of Ipecac that must be given only at the direction of a poison control center." The decreased availability of Syrup of Ipecac, whether from a lack of raw materials or manufacturers' decisions to reduce production, has made this requirement difficult to meet.

MI Olympic swimmer still inspires

Like many other American cities, Mercer Island has its own proud list of Olympic medalists. There is Mary Wayte, gold medalist swimmer in the 1984 Los Angeles Games, whose name lives on in the Island's Mary Wayte Pool; figure skater Peter Kennedy, who won silver skating pairs with his sister, Karol, in the 1952 Winter Olympics in Oslo; and Carl Buchanan, who won gold for sailing in the 1984 Summer Games.

Town Center businesses hit by burglars

Burglars robbed two neighboring commercial businesses in the Town Center two weekends ago, exploiting stores that did not have burglar alarms at the time. Both Dooz, a hair salon that caters to children and also sells toys, baby shower gifts and children's hair products, and Yogabliss, a private yoga school, were burglarized on the night of Friday, July 25.

Merrimount decision to be revisited

It turns out that the city's recent decision to reduce Island Crest Way to three lanes south of Merrimount Drive may not stick, as members of the City Council want to revisit the project in the coming weeks.

Come and find a variety of edibles, from farm-fresh fruits to specialty cheeses and delicious breads, at the Mercer Island Farmers Market this weekend from 11 a.m. to 3 p.m. along S.E. 32nd Street at Mercerdale Park.
Gazing at Angels

The Blue Angels rehearse their aerial performance over Lake Washington in this view from the I-90 bridge on Mercer Island, Friday, Aug. 1.

Technology brings Council meetings into Island homes

Observant Islanders might notice quite a bit of new technology inside City Hall, now that City Council meetings are televised on channel 21 and equipment has been added to assist city leaders in responding to emergencies.

Islander named to legendary Committee on U.S.-China Relations

As the world turns its attention toward the Olympic Games in Beijing this week, it is easy to forget that not all that many years ago, China was essentially forbidden to all outsiders. Since 1913, when the United States first formally recognized the Government of the Republic of China, the relationship between the two nations has been rocky. Armed conflicts, ideological differences, diplomatic breakdowns and cultural differences have made permanent diplomatic relations elusive. Yet it was a seemingly small and personal gesture that brought the nations back to the table in the 1970s. That gesture, in the form of an invitation to a game of ping pong, set in motion the famous visit by former President Richard Nixon to China in 1972. The famous game and its results are credited not to a government agency but to the private, nonprofit National Committee on U.S.-China Relations.

Mercer Island street improvements begin

Two separate road construction projects began this week, and several lane closures and delays should be expected. Improvements along S.E. 40th Street between Island Crest Way and 86th Avenue S.E. started on Monday, and a resurfacing project along North Mercer Way is scheduled to begin today.

A cell phone was stolen from a glove compartment during a car prowl that occurred in the 8400 block of Benotho Place the night of July 30.

New MISD classes focus on the future

King County will conduct a Top 2 primary this month

This month, King County will conduct a Top 2 primary. Voters will not have to pick a party and will be able to choose among all candidates for each office. In each race, the two candidates with the most votes will advance to the November General Election.

YFS food pantry supply low, usage high

The Youth & Family Services food pantry, located in the Luther Burbank Administration Building at the park, is in need of donations, as its usage has increased by 36 percent compared to last summer. It has no more than a two-week supply on hand, and it serves an average of 10 Mercer Island families on a weekly basis, with a user breakdown as follows: unemployed, 30 percent; low-income, 30 percent; disabled persons, 23 percent; senior citizens, 10 percent; homeless, 7 percent.

Erica Breese, of Mercer Island, graduated from Emory College of Emory University in Atlanta, Ga., with a Bachelor of Science in May. She is the daughter of Dr. John Sydney Breese and Emily Seklar Breese. Emory University is ranked as one of the country's top 20 national universities, according to "U.S. News & World Report."
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,698
Al Mahalah Sporting Club, more commonly abbreviated as Al Mahalah, is a Libyan football club founded in 1977 and based in Tripoli, the country's capital.

History

Honours

People associated with the club

Players

Managers

Notes and references
{ "redpajama_set_name": "RedPajamaWikipedia" }
333
#ifndef _EDGE_COUNTER_H_
#define _EDGE_COUNTER_H_

// Counts rising and/or falling edges on a digital input pin.
// NOTE: the Device base class is declared in a separate project header
// that must be included before this one.
class Edge_Counter : public Device {

private:
    bool triggered = true;
    unsigned long counter = 0;
    bool _falling;
    bool _rising;
    int _pin;
    bool _pull_up = true;

    void reinit();
    void count();

public:
    Edge_Counter(const char* name, int pin, bool rising=true, bool falling=true);

    void start();
    bool measure();

    // Fluent setter: enable or disable the input pull-up and re-arm the pin.
    Edge_Counter& with_pull_up(bool pull_up=true) {
        _pull_up = pull_up;
        reinit();
        return *this;
    }
};

#endif // _EDGE_COUNTER_H_
{ "redpajama_set_name": "RedPajamaGithub" }
4,863
! Driver for FFTLog-f90.
!
! FFTLog-f90 is the Fortran 90 version of FFTLog, a Fortran 77 code by
! Andrew Hamilton to compute the Fourier-Bessel, or Hankel, transform
! of a series of logarithmically spaced points.
!
! The main reference for the algorithm is Andrew Hamilton's webpage,
! http://casa.colorado.edu/~ajsh/FFTLog. He first introduced FFTLog in a
! paper on the nonlinearities in the cosmological structures
! (http://xxx.lanl.gov/abs/astro-ph/9905191).
!
! The Fourier-Bessel, or Hankel, transform F(r) of a function f(k) is defined
! as in eq. 159 of Hamilton's paper:
!
!          /
!   F(r) = | dk * k * J_mu(k*r) * f(k)
!          /
!
! where J_mu(k*r) is the Bessel function of order mu and argument k*r.
!
! IMPORTANT: Currently, FFTLog-f90 only supports the sine transform (mu=0.5)
! and returns the following integral:
!
!              1       /             sin(k*r)
!   F(r) = ---------- | dk * k^2 * ---------- * f(k)
!           (2*pi)^2   /              k*r
!
! If f(k)=P(k) is the power spectrum of a homogeneous 3D random field, then the
! integral will yield the two-point correlation function xi(r), as in eq. 3.104
! of http://arxiv.org/abs/1405.2280.
!
! To generalise FFTLog-f90 to arbitrary Bessel order, one needs to use
! the FHTQ subroutine rather than the FFTL one.
!
! ARGUMENTS
!
! This is the list of command-line arguments taken by FFTLog-f90:
!
! 1. The path of the input file, containing the k and f(k) columns.
!
! 2. The path of the output file, containing the r and F(r) columns. The file
!    will have as many rows as the input file, unless a third argument is given.
!    By default, the range in r will go from 1/k_max to 1/k_min and will contain
!    as many points as the rows in the input file.
!
! 3. [OPTIONAL] N_OUTFILE, the number of r values to include in the output file.
!    Set to zero to use the same number of points in the input file (default
!    behaviour). If positive, then f(k) will be interpolated in N_OUTFILE points
!    and fed to FFTLog.
!
! 4. [OPTIONAL] The order mu of the Bessel function J_mu. By default we take
!    mu=0.5, which corresponds to a regular Fourier transform.
!
! 5. [OPTIONAL] The bias q of the Hankel transformation, as defined in eq. 156.
!    By default we take it to vanish (q=0).
!
! 6. [OPTIONAL] Inferior limit in k for the integration. By default it is set
!    to the first value in the k column.
!
! 7. [OPTIONAL] Superior limit in k for the integration. By default it is set
!    to the last value in the k column.
!
! Created by Guido Walter Pettinari on 26/07/2010
! Last modified on 09/09/2015 by GWP

PROGRAM fftlog_f90

  IMPLICIT NONE

  REAL ( KIND = 8 ), PARAMETER :: PI=3.141592653589793238462643383279502884197d0, CONSTANT=1/(2*PI*PI)
  REAL ( KIND = 8 ) :: xx_inf_limit = -1d300, xx_sup_limit = 1d300, dln_xx, dlog_xx, log_xmedian,&
    &log_xxmedian, log_xxmax, log_xxmin, rk, kr, mu, CENTRAL_INDEX, q, x, temp1, temp2
  INTEGER :: N_ARGUMENTS, DIRECTION, KR_OPTION, UNIT, error, N_INFILE=0, N_USED=0, FIRST_USED_INDEX = 0,&
    &LAST_USED_INDEX=0, N_OUTFILE=0, i=0
  LOGICAL :: INTERPOLATION = .FALSE., FFT_OK
  CHARACTER(512) :: input_filename, output_filename
  CHARACTER(128) :: buffer

  ! The arrays xx and yy will contain the x and y values from the input file,
  ! respectively. They will be overwritten by the FFTL subroutine with the
  ! Hankel transform.
  REAL ( KIND = 8 ), DIMENSION(:), ALLOCATABLE :: xx, yy

  ! Parameters for the spline interpolation
  REAL ( KIND = 8 ), DIMENSION(:), ALLOCATABLE :: yy_second_derivative, yy_interp, wsave

  ! Complain if the input file has more than N_MAX lines, because it would require
  ! about 2 GB of RAM. Increase N_MAX if you have more memory than that.
  INTEGER, PARAMETER :: N_MAX=67108864

  ! =======================================================================================
  ! =                                  Parse arguments                                    =
  ! =======================================================================================

  N_ARGUMENTS = COMMAND_ARGUMENT_COUNT()

  IF ( N_ARGUMENTS.GT.7 .OR. N_ARGUMENTS.LT.2 ) STOP &
    & "FFTLog-f90 can have between 2 and 7 arguments. For more details, please refer to the&
    & documentation in fftlog_driver.f90 (https://github.com/coccoinomane/fftlog-f90)."

  CALL GET_COMMAND_ARGUMENT( 1, input_filename )
  CALL GET_COMMAND_ARGUMENT( 2, output_filename )

  IF ( N_ARGUMENTS .GE. 3 ) THEN
    CALL GET_COMMAND_ARGUMENT( 3, buffer )
    READ ( buffer, * ) N_OUTFILE
  END IF

  ! order of Bessel function ( mu = 0.5 for sine transform )
  mu = 0.5d0
  IF ( N_ARGUMENTS .GE. 4 ) THEN
    CALL GET_COMMAND_ARGUMENT( 4, buffer )
    READ ( buffer, * ) mu
  END IF
  IF (mu .NE. 0.5d0) STOP &
    & "FFTLog-f90 so far only supports the sine transform (mu=0.5). Make&
    & sure that the fourth argument is 0.5."

  ! bias exponent: q = 0 is unbiased ( good for power spectrum <-> correlation function )
  q = 0.d0
  IF ( N_ARGUMENTS .GE. 5 ) THEN
    CALL GET_COMMAND_ARGUMENT( 5, buffer )
    READ ( buffer, * ) q
  END IF

  IF ( N_ARGUMENTS == 6 ) THEN
    CALL GET_COMMAND_ARGUMENT( 6, buffer )
    READ ( buffer, * ) xx_sup_limit
  END IF

  IF ( N_ARGUMENTS == 7 ) THEN
    CALL GET_COMMAND_ARGUMENT( 6, buffer )
    READ ( buffer, * ) xx_inf_limit
    CALL GET_COMMAND_ARGUMENT( 7, buffer )
    READ ( buffer, * ) xx_sup_limit
  END IF

  ! =======================================================================================
  ! =                                  Read input file                                    =
  ! =======================================================================================

  ! Select the entries of input_filename to process
  OPEN ( UNIT=50, FILE=input_filename, STATUS='OLD', IOSTAT=error )
  IF ( error /= 0 ) STOP "Input file could not be opened, exiting..."

  DO
    READ (UNIT=50, FMT=*, IOSTAT=error) temp1
    IF ( error < 0 ) EXIT
    N_INFILE = N_INFILE + 1
    IF ( temp1 < xx_inf_limit .OR. temp1 > xx_sup_limit ) CYCLE
    N_USED = N_USED + 1
    LAST_USED_INDEX = N_INFILE
  END DO

  CLOSE(UNIT = 50)

  FIRST_USED_INDEX = LAST_USED_INDEX - N_USED + 1

  ! Control block
  IF ( N_USED == 0 ) STOP "Either the input file is empty or incompatible with the specified integration limits."

  IF ( N_OUTFILE .LE. 0 ) THEN
    N_OUTFILE = N_USED
    INTERPOLATION = .FALSE.
    PRINT *, "No interpolation of input data will be performed."
  ELSE
    INTERPOLATION = .TRUE.
  END IF

  IF ( (N_USED > N_MAX) .OR. (N_OUTFILE > N_MAX) ) &
    &STOP "The number you specified or the number of elements of the input file are greater than N_MAX"

  ! It is now safe to allocate memory to the arrays
  ALLOCATE( xx(N_USED), yy(N_USED), yy_second_derivative(N_USED) )
  ALLOCATE( yy_interp(N_OUTFILE), wsave ( 2*N_OUTFILE + 3*(N_OUTFILE/2) + 19 ) )

  ! Read the first two columns of the input file
  OPEN ( UNIT=50, FILE=input_filename, STATUS='OLD', IOSTAT=error )
  IF ( error /= 0 ) STOP "Input file could not be opened, exiting..."

  ! Skip the rows of the file that are smaller than xx_inf_limit
  DO i = 1, FIRST_USED_INDEX - 1
    READ ( 50, FMT=*, IOSTAT=error )
  END DO

  DO i = 1, N_USED
    READ ( 50, FMT=*, IOSTAT=error ) temp1, temp2
    IF ( error < 0 ) EXIT
    xx(i) = temp1
    yy(i) = temp2
  END DO

  CLOSE( UNIT = 50 )

  PRINT *, "Inferior integration limit = ", xx(1)
  PRINT *, "corresponding in input file to row = ", FIRST_USED_INDEX
  PRINT *, "Superior integration limit = ", xx(N_USED)
  PRINT *, "corresponding in input file to row = ", LAST_USED_INDEX
  PRINT *, "Order of Bessel function mu = ", mu
  PRINT *, "Number of elements in input file = ", N_INFILE
  PRINT *, "Number of elements used = ", N_USED
  PRINT *, "Number of elements in output file = ", N_OUTFILE

  ! =======================================================================================
  ! =                               Prepare integrand f(k)                                =
  ! =======================================================================================

  ! Take the logarithm of the x limits
  log_xxmin = LOG10( xx(1) )
  log_xxmax = LOG10( xx(N_USED) )

  ! Logarithmic step in x
  dlog_xx = (log_xxmax-log_xxmin) / (N_OUTFILE-1)

  ! central index (half-integer if N_OUTFILE is even)
  CENTRAL_INDEX = dble( N_OUTFILE+1 ) / 2.d0

  ! logarithmic spacing between points (needed by fhti). Was dlnr
  dln_xx = dlog_xx * LOG(10.d0)

  ! central point of periodic interval at log10 (xxmedian). Was logrc
  log_xxmedian = ( log_xxmin + log_xxmax) / 2.d0

  ! sensible approximate choice of k_c r_c
  kr = 1.d0

  ! tell fhti to change kr to low-ringing value
  KR_OPTION = 2

  ! forward transform
  DIRECTION = 1

  ! central point in k-space. Was logkc
  log_xmedian = LOG10 (kr) - log_xxmedian

  ! rk = r_c/k_c
  rk = 10.d0 ** ( log_xxmedian - log_xmedian )

  ! Interpolate the input
  IF( INTERPOLATION ) THEN

    ! Compute the second derivative of the input array yy. The prototype is
    ! spline_cubic_set ( n, t, y, ibcbeg, ybcbeg, ibcend, ybcend, ypp )
    CALL SPLINE_CUBIC_SET ( N_USED, xx, yy, 0, 0.0d0, 0, 0.0d0, yy_second_derivative )

    DO i = 1, N_OUTFILE
      ! the x's are the input domain
      x = 10.d0 ** ( log_xxmedian + ( i - CENTRAL_INDEX )*dlog_xx )
      ! Interpolate the input array yy in x. The prototype is
      ! spline_cubic_val ( n, t, y, ypp, tval, yval, ypval, yppval )
      CALL SPLINE_CUBIC_VAL ( N_USED, xx, yy, yy_second_derivative, x, yy_interp(i), temp1, temp2 )
      yy_interp(i) = yy_interp(i)*x
    END DO

    DEALLOCATE ( yy_second_derivative )

  ELSE

    DO i = 1, N_USED
      yy(i) = xx(i) * yy(i)
    END DO

  END IF

  ! =======================================================================================
  ! =                                    Call FFTLog                                      =
  ! =======================================================================================

  ! Initialize FFTLog transform. Note that fhti resets kr
  CALL FHTI ( N_OUTFILE, mu, q, dln_xx, kr, KR_OPTION, wsave, FFT_OK )
  IF( .NOT. FFT_OK ) STOP "FHTI not ok!"

  ! Call FFTLog
  IF( INTERPOLATION ) THEN
    CALL FFTL ( N_OUTFILE, yy_interp, rk, DIRECTION, wsave )
  ELSE
    CALL FFTL ( N_OUTFILE, yy, rk, DIRECTION, wsave )
  END IF

  ! =======================================================================================
  ! =                               Write result to file                                  =
  ! =======================================================================================

  OPEN( UNIT=40, FILE = output_filename, STATUS = "REPLACE", IOSTAT = error )
  IF ( error /= 0 ) STOP "Output file could not be opened, exiting..."

  IF ( INTERPOLATION ) THEN
    DO i = 1, N_OUTFILE
      ! now the x's are the output domain
      x = 10.d0 ** ( log_xmedian + ( i - CENTRAL_INDEX ) * dlog_xx )
      WRITE ( 40, '(3g24.16)' ) x, CONSTANT * yy_interp(i)/x
    END DO
    DEALLOCATE ( yy_interp )
  ELSE
    DO i = 1, N_OUTFILE
      ! now the x's are the output domain
      x = 10.d0 ** ( log_xmedian + ( i - CENTRAL_INDEX )*dlog_xx )
      WRITE( 40, '(3g24.16)' ) x, CONSTANT * yy(i)/x
    END DO
  END IF

  DEALLOCATE(xx,yy,wsave)

END PROGRAM fftlog_f90
{ "redpajama_set_name": "RedPajamaGithub" }
7,818
Q: Declaring an object with a parameterized constructor inside another class (C++)

I wish to have an object of one class contained inside another class: class A contained inside class B, as in the simple example below. The problem is that class A only has a parameterized constructor. Is there a way to declare the class A object in class B without having to use a pointer to class A?

class A {
public:
    A(int var1, int var2);
private:
    //...
};

class B {
public:
    B();
private:
    A a;       // Compiler error
    A* a_ptr;  // This will of course work fine. We can create a new A object
               // with parameters at any time using a_ptr.
};

A: The member declaration A a; is only a compiler error if the constructor of B does not explicitly call the constructor of A with the required arguments (e.g. B() : a(an_int, an_int) {...}). Without such a member initializer, the compiler looks for a constructor of A taking no parameters (or whose parameters all have default values), but you do not have that constructor; your only constructor requires two ints.

Is there another way to declare the class A object in class B without having to use a pointer to class A? Note that with A a; you do not merely declare but define/instantiate a new instance of A each time you instantiate a new instance of B.

You can also create the instance of A later, without using a pointer as with a_ptr, by having for instance std::vector<A> a; and, when you know which instance of A you need, doing a.push_back(A(an_int, an_int)); Note, however, that the fact that you want only one instance is then not visible in the definition of a (except through a welcome comment).
{ "redpajama_set_name": "RedPajamaStackExchange" }
92
Q: Getting detailed information about a process handle

I have run "handle.exe -a \Device\0000006c" on the command line, where "\Device\0000006c" is the physical object name of my device (e.g., a microphone), and I get the following output:

Handle v4.0
Copyright (C) 1997-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

svchost.exe  pid: 864   type: File  770: \Device\0000006c\global
svchost.exe  pid: 864   type: File  ECC: \Device\0000006c\global
svchost.exe  pid: 348   type: File  514: \Device\0000006c\global
svchost.exe  pid: 348   type: File  88C: \Device\0000006c\global
audiodg.exe  pid: 4592  type: File  1C4: \Device\0000006c
audiodg.exe  pid: 4592  type: File  1CC: \Device\0000006c

The last two lines of the output show that the device is being used by the audiodg.exe process while audio is playing:

audiodg.exe  pid: 4592  type: File  1CC: \Device\0000006c

I understand that "1CC" is the handle's hexadecimal value, but what is "\Device\0000006c" here? Is it a name associated with the handle, or something else that handle.exe looks up internally? I am trying to get handle information using the code from the link below, https://code.msdn.microsoft.com/windowsapps/CppFileHandle-03c8ea0b, but I am not able to get this kind of information for a handle:

DWORD EnumerateFileHandles(ULONG pid)
{
    HINSTANCE hNtDll = LoadLibrary(_T("ntdll.dll"));
    assert(hNtDll != NULL);
    PFN_NTQUERYSYSTEMINFORMATION NtQuerySystemInformation =
        (PFN_NTQUERYSYSTEMINFORMATION)GetProcAddress(hNtDll, "NtQuerySystemInformation");
    assert(NtQuerySystemInformation != NULL);
    PFN_NTQUERYINFORMATIONFILE NtQueryInformationFile =
        (PFN_NTQUERYINFORMATIONFILE)GetProcAddress(hNtDll, "NtQueryInformationFile");

    // Grow the buffer until the handle snapshot fits
    DWORD nSize = 4096, nReturn;
    PSYSTEM_HANDLE_INFORMATION pSysHandleInfo =
        (PSYSTEM_HANDLE_INFORMATION)HeapAlloc(GetProcessHeap(), 0, nSize);
    while (NtQuerySystemInformation(SystemHandleInformation, pSysHandleInfo,
                                    nSize, &nReturn) == STATUS_INFO_LENGTH_MISMATCH)
    {
        HeapFree(GetProcessHeap(), 0, pSysHandleInfo);
        nSize += 4096;
        pSysHandleInfo = (SYSTEM_HANDLE_INFORMATION*)HeapAlloc(GetProcessHeap(), 0, nSize);
    }

    DWORD dwFiles = 0;
    HANDLE hProcess = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (hProcess == NULL)
    {
        _tprintf(_T("OpenProcess failed w/err 0x%08lx\n"), GetLastError());
        getchar();
        return -1;
    }

    for (ULONG i = 0; i < pSysHandleInfo->NumberOfHandles; i++)
    {
        PSYSTEM_HANDLE pHandle = &(pSysHandleInfo->Handles[i]);
        if (pHandle->ProcessId == pid)
        {
            int a = 10; // no-op; handy spot for a breakpoint
        }
        if (pHandle->ProcessId == pid && pHandle->ObjectTypeNumber == HANDLE_TYPE_FILE)
        {
            dwFiles++; // Increase the number of file handles

            // Duplicate the handle in the current process
            HANDLE hCopy;
            if (!DuplicateHandle(hProcess, (HANDLE)pHandle->Handle, GetCurrentProcess(),
                                 &hCopy, MAXIMUM_ALLOWED, FALSE, 0))
                continue;

            // Retrieve file name information about the file object.
            IO_STATUS_BLOCK ioStatus;
            PFILE_NAME_INFORMATION pNameInfo = (PFILE_NAME_INFORMATION)malloc(MAX_PATH * 2 * 2);
            DWORD dwInfoSize = MAX_PATH * 2 * 2;
            if (NtQueryInformationFile(hCopy, &ioStatus, pNameInfo, dwInfoSize,
                                       FileNameInformation) == STATUS_SUCCESS)
            {
                // Get the file name and print it
                WCHAR wszFileName[MAX_PATH + 1];
                StringCchCopyNW(wszFileName, MAX_PATH + 1, pNameInfo->FileName, /*must be WCHAR*/
                                pNameInfo->FileNameLength /*in bytes*/ / 2);
                wprintf(L"0x%x:\t%s\n", pHandle->Handle, wszFileName);
            }
            free(pNameInfo);
            CloseHandle(hCopy);
        }
    }

    CloseHandle(hProcess);
    HeapFree(GetProcessHeap(), 0, pSysHandleInfo);

    // Return the number of file handles in the process
    return dwFiles;
}

int _tmain(int argc, _TCHAR* argv[])
{
    ULONG pid = GetCurrentProcessId();
    DWORD dwFiles = EnumerateFileHandles(4592);
    _tprintf(TEXT("\r\n"));

    // Get file name from file handle using a file mapping object
    HANDLE hFile;
    hFile = CreateFile(TEXT("test.txt"), GENERIC_WRITE | GENERIC_READ, 0, NULL,
                       CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        _tprintf(TEXT("CreateFile failed with %d\n"), GetLastError());
        return 0;
    }

    BYTE bWriteBuffer[] = "0123456789";
    DWORD dwBytesWritten;

    // Write 11 bytes from the buffer to the file
    if (!WriteFile(hFile,                 // File handle
                   bWriteBuffer,          // Buffer to write from
                   sizeof(bWriteBuffer),  // Number of bytes to write
                   &dwBytesWritten,       // Number of bytes that were written
                   NULL))                 // No overlapped structure
    {
        // WriteFile returns FALSE because of some error
        _tprintf(TEXT("Could not write to file w/err 0x%08lx\n"), GetLastError());
        CloseHandle(hFile);
        return 0;
    }

    //GetFileNameFromHandle(hFile);
    CloseHandle(hFile);
    return 0;
}

Any help on how handle.exe programmatically finds which processes are using a device, given the physical device object name, would be appreciated.

A: Your code only processes the file handles of the given process:

if (pHandle->ProcessId == pid && pHandle->ObjectTypeNumber == HANDLE_TYPE_FILE)

When you get a handle via SystemHandleInformation, you should check its type and, based on its type, do something. As you can see in your example, if the handle is a file handle, the code gets its file name via NtQueryInformationFile. You should do a similar task for every handle type you want to cover. Using the NtQueryObject function in ntdll, you can get the type of a handle. In this example, every handle of a process is used to print some information based on its type.
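As a side note, if the goal is only to list the files a process has open, rather than to exercise the native NtQuerySystemInformation API, the third-party psutil package gives a much shorter cross-check. This is an alternative technique, not a reimplementation of the answer above; it will generally not show raw device handles such as \Device\0000006c the way handle.exe does, and it needs sufficient privileges to inspect other users' processes:

import psutil

def list_open_files(pid: int) -> None:
    # psutil wraps the platform-specific handle/descriptor enumeration.
    proc = psutil.Process(pid)
    print(proc.name(), proc.pid)
    for f in proc.open_files():
        print(" ", f.path)

list_open_files(4592)  # PID of audiodg.exe from the handle.exe output; adjust as needed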
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,333
Gail Kim Age

Published: 28th November, 2017

Gail Kim, age 40, is a former professional wrestler, actress, and model, best known for her wrestling career in WWE, where she held the Women's Championship, and in TNA.

Gail Kim was born Gail Kim-Irvine on February 20, 1977, in Toronto, Canada. She is of Korean descent and was raised in her birthplace by her parents. She attended York Memorial Collegiate Institute and then majored in kinesiology at the University of Toronto before transferring to Ryerson University, where she changed her major to nutrition. After completing her degree at Ryerson, she joined Ron Hutchison's School of Pro Wrestling in Toronto to become a professional wrestler, and received supplementary training under Rob Etchevarria at the Squared Circle Pro Wrestling Gym.

Career and Net Worth

Gail Kim's career started in December 2000 in the Southern Ontario-based Apocalypse Wrestling Federation, when she was 23. She entered the ring wearing a mask, wrestling as "The Queen of the Cats" La Felina, and worked for two years on the Canadian independent circuit for promotions such as Border City Wrestling. In 2001, she was introduced to the World Wrestling Federation (WWF) by Jason Sensation, who encouraged her to send her videos and tapes to WWF officials. She was hired by the WWF a year later, at age 25, and spent eight months training in Ohio Valley Wrestling.

Kim's first WWE match was a seven-woman battle royal in June 2003, which she won by eliminating Victoria last, capturing the WWE Women's Championship in her debut. She remained champion for a month before losing the title to Molly Holly. Kim was released by WWE in November 2004, at age 27; the release was attributed to cost cutting and to management wanting to take the women's division in a new direction. After WWE, she appeared in Michigan's All World Wrestling League and on the independent circuit in Japan, Korea, and Mexico. In September 2005 she signed with Total Nonstop Action Wrestling (TNA) and, at 28, made her debut on an episode of TNA Impact. In 2008, Kim rejoined WWE, and in March 2009 she was featured in an episode of SmackDown, interrupting a WWE Divas Championship match. In 2011, she left WWE, confirming the departure on her official Twitter account, and returned to TNA, appearing on an episode of Impact Wrestling.

On January 13, 2013, Kim competed in a five-woman gauntlet match at the Genesis pay-per-view to determine the number-one contender for the TNA Knockouts Championship; she eliminated ODB, Mickie James, and Miss Tessmacher but was eventually eliminated by Velvet Sky. Kim defeated Sienna on the May 24 episode of Impact Wrestling to keep her job. After 2013, Kim continued to appear for TNA, Impact Wrestling, and Global Force Wrestling, and made guest appearances at several wrestling events. In June 2017, she was announced as the first female inductee into the TNA Hall of Fame. She retired in 2017 after a 17-year career in wrestling.

Besides wrestling, she has been featured in several commercials and appeared in the psychological thriller film Royal Kill. During her career, Kim was known for signature moves such as the double knee facebreaker, dragon sleeper, front missile dropkick, ringpost figure-four leglock, and springboard arm drag. She has been among the better-paid female wrestlers; her net worth is estimated at around 400 thousand dollars.

Personal Life and Spouse

Gail Kim is married: she wed Robert Irvine, an English chef, on May 10, 2012. The couple met on the set of Dinner: Impossible and later started dating. Kim is Canadian and holds dual citizenship of the US and Canada. She has also been featured in fashion magazines.

Awards, Recognition and Achievements

Gail Kim is a renowned professional wrestler and successful model with a large fan following. She has won titles including Diva of the Year (2001), the FC Women's Championship, the IWR Diamond Championship, the TNA Knockouts Championship, and more. In 2012, Kim was ranked No. 1 in the PWI Female 50 list of the top 50 female wrestlers. She has an official Twitter account with around 400 thousand followers and is also active on Instagram and Facebook. Her matches and interviews have drawn numerous views on YouTube, and her biography, career, and personal information are documented on Wikipedia.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
510
package org.kaaproject.kaa.server.admin.servlet;

import java.io.IOException;

import javax.servlet.Servlet;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import net.iharder.Base64;

import org.kaaproject.kaa.common.dto.admin.SdkKey;
import org.kaaproject.kaa.common.dto.file.FileData;
import org.kaaproject.kaa.server.admin.services.cache.CacheService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.context.support.SpringBeanAutowiringSupport;

public class SdkServlet extends HttpServlet implements Servlet {

    /** The Constant logger. */
    private static final Logger logger = LoggerFactory.getLogger(SdkServlet.class);

    private static final long serialVersionUID = 4151191758109799417L;

    private static final int BUFFER = 1024 * 100;

    @Autowired
    private CacheService cacheService;

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        SpringBeanAutowiringSupport.processInjectionBasedOnServletContext(this, config.getServletContext());
    }

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String sdkKeyBase64 = request.getParameter(SdkKey.SDK_KEY_PARAMETER);
        try {
            SdkKey key = (SdkKey) Base64.decodeToObject(sdkKeyBase64, Base64.URL_SAFE, null);
            FileData sdkFile = cacheService.getSdk(key);
            response.setContentType(sdkFile.getContentType());
            ServletUtils.prepareDisposition(request, response, sdkFile.getFileName());
            response.setContentLength(sdkFile.getFileData().length);
            response.setBufferSize(BUFFER);
            response.getOutputStream().write(sdkFile.getFileData());
            response.flushBuffer();
        } catch (Exception e) {
            logger.error("Unexpected error in SdkServlet.doGet: ", e);
            response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                    "Unable to get Sdk file: " + e.getMessage());
        }
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
316
package org.apache.hadoop.fs.azurebfs;

import java.lang.ref.WeakReference;

import org.junit.Assert;
import org.junit.Test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.azurebfs.services.AuthType;

/**
 * Test finalize() method when "fs.abfs.impl.disable.cache" is enabled.
 */
public class ITestAzureBlobFileSystemFinalize extends AbstractAbfsScaleTest {
    static final String DISABLE_ABFS_CACHE_KEY = "fs.abfs.impl.disable.cache";
    static final String DISABLE_ABFSSS_CACHE_KEY = "fs.abfss.impl.disable.cache";

    public ITestAzureBlobFileSystemFinalize() throws Exception {
        super();
    }

    @Test
    public void testFinalize() throws Exception {
        // Disable the cache for filesystem to make sure there is no reference.
        Configuration rawConfig = this.getRawConfiguration();
        rawConfig.setBoolean(
                this.getAuthType() == AuthType.SharedKey
                        ? DISABLE_ABFS_CACHE_KEY
                        : DISABLE_ABFSSS_CACHE_KEY,
                true);

        AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem.get(rawConfig);
        WeakReference<Object> ref = new WeakReference<Object>(fs);
        fs = null;

        int i = 0;
        int maxTries = 1000;
        while (ref.get() != null && i < maxTries) {
            System.gc();
            System.runFinalization();
            i++;
        }

        Assert.assertTrue("testFinalizer didn't get cleaned up within maxTries",
                ref.get() == null);
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
8,045
AmbergrisCaye.com Forums | Ambergris Caye | Looking For Long Term Rental, Where To Start?

My SO and I have been considering the move to Belize for about a year now, and have finally decided that it's time! Early to late spring is our goal. We are a younger couple with a good enough income. We own an internet business based in the US, so working in Belize will not be an issue. We have one large dog who will be coming with us. My boyfriend lived on AC a couple of years ago, and we visited the island together last year on vacation; I completely fell in love with it. The only concerns I have are finding a place to live and getting our dog into the country. As far as I've read, you only have to have the proper paperwork provided by the airport, filled out by your vet, but the webpage I read was quite old. My first question is: is there any type of quarantine required for dogs brought onto the island? We're hoping to find a large, dog-friendly one- or two-bedroom place, not too close and not too far from town. If anyone hears of anything, please give me a shout! We're very serious about moving soon, and would love to find our place.

You'll need:

1. A current rabies vaccination more than 30 days old.
2. A health certificate issued within 10 days of flying in. You do not need an international certificate, just an ordinary one from a legitimate veterinarian.
3. An import permit or a pet passport, which can be arranged by email from Jerlyn Tucker or others at BAHA (just google BAHA). You can pay for this document when you arrive at the airport and the dog is inspected by a BAHA agent. No appointment or anything like that; they are at the airport seven days a week when flights are arriving. No overtime, but there is a fee. I think it is 25 BZ for the inspection, but I can't remember. The import permit also has a fee.

If the dog flies in as cargo, you will have to pay customs on the waybill air freight charge. No quarantine. If you need further info, PM me and I'll walk you through it. I've done it every year at least once for the last five.

As far as LT rentals, you need to talk to a Realtor or ask around town. I know of one young couple (with a dog) that managed to get a LT rental near me to start their new life. I didn't ask what it costs, and I suspect prices vary greatly depending on what you need and can afford. I don't think you will have great difficulties.

Thank you, that's consistent with the info we got. I'm glad there's no quarantine; I don't know if I could go through with that! We will probably just have to find a pet-friendly short-term rental and look for a place when we get there.

You said you were coming in the spring, so I would think that would not be a problem. Snowbirds who rent will be headed north. Snowbirds who own may be looking for a summer rental.

Thanks! If anyone hears of anything on the East side, not too close / not too far from town, please give me a shout. I am snowed in at my house today, so I will be by my computer. Can't wait to get out of here!

I don't know of any places to rent on AC, but my tip about bringing pets into the country is that it's hard to do the whole thing by email (especially the initial contact). It's worth the investment of an international phone call to ring up BAHA and talk to Melody about the process. She'll walk you right through it. Also, their process is to forward your documents to the BAHA office at the airport, which is fine, but make sure she faxes or emails a copy to you as well, because the documents don't always make it to the airport office, and it will be a pain in the neck and extra money to arrive and have no permit. Bring triplicate copies of everything (the BAHA permit, the health certificate and the vaccine record), because sometimes the airlines also want a copy. Also, the BAHA office at the airport closes at 3 p.m., so make sure the flight you schedule arrives in time to catch them. Make friends with Melody. She's the ticket.

The features of rentals on this island vary dramatically, as do prices. Some are luxurious by North American standards; some have outdoor toilets and rooms in which to bathe with a bucket. Many have only cool running water. Most have electricity. Some have intact window screens. Some have many appliances, some none. Some are more secure from intruders. Some are noisy. Some are barely bigger than a bed; some are huge. For me, it's been helpful to think about what features I really value now. Email and websites are not how most folks I know here get information; people like to talk with each other. Most people I know ask around, face to face, to find a new home. I rented a golf cart for one day, and my friends used it to find where I'm living now and to move me in. Then we circled for fun. There are some less expensive hotels, like Ruby's and Pedro's (hotel and hostel). I don't know who takes dogs.

Well, we haven't had much luck. It looked like we were going to be at Seascape Villas, but the person we were talking to actually had no affiliation with the place. I don't know if she was trying to scam us or what, but she was the person who responded to my email sent directly through their site. We almost had the chance to get the place next to Capt. Morgan's: the third-floor penthouse was available, but the manager had an appointment to show it today, and it looks like that family is going to take it. We offered to put a deposit down, but since he had the appointment with them, he wanted to keep it. We are out of our lease at our current house at the end of April and are really scrambling to find a place. If anyone hears of anything available in the Coco Beach / Captain Morgan's area, beach side, please let me know! We haven't had a lot of luck so far.
{ "redpajama_set_name": "RedPajamaC4" }
2,484
Q: API calls inside MapReduce jobs

I want to ask about the drawbacks of calling an external API while running a MapReduce job. Some examples: inside the mapper we need to geocode an address, so we call a Google Maps API; or we call an external DB in order to get related elements of an item; etc.

A: It's perfectly OK to make a call to an external API as long as there are no DB calls in the external API. In many ways this is preferable to writing your logic over again. Oftentimes you want your MapReduce jobs to be nothing more than wrappers around logic written in a non-MapReduce context. This makes for more testable code.

However, making external DB calls is STRONGLY discouraged. It will drastically reduce the speed of your MapReduce jobs, as every call would be a random-access call. In addition, having several thousand map/reduce tasks hitting your DB at the same time could bring the DB to its knees. If you need related elements, it's preferable to have all the elements on HDFS and do a join in MapReduce. If the DB you're talking about is a NoSQL store such as Cassandra or HBase, they'll have a batch-export feature to export the entire table onto HDFS.
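To make the "join on HDFS" suggestion concrete, below is a minimal Hadoop Streaming sketch of a reduce-side join in Python. The file-name test, field layout, and source tags are illustrative assumptions, not part of the original answer, and a production job would also configure a comparator so that, within each key, the lookup record is guaranteed to sort ahead of the data records:

#!/usr/bin/env python3
# Reduce-side join: enrich records with a pre-exported lookup table on HDFS
# instead of querying an external DB from inside the mappers.
import os
import sys

def mapper():
    # Hadoop Streaming exposes the current input file via this variable.
    source = os.environ.get("mapreduce_map_input_file", "")
    tag = "A" if "lookup" in source else "B"  # "A" sorts before "B"
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t", 1)
        print(f"{key}\t{tag}\t{value}")

def reducer():
    # Hadoop groups the mapper output by key, so all records sharing a key
    # arrive together; remember the lookup value, emit it with each record.
    current_key, lookup = None, None
    for line in sys.stdin:
        key, tag, value = line.rstrip("\n").split("\t", 2)
        if key != current_key:
            current_key, lookup = key, None
        if tag == "A":
            lookup = value
        elif lookup is not None:
            print(f"{key}\t{value}\t{lookup}")

if __name__ == "__main__":
    mapper() if "map" in sys.argv[1:] else reducer()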
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,004
{"url":"https:\/\/www.techwhiff.com\/learn\/to-examine-relationships-between-categorical\/168821","text":"# A. To examine relationships between a categorical and a numerical variable, we can use a. counts...\n\n###### Question:\n\nA. To examine relationships between a categorical and a numerical variable, we can use\n\n a. counts and corresponding charts of the counts b. scatterplots c. correlation matrices d. box-whisker plots\n\nB. Coding all Business majors as 1 and all others as 0 in a data set illustrates the use of\n\n a. numerical variables b. dummy variables c. ordinal variables d. nominal variables\n\nC. Abby has been keeping track of what she spends on Starbucks coffee. The last seven week's expenditures, in dollars, were 6, 12, 14, 10, 15, 12, and 8. The average amount Abby spends on Starbucks is $11. True of false. D. A sample of 25 observations has a standard deviation of 10. The sum of the squared deviations from the sample mean is a. 475 b. 400 c. 500 d. 2400 e. 2500 ## Answers #### Similar Solved Questions 1 answer ##### ACTIVE LEARNING TEMPLATE: Basic Concept STUDENT NAME CONCEPT Management of Hypermagnesemia REVIEW MODULE CHAPTER Underlying Principles... ACTIVE LEARNING TEMPLATE: Basic Concept STUDENT NAME CONCEPT Management of Hypermagnesemia REVIEW MODULE CHAPTER Underlying Principles Related Content (E.G., DELEGATION, LEVELS OF PREVENTION, ADVANCE DIRECTIVES) Nursing Interventions WHO? WHEN? WHY? HOW?... 1 answer ##### CASE STUDY II: T. J. was 4 years old when he first presented to his family... CASE STUDY II: T. J. was 4 years old when he first presented to his family doctor with a 3-week history of fatigue, weakness, and a persistent sore throat. On physical examination he had a palpable spleen but no evidence of lymphadenopathy. He appeared pale and had multiple bruises over his lower ex... 1 answer ##### Identify the correct use of arrows to show the movement of electrons in the acid-base reaction... Identify the correct use of arrows to show the movement of electrons in the acid-base reaction below. \u0441\u043d. H\u00c1C N H + OH. - - CHE H-C-N: + H-O-H CHE CHE OH - -N: + H-0-H Hic-N H + CH, CH N: + H-OH HC-NH + OH CH, CHE CH, H.C-N H CHE + OH - + H o H. | H,C-N: CHE... 1 answer ##### A mutual fund manager has a$20 million portfolio with a beta of 1.40. The risk-free...\nA mutual fund manager has a $20 million portfolio with a beta of 1.40. The risk-free rate is 3.25%, and the market risk premium is 6.5%. The manager expects to receive an additional$5 million, which she plans to invest in a number of stocks. After investing the additional funds, she wants the fund&...\n##### Physics rotational motion problem\nAn apparatus for launching a small boat consists of a 150.0 kg cart that rides down a set of tracks on four solid steel wheels, each with radius 20.0 cm and mass 45.0kg. The tracks slope at an angle of 7.50deg to the horizontal, and the boat\u2019s mass is 735kgIf the boat is released from rest a d...\n##### Calculate X You are buying a perpetuity with annual payments as follows Payment of X at...\nCalculate X You are buying a perpetuity with annual payments as follows Payment of X at the end of the first year and every three years thereafter. Payment of X+1 at the end of the second year and every three years thereafter. 
Payment of X+2 at the end of the third year and every three years thereaf...\n##### Describe how IR spectroscopy might be used to monitor the progress of the following reaction: H2Cr04...\nDescribe how IR spectroscopy might be used to monitor the progress of the following reaction: H2Cr04 OH OH H20, H2S04 1-butanol butanoic acid...\n##### 9. It is known that toddlers pick red-colored toys more often than other colors (70% of...\n9. It is known that toddlers pick red-colored toys more often than other colors (70% of the time). Second on the list is bright yellow (20%) and all other colors together with 10% of the times. Three bowls with cookies are left at a play room. The red bowl has 10 chocolate chip cookies and 5 oatmeal...\n##### Time Taken:0:09:26 Cecile Kamusau: Attempt 1 Question 3 (1 point) Which of the following statements are...\nTime Taken:0:09:26 Cecile Kamusau: Attempt 1 Question 3 (1 point) Which of the following statements are correct? The condition you are testing is known as the dependent variable. A lurking variable is usually something you didn't control for and you do not want it to be present in an experiment....\n##### Along the East and North directions.) 5.1 km straight east: then\nalong the East and North directions.) 5.1 km straight east: then...\n##### 8.55b Draw step two of the mechanism. Include lone pairs in your answer. Do not use...\n8.55b Draw step two of the mechanism. Include lone pairs in your answer. Do not use abbreviations such as OMe. HC- H.C Edit HC SHOW HINT...","date":"2023-03-26 18:01:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.20711718499660492, \"perplexity\": 1922.3719509918294}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296946445.46\/warc\/CC-MAIN-20230326173112-20230326203112-00764.warc.gz\"}"}
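A quick check of the two numerical items: for C, the mean is $(6+12+14+10+15+12+8)/7 = 77/7 = 11$, so the statement is true. For D, the sum of squared deviations about the sample mean satisfies $\sum_i (x_i - \bar{x})^2 = (n-1)s^2 = 24 \cdot 10^2 = 2400$, so the answer is (d).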
Svenska Visakademien was founded in the autumn of 1999 at Den gyldene freden in Stockholm "to strengthen the position of Swedish-language song and the visa". As early as the 1940s, Evert Taube and others wanted to create an academy for the visa, but it was not until November 1999 that Svenska Visakademien was established, with Sven-Bertil Taube as its first chairman. The academy consists of at most 27 members; in 2021 the members were Sven-Bertil Taube, Jan-Olof Andersson, Martin Bagge, Marie Bergman, Thorstein Bergman, Eva Borgström, Monica Dominique, Håkan Elmquist, Jan Hammarlund, Johan Johansson, Torbjörn Johansson, Carin Kjellman, Sven Kristersson, Maria Lindström, Maud Lindström, Britt Ling, Christina Mattsson, Annika Nordström, Märta Ramsten, Georg Riedel, Mikael Samuelson, Lucas Stark, Pär Sörman, Lena Willemark and Finn Zetterholm.

References

External links

Svenska Visakademien's former website (archived 2014)
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,011
\section{Gravity Perturbations from Seismic Fields} \label{sec:ambient} \index{Newtonian noise!seismic} Already in the first design draft of a laser-interferometric GW detector laid out by Rainer Weiss, gravity perturbations from seismic fields were recognized as a potential noise contribution \cite{Wei1972}. He expressed the transfer function between ground motion and gravitational displacement noise of a test mass as effective isolation factor, highlighting the fact that gravitational coupling can be understood as additional link that circumvents seismic isolation. The equations that he used already had the correct dependence on ground displacement, density and seismic wavelength, but it took another decade, before Peter Saulson presented a more detailed calculation of numerical factors \cite{Sau1984}. He divided the half space below a test mass into volumes of correlated density fluctuations, and assigned a mean displacement to each of these volumes. Fluctuations were assumed to be uncorrelated between different volumes. The total gravity perturbation was then obtained as an incoherent sum over these volumes. The same scheme was carried out for gravity perturbations associated with vertical surface displacement. The sizes of volumes and surface areas of correlated density perturbations were determined by the length of seismic waves, but Saulson did not make explicit use of the wave nature of the seismic field that produces the density perturbations. As a result, also Saulson had to concede that certain steps in his calculation ``cannot be regarded as exact''. The next step forward was marked by two papers that were published almost simultaneously by groups from the LIGO and Virgo communities \cite{HuTh1998,BeEA1998}. In these papers, the wave nature of the seismic field was taken into account, producing for the first time accurate predictions of Newtonian noise. They understood that the dominant contribution to Newtonian noise would come from seismic surface waves, more specifically Rayleigh waves. The Rayleigh field produces density perturbations beneath the surface, and correlated surface displacement at the same time. The coherent summation of these effects was directly obtained, and since then, models of Newtonian noise from Rayleigh waves have not improved apart from a simplification of the formalism. Nonetheless, Newtonian-noise models are not only important to estimate a noise spectrum with sufficient accuracy. More detailed models are required to analyze Newtonian-noise mitigation, which is discussed in Section \ref{sec:mitigate}. Especially the effect of seismic scattering on gravity perturbations needs to be quantified. A first analytical calculation of gravity perturbations from seismic waves scattered from a spherical cavity is presented in Sections \ref{sec:scattercomp} and \ref{sec:scattershear}. In general, much of the recent research on Newtonian-noise modelling was carried out to identify possible limitations in Newtonian-noise mitigation. Among others, this has led to two major new developments in the field. First, finite-element simulations were added to the set of tools \cite{HaEA2009a,BeEA2010c}. We will give a brief summary in Section \ref{sec:numsim}. The advantage is that several steps of a complex analysis can be combined such as simulations of a seismic field, simulations of seismic measurements, and simulations of noise mitigation. 
Second, since seismic sources can be close to the test masses, it is clear that the seismic field cannot always be described as a superposition of propagating plane seismic waves. For this reason, analytical work has begun to base calculations of gravity perturbations on simple models of seismic sources, which can give rise to complex seismic fields \cite{HaEA2015}. Since this work also inspired potential applications in geophysics and seismology, we devote Section \ref{sec:pointsources} entirely to this new theory. Last but not least, ideas for new detector concepts have evolved over the last decade, which will make it possible to monitor gravity strain perturbations at frequencies below 1\,Hz. This means that our models of seismic Newtonian noise (as for all other types of Newtonian noise) need to be extended to lower frequencies, which is not always a trivial task. We will discuss aspects of this problem in Section \ref{sec:lowfNNRay}. \subsection{Seismic waves} \label{sec:seismic} In this section, we describe the properties of seismic waves relevant for calculations of gravity perturbations. The reader interested in further details is advised to study one of the classic books on seismology, for example Aki \& Richards \cite{AkRi2009}. The formalism that will be introduced is most suited to describe physics in infinite or half-spaces with simple modifications such as spherical cavities, or small perturbations of a flat surface topography. At frequencies well below 10\,mHz where the finite size of Earth starts to affect significantly the properties of the seismic field, seismic motion is best described by Earth's normal modes \cite{DaTr1998}. It should also be noted that in the approximation used in the following, the gravity field does not act back on the seismic field. This is in contrast to the theory of Earth's normal modes, which includes the gravity potential and its derivative in the elastodynamic equations. Seismic waves can generally be divided into shear waves, compressional waves, and surface waves. Compressional waves\index{seismic waves!compressional or P} produce displacement along the direction of propagation. They are sometimes given the alternative name ``P-waves'', which arises from the field of seismology. The P stands for \emph{primary} and means that these waves are the first to arrive after an earthquake (i.~e.~they are the fastest waves). These waves are characterized by a frequency $\omega$ and a wave vector $\vec k^{\,\rm P}$. While one typically assumes $\omega=k^{\rm P}\alpha$ with compressional wave speed $\alpha$, this does not have to hold in general, and many results presented in the following sections do not require a fixed relation between frequency and wavenumber. The displacement field of a plane compressional wave can be written \beq \vec\xi^{\,\rm P}(\vec r,t)=\vec e_k\xi_0^{\rm P}(\vec k^{\,\rm P},\omega)\exp(\irm(\vec k^{\,\rm P}\cdot\vec r\,-\omega t)) \label{eq:comprPW} \eeq The index 'P' is introduced to distinguish between displacements of shear and compressional waves, and $\vec e_k\equiv\vec k^{\,\rm P}/k^{\rm P}$. In media with vanishing shear modulus such as liquids and gases, compressional waves are also called sound waves. 
There are many ways to express the P-wave speed in terms of other material constants, but a widely used definition is in terms of the Lam\'e constants $\lambda,\,\mu$:\index{Lam\'e constants} \beq \alpha = \sqrt{\frac{\lambda+2\mu}{\rho}} \eeq The Lam\'e constant $\mu$ is also known as shear modulus, and $\rho$ is the density of the medium. Shear waves\index{seismic waves!shear or S} produce transversal displacement and do not exist in media with vanishing shear modulus. They are also known as ``S-waves'', where S stands for \emph{secondary} since it is the seismic phase to follow the P-wave arrival after earthquakes. The shear-wave displacement $\vec\xi^{\;\rm S}(\vec r,t)$ of a single plane wave can be expressed in terms of a polarization vector $\vec e_p$: \beq \vec\xi^{\,\rm S}(\vec r,t)=\vec e_p\xi_0^{\rm S}(\vec k^{\,\rm S},\omega)\exp(\irm(\vec k^{\,\rm S}\cdot\vec r\,-\omega t)) \label{eq:shearPW} \eeq with $\vec e_p\cdot\vec k^{\,\rm S}=0$. The S-wave speed in terms of the Lam\'e constants reads \beq \beta = \sqrt{\frac{\mu}{\rho}} \eeq Both wave types, compressional and shear, will be referred to as \emph{body waves} since they can propagate through media in all directions. Clearly, inside inhomogeneous media, all material constants are functions of the position vector $\vec r$. Another useful relation between the two seismic speeds is given by \beq \beta = \alpha\cdot\sqrt{\frac{1-2\nu}{2-2\nu}}, \label{eq:speedPS} \eeq where $\nu$ is the Poisson's ratio \index{Poisson's ratio}of the medium. It should be mentioned that there are situations when a wave field cannot be described as a superposition of compressional and shear waves. This is for example the case in the near field of a seismic source. In the remainder of this section, we will calculate gravity perturbations for cases where the distinction between compressional and shear waves is meaningful. The more complicated case of gravity perturbations from seismic fields near their sources is considered in Section \ref{sec:pointsources}. An elegant way to represent a seismic displacement field $\vec \xi(\vec{r},t)$ is in terms of its seismic or Lam\'e potentials $\phi_{\rm s}(\vec{r},t)$, $\vec\psi_{\rm s}(\vec{r},t)$ \cite{AkRi2009}: \index{potential!seismic, or Lam\'e} \beq \vec{\xi}(\vec{r},t)=\nabla\phi_{\rm s}(\vec{r},t)+\nabla\times\vec\psi_{\rm s}(\vec{r},t) \label{eq:seispot} \eeq with $\nabla\cdot\vec\psi_{\rm s}(\vec{r},t)=0$. The rotation of the first term vanishes, which is characteristic for compressional waves. The divergence of the second term vanishes, which is characteristic for shear waves. Therefore, the scalar potential $\phi_{\rm s}(\vec{r},t)$ will be called P-wave potential, and $\vec \psi_{\rm s}(\vec{r},t)$ S-wave potential. As will become clear in the following, many integrals involving the seismic field $\vec{\xi}(\vec{r},t)$ simplify greatly when using the seismic potentials to represent the field. It is possible to rewrite the shear-wave potential in terms of two scalar quantities in Cartesian coordinates \cite{Sas1985}: \beq \vec\psi_{\rm s}(\vec{r},t)=\nabla\times(0,0,\psi_{\rm s}(\vec{r},t))+(0,0,\chi_{\rm s}(x,y,t)) \label{eq:shearpot} \eeq This form can lead to useful simplifications. For example, if seismic displacement is relevant only in $z$-direction, then it suffices to calculate the contribution from the scalar potential $\psi_{\rm s}(\vec{r},t)$. Next we will introduce the Rayleigh waves\index{seismic waves!Rayleigh or Rf}. 
These are surface waves and in fact the only seismic waves that can propagate on surfaces of homogeneous media. In the presence of an interface between two types of media, the set of possible solutions of interface waves is much richer, as described in detail in \cite{Pil1972}. In this paper, we will not deal specifically with the general solutions of interface waves, but it should be noted that gravity perturbations from at least one of the types, the Stoneley waves\index{seismic waves!Stoneley}, can be calculated using the same equations derived later for the Rayleigh waves. The definition of Rayleigh waves does not require a plane surface, but let us consider the case of a homogeneous half space for simplicity. The direction normal to the surface corresponds to the $z$-axis of the coordinate system, and will also be called vertical direction. The normal vector is denoted as $\vec e_z$. Rayleigh waves propagate along a horizontal direction $\vec e_k$. A wave vector $\vec k$ can be split into its vertical component $\vec k_z$ and its horizontal component $\vec k_\varrho$. The vertical wavenumbers are defined as \beq k_z^{\rm P}(k_\varrho)=\sqrt{(k^{\rm P})^2-k_\varrho^2},\qquad k_z^{\rm S}(k_\varrho)=\sqrt{(k^{\rm S})^2-k_\varrho^2} \label{eq:wavek} \eeq Even though Rayleigh waves are surface waves, their displacement field extends evanescently (i.~e.~with exponential amplitude fall-off from the surface) throughout the entire medium. They can be considered as an analytical extension of a situation where body waves are reflected from the surface, in the sense that we can allow the horizontal wavenumber $k_\varrho$ to be larger than $k^{\rm P}$ and $k^{\rm S}$. In this case, the vertical wavenumbers have imaginary values. Hence, in the case of Rayleigh waves, it is convenient to define new wave parameters as: \beq q_z^{\rm S}=\sqrt{k_\varrho^2-(k^{\rm S})^2},\qquad q_z^{\rm P}=\sqrt{k_\varrho^2-(k^{\rm P})^2} \label{eq:verticalk} \eeq Here, $k_\varrho$ is the horizontal wavenumber of the Rayleigh wave. Note that the order of terms in the square roots is reversed with respect to the case of body waves in Equation (\ref{eq:wavek}). Rewriting the equations in \cite{HaNa1998} in terms of the horizontal and vertical wavenumbers, the horizontal and vertical amplitudes of the three-dimensional displacement field of a Rayleigh wave read \beq \begin{split} \xi_k(\vec{r},t) &= A\cdot\left(k_\varrho\e^{q_z^{\rm P}z}-\zeta q_z^{\rm S}\e^{q_z^{\rm S}z}\right)\cdot\sin(\vec k_\varrho\cdot\vec \varrho-\omega t)\\ \xi_z(\vec{r},t) &= A\cdot\left(q_z^{\rm P}\e^{q_z^{\rm P}z}-\zeta k_\varrho\e^{q_z^{\rm S}z}\right)\cdot\cos(\vec k_\varrho\cdot\vec \varrho-\omega t) \end{split} \label{eq:Rayfield} \eeq with $\zeta(k_\varrho)\equiv\sqrt{q_z^{\rm P}/q_z^{\rm S}}$. The speed $c_{\rm R}=\omega/k_\varrho$ of the fundamental Rayleigh wave obeys the equation \index{Rayleigh pole} \beq \begin{split} &R\left((c_{\rm R}/\beta)^2\right)=0,\\ &R(x)=x^3-8x^2+8x\frac{2-\nu}{1-\nu}-\frac{8}{1-\nu} \end{split} \label{eq:speedR} \eeq The real-valued solution to this equation is known as the Rayleigh pole since the same function appears in the denominator of surface reflection coefficients. Note that the horizontal and vertical displacements are phase-shifted by 90$^\circ$, which gives rise to elliptical particle motions. Therefore, arbitrary time series of vertical displacement are related to horizontal displacement via the Hilbert transform\index{Hilbert transform}.
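Since $R$ is a cubic polynomial in $(c_{\rm R}/\beta)^2$, the Rayleigh-wave speed is easily found numerically. The following Python sketch (a minimal illustration under the conventions above; names and the chosen value of $\nu$ are ours, not from any published code) solves Equation (\ref{eq:speedR}) and also returns the P-wave speed from Equation (\ref{eq:speedPS}):
\begin{verbatim}
import numpy as np

def rayleigh_speed_ratio(nu):
    """Solve R((c_R/beta)^2) = 0, cf. Eq. (speedR), and return c_R/beta.

    The physical root is real and lies in (0, 1), since Rayleigh waves
    are slower than shear waves.
    """
    coeffs = [1.0, -8.0, 8.0 * (2.0 - nu) / (1.0 - nu), -8.0 / (1.0 - nu)]
    roots = np.roots(coeffs)
    x = [r.real for r in roots if abs(r.imag) < 1e-8 and 0.0 < r.real < 1.0]
    return np.sqrt(x[0])

nu = 0.27                                      # typical Poisson's ratio
cr_over_beta = rayleigh_speed_ratio(nu)
alpha_over_beta = np.sqrt((2.0 - 2.0 * nu) / (1.0 - 2.0 * nu))  # Eq. (speedPS)
print(cr_over_beta, alpha_over_beta)           # ~0.92 and ~1.78 for nu = 0.27
\end{verbatim}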
The displacement vector is constructed according to \beq \vec\xi(\vec r,t) = \xi_k(\vec r,t)\vec e_k+\xi_z(\vec r,t)\vec e_z \eeq In the case of a stratified medium, this wave type is also known as the fundamental Rayleigh wave to distinguish it from \emph{higher-order Rayleigh waves} that can exist in these media \cite{HuTh1998}. For this reason, we will occasionally refer to Rayleigh waves as Rf-waves. According to Equations (\ref{eq:speedPS}) \& (\ref{eq:speedR}), given a shear-wave speed $\beta$, the compressional-wave speed $\alpha$ and Rayleigh-wave speed $c_{\rm R}$ are functions of the Poisson's ratio only. Figure \ref{fig:wavespeeds} shows the values of the wave speeds in units of $\beta$. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp3-WaveSpeed.pdf}} \caption[Seismic wave speeds]{Rayleigh speed (solid line) and P-wave speed (dashed line) in units of S-wave speed $\beta$ as a function of Poisson's ratio.} \label{fig:wavespeeds} \end{figure}} As can be seen, for a given shear-wave speed the Rayleigh-wave speed (shown as a solid line) depends only weakly on the Poisson's ratio. The P-wave speed, however, varies more strongly, and in fact grows indefinitely as the Poisson's ratio approaches the value $\nu=0.5$. \subsection{Basics of seismic gravity perturbations} \label{eq;basicsgrav} In this section, we derive the basic equations that describe the connection between seismic fields and associated gravity perturbations. Expressions will first be derived in terms of the seismic displacement field $\vec\xi(\vec r,t)$, then in terms of the seismic potentials $\phi_{\rm s}(\vec r,t),\,\vec\psi_{\rm s}(\vec r,t)$, and this section concludes with a discussion of gravity perturbations in transform domain. \subsubsection{Gravity perturbations from seismic displacement} The starting point is the continuity equation\index{continuity equation}, which gives an expression for the density perturbation caused by seismic displacement: \beq \delta\rho(\vec{r},t) = -\nabla\cdot(\rho(\vec r\,)\vec\xi(\vec r,t)) \label{eq:densinh} \eeq Here it is assumed that the seismic density perturbations are much smaller than the unperturbed density, $\delta\rho(\vec{r},t)\ll\rho(\vec r\,)$, so that self-induced seismic scattering is insignificant. The perturbation of the gravity potential can now be written \beq \begin{split} \delta\phi (\vec{r}_0,t) &= -G\int\drm V \frac{\delta\rho(\vec r,t)}{|\vec r-\vec r_0|}\\ &= G\int\drm V \frac{\nabla\cdot(\rho(\vec r\,)\vec\xi(\vec r,t))}{|\vec r-\vec r_0|}\\ &= -G\int\drm V\,\rho(\vec r\,)\vec\xi(\vec r,t)\cdot\nabla\frac{1}{|\vec r-\vec r_0|} \end{split} \label{eq:totalNNinh} \eeq Note that in the last step integration by parts did not lead to surface terms since any type of geology can be described as having infinite size. For example, a half space would correspond to an infinite space with vanishing density above surface.
Carrying out the gradient operation, we obtain the gravity perturbation in dipole form\index{gravity perturbation!dipole form} \beq \delta\phi(\vec r_0,t) = G\int\drm V\rho(\vec r\,)\vec \xi(\vec r,t)\cdot\frac{\vec r-\vec r_0}{|\vec r-\vec r_0|^3} \label{eq:dipolephi} \eeq and the corresponding perturbation of gravity acceleration reads \beq \begin{split} \delta\vec a(\vec r_0,t) &= -G\int\drm V\rho(\vec r\,)(\vec \xi(\vec r,t)\cdot\nabla_0)\cdot\frac{\vec r-\vec r_0}{|\vec r-\vec r_0|^3}\\ &= G\int\drm V\rho(\vec r\,)\frac{1}{|\vec r-\vec r_0|^3}\left(\vec\xi(\vec r,t)-3(\vec e_{rr_0}\cdot\vec\xi(\vec r,t))\vec e_{rr_0}\right) \end{split} \label{eq:dipoleacc} \eeq with $\vec e_{rr_0}\equiv (\vec r-\vec r_0)/|\vec r-\vec r_0|$, and $\nabla_0$ denotes the gradient operation with respect to $\vec r_0$. In this form, it is straight-forward to implement gravity perturbations in finite-element simulations (see Section \ref{sec:numsim}), where each finite element is given a mass $\rho(\vec r\,)\delta V$; a minimal numerical sketch of this sum is shown at the end of this section. This equation is valid whenever the continuity Equation (\ref{eq:densinh}) holds, and describes gravity perturbations inside infinite media as well as media with surfaces. Especially in the case of a homogeneous medium with a surface, treating bulk and surface contributions to gravity perturbations separately can often simplify complex calculations. The continuity equation with constant (unperturbed) density $\rho_0=\rho(\vec r\,)$ describes density perturbations inside the medium contained in the volume $\mathcal V$, which directly yields the bulk term: \beq \delta\phi_{\rm bulk}(\vec{r}_0,t)=G\rho_0\int\limits_{\mathcal V}\drm V \frac{\nabla\cdot\vec\xi(\vec r,t)}{|\vec r-\vec r_0|} \label{eq:bulkNN} \eeq The surface term can be constructed by noting that it is the displacement normal to the surface that generates gravity perturbations: \beq \delta\phi_{\rm surf}(\vec{r}_0,t) = -G\rho_0\int\drm S \frac{\vec n(\vec{r})\cdot\vec{\xi}(\vec{r},t)}{|\vec r-\vec r_0|} \label{eq:surfNN} \eeq Note that this equation, too, is valid only for small displacements, since the surface normal is assumed to change negligibly due to seismic waves. The sum of bulk and surface terms is equal to the expression in Equation (\ref{eq:dipolephi}) with constant mass density. The same results can also be obtained using an explicit expression of the density $\rho(\vec r\,)$ that includes the density change at the surface in the form of a Heaviside function $\Theta(\cdot)$. The surface is the solution of an equation $\sigma(\vec r\,)=0$, with $\sigma(\vec r\,)$ being normalized such that $\nabla\sigma(\vec r\,)$ is the unit normal vector $\vec n(\vec r\,)$ of the surface pointing from the medium outwards into the empty space. For a homogeneous medium with density $\rho_0$, the density of the entire space can be written as \beq \rho(\vec r\,)=\rho_0\Theta(-\sigma(\vec r\,)) \eeq Inserting this expression into Equation (\ref{eq:totalNNinh}), one obtains \beq \begin{split} \delta\phi (\vec{r}_0,t) &= G\rho_0\int\drm V \frac{\nabla\cdot(\Theta(-\sigma(\vec r\,))\vec\xi(\vec r,t))}{|\vec r-\vec r_0|}\\ &= G\rho_0\int\drm V \frac{\Theta(-\sigma(\vec r\,))\nabla\cdot\vec\xi(\vec r,t)-\delta(-\sigma(\vec r\,))\vec n(\vec r\,)\cdot\vec\xi(\vec r,t)}{|\vec r-\vec r_0|} \end{split} \eeq The first part of the infinite-space integral can be rewritten as the integral in Equation (\ref{eq:bulkNN}) over the volume $\mathcal V$ of the medium, while the second part translates into the surface integral in Equation (\ref{eq:surfNN}).
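The dipole form of Equation (\ref{eq:dipoleacc}) translates directly into a sum over finite elements, each carrying a mass $\rho(\vec r\,)\delta V$. A minimal Python sketch of this idea follows; the grid, density, and displacement values are placeholders chosen purely for illustration.
\begin{verbatim}
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 / (kg s^2)]

def delta_a(r0, r, xi, rho, dV):
    """Discretized Eq. (dipoleacc): gravity acceleration at r0 from
    displaced finite elements.

    r0  : (3,) test-mass position [m]
    r   : (N, 3) element positions [m]
    xi  : (N, 3) element displacements [m]
    rho : (N,) element densities [kg/m^3]
    dV  : element volume [m^3] (uniform grid assumed)
    """
    d = r - r0                                  # separation vectors
    dist = np.linalg.norm(d, axis=1)
    e = d / dist[:, None]                       # unit vectors e_{r r0}
    proj = np.sum(e * xi, axis=1)               # e . xi per element
    integrand = (xi - 3.0 * proj[:, None] * e) / dist[:, None]**3
    return G * dV * np.sum(rho[:, None] * integrand, axis=0)

# Toy example: 1000 elements with random sub-micron displacements
rng = np.random.default_rng(0)
r = rng.uniform(-100.0, 100.0, size=(1000, 3))
xi = 1e-6 * rng.standard_normal((1000, 3))
rho = 2500.0 * np.ones(1000)
print(delta_a(np.zeros(3), r, xi, rho, 200.0**3 / 1000))
\end{verbatim}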
\subsubsection{Gravity perturbations in terms of seismic potentials} In the last part of this section, results will be expressed in terms of the seismic potentials. This is helpful to connect this work to geophysical publications where solutions to seismic fields are often derived for these potentials. In many cases, it also greatly simplifies the calculation of gravity perturbations. In order to simplify the notation, the equations are derived for a homogeneous medium. Expressing the displacement field in terms of its potentials according to Equation (\ref{eq:seispot}), the bulk contribution reads \beq \delta\phi_{\rm bulk}(\vec{r}_0,t)=G\rho_0\int\limits_{\mathcal V}\drm V\,\frac{\Delta\phi_{\rm s}(\vec r,t)}{|\vec r-\vec r_0|}. \eeq This expression can be transformed via integration by parts into \beq \delta\phi_{\rm bulk}(\vec{r}_0,t)=G\rho_0\int\drm S\,\vec n(\vec r\,)\cdot\left[\frac{\nabla\phi_{\rm s}(\vec r,t)}{|\vec r-\vec r_0|}-\phi_{\rm s}(\vec r,t)\nabla\frac{1}{|\vec r-\vec r_0|}\right]-4\pi G\rho_0\phi_{\rm s}(\vec r_0,t). \label{eq:bodyinparts} \eeq One integral was solved explicitly by using \beq \Delta\frac{1}{|\vec r-\vec r_0|}=-4\pi\delta(\vec r-\vec r_0) \eeq The contribution $\delta\phi_{\rm surf}(\vec{r}_0,t)$ from the surface can also be rewritten in terms of seismic potentials \beq \delta\phi_{\rm surf}(\vec{r}_0,t)=-G\rho_0\int\drm S\,\vec n(\vec r\,)\cdot\frac{\nabla\phi_{\rm s}(\vec r,t)+\nabla\times\vec\psi_{\rm s}(\vec r,t)}{|\vec r-\vec r_0|}. \eeq As can be seen, terms in the bulk and surface contributions cancel, and so we get for the gravity potential \beq \begin{split} \delta\phi(\vec{r}_0,t) &=\delta\phi_{\rm bulk}(\vec{r}_0,t)+\delta\phi_{\rm surf}(\vec{r}_0,t) \\ &= -G\rho_0\int\drm S\,\vec n(\vec r\,)\cdot\left[\frac{\nabla\times\vec\psi_{\rm s}(\vec r,t)}{|\vec r-\vec r_0|}+\phi_{\rm s}(\vec r,t)\nabla\frac{1}{|\vec r-\vec r_0|}\right]-4\pi G\rho_0\phi_{\rm s}(\vec r_0,t)\\ &= -G\rho_0\int\drm S\,\vec n(\vec r\,)\cdot\left[\vec\psi_{\rm s}(\vec r,t)\times\nabla\frac{1}{|\vec r-\vec r_0|}+\phi_{\rm s}(\vec r,t)\nabla\frac{1}{|\vec r-\vec r_0|}\right]-4\pi G\rho_0\phi_{\rm s}(\vec r_0,t) \end{split} \label{eq:gravHelm} \eeq The last equation follows from the fact that the boundary of a boundary is zero after application of Stokes' theorem (the surface $S$ being the boundary of a body with volume $V$). The seismic potentials vanish above surface, and therefore the gravity perturbation in empty space is the result of a surface integral. This is a very important conclusion and useful to theoretical investigations, but of limited practical relevance since the integral depends on the seismic potential $\phi_{\rm s}(\vec r,t)$, which cannot, in general, be inferred from measurements. The shear-wave potential enters as $\nabla\times\vec\psi_{\rm s}(\vec r,t)$, which is equal to the (observable) shear-wave displacement. In the absence of a surface, the solution simplifies to \beq \delta\phi(\vec{r}_0,t) =-4\pi G\rho_0\phi_{\rm s}(\vec r_0,t). \label{eq:gravP} \eeq The latter result is remarkable as it states the proportionality of gravity and seismic potentials in infinite media. If a solution of a seismic field is given for its seismic potentials, then one can immediately write down the gravity perturbation without further calculations. We will make use of it in Section \ref{sec:pointsources} to calculate gravity perturbations from seismic point sources.
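As a short worked example (added here for illustration), consider the plane compressional wave of Equation (\ref{eq:comprPW}) in infinite space. Its P-wave potential follows from $\vec\xi^{\,\rm P}=\nabla\phi_{\rm s}$: \beq \phi_{\rm s}(\vec r,t)=\frac{\xi_0^{\rm P}}{\irm k^{\rm P}}\exp(\irm(\vec k^{\,\rm P}\cdot\vec r\,-\omega t)), \eeq and Equation (\ref{eq:gravP}) immediately yields \beq \delta\vec a(\vec r_0,t)=-\nabla_0\,\delta\phi(\vec r_0,t)=4\pi G\rho_0\,\vec\xi^{\,\rm P}(\vec r_0,t), \eeq i.~e.~in infinite media the gravity acceleration is proportional to the local seismic displacement with the factor $4\pi G\rho_0$. This factor serves as a reference value when cavity corrections are discussed in Section \ref{sec:scatterNN}.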
\subsubsection{Gravity perturbations in transform domain} In certain situations, it is favorable to consider gravity perturbations in transform domain. For example, in calculations of gravity perturbations in a half space, it can be convenient to express solutions in terms of the displacement amplitudes $\vec\xi(\vec k_\varrho,z,t)$, and in infinite space in terms of $\vec\xi(\vec k,t)$. As shown in Section \ref{sec:sourcehalf}, it is also possible to obtain concise solutions for the half-space problem using cylindrical harmonics, but in the following, we only consider plane-wave harmonics. The transform-domain equations for gravity perturbations from seismic fields in a half space, with the surface at $z=0$, are obtained by calculating the Fourier transforms of Equations (\ref{eq:bulkNN}) and (\ref{eq:surfNN}) with respect to $x_0,\,y_0$. This yields the bulk term \beq \begin{split} \delta\phi_{\rm bulk}(\vec k_\varrho,z_0,t)&=2\pi G\rho_0\frac{1}{k_\varrho}\int\limits_{-\infty}^0\drm z \, \e^{-k_\varrho|z-z_0|}\left[\partial_z\xi_z(\vec k_\varrho,z,t)-\irm\vec k_\varrho\cdot\vec \xi_\varrho(\vec k_\varrho,z,t)\right]\\ &=2\pi G\rho_0\frac{1}{k_\varrho}\Bigg[\xi_z(\vec k_\varrho,0,t)\e^{-k_\varrho|z_0|}\\ &\qquad\qquad-\int\limits_{-\infty}^0\drm z \, \e^{-k_\varrho|z-z_0|}\left(-k_\varrho{\rm sgn}(z-z_0)\xi_z(\vec k_\varrho,z,t)-\irm\vec k_\varrho\cdot\vec \xi_\varrho(\vec k_\varrho,z,t)\right)\Bigg], \end{split} \label{eq:bulkNNtr} \eeq where sgn denotes the signum function. The surface term reads \beq \delta\phi_{\rm surf}(\vec k_\varrho,z_0,t) = -2\pi G\rho_0\frac{1}{k_\varrho}\xi_z(\vec k_\varrho,0,t)\e^{-k_\varrho|z_0|} \label{eq:surfNNtr} \eeq Hence, the total perturbation of the gravity potential is given by \beq \delta\phi(\vec k_\varrho,z_0,t)= 2\pi G\rho_0\int\limits_{-\infty}^0\drm z \, \e^{-k_\varrho|z-z_0|}\left({\rm sgn}(z-z_0)\xi_z(\vec k_\varrho,z,t)+\frac{\irm}{k_\varrho}\vec k_\varrho\cdot\vec \xi_\varrho(\vec k_\varrho,z,t)\right). \eeq This equation is valid above surface as well as underground. Expanding the seismic field into plane waves, the integral over the coordinate $z$ is straight-forward to calculate. Using seismic potentials as defined in Equations (\ref{eq:seispot}) and (\ref{eq:shearpot}) instead of the displacement field, one obtains \beq \begin{split} \delta\phi(\vec k_\varrho,z_0,t)&= 2\pi G\rho_0\int\limits_{-\infty}^0\drm z \, \e^{-k_\varrho|z-z_0|}\left({\rm sgn}(z-z_0)(\partial_z\phi_{\rm s}(z)+k_\varrho^2\psi_{\rm s}(z))-k_\varrho(\phi_{\rm s}(z)+\partial_z\psi_{\rm s}(z))\right)\\ &=-2\pi G\rho_0\Bigg[\e^{-k_\varrho|z_0|}\left({\rm sgn}(z_0)\phi_{\rm s}(0)+k_\varrho\psi_{\rm s}(0)\right)+2\phi_{\rm s}(z_0)\Bigg], \end{split} \eeq with $\phi_{\rm s}(z>0)=0$, and dependence of the potentials on $\vec k_\varrho$ and $t$ is omitted. This equation is particularly useful since seismologists often define their fields in terms of seismic potentials, and it is then possible to directly write down the perturbation of the gravity potential in transform domain without solving any integrals. The corresponding expressions in infinite space are obtained by calculating the Fourier transforms of Equations (\ref{eq:bulkNN}) and (\ref{eq:surfNN}) with respect to $x_0,\,y_0$ and $z_0$. Since there are no surface terms, the result is simply \beq \delta\phi(\vec k,t)=-4\pi\irm G\rho_0\frac{1}{k^2}\vec k\cdot\vec\xi(\vec k,t) \eeq Substituting the displacement field by its seismic potentials, we find immediately the transform-domain version of Equation (\ref{eq:gravP}). 
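The last result lends itself to direct numerical evaluation. The following Python sketch (potential functions and material parameters are placeholders chosen for illustration) implements the final transform-domain equation of this section:
\begin{verbatim}
import numpy as np

def delta_phi_k(k_rho, z0, phi_s, psi_s, rho0=2500.0, G=6.674e-11):
    """Transform-domain gravity potential for a half space with surface
    at z = 0 (cf. the final equation of this section).

    k_rho : horizontal wavenumber [rad/m]
    z0    : vertical test-mass coordinate [m] (z0 > 0 above surface)
    phi_s : callable phi_s(z), P-wave potential; must vanish for z > 0
    psi_s : callable psi_s(z), scalar S-wave potential
    """
    surf = np.exp(-k_rho * abs(z0)) * (np.sign(z0) * phi_s(0.0)
                                       + k_rho * psi_s(0.0))
    return -2.0 * np.pi * G * rho0 * (surf + 2.0 * phi_s(z0))

# Example: evanescent P-wave potential (Rayleigh-type), no shear potential
k_rho = 2.0 * np.pi / 100.0                 # 100 m horizontal wavelength
phi_s = lambda z: 1e-9 * np.exp(0.9 * k_rho * z) if z <= 0.0 else 0.0
psi_s = lambda z: 0.0
print(delta_phi_k(k_rho, 10.0, phi_s, psi_s))   # 10 m above surface
\end{verbatim}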
\subsection{Seismic gravity perturbations inside infinite, homogeneous media with spherical cavity} \label{sec:scatterNN} Test masses of underground detectors, as for example KAGRA \cite{AsEA2013}, will be located inside large chambers hosting corner and end stations of the interferometer. Calculation of gravity perturbations based on a spherical chamber model can be carried out explicitly and provides at least some understanding of the problem. This case was first investigated by Harms et al.~\cite{HaEA2009b}. In their work, contributions from normal displacement of cavity walls were taken into account, but scattering of incoming seismic waves from the cavity was neglected. We will outline the main results of their paper in Section \ref{sec:bodynoscatt}, and present for the first time a calculation of gravity perturbations from seismic waves scattered from a spherical cavity in Sections \ref{sec:scattercomp} and \ref{sec:scattershear}. \subsubsection{Gravity perturbations without scattering} \label{sec:bodynoscatt} The first step is to calculate an explicit solution of the integral in Equation (\ref{eq:dipoleacc}) for plane seismic waves. The plane-wave solution will be incomplete, since scattering of the incident wave from the cavity is neglected. However, as will be shown later, scattering can be neglected for realistic cavity dimensions. We will start with the gravity perturbation from a plane compressional wave as defined in Equation (\ref{eq:comprPW}). Inserting this expression into Equation (\ref{eq:dipoleacc}), which includes bulk as well as surface gravity perturbations, the integral over the infinite medium excluding a cavity of radius $a$ can be solved. The gravity acceleration at the center $\vec r=\vec 0$ of the cavity is given by \beq \delta\vec a^{\,\rm P}(\vec 0,t) = 8\pi G\rho_0\vec\xi^{\;\rm P}(\vec 0,t)\frac{j_1(k^{\rm P}a)}{(k^{\rm P}a)}, \label{eq:planePcav} \eeq where $j_n(\cdot)$ is the spherical Bessel function. In the case that the wavelength of the seismic wave is much larger than the cavity radius, the ratio can be approximated according to \beq \frac{j_1(ka)}{(ka)}\approx \frac{1}{3}\left[1-\frac{1}{10}(ka)^2\right], \label{eq:approxGG} \eeq which neglects terms of order $\mathcal{O}((ka)^4)$, and the result in the limit of vanishing cavity radius simplifies to \beq \delta\vec a^{\,\rm P}(\vec 0,t) = \frac{8\pi}{3} G\rho_0\vec\xi^{\,\rm P}(\vec 0,t) \label{eq:longewaveP} \eeq Since the gravity perturbation is expressed in terms of the seismic displacement at the center of the cavity, this displacement, strictly speaking, cannot be observed. Placing a seismometer at the cavity walls instead introduces an error of order $(ka)^2$ in the modelling of the gravity perturbation. The numerical factor $8\pi/3$ in this equation is smaller by $4\pi/3$ than the factor $4\pi$ that follows from Equation (\ref{eq:gravP}) for infinite media. This means that the bulk gravity perturbation is partially cancelled by cavity-surface contributions, which can be verified by directly evaluating the surface term: \beq \delta\vec a_{\rm surf}^{\,\rm P}(\vec 0,t) = -4\pi G\rho_0\vec\xi^{\;\rm P}(\vec 0,t) \cdot\left(j_0(k^{\rm P}a)-2\dfrac{j_1(k^{\rm P}a)}{(k^{\rm P}a)}\right) \label{eq:surfP} \eeq The long-wavelength limit $ka\rightarrow 0$ of the expression in brackets is 1/3, which is consistent with Equations (\ref{eq:longewaveP}) and (\ref{eq:gravP}).
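The cavity-radius dependence in Equation (\ref{eq:planePcav}) is easy to evaluate numerically. The following Python sketch (illustrative only; the 30\,Hz and 4\,km/s numbers merely anticipate the estimate discussed below) compares the exact factor $j_1(ka)/(ka)$ with the long-wavelength approximation of Equation (\ref{eq:approxGG}):
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

def cavity_factor(k, a):
    """j_1(ka)/(ka), the cavity-radius dependence in Eq. (planePcav)."""
    x = k * a
    return spherical_jn(1, x) / x

k = 2.0 * np.pi / 133.0     # 30 Hz P-wave at 4 km/s -> ~133 m wavelength
for a in [1.0, 10.0, 50.0]:
    exact = cavity_factor(k, a)
    approx = (1.0 - (k * a)**2 / 10.0) / 3.0    # Eq. (approxGG)
    print(a, exact, approx)
\end{verbatim}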
If the seismic field consisted only of pressure waves propagating in a homogeneous medium, then Equation (\ref{eq:longewaveP}) would mean that a seismometer placed at the test mass monitors all information required to estimate the corresponding gravity perturbations. A concise form of Equation (\ref{eq:planePcav}) can still be maintained if shear waves, which produce NN exclusively through surface displacement, are added to the total displacement $\vec\xi(\vec r,t)=\vec\xi^{\;\rm P}(\vec r,t)+\vec\xi^{\;\rm S}(\vec r,t)$. Inserting the plane-wave expression of Equation (\ref{eq:shearPW}) into Equation (\ref{eq:dipoleacc}), and adding the solution to the compressional-wave contribution, one obtains \beq \delta\vec a(\vec 0,t) = 4\pi G\rho_0\left(2\vec\xi^{\;\rm P}(\vec 0,t)\frac{j_1(k^{\rm P}a)}{(k^{\rm P}a)}-\vec\xi^{\;\rm S}(\vec 0,t)\frac{j_1(k^{\rm S}a)}{(k^{\rm S}a)}\right) \label{eq:planeNNcav} \eeq The shear-wave contribution has the same dependence on cavity radius as the compressional-wave contribution, even though the shear term is purely due to cavity-surface displacement. We can take a look at the gravity perturbation as a function of cavity radius. Figure \ref{fig:cavNNr} shows the perturbation from P-waves and S-waves using Equation (\ref{eq:planeNNcav}). It is assumed that the speed of P-waves is a factor 1.8 higher than that of S-waves. \epubtkImage{}{ \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp3-GravCav.pdf}} \caption[Gravity perturbation inside cavity]{The plot shows the gravity perturbation at the center of a cavity as a function of cavity radius in units of seismic wavelength.} \label{fig:cavNNr} \end{figure}} If the cavity has a radius of about $0.4\lambda$, then the gravity perturbation is reduced by about a factor of 2. Keeping in mind that the highest interesting frequency of Newtonian noise is about 30\,Hz, and that compressional waves have a speed of about 4\,km/s (i.~e.~a wavelength of about 130\,m at 30\,Hz), the minimal cavity radius should be about 50\,m to show a significant effect on gravity noise. Building such cavities would be a major and very expensive effort, and therefore, increasing cavity size does not seem to be a good option to mitigate underground Newtonian noise. \subsubsection{Incident compressional wave} \label{sec:scattercomp} The fact that the shear term in Equation (\ref{eq:planeNNcav}) has the opposite sign of the compressional term does not mean that gravity perturbations are reduced since noise in both components is typically uncorrelated. However, as explained in more detail in Section \ref{sec:bodyhalf}, compressional and shear waves are partially converted into each other at reflection from interfaces, which leads to correlated shear and compressional displacement. So one may wonder if a detailed calculation of the problem including scattering effects yields different numerical factors due to partial cancellation or coherent enhancement of gravity perturbations. This problem is solved in the following and outlined in greater detail since it is algebraically more complex. The calculation is based on an explicit solution of the seismic field for a compressional wave incident on a spherical obstacle \cite{YiTr1956}. The part of the seismic field that is produced by the spherical cavity has spherical symmetry.
Therefore, it can be written in the form: \beq \vec\xi_{\rm cav}(\vec r,t)=[\nabla\phi_{\rm s}(\vec r\,)+\nabla\times(\nabla\times(\psi_{\rm s}(\vec r\,)\,\vec r\,))]\e^{-\irm\omega t} \label{eq:seismicSphSym} \eeq Since the seismic field can be expressed in terms of scalar potentials, it is possible to expand the incident plane wave according to Equation (\ref{eq:pwscalar}). The outgoing scattered field is then obtained by fulfilling the boundary conditions at the cavity walls. For hollow cavities, the boundary condition states that the stress tensor produced by the seismic field projected onto the cavity normal, which yields a vector known as traction\index{traction}, must vanish \cite{AkRi2009}. In spherical coordinates, the potentials of the scattered waves can be expanded according to\index{scattering!compressional waves} \beq \begin{split} \phi_{\rm s}(r,\cos(\theta)) &= \xi_0\sum\limits_{l=0}^\infty A_l(a)h_l^{(2)}(k_{\rm P}r)P_l(\cos(\theta))\\ \psi_{\rm s}(r,\cos(\theta)) &= \xi_0\sum\limits_{l=0}^\infty B_l(a)h_l^{(2)}(k_{\rm S}r)P_l(\cos(\theta)) \end{split} \eeq where $k_{\rm P},\,k_{\rm S}$ are the wave numbers of compressional and shear waves respectively, $\theta$ is the angle between the direction of propagation of the scattered wave and the direction of the incident compressional wave, $\xi_0$ is the displacement amplitude of the incoming compressional wave, and the origin of the coordinate system lies at the center of the cavity. The spherical Hankel functions of the second kind $h_n^{(2)}(\cdot)$ are defined in terms of the spherical Bessel functions of the first and second kind as:\index{Hankel function!spherical, second kind} \beq h_n^{(2)}(x) \equiv j_n(x)-\irm y_n(x) \eeq The expansion or scattering coefficients $A_l,\,B_l$ need to be determined from boundary conditions at the cavity surface, a calculation presented in detail in \cite{YiTr1956}. Here we just mention that for small cavities, i.~e.~in the Rayleigh-scattering regime with $\{k_{\rm P},k_{\rm S}\}\cdot a\ll 1$, the dependence of the scattering coefficients on the cavity radius $a$ is of order $(k_{\rm P}a)^3$ or higher. In order to understand the gravity perturbations from shear and compressional components, we consider bulk and surface contributions separately. The bulk contribution of Equation (\ref{eq:bulkNN}) assumes the form \beq \delta\vec a_{\rm bulk}(\vec{r}_0,t) = -G\rho_0\e^{-\irm\omega t}\int\limits_{\mathcal V}\drm V \frac{-k_{\rm P}^2\phi_{\rm s}(r,\cos(\theta))}{|\vec r-\vec r_0|^2}\vec e_{rr_0}, \eeq where we have used the fact that the P-wave potential obeys the Helmholtz \index{Helmholtz equation} equation: \beq (\Delta+k_{\rm P}^2)\phi_{\rm s}(\vec r\,) = 0 \eeq According to Equation (\ref{eq:surfNN}), the surface contribution reads \beq \delta\vec a_{\rm surf}(\vec r_0,t) = G\rho_0\int\drm S \frac{\vec\xi_{\rm cav}(\vec{r},t)\cdot\vec e_r}{|\vec r-\vec r_0|^2}\vec e_{rr_0} \eeq The last expression can be further simplified by making use of the identity \beq \vec\xi_{\rm cav}(\vec r,t)\cdot\vec e_r=\left(\partial_r\phi_{\rm s}(\vec r\,)-\frac{1}{r}\partial_u\left[(1-u^2)\partial_u\psi_{\rm s}(\vec r\,)\right]\right)\e^{-\irm\omega t} \eeq with $u\equiv\cos(\theta)$.
If the gravity perturbations are to be calculated at the center $\vec r_0=\vec 0$ of the spherical cavity, then the integrals are easily evaluated by substituting powers of $\cos(\theta)$ according to the right-hand side of Table \ref{tab:legendre}, and making use of the orthogonality relation in Equation (\ref{eq:legorth}). We first outline the calculation for the bulk term. Identifying the $z$-axis with the direction of propagation of the incoming wave, one obtains: \beq \begin{split} \delta a_{\rm bulk}^z(\vec 0,t) &= 2\pi G\rho_0k_{\rm P}^2\e^{-\irm\omega t}\int\limits_a^\infty\drm r\int\limits_{-1}^1\drm u\, u\,\phi_{\rm s}(r,u)\\ & = 2\pi G\rho_0k_{\rm P}^2\e^{-\irm\omega t}\xi_0\sum\limits_{l=0}^\infty A_l(a)\int\limits_a^\infty\drm r\, h_l^{(2)}(k_{\rm P}r)\int\limits_{-1}^1\drm uP_1(u)P_l(u)\\ & = \frac{4\pi}{3} G\rho_0k_{\rm P}^2\xi_0A_1(a)\e^{-\irm\omega t}\int\limits_a^\infty\drm r\,h_1^{(2)}(k_{\rm P}r)\\ & = \frac{4\pi}{3a} G\rho_0\xi_0A_1(a)\e^{-\irm \omega t}(k_{\rm P}a)h_0^{(2)}(k_{\rm P}a) \end{split} \label{eq:scattPbulk} \eeq The perturbations along $x,\,y$ vanish. The surface contribution is also readily obtained via integration by parts: \beq \begin{split} \delta a_{\rm surf}^z(\vec 0,t) &= -2\pi G\rho_0\xi_0\e^{-\irm\omega t}\int\limits_{-1}^1\drm u\,u \left(\partial_r\phi_{\rm s}-\frac{1}{a}\partial_u\left[(1-u^2)\partial_u\psi_{\rm s}\right]\right)_{r=a}\\ &= -2\pi G\rho_0\xi_0\e^{-\irm\omega t}\int\limits_{-1}^1\drm u\,\left(P_1(u)(\partial_r\phi_{\rm s})+\frac{2}{a}P_1(u)\psi_{\rm s}\right)_{r=a}\\ & = -2\pi G\rho_0\xi_0\e^{-\irm\omega t}\sum\limits_{l=0}^\infty\int\limits_{-1}^1\drm u\,\bigg(A_l(a)(\partial_rh_l^{(2)}(k_{\rm P}r))_{r=a}+\frac{2B_l(a)h_l^{(2)}(k_{\rm S}a)}{a}\bigg)P_1(u) P_l(u)\\ & = \frac{4\pi}{3a} G\rho_0\xi_0\e^{-\irm\omega t}\bigg(A_1(a)(2h_1^{(2)}(k_{\rm P}a)-(k_{\rm P}a)h_0^{(2)}(k_{\rm P}a))-2B_1(a)h_1^{(2)}(k_{\rm S}a)\bigg) \end{split} \label{eq:scattPsurf} \eeq Again, perturbations along $x,\,y$ vanish. Adding the bulk and surface contributions, we finally obtain \beq \delta a^z(\vec 0,t) = \frac{8\pi}{3a} G\rho_0\xi_0\e^{-\irm\omega t}\bigg(A_1(a)h_1^{(2)}(k_{\rm P}a)-B_1(a)h_1^{(2)}(k_{\rm S}a)\bigg) \eeq This expression can be evaluated in the Rayleigh regime $\{k_{\rm P},\,k_{\rm S}\}\cdot a\ll 1$ using the following approximations of the scattering coefficients $A_1(a),B_1(a)$ given in \cite{YiTr1956}: \beq \begin{split} A_1(a) &= \frac{\irm}{3k_{\rm P}}(k_{\rm P}a)^3\left[1-\frac{1}{45}\left(11(k_{\rm P}a)^2+15(k_{\rm S}a)^2\right)\right]\\ B_1(a) &= \frac{\irm}{3k_{\rm S}}(k_{\rm S}a)^3\left[1-\frac{1}{18}\left(5(k_{\rm P}a)^2+6(k_{\rm S}a)^2\right)\right] \end{split} \eeq Inserting these coefficients, one finds \beq \delta a^z(\vec 0,t) = \frac{4\pi}{9} G\rho_0\xi_0\e^{-\irm\omega t}\left((k_{\rm S}a)^2-\frac{16(k_{\rm P}a)^2}{15}\right) \label{eq:scatterPNN} \eeq This solution needs to be added to the contribution in Equation (\ref{eq:planePcav}) from the unperturbed incident wave. The gravity perturbation associated with the scattered waves is in phase with the perturbation from the incoming compressional wave. The perturbation in Equation (\ref{eq:scatterPNN}) vanishes in the limit $a\rightarrow 0$, which may seem intuitive, but notice that the surface contribution of the incoming wave does not vanish in the same limit. Instead, the vanishing is a consequence of perfect cancellation of leading-order terms from scattered shear and compressional waves.
Therefore, this result shows explicitly that neglecting contributions from scattered waves has no influence on leading-order terms of the full gravity perturbation, at least if the incident wave is of compressional type. \subsubsection{Incident shear wave} \label{sec:scattershear} The calculation of the seismic field scattered from a spherical cavity with an incident shear wave can be found in \cite{KoJo1996}. Although it is in principle possible to solve this problem in terms of scalar seismic potentials, we choose to represent the fields directly in vector form using vector spherical harmonics. We assume that the polarization vector of the incident shear wave is $\vec e_x$, while the propagation direction is along $\vec e_z$. The explicit expression of the incident field is given in Equation (\ref{eq:expandPWtrans}). The scattered field can be expanded according to \beq \vec\xi_{\rm s}(\vec r,t) = \xi_0\e^{-\irm\omega t}\sum\limits_{l,m}\left(y_{lm}(r)\vec Y_l^m(\theta,\phi)+s_{lm}(r)\vec \Psi_l^m(\theta,\phi)+p_{lm}(r)\vec \Phi_l^m(\theta,\phi)\right) \eeq We will not further specify the radial functions. The expressions can be found in \cite{KoJo1996} (after converting their vector spherical harmonics into the ones used here). As for the incident P-wave, we will carry out the calculation of the gravity perturbations at the center of the cavity. Let us first calculate the bulk integral of Equation (\ref{eq:bulkNN}), using the divergence relations in Equation (\ref{eq:divvecharm}) and the integral Equation (\ref{eq:intvecharm}): \beq \begin{split} \delta\vec a_{\rm bulk}(\vec 0,t) &= -G\rho_0\int\limits_{\mathcal V}\drm V \frac{\nabla\cdot\vec\xi(\vec r,t)}{r^2}\vec e_r\\ &= -G\rho_0\xi_0\e^{-\irm\omega t}\int\limits_a^\infty\drm r \int\drm\Omega\sum\limits_{l,m}\left(\nabla\cdot(y_{lm}(r)\vec Y_l^m(\theta,\phi))+\nabla\cdot(s_{lm}(r)\vec \Psi_l^m(\theta,\phi))\right)\vec e_r\\ &= -G\rho_0\xi_0\e^{-\irm\omega t}\int\limits_a^\infty\drm r \int\drm\Omega\sum\limits_{l,m}\left(\partial_ry_{lm}(r)+\frac{2}{r}y_{lm}(r)-\frac{\sqrt{l(l+1)}}{r}s_{lm}(r)\right) \vec Y_l^m(\theta,\phi)\\ &= -G\rho_0\xi_0\e^{-\irm\omega t}\int\limits_a^\infty\drm r \sqrt{\frac{2\pi}{3}}\left((\mathcal Y_1^{-1}(r)-\mathcal Y_1^1(r))\vec e_x-\irm(\mathcal Y_1^{-1}(r)+\mathcal Y_1^1(r))\vec e_y+\sqrt{2}\mathcal Y_1^0(r)\vec e_z\right)\\ &= -G\rho_0\xi_0\e^{-\irm\omega t}\sqrt{\frac{2\pi}{3}}\vec e_x\int\limits_a^\infty\drm r (\mathcal Y_1^{-1}(r)-\mathcal Y_1^1(r)) \end{split} \eeq where the term in brackets in the third line was defined as $\mathcal Y_l^m(r)$. The last equation follows from the definition of the coefficients $y_{lm},\,s_{lm}$ in Equation (C.2) of \cite{KoJo1996}, but it should also be clear from symmetry considerations that the gravity perturbation can be non-zero only along the displacement direction of the incident wave. The term under the last integral takes the form \beq \mathcal Y_1^{-1}(r)-\mathcal Y_1^1(r)=-\frac{1}{r}\sqrt{6\pi}(k^{\rm P}r)a^{\rm SP}_1h^{(2)}_1(k^{\rm P}r) \eeq The scattering coefficient $a^{\rm SP}_1$ is the amplitude of the $l=1$ scattered P-wave relative to the $l=1$ amplitude of the incident S-wave. It can be calculated using equations from \cite{KoJo1996} (note that the explicit solutions given in the appendix are wrong).
Inserting this expression into the last equation, we finally obtain \beq \delta\vec a_{\rm bulk}(\vec 0,t) =2\pi G\rho_0\xi_0\e^{-\irm\omega t}a^{\rm SP}_1h_0^{(2)}(k^{\rm P}a)\vec e_x \eeq This result is very similar to Equation (\ref{eq:scattPbulk}), except that the scattering coefficients are defined slightly differently. We can now repeat the exercise for the surface term: \beq \begin{split} \delta\vec a_{\rm surf}(\vec 0,t) &= G\rho_0\int\drm S \frac{\vec n(\vec{r})\cdot\vec{\xi}(\vec{r},t)}{r^2}\vec e_r\\ &= -G\rho_0\int\drm\Omega (\vec e_r\cdot\vec{\xi}(r=a,\theta,\phi,t))\vec e_r\\ &= -G\rho_0\xi_0\e^{-\irm\omega t}\sqrt{\frac{2\pi}{3}}\left((y_{1,-1}(a)-y_{1,1}(a))\vec e_x-\irm(y_{1,-1}(a)+y_{1,1}(a))\vec e_y+\sqrt{2}y_{1,0}(a)\vec e_z\right)\\ &= -G\rho_0\xi_0\e^{-\irm\omega t}\sqrt{\frac{2\pi}{3}}(y_{1,-1}(a)-y_{1,1}(a))\vec e_x\\ &= -G\rho_0\xi_0\e^{-\irm\omega t}2\pi\left(\frac{a_1^{\rm SP}}{k^{\rm P}a}(-2h_1^{(2)}(k^{\rm P}a)+(k^{\rm P}a)h_0^{(2)}(k^{\rm P}a))-\frac{2b_1^{\rm SS}}{k^{\rm S}a}h_1^{(2)}(k^{\rm S}a)\right)\vec e_x \end{split} \eeq As in Section \ref{sec:bodynoscatt}, the surface term contains P-wave contributions quantified by the scattering coefficient $a_1^{\rm SP}$, and S-wave contributions quantified by $b_1^{\rm SS}$. Again, the result is formally very similar to Equation (\ref{eq:scattPsurf}) for an incident P-wave. Adding the surface and bulk term, we finally obtain \beq \delta\vec a(\vec 0,t) = 4\pi G\rho_0\xi_0\e^{-\irm\omega t}\left(\frac{a_1^{\rm SP}}{k^{\rm P}a}h_1^{(2)}(k^{\rm P}a)+\frac{b_1^{\rm SS}}{k^{\rm S}a}h_1^{(2)}(k^{\rm S}a)\right)\vec e_x \eeq In the Rayleigh-scattering regime, $k^{\rm P}a\ll 1$ and $k^{\rm S}a\ll 1$, the scattering coefficients can be expanded according to \beq \begin{split} a_1^{\rm SP}(a) &= -\frac{2\irm}{9}(k_{\rm P}a)^3\left[1-\frac{1}{18}\left(5(k_{\rm P}a)^2+6(k_{\rm S}a)^2\right)\right]\\ b_1^{\rm SS}(a) &= \frac{2\irm}{9}(k_{\rm S}a)^3\left[1-\frac{1}{40}\left(20(k_{\rm P}a)^2+87(k_{\rm S}a)^2\right)\right] \end{split} \eeq Note that the explicit expressions for the scattering coefficients in the appendix of \cite{KoJo1996} are wrong. However, since the equations in the main part of the paper are correct, it is straight-forward to recalculate the scattering coefficients. Finally, we can write down the gravity perturbation in the Rayleigh-scattering regime \beq \delta\vec a(\vec 0,t) = \frac{4\pi}{9} G\rho_0\xi_0\e^{-\irm\omega t}\left(\frac{2}{3}(k^{\rm P}a)^2-\frac{7}{10}(k^{\rm S}a)^2\right)\vec e_x \label{eq:scatterSNN} \eeq This completes our analysis of scattering effects on gravity perturbations. We found that waves scattered from a spherical cavity with incident P-waves and S-waves have negligible impact on gravity perturbations if the cavity radius is much smaller than the length of seismic waves. The gravity change according to Equations (\ref{eq:scatterPNN}) and (\ref{eq:scatterSNN}) is quadratic in the cavity radius $a$. In addition, the gravity perturbation from scattered waves is in phase with gravity perturbations of the incident wave (in the Rayleigh-scattering regime), which is beneficial for coherent noise cancellation, if necessary. \subsection{Gravity perturbations from seismic waves in a homogeneous half space} \label{sec:halfspace} In this section, the gravity perturbation produced by plane seismic waves in a homogeneous half space will be calculated. The three types of waves that will be considered are compressional, shear, and Rayleigh waves.
Reflection of seismic waves from the free surface will be taken into account. The purpose is to provide equations that can be used to estimate seismic Newtonian noise in GW detectors below and above surface. For underground detectors, corrections from the presence of a cavity will be neglected, but with the results of Section \ref{sec:scatterNN}, it is straight-forward to calculate the effect of a cavity also for the half-space problem. \subsubsection{Gravity perturbations from body waves} \label{sec:bodyhalf} As a first step, we will calculate the gravity perturbation from plane shear and compressional waves without taking reflection from the free surface into account. The compressional wave has the form in Equation (\ref{eq:comprPW}), and the perturbation of the gravity potential above surface integrated over the half space and including the surface contribution is found to be \beq \delta\phi^{\rm P}(\vec r,t) = -2\pi G\rho_0 \xi_0^{\rm P}\e^{\irm(\vec k_\varrho\cdot\vec\varrho-\omega t)}\e^{-k_\varrho h}\frac{1}{\irm k^{\rm P}}, \eeq with $h$ being the height of the point $\vec r$ above surface, $\vec\varrho$ being the projection of $\vec r$ onto the surface, and $\vec k_\varrho$ being the horizontal wave vector (omitting superscript 'P' to ease notation). The solution above surface can be understood as a pure surface term characterized by exponential suppression with increasing height. Also the phase term is solely a function of horizontal coordinates. These are typical characteristics of a surface gravity perturbation, and we will find similar results for gravity perturbations from Rayleigh waves. Below surface, with $h$ reinterpreted as (positive-valued) depth, the solution reads \beq \delta\phi^{\rm P}(\vec r,t) = -2\pi G\rho_0 \xi_0^{\rm P}\e^{-\irm\omega t}\frac{1}{\irm k^{\rm P}}\left(2\e^{\irm\vec k^{\rm P}\cdot\vec r}-\e^{-k_\varrho h}\e^{\irm\vec k_\varrho\cdot\vec\varrho}\right) \label{eq:underP} \eeq It consists of a surface term with exponential suppression as a function of depth, and of the infinite-space solution of Equation (\ref{eq:gravP}). If the point $\vec r$ is at the surface ($h=0$), then the total half-space gravity perturbation is simply half of the infinite-space perturbation. The calculation is substantially easier for shear waves. Shear waves, being transversal, can have two orthogonal polarizations. If the displacement is parallel to the free surface, then the polarization is called SH, otherwise it is called SV. An SH-polarized wave cannot produce gravity perturbations, since shear waves do not produce density perturbations inside media, and SH waves also do not displace the surface along its normal. Gravity perturbations can be produced by SV waves through surface displacement. The result valid for gravity perturbations underground and above surface is \beq \delta\phi^{\rm SV}(\vec r,t) = 2\pi G\rho_0 \xi_0^{\rm SV}\e^{\irm(\vec k_\varrho\cdot\vec\varrho-\omega t)}\e^{-k_\varrho h}\frac{1}{k^{\rm S}}, \eeq where $h$ is the distance to the surface. These solutions can now be combined to calculate the gravity perturbation from an SV or P wave reflected from the surface. An incident compressional wave is partially converted into an SV wave and vice versa. Only waves with the same horizontal wave vector $\vec k_\varrho$ couple at reflection from a flat surface \cite{AkRi2009}.
Therefore, the total gravity perturbation above surface in the case of an incident compressional wave can be written \beq \delta\phi^{\rm P}(\vec r,t) = -2\pi G\rho_0 \xi_0^{\rm P}\e^{\irm(\vec k_\varrho\cdot\vec\varrho-\omega t)}\e^{-k_\varrho h}\frac{1}{\irm k^{\rm P}}\left((1+{\rm PP}(k_\varrho))-\irm\frac{k^{\rm P}}{k^{\rm S}} {\rm PS}(k_\varrho)\right) \eeq The conversion of amplitudes is described by two reflection coefficients ${\rm PP}(k_\varrho),\,{\rm PS}(k_\varrho)$, as functions of the horizontal wave number. Their explicit forms can, for example, be found in \cite{AkRi2009}, which leads to the gravity perturbation \beq \begin{split} \delta\phi^{\rm P}(\vec r,t) &= -2\pi G\rho_0 \xi_0^{\rm P}\e^{\irm(\vec k_\varrho\cdot\vec\varrho-\omega t)}\e^{-k_\varrho h}\frac{1}{\irm k^{\rm P}}\delta(\nu,k_\varrho)\\ \delta(\nu,k_\varrho) &\equiv \frac{8k_\varrho^2k^{\rm P}_zk^{\rm S}_z-\irm 4k_\varrho k^{\rm P}_z((k^{\rm S})^2-2k_\varrho^2)}{((k^{\rm S})^2-2k_\varrho^2)^2+4k_\varrho^2k^{\rm P}_zk^{\rm S}_z} \end{split} \label{eq:incidentP} \eeq The gravity perturbation vanishes for horizontally and vertically propagating incident P-waves: the total P-wave contribution proportional to $1+{\rm PP}(k_\varrho)$ vanishes because of interference of the incident and reflected P-wave, while there is no conversion ${\rm PS}(k_\varrho)$ from P to S-waves for these two angles. The gravity amplitude $\delta(\cdot)$ depends on the Poisson's ratio, and the angle of incidence of the P-wave. Its absolute value is plotted in Figure \ref{fig:coeffNN} for three different angles of incidence $10^\circ,\,45^\circ,\,80^\circ$ of the P-wave with respect to the surface normal. Importantly, above surface, the gravity perturbation produced by compressional and shear waves assumes the form of a surface density perturbation with exponential suppression as a function of height above ground, determined by the horizontal wavenumber. The expression for an incident S-wave can be constructed analogously. \subsubsection{Gravity perturbations from Rayleigh waves} \label{sec:gravRayleigh} The results for body waves can be compared with gravity perturbations from fundamental Rayleigh waves. There are two options to calculate the gravity perturbation. One is based on a representation of the Rayleigh wave in terms of seismic potentials (explicit expressions can be found in \cite{Nov1999}), and using the last line in Equation (\ref{eq:gravHelm}). In the following, we choose to calculate gravity based on the displacement field since it is more intuitive, and not significantly more difficult. The Rayleigh displacement field can be written as \cite{HaNa1998} \beq \begin{split} \vec\xi(\vec r,t) &= \xi_k(\vec r,t) \vec e_k+\xi_z(\vec r,t) \vec e_z\\ \xi_k(\vec r,t) &= \irm\left(H_1\e^{h_1z}+H_2\e^{h_2z}\right)\e^{\irm (\vec k_\varrho\cdot\vec \varrho-\omega t)}\\ &= \irm H(z)\e^{\irm (\vec k_\varrho\cdot\vec \varrho-\omega t)}\\ \xi_z(\vec r,t) &= \left(V_1\e^{v_1z}+V_2\e^{v_2z}\right)\e^{\irm (\vec k_\varrho\cdot\vec \varrho-\omega t)}\\ &= V(z)\e^{\irm (\vec k_\varrho\cdot\vec \varrho-\omega t)} \end{split} \label{eq:Rayleigh} \eeq The parameters $H_i,\,V_i,\,h_i,\,v_i$ are real numbers, see Equation (\ref{eq:Rayfield}), and so there is a constant $90^\circ$ phase difference between horizontal and vertical displacement leading to elliptical particle motion. The surface displacement and the density change inside the medium caused by the Rayleigh wave lead to gravity perturbations.
The surface contribution valid below and above ground is given by \beq \begin{split} \delta\phi_{\rm surf} (\vec r_0,t)&= -G\rho_0V(0)\e^{\irm(\vec k_\varrho\cdot\vec\varrho_0-\omega t)}\int\drm S\frac{\e^{\irm k_\varrho \varrho\cos(\phi)}}{\sqrt{\varrho^2+h^2}}\\ &= -2\pi G\rho_0(V_1+V_2)\e^{-hk_\varrho}\e^{\irm(\vec k_\varrho\cdot\vec\varrho_0-\omega t)} \end{split} \eeq As before, the distance of the test mass to the surface is denoted by $h$. The density perturbations in the ground are calculated from the divergence of the Rayleigh-wave field: \beq \nabla\cdot\vec\xi(\vec r,t)=(-k_\varrho H(z)+V^\prime(z))\e^{\irm (\vec k_\varrho\cdot\vec \varrho-\omega t)}, \eeq and therefore the bulk contribution to the gravity perturbation above surface reads: \beq \begin{split} \delta\phi_{\rm bulk}(\vec r_0,t) &= G\rho_0\e^{\irm (\vec k_\varrho\cdot\vec \varrho_0-\omega t)}\int\drm V\frac{(-k_\varrho H(z)+V^\prime(z))\e^{\irm k_\varrho \varrho\cos(\phi)}}{\sqrt{\varrho^2+(h-z)^2}}\\ &= 2\pi G\rho_0\e^{\irm (\vec k_\varrho\cdot\vec \varrho_0-\omega t)}\frac{1}{k_\varrho}\int\limits_{-\infty}^0\drm z(-k_\varrho H(z)+V^\prime(z))\e^{-(h-z)k_\varrho} \end{split} \label{eq:Surface} \eeq Inserting the definitions of $H(z),\,V(z)$ from Equation (\ref{eq:Rayleigh}) into the last equation, we finally obtain \beq \delta\phi_{\rm bulk}(\vec r_0,t) = 2\pi G\rho_0\e^{-hk_\varrho}\e^{\irm (\vec k_\varrho\cdot\vec \varrho_0-\omega t)}\frac{1}{k_\varrho}\left[-\frac{k_\varrho H_1}{h_1+k_\varrho}-\frac{k_\varrho H_2}{h_2+k_\varrho}+\frac{v_1V_1}{v_1+k_\varrho}+\frac{v_2V_2}{v_2+k_\varrho}\right] \eeq Adding bulk and surface contributions, one obtains the full gravity perturbation from a Rayleigh wave: \beq \begin{split} \delta\phi_{\rm surf} (\vec r_0,t)&+\delta\phi_{\rm bulk}(\vec r_0,t)=\\ &\phantom{=}-2\pi G\rho_0 \e^{-hk_\varrho}\e^{\irm (\vec k_\varrho\cdot\vec \varrho_0-\omega t)}\left[\frac{H_1}{h_1+k_\varrho}+\frac{H_2}{h_2+k_\varrho}+\frac{V_1}{v_1+k_\varrho}+\frac{V_2}{v_2+k_\varrho}\right]\\ &=-2\pi G\rho_0 A\e^{-hk_\varrho}\e^{\irm (\vec k_\varrho\cdot\vec \varrho_0-\omega t)}(1-\zeta(k_\varrho)), \end{split} \label{eq:RayleighNN} \eeq where in the last line the parameters $H_i,\,V_i,\,h_i,\,v_i$ have been substituted by the expressions in Equation (\ref{eq:Rayfield}) for fundamental Rayleigh waves. The gravity perturbation underground contains an additional contribution from the compressional-wave content of the Rayleigh field: \beq \begin{split} \delta\phi_{\rm surf} (\vec r_0,t)&+\delta\phi_{\rm bulk}(\vec r_0,t)=2\pi G\rho_0 A\e^{\irm (\vec k_\varrho\cdot\vec \varrho_0-\omega t)}\left(-2\e^{-hq_z^{\rm P}}+(1+\zeta(k_\varrho))\e^{-hk_\varrho}\right), \end{split} \label{eq:RayleighNNUG} \eeq where $q_z^{\rm P}$ is the vertical wavenumber of evanescent compressional waves defined in Section \ref{sec:seismic}, and $h$ is the depth of the test mass. Contributions from a cavity wall need to be added, which is straight-forward at least for a very small cavity, by using results from Section \ref{sec:bodynoscatt} and amplitudes of shear and compressional waves dependent on depth as given in Equation (\ref{eq:Rayfield}). \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp3-FlatSurfaceGrav.pdf}} \caption[Plane-wave Newtonian-noise amplitudes]{Gravity amplitudes for a medium with free, flat surface as functions of the Poisson's ratio.
The solid line shows the gravity amplitude for Rayleigh waves, whereas the dashed lines show the gravity amplitudes for incident P-waves for three different angles of incidence: $10^\circ,\,45^\circ,\,80^\circ$ with increasing dash length.} \label{fig:coeffNN} \end{figure}} Comparing with Equation (\ref{eq:incidentP}), one can see that the analytical expressions of gravity perturbations above ground produced by incident compressional waves or by Rayleigh waves are very similar. Only the wavenumber-dependent amplitude term, either in the form of wave-reflection coefficients or Rayleigh-wave amplitude coefficients, is different. In order to plot the results, it is convenient to substitute the amplitude $A$ by the vertical surface displacement: \beq A=\frac{\xi_z(\vec 0,0)}{q_z^{\rm P}-k_\varrho\zeta(k_\varrho)} \eeq Inserting this expression into Equation (\ref{eq:RayleighNN}), and applying the gradient operator to both sides of the equation (which yields an expression for $\delta\vec a(\vec r_0,t)$), we obtain a unit-less factor that depends on the elastic properties of the half-space: \beq \gamma(\nu)=\frac{k_\varrho(1-\zeta(k_\varrho))}{q_z^{\rm P}-k_\varrho\zeta(k_\varrho)}. \eeq The wavenumbers of shear, compressional, and Rayleigh waves all have fixed proportions determined by the Poisson's ratio $\nu$ of the half-space medium (see Section \ref{sec:seismic}). Therefore, $\gamma$ itself is fully determined by $\nu$. A plot of $\gamma(\nu)$ is shown in Figure \ref{fig:coeffNN}. The maximum value of $\gamma(\nu)$ is equal to 1, which also corresponds to the case of gravity perturbations from pure surface displacement. This means that the density perturbations generated by the Rayleigh wave inside the medium partially cancel the surface contribution for $\nu<0.5$. \subsection{Numerical simulations} \label{sec:numsim}\index{numerical simulations} Numerical simulations have become an important tool in seismic Newtonian-noise modelling. There are two types of numerical simulations. The first will be called ``kinematic'' simulation.\index{numerical simulations!kinematic} It is based on a finite-element model where each element is displaced according to an explicit, analytical expression of the seismic field. Such expressions can easily be obtained for individual seismic surface or body waves. The main work done by the kinematic simulation is to integrate gravity perturbations from a complex superposition of waves over the entire finite-element model according to Equation (\ref{eq:dipoleacc}). Today, we have explicit expressions for all types of seismic waves produced by all types of seismic sources, in infinite and half-space media. While this means in principle that many interesting kinematic simulations can be carried out, some effects are very hard to deal with. The kinematic simulation fails whenever it is impossible to provide analytical expressions for the seismic field. This is generally the case when heterogeneities of the ground play a role. Also deviations from a flat surface may make it impossible to run accurate kinematic simulations. In this case, a ``dynamical'' simulation needs to be employed.\index{numerical simulations!dynamical} A dynamical simulation only requires accurate analytical models of the seismic sources. The displacement field evolves from these sources governed by equations of motion that connect the displacement of neighboring finite elements.
Even though the dynamical simulation can be considered more accurate since it does not rely on guessing solutions to the equations of motion, it is also true that not a single simulation of Newtonian noise has been carried out so far that could not have been done with a kinematic simulation. The reasons are that dynamical simulations are computationally very expensive, and constructing realistic models of the medium can be very challenging. It is clear, though, that dynamical simulations will play an important role in future studies when effects from heterogeneities on gravity signals are investigated in detail. Since kinematic simulations are easy to set up from scratch, we will focus on the discussion of dynamical simulations. Two tools have been used in the past for Newtonian-noise simulations. The first one is the commercial software Comsol. It interfaces with Matlab, which facilitates analyzing sometimes complex results. Simulation results for a seismic field produced by a point force at the origin are shown in Figure \ref{fig:comsolimp}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.8\textwidth]{Chp3-QS2Clay.pdf}} \caption[Finite-element simulation of gravity perturbations from a surface impact]{Comsol simulation of gravity perturbation from seismic fields. The plot to the left shows a snapshot of the displacement field produced by a step-function point source at the origin. The plots to the right show the corresponding gravity perturbations evaluated at the two points marked in red in the left plot. Courtesy of Beker et al.~\cite{BeEA2010c}.} \label{fig:comsolimp} \end{figure}} The results were presented in \cite{BeEA2010c}. A snapshot of the displacement field is plotted on the left. The P-wavefront is relatively weak and has already passed half the distance to the boundaries of the grid. Only a spherical octant of the entire finite-element grid is shown. The true surface in this simulation is the upper face of the octant. Consequently, a strong Rayleigh-wave front is produced by the point force. Slightly faster than the Rayleigh waves, an S-wavefront spreads underground. Its maximum is close to the red marker located underground. This seismic field represents a well-known problem in seismology, the so-called Lamb's problem, which has an explicit time-domain solution \cite{Ric1979}.\index{Lamb's problem} The plots on the right show the gravity perturbations evaluated at the two red markers. The P-wave, S-wave and Rf-wave arrival times are $t_{\rm P},\,t_{\rm S}$ and $t_{\rm R}$, respectively. The gravity perturbations are also divided into contributions from density perturbations inside the medium according to Equation (\ref{eq:bulkNN}) and surface contributions according to Equation (\ref{eq:surfNN}). A second simulation package used in the past is SPECFEM3D. It is free software that can be downloaded at \url{http://www.geodynamics.org/cig/software/specfem3d}. It is one of the standard simulation tools in seismology. It implements the spectral finite element method \cite{KoVi1998,KoTr1999}. Recently, Equation (\ref{eq:dipoleacc}) has been implemented for gravity calculations \cite{HaEA2015}. SPECFEM3D simulations typically run on computer clusters, but it is also possible to execute simple examples on a modern desktop. Simulations of wave propagation in heterogeneous ground and based on realistic source models such as shear dislocations are probably easier to carry out with SPECFEM3D than with commercial software.
However, it should be noted that it is by no means trivial to run any type of simulation with SPECFEM3D, and a large amount of work goes into defining a realistic model of the ground for SPECFEM3D simulations. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.33\textwidth]{Chp3-AccE_RP_Sym.png} \includegraphics[width=0.33\textwidth]{Chp3-AccN_RP_Sym.png} \includegraphics[width=0.33\textwidth]{Chp3-AccZ_RP_Sym.png}} \caption[Finite-element simulation of fault rupture]{SPECFEM3D simulation of a strike-slip fault rupture. The gravity is evaluated on a horizontal plane that includes the hypocenter.} \label{fig:specfem} \end{figure}} Nonetheless, this is the realm of dynamical simulations, and when simplifying geological models, one should always check whether a kinematic simulation can be used. An example of a gravity simulation using SPECFEM3D is shown in Figure \ref{fig:specfem}. The contour plots are snapshots of the gravity field after 5\,s of rupture on a strike-slip fault. The length of the vertical fault is 30\,km with hypocenter located 7.5\,km underground. The plots show the gravity perturbation on a horizontal plane that includes the hypocenter. Gravity perturbations in the vicinity of the fault are dominated by the lasting gravity change. The transient perturbation carried by seismic waves is invisible in these plots simply because of their small amplitudes compared to the lasting gravity change. An explicit time-domain expression of the gravity field does not exist, but it could be constructed with a kinematic simulation using the results of Section \ref{sec:sourcehalf}. In conclusion, while dynamical simulations are required to represent seismic fields in complex geologies and surface topographies, one should always favor kinematic simulations when possible. Kinematic simulations are faster by orders of magnitude, facilitating systematic studies of gravity perturbations. \subsection{Seismic Newtonian-noise estimates} \label{sec:estSeismicNN} The results of the analytical calculations can be used to estimate seismic Newtonian noise in GW detectors above surface and underground. The missing steps are to convert test-mass acceleration into gravity strain, and to understand the amplitudes of perturbation as random processes, which are described by spectral densities (see Section \ref{sec:noisefreq}). For a precise noise estimate, one needs to measure the spectrum of the seismic field as well as its two-point spatial correlation or anisotropy. These properties have to be known within a volume of the medium under or around the test masses, whose size depends on the lengths of seismic waves within the relevant frequency range. Practically, since all these quantities are then used in combination with a Newtonian-noise model, one can apply simplifications to the model, which means that some of these quantities do not have to be known very accurately or do not have to be known at all. For example, it is possible to obtain good Newtonian-noise estimates based on the seismic spectrum alone. All of the published Newtonian-noise estimates have been obtained in this way, and only a few conference presentations have shown results using additional information such as anisotropy measurements or two-point spatial correlations. In the following, the calculation of Newtonian-noise spectra is outlined in detail.
\subsubsection{Using seismic spectra}

We start with the simplest approach based on measured spectra of the ambient seismic field; all other quantities are represented by simple analytical models. At the LIGO Hanford site, it was found by array measurements that the main contribution to the vertical seismic spectrum at frequencies relevant to Newtonian noise comes from Rayleigh waves \cite{Dri2012}. Even if the wave composition of a seismic field at a surface site is unknown, it would still be reasonable to assume that Rayleigh waves dominate the vertical spectrum since they couple most strongly to surface or near-surface sources \cite{Moo1976,BoEA2006}. We emphasize that this only holds for the vertical displacement since horizontal displacement can contain strong contributions from Love waves, which are shear waves with purely horizontal displacement trapped in surface layers\index{seismic waves!Love}. This means that we can use Equation (\ref{eq:RayleighNN}) to obtain an estimate of seismic Newtonian noise. We first rewrite it to give the Cartesian components of gravity acceleration: \beq \begin{split} \delta\vec a(\vec r_0,t) &= 2\pi G\rho_0 \xi_z\e^{-hk_\varrho}\e^{\irm (\vec k_\varrho\cdot\vec \varrho_0-\omega t)}\gamma(\nu)\begin{pmatrix}\irm\cos(\phi)\\ \irm\sin(\phi)\\ -1\end{pmatrix} \end{split} \label{eq:RayleighS} \eeq where $\phi$ is the angle of propagation with respect to the $x$-axis. Note that all three components of acceleration are determined by vertical surface displacement. This is possible since vertical and horizontal displacements of Rayleigh waves are not independent. As we will argue in Section \ref{sec:mitigate}, expressing Newtonian noise in terms of vertical displacement is not only a convenient way to model Newtonian noise, but it is also recommended to design coherent cancellation schemes at the surface based on vertical sensor data, since horizontal sensor data can contain contributions from Love waves, which do not produce Newtonian noise. Hence, horizontal channels are expected to show lower coherence with Newtonian noise. Assuming that the Rayleigh-wave field is isotropic, one can simply average the last equation over all propagation directions. The noise spectral density of differential acceleration along a baseline of length $L$ parallel to the $x$-axis reads \beq S(\delta \vec a(L\vec e_x)-\delta \vec a(\vec 0);\omega) = \left(2\pi G\rho_0 \e^{-hk_\varrho}\gamma(\nu)\right)^2S(\xi_z;\omega)\begin{pmatrix}1-2J_0(k_\varrho L)+2J_1(k_\varrho L)/(k_\varrho L)\\ 1-2J_1(k_\varrho L)/(k_\varrho L)\\ 2-2J_0(k_\varrho L)\end{pmatrix} \label{eq:RayNN} \eeq The vector contains the three direction-averaged response functions of horizontal and vertical gravity perturbations. Rayleigh Newtonian noise in one direction is uncorrelated with Newtonian noise in the other two directions, independent of the value of $L$. Introducing $\lambda_{\rm R}\equiv 2\pi/k_\varrho$, the response functions, i.~e.~the square roots of the components of the vector in Equation (\ref{eq:RayNN}), divided by $L/\lambda_{\rm R}$, are shown in Figure \ref{fig:rayResp}. \epubtkImage{}{ \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp3-StrainRayleigh.pdf}} \caption[Strain response to Rayleigh gravity perturbations]{Strain response to Rayleigh gravity perturbations.
The solid curve shows the horizontal response for gravity perturbations along the line of separation, the dotted curve the horizontal response for perturbations perpendicular to the line of separation, and the dashed curve the response for perturbations in the vertical direction.} \label{fig:rayResp} \end{figure}} Gravity perturbations at the two locations $x=\{0,L\},\,y=z=0$ are uncorrelated for sufficiently large distances, and therefore the strain response decreases with increasing $L$. In other words, increasing the length of large-scale GW detectors would decrease Newtonian noise. In contrast, Rayleigh Newtonian noise is independent of $L$ at short separations. This corresponds to the regime relevant to low-frequency GW detectors \cite{HaEA2013}. Equation (\ref{eq:RayNN}) is the simplest possible seismic surface Newtonian-noise estimate. Spatial correlation of the isotropic seismic field is fully determined by the fact that all seismic waves are assumed to be Rayleigh waves. In practice, one only needs to measure the spectral density of vertical surface displacement and to have an estimate of the Poisson's ratio available (assuming a value of $\nu=0.27$ should be a good approximation in general \cite{ZaAm1995}). In GW detectors, the relevant noise component is along the $x$-axis. Taking the square root of the expression in Equation (\ref{eq:RayNN}), and using a measured spectrum of vertical seismic motion, we obtain the Newtonian-noise estimate shown in Figure \ref{fig:VirgoNN} \footnotetext{Seismic data stem from channel SEBDCE06 between June 4, 2011, UTC 00:00 and September 3, 2011 UTC 00:00.}. Virgo's arm length is $L=3000\,$m, and the test masses are suspended at a height of about $h=1\,$m (although it should be mentioned that the ground is partially hollow directly under the Virgo test masses). In order to take equal uncorrelated noise contributions from both arms into account, the single-arm strain noise needs to be multiplied by $\sqrt{2}$. The seismic spectrum falls approximately with $1/f$ in units of $\rm m/s/\sqrt{Hz}$ within the displayed frequency range, which according to Equation (\ref{eq:RayNN}) means that the Newtonian-noise spectrum falls with $1/f^4$ (two additional divisions by $f$ from converting differential acceleration noise into differential displacement noise, and another division by $f$ from converting the seismic spectrum into a displacement spectrum). Note that the knee frequency of the response curve in Figure \ref{fig:rayResp} lies well below the frequency range of the spectral plots, and therefore does not influence the frequency dependence of the Newtonian-noise spectrum. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.49\textwidth]{Chp3-Rayleigh_Seismic.pdf} \includegraphics[width=0.49\textwidth]{Chp3-VirgoNNRay.pdf}} \caption[Rayleigh Newtonian noise at Virgo site]{Histograms of seismic spectra at the central station of the Virgo detector and modelled Rayleigh-wave Newtonian noise \addtocounter{footnote}{-1}\footnotemark. A sensitivity model of the Advanced Virgo detector is plotted for comparison.} \label{fig:VirgoNN} \end{figure}} Since seismic noise is non-stationary in general, and therefore can show relatively large variations in spectra, it is advisable to plot the seismic spectra as histograms instead of averaging over spectra. The plots can then be used to say, for example, that Newtonian noise stays below some level 90\% of the time (the corresponding level curve being called the 90th percentile).
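A minimal Python sketch of such a percentile analysis is given here; the sampling rate and number of data stretches are placeholder assumptions, and random data stand in for the seismometer records:
\begin{verbatim}
import numpy as np
from scipy.signal import periodogram

fs = 256.0               # sampling rate in Hz (assumed)
nstretch = 64            # number of 128-s data stretches (7 days would give ~4700)
spectra = []
for k in range(nstretch):
    x = 1e-9 * np.random.randn(int(128 * fs))  # placeholder for one 128-s stretch
    f, Pxx = periodogram(x, fs=fs)             # power spectral density
    spectra.append(np.sqrt(Pxx))               # store amplitude spectral density
spectra = np.array(spectra)

# Percentile curves per frequency bin; the 90th percentile is the level that
# the seismic spectrum (and hence the Newtonian-noise estimate derived from
# it) stays below 90% of the time.
p10, p50, p90 = np.percentile(spectra, [10, 50, 90], axis=0)
\end{verbatim}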
In the example shown, a seismic spectrum was calculated every 128\,s for 7 days. Red colors mean that noise spectra often assume these values; blue colors mean that seismic spectra are rarely observed with these values. No colors mean that a seismic spectrum has never assumed these values within the 7 days of observation. Interesting information can be obtained in this way. For example, it can be seen that between about 11\,Hz and 12\,Hz a persistent seismic disturbance increases the spectral variation, which causes the distribution to be wider and therefore the maximum value of the histogram to be smaller. \addtocounter{footnote}{-1} \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.50\textwidth]{Chp3-LLO_LVEA_Histo_2009_08_2010_07.pdf} \includegraphics[width=0.50\textwidth]{Chp3-LIGONNRay.pdf}} \caption[Seismic noise at LIGO Livingston]{Histograms of seismic spectra at the central station of the LIGO Livingston detector and modelled Rayleigh-wave Newtonian noise \footnotemark. In the left plot, the dashed black curves are the global seismic high-noise and low-noise models. The white curves are the 10th, 50th, and 90th percentiles of the histogram. In the right plot, a sensitivity model of the Advanced LIGO detector is plotted for comparison.} \label{fig:seismicLLO} \end{figure}} Generally, seismic spectra at the Virgo and LIGO sites show a higher degree of stationarity above 10\,Hz than at lower frequencies. For example, between 1\,Hz and 10\,Hz, seismic spectra show a pronounced diurnal variation due to anthropogenic activity, and between 0.05\,Hz and 1\,Hz they follow weather conditions at the oceans. These features are shown in Figure \ref{fig:seismicLLO}. The white curves mark the 10th, 50th and 90th percentiles of the histogram. The histogram is based on a full year of 128\,s spectra. Strong disturbances during the summer months from logging operations near the site increase the width of the histogram in the anthropogenic band. In general, a 90th percentile curve exceeding the global high-noise model is almost certainly a sign of anthropogenic disturbances. At the lowest frequencies, spectra far above the 90th percentile are frequently observed due to earthquakes. Additional examples of Newtonian-noise spectra evaluated in this way can be found in \cite{DHA2012,BeEA2012}. \footnotetext{Seismic data stem from channel L0:PEM-LVEA$\_$SEISZ between August 1, 2009, UTC 00:00 and August 1, 2010 UTC 00:00.}\addtocounter{footnote}{-1}

\subsubsection{Corrections from anisotropy measurements}

Anisotropy of the seismic field can be an important factor in Newtonian-noise modelling. According to Equation (\ref{eq:RayleighNN}), Rayleigh waves that propagate perpendicularly to the relevant displacement direction of a test mass (which is along the arm of a GW detector) do not produce Newtonian noise. The chances that the Rayleigh-wave field shows significant anisotropy at Newtonian-noise frequencies are high, since the dominant contribution to the field comes from nearby sources, probably part of the detector infrastructure. At one of the end stations of the LIGO Hanford detector, an array of 44 vertical seismometers was used to show that the main source of seismic waves around 10\,Hz indeed lies in the direction of an exhaust fan \cite{Har2013}. Coincidentally, this direction is almost perpendicular to the direction of the detector arm.
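How an azimuth measurement translates into a suppression factor can be sketched directly from Equation (\ref{eq:RayleighS}): the $x$-acceleration amplitude of a single Rayleigh wave scales with $|\cos(\phi)|$, whereas the direction average over an isotropic field gives $\sqrt{1/2}$. The following Python lines illustrate this comparison; the azimuth values are placeholders, not the measured ones, and this single-plane-wave estimate can differ from the measured suppression in Figure \ref{fig:anisoRayleigh} below, presumably because the real field contains a spread of azimuths:
\begin{verbatim}
import numpy as np

# Mean propagation azimuths (radians, measured from the arm direction)
# at a few frequencies; placeholder values for illustration only.
phi = np.deg2rad([95.0, 100.0, 105.0])

# Single plane wave: |delta a_x| ~ |cos(phi)|  (Eq. RayleighS).
# Isotropic field:   direction-averaged amplitude ~ sqrt(1/2).
suppression = np.sqrt(0.5) / np.abs(np.cos(phi))
print(suppression)  # > 1 means less Newtonian noise than for an isotropic field
\end{verbatim}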
\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.49\textwidth]{Chp3-Azimuth_10Hz.pdf} \includegraphics[width=0.49\textwidth]{Chp3-Rayleigh_Anisotropy_Ratio.pdf}} \caption[Rayleigh-wave anisotropy at LIGO Hanford]{Anisotropy of the Rayleigh-wave field at 10\,Hz and Newtonian-noise suppression of a single test mass.} \label{fig:anisoRayleigh} \end{figure}} Figure \ref{fig:anisoRayleigh} shows the anisotropy measurement at 10\,Hz and the Newtonian-noise suppression of a single test mass obtained from anisotropy measurements over a range of frequencies. The seismic array was used to triangulate the source of dominant seismic waves over a period of a few hours. As shown in the left plot of Figure \ref{fig:anisoRayleigh}, the waves at 10\,Hz almost always come from a preferred direction approximately equal to 100$^\circ$. The same is true at almost all frequencies between 5\,Hz and 30\,Hz. The Newtonian-noise suppression was then calculated from Equation (\ref{eq:RayleighS}) by inserting the mean azimuth at each frequency as the direction of propagation $\phi$ of the Rayleigh waves. An azimuth of 90$^\circ$ corresponds to a direction perpendicular to the arm, which means that one expects Newtonian noise to be lower compared to the isotropic case. The suppression factor is plotted on the right of Figure \ref{fig:anisoRayleigh} with a typical value of about 2. If the situation is the same at the other end station at LIGO Hanford (which is a reasonable assumption, also for the Livingston site), and conservatively assuming that the field is isotropic in the central station, then Newtonian noise would be reduced overall by about a factor of $\sqrt{2}$.

\subsubsection{Corrections from two-point spatial correlation measurements}
\label{sec:spatialcorr}

A calculation of Newtonian noise based on seismic two-point spatial correlation was first presented in \cite{BeEA2010}. In this section, we will outline the main part of the calculation, focusing on gravity perturbations of a single test mass. The goal is to provide the analytical framework to make optimal use of array data in Newtonian-noise estimation. We will also restrict the analysis to surface arrays and Rayleigh waves. It is straightforward though to extend the analysis to 3D arrays, and as explained below, it is also in principle possible to integrate contributions from other wave types.
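In practice, the two-point spatial correlation functions used below are, at each frequency, cross-spectral densities of vertical displacement between pairs of seismometers with relative position $\vec\varrho$. A minimal Python sketch of such an estimate for a single sensor pair is given here (time series and parameters are placeholders):
\begin{verbatim}
import numpy as np
from scipy.signal import csd

fs = 256.0
# Placeholder vertical-displacement records of two array seismometers.
x1 = np.random.randn(int(600 * fs))
x2 = np.random.randn(int(600 * fs))

# Cross-spectral density; at each frequency this estimates the two-point
# spatial correlation for the relative position of the two sensors.
# Repeating over all sensor pairs and Fourier transforming over the
# relative positions yields the wavenumber spectrum introduced below.
f, C12 = csd(x1, x2, fs=fs, nperseg=int(16 * fs))
\end{verbatim}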
Assuming that surface displacement is dominated by Rayleigh waves, the most general form of the single test-mass surface gravity perturbation is given by \beq S(\delta a_x;\vec\varrho,\omega) = \left(2\pi G\rho_0\gamma(\nu)\right)^2\int\frac{\drm^2k}{(2\pi)^2}\frac{\drm^2k'}{(2\pi)^2} S(\xi_z;\vec k_\varrho,\vec k_\varrho',\omega)\,\frac{k_x}{k_\varrho}\frac{k_x'}{k_\varrho'}\e^{-hk_\varrho}\e^{-hk_\varrho'}\e^{\irm\vec\varrho\cdot(\vec k_\varrho-\vec k_\varrho')} \eeq If the Rayleigh field is homogeneous, then the last equation can be simplified to \beq S(\delta a_x;\omega) = \left(2\pi G\rho_0\gamma(\nu)\right)^2\int\frac{\drm^2k}{(2\pi)^2}S(\xi_z;\vec k_\varrho,\omega)\frac{k_x^2}{k_\varrho^2}\e^{-2hk_\varrho} \label{eq:specHomNN} \eeq If, in addition, the field is isotropic, one obtains \beq S(\delta a_x;\omega) = \left(2\pi G\rho_0\gamma(\nu)\right)^2\frac{1}{4\pi}\int\limits_0^\infty\drm k_\varrho\, k_\varrho S(\xi_z;k_\varrho,\omega)\e^{-2hk_\varrho} \label{eq:specIsoNN} \eeq Equation (\ref{eq:specHomNN}) is probably the most useful variant since one should always expect that isotropy does not hold, and at the same time, it is practically infeasible to characterize an inhomogeneous seismic field (corrections from inhomogeneities are probably minor as well). Nevertheless, the wavenumber spectra in all three equations can be measured in principle with seismic arrays as Fourier transforms of two-point spatial correlation measurements. In general, the correlation function and wavenumber spectrum are related via \beq S(\xi_z;\vec k_\varrho,\vec k_\varrho',\omega)=\int\drm^2\varrho\,\drm^2\varrho'\,C(\xi_z;\vec \varrho,\vec \varrho\,',\omega)\e^{-\irm(\vec\varrho\cdot\vec k_\varrho+\vec\varrho\,'\cdot\vec k_\varrho')} \eeq For a homogeneous field, we have \beq S(\xi_z;\vec k_\varrho,\omega)=\int\drm^2\varrho\,C(\xi_z;\vec \varrho,\omega) \e^{-\irm\vec\varrho\cdot\vec k_\varrho} \label{eq:spec2D} \eeq We can first insert this expression into Equation (\ref{eq:specHomNN}), and integrate over wavenumbers to obtain the Newtonian-noise spectrum in terms of the two-point spatial correlation $C(\xi_z;\vec\varrho,\omega)$ of the seismic field: \beq \begin{split} S(\delta a_x;\omega) = \left(2\pi G\rho_0\gamma(\nu)\right)^2\frac{1}{2\pi}\int\drm^2 \varrho\, \Bigg[&\frac{x^2}{\varrho^2}\frac{2h}{\left((2h)^2+\varrho^2\right)^{3/2}}\\ &+\frac{y^2-x^2}{\varrho^4} \left(1-\frac{2h}{\left((2h)^2+\varrho^2\right)^{1/2}}\right)\Bigg] C(\xi_z;\vec\varrho,\omega) \end{split} \label{eq:homNNC} \eeq For isotropic and homogeneous fields, the wavenumber spectrum can be calculated as \beq S(\xi_z;k_\varrho,\omega)=2\pi\int\limits_0^\infty\drm\varrho\,\varrho J_0(k_\varrho\varrho)C(\xi_z;\varrho,\omega) \eeq Together with Equation (\ref{eq:specIsoNN}), we can express the gravity spectrum in terms of the isotropic two-point correlation: \beq S(\delta a_x;\omega) = \left(2\pi G\rho_0\gamma(\nu)\right)^2\frac{1}{2}\int\limits_0^\infty\drm \varrho\, \frac{2h\varrho}{\left((2h)^2+\varrho^2\right)^{3/2}} C(\xi_z;\varrho,\omega) \label{eq:isoNNC} \eeq This result can also be obtained directly from Equation (\ref{eq:homNNC}) by integrating over the azimuth. The fraction inside the integral can be understood as the kernel of an integral transformation of the spatial correlation function with the two variables $\varrho,\,h$. \index{kernel} For vanishing test-mass height $h$, the kernel is to be substituted by the delta distribution $\delta(\varrho)$.
This means that for negligible test-mass height, the gravity perturbation from a homogeneous and isotropic field is determined by the seismic spectral density $S(\xi_z;\omega)=C(\xi_z;0,\omega)$. \epubtkImage{}{% \begin{figure}[htb] \centerline{\includegraphics[width=0.5\textwidth]{Chp3-IsotropicKernelNN.pdf}} \caption[Newtonian-noise kernel for isotropic, homogeneous Rayleigh fields]{Newtonian-noise kernel for isotropic, homogeneous Rayleigh fields. The dashed line is the kernel in wavenumber domain, Equation (\ref{eq:specIsoNN}), the solid line is the kernel in coordinate space, Equation (\ref{eq:isoNNC}).} \label{fig:kernelRay} \end{figure}} Equation (\ref{eq:isoNNC}) also states that for a homogeneous, isotropic field, the values of $\varrho$ that are most relevant to the Newtonian-noise estimate lie around $\varrho=\sqrt{2}h$, where the kernel assumes its maximum. The kernel is plotted as the solid curve in Figure \ref{fig:kernelRay}. For example, LIGO test masses are suspended 1.5\,m above ground. Spatial correlations over distances much longer than 5\,m are irrelevant for estimating Newtonian noise at the LIGO sites from homogeneous and isotropic fields. Consequently, a seismic experiment designed to measure spatial correlations to improve Newtonian-noise estimates does not need to cover distances longer than this. Of course, in reality, fields are neither homogeneous nor isotropic, and seismic arrays should be designed conservatively so that all important features can be observed. The kernel of the integral transform in Equation (\ref{eq:specIsoNN}) is a function of the variables $k_\varrho,\,h$ with its maximum at $k_\varrho=1/(2h)$. It is displayed in Figure \ref{fig:kernelRay} as the dashed curve. The behavior of the two kernels with changing $h$ is intuitive: the higher the test mass above ground, the larger the scales of the seismic field that dominate the gravity perturbation, which means larger values of $\varrho$ and smaller values of $k_\varrho$. Kernels in higher dimensions can also be calculated for homogeneous seismic fields, and for the general case. The calculation is straightforward and will not be presented here. The isotropic, homogeneous case is further illustrated for the Rayleigh field. A homogeneous, isotropic Rayleigh-wave field has a two-point spatial correlation given by \cite{DHA2012} \beq C(\xi_z;\varrho,\omega)=S(\xi_z;\omega)J_0(k^{\rm R}_\varrho\varrho), \label{eq:corrRayiso} \eeq which gives rise to a wavenumber spectrum equal to \beq S(\xi_z;k_\varrho,\omega)=2\pi S(\xi_z;\omega)\frac{\delta(k^{\rm R}_\varrho-k_\varrho)}{k^{\rm R}_\varrho}, \label{eq:specRayIso} \eeq where we used the closure relation in Equation (\ref{eq:closureJ}). According to Equation (\ref{eq:specIsoNN}) or (\ref{eq:isoNNC}), the corresponding Newtonian noise of a single test mass is \beq S^{\rm R}(\delta a_x;\omega) = \left(2\pi G\rho_0 \e^{-hk_\varrho^{\rm R}}\gamma(\nu)\right)^2\frac{1}{2}S(\xi_z;\omega) \label{eq:specNNiso} \eeq This result is consistent with our previous solution (the limit $L\rightarrow\infty$ of Equation (\ref{eq:RayNN}) is twice as high). As mentioned already, in the form given here with the numerical factor $\gamma(\nu)$, the results are strictly only valid for Rayleigh waves. Contributions from other types of waves to $C(\xi_z;\vec \varrho,\omega)$ could potentially be integrated separately with different numerical factors, but then one needs some prior information helping to distinguish wave types in these spectra (e.~g.~based on estimated seismic speeds).
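The consistency between Equations (\ref{eq:isoNNC}) and (\ref{eq:specNNiso}) can also be checked numerically. The following Python sketch integrates the coordinate-space kernel against the $J_0$ correlation of Equation (\ref{eq:corrRayiso}) for a unit seismic spectrum and compares the result with $\e^{-2hk_\varrho^{\rm R}}/2$; the wavelength, grid, and truncation parameters are ad hoc choices:
\begin{verbatim}
import numpy as np
from scipy.special import j0
from scipy.integrate import trapezoid

h = 1.5                   # test-mass height above ground in m (LIGO-like)
k = 2 * np.pi / 25.0      # Rayleigh wavenumber for a 25-m wavelength (ad hoc)

# Eq. (isoNNC) with C(xi_z; rho) = J0(k*rho), i.e., a unit seismic spectrum;
# the integral is truncated after many wavelengths.
rho = np.linspace(0.0, 200 * 2 * np.pi / k, 1_000_000)
kernel = 2 * h * rho / ((2 * h)**2 + rho**2)**1.5
numeric = 0.5 * trapezoid(kernel * j0(k * rho), rho)

analytic = 0.5 * np.exp(-2 * h * k)  # from Eq. (specNNiso)
print(numeric, analytic)             # should agree up to truncation error
\end{verbatim}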
\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=1\textwidth]{Chp3-Map_43.png}} \caption[Wavenumber spectrum at LIGO Hanford]{Wavenumber spectra measured at the LIGO Hanford site using an array of 44 seismometers \footnotemark. The white circles with decreasing radius mark wave speeds of 100\,m/s, 250\,m/s, 500\,m/s and 1000\,m/s.} \label{fig:specKLHO} \end{figure}} In Figure \ref{fig:specKLHO}, wavenumber spectra measured at the LIGO Hanford site are shown for three different frequencies. The maxima in all three spectra correspond to Rayleigh waves (since the corresponding speeds are known to be Rayleigh-wave speeds). However, the 50\,Hz spectrum contains a second mode with significant amplitude that lies much closer to the origin, which is therefore much faster than a Rayleigh wave. It can only be associated with a body wave. One can now split the integration of this map into two parts, one for the Rayleigh wave and one for the body wave, using a different numerical factor in each case. This can work, but with the information that can be extracted from this spectrum alone, it is not possible to say what type of body wave it is. So one can either study particle motion with three-axis sensors to characterize the body wave further (which was not possible in this case since the array consisted of vertical sensors only), or, instead of $\gamma(\nu)<1$, one can use the conservative numerical factor of 1 to calculate at least the corresponding gravity perturbation from pure surface displacement. The latter method would neglect sub-surface density perturbations produced by a P-wave. It should be noted that one can obtain a model-independent estimate of Newtonian noise with a 3D array. The numerical factor $\gamma(\nu)$ came from a calculation of sub-surface gravity perturbations based on surface displacement. With information about the entire 3D displacement field, this step is not necessary and the noise estimate becomes model-independent and does not require any other prior knowledge. An example of calculating Newtonian noise based on a 3D spatial correlation function is given in Section \ref{sec:quasitemp}. \footnotetext{The data of the LIGO Hanford array are stored in LIGO channels H2:PEM-EY$\_$AUX$\_$NNARRAY$\_$ACC$\_\{$1--44$\}\_$OUT$\_$DQ. The plot uses 16\,s starting from April 28, 2012 UTC 09:00.}

\subsubsection{Low-frequency Newtonian-noise estimates}
\label{sec:lowfNNRay}

There are qualitative differences between low- and high-frequency Newtonian noise that are worth being discussed more explicitly. First of all, we need to provide a definition of what should be considered low frequencies. There are two length scales relevant to Newtonian-noise estimates. The first is the size $L$ of the GW detector. The second is the depth $h$ of the detector. In this section, we will consider the scenario where both length scales are much shorter than the reduced wavelength of seismic waves: $h,L\ll\lambda/(2\pi)$. This should typically be the case below about 1\,Hz. If the detector is much smaller than the reduced wavelength, then gravity perturbations along the same directions are significantly correlated over the extent of the detector.
We can see this by expanding Equation (\ref{eq:RayNN}) rewritten into units of strain acceleration for small $L$: \beq S((\delta a_L-\delta a_0)/L;\omega) = \left(2\pi G\rho_0 \e^{-hk_\varrho}\gamma(\nu)\right)^2S(\xi_z;\omega)\frac{k_\varrho^2}{8}\begin{pmatrix}3\\ 1\\ 4\end{pmatrix} \label{eq:RayNNlow} \eeq The next order is proportional to $L^2$, and we recall that the test masses are separated by $L$ along the $x$-coordinate. The first important observation is that the strain noise is independent of the detector size. The common-mode rejection of the differential acceleration, which is proportional to $L^2$ with respect to noise power, exactly compensates the $1/L^2$ from the conversion into strain. This also means that Newtonian noise is significantly weaker at low frequencies, consistent with Figure \ref{fig:rayResp}, which shows that the gravity gradient response saturates below some test-mass distance. Next, we discuss the role of detector depths. It should be emphasized that Equation (\ref{eq:RayNNlow}) is valid only above the surface. As we have seen in Equation (\ref{eq:underP}), density changes below the surface give rise to additional contributions if the test mass is located underground. We have not explicitly calculated these contributions for Rayleigh waves in this article. The point that we want to make though is that if the length of the Rayleigh wave is much longer than the depth of the detector, then the surface model in Equation (\ref{eq:RayNNlow}) is sufficiently accurate. It can be used with the parameter $h$ set to 0. This is not only true for Newtonian noise from Rayleigh waves, but for all forms of seismic Newtonian noise. It should be noted though that these conclusions are not generally true in the context of coherent Newtonian-noise cancellation. If a noise reduction by a factor of 1000 is required (as predicted for low-frequency GW detectors, see \cite{HaEA2013}), then much more detail has to be included in the noise models to be able to predict cancellation performance. Here, not only the depth of the detector could matter, but also the finite thickness of the crust, the curvature of Earth, etc. Estimates of seismic Newtonian noise at low frequencies were presented with a focus on atom-interferometric GW detectors in \cite{VeVi2013}. The interesting aspect here is that atom interferometers in general have a more complicated response to gravity perturbations. A list of gravity couplings for atom interferometers can be found in \cite{DiEA2008}. So while atom-interferometric GW detectors would also be sensitive to gravity strain only, the response function may be more complicated compared to laser interferometers depending on the detector design.

\subsection{Summary and open problems}
\label{sec:ambientsummary}

In this section on Newtonian noise from ambient seismic fields, we reviewed basic analytical equations to calculate density perturbations in materials due to vibrations, to calculate the associated gravity perturbations, and to estimate Newtonian noise based on observations of the seismic field. Equations were given for gravity perturbations of seismic body waves in infinite and half spaces, and for Rayleigh waves propagating on a free surface. Newtonian noise above a half space can be fully characterized by surface displacement, even for body waves.
It was found that analytical expressions for gravity perturbations from body and Rayleigh waves have the same form; only the numerical, material-dependent conversion factor between seismic and gravity amplitudes takes different values, which in the body-wave case also depend on the propagation direction with respect to the surface normal. In practice, this means that prior information such as seismic speeds of body waves is required to calculate gravity perturbations based on surface data alone. Another important difference between body and Rayleigh gravity perturbations is that the conversion factor has a material- and propagation-direction-dependent complex phase in the body-wave case. This has consequences for the design of a seismic surface array that one would use to coherently cancel the gravity perturbations, which will be discussed further in Section \ref{sec:mitigate}. Scattering of body waves from spherical cavities was calculated, with the conclusion that gravity perturbations of a test mass inside a cavity are insignificantly affected by seismic scattering from the cavity. Here, ``insignificantly'' is meant with respect to Newtonian-noise estimates. In coherent noise cancellation schemes, scattering could be significant if the subtraction goals are sufficiently high. An open problem is to understand the impact of seismic scattering on gravity perturbations in heterogeneous materials where scattering sources are continuously distributed. This problem was studied with respect to its influence on the seismic field \cite{Nor1986,LGZ2009}, but the effect on gravity perturbations has not been investigated yet. We also showed that the calculation of simple Newtonian-noise estimates can be based on seismic spectra alone, provided that one has confidence in prior information (e.~g.~that Rayleigh waves dominate seismic noise). In general, seismic arrays help to increase confidence in Newtonian-noise estimates. It was shown that either simple anisotropy measurements or measurements of 2D wavenumber spectra can be used to improve Newtonian-noise estimates. In this section, we did not discuss in detail the problem of estimating wavenumber spectra. Simply carrying out the Fourier transform in Equation (\ref{eq:spec2D}) is prone to aliasing. A review of this problem is given in \cite{KrVi1996}. Estimation of wavenumber spectra has also become an active field of research in GW groups, using data from the LIGO Hanford array deployed between April 2012 and February 2013, and the surface and underground arrays at the Sanford Underground Research Facility, which are currently being deployed with data to be expected in 2015. The problem of Newtonian-noise estimation using seismic arrays needs to be separated, though, from the problem of Newtonian-noise cancellation. The latter is based on Wiener filtering. From an information-theory perspective, the Wiener filtering process is easier to understand than the noise estimation, since Wiener filters are known to extract information from reference channels in an optimal way for the purpose of noise cancellation (under certain assumptions). There is no easy way to define a cost function for spectral estimation, which makes the optimal estimation of wavenumber spectra more a philosophical problem than a mathematical or physical one. The optimal choice of analysis methods depends on which features of the seismic field are meant to be represented most accurately in a wavenumber spectrum.
For example, some methods are based on the assumption that all measurement noise is stationary and effectively interpreted as isotropic seismic background. This does not have to be the case if the seismic field itself acts as a noise background for measurements of dominant features of the field. Nonetheless, designs of seismic arrays used for noise cancellation need to be based on information about wavenumber spectra. Initially, array data are certainly the only reliable source of information, but even with Newtonian-noise observations at hand, the optimization of noise-mitigation schemes will be strongly guided by our understanding of the seismic field.

\section{Atmospheric Gravity Perturbations}
\label{sec:atmos}
\index{Newtonian noise!atmospheric}

The properties of the atmosphere give rise to many possible mechanisms to produce gravity perturbations. Sound fields are one of the major sources of gravity perturbations. Typically, sound is produced at boundaries between air and solid materials, but in general, one also needs to consider the \emph{internal} production of sound via the Lighthill process. The models of gravity perturbations from sound fields are very similar to perturbations from seismic compressional waves as given in Section \ref{sec:ambient}. The main difference in the models is related to the fact that the two fields are observed by different types of sensors. Additional mechanisms of producing atmospheric gravity noise are related to the fact that air can flow. This can lead to the formation of vortices or convection, and turbulence can always play a role in these phenomena. The Navier-Stokes equations directly predict density perturbations in these phenomena \cite{Dav2004}. Static density perturbations produced by non-uniform temperature fields can also be transported past a gravity sensor and cause gravity noise. One goal of Newtonian-noise modelling is to provide a strategy for noise mitigation. For this reason, it is important to understand the dependence of each noise contribution on the distance between source and test mass, and also to calculate correlation functions. The former determines the efficiency of passive isolation schemes, such as constructing detectors underground; the latter determines the efficiency of coherent cancellation using sensor arrays. Atmospheric gravity perturbations have long been known to produce noise in gravimeter data \cite{Neu2010}, where they can be observed below about 1\,mHz. At these frequencies, they are modelled accurately as a consequence of pressure fluctuations and loading of Earth's surface. Atmospheric gravity perturbations are generally expected to be the dominant contribution to ambient Newtonian noise below 1\,Hz \cite{HaEA2013}. In contrast, Creighton showed that atmospheric Newtonian noise can likely be neglected above 10\,Hz in large-scale GW detectors \cite{Cre2008}. His paper is to date the only detailed study of atmospheric Newtonian noise at frequencies above the sensitive band of gravimeters, and includes noise models for infrasound waves, quasi-static temperature fluctuations advected in various modes past test masses, and shock waves. His results will be reviewed in the following with the exception that a new solution is given for gravity perturbations from shock waves based on the point-source formalism of Section \ref{sec:pointsources}. Preliminary work on modelling gravity perturbations from turbulence was first published in \cite{CaAl2009}, and is reviewed and improved in Section \ref{sec:turbNN}.
\subsection{Gravity perturbation from atmospheric sound waves}
\label{sec:soundNN}

Sound waves are typically understood as propagating perturbations of the atmosphere's mean pressure $p_0$. The pressure change can be translated into a perturbation of the mean density $\rho_0$. The relation between pressure and density fluctuations depends on the adiabatic index $\gamma\approx 1.4$ of air \cite{Woo1955} \index{adiabatic index} \beq \gamma\frac{\delta \rho(\vec r,t)}{\rho_0}= \frac{\delta p(\vec r,t)}{p_0} \eeq The classical explanation for $\gamma>1$ is that the temperature increases when the sound wave compresses the gas sufficiently slowly, and this temperature increase causes an increase of the gas pressure beyond what is expected from compression at constant temperature. Note that in systems whose size is much smaller than the length of a sound wave, the statement needs to be reversed, i.~e.~fast pressure fluctuations describe an adiabatic process, not slow changes. An explanation of this counter-intuitive statement in terms of classical thermodynamics is given in \cite{Fle1974}. It can also be explained in terms of the degrees of freedom of gas molecules \cite{Hen1963}. At very high frequencies (several kHz or MHz depending on the gas molecule), vibrations and also rotations of the molecules cannot follow the fast sound oscillation, and their contribution to the specific heat freezes out (thereby lowering the adiabatic index). At low audio frequencies, sound propagation in air is adiabatic \footnote{Only at very low frequencies, below 10\,mHz, where the finite size of the atmosphere starts to matter, pressure oscillations can be isothermal again.}. Let us return to the calculation of gravity perturbations from sound fields. Assuming that a sound wave incident on the surface is reflected without loss such that its horizontal wavenumber is preserved, one obtains the gravity perturbation as the following integral over the half space $z>0$: \beq \begin{split} \delta\phi(\vec r_0,t) &= -\frac{G\rho_0}{\gamma\,p_0}\e^{\irm(\omega t-\vec k_\varrho\cdot\vec\varrho_0)}\delta p(\omega)\int\limits_{\mathcal H}\drm V\,\frac{(\e^{-\irm k_z z}+\e^{\irm k_z z})\e^{-\irm \vec k_\varrho\cdot\vec\varrho}}{(\varrho^2+(z-z_0)^2)^{1/2}}\\ &= 4\pi\frac{G\rho_0}{\gamma\,p_0}\e^{\irm(\omega t-\vec k_\varrho\cdot\vec\varrho_0)}(\e^{-k_\varrho|z_0|}(2\Theta(z_0)-1)-2\cos(k_zz_0)\Theta(z_0))\frac{\delta p(\omega)}{k^2} \end{split} \label{eq:gravinfra} \eeq Here, the Heaviside function $\Theta(\cdot)$ has the value 1 at $z_0=0$. The gravity potential and acceleration are continuous across the surface. We neglect the surface term here, but this is mostly to simplify the calculation and not fully justified. Part of the energy of a sound wave is transmitted into the ground in the form of seismic waves. Intuitively, one might be tempted to say that only a negligible amount of the energy is transmitted into the ground, but at the same time the density of the ground is higher, which amplifies the gravity perturbations. Let us analyze the case of a sound wave incident at a normal angle to the surface. In this case, the sound wave is transmitted as a pure compressional wave into the ground. We denote the air medium by the index ``1'' and the ground medium by ``2''.
Multiplying the seismic transmission coefficient (see \cite{AkRi2009}) by $\rho_2/\rho_1$, the relative amplitude of gravity perturbations is\index{sound!reflection} \beq \frac{\delta a_1}{\delta a_2}=2\frac{\rho_2\alpha_1}{\rho_1\alpha_1+\rho_2\alpha_2}, \eeq where $\alpha_1$ is the speed of sound, and $\alpha_2$ the speed of compressional waves. The sum in the denominator can be approximated by $\rho_2\alpha_2$, which leaves $2\alpha_1/\alpha_2$ as the gravity ratio. The ratio of wave speeds does not necessarily have to be small at the surface. We know that the Rayleigh-wave speed at the LIGO sites is about 250\,m/s \cite{HaOR2011}, which we can use to estimate the compressional-wave speed to be around 600\,m/s (by making a guess about the Poisson's ratio of the ground medium). This means that the effective transmissivity with respect to gravity perturbations could even exceed a value of 1! Therefore, it is clear that the physics of infrasound gravity perturbations is likely more complicated than outlined in this section. Nonetheless, we will leave this for future work and proceed with the simplified analysis assuming that sound waves are fully reflected by the ground. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.49\textwidth]{Chp5-InfrasoundField_NN_pi2-pi16.png} \includegraphics[width=0.49\textwidth]{Chp5-InfrasoundField_NN_pi16.png}} \caption[Infrasound Newtonian noise]{Gravity acceleration along a horizontal direction produced by plane infrasound waves. The left plot shows the field for an angle of incidence of $7\pi/16$, the right plot for an angle of $\pi/16$ with respect to the surface normal.} \label{fig:infraNN} \end{figure}} The gravity acceleration caused by infrasound waves is shown in Figure \ref{fig:infraNN} for two different angles of incidence with respect to the surface normal. Note that the infrasound field modelled in Equation (\ref{eq:gravinfra}) consists of two plane waves propagating in opposite directions with respect to the normal, and along the same direction with respect to the horizontal. Therefore, the pressure and consequently the gravity field have the form of a standing wave along the normal direction. Below the surface, the gravity perturbation falls off exponentially. The decrease is faster when the infrasound wave propagates nearly horizontally. The length scale that determines the exponential fall-off becomes infinite if the wave propagates vertically, but at the same time the projection of gravity acceleration onto a horizontal direction vanishes. This is why underground construction of GW detectors is an efficient means to mitigate infrasound Newtonian noise. Creighton also considered the case of a shield against infrasound disturbances around the test masses of surface detectors, which in its simplest form is already given by the buildings hosting the test masses \cite{Cre2008}. A detailed investigation of noise-reduction techniques is given in Section \ref{sec:mitigate}.

\subsection{Gravity perturbations from quasi-static atmospheric temperature perturbations}
\label{sec:quasitemp}

In this section, we review the rather complex calculation of gravity perturbations from a temperature field presented as an appendix in \cite{Cre2008}. The calculation is also instructive for solving similar problems in the future. The basic idea is the following. Temperature fluctuations in the atmosphere lead to density changes.
In terms of the mean temperature $T_0$ and density $\rho_0$ of the atmosphere, and according to the ideal gas law at constant pressure, small fluctuations in the temperature field cause perturbations of the density: \beq \delta \rho(\vec r,t)=-\frac{\rho_0}{T_0}\delta T(\vec r,t) \eeq Pressure fluctuations also cause density perturbations, but as we have seen in the previous section, they result in quickly propagating infrasound waves. The effect that we want to study here is the Newtonian noise from slowly changing density fields, transported past a test mass by air flow. These are predominantly associated with slow temperature fluctuations. The gravity perturbation produced by such a temperature field is given by \beq \delta\vec a(\vec r_0,t)=-\frac{G\rho_0}{T_0}\int\drm V\frac{\delta T(\vec r,t)}{|\vec r-\vec r_0|^3}(\vec r-\vec r_0) \eeq Trying to obtain an explicit expression of the temperature field, inserting it into this integral, and solving the integral is hopeless here. What one can do instead is to work with the statistical properties of the temperature field. If the temperature field is stationary, then we can calculate the spectral density as \beq S(\delta a_x;\vec r_0,\omega)=2\left(\frac{G\rho_0}{T_0}\right)^2\int\drm\tau\int\drm V\int\drm V'\frac{xx'}{r^3(r')^3}\langle\delta T(\vec r,t)\delta T(\vec r\,',t+\tau)\rangle\e^{-\irm\omega\tau}, \label{eq:tempNNspec} \eeq where we have used Equation (\ref{eq:defspecdens}). The vectors $\vec r,\,\vec r\,'$ point from the test mass to temperature fluctuations in the atmosphere. The next step is to characterize the temperature field. Temperature fluctuations in the vicinity of Earth's surface are distributed by turbulent mixing. As shown in \cite{KuNa2006}, temperature inhomogeneities of the surface play a minor role in the formation of the temperature field at frequencies above a few tens of mHz. Therefore, at sufficiently high frequencies, one can approximate the temperature perturbations as homogeneous and isotropic. In this case, the second-order noise moments of $\delta T(\vec r,t)$ can be characterized by the \emph{temperature structure function} $D(\delta T;r)$:\index{temperature structure function} \beq \langle(\delta T(\vec r,t)-\delta T(\vec r+\Delta\vec r,t))^2\rangle=D(\delta T;\Delta r) \label{eq:tempstruct} \eeq The structure function can typically be approximated as a power law \beq D(\delta T;|\Delta\vec r|)=c_T^2(\Delta r)^p \eeq provided that the distance $\Delta r$ is sufficiently small. This relation also breaks down at distances similar to and smaller than the Kolmogorov length scale, which is about 0.4\,mm for atmospheric surface layers \cite{AnAt1978}\index{Kolmogorov length scale}. Turbulent mixing enforces power laws with $p\sim 2/3$ \cite{AnAt1978}. Applying Taylor's hypothesis, the distance $\Delta r$ can be substituted by the product of wind speed $v$ with time $\tau$, and Equation (\ref{eq:tempstruct}) can be reformulated as \beq \langle\delta T(\vec r,t)\delta T(\vec r,t+\tau)\rangle=\sigma_T^2-\frac{c_T^2}{2}(v\tau)^p \label{eq:eulercorr} \eeq The parameter $c_T$ depends on the dissipation rate of turbulent kinetic energy and the temperature diffusion rate, and $\sigma_T$ is the standard deviation of temperature fluctuations. Since Taylor's hypothesis is essential for the following calculations, we should make sure to understand it\index{Taylor's hypothesis}. Qualitatively it states that turbulence is transported as a frozen pattern with the mean wind speed.
More technically, it links measurements in Eulerian coordinates, i.~e.~at points fixed in space, with measurements in Lagrangian coordinates, i.~e.~in coordinates connected to fluid particles. The practical importance is that two-point spatial correlation functions such as Equation (\ref{eq:tempstruct}) can be estimated based on a measurement at a single location when carried out over some duration $\tau$ as in Equation (\ref{eq:eulercorr}). In either case, the hypothesis can be expected to fail over sufficiently long periods $\tau$ or distances $\Delta r$, which are linked to the maximal scale of turbulent structures \cite{DaSo1997}. In any case, we assume that Taylor's hypothesis is sufficiently accurate for our purposes. The Fourier transform of Equation (\ref{eq:eulercorr}) yields the spectral density of temperature fluctuations \beq S(\delta T;\vec r_0,\omega)=c_T^2v^p(\vec r_0)\omega^{-(p+1)}\Gamma(p+1)\sin(\pi p/2) \label{eq:spectemp} \eeq The Fourier transform cannot be calculated without employing an upper cutoff on the variable $\tau$. This means that the spectral density given here only holds at sufficiently high frequencies (at the same time not exceeding the Kolmogorov limit defined by the size $l$ of the smallest turbulence structures, $\omega<v/l$, which is of the order of kHz). Technically, the Fourier transform can be calculated by multiplying the integrand by an exponential term $\exp(-\epsilon \tau)$, and subsequently taking the limit $\epsilon\rightarrow 0$. The next step is to calculate the temperature correlation that appears in Equation (\ref{eq:tempNNspec}). Using Taylor's hypothesis to convert Equation (\ref{eq:eulercorr}) back into a two-point spatial correlation, we see that correlations over large distances are negligible. In terms of the frequency of temperature fluctuations, correlations are significant over distances of the order $v/\omega$ (which is shown in the following). Consider the scenario displayed in Figure \ref{fig:tempNN}. Two air pockets are shown at locations $\vec r,\,\vec r\,'$ and times $t,\,t'$ on two streamlines that we denote by $S$ and $S'$. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.85\textwidth]{Chp5-laminarNN.pdf}} \caption[Newtonian noise from temperature fields]{Sketch of laminar air flow past the test mass. Air volume is divided into cells that move along streamlines. Speed can change with time, and be different along a streamline.} \label{fig:tempNN} \end{figure}} If $\tau=t-t'$ is sufficiently small, then the separation of the two pockets can be written as $(s^2+(v\tau)^2)^{1/2}$, where the distance $s$ of the two streamlines $S,\,S'$ and $v$ are evaluated at $\vec r$. Together with Taylor's hypothesis, correlations of temperature fluctuations between the two pockets are significant if $\tau$ is sufficiently close to the time $\tau_0$ it takes for the pocket at $\vec r\,'$ to reach the reference plane, and also $s$ must be sufficiently small.
The temperature correlation can then be written as \beq \langle\delta T(\vec r\,',t')\delta T(\vec r,t'+\tau)\rangle=\sigma_T^2-\frac{c_T^2}{2}(s^2+v^2(\tau-\tau_0)^2)^{p/2} \eeq This allows us to carry out the integral over $\tau$ in Equation (\ref{eq:tempNNspec}): \beq \begin{split} \int\drm\tau\langle&\delta T(\vec r\,',t')\delta T(\vec r,t'+\tau)\rangle\e^{-\irm\omega\tau} = \\ &\sqrt{\frac{2^{p+1}}{\pi}\frac{s}{v\omega}}\Gamma(p/2+1)c_T^2 \left(\frac{vs}{\omega}\right)^{p/2}\sin(\pi p/2)K_{(p+1)/2}(\omega s/v)\e^{-\irm\omega\tau_0}, \end{split} \eeq where $K_n(\cdot)$ is the modified Bessel function of the second kind, and $v,\,s$ are functions of $\vec r$. Again, the integral can only be evaluated if the integrand is multiplied by an exponential cutoff in the variable $\tau$, which means that we neglect contributions from large-scale temperature perturbations. The correlation spectrum is plotted in Figure \ref{fig:corrTemp}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.8\textwidth]{Chp5-tempCorr.pdf}} \caption[Temperature correlation spectrum]{Two-point temperature correlation spectrum.} \label{fig:corrTemp} \end{figure}} At frequencies above $v/s$, the spectrum falls exponentially since $K_\nu(x)\rightarrow \sqrt{\pi/(2x)}\exp(-x)$ for $x\gg|\nu^2-1/4|$. This means that the distance between streamlines contributing to the two-point spatial correlation must be very small to push the exponential suppression above the detection band. The integral over $V'$ in Equation (\ref{eq:tempNNspec}) can be turned into an integral over streamlines $S'$ that lie within a bundle $s\lesssim v/\omega$ of streamline $S$, which allows us to approximate the volume element as a cylindrical bundle $\drm V'=2\pi s \drm s\, \drm\tau_0v(\vec r\,)$. The form of the volume element is retained over the whole extent of the streamline since the air is nearly incompressible for all conceivable wind speeds, i.~e.~changes in the speed of the cylindrical pocket are compensated by changes in the radius of the pocket to leave the volume constant. Hence, the speed in the volume element can be evaluated at $\vec r$. With this notation, the integral can be carried out over $0<s<\infty$ since the modified Bessel function automatically enforces the long-distance cutoff necessary for our approximations, which yields \beq S(\delta a_x;\vec r_0,\omega)=\left(\frac{G\rho_0}{T_0}\right)^24\pi c_T^2\Gamma(2+p)\sin(\pi p/2) \int\drm V\int\drm \tau_0\frac{xx'}{r^3(r')^3} \left(\frac{v(\vec r\,)}{\omega}\right)^{p+3}\e^{-\irm\omega\tau_0}, \eeq where the vector $\vec r$ is parameterized by $\tau_0$. This result can be interpreted as follows. We have two streamlines $S$, $S'$, whose contributions to this integral are evaluated in terms of the duration $\tau_0$ it takes for the pocket at $\vec r\,'$ to reach the reference plane that goes through all streamlines, and contains the test mass at $\vec r_0$ and location $\vec r$ (as indicated in Figure \ref{fig:tempNN}).
Since we consider the pocket on streamline $S$ to be at the reference plane at time $t$, we can set $\tau_0=t-t'$, and integrating contributions from all streamlines over the reference plane with area element $\drm A$, with wind speed $v(\vec\varrho\,)$, and $\vec\varrho$ pointing from the test mass to streamlines on the reference plane, we can finally write \beq \begin{split} S(\delta a_x;\vec r_0,\omega)&=\left(\frac{G\rho_0}{T_0}\right)^2 4\pi c_T^2\omega^{-(p+3)}\Gamma(2+p)\sin(\pi p/2)\\ & \hspace*{2cm}\cdot\int\limits_{\mathcal A(\vec r_0)}\drm A\, v(\vec\varrho\,)\int\drm t'\frac{x'}{(r')^3} \e^{-\irm\omega t'}\int\drm t\frac{x}{r^3}v(\vec r\,)^{p+3} \e^{\irm\omega t}\\ &=\left(\frac{G\rho_0}{T_0}\right)^2 \frac{4\pi}{\omega^2}(2+p)S(\delta T;\vec r_0,\omega)\\ & \hspace*{2cm}\cdot\int\limits_{\mathcal A(\vec r_0)}\drm A\, v(\vec\varrho\,)\int\drm t'\frac{x'}{(r')^3} \e^{-\irm\omega t'}\int\drm t\frac{x}{r^3}\left(\frac{v(\vec r\,)}{v(\vec r_0)}\right)^p v^3(\vec r\,)\e^{\irm\omega t} \end{split} \label{eq:advectNN} \eeq In this equation, $\vec r\,' = \vec r\,'(\vec\varrho,t')$, and $\vec r = \vec r(\vec\varrho,t)$ are the parameterized streamlines. For uniform airflow we have $v=\;$const, and the remaining integrals can be solved with the results given in Section \ref{sec:objectline}. Other examples have been calculated by Creighton \cite{Cre2008}.

\subsection{Gravity perturbations from shock waves}
\label{sec:shockNN}

In \cite{Cre2008}, an estimate of gravity perturbations from a shock wave produced in air was presented based on the infrasound perturbation in Equation (\ref{eq:gravinfra}). The goal was to estimate the transient gravity perturbation produced when the shock wave reaches the test mass. It did not address the question of whether significant gravity perturbations can be produced before the arrival of the shock wave. A time-domain description may give further insight into this problem. As we have seen for seismic point sources, a time-domain solution can reveal important characteristics of the gravity perturbation, such as the distinction between gravity perturbations from a distant wavefront, and from a wavefront that has reached the test mass. In the following, we will provide a full time-domain solution for an explosive point source of an atmospheric shock wave. It is assumed that the shock wave is produced sufficiently close to the test mass so that the pressure field can be approximated as spherical at the time the shock wave reaches the test mass. Reflections from the surface and upper atmospheric layers need to be considered for a more refined model applicable to distant sources. A shock wave from an explosive source is isotropic (which is rather a definition of what we mean by an explosive source). The pressure change is built up over a brief amount of time, initially involving an air mass $M=V\rho_0$ determined by the source volume $V$. In the theory of moment tensor sources, an explosion in air at $t_0=0$ can be represented by a diagonal moment tensor according to \beq \mathbf{M}(t)=-\alpha^2\frac{M}{\gamma p_0}\Delta p(t)\mathbf{1} \label{eq:explsource} \eeq where $\alpha$ is the speed of sound, $\gamma$ is the adiabatic coefficient of air, $p_0$ the mean air pressure, $\Delta p(t)$ the pressure change, and $\mathbf{1}$ the unit matrix. Since shock-wave generation is typically non-linear \cite{Whi1974}, the source volume should be chosen sufficiently large so that wave propagation is linear beyond its boundary.
This entails that the pressure change $\Delta p$ is also to be evaluated on the boundary of the source volume. Note that in comparison to solitons, shock waves always show significant dissipation, which means that there should not be a fundamental problem with this definition of the source volume. Alternatively, if nonlinear wave propagation is significant over long distances, then one can attempt to linearize the shock-wave propagation by introducing a new nonlinear wave speed, which needs to be used instead of the speed of sound \cite{Whi1974}. In general, a sudden increase of atmospheric pressure by an explosive source must relax again in some way, which means that $\Delta p(t\rightarrow\infty)=0$. Next, we need an expression to obtain the acoustic potential in terms of the moment tensor. The acoustic potential is analogous to the seismic P-wave potential for a medium with vanishing shear modulus, and we can calculate the corresponding perturbation of the gravity potential using Equation (\ref{eq:gravP}). The coupling of a tensor source to the acoustic field can be expressed in terms of the Green's matrix \beq \mathbf{\Phi}(\vec r,t)=\frac{1}{4\pi\rho_0}\left[-\frac{1}{\alpha^2r}\delta(t-r/\alpha)(\vec e_r\otimes\vec e_r)+\frac{1}{r^3}\left(3\vec e_r\otimes\vec e_r-\mathbf{1}\right)\int\limits_0^{r/\alpha}\drm\tau\,\tau\delta(t-\tau)\right], \label{eq:coupletensor} \eeq where we assume that the shock wave is linear and propagates with the speed of sound outside the source volume, i.~e.~the amplitude of the shock wave has decreased to a level where non-linear propagation effects can be neglected. The acoustic potential can now be written as \beq \phi_{\rm s}(\vec r,t) = \int\drm\tau\,{\rm Tr}(\mathbf{\Phi}(\vec r,\tau)\mathbf{M}(t-\tau)) \eeq and together with Equation (\ref{eq:gravP}), we find the gravity potential perturbation \beq \delta\phi(\vec r_0,t) = -\frac{GM}{r_0}\frac{\Delta p(t-r_0/\alpha)}{\gamma p_0}, \label{eq:gravshock} \eeq where $\vec r_0$ points from the source to the test mass. This result is of a very different nature compared to the gravity potentials for point forces and point shear dislocations presented in Section \ref{sec:pointsources}. Due to the spherical symmetry of the source, the instantaneous gravity perturbation far away from the source vanishes. If the diagonal components of the source tensor had different values, then the integral contribution in Equation (\ref{eq:coupletensor}) would remain, which gives rise to instantaneous gravity perturbations at all distances. Source symmetry plays an important role. The corresponding perturbation of gravity acceleration reads \beq \delta\vec a(\vec r_0,t) = -\frac{GM}{r_0^2}\frac{1}{\gamma p_0}\left(\Delta p(t-r_0/\alpha)+\frac{r_0}{\alpha}\Delta p'(t-r_0/\alpha)\right)\vec e_{r_0} \label{eq:gaccshock} \eeq The gravity perturbation in the far field is dominated by the derivative of the pressure change. One of the examples given in \cite{Cre2008} was a sonic boom from a supersonic aircraft. In this case, the source location changes with time along the trajectory of the aircraft. This amounts to an integral of Equation (\ref{eq:gaccshock}) over the trajectory. It is convenient in this case to introduce $r_0$ as the distance of closest approach of the aircraft to the test mass. The source volume is replaced by the rate $V\rightarrow A v$ ($A$ being the cross-sectional area of the ``source tube'' around the aircraft trajectory, and $v$ the speed of the aircraft).
In the case of uniform motion of the aircraft, the calculation of the integral over the trajectory is straightforward. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.45\textwidth]{Chp5-SonicBoom.pdf} \includegraphics[width=0.55\textwidth]{Chp5-Shock.pdf}} \caption[Gravity perturbation from a sonic boom]{Gravity perturbation from a sonic boom produced by an aircraft. The curve with $v/\alpha=0.5$ is only for illustration purposes since a shock wave is not produced in subsonic flight. The direction of gravity perturbation plotted here is along the direction of the aircraft trajectory. The separation of the initial and final pressure change of a propagating wavefront is $\Delta t=0.1r_0/\alpha$.} \label{fig:shockgrav} \end{figure}} The result is shown in Figure \ref{fig:shockgrav} for three different ratios of aircraft speed over speed of sound. The pressure change is modelled as an N-profile \cite{Cre2008} \beq \Delta p(t)=-\frac{2\Delta p}{\Delta t}(t-\Delta t/2)\theta(t)\theta(\Delta t-t), \eeq which consists of two positive pressure changes by $\Delta p$ at times $t=0$ and $t=\Delta t=0.1\,r_0/\alpha$, and a linear pressure fall between these two times. The aircraft trajectory is assumed to be horizontal and passing directly above the test mass. Time $t=0$ corresponds to the moment when the aircraft reaches the point of closest approach. If $v<\alpha$, then sound waves reach the test mass well before the aircraft reaches the closest point of approach. In the case of supersonic flight, $\alpha<v$, the first sound waves reach the test mass at $t=r_0/\alpha$. Inserting the pressure change into Equation (\ref{eq:gaccshock}), we see that the far-field gravity perturbation is characterized by two $\delta$-peaks. The derivative of the linear pressure change between the peaks cancels with a contribution of the near-field term. As can be understood from the left plot in Figure \ref{fig:shockgrav}, the gravity perturbation falls gradually after the initial peak since a test mass inside the cone still responds to pressure changes associated with two propagating wavefronts.

\subsection{Gravity perturbations in turbulent flow}
\label{sec:turbNN}

In this section, we review the calculation of gravity perturbations from turbulent flow \cite{CaAl2009}. While in Section \ref{sec:quasitemp}, the problem was to calculate gravity perturbations from an advected temperature field whose spectrum is determined by turbulent mixing, we are now interested in the gravity perturbations from pressure fluctuations produced in turbulent flow. Generation of pressure fluctuations (sound) in air is a non-linear phenomenon known as the Lighthill process \cite{Lig1952,Lig1954}. Lighthill found that the Navier-Stokes equations can be rearranged into equations for the propagation of sound, \beq \left(\Delta-\frac{1}{c_{\rm s}^2}\partial_t^2\right)\rho(\vec r,t)=-\frac{1}{c_{\rm s}^2}(\nabla\otimes\nabla):\boldsymbol\tau(\vec r,t), \label{eq:Lighthill} \eeq where $\tau_{ij}=\rho v_iv_j+\sigma_{ij}-c_{\rm s}^2\rho\delta_{ij}$ is an effective stress field, and $c_{\rm s}$ is the speed of sound in the uniform medium. The terms in the effective stress tensor are the fluctuating Reynolds stress $\rho v_iv_j$, the compressional stress tensor $\sigma_{ij}$, and the stress $c_{\rm s}^2\rho\delta_{ij}$ of a uniform acoustic medium at rest. In other words, the effective stress tensor acting as a source term of sound is the difference between the stresses in the real flow and the stress of a uniform medium at rest.
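As a brief illustration of the structure of this source term, the following Python sketch evaluates the double divergence of the Reynolds stress on a gridded velocity field; the velocity field is a random placeholder, and second-order central differences via \texttt{np.gradient} stand in for a proper spectral derivative:
\begin{verbatim}
import numpy as np

# Placeholder 3D velocity field on a uniform grid with spacing dx.
n, dx, rho0 = 64, 1.0, 1.2
v = np.random.randn(3, n, n, n)

# Double divergence of the Reynolds stress rho0*v_i*v_j, the dominant part
# of the effective stress on the right-hand side of Eq. (Lighthill).
src = np.zeros((n, n, n))
for i in range(3):
    for j in range(3):
        tau_ij = rho0 * v[i] * v[j]
        src += np.gradient(np.gradient(tau_ij, dx, axis=i), dx, axis=j)
\end{verbatim}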
Equation (\ref{eq:Lighthill}) is exact. In order to calculate the associated gravity perturbations, we introduce some approximations. First, we consider viscous stress contributions to $\sigma_{ij}$ unimportant (we neglect viscous damping in sound propagation), and therefore the temperature field can be assumed to be approximately uniform. This means that the difference $\sigma_{ij}-c_{\rm s}^2\rho\delta_{ij}$ is negligible with respect to the fluctuating Reynolds stress. Furthermore, we will assume that the root mean square of the velocities $v_i$ is much smaller than the speed of sound $c_{\rm s}$ (i.~e.~the turbulence has a small Mach number), and consequently the relative pressure fluctuations $\delta p(\vec r,t)/p_0$ produced by the Reynolds stress are much smaller than 1. In this case, we can rewrite the Lighthill equation in the approximate form \beq \left(\Delta-\frac{1}{c_{\rm s}^2}\partial_t^2\right)\frac{\delta p(\vec r,t)}{p_0}=-\frac{1}{c_{\rm s}^2}(\nabla\otimes\nabla):(\vec v(\vec r,t)\otimes\vec v(\vec r,t)), \label{eq:lightLight} \eeq with $(\nabla\otimes\nabla):(\vec v\otimes\vec v\,)\equiv\partial_{x_i}\partial_{x_j}v_iv_j$ (summing over indices $i,\,j$). This equation serves as a starting point for the calculation of the pressure field. It describes the production of sound in turbulent flow through conversion of shear motion into longitudinal motion. The Reynolds stress represents a quadrupole source, which means that sound production is less efficient in turbulent flow than for example at vibrating boundaries where the source has dipole form. The remaining task is to characterize the velocity fluctuations in terms of spatial correlation functions, translate these into a two-point correlation function of the pressure field using Equation (\ref{eq:lightLight}), and finally obtain the spectrum of gravity fluctuations from these correlations. The last step is analogous to the calculation carried out in Section \ref{sec:quasitemp}, specifically Equation (\ref{eq:tempNNspec}), for the perturbed temperature field. The calculation of gravity perturbations will be further simplified by assuming that the velocity field is stationary, isotropic, and homogeneous. These conditions can certainly be contested, but they are necessary to obtain an explicit solution to the problem (at least, solutions for a more general velocity field are unknown to the author). Since the source term is quadratic in the velocity field, it is clear that the problem of this section is rather complicated; by comparison, the relation between temperature perturbations and gravity fluctuations was linear. For this reason, the authors of \cite{CaAl2009} decided to carry out the calculation in Fourier space (with respect to time and space).
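Before proceeding, it is worth checking the size of the small parameters invoked above (a rough estimate with values we insert here; the wind speed is the same as used for the noise estimates in Section \ref{sec:estAtmNN}): for $v=15\,$m/s, the Mach number is $v/c_{\rm s}\approx 0.04$, and with $\rho\approx 1.2\,\rm kg/m^3$ and $p_0\approx 10^5\,$Pa the Reynolds stress $\rho v^2\approx 270\,$Pa corresponds to relative pressure fluctuations of order $\delta p/p_0\sim 3\times10^{-3}$, so both approximations are well justified.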
From Equation (\ref{eq:lightLight}), we can calculate the Fourier transform of the auto-correlation of the pressure field, which yields (see Section \ref{sec:noisefreq}) \beq \begin{split} S(\delta p;\vec k,\omega) &= \frac{1}{(2\pi)^4}\frac{p_0^2}{(\omega^2-c_{\rm s}^2 k^2)^2}\\ &\cdot\int\drm \tau\,\e^{-\irm \omega \tau}\int\drm V\,\e^{\irm \vec k\cdot\vec r}\langle (\vec k\cdot \vec v(\vec r_0,t))(\vec k\cdot \vec v(\vec r_0,t))(\vec k\cdot\vec v(\vec r_0+\vec r,t+\tau))(\vec k\cdot \vec v(\vec r_0+\vec r,t+\tau))\rangle \end{split} \label{eq:turbpress} \eeq Note that the convention in turbulence theory used here to normalize the Fourier transform by $1/(2\pi)^4$ is different from the convention used elsewhere in this article, where the inverse Fourier transform obtains this factor. The fact that noise amplitudes at different wave vectors and frequencies do not couple is a consequence of homogeneity and stationarity of the velocity field. Once the spectral density of pressure fluctuations is known, we can use it to calculate the gravity perturbation according to \beq S(\delta\vec a;\vec k,\omega)=\left(\frac{4\pi G}{c_{\rm s}^2}\right)^2\frac{\vec k\otimes\vec k}{k^4}S(\delta p;\vec k,\omega), \label{eq:accpress} \eeq which is given in tensor form to describe spectral densities of the three acceleration components including their cross-spectral densities. This equation is obtained by taking the negative gradient of the first line in Equation (\ref{eq:totalNNinh}), and subsequently calculating its spatial Fourier transform. We can now focus on the calculation of the source spectrum. According to Isserlis' theorem\index{Isserlis' theorem}, the ensemble average in Equation (\ref{eq:turbpress}) can be converted into a sum of products of second-order moments provided that the velocity fluctuations are Gaussian. We assume this to be the case (one of the less disputable assumptions), and write: \beq \langle v_iv_jv'_lv'_m\rangle =\langle v_iv_j\rangle\langle v'_lv'_m\rangle+\langle v_iv'_l\rangle\langle v_jv'_m\rangle+\langle v_iv'_m\rangle\langle v_jv'_l\rangle \label{eq:Isserlis} \eeq The second-order moments are determined by turbulence theory. An isotropic turbulence has the wavenumber spectrum \cite{Dav2004} \beq \begin{split} \langle (\vec k\cdot \vec v(\vec r_0,t))(\vec k\cdot \vec v(\vec r_0,t))\rangle &= \frac{2}{3}k^2\int\limits_{k_0}^{k_\nu}\drm k' \mathcal E(k')\\ \langle (\vec k\cdot \vec v(\vec r_0,t))(\vec k\cdot \vec v(\vec r_0+\vec r,t))\rangle &= k^2 \int\limits_{\mathcal I}\drm^3k'\,\e^{-\irm\vec k'\cdot\vec r}\left(1-\frac{(\vec k\cdot\vec k\,')^2}{k^2k'^2}\right)\frac{\mathcal E(k')}{4\pi k'^2}\\ &=\frac{2}{3}k^2\int\limits_{k_0}^{k_\nu}\drm k' \mathcal E(k')\left((j_0(k'r)-\frac{1}{2}j_2(k'r))+\frac{3}{2}\frac{(\vec k\cdot\vec r)^2}{k^2r^2}j_2(k'r)\right) \end{split} \label{eq:corrvel} \eeq where $\mathcal E(k)=\mathcal K_0\epsilon^{2/3}k^{-5/3}$ is the Kolmogorov energy spectrum, $\mathcal K_0$ the Kolmogorov number, and $\epsilon$ the total (specific) energy dissipated by viscous forces \beq \epsilon=2\nu\int\limits_0^\infty\drm k'\,k'^2\mathcal E(k') \label{eq:dissturb} \eeq Here, $\nu$ is the fluid's viscosity. The Kolmogorov energy spectrum holds for the inertial regime $\mathcal I$ (viscous forces are negligible), i.~e.~for wavenumbers between $k_0=2\pi/\mathcal R$ and $k_\nu=(\epsilon/\nu^3)^{1/4}$, where $\mathcal R$ is the linear dimension of the largest eddy in the turbulent flow.
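As an aside, Equation (\ref{eq:Isserlis}) is easily verified numerically; a minimal Monte-Carlo sketch (with an arbitrary covariance matrix of our choosing) reads:
\begin{verbatim}
import numpy as np

# Monte-Carlo check of Isserlis' theorem for four jointly Gaussian,
# zero-mean random variables; the covariance matrix is arbitrary.
rng = np.random.default_rng(1)
L = rng.normal(size=(4, 4))
C = L @ L.T                                   # a valid covariance matrix
v1, v2, v3, v4 = rng.multivariate_normal(np.zeros(4), C,
                                         size=2_000_000).T

lhs = np.mean(v1*v2*v3*v4)                    # fourth-order moment
rhs = C[0,1]*C[2,3] + C[0,2]*C[1,3] + C[0,3]*C[1,2]  # pair products
print(lhs, rhs)                               # agree to Monte-Carlo accuracy
\end{verbatim}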
In Equation (\ref{eq:corrvel}), we have only written the equal-time correlations (the first line following from the second). The velocities in the second equation should however be evaluated at two different times $t,\,t+\tau$. In \cite{Kan1993}, we find that for $k\gg k_0$ \beq \int\drm V\e^{\irm\vec k\cdot\vec r}\langle v_i(\vec r_0,t)v_j(\vec r_0+\vec r,t+\tau)\rangle=\exp\left(-\frac{1}{2}\frac{\tau^2}{\tau_0^2(k)}\right)\int\drm V\e^{\irm\vec k\cdot\vec r}\langle v_i(\vec r_0,t)v_j(\vec r_0+\vec r,t)\rangle \label{eq:tcorrturb} \eeq with $\tau_0^2(k)=1/(k^2\langle v_i^2\rangle)$, where $v_i$ is any of the components of the velocity vector. The first term in Equation (\ref{eq:Isserlis}) is independent of time for a stationary velocity field (both expectation values are equal-time). Therefore, its energy only contributes to frequency $\omega=0$, and we can neglect it. The Fourier transform in Equation (\ref{eq:turbpress}) of the second and third terms in Equation (\ref{eq:Isserlis}) with respect to $\tau$ can be carried out easily using Equation (\ref{eq:tcorrturb}). Also integrating over the angular coordinates of the spatial Fourier transform in Equation (\ref{eq:turbpress}), the gravity spectrum can be written \beq \begin{split} S(\delta\vec a;&\vec k,\omega) = \left(\frac{4\pi G}{c_{\rm s}^2}\right)^2\frac{\vec k\otimes\vec k}{k^4}\frac{1}{(2\pi)^3}\frac{p_0^2}{(\omega^2-c_{\rm s}^2 k^2)^2}\frac{\tau_0(k)}{2\sqrt{\pi}}\exp\left(-\frac{\tau_0^2(k)\omega^2}{4}\right)\\ &\cdot 2\int\drm V\,\e^{\irm \vec k\cdot\vec r}\left[\frac{2}{3}k^2\int\limits_{k_0}^{k_\nu}\drm k' \mathcal E(k')\left((j_0(k'r)-\frac{1}{2}j_2(k'r))+\frac{3}{2}\frac{(\vec k\cdot\vec r)^2}{k^2r^2}j_2(k'r)\right)\right]^2\\ &=\left(\frac{2 G p_0}{c_{\rm s}^2}\right)^2\frac{\vec k\otimes\vec k}{(\omega^2-c_{\rm s}^2 k^2)^2}\frac{\tau_0(k)}{\pi^{3/2}}\exp\left(-\frac{\tau_0^2(k)\omega^2}{4}\right)\\ &\cdot \Bigg\{\frac{4}{9}\int\limits_0^\infty\drm r\,r^2j_0(k r)\left[\,\int\limits_{k_0}^{k_\nu}\drm k' \mathcal E(k')\left(j_0(k'r)-\frac{1}{2}j_2(k'r)\right)\right]^2\\ &\quad+\frac{4}{9}\int\limits_0^\infty\drm r\,r^2\left(j_0(kr)-2j_2(kr)\right)\left[\,\int\limits_{k_0}^{k_\nu}\drm k' \mathcal E(k')\left(j_0(k'r)-\frac{1}{2}j_2(k'r)\right)\right]\left[\,\int\limits_{k_0}^{k_\nu}\drm k' \mathcal E(k')j_2(k'r)\right]\\ &\quad+ \int\limits_0^\infty\drm r\,r^2\left(\frac{1}{5}j_0(kr)-\frac{4}{7}j_2(kr)+\frac{8}{35}j_4(kr)\right)\left[\,\int\limits_{k_0}^{k_\nu}\drm k' \mathcal E(k')j_2(k'r)\right]^2\Bigg\} \end{split} \eeq Probably the best way to proceed is to carry out the integral over the radius $r$. The integrands are products of three spherical Bessel functions. An analytic solution for this type of integral was presented in \cite{MLM1991} where we find that the integral is non-zero only if the three wavenumbers fulfill the triangular relation $|k'-k''|\leq k\leq k'+k''$ (i.~e.~the sum of the three corresponding wave vectors needs to vanish), and the orders of the spherical Bessel functions must fulfill $|n'-n''|\leq n\leq n'+n''$. The last relation is especially useful since many products can be recognized at a glance to vanish. In each case, the result of the integration is a rational function of the three wavenumbers if the triangular condition is fulfilled, and zero otherwise. While it may be possible to solve the integral analytically, we will stop the calculation at this point. Numerical integration as suggested in \cite{CaAl2009} is a valuable option.
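As an example of such a closed form, the lowest-order case (all Bessel orders equal to 0) evaluates to \beq \int\limits_0^\infty\drm r\,r^2\,j_0(kr)j_0(k'r)j_0(k''r)=\frac{\pi}{4kk'k''} \eeq for $|k'-k''|<k<k'+k''$, and zero outside this range. If one follows the numerical route instead, only a few physical inputs enter the calculation. The following minimal sketch (in Python; the spectrum normalization and eddy scale are the values adopted below, and the viscous cutoff is set to an arbitrary large value) computes the rms velocity and the decorrelation times $\tau_0(k)$ that set the corner frequencies of the spectra shown next:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Turbulence inputs: the normalization E(1/m) = 1 m^3/s^2 and the eddy
# scale R = 150 m are the values adopted below; the viscous cutoff k_nu
# is an arbitrary large value (its exact choice hardly matters here).
E0, R = 1.0, 150.0
k0, k_nu = 2*np.pi/R, 1.0e3
E = lambda k: E0*k**(-5/3)               # Kolmogorov energy spectrum

v2 = 2/3*quad(E, k0, k_nu)[0]            # <v_i^2>, first line of Eq. (corrvel)
print(np.sqrt(v2))                       # rms velocity, about 2.9 m/s

# decorrelation times tau_0(k) = 1/(k*sqrt(<v_i^2>)), cf. Eq. (tcorrturb)
for k in [0.1, 0.67, 1.58, 3.0]:         # wavenumbers [1/m]
    print(k, 1/(k*np.sqrt(v2)))          # about 3.5 s, 0.52 s, 0.22 s, 0.12 s
\end{verbatim}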
The square-roots of the noise spectra normalized to units of strain, $S(\delta\vec a;\vec k,\omega)\,(2/(L\omega^2))^2$, are shown in Figure \ref{fig:lightspec} for $k = \rm 0.1\,m^{-1},\,0.67\,m^{-1},\,1.58\,m^{-1},\,3.0\,m^{-1}$, where $L=3000\,$m is the length of an interferometer arm. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.55\textwidth]{Chp5-turbSpec.pdf}} \caption[Gravity perturbations from Lighthill acoustic noise]{Newtonian-noise spectra from Lighthill acoustic noise. The four curves are plotted for $k = \rm 0.1\,m^{-1},\,0.67\,m^{-1},\,1.58\,m^{-1},\,3.0\,m^{-1}$ (with decreasing dash length).} \label{fig:lightspec} \end{figure}} Each spectrum is exponentially suppressed above the corner frequency $1/\tau_0(k)$ with $\tau_0=\rm 3.5\,s,\,0.52\,s,\,0.22\,s,\,0.12\,s$. Below the corner frequency, the spectrum is proportional to $1/\omega^2$. In order to calculate the dissipation rate $\epsilon$, a measured spectrum was used \cite{AlEA1997}, which has a value of about $1\,\rm m^3 s^{-2}$ at $k=1\,\rm m^{-1}$, and a wavenumber dependence approximately equal to that of the Kolmogorov spectrum. In this way, we avoid the implicit relation of the dissipation rate in Equation (\ref{eq:dissturb}), since $\epsilon$ also determines the Kolmogorov energy spectrum. Solving the implicit relation for $\epsilon$ gave poor numerical results, and also required us to extend the energy spectrum (valid in the inertial regime) to higher wavenumbers (the viscous regime). It is also worth noting that the energy spectrum and the scale $\mathcal R$ (we used a value of $150\,$m) are the only required model inputs related to properties of turbulence. Any other turbulence parameter in this calculation can be calculated from these two (and a few standard parameters such as air viscosity, air pressure, $\ldots$). The resulting spectra show that Newtonian noise from the Lighthill process is negligible above 5\,Hz, but it can be a potential source of noise in low-frequency detectors. In the future, it should be studied how strongly the Lighthill gravity perturbation is suppressed when the detector is built underground. \subsection{Atmospheric Newtonian-noise estimates} \label{sec:estAtmNN} In the following, we present the strain-noise forms of gravity perturbations from infrasound fields and uniformly advected temperature fluctuations. While the results of the previous sections allow us in principle to estimate noise at the surface as well as underground, we will only calculate the surface noise spectra here. Newtonian noise from advected temperature perturbations decreases strongly with depth and should not play a role in underground detectors. Suppression of infrasound gravity noise with depth depends strongly on the isotropy of the infrasound field. Using Equation (\ref{eq:gravinfra}), it is straightforward to modify the results of this section to include noise suppression with depth once the infrasound field is characterized. We start with the infrasound Newtonian noise.
According to Equation (\ref{eq:gravinfra}), the gravity acceleration of a single test mass at $z_0=0$ due to an infrasound wave is given by \beq \delta a_x(\vec \varrho_0,\omega)=-4\pi\irm\frac{G\rho_0}{\gamma p_0}\e^{-\irm\vec k_\varrho\cdot\vec\varrho_0}\frac{\vec e_x\cdot\vec k}{k^2}\delta p(\omega) \eeq Averaging over all propagation directions, the strain noise measured between two test masses separated by a distance $L$ along $\vec e_x$ reads \beq S(h;\omega)=\frac{2}{3}\left(\frac{4\pi}{kL\omega^2}\frac{G\rho_0}{\gamma p_0}\right)^2S(\delta p;\omega)(1-j_0(kL)+2j_2(kL)) \label{eq:atmstrainNN} \eeq The gravity-strain amplitude response is plotted in Figure \ref{fig:infraresp} expressing the distance $L$ between the two test masses in units of sound wavelength $\lambda_{\rm IS}=2\pi/k$. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.7\textwidth]{Chp5-InfraNoise.pdf}} \caption[Gravity-strain response to infrasound fields]{Gravity-strain amplitude response to infrasound fields.} \label{fig:infraresp} \end{figure}} For short distances between the test masses, the response is independent of $L$, and at large distances, the response falls with $1/L$. The long-distance response follows from the fact that gravity noise is uncorrelated between the two test masses, while the small-distance response corresponds to the regime where the two test masses sense gravity-gradient fluctuations.
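The two limiting regimes just described are easily reproduced numerically; the following minimal sketch evaluates the shape of the response implied by Equation (\ref{eq:atmstrainNN}) (constant prefactors omitted):
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

# Shape of the gravity-strain amplitude response to an isotropic
# infrasound field, Equation (atmstrainNN), versus L/lambda_IS.
x = np.logspace(-2, 2, 500)           # L in units of the sound wavelength
kL = 2*np.pi*x
resp = np.sqrt(1 - spherical_jn(0, kL) + 2*spherical_jn(2, kL))/kL

# small kL: 1 - j0 + 2*j2 -> 3*(kL)^2/10, so resp -> sqrt(3/10) = const
print(resp[0], np.sqrt(0.3))          # both about 0.55
# large kL: resp -> 1/(kL), i.e. the amplitude response falls as 1/L
print(resp[-1]*kL[-1])                # about 1
\end{verbatim}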
The strain noise spectrum from uniformly advected temperature fluctuations is calculated from Equation (\ref{eq:advectNN}) using the solution of the integrals given in Section \ref{sec:objectline}: \beq S(h;\omega)=(2\pi)^3\left(\frac{G\rho_0c_T}{LT_0}\right)^2\omega^{-(p+7)}\Gamma(2+p)\sin(\pi p/2)\e^{-2r_{\rm min}\omega/v}v^{p+2}, \label{eq:atmtempNN} \eeq where the modified Bessel functions were approximated according to Equation (\ref{eq:approxmov}). We assume that both test masses experience gravity perturbations characterized by the same spectral densities. The integral over stream lines in Equation (\ref{eq:advectNN}) was carried out over a semi-infinite disk with a disk-shaped excision of radius $r_{\rm min}$ around the test mass. The excision enforces a minimum distance between stream lines and test masses, for example because of buildings hosting the test masses. Due to the exponential suppression, this noise contribution can be expected to be insignificant deep underground. The integration includes an average over streamline directions. If the expression is to be converted into a GW strain sensitivity of a two-arm interferometer, then it is not fully accurate to simply multiply the strain noise by 2 due to gravity correlations between the two inner test masses of the two arms. Nonetheless, for the noise budget presented in this section, we will use the factor 2 conversion. Figure \ref{fig:atmNN} shows the Newtonian-noise spectra together with a reference sensitivity of the Advanced Virgo detector. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.7\textwidth]{Chp5-Atm_NN.pdf}} \caption[Atmospheric Newtonian noise]{Atmospheric Newtonian noise from sources that were first discussed in \cite{Cre2008} in comparison with a reference sensitivity curve of Advanced Virgo. Temperature fluctuations are advected at speed 15\,m/s. The sound spectrum represents typical background noise inside laboratory buildings of large-scale GW detectors.} \label{fig:atmNN} \end{figure}} The Newtonian noise from advected temperature fields is evaluated using a wind speed of $v=15\,$m/s, and a minimum distance of $5\,$m to the test masses. With respect to the advanced GW detectors LIGO/Virgo, atmospheric Newtonian noise will be insignificant according to these results. The slope of infrasound Newtonian noise is steeper than that of seismic Newtonian noise (see Figure \ref{fig:LIGONN}), which can be taken as an indication that there may be a frequency below which atmospheric Newtonian noise dominates over seismic Newtonian noise. This has in fact been predicted in \cite{HaEA2013}. Using measured spectra of atmospheric pressure fluctuations and seismic noise, the intersection between seismic and infrasound Newtonian noise happens at about 1\,Hz for a test mass at the surface. From Section \ref{sec:gravimeterNN}, we also know that Newtonian noise from atmospheric pressure fluctuations is the dominant ambient noise background around 1\,mHz. One might be tempted to conclude that gravity perturbations from advected temperature fields may be an even stronger contribution at low frequencies. However, one has to be careful since the noise prediction cannot be extended to much below a few Hz without modifying the model. The quasi-static approximation of the temperature field will fail at sufficiently low frequencies, and the temperature field can no longer be characterized as a result of turbulent mixing \cite{KuNa2006}. Also, the part of the model shown in Figure \ref{fig:atmNN} is characterized by an exponential suppression (effective above 3\,Hz). \subsection{Summary and open problems} \label{sec:atmsummary} In this section, we reviewed models of atmospheric gravity perturbations that are either associated with infrasound waves, or with quasi-stationary temperature fields advected by wind. We have seen that atmospheric Newtonian noise will very likely be insignificant in GW detectors of the advanced generation. For surface detectors, atmospheric Newtonian noise starts to be significant below 10\,Hz according to these models. According to Equations (\ref{eq:atmstrainNN}) and (\ref{eq:atmtempNN}), and comparing with seismic Newtonian noise (see Figure \ref{fig:LIGONN}), we see that atmospheric spectra are steeper and therefore potentially the dominant gravity perturbation in low-frequency detectors. However, both models are based on approximations that may not hold at frequencies below a few Hz. A summary of approximations applied to the infrasound Newtonian noise model can be found in \cite{HaEA2013}, including modelling of a half-space atmosphere, neglecting wind, etc. Also the noise model of advected temperature fluctuations likely does not hold at low frequencies since it is based on the assumption that the temperature field is quasi-stationary. At low frequencies, near-surface temperature spectra can also be affected by variations of ground temperature in addition to turbulent mixing. As we have seen, few time-varying atmospheric noise models have been developed so far, which leaves plenty of room for future work in this field. For example, convection may produce atmospheric gravity perturbations, and only very simple models of gravity perturbations from turbulence have been calculated so far.
While these yet poorly modelled forms of atmospheric noise are likely insignificant in GW detectors sensitive above 10\,Hz, they may become important in low-frequency detectors. Another open problem is to study systematically the decrease in atmospheric Newtonian noise with depth in the case of underground GW detectors. In particular, it is unclear how much atmospheric noise is suppressed in sub-Hz underground detectors. We have argued that seismic Newtonian noise does not vary significantly with detector depth in low-frequency GW detectors, but some forms of atmospheric Newtonian noise depend strongly on the minimum distance between source and test mass. So the conclusion might be different for atmospheric noise. Finally, it should be addressed whether atmospheric disturbances transmitted in the form of seismic waves into the ground can be neglected in Newtonian-noise models. As we outlined briefly in Section \ref{sec:soundNN}, even though transmission coefficients of sound waves into the ground are negligible with respect to their effect on seismic and infrasound fields, it seems that they may be relevant with respect to their effect on the gravity field. \section{Gravity Measurements} \label{sec:gravmeasure} In this section, we describe the relevant mechanisms by which a gravity sensor can couple to gravity perturbations, and give an overview of the most widely used measurement schemes: the (relative) gravimeter \cite{CHR2013,ZhEA2011}, the gravity gradiometer \cite{MPC2002}, and the gravity strainmeter. The last category includes the large-scale GW detectors Virgo \cite{Vir2011}, LIGO \cite{LSC2010}, GEO600 \cite{LuEA2010}, KAGRA \cite{AsEA2013}, and a new generation of torsion-bar antennas currently under development \cite{AnEA2010b}. Atom interferometers can potentially also be used as gravity strainmeters in the future \cite{DiEA2008b}. Strictly speaking, none of the sensors responds to only a single field quantity (such as changes in gravity acceleration or gravity strain), but there is always a dominant response mechanism in each case, which justifies giving each sensor a specific name. A clear distinction between gravity gradiometers and gravity strainmeters has never been made to our knowledge. Therefore, the sections on these two measurement principles will introduce a definition, which is by no means the only possible one. Later on in this article, we almost exclusively discuss gravity models relevant to gravity strainmeters since the focus lies on gravity fluctuations above 10\,mHz. Today, the sensitivity near 10\,mHz of gravimeters towards gravity fluctuations is still competitive with or even exceeds the sensitivity of gravity strainmeters, but this is likely going to change in the future so that we can expect strainmeters to become the technology of choice for gravity observations above 10\,mHz \cite{HaEA2013}. The following sections provide further details on this statement. Space-borne gravity experiments such as GRACE \cite{WaEA2004} will not be included in this overview. The measurement principle of GRACE is similar to that of gravity strainmeters, but only very slow changes of Earth's gravity field can be observed, and for this reason it is beyond the scope of this article. The different response mechanisms to terrestrial gravity perturbations are summarized in Section \ref{sec:gravresp}. While we will identify the tidal forces acting on the test masses as the dominant coupling mechanism, other couplings may well be relevant depending on the experiment.
The Shapiro time delay will be discussed as the only relativistic effect. Higher-order relativistic effects are neglected. All other coupling mechanisms can be calculated using Newtonian theory including tidal forces, coupling in static non-uniform gravity fields, and coupling through ground displacement induced by gravity fluctuations. In Sections \ref{sec:superg} to \ref{sec:gravstrain}, the different measurement schemes are explained including a brief summary of the sensitivity limitations (choosing one of a few possible experimental realizations in each case). As mentioned before, we will mostly develop gravity models relevant to gravity strainmeters in the remainder of the article. Therefore, the detailed discussion of alternative gravimetry concepts mostly serves to highlight important differences between these concepts, and to develop a deeper understanding of the instruments and their role in gravity measurements. \subsection{Gravity response mechanisms} \label{sec:gravresp} \subsubsection{Gravity acceleration and tidal forces} \label{sec:respgh} We will start with the simplest mechanism of all, the acceleration of a test mass in the gravity field. Instruments that measure the acceleration are called gravimeters. A test mass inside a gravimeter can be freely falling such as atom clouds \cite{ZhEA2011} or, as suggested as a possible future development, even macroscopic objects \cite{FrEA2014}. Typically though, test masses are supported mechanically or magnetically, constraining motion in some of their degrees of freedom. A test mass suspended from strings responds to changes in the horizontal gravity acceleration. A test mass attached at the end of a cantilever with horizontal equilibrium position responds to changes in vertical gravity acceleration. The support fulfills two purposes. First, it counteracts the static gravitational force in such a way that the test mass can respond to changes in the gravity field along a chosen degree of freedom. Second, it isolates the test mass from vibrations. Response to signals and isolation performance depend on frequency. If the support is modelled as a linear, harmonic oscillator, then the test mass response to gravity changes extends over all frequencies, but the response is strongly suppressed below the oscillator's resonance frequency. The response function between the gravity perturbation $\delta g(\omega)$ and induced test mass acceleration $\delta a(\omega)$ assumes the form\index{response! gravity acceleration} \beq \delta a(\omega)=\frac{\omega^2}{\omega^2-\omega_0^2+\irm\gamma\omega}\delta g(\omega)\equiv R(\omega;\omega_0,\gamma)\delta g(\omega), \label{eq:accresp} \eeq where we have introduced a viscous damping parameter $\gamma$, and $\omega_0$ is the resonance frequency. Well below resonance, the response is proportional to $\omega^2$, while it is constant well above resonance. Above resonance, the supported test mass responds like a freely falling mass, at least with respect to ``soft'' directions of the support. The test-mass response to vibrations $\delta \alpha(\omega)$ of the support is given by\index{vibration isolation} \beq \delta a(\omega)=\frac{\omega_0^2-\irm\gamma\omega}{\omega_0^2-\omega^2-\irm\gamma\omega}\delta \alpha(\omega)\equiv S(\omega;\omega_0,\gamma)\delta \alpha(\omega). \label{eq:isolvibr} \eeq This applies for example to horizontal vibrations of the suspension points of strings that hold a test mass, or to vertical vibrations of the clamps of a horizontal cantilever with attached test mass.
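The two transfer functions just introduced are easily explored numerically; the following few lines (with illustrative resonance and damping values of our own choosing) confirm the limiting behavior discussed here:
\begin{verbatim}
import numpy as np

# Limits of the response function R, Eq. (accresp), and the isolation
# function S, Eq. (isolvibr); resonance and damping values are ours.
w0, g = 2*np.pi*0.1, 2*np.pi*0.01     # [rad/s]

def R(w):   # test-mass acceleration per gravity fluctuation
    return w**2/(w**2 - w0**2 + 1j*g*w)

def S(w):   # test-mass acceleration per vibration of the support
    return (w0**2 - 1j*g*w)/(w0**2 - w**2 - 1j*g*w)

for f in [0.001, 0.1, 10.0]:          # below, at, and above resonance [Hz]
    w = 2*np.pi*f
    print(f, abs(R(w)), abs(S(w)))
# |R| ~ (w/w0)^2 well below resonance and ~1 well above;
# |S| ~ 1 well below resonance and ~(w0/w)^2 well above
\end{verbatim}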
Well above resonance, vibrations are suppressed by $\omega^{-2}$, while no vibration isolation is provided below resonance. The situation is somewhat more complicated in realistic models of the support especially due to internal modes of the mechanical system (see for example \cite{GoSa1994}), or due to coupling of degrees of freedom \cite{MaEA2014}. Large mechanical support structures can feature internal resonances at relatively low frequencies, which can interfere to some extent with the desired performance of the mechanical support \cite{Win2002}. While Equations (\ref{eq:accresp}) and (\ref{eq:isolvibr}) summarize the properties of isolation and response relevant for this paper, details of the readout method can fundamentally impact an instrument's response to gravity fluctuations and its susceptibility to seismic noise, as explained in Sections \ref{sec:superg} to \ref{sec:gravstrain}. Next, we discuss the response to tidal forces. In Newtonian theory, tidal forces cause a relative acceleration $\delta g_{12}(\omega)$ between two freely falling test masses according to \beq \begin{split} \delta \vec g_{12}(\omega)&= -\nabla \psi(\vec r_2,\omega)+\nabla \psi(\vec r_1,\omega)\\ &\approx-(\nabla\otimes\nabla \psi(\vec r_1,\omega))\cdot \vec r_{12}, \end{split} \label{eq:resptide} \eeq where $\psi(\vec r,\omega)$ is the Fourier amplitude of the gravity potential. The last equation holds if the distance $r_{12}$ between the test masses is sufficiently small, which also depends on the frequency. The term $-\nabla\otimes\nabla \psi(\vec r,t)$ is called gravity-gradient tensor\index{gravity gradient}. In Newtonian approximation, the second time integral of this tensor corresponds to gravity strain $\mathbf h(\vec r,t)$, which is discussed in more detail in Section \ref{sec:gravstrain}. Its trace needs to vanish in empty space since the gravity potential fulfills the Poisson equation. Tidal forces produce the dominant signals in gravity gradiometers and gravity strainmeters, which measure the differential acceleration or associated relative displacement between two test masses (see Sections \ref{sec:gradio} and \ref{sec:gravstrain}). If the test masses used for a tidal measurement are supported, then typically the supports are designed to be as similar as possible, so that the response in Equation (\ref{eq:accresp}) holds for both test masses approximately with the same parameter values for the resonance frequencies (and to a lesser extent also for the damping). For the purpose of response calibration, it is less important to know the parameter values exactly if the signal is meant to be observed well above the resonance frequency where the response is approximately equal to 1 independent of the resonance frequency and damping (here, ``well above'' resonance also depends on the damping parameter, and in realistic models, the signal frequency also needs to be ``well below'' internal resonances of the mechanical support). \subsubsection{Shapiro time delay}\index{Shapiro time delay} \label{sec:Shapiro} Another possible gravity response is through the Shapiro time delay \cite{BMS2010}. This effect is not universally present in all gravity sensors, and depends on the readout mechanism. Today, the best sensitivities are achieved by reflecting laser beams from test masses in interferometric configurations. If the test mass is displaced by gravity fluctuations, then it imprints a phase shift onto the reflected laser, which can be observed in laser interferometers, or using phasemeters. 
We will give further details on this in Section \ref{sec:gravstrain}. In Newtonian gravity, the acceleration of test masses is the only predicted response to gravity fluctuations. However, from general relativity we know that gravity also affects the propagation of light. The leading-order term is the Shapiro time delay, which produces a phase shift of the laser beam with respect to a laser propagating in flat space. It can be calculated from the weak-field spacetime metric (see chapter 18 in \cite{MTW1973}): \beq \drm s^2=-(1+2\psi(\vec r,t)/c^2)(c\drm t)^2+(1-2\psi(\vec r,t)/c^2)|\drm \vec r\,|^2 \label{eq:weakfield} \eeq Here, $c$ is the speed of light, $\drm s$ is the so-called line element of a path in spacetime, and $\psi(\vec r,t)/c^2\ll 1$. Additionally, for this metric to hold, the motion of particles inside the source that is responsible for changes of the gravity potential needs to be much slower than the speed of light, and also stresses inside the source must be much smaller than its mass energy density. All conditions are fulfilled in the case of Earth's gravity field. Light follows \emph{null geodesics} with $\drm s^2=0$. For the spacetime metric in Equation (\ref{eq:weakfield}), we can immediately write \beq \begin{split} \left|\frac{\drm \vec r}{\drm t}\right| &= c\sqrt{\frac{1+2\psi(\vec r,t)/c^2}{1-2\psi(\vec r,t)/c^2}}\\ &\approx c(1+2\psi(\vec r,t)/c^2) \end{split} \label{eq:null} \eeq As we will find out, this equation can directly be used to calculate the time delay as an integral along a straight line in terms of the coordinates $\vec r$, but this is not immediately clear since light bends in a gravity field. So one may wonder if integration along the proper light path instead of a straight line yields significant additional corrections. The so-called geodesic equation must be used to calculate the path. It is a set of four differential equations, one for each coordinate $t,\,\vec r$ in terms of a parameter $\lambda$. The weak-field geodesic equation is obtained from the metric in Equation (\ref{eq:weakfield}):\index{geodesic equation} \beq \begin{split} \frac{\drm^2 t}{\drm\lambda^2} &= -\frac{2}{c^2}\frac{\drm t}{\drm\lambda}\frac{\drm \vec r}{\drm\lambda}\cdot\nabla\psi(\vec r,t),\\ \frac{\drm^2 \vec r}{\drm\lambda^2} &= \frac{2}{c^2}\frac{\drm \vec r}{\drm\lambda}\times\left(\frac{\drm \vec r}{\drm\lambda}\times\nabla\psi(\vec r,t)\right), \end{split} \label{eq:geodesic} \eeq where we have made use of Equation (\ref{eq:null}) and the slow-motion condition $|\dot\psi(\vec r,t)|/c\ll |\nabla\psi(\vec r,t)|$. The coordinates $t,\,\vec r$ are to be understood as functions of $\lambda$. Since the deviation from a straight path is due to a weak gravity potential, we can solve these equations by perturbation theory introducing expansions $\vec r=\vec r^{\,(0)}+\vec r^{\,(1)}+\ldots$ and $t=t^{(0)}+t^{(1)}+\ldots$. The superscript indicates the order in $\psi/c^2$. The unperturbed path has the simple parametrization \beq \vec r^{\,(0)}(\lambda)=c\vec e_0\,\lambda+\vec r_0,\quad t^{(0)}(\lambda)=\lambda+t_0 \label{eq:zeroorder} \eeq We have chosen integration constants such that unperturbed time $t^{(0)}$ and parameter $\lambda$ can be used interchangeably (apart from a shift by $t_0$).
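As a quick side calculation justifying the perturbative treatment (our own estimate, not part of the original derivation): at Earth's surface, $|\psi|/c^2\approx GM_\oplus/(R_\oplus c^2)\approx 7\times10^{-10}$, and the time-varying part of the potential relevant for gravity measurements is smaller by many more orders of magnitude.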
Inserting these expressions into the right-hand side of Equation (\ref{eq:geodesic}), we obtain \beq \begin{split} \frac{\drm^2 t^{(1)}}{\drm\lambda^2} &= -\frac{2}{c}\vec e_0\cdot\nabla\psi(\vec r^{\,(0)},t^{(0)}),\\ \frac{\drm^2 \vec r^{\,(1)}}{\drm\lambda^2} &= 2\vec e_0\times\left(\vec e_0\times\nabla\psi(\vec r^{\,(0)},t^{(0)})\right)=2\vec e_0\left(\vec e_0\cdot\nabla\psi(\vec r^{\,(0)},t^{(0)})\right)-2\nabla\psi(\vec r^{\,(0)},t^{(0)}). \end{split} \label{eq:pertgeo} \eeq As we can see, up to linear order in $\psi(\vec r,t)$, the deviation $\vec r^{\,(1)}(\lambda)$ is orthogonal to the direction of the unperturbed path $\vec r^{\,(0)}(\lambda)$, which means that the deviation can be neglected in the calculation of the time delay. After some transformations, it is possible to derive Equation (\ref{eq:null}) from Equation (\ref{eq:pertgeo}), and this time we find explicitly that the right-hand side of the equation only depends on the unperturbed coordinates \footnote{It should be emphasized that in general, the null constraint given by Equation (\ref{eq:null}) cannot be obtained from the geodesic equation since the geodesic equation is valid for all freely falling objects (massive and massless). The reason that the null constraint can be derived from Equation (\ref{eq:pertgeo}) is that we used the null constraint together with the geodesic equation to obtain Equation (\ref{eq:pertgeo}), which is therefore valid only for massless particles.}. In other words, we can integrate the time delay along a straight line as defined in Equation (\ref{eq:zeroorder}), and so the total phase integrated over a travel distance $L$ is given by \beq \begin{split} \Delta\phi(\vec r_0,t_0) &= \omega_0\int\limits_0^{L/c}\drm \lambda\,\frac{\drm t}{\drm \lambda}\\ &= \frac{\omega_0 L}{c}-\frac{2\omega_0}{c^2}\int\limits_0^{L/c}\drm \lambda\, \psi(\vec r^{\,(0)}(\lambda),t^{(0)}(\lambda)) \end{split} \eeq In static gravity fields, the phase shift doubles if the light is sent back since not only the direction of integration changes, but also the sign of the expression substituted for $\drm t/\drm\lambda$. \subsubsection{Gravity induced ground motion} \label{sec:gravground} As we will learn in Section \ref{sec:ambient}, seismic fields produce gravity perturbations either through density fluctuations of the ground, or by displacing interfaces between two materials of different density. It is also well-known in seismology that seismic fields can be affected significantly by self-gravity. Self-gravity means that the gravity perturbation produced by a seismic field acts back on the seismic field. The effect is most significant at low frequency where gravity induced acceleration competes against acceleration from elastic forces. In seismology, low-frequency seismic fields are best described in terms of Earth's normal modes \cite{DaTr1998}.\index{normal modes!Earth} Normal modes exist as toroidal modes and spheroidal modes. Spheroidal modes are influenced by self-gravity, toroidal modes are not. For example, predictions of frequencies and shapes of spheroidal modes based on Earth models such as PREM (Preliminary Reference Earth Model) \cite{DzAn1981} are inaccurate if self-gravity effects are excluded. In practice, this means that in addition to displacement amplitudes, gravity becomes a dynamical variable in the elastodynamic equations that determine the normal-mode properties.
Therefore, seismic displacement and gravity perturbation cannot be separated in normal-mode formalism (although self-gravity can be neglected in calculations of spheroidal modes at sufficiently high frequency). In certain situations, it is necessary or at least more intuitive to separate gravity from seismic fields. An exotic example is Earth's response to GWs \cite{Dys1969,CoHa2014,CoHa2014c,Ben1983,CoHa2014b}. Another example is the seismic response to gravity perturbations produced by strong seismic events at large distance to the source as described in Section \ref{sec:pointsources}. It is more challenging to analyze this scenario using normal-mode formalism. The sum over all normal modes excited by the seismic event (each of which describes a global displacement field) must lead to destructive interference of seismic displacement at large distances (where seismic waves have not yet arrived), but not of the gravity amplitudes since gravity is immediately perturbed everywhere. It can be easier to first calculate the gravity perturbation from the seismic perturbation, and then to calculate the response of the seismic field to the gravity perturbation at larger distance. This method will be adopted in this section. Gravity fields will be represented as arbitrary force or tidal fields (detailed models are presented in later sections), and we simply calculate the response of the seismic field. Normal-mode formalism can be avoided only at sufficiently high frequencies where the curvature of Earth does not significantly influence the response (i.~e.~well above 10\,mHz). In this section, we will model the ground as a homogeneous half-space, but also more complex geologies can in principle be assumed. Gravity can be introduced in two ways into the elastodynamic equations, as a conservative force $-\nabla\psi$ \cite{Run1980,Wan2005}, or as tidal strain $h$. The latter method was first described by Dyson to calculate Earth's response to GWs \cite{Dys1969}. The approach also works for Newtonian gravity, with the difference that the tidal field produced by a GW is necessarily a quadrupole field with only two degrees of freedom (polarizations), while tidal fields produced by terrestrial sources are less constrained. Certainly, GWs can only be fully described in the framework of general relativity, which means that their representation as a Newtonian tidal field cannot be used to explain all possible observations \cite{MTW1973}. Nonetheless, what is important here is that Dyson's method can be extended to Newtonian tidal fields. Without gravity, the elastodynamic equations for small seismic displacement can be written as \beq \rho\partial_t^2\vec\xi(\vec r,t)=\nabla\cdot\boldsymbol{\sigma}(\vec r,t), \label{eq:elastic} \eeq where $\vec\xi(\vec r,t)$ is the seismic displacement field, and $\boldsymbol{\sigma}(\vec r,t)$ is the stress tensor \cite{AkRi2009}. In the absence of other forces, the stress is determined by the seismic field.
In the case of a homogeneous and isotropic medium, the stress tensor for small seismic displacement can be written as \beq \begin{split} \boldsymbol{\sigma}_\epsilon(\vec r,t) &= \lambda{\rm Tr}(\boldsymbol{\epsilon}(\vec r,t))\mathbf{1}+2\mu \boldsymbol{\epsilon}(\vec r,t)\\ \epsilon_{ij}(\vec r,t) &= \frac{1}{2}\left(\partial_i\xi_j(\vec r,t)+\partial_j\xi_i(\vec r,t)\right) \end{split} \label{eq:homoelast} \eeq The quantity $\boldsymbol{\epsilon}(\vec r,t)$ is known as seismic strain tensor\index{seismic strain}, and $\lambda,\,\mu$ are the Lam\'e constants (see Section \ref{sec:seismic})\index{Lam\'e constants}. The trace of the strain tensor is equal to the divergence of the displacement field. Dyson introduced the tidal field from first principles using Lagrangian mechanics, but we can follow a simpler approach. Equation (\ref{eq:homoelast}) means that a stress field builds up in response to a seismic strain field, and the divergence of the stress field acts as a force producing seismic displacement. The same happens in response to a tidal field, which we represent as gravity strain $\mathbf{h}(\vec r,t)$. A strain field changes the distance between two freely falling test masses separated by $\vec L$ by $\delta\vec L(\vec r,t)=\mathbf{h}(\vec r,t)\cdot\vec L$. For sufficiently small distances $L$, the strain field can be substituted by the second time integral of the gravity-gradient tensor $-\nabla\otimes\nabla\psi(\vec r,t)$. If the masses are not freely falling, then the strain field acts as an additional force. The corresponding contribution to the material's stress tensor can be written \beq \begin{split} \boldsymbol{\sigma}_h(\vec r,t) &= -\lambda{\rm Tr}(\boldsymbol{h}(\vec r,t))\mathbf{1}-2\mu \boldsymbol{h}(\vec r,t)\\ \partial_t^2\boldsymbol{\sigma}_h(\vec r,t) &= \lambda(\Delta\psi(\vec r,t))\mathbf{1}+2\mu\nabla\otimes\nabla\psi(\vec r,t) \end{split} \eeq Since we assume that the gravity field is produced by a distant source, the local contribution to gravity perturbations is neglected, which means that the gravity potential obeys the Laplace equation, $\Delta\psi(\vec r,t)=0$. Calculating the divergence of the stress tensor according to Equation (\ref{eq:elastic}), we find that the gravity term vanishes! This means that a homogeneous and isotropic medium does not respond to gravity strain fields. However, we have to be more careful here. Our goal is to calculate the response of a half-space to gravity strain. Even if the half-space is homogeneous, the Lam\'e constants change discontinuously across the surface. Hence, at the surface, the divergence of the stress tensor reads \beq \partial_t^2(\nabla\cdot\boldsymbol{\sigma}_h(\vec r,t))=2(\nabla\mu)\cdot(\nabla\otimes\nabla\psi(\vec r,t))=-2(\nabla\mu)\cdot\partial_t^2\boldsymbol{h}(\vec r,t) \eeq In other words, tidal fields produce a force onto an elastic medium via gradients in the shear modulus (second Lam\'e constant). The gradient of the shear modulus can be written in terms of a Dirac delta function, $\nabla\mu=-\mu\delta(z)\vec e_n$, for a flat surface at $z=0$ with unit normal vector $\vec e_n$.
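The vanishing of the gravity term in the bulk can be spot-checked symbolically; the following sketch (for one specific harmonic potential of our own choosing) confirms it:
\begin{verbatim}
import sympy as sp

# Symbolic spot check: for a harmonic potential (Delta psi = 0), the
# divergence of the tidal stress sigma_h vanishes in the bulk. The
# specific potential psi = exp(k*z)*cos(k*x) is our own choice.
x, y, z, k = sp.symbols('x y z k', real=True)
lam, mu = sp.symbols('lambda mu', positive=True)
psi = sp.exp(k*z)*sp.cos(k*x)               # harmonic: Delta psi = 0
X = [x, y, z]

hess = sp.Matrix(3, 3, lambda i, j: sp.diff(psi, X[i], X[j]))
sigma_tt = lam*hess.trace()*sp.eye(3) + 2*mu*hess   # d^2/dt^2 of sigma_h

div = sp.Matrix([sum(sp.diff(sigma_tt[i, j], X[j]) for j in range(3))
                 for i in range(3)])
print(sp.simplify(div))                     # -> zero vector
\end{verbatim}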
The response to gravity strain fields is obtained by applying the boundary condition of vanishing surface traction\index{traction}, $\vec e_n\cdot\boldsymbol{\sigma}(\vec r,t)=0$: \beq \lambda {\rm Tr}(\boldsymbol{\epsilon}(\vec r,t))\vec e_n+2\mu\,\vec e_n\cdot(\boldsymbol{\epsilon}(\vec r,t)-\boldsymbol{h}(\vec r,t))=0 \eeq Once the seismic strain field is calculated, it can be used to obtain the seismic stress, which determines the displacement field $\vec \xi(\vec r,t)$ according to Equation (\ref{eq:elastic}). In this way, one can for example show that a seismometer or gravimeter can observe GWs by monitoring surface displacement, as first calculated by Dyson \cite{Dys1969}. \subsubsection{Coupling in non-uniform, static gravity fields}\index{gravity gradient!coupling} \label{sec:gravgrad} If the gravity field is static, but non-uniform, then displacement $\vec\xi(t)$ of the test mass in this field due to a non-gravitational fluctuating force is associated with a changing gravity acceleration according to \beq \delta \vec a(\vec r,t)=(\nabla\otimes\vec g(\vec r\,))\cdot\vec\xi(t) \label{eq:gravgrad} \eeq We introduce a characteristic length $\lambda$, over which gravity acceleration varies significantly. Hence, we can rewrite the last equation in terms of the associated test-mass displacement $\zeta$ \beq \zeta(\omega)\sim \frac{g}{\omega^2}\frac{\xi(\omega)}{\lambda}, \eeq where we have neglected directional dependence and numerical factors. The acceleration change from motion in static, inhomogeneous fields is generally more significant at low frequencies. Let us consider the specific case of a suspended test mass. It responds to fluctuations in horizontal gravity acceleration. The test mass follows the motion of the suspension point in vertical direction (i.~e.~no seismic isolation), while seismic noise in horizontal direction is suppressed according to Equation (\ref{eq:isolvibr}). Accordingly, it is possible that the unsuppressed vertical ($z$-axis) seismic noise $\xi_z(t)$ coupling into the horizontal ($x$-axis) motion of the test mass through the term $\partial_x g_z=\partial_z g_x$ dominates over the gravity response term in Equation (\ref{eq:accresp}). Due to additional coupling mechanisms between vertical and horizontal motion in real seismic-isolation systems, test masses especially in GW detectors are also isolated in vertical direction, but without achieving the same noise suppression as in horizontal direction. For example, the requirements on vertical test-mass displacement for Advanced LIGO are a factor 1000 less stringent than on the horizontal displacement \cite{BaEA2013}. Requirements can be set on the vertical isolation by estimating the coupling of vertical motion into horizontal motion, which needs to take the gravity-gradient coupling of Equation (\ref{eq:gravgrad}) into account. Because of the frequency dependence, however, gravity-gradient effects are more significant in low-frequency detectors, such as the space-borne GW detector LISA \cite{Sch2003}. Next, we calculate an estimate of gravity gradients in the vicinity of test masses in large-scale GW detectors, and see if the gravity-gradient coupling matters compared to mechanical vertical-to-horizontal coupling. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{Chp2-CylinderGrad1.png}\hspace*{0.3cm} \includegraphics[width=0.4\textwidth]{Chp2-CylinderGrad2.png}} \caption[Gravity gradients inside a hollow cylinder]{Gravity gradients inside a hollow cylinder.
The total height of the cylinder is $L$, and $M$ is its total mass. The radius of the cylinder is $0.3 L$. The axes correspond to the distance of the test mass from the symmetry axis of the cylinder, and its height above one of the cylinder's ends. The plot on the right is simply a zoom of the left plot into the intermediate heights.} \label{fig:cylingrad} \end{figure}} One contribution to gravity gradients will come from the vacuum chamber surrounding the test mass. We approximate the shape of the chamber as a hollow cylinder with open ends (open ends just to simplify the calculation). In our calculation, the test mass can be offset from the cylinder axis and be located at any distance to the cylinder ends (we refer to this coordinate as height). The gravity field can be expressed in terms of elliptic integrals, but the explicit solution is not of concern here. Instead, let us take a look at the results in Figure \ref{fig:cylingrad}. Gravity gradients $\partial_z g_x$ vanish if the test mass is located on the symmetry axis or at height $L/2$. There are also two additional $\partial_z g_x=0$ contour lines starting at the symmetry axis at heights $\sim 0.24\,L$ and $\sim 0.76\,L$. Let us assume that the test mass is at height $0.3 L$, a distance $0.05L$ from the cylinder axis, the total mass of the cylinder is $M=5000\,$kg, and the cylinder height is $L=4\,$m. In this case, the gravity-gradient induced vertical-to-horizontal coupling factor at 20\,Hz is \beq \zeta/\xi\sim 0.1 \frac{GM}{L^3\omega^2}\sim 3\times 10^{-14} \eeq This means that gravity-gradient induced coupling is extremely weak, and lies well below estimates of mechanical coupling (of order 0.001 in Advanced LIGO \footnote{According to pages 2 and 25 of the second attachment to \url{https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=6760}}). Even though the vacuum chamber was modelled with a very simple shape, and additional asymmetries in the mass distribution around the test mass may increase gravity gradients, it still seems very unlikely that the coupling would be significant. As mentioned before, one certainly needs to pay more attention when calculating the coupling at lower frequencies. The best procedure is of course to have a 3D model of the near test-mass infrastructure available and to use it for a precise calculation of the gravity-gradient field. \subsection{Gravimeters} \label{sec:superg}\index{gravimeter} Gravimeters are instruments that measure the displacement of a test mass with respect to a non-inertial reference rigidly connected to the ground. The test mass is typically supported mechanically or magnetically (atom-interferometric gravimeters are an exception), which means that the test-mass response to gravity is altered with respect to a freely falling test mass. We will use Equation (\ref{eq:accresp}) as a simplified response model. There are various possibilities to measure the displacement of a test mass. The most widespread displacement sensors are based on capacitive readout, as for example used in superconducting gravimeters (see Figure \ref{fig:superg} and \cite{HCW2007}). Sensitive displacement measurements are in principle also possible with optical readout systems, a method that is (necessarily) implemented in atom-interferometric gravimeters \cite{PCC2001}, and prototype seismometers \cite{BeEA2014} (we will explain the distinction between seismometers and gravimeters below).
As will become clear in Section \ref{sec:gravstrain}, optical readout is better suited for displacement measurements over long baselines, as required for the most sensitive gravity strain measurements, while the capacitive readout should be designed with the smallest possible distance between the test mass and the non-inertial reference \cite{JoRi1973}. Let us take a closer look at the basic measurement scheme of a superconducting gravimeter shown in Figure \ref{fig:superg}. The central part is formed by a spherical superconducting shell that is levitated by superconducting coils. Superconductivity provides stability of the measurement, and also avoids some forms of noise (see \cite{HCW2007} for details). In this gravimeter design, the lower coil mostly serves to balance the mean gravitational force acting on the sphere, while the upper coil modifies the magnetic gradient such that a certain ``spring constant'' of the magnetic levitation is realized. In other words, the current in the upper coil determines the resonance frequency in Equation (\ref{eq:accresp}). \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp2-SuperG.png}} \caption[Superconducting gravimeter]{Sketch of a levitated sphere serving as test mass in a superconducting gravimeter. Dashed lines indicate magnetic field lines. Coils are used for levitation and precise positioning of the sphere. From Hinderer et al.\ \cite{HCW2007}.} \label{fig:superg} \end{figure}} Capacitor plates are distributed around the sphere. Whenever a force acts on the sphere, the small signal produced in the capacitive readout is used to immediately cancel this force by a feedback coil. In this way, the sphere is kept at a constant location with respect to the external frame. This illustrates a common concept in all gravimeters. The displacement sensors can only respond to relative displacement between a test mass and a surrounding structure. If small gravity fluctuations are to be measured, then it is not sufficient to realize low-noise readout systems, but also vibrations of the surrounding structure forming the reference frame must be as small as possible. In general, as we will further explore in the coming sections, gravity fluctuations are increasingly dominant with decreasing frequency. At about 1\,mHz, gravity acceleration associated with fluctuating seismic fields becomes comparable to seismic acceleration, and also atmospheric gravity noise starts to be significant \cite{CHR2013}. At higher frequencies, seismic acceleration is much stronger than typical gravity fluctuations, which means that the gravimeter effectively operates as a seismometer. In summary, at sufficiently low frequencies, the gravimeter senses gravity accelerations of the test mass with respect to a relatively quiet reference, while at higher frequencies, the gravimeter senses seismic accelerations of the reference with respect to a test mass subject to relatively small gravity fluctuations. In superconducting gravimeters, the third important contribution to the response is caused by vertical motion $\xi(t)$ of a levitated sphere against a static gravity gradient (see Section \ref{sec:gravgrad}). As explained above, feedback control suppresses relative motion between sphere and gravimeter frame, which causes the sphere to move as if attached to the frame or ground.
In the presence of a static gravity gradient $\partial_z g_z$, the motion of the sphere against this gradient leads to a change in gravity, which alters the feedback force (and therefore the recorded signal). The full contribution from gravitational, $\delta a(t)$, and seismic, $\ddot \xi(t)=\delta\alpha(t)$, accelerations can therefore be written \beq s(t)=\delta a(t)-\delta\alpha(t)+(\partial_z g_z)\xi(t) \label{eq:gravdata} \eeq It is easy to verify, using Equations (\ref{eq:accresp}) and (\ref{eq:isolvibr}), that the relative amplitude of gravity and seismic fluctuations from the first two terms is independent of the test-mass support. Therefore, vertical seismic displacement of the reference frame must be considered fundamental noise of gravimeters and can only be avoided by choosing a quiet measurement site. Obviously, Equation (\ref{eq:gravdata}) is based on a simplified support model. One of the important design goals of the mechanical support is to minimize \emph{additional} noise due to non-linearities and cross-coupling. As is explained further in Section \ref{sec:gradio}, it is also not possible to suppress seismic noise in \emph{gravimeters} by subtracting the disturbance using data from a collocated seismometer. Doing so inevitably turns the gravimeter into a gravity gradiometer. Gravimeters target signals that typically lie well below 1\,mHz. Mechanical or magnetic supports of test masses have resonance frequencies at best slightly below 10\,mHz along horizontal directions, and typically above 0.1\,Hz in the vertical direction \cite{BeEA1997,WLB1999} \footnote{Winterflood explains in his thesis why vertical resonance frequencies are higher than horizontal, and why this does not necessarily have to be so \cite{Win2002}.}. Well below resonance frequency, the response function can be approximated as $\omega^2/\omega_0^2$. At first, it may look as if the gravimeter should not be sensitive to very low-frequency fluctuations since the response becomes very weak. However, the strength of gravity fluctuations also strongly increases with decreasing frequency, which compensates for the small response. It is clear though that if the resonance frequency was sufficiently high, then the response would become so weak that the gravity signal would not stand out above other instrumental noise anymore. The test-mass support would be too stiff. The sensitivity of the gravimeter depends on the resonance frequency of the support and the intrinsic instrumental noise. With respect to seismic noise, the stiffness of the support has no influence as explained before (the test mass can also fall freely as in atom interferometers). \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp2-SuperMedians.pdf}} \caption[Median spectra of superconducting gravimeters]{Median spectra of superconducting gravimeters of the GGP \cite{CoHa2014b}.} \label{fig:supernoise} \end{figure}} For superconducting gravimeters of the Global Geodynamics Project (GGP) \cite{CrHi2010}, the median spectra are shown in Figure \ref{fig:supernoise}. Between 0.1\,mHz and 1\,mHz, atmospheric gravity perturbations typically dominate, while instrumental noise is the largest contribution between 1\,mHz and 5\,mHz \cite{HCW2007}. The smallest signal amplitudes that have been measured by integrating long-duration signals are about $10^{-12}\,\rm m/s^2$. A detailed study of noise in superconducting gravimeters over a larger frequency range can be found in \cite{RoEA2003}.
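Returning briefly to Equation (\ref{eq:gravdata}), the support-independence of the relative gravity and seismic amplitudes is quickly made explicit (a sketch, using the idealized support model of Section \ref{sec:respgh} and ignoring the static-gradient term): the relative displacement $x(t)$ between test mass and frame obeys \beq \ddot x(t)+\gamma\dot x(t)+\omega_0^2\,x(t)=\delta a(t)-\delta\alpha(t), \eeq so gravity and seismic accelerations enter the recorded signal through one and the same transfer function, and their ratio is independent of the values of $\omega_0$ and $\gamma$.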
Note that in some cases, it is not appropriate to categorize seismic and gravity fluctuations as noise and signal. For example, Earth's spherical normal modes coherently excite seismic and gravity fluctuations, and the individual contributions in Equation (\ref{eq:gravdata}) have to be understood only in order to accurately translate data into normal-mode amplitudes \cite{DaTr1998}. \subsection{Gravity gradiometers} \label{sec:gradio}\index{gravity gradiometer} It is not the purpose of this section to give a complete overview of the different gradiometer designs. Gradiometers find many practical applications, for example in navigation and resource exploration, often with the goal of measuring static or slowly changing gravity gradients, which do not concern us here. For example, we will not discuss rotating gradiometers, and instead focus on gradiometers consisting of stationary test masses. While the former are ideally suited to measure static or slowly changing gravity gradients with high precision, especially under noisy conditions, the latter design has advantages when measuring weak tidal fluctuations. In the following, we only refer to the stationary design. A gravity gradiometer measures the relative acceleration between two test masses, each responding to fluctuations of the gravity field \cite{Jek2014,MPC2002}. The test masses have to be located close to each other so that the approximation in Equation (\ref{eq:resptide}) holds. The proximity of the test masses is used here as the defining property of gradiometers. They are therefore a special type of gravity strainmeter (see Section \ref{sec:gravstrain}), which denotes any type of instrument that measures relative gravitational acceleration (including the even more general concept of measuring space-time strain). Gravity gradiometers can be realized in two versions. First, one can read out the position of two test masses with respect to the same rigid, non-inertial reference. The two channels, each of which can be considered a gravimeter, are subsequently subtracted. This scheme is for example realized in dual-sphere designs of superconducting gravity gradiometers \cite{HaEA2000} or in atom-interferometric gravity gradiometers \cite{SoEA2014}\index{gravity gradiometer! superconducting, dual-sphere}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp2-Gradiometer.pdf}} \caption[Gravity gradiometer]{Basic scheme of a gravity gradiometer for measurements along the vertical direction. Two test masses are supported by horizontal cantilevers (superconducting magnets, $\ldots$). Acceleration of both test masses is measured against the same non-inertial reference frame, which is connected to the ground. Each measurement constitutes one gravimeter. Subtraction of the two channels yields a gravity gradiometer.} \label{fig:gradiometer} \end{figure}} It is schematically shown in Figure \ref{fig:gradiometer}. Let us first consider the dual-sphere design of a superconducting gradiometer. If the reference is perfectly stiff, and if we assume as before that there are no cross-couplings between degrees of freedom and the response is linear, then the subtraction of the two gravity channels cancels all of the seismic noise, leaving only the instrumental noise and the differential gravity signal given by the second line of Equation (\ref{eq:resptide}).
Even in real setups, the reduction of seismic noise can be many orders of magnitude since the two spheres are close to each other, and the two readouts pick up (almost) the same seismic noise \cite{MPC2002}. This does not mean though that gradiometers are necessarily more sensitive instruments to monitor gravity fields. A large part of the gravity signal (the common-mode part) is subtracted together with the seismic noise, and the challenge is now passed from finding a seismically quiet site to developing an instrument with the lowest possible intrinsic noise. The atom-interferometric gradiometer differs in some important details from the superconducting gradiometer\index{gravity gradiometer!atom interferometric}. The test masses are realized by ultracold atom clouds, which are (nearly) freely falling provided that magnetic shielding of the atoms is sufficient, and interactions between atoms can be neglected. Interactions of a pair of atom clouds with a laser beam constitute the basic gravity gradiometer scheme. Even though the test masses are freely falling, the readout is not generally immune to seismic noise \cite{Har2011,BaTh2012}. The laser beam interacting with the atom clouds originates from a source subject to seismic disturbances, and interacts with optics that require seismic isolation. Schemes have been proposed that could lead to a large reduction of seismic noise \cite{NaTi2011,GrEA2013}, but their effectiveness has not yet been tested in experiments. Since the differential position (or tidal) measurement is performed using a laser beam, the natural application of atom-interferometer technology is as a gravity strainmeter (as explained before, laser beams are favorable for differential position measurements over long baselines). Nonetheless, the technology is currently insufficiently developed to realize large-baseline experiments, and we can therefore focus on its application in gradiometry. Let us take a closer look at the response of atom-interferometric gradiometers to seismic noise. In atom-interferometric detectors (excluding the new schemes proposed in \cite{NaTi2011,GrEA2013}), one can show that seismic acceleration $\delta\alpha(\omega)$ of the optics or laser source limits the sensitivity of a tidal measurement according to \beq \delta a_{12}(\omega)\sim \frac{\omega L}{c}\delta\alpha(\omega), \label{eq:seismicatom} \eeq where $L$ is the separation of the two atom clouds, and $c$ is the speed of light. It should be emphasized that the seismic noise remains, even if the optics and the laser source are all linked to the same infinitely stiff frame. In addition to this noise term, other coupling mechanisms may play a role, which can however be suppressed by engineering efforts. The noise-reduction factor $\omega L/c$ needs to be compared with the common-mode suppression of seismic noise in superconducting gravity gradiometers, which depends on the stiffness of the instrument frame, and on contamination from cross coupling of degrees of freedom. While the seismic noise in Equation (\ref{eq:seismicatom}) is a fundamental noise contribution in (conventional) atom-interferometric gradiometers, the noise suppression in superconducting gradiometers depends more strongly on the engineering effort (at least, we venture to claim that the common-mode suppression achieved in current instrument designs is well below what is fundamentally possible).
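To get a feeling for the scale of the noise-reduction factor in Equation (\ref{eq:seismicatom}), one can evaluate $\omega L/c$ for laboratory-scale parameters (the cloud separation below is an assumed value):
\begin{verbatim}
import numpy as np

c = 299792458.0            # speed of light, m/s
L = 1.0                    # assumed separation of atom clouds, m
for f in (0.01, 0.1, 1.0, 10.0):
    w = 2 * np.pi * f
    print("f = %5.2f Hz: wL/c = %.1e" % (f, w * L / c))
\end{verbatim}
Even at 10\,Hz and a 1\,m baseline, seismic acceleration noise is suppressed by about seven orders of magnitude relative to a single-test-mass readout.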
To conclude this section, we discuss in more detail the connection between gravity gradiometers and seismically (actively or passively) isolated gravimeters. As we have explained in Section \ref{sec:superg}, the sensitivity limitation of gravimeters by seismic noise is independent of the mechanical support of the test mass (assuming an ideal, linear support). The main purpose of the mechanical support is to maximize the response of the test mass to gravity fluctuations, and thereby increase the signal with respect to instrumental noise other than seismic noise. Here we will explain that even a seismic isolation of the gravimeter cannot overcome this noise limitation, at least not without fundamentally changing its response to gravity fluctuations. Let us first consider the case of a passively seismically isolated gravimeter. For example, we can imagine that the gravimeter is suspended from the tip of a strong horizontal cantilever. The system can be modelled as two oscillators in a chain, with a light test mass $m$ supported by a heavy mass $M$ representing the gravimeter (reference) frame, which is itself supported from a point rigidly connected to Earth. The two supports are modelled as harmonic oscillators. As before, we neglect cross coupling between degrees of freedom. Linearizing the response of the gravimeter frame and test mass for small accelerations, and further neglecting terms proportional to $m/M$, one finds the gravimeter response to gravity fluctuations: \beq \begin{split} \delta a(\omega) &= R(\omega;\omega_2,\gamma_2)\left(\delta g_2(\omega)-R(\omega;\omega_1,\gamma_1)\delta g_1(\omega)\right)\\ &= R(\omega;\omega_2,\gamma_2)\left(\delta g_2(\omega)-\delta g_1(\omega)+S(\omega;\omega_1,\gamma_1)\delta g_1(\omega)\right) \end{split} \label{eq:gravisolpass} \eeq Here, $\omega_1,\,\gamma_1$ are the resonance frequency and damping of the gravimeter support, while $\omega_2,\,\gamma_2$ are the resonance frequency and damping of the test-mass support. The response and isolation functions $R(\cdot),\,S(\cdot)$ are defined in Equations (\ref{eq:accresp}) and (\ref{eq:isolvibr}). Remember that Equation (\ref{eq:gravisolpass}) is obtained as a differential measurement of test-mass acceleration versus acceleration of the reference frame. Therefore, $\delta g_1(\omega)$ denotes the gravity fluctuation at the center of mass of the gravimeter frame, and $\delta g_2(\omega)$ at the test mass. An infinitely stiff gravimeter suspension, $\omega_1\rightarrow\infty$, yields $R(\omega;\omega_1,\gamma_1)=0$, and the response turns into the form of the non-isolated gravimeter. The seismic isolation is determined by \beq \delta a(\omega)=-R(\omega;\omega_2,\gamma_2)S(\omega;\omega_1,\gamma_1)\delta \alpha(\omega) \eeq We can summarize the last two equations as follows. At frequencies well above $\omega_1$, the seismically isolated gravimeter responds like a gravity gradiometer, and seismic noise is strongly suppressed. The deviation from the pure gradiometer response $\sim \delta g_2(\omega)-\delta g_1(\omega)$ is determined by the same function $S(\omega;\omega_1,\gamma_1)$ that describes the seismic isolation. In other words, if the gravity gradient were negligible, then we would end up with the conventional gravimeter response, with signals suppressed by the seismic isolation function. Well below $\omega_1$, the seismically isolated gravimeter responds like a conventional gravimeter without seismic-noise reduction.
If the centers of the masses $m$ (test mass) and $M$ (reference frame) coincide, and therefore $\delta g_1(\omega)=\delta g_2(\omega)$, then the response is again like a conventional gravimeter, but this time suppressed by the isolation function $S(\omega;\omega_1,\gamma_1)$. Let us compare the passively isolated gravimeter with an actively isolated gravimeter. In active isolation, the idea is to place the gravimeter on a stiff platform whose orientation can be controlled by actuators. Without actuation, the platform simply follows local surface motion. There are two ways to realize an active isolation. One way is to place a seismometer next to the platform onto the ground, and use its data to subtract ground motion from the platform. The actuators cancel the seismic forces. This scheme is called feed-forward noise cancellation\index{active noise cancellation!seismic}. Feed-forward cancellation of gravity noise is discussed at length in Section \ref{sec:cohcancel}, which provides details on its implementation and limitations. The second possibility is to place the seismometer together with the gravimeter onto the platform, and to suppress seismic noise in a feedback configuration \cite{AbCh2000,AbEA2004}. In the following, we discuss the feed-forward technique as an example since it is easier to analyze (for example, feedback control can be unstable \cite{AbCh2000}). As before, we focus on gravity and seismic fluctuations. The seismometer's intrinsic noise plays an important role in active isolation, limiting its performance, but here we are only interested in the modification of the gravimeter's response. Since there is no fundamental difference in how a seismometer and a gravimeter respond to seismic and gravity fluctuations, we know from Section \ref{sec:superg} that the seismometer output is proportional to $\delta g_1(\omega)-\delta\alpha(\omega)$, i.~e.~using a single test mass for acceleration measurements, seismic and gravity perturbations contribute in the same way. A transfer function needs to be applied to the acceleration signals, which accounts for the mechanical support and possibly also electronic circuits involved in the seismometer readout. To cancel the seismic noise of the platform that carries the gravimeter, the effect of all transfer functions needs to be reversed by a matched feed-forward filter. The output of the filter is then equal to $\delta g_1(\omega)-\delta\alpha(\omega)$ and is added to the motion of the platform using actuators, cancelling the seismic noise and adding the seismometer's gravity signal. In this case, the seismometer's gravity signal takes the place of the seismic noise in Equation (\ref{eq:isolvibr}). The complete gravity response of the actively isolated gravimeter then reads \beq \delta a(\omega) = R(\omega;\omega_2,\gamma_2)(\delta g_2(\omega)-\delta g_1(\omega)) \label{eq:gravisolact} \eeq The response is identical to that of a gravity gradiometer, where $\omega_2,\gamma_2$ are the resonance frequency and damping of the gravimeter's test-mass support. In reality, instrumental noise of the seismometer will limit the isolation performance and introduce additional noise into Equation (\ref{eq:gravisolact}). Nonetheless, Equations (\ref{eq:gravisolpass}) and (\ref{eq:gravisolact}) show that any form of seismic isolation turns a gravimeter into a gravity gradiometer at frequencies where seismic isolation is effective.
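The idealized outcome in Equation (\ref{eq:gravisolact}) can be mimicked with a toy time-domain model (a sketch with assumed white-noise traces; transfer functions, the response $R(\omega;\omega_2,\gamma_2)$, and seismometer intrinsic noise are all omitted):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 100000
alpha = rng.normal(size=n)              # seismic acceleration (arbitrary units)
g1 = 1e-2 * rng.normal(size=n)          # gravity fluctuation at the seismometer
g2 = g1 + 1e-3 * rng.normal(size=n)     # gravity at the test mass (mostly common)

witness = g1 - alpha                    # seismometer output
platform = alpha + witness              # actuation adds filter output -> platform = g1
print("no isolation:", np.std(g2 - alpha))      # seismic-noise limited
print("feed-forward:", np.std(g2 - platform))   # gradiometer-like response g2 - g1
\end{verbatim}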
For the passive seismic isolation, this means that the gravimeter responds like a gradiometer at frequencies well above the resonance frequency $\omega_1$ of the gravimeter support, while it behaves like a conventional gravimeter below $\omega_1$. From these results it is clear that the design of seismic isolations and the gravity response cannot, in general, be treated independently. As we will see in Section \ref{sec:gravstrain} though, tidal measurements can profit strongly from seismic isolation, especially when common-mode suppression of seismic noise as in gradiometers is insufficient or completely absent. \subsection{Gravity strainmeters} \label{sec:gravstrain}\index{strainmeter!gravity} Gravity strain is an unusual concept in gravimetry that stems from our modern understanding of gravity in the framework of general relativity. From an observational point of view, it is not much different from elastic strain. Fluctuating gravity strain causes a change in distance between two freely falling test masses, while seismic or elastic strain causes a change in distance between two test masses bolted to an elastic medium. Fundamentally, gravity strain corresponds to a perturbation of the metric that determines the geometrical properties of spacetime \cite{MTW1973}. It should be emphasized though that there are important differences between seismic and gravity strain, which can play a role in certain experiments \cite{KaCh2004}. To understand this better, we need to talk briefly about GWs, before we can return to a Newtonian description of gravity strain. Gravitational waves are weak perturbations of spacetime propagating at the speed of light\index{gravitational wave}. Freely falling test masses change their distance in the field of a GW. When the wavelength of the GW is much larger than the separation between the test masses, it is possible to interpret this change as if caused by a Newtonian force. We call this the long-wavelength regime. Since we are interested in the low-frequency response of gravity strainmeters throughout this article (i.~e.~frequencies well below 100\,Hz), this condition is always fulfilled for Earth-bound experiments. The effect of a gravity-strain field $\mathbf h(\vec r,t)$ on a pair of test masses can then be represented as an equivalent Newtonian tidal field \beq \delta a_{12}(\vec r,t)=\frac{1}{2}L \vec e_{12}^\top\cdot\ddot{\mathbf h}(\vec r,t)\cdot \vec e_{12} \label{eq:tidalstrain} \eeq Here, $\delta a_{12}(\vec r,t)$ is the relative acceleration between two freely falling test masses, $L$ is the distance between them, and $\vec e_{12}$ is the unit vector pointing from one test mass to the other. As can be seen, the gravity-strain field is represented by a $3\times 3$ tensor. It contains the space components of a 4-dimensional metric perturbation of spacetime, and determines all properties of GWs \footnote{In order to identify components of the metric perturbation with tidal forces acting on test masses, one needs to choose specific spacetime coordinates, the so-called transverse-traceless gauge \cite{MTW1973}.}. Note the factor $1/2$ in Equation (\ref{eq:tidalstrain}), which is a consequence of $\mathbf h(\vec r,t)$ being defined as the space components of a metric perturbation. This factor does not appear in analogous equations, for example, for seismic strain. The strain field of a GW takes the form of a quadrupole oscillation with two possible polarizations, commonly denoted $\times$(cross)-polarization and $+$(plus)-polarization.
The arrows in Figure \ref{fig:polarizeGW} indicate the lines of the equivalent tidal field of Equation (\ref{eq:tidalstrain}). \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp2-PolarizeGW.pdf}} \caption[Polarizations of a gravitational wave]{Polarizations of a gravitational wave.} \label{fig:polarizeGW} \end{figure}} Consequently, to (directly) observe GWs, one can follow two possible schemes: (1) the conventional method, which is a measurement of the \emph{relative displacement} of suspended test masses typically carried out along two perpendicular baselines (arms); and (2) measurement of the \emph{relative rotation} between two suspended bars. Figure \ref{fig:measureGW} illustrates the two cases. In either case, the response of a gravity strainmeter is obtained by projecting the gravity strain tensor onto a combination of two unit vectors, $\vec e_1$ and $\vec e_2$, that characterize the orientation of the detector, such as the directions of two bars in a rotational gravity strainmeter, or of two arms of a conventional gravity strainmeter. This requires us to define two different gravity strain projections. The projection for the rotational strain measurement is given by \begin{equation} h_\times(\vec r\,,t)=(\vec e_1^{\top}\cdot {\mathbf h}(\vec r\,,t)\cdot\vec e_1^{\,\rm r}-\vec e_2^{\top}\cdot {\mathbf h}(\vec r\,,t)\cdot\vec e_2^{\,\rm r})/2, \label{eq:hx} \end{equation} where the subscript $\times$ indicates that the detector responds to the $\times$-polarization assuming that the $x,y$-axes (see Figure \ref{fig:polarizeGW}) are oriented along two perpendicular bars. The vectors $\vec e_1^{\,\rm r}$ and $\vec e_2^{\,\rm r}$ are rotated counter-clockwise by 90$^\circ$ with respect to $\vec e_1$ and $\vec e_2$. In the case of perpendicular bars, $\vec e_1^{\,\rm r}=\vec e_2$ and $\vec e_2^{\,\rm r}=-\vec e_1$. The corresponding projection for the conventional gravity strainmeter reads \begin{equation} h_+(\vec r\,,t)=(\vec e_1^{\top}\cdot {\mathbf h}(\vec r\,,t)\cdot\vec e_1-\vec e_2^{\top}\cdot {\mathbf h}(\vec r\,,t)\cdot\vec e_2)/2 \label{eq:hp} \end{equation} The subscript $+$ indicates that the detector responds to the $+$-polarization provided that the $x,\,y$-axes are oriented along two perpendicular baselines (arms) of the detector. The two schemes are shown in Figure \ref{fig:measureGW}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{Chp2-TOBA.pdf} \includegraphics[width=0.4\textwidth]{Chp2-LIGO.pdf}} \caption[Gravity strainmeter]{Sketches of the relative rotational and displacement measurement schemes.} \label{fig:measureGW} \end{figure}} The most sensitive GW detectors are based on the conventional method, and the distance between test masses is measured by means of laser interferometry. The LIGO and Virgo detectors have achieved strain sensitivities of better than $10^{-22}$\,Hz$^{-1/2}$ between about 50\,Hz and 1000\,Hz in past science runs and are currently being commissioned in their advanced configurations \cite{LSC2010,AcEA2015}. The rotational scheme is realized in torsion-bar antennas, which are considered a possible technology for sub-Hz GW detection \cite{ShEA2014,EdEA2014}\index{torsion-bar antenna}. However, with an achieved strain sensitivity of about $10^{-8}$\,Hz$^{-1/2}$ near 0.1\,Hz, the torsion-bar detectors are far from the sensitivity we expect to be necessary for GW detection \cite{HaEA2013}.
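Equations (\ref{eq:tidalstrain})--(\ref{eq:hp}) are easy to evaluate numerically. The sketch below assumes a purely $+$-polarized strain tensor of amplitude $h_0=10^{-21}$ at 100\,Hz, perpendicular arms/bars along the $x,y$-axes, and a 4\,km baseline (all placeholder values):
\begin{verbatim}
import numpy as np

h0, f, L = 1e-21, 100.0, 4000.0          # assumed amplitude, Hz, m
h = np.array([[h0, 0, 0], [0, -h0, 0], [0, 0, 0]])
w = 2 * np.pi * f
e1, e2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])

# Projections of Eqs. (eq:hp) and (eq:hx) for perpendicular arms/bars:
h_plus = (e1 @ h @ e1 - e2 @ h @ e2) / 2      # -> h0
e1r, e2r = e2, -e1                            # bars rotated by 90 degrees
h_cross = (e1 @ h @ e1r - e2 @ h @ e2r) / 2   # -> 0 for pure +-polarization

a12 = 0.5 * L * w**2 * h0                     # |delta a_12| along one arm
print(h_plus, h_cross, a12)                   # 1e-21, 0, ~8e-13 m/s^2
\end{verbatim}
The resulting equivalent tidal acceleration of order $10^{-12}\,{\rm m/s^2}$ illustrates why such small signals require the extreme displacement sensitivity of laser interferometry.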
Let us now return to the discussion of the previous sections on the role of seismic isolation and its impact on gravity response. Gravity strainmeters profit from seismic isolation more than gravimeters or gravity gradiometers. We have shown in Section \ref{sec:gradio} that seismically isolated gravimeters are effectively gravity gradiometers. So in this case, seismic isolation changes the response of the instrument in a fundamental way, and it does not make sense to talk of seismically isolated gravimeters. Seismic isolation could in principle be beneficial for gravity gradiometers (i.~e.~where the acceleration of two test masses is measured with respect to a common rigid, seismically isolated reference frame), but the common-mode rejection of seismic noise (and gravity signals) due to the differential readout is typically so high that other instrumental noise becomes dominant. So it is possible that some gradiometers would profit from seismic isolation, but it is not generally true. Let us now consider the case of a gravity strainmeter. As explained in Section \ref{sec:gradio}, we distinguish gradiometers and strainmeters by the distance of their test masses. For example, the arm lengths of the LIGO and Virgo detectors are 4\,km and 3\,km, respectively. Seismic noise and terrestrial gravity fluctuations are insignificantly correlated between the two test masses within the detectors' most sensitive frequency band (above 10\,Hz). Therefore, the approximation in Equation (\ref{eq:resptide}) does not apply. Certainly, the distinction between gravity gradiometers and strainmeters remains somewhat arbitrary since at any frequency the approximation in Equation (\ref{eq:resptide}) can hold for one type of gravity fluctuation, while it does not hold for another. Let us adopt a more practical definition at this point. \emph{Whenever the design of the instrument places the test masses as distant as possible from each other given current technology, then we call such an instrument a strainmeter}\index{strainmeter!gravity (practical definition)}. In the following, we will discuss seismic isolation and gravity response for three strainmeter designs: the laser-interferometric, atom-interferometric, and superconducting strainmeters. It should be emphasized that the atom-interferometric and superconducting concepts are still at the beginning of their development and have not yet been realized with scientifically interesting sensitivities. \paragraph{Laser-interferometric strainmeters} The most sensitive gravity strainmeters, namely the large-scale GW detectors, use laser interferometry to read out the relative displacement between mirror pairs forming the test masses. Each test mass in these detectors is suspended from a seismically isolated platform, with the suspension itself providing additional seismic isolation. Section \ref{sec:respgh} introduced a simplified response and isolation model based on a harmonic oscillator characterized by a resonance frequency $\omega_0$ and viscous damping $\gamma$ \footnote{In reality, the dominant damping mechanism in suspension systems is not viscous damping, but structural damping characterized by the so-called loss angle $\phi$, which quantifies the imaginary part of the elastic modulus \cite{Sau1992}.}.
In a multi-stage isolation and suspension system as realized in GW detectors (see for example \cite{BrEA2005,MaEA2014}), coupling between multiple oscillators cannot be neglected, and is fundamental to the seismic isolation performance, but the basic features can still be explained with the simplified isolation and response model of Equations (\ref{eq:accresp}) and (\ref{eq:isolvibr}). The signal output of the interferometer is proportional to the relative displacement between test masses. Since seismic noise is approximately uncorrelated between two distant test masses, the differential measurement itself cannot reject seismic noise as in gravity gradiometers. Without seismic isolation, the dominant signal would be seismic strain, i.~e.~the distance change between test masses due to elastic deformation of the ground, with a value of about $10^{-15}$\,Hz$^{-1/2}$ at 50\,Hz (assuming kilometer-scale arm lengths)\index{seismic strain}. At the same time, without seismically isolated test masses, the gravity signal can only come from the ground response to gravity fluctuations as described in Section \ref{sec:gravground}, and from the Shapiro time delay as described in Section \ref{sec:Shapiro}. These signals would lie well below the seismic noise. Consequently, to achieve the sensitivities of past science runs, the seismic isolation of the large-scale GW detectors had to suppress seismic noise by at least 7 orders of magnitude, and test masses had to be supported so that they could (quasi-)freely respond to gravity-strain fluctuations in the targeted frequency band (which, according to Equations (\ref{eq:accresp}) and (\ref{eq:isolvibr}), is achieved automatically with the seismic isolation). Stacking multiple stages of seismic isolation enhances the gravity response negligibly, while it is essential to achieve the required seismic-noise suppression. Using laser beams, long-baseline strainmeters can be realized, which increases the gravity response according to Equation (\ref{eq:resptide}). The price to be paid is that seismic noise needs to be suppressed by a sophisticated isolation and suspension system, since it is uncorrelated between test masses and therefore not rejected in the differential measurement. As a final note, the most sensitive torsion-bar antennas also implement a laser-interferometric readout of the relative rotation of the suspended bars \cite{ShEA2014}, and concerning the gravity response and seismic isolation, they can be modelled very similarly to conventional strainmeters. However, the suppression of seismic noise is impeded by mechanical cross-coupling, since a torsion bar has many soft degrees of freedom that can interact resonantly within the detection band. This problem spoils to some extent the key advantage of torsion bars, namely the realization of a very low-frequency torsion resonance, which determines the fundamental response and seismic isolation performance\index{torsion-bar antenna}. Nonetheless, cross-coupling can in principle be reduced by precise engineering, and additional seismic pre-isolation of the suspension point of the torsion bar can lead to significant noise reduction. \paragraph{Atom-interferometric strainmeters} \index{strainmeter!gravity (atom interferometric)}In this design, the test masses consist of freely-falling ultracold atom clouds. A laser beam interacting with the atoms serves as a common phase reference, against which the test-mass displacement can be measured.
The laser phase is measured locally via atom interferometry by the same freely-falling atom clouds \cite{ChEA2008}. Subtraction of two of these measurements forms the strainmeter output. The gravity response is fundamentally the same as for the laser-interferometric design since it is based on the relative displacement of atom clouds. Seismic noise couples into the strain measurement through the laser. If displacement noise of the laser or laser optics has amplitude $\xi(\omega)$, then the corresponding strain noise in atom-interferometric strainmeters is of order $\omega \xi(\omega)/c$, where $c$ is the speed of light, and $\omega$ the signal frequency \cite{BaTh2012}. While this noise is lower than the corresponding term $\xi(\omega)/L$ in laser-interferometric detectors ($L$ being the distance between test masses), seismic isolation is still required. As we know from previous discussions, seismic isolation causes the optics to respond to gravity fluctuations. However, the signal contribution from the optics is weaker by a factor $\omega L/c$ compared to the contribution from distance changes between atom clouds. Here, $L$ is the distance between two freely-falling atom clouds, which also corresponds approximately to the extent of the optical system. This signal suppression is very strong for any Earth-bound atom-interferometric detector (targeting sub-Hz gravity fluctuations), and we can neglect signal contributions from the optics. Here we also assumed that there are no control forces acting on the optics, which could further suppress their signal response, if for example the distance between optics is one of the controlled parameters. Nonetheless, seismic isolation is required, not only to suppress seismic noise from distance changes between laser optics, which amounts to $\omega \xi(\omega)/c\sim 10^{-17}\,$Hz$^{-1/2}$ at 0.1\,Hz without seismic isolation (too high at least for GW detection \cite{HaEA2013}), but also to suppress seismic-noise contributions through additional channels (e.~g.~tilting optics in combination with laser-wavefront aberrations \cite{HoEA2011b}). The additional channels dominate in current experiments, which are already seismic-noise limited with strain noise many orders of magnitude higher than $10^{-17}\,$Hz$^{-1/2}$ \cite{DiEA2013}. It is to be expected though that improvements of the atom-interferometer technology will suppress the additional channels, relaxing the requirement on seismic isolation. \paragraph{Superconducting strainmeters} \index{strainmeter!gravity (superconducting)} The response of superconducting strainmeters to gravity-strain fluctuations is based on the differential displacement of magnetically levitated spheres. The displacement of individual spheres is monitored locally via a capacitive readout (see Section \ref{sec:superg}). Subtracting local readouts of test-mass displacement from each other constitutes the basic strainmeter scheme \cite{Pai1976}. The common reference for the local readouts is a rigid, material frame. The stiffness of the frame is a crucial parameter facilitating the common-mode rejection of seismic noise. Even in the absence of seismic noise, the quality of the reference frame is ultimately limited by thermally excited vibrations\index{thermal noise} of the frame \footnote{It should not be forgotten that thermal noise also plays a role in the other two detector designs, but it is a more severe problem for superconducting strainmeters since the mechanical structure supporting the thermal vibrations is much larger. Any method to lower thermal noise, such as cooling the structure or lowering its mechanical loss, therefore requires a greater effort.} (similar to the situation with torsion-bar antennas \cite{HaEA2013}).
However, since strainmeters are very large (by definition), vibrational eigenmodes of the frame can have low resonance frequencies, impeding the common-mode rejection of seismic noise. In fact, it is unclear if a significant seismic-noise reduction can be achieved by means of mechanical rigidity. Therefore, seismic isolation of the strainmeter frame is necessary. In this case, each local readout is effectively a gravity-strain measurement, since the gravity response of the test mass is measured against a reference frame that also responds to gravity fluctuations (see the discussion of seismically isolated gravimeters in Section \ref{sec:gradio}). Another solution could be to replace the mechanical structure with an optically rigid body as suggested in \cite{HaEA2013} for a low-frequency laser-interferometric detector. The idea is to connect different parts of a structure via laser links in all degrees of freedom. The stiffness of the link is defined by the control system that forces the different parts to keep their relative positions and orientations. Optical rigidity in all degrees of freedom has not yet been realized experimentally, but first experiments known as suspension point or platform interferometers have been conducted to control some degrees of freedom in the relative orientation of two mechanical structures \cite{AsEA2004,DaEA2012}. This approach would certainly add complexity to the experiment, especially in full-tensor configurations of superconducting gravity strainmeters, where six different mechanical structures have to be optically linked \cite{MPC2002}. \section{Introduction} \label{sec:intro} In the coming years, we will see a transition in the field of high-precision gravimetry from observations of slow, long-lasting changes of the gravity field to the experimental study of fast gravity fluctuations. The latter will be realized by the advanced generation of the US-based LIGO \cite{LSC2015} and Europe-based Virgo \cite{AcEA2015} gravitational-wave (GW) detectors. Their goal is to directly observe for the first time GWs that are produced by astrophysical sources such as inspiraling and merging neutron-star or black-hole binaries. Feasibility of the laser-interferometric detector concept has been demonstrated successfully with the first generation of detectors, which, in addition to the initial LIGO and Virgo detectors, also includes the GEO600 \cite{LuEA2010} and TAMA300 \cite{Tat2008} detectors, and several prototypes around the world. The impact of these projects on the field is two-fold. First of all, the direct detection of GWs will be a milestone in science, opening a new window to our universe and marking the beginning of a new era in observational astronomy. Second, several groups around the world have already started to adapt the technology to novel interferometer concepts \cite{DiEA2013,ShEA2014}, with potential applications not only in GW science, but also in geophysics. The basic measurement scheme is always the same: the relative displacement of test masses is monitored by using ultra-stable lasers. Progress in this field is strongly dependent on how well the motion of the test masses can be shielded from the environment.
Test masses are placed in vacuum and are either freely falling (e.~g.~atom clouds \cite{PCC2001}), or suspended and seismically isolated (e.~g.~high-quality glass or crystal mirrors as used in all of the detectors listed above). The best seismic isolations realized so far are effective above a few Hz, which limits the frequency range of detectable gravity fluctuations. Nonetheless, low-frequency concepts are continuously improving, and it is conceivable that future detectors will be sufficiently sensitive to detect GWs well below a Hz \cite{HaEA2013}. Terrestrial gravity perturbations were identified as a potential noise source already in the first concept laid out for a laser-interferometric GW detector \cite{Wei1972}. Today, this form of noise is known as ``terrestrial gravitational noise'', ``Newtonian noise'', or ``gravity-gradient noise''. It has never been observed in GW detectors, but it is predicted to limit the sensitivity of the advanced GW detectors at low frequencies. The most important source of gravity noise comes from fluctuating seismic fields \cite{Sau1984}. Gravity perturbations from atmospheric disturbances such as pressure and temperature fluctuations can become significant at lower frequencies \cite{Cre2008}. Anthropogenic sources of gravity perturbations are easier to avoid, but could also be relevant at lower frequencies \cite{ThWi1999}. Today, we only have one example of a direct observation of gravity fluctuations, i.~e.~from pressure fluctuations of the atmosphere in high-precision gravimeters \cite{Neu2010}. Therefore, almost our entire understanding of gravity fluctuations is based on models. Nonetheless, potential sensitivity limits of future large-scale GW detectors need to be identified and characterized well in advance, and so there is a need to continuously improve our understanding of terrestrial gravity noise. Based on our current understanding, the preferred option is to construct future GW detectors underground to avoid the most dominant Newtonian-noise contributions. This choice was made for the next-generation Japanese GW detector KAGRA, which is currently being constructed underground at the Kamioka site \cite{AsEA2013}, and also as part of a design study for the Einstein Telescope in Europe \cite{PuEA2010}. While the benefit from underground construction with respect to gravity noise is expected to be substantial in GW detectors sensitive above a few Hz \cite{BeEA2012}, it can be argued that it is less effective at lower frequencies \cite{HaEA2013}. Alternative mitigation strategies include coherent noise cancellation \cite{Cel2000}. The idea is to monitor the sources of gravity perturbations using auxiliary sensors such as microphones and seismometers, and to use their data to generate a coherent prediction of gravity noise. This technique is successfully applied in gravimeters to reduce the foreground of atmospheric gravity noise using collocated pressure sensors \cite{Neu2010}. It is also noteworthy that the models of the atmospheric gravity noise are consistent with observations. This should give us at least some confidence that coherent Newtonian-noise cancellation can also be achieved in GW detectors. It is evident though that a model-based prediction of the performance of coherent noise cancellation schemes is prone to systematic errors as long as the properties of the sources are not fully understood.
Ongoing experiments at the Sanford Underground Research Facility with the goal of characterizing seismic fields in three dimensions are expected to deliver the first data from an underground seismometer array in 2015 (see \cite{HaEA2010} for results from an initial stage of the experiment). While most people would argue that constructing GW detectors underground is always advantageous, it is still necessary to estimate how much is gained and whether the science case strongly profits from it. This is a complicated problem that needs to be answered as part of a site-selection process. More recently, high-precision gravity strainmeters have been considered as monitors of geophysical signals \cite{HaEA2015}. Analytical models have been calculated, which allow us to predict gravity transients from seismic sources such as earthquakes. It was suggested to implement gravity strainmeters in existing earthquake early-warning systems to increase warning times. It is also conceivable that an alternative method to estimate source parameters using gravity signals will improve our understanding of seismic sources. Potential applications must still be investigated in greater detail, but the study already demonstrates that the idea of using GW technology to realize new geophysical sensors seems feasible. As explained in \cite{CoHa2014}, gravitational forces start to dominate the dynamics of seismic phenomena below about 1\,mHz (which coincides approximately with a similar transition in atmospheric dynamics, where gravity waves start to dominate over other forms of oscillations \cite{Tos2014}). Seismic isolation would be ineffective below 1\,mHz since the gravitational acceleration of a test mass produced by seismic displacement becomes comparable to the seismic acceleration itself. Therefore, we claim that 10\,mHz is about the lowest frequency at which ground-based gravity strainmeters will ever be able to detect GWs, and consequently, modelling terrestrial gravity perturbations in these detectors can focus on frequencies above 10\,mHz. This article is divided into six main sections. Section \ref{sec:gravmeasure} serves as an introduction to gravity measurements, focussing on the response mechanisms and basic properties of gravity sensors. Section \ref{sec:ambient} describes models of gravity perturbations from ambient seismic fields. The results can be used to estimate noise spectra at the surface and underground. A subsection is devoted to the problem of noise estimation in low-frequency GW detectors, which differs from high-frequency estimates mostly in that gravity perturbations are strongly correlated between different test masses. In the low-frequency regime, the gravity noise is best described as gravity-gradient noise. Section \ref{sec:pointsources} is devoted to time-domain models of transient gravity perturbations from seismic point sources. The formalism is applied to point forces and shear dislocations. The latter allows us to estimate gravity perturbations from earthquakes. Atmospheric models of gravity perturbations are presented in Section \ref{sec:atmos}. This includes gravity perturbations from atmospheric temperature fields, infrasound fields, shock waves, and acoustic noise from turbulence. The solution for shock waves is calculated in the time domain using the methods of Section \ref{sec:pointsources}. A theoretical framework to calculate gravity perturbations from objects is given in Section \ref{sec:objects}.
Since many different types of objects can be potential sources of gravity perturbations, the discussion focusses on the development of a general method instead of summarizing all of the calculations that have been done in the past. Finally, Section \ref{sec:mitigate} discusses possible passive and active noise mitigation strategies. Due to the complexity of the problem, most of the section is devoted to active noise cancellation, providing the required analysis tools and showing the limitations of this technique. Site selection is the main topic under passive mitigation, and is discussed in the context of reducing environmental noise and criteria relevant to active noise cancellation. Each of these sections ends with a summary and a discussion of open problems. While this article is meant to be a review of the current state of the field, it also presents new analyses especially with respect to the impact of seismic scattering on gravity perturbations (Sections \ref{sec:scattercomp} and \ref{sec:scattershear}), active gravity noise cancellation (Section \ref{sec:arrayNNP}), and time-domain models of gravity perturbations from atmospheric and seismic point sources (Sections \ref{sec:forcegrav}, \ref{sec:sourcehalf}, and \ref{sec:shockNN}). Even though evident to experts, it is worth emphasizing that all calculations carried out in this article have a common starting point, namely Newton's universal law of gravitation. It states that the attractive gravitational force $\vec F$ between two point masses $m_1,\,m_2$ is given by \beq \vec F=-G\dfrac{m_1m_2}{r^2}\vec e_r, \label{eq:newtonlaw} \eeq where $G=6.672\times 10^{-11}\,$N\,${\rm m}^2$/${\rm kg}^2$ is the gravitational constant. Equation (\ref{eq:newtonlaw}) gives rise to many complex phenomena on Earth such as inner-core oscillations \cite{Sli1961}, atmospheric gravity waves \cite{SFV1987}, ocean waves \cite{HEG1995,You1999}, and coseismic gravity changes \cite{MaHe2011}. Due to its importance, we will honor the eponym by referring to gravity noise as Newtonian noise in the following. It is thereby clarified that the gravity noise models considered in this article are non-relativistic, and propagation effects of gravity changes are neglected. While there could be interesting scenarios where this approximation is not fully justified (e.~g.~whenever a gravity perturbation can be sensed by several sensors and differences in arrival times can be resolved), it certainly holds in any of the problems discussed in this article. We now invite the reader to enjoy the rest of the article, and hope that it proves to be useful.
\section{Mathematical formalism} \label{sec:math} The purpose of this appendix is to define the mathematical quantities used in the paper, and to provide the key equations needed to master the more complex calculations. Only the basic properties are described here.
More complex applications can be found throughout the paper. When comparing results in this article with results from other publications, one should pay attention especially to the definition of spherical scalar and vector harmonics, and multipoles. Various normalization conventions can be found in the literature, which can cause final results to look different. Also, to share valuable experience, here is how almost all complicated calculations can be carried out very efficiently. First, a problem needs to be calculated with pencil and paper. Half of the time, the results will be wrong. Reasons are typically a wrong sign or other mistakes in simple steps. However, with a good understanding of the structure of the calculation, it is always possible to translate the calculation efficiently into a symbolic computational software program to obtain the result (the author used \emph{Mathematica} for this purpose). This scheme has worked for all calculations in this article. Not solving the problem by hand first often leads to the situation that the symbolic software is programmed in a way that it cannot find the solution. In some cases, solutions are found by the software only for specific parameter ranges, and one needs to generalize the solution using one's understanding from the calculation by hand. Even more satisfyingly, knowing the solution helps to identify the mistakes made in the first calculation by hand. \subsection{Bessel functions} \label{sec:cylindrical} Bessel functions exist in two types, the Bessel function of the first kind $J_n(\cdot)$, and the Bessel function of the second kind $Y_n(\cdot)$. The latter is irregular at the origin, and will not be used in this article. A common definition of the Bessel function is\index{Bessel function!first kind} \beq J_n(x)=\frac{1}{2\pi}\int\limits_{-\pi}^\pi\drm\tau\,\e^{\irm(n\tau-x\sin(\tau))} \label{eq:besselint} \eeq Here, the order $n$ needs to be an integer. Only $J_0(\cdot)$ is non-zero at the origin. Many important properties of $J_n(\cdot)$ can be derived from this equation. For example, negative integer orders can be re-expressed as positive orders: \beq J_{-n}(x)=(-1)^nJ_n(x) \eeq In this paper, the Bessel function will find application in cylindrical harmonics expansions. Any field that fulfills the Laplace equation $\Delta f(\vec r\,)=0$, i.~e.~a harmonic function\index{harmonic function}, can be expanded into cylindrical harmonics $C_n(k;\varrho,\phi,z)$. Cylindrical harmonics are linearly independent solutions to the Laplace equation. Throughout this article, we will only need harmonics that are regular at the origin. In this case, they can be written as: \beq C_n(k;\varrho,\phi,z)=J_n(k\varrho)\e^{\irm n\phi}\e^{-kz}, \eeq where $\varrho,\,\phi,\,z$ are the cylindrical coordinates. An arbitrary harmonic function $f(\vec r\,)$ that is regular at the origin can then be expanded according to \beq f(\vec r\,)=\sum\limits_{n=-\infty}^\infty\int\limits\drm k\,a_n(k)C_n(k;\varrho,\phi,z) \eeq Cylindrical harmonics find application in calculations of fields in a half space. The integration range of the parameter $k$ is not further specified here, since it depends on the specific physical problem. The parameter $k$ can in general be complex valued, and in some cases, i.~e.~when the field is constrained to a finite range of radii $\varrho$, it can also take on discrete values.
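The basic identities above are straightforward to verify numerically, for example with SciPy (a quick check of the integral definition (\ref{eq:besselint}) and of the parity relation; the test values are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

x = np.linspace(0.1, 20.0, 5)
for n in range(1, 4):
    # Parity relation J_{-n}(x) = (-1)^n J_n(x):
    assert np.allclose(jv(-n, x), (-1)**n * jv(n, x))

# Integral definition for n = 2, x = 1.5; the imaginary part of the
# integrand is odd in tau and integrates to zero:
n, x0 = 2, 1.5
re, _ = quad(lambda t: np.cos(n*t - x0*np.sin(t)), -np.pi, np.pi)
assert np.isclose(re / (2*np.pi), jv(n, x0))
print("Bessel identities verified")
\end{verbatim}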
Most of the important, non-trivial relations used in this article involving Bessel functions concern semi-infinite integrals. The first of these relations can be obtained as a limiting case of the Hankel integral \cite{Han1875,Wat1922} \beq \begin{split} \int\limits_0^\infty\drm \varrho\,\varrho^p J_n(k\varrho) &= \lim\limits_{a\rightarrow 0}\int\limits_0^\infty\drm \varrho\,\e^{-a\varrho}\varrho^p J_n(k\varrho)\\ &= \frac{1}{2}\left(\frac{2}{k}\right)^{p+1} \frac{\Gamma((n+p+1)/2)}{\Gamma((n-p+1)/2)}, \end{split} \eeq with $-n-1<p<1/2$ and $k>0$. A related integral can be derived consistent with the last equation, even though the conditions on the parameters are not fulfilled with $p=1$: \beq \int\limits_0^\infty\drm \varrho\,\varrho J_n(k\varrho)=\frac{n}{k^2} \eeq Finally, an integral that is useful in calculations with cylindrical harmonic expansions, see Section \ref{sec:sourcehalf}, is given by \cite{ArWe2005} \beq \int\limits_0^\infty\drm \varrho\,\varrho J_n(k\varrho)J_n(s\varrho)=\frac{\delta(s-k)}{k}, \label{eq:closureJ} \eeq with $\delta(\cdot)$ being the Dirac $\delta$-distribution. This equation allows us to reduce the number of Bessel functions in more complicated integrals, and is known as the \emph{closure relation}\index{Bessel function!closure relation}. Bessel functions can also be defined for non-integer orders, which requires a modification of the definition in Equation (\ref{eq:besselint}). Using the generalized definition, one can define the spherical Bessel functions of the first kind according to \index{Bessel function!spherical} \beq j_n(x)=\sqrt{\frac{\pi}{2x}}J_{n+1/2}(x) \label{eq:sphericalj} \eeq The spherical Bessel functions have a form very similar to the Bessel functions. Also here, $j_0(\cdot)$ is the only spherical Bessel function that does not vanish at the origin. Spherical Bessel functions of the first kind appear in correlation functions of 3D fields (see Section \ref{sec:arrayNNP}). For example, correlations between scalar fields are given in terms of $j_0(\cdot)$, while correlations of vector fields include $j_0(\cdot),\,j_2(\cdot)$. They also appear in the vector plane-wave expansions, see Equations (\ref{eq:expandPWlong}) and (\ref{eq:expandPWtrans}). In many calculations involving spherical Bessel functions, the following two recurrence relations are useful: \beq \begin{split} j_{l+1}(x) &= \frac{2l+1}{x}j_l(x)-j_{l-1}(x)\\ \partial_xj_l(x) &= \frac{l}{x}j_l(x)-j_{l+1}(x)\\ &= \frac{l}{2l+1}j_{l-1}(x)-\frac{l+1}{2l+1}j_{l+1}(x) \end{split} \label{eq:sphBesselrec} \eeq The first relation means that it is always possible to express a sum over spherical Bessel functions with arbitrarily many different orders as a sum over two orders only. Therefore, it is first of all a great tool to reduce the complexity of a result. The second relation is often applied in calculations of integrals following integration by parts.
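Both recurrence relations in Equation (\ref{eq:sphBesselrec}) can be verified in a few lines of SciPy (a numerical check with arbitrary test values):
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

x = np.linspace(0.5, 15.0, 7)
for l in range(1, 5):
    # j_{l+1}(x) = (2l+1)/x j_l(x) - j_{l-1}(x):
    assert np.allclose(spherical_jn(l + 1, x),
                       (2*l + 1)/x * spherical_jn(l, x) - spherical_jn(l - 1, x))
    # d/dx j_l(x) = l/x j_l(x) - j_{l+1}(x):
    assert np.allclose(spherical_jn(l, x, derivative=True),
                       l/x * spherical_jn(l, x) - spherical_jn(l + 1, x))
print("recurrence relations verified")
\end{verbatim}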
\subsection{Spherical harmonics} \label{sec:spherical} Spherical harmonics are the independent solutions to the Laplace equation in spherical coordinates. We distinguish between surface spherical harmonics and solid spherical harmonics. Two-dimensional scalar harmonic fields on spheres can be expanded into surface spherical harmonics. Three-dimensional scalar harmonic fields can be expanded into solid spherical harmonics. We will also introduce the vector surface spherical harmonics used to expand vector fields on spheres. Spherical harmonics find wide application. In this article, we will use them to calculate seismic fields scattered from spherical cavities or gravity perturbations from seismic point sources (see Sections \ref{sec:scattershear} and \ref{sec:disgravity}). Furthermore, solid spherical harmonics are the constituents of the multipole expansion, which is an elegant means to describe gravity perturbations from objects with arbitrary shape (see Sections \ref{sec:vibobj} and \ref{sec:rotobj}). \subsubsection{Legendre polynomials} \index{Legendre polynomials} Legendre polynomials are introduced since they are part of the definition of spherical harmonics. They also directly serve in expansions of harmonic fields in spherical coordinates when the fields have cylindrical symmetry. The Legendre polynomial of integer order $l$ is defined as \beq P_l(x)=\dfrac{1}{2^ll!}\partial_x^l(x^2-1)^l \eeq In order to evaluate integrals involving Legendre polynomials, it is often convenient to express powers of the argument $x$ in terms of Legendre polynomials. Table \ref{tab:legendre} summarizes the relations for the first 4 orders. \begin{table}[htbp] \caption{Legendre polynomials} \label{tab:legendre} \renewcommand{\arraystretch}{2.5} \centerline{ \begin{tabular}{|l|l|} \hline $P_0(x)=1$ & $1=P_0(x)$\\ $P_1(x)=x$ & $x=P_1(x)$\\ $P_2(x)=\frac{1}{2}(3x^2-1)$ & $x^2=\frac{1}{3}(2P_2(x)+P_0(x))$\\ $P_3(x)=\frac{1}{2}(5x^3-3x)$ & $x^3=\frac{1}{5}(2P_3(x)+3P_1(x))$\\ \hline \end{tabular}} \end{table} Naturally, any polynomial of order $l$ can be expressed in terms of Legendre polynomials up to the same order. In most applications, the domain of the Legendre polynomials is the interval $[-1;1]$. In this case, the Legendre polynomials have interesting integral properties such as the orthogonality relation \beq \int\limits_{-1}^1\drm xP_m(x)P_n(x)=\frac{2}{2m+1}\delta_{mn} \label{eq:legorth} \eeq Making use of the orthogonality relation of Equation (\ref{eq:legorth}), the inverse expansion of monomials $x^m$ into Legendre polynomials $P_l(x)$, as shown for the first few orders in Table \ref{tab:legendre}, can be obtained from the integrals \beq \int\limits_{-1}^1\drm x\,x^mP_l(x)=\frac{2^{2+l}(1+(m+l)/2)!m!}{((m-l)/2)!(2+m+l)!}, \eeq for $m\geq l$ and $m+l$ even. The integral vanishes for all other pairs $m,\,l$. The Legendre polynomials obey Bonnet's recursion formula\index{Legendre polynomials!Bonnet's recursion}: \beq (l+1) P_{l+1}(x)=(2l+1)x P_l(x)-lP_{l-1}(x) \eeq The derivative of a Legendre polynomial can also be expressed as a sum of Legendre polynomials according to \beq \begin{split} \partial_xP_l(x) &= \frac{1+l}{1-x^2}(x P_l(x)-P_{l+1}(x))\\ &= \frac{1}{1-x^2}\frac{l(l+1)}{2l+1}(P_{l-1}(x)-P_{l+1}(x)) \end{split} \label{eq:derivlegendre} \eeq In analytical calculations of seismic fields, the Legendre polynomial enters the equations through its role in the scalar plane-wave expansion:\index{plane-wave expansion!scalar} \beq \e^{\irm \vec k\cdot\vec r}=\e^{\irm kr\cos(\theta)}=\sum\limits_{l=0}^\infty \irm^l(2l+1)j_l(kr)P_l(\cos(\theta)) \label{eq:pwscalar} \eeq Here, $j_n(\cdot)$ is the spherical Bessel function defined in Equation (\ref{eq:sphericalj}). For example, in Section \ref{sec:scattercomp}, we will calculate the scattered seismic field from a cavity with an incident longitudinal wave. This problem has cylindrical symmetry.
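The orthogonality relation (\ref{eq:legorth}) and the plane-wave expansion (\ref{eq:pwscalar}) also lend themselves to quick numerical checks, for example with NumPy/SciPy (quadrature and truncation orders below are chosen arbitrarily):
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre, spherical_jn

# Orthogonality relation, exact under Gauss-Legendre quadrature:
x, w = leggauss(50)
for m in range(5):
    for n in range(5):
        val = np.sum(w * eval_legendre(m, x) * eval_legendre(n, x))
        assert np.isclose(val, 2.0/(2*m + 1) if m == n else 0.0, atol=1e-12)

# Plane-wave expansion, truncated at l = 40:
kr, ct = 2.3, 0.4
series = sum(1j**l * (2*l + 1) * spherical_jn(l, kr) * eval_legendre(l, ct)
             for l in range(40))
assert np.isclose(series, np.exp(1j * kr * ct))
print("Legendre identities verified")
\end{verbatim}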
More important for spherical harmonics are the associated Legendre polynomials $P_l^m(\cdot)$\index{Legendre polynomials!associated}. They are parameterized by a second integer index $m=-l,\ldots,l$. Their definition is given in terms of Legendre polynomials: \beq \begin{split} P_l^m(x) &= (-1)^m(1-x^2)^{m/2}\partial_x^mP_l(x)\\ &= \dfrac{(-1)^m}{2^ll!}(1-x^2)^{m/2}\partial_x^{l+m}(x^2-1)^l \end{split} \eeq Definitions of the associated Legendre polynomials can vary in terms of their $l,m$-dependent normalization. For example, some authors would normalize $P_l^m(\cdot)$ such that the factor in front of the Kronecker-$\delta$ in Equation (\ref{eq:orthoasLeg}) is equal to 1. While this choice of normalization has greater aesthetic appeal, we choose the more conventional definition since we will never work explicitly with the associated Legendre polynomials. In this article, they merely serve as a building block of the spherical harmonics. Defined over the domain $x\in[-1;1]$, the associated Legendre polynomials obey the orthogonality relation \beq \int\limits_{-1}^1\drm x\,P_k^m(x)P_l^m(x)=\frac{2}{2l+1}\frac{(l+m)!}{(l-m)!}\delta_{k,l} \label{eq:orthoasLeg} \eeq Finally, positive and negative orders $m$ are linked via \beq P_l^{-m}(x)=(-1)^m\dfrac{(l-m)!}{(l+m)!}P_l^m(x) \label{eq:asLegsign} \eeq Associated Legendre polynomials will never be used explicitly in this article, but only as part of the definition of spherical harmonics. From the theory of spherical harmonics it will become clear that cylindrically symmetric fields can always be expanded in terms of the polynomials $P_l^0(x)=P_l(x)$. \subsubsection{Scalar surface spherical harmonics} \index{spherical harmonics!scalar, surface} Scalar surface spherical harmonics $Y_l^m(\theta,\phi)$ are eigenfunctions of the Laplace operator with respect to the angular coordinates \beq \left(\frac{1}{\sin(\theta)}\partial_\theta\sin(\theta)\partial_\theta +\frac{1}{\sin^2(\theta)}\partial^2_\phi\right)Y_l^m(\theta,\phi)=-l(l+1)Y_l^m(\theta,\phi) \label{eq:eigenY} \eeq As such, they form an important part of the expansion of harmonic functions expressed in spherical coordinates (see Sections \ref{sec:solidharm} and \ref{sec:multipole}). The degree $l$ of the spherical harmonic can assume all non-negative integer values, while the order $m$ lies in the range $m=-l,\ldots,l$. Their explicit form is given by \beq Y_l^m(\theta,\phi)=\sqrt{\dfrac{2l+1}{4\pi}\dfrac{(l-m)!}{(l+m)!}}P_l^m(\cos(\theta))\e^{\irm m\phi} \label{eq:surfharm} \eeq The first 4 degrees of the harmonics are listed in Table \ref{tab:sphereharm}.
\begin{table}[htbp]
\caption{Spherical surface harmonics}
\label{tab:sphereharm}
\renewcommand{\arraystretch}{2.5}
\centerline{
\begin{tabular}{|l|l|}
\hline
$Y_0^0$ & $\dfrac{1}{2}\sqrt{\dfrac{1}{\pi}}$\\
\hline
$Y_1^0$ & $\dfrac{1}{2}\sqrt{\dfrac{3}{\pi}}\cos(\theta)$\\
$Y_1^{\pm 1}$ & $\mp\dfrac{1}{2}\sqrt{\dfrac{3}{2\pi}}\sin(\theta)\e^{\pm\irm\phi}$\\
\hline
$Y_2^0$ & $\dfrac{1}{4}\sqrt{\dfrac{5}{\pi}}(3\cos^2(\theta)-1)$\\
$Y_2^{\pm 1}$ & $\mp\dfrac{1}{2}\sqrt{\dfrac{15}{2\pi}}\sin(\theta)\cos(\theta)\e^{\pm\irm\phi}$\\
$Y_2^{\pm 2}$ & $\dfrac{1}{4}\sqrt{\dfrac{15}{2\pi}}\sin^2(\theta)\e^{\pm 2\irm\phi}$\\
\hline
\end{tabular}
\begin{tabular}{|l|l|}
\hline
$Y_3^0$ & $\dfrac{1}{4}\sqrt{\dfrac{7}{\pi}}(5\cos^2(\theta)-3)\cos(\theta)$\\
$Y_3^{\pm 1}$ & $\mp \dfrac{1}{8}\sqrt{\dfrac{21}{\pi}}(5\cos^2(\theta)-1)\sin(\theta)\e^{\pm \irm\phi}$\\
$Y_3^{\pm 2}$ & $\dfrac{1}{4}\sqrt{\dfrac{105}{2\pi}}\cos(\theta)\sin^2(\theta)\e^{\pm 2\irm\phi}$\\
$Y_3^{\pm 3}$ & $\mp \dfrac{1}{8}\sqrt{\dfrac{35}{\pi}}\sin^3(\theta)\e^{\pm 3\irm\phi}$\\
\hline
\end{tabular}}
\end{table}
Another related role of the spherical harmonics is that, on the unit sphere, any (square-integrable) function can be expanded according to
\beq
f(\theta,\phi)=\sum\limits_{l=0}^\infty\sum\limits_{m=-l}^lf_l^mY_l^m(\theta,\phi)
\label{eq:harmexpand}
\eeq
In expansions with cylindrical symmetry, it is convenient to define the angle $\theta$ with respect to the symmetry axis, in which case the order $m$ can be set to 0, and the associated Legendre polynomials reduce to ordinary Legendre polynomials. In this article, the normalization of spherical harmonics is chosen such that
\beq
\int\drm\Omega\, Y_l^m(Y_{l'}^{m'})^*=\delta_{ll'}\delta_{mm'}
\label{eq:normalY}
\eeq
In other words, the surface spherical harmonics form an orthonormal basis of (square-integrable) functions on the unit sphere. The relation between positive and negative orders can be found using Equations (\ref{eq:asLegsign}) and (\ref{eq:surfharm}):
\beq
Y_l^{-m}(\theta,\phi)=(-1)^m(Y_l^m(\theta,\phi))^*
\label{eq:conjugateY}
\eeq
Finally, we conclude this section with a few obvious and not so obvious relations. The first relation states that harmonics of order $m=0$ are rescaled Legendre polynomials, while the second and third relations are evaluations of the spherical harmonics at specific points:
\beq
\begin{split}
&Y_l^0(\theta,\phi) = \sqrt{\dfrac{2l+1}{4\pi}}P_l(\cos(\theta))\\
&Y_l^m(0,\phi) = \frac{1}{2}\sqrt{\frac{2l+1}{\pi}}\delta_{m,0}\\
&Y_l^m(\pi/2,\phi) = \begin{cases} 0 & l+m\quad\mbox{odd} \\ \frac{1}{2^l}(-1)^{(l+m)/2}\sqrt{\dfrac{2l+1}{4\pi}}\dfrac{\sqrt{(l+m)!(l-m)!}}{((l+m)/2)!((l-m)/2)!}\,\e^{\irm m\phi} & l+m\quad\mbox{even} \end{cases}
\end{split}
\label{eq:pointrelY}
\eeq
All three relations can be useful if fields are to be expanded on planes. Useful integrals of the spherical harmonics are
\beq
\begin{split}
&\int\limits_0^{2\pi}\drm\phi\, Y_l^m(\theta,\phi)= 2\pi Y_l^0(\theta,0)\delta_{m,0}=\sqrt{\pi(2l+1)}P_l(\cos(\theta))\delta_{m,0}\\
&\int\limits_0^{2\pi}\drm\phi\int\limits_0^{\pi/2}\drm\theta\,\sin(\theta) Y_l^m(\theta,\phi) =\sqrt{\pi(2l+1)}\,\delta_{m,0} \begin{cases} 1 & l=0\\ 0 & l>0\quad\mbox{and}\quad l\quad\mbox{even} \\ (-1)^{(l-1)/2}\dfrac{l!!}{l(l+1)(l-1)!!} & l\quad\mbox{odd} \end{cases}
\end{split}
\label{eq:intrelY}
\eeq
The latter integral can be found in \cite{Bye1893}. These equations demonstrate the typical situation that integrals over angles constrain the degrees and orders of spherical harmonics in infinite expansions as in Equation (\ref{eq:harmexpand}). The second integral is quite exotic, but could be useful in some half-space problems, for example, to predict the performance of coherent cancellation of infrasound Newtonian noise (see Section \ref{sec:arrayNNatm}).
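For numerical work it can be convenient to build the harmonics directly from the definition. The following Python sketch (using SciPy's \texttt{lpmv}, which already contains the Condon-Shortley factor $(-1)^m$ of our associated Legendre polynomials; grid sizes are ad hoc) evaluates Equation (\ref{eq:surfharm}) for $m\geq 0$, compares with an entry of Table \ref{tab:sphereharm}, and checks the orthonormality relation of Equation (\ref{eq:normalY}) by quadrature:
\begin{verbatim}
import numpy as np
from scipy.special import lpmv, factorial

def Ylm(l, m, theta, phi):
    # Eq. (surfharm) for m >= 0; lpmv includes the (-1)^m phase.
    norm = np.sqrt((2*l + 1)/(4*np.pi) * factorial(l - m)/factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

# Compare with the table entry for Y_2^1.
th, ph = 0.9, 2.3
assert np.isclose(Ylm(2, 1, th, ph),
    -0.5*np.sqrt(15/(2*np.pi))*np.sin(th)*np.cos(th)*np.exp(1j*ph))

# Orthonormality, Eq. (normalY), by midpoint quadrature on the sphere.
nth, nph = 400, 200
theta = (np.arange(nth) + 0.5)*np.pi/nth
phi = (np.arange(nph) + 0.5)*2*np.pi/nph
TH, PH = np.meshgrid(theta, phi, indexing='ij')
dO = np.sin(TH)*(np.pi/nth)*(2*np.pi/nph)
for (l, m, lp, mp) in [(2, 1, 2, 1), (2, 1, 3, 1), (1, 0, 2, 0)]:
    I = np.sum(Ylm(l, m, TH, PH)*np.conj(Ylm(lp, mp, TH, PH))*dO)
    print((l, m), (lp, mp), np.round(I, 4))  # 1 for equal indices, else 0
\end{verbatim}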
\subsubsection{Vector surface spherical harmonics}
\index{spherical harmonics!vector,surface}
Vector spherical harmonics form a basis of square-integrable vector fields. One can find various definitions of vector spherical harmonics that differ in more than normalization. So many definitions exist because, depending on the physical problem, different classes of differential operators are applied to these harmonics. If the interest lies in angular-momentum operators, then one defines the harmonics to be eigenfunctions of the Laplace operator as shown in \cite{Tho1980}; from the perspective of rotation operators, one chooses harmonics with simple transformation properties under rotations of a spherical coordinate system \cite{KoJo1993}. The convention chosen here is similar to definitions typically used in seismology textbooks, see for example \cite{BMSi1981,AkRi2009}, and a nice introduction to these harmonics can be found in \cite{BEG1985}. Here, they are defined as
\beq
\begin{split}
\vec Y_l^m(\theta,\phi) &=Y_l^m(\theta,\phi)\vec e_r \\
\vec \Psi_l^m(\theta,\phi) &=\frac{1}{\sqrt{l(l+1)}}r\nabla Y_l^m(\theta,\phi) \\
\vec \Phi_l^m(\theta,\phi) &=\frac{1}{\sqrt{l(l+1)}}\vec r\times\nabla Y_l^m(\theta,\phi)
\end{split}
\label{eq:vectharm}
\eeq
Note that even though the radial coordinate $r$ appears explicitly in these definitions, it cancels when carrying out the gradient operations. The normalization differs from most other publications since it is chosen to make the vector spherical harmonics orthonormal:
\beq
\begin{split}
\int\drm\Omega\,\vec Y_l^m(\theta,\phi) \cdot(\vec Y_{l'}^{m'}(\theta,\phi))^* &= \delta_{ll'}\delta_{mm'}\\
\int\drm\Omega\,\vec \Psi_l^m(\theta,\phi) \cdot(\vec \Psi_{l'}^{m'}(\theta,\phi))^* &= \delta_{ll'}\delta_{mm'}\\
\int\drm\Omega\,\vec \Phi_l^m(\theta,\phi) \cdot(\vec \Phi_{l'}^{m'}(\theta,\phi))^* &= \delta_{ll'}\delta_{mm'}
\end{split}
\eeq
Integrals over products of two different types of vector spherical harmonics vanish. Using the orthogonality relations, one can also calculate the integrals
\beq
\begin{split}
\int\drm\Omega\,\vec Y_l^m(\theta,\phi) &= \sqrt{\frac{2\pi}{3}}\delta_{l,1}\left(\delta_{m,-1}(\vec e_x-\irm\vec e_y)-\delta_{m,1}(\vec e_x+\irm\vec e_y)+\sqrt{2}\delta_{m,0}\vec e_z\right)\\
\int\drm\Omega\,\vec \Psi_l^m(\theta,\phi) &= \sqrt{\frac{4\pi}{3}}\delta_{l,1}\left(\delta_{m,-1}(\vec e_x-\irm\vec e_y)-\delta_{m,1}(\vec e_x+\irm\vec e_y)+\sqrt{2}\delta_{m,0}\vec e_z\right)\\
\int\drm\Omega\,\vec \Phi_l^m(\theta,\phi) &= 0
\end{split}
\label{eq:intvecharm}
\eeq
Vector spherical harmonics are essential in calculations of scattered seismic fields. In some cases, the scattering problem can be formulated in terms of scalar quantities, but in general, as shown in Section \ref{sec:scattershear}, the calculation requires the vector harmonics.
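The integral relations can be verified numerically from the definitions alone. A minimal Python sketch (NumPy; midpoint quadrature on the sphere, grid sizes ad hoc) checks the $z$-components of the $l=1$, $m=0$ entries of Equation (\ref{eq:intvecharm}):
\begin{verbatim}
import numpy as np

# Midpoint grid on the sphere (theta: polar angle, phi: azimuth).
nth, nph = 1000, 1000
th = (np.arange(nth) + 0.5) * np.pi / nth
ph = (np.arange(nph) + 0.5) * 2*np.pi / nph
TH, PH = np.meshgrid(th, ph, indexing='ij')
dO = np.sin(TH) * (np.pi / nth) * (2*np.pi / nph)

c = 0.5 * np.sqrt(3/np.pi)          # normalization of Y_1^0
Y10 = c * np.cos(TH)                # scalar harmonic Y_1^0
dY10 = -c * np.sin(TH)              # its theta-derivative

# z-components: (e_r)_z = cos(theta), (e_theta)_z = -sin(theta)
Yvec_z = Y10 * np.cos(TH)                    # z-component of vec Y_1^0
Psi_z = dY10 / np.sqrt(2) * (-np.sin(TH))    # z-component of vec Psi_1^0

print(np.sum(Yvec_z * dO), np.sqrt(4*np.pi/3))  # both approx. 2.0467
print(np.sum(Psi_z * dO), np.sqrt(8*np.pi/3))   # both approx. 2.8944
\end{verbatim}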
The most important properties of vector spherical harmonics are expressed by equations that involve differential operators; for our purposes, the gradient and divergence operators are the most important ones. For example, the gradient of a scalar spherical harmonic with radial weight $f(r)$ has the following form
\beq
\phi(\vec r\,) = f(r)Y_l^m(\theta,\phi),\quad\nabla\phi(\vec r\,)=(\partial_r f(r))\vec Y_l^m(\theta,\phi)+\sqrt{l(l+1)}\dfrac{f(r)}{r}\vec\Psi_l^m(\theta,\phi),
\label{eq:gradientscalar}
\eeq
while the divergence of the vector spherical harmonics reads
\beq
\begin{split}
{\rm div}(f(r)\vec Y_l^m(\theta,\phi)) &=\left((\partial_rf(r))+2\dfrac{f(r)}{r}\right)Y_l^m(\theta,\phi)\\
{\rm div}(f(r)\vec \Psi_l^m(\theta,\phi)) &=-\dfrac{\sqrt{l(l+1)}}{r}f(r) Y_l^m(\theta,\phi) \\
{\rm div}(f(r)\vec \Phi_l^m(\theta,\phi)) &=0
\end{split}
\label{eq:divvecharm}
\eeq
As a second example, we give expansions of simple vector fields that we will need again later. Expressed in vector harmonics as defined in this article, the solution for a longitudinal plane wave reads:\index{plane-wave expansion!vector,longitudinal}
\beq
\begin{split}
\e^{-\irm kz}\vec e_z = \sum\limits_{l=0}^\infty\bigg[&\sqrt{\frac{4\pi}{2l+1}}(-\irm)^{l+1}\left((l+1)j_{l+1}(kr)-lj_{l-1}(kr)\right)\vec Y_l^0(\theta,\phi)\\
&-\sqrt{\frac{4\pi}{2l+1}}\sqrt{l(l+1)}(-\irm)^{l+1}\left(j_{l+1}(kr)+j_{l-1}(kr)\right)\vec \Psi_l^0(\theta,\phi)\bigg]
\end{split}
\label{eq:expandPWlong}
\eeq
As usual, expansion coefficients can be calculated by integrating products of the left-hand side of the equation with vector spherical harmonics. The exact form of the result given here can be obtained by repeatedly using the recurrence relations of spherical Bessel functions as given in Equation (\ref{eq:sphBesselrec}). Transversal waves have a more complicated expansion into vector spherical harmonics:\index{plane-wave expansion!vector,transversal}
\beq
\begin{split}
\e^{-\irm kz}\vec e_x = \sum\limits_{l=1}^\infty\bigg[&\sqrt{\frac{\pi l(l+1)}{2l+1}}(-\irm)^{l+1}(j_{l+1}(kr)+j_{l-1}(kr))(\vec Y_l^1(\theta,\phi)-\vec Y_l^{-1}(\theta,\phi))\\
&+\sqrt{\frac{\pi}{2l+1}}(-\irm)^{l+1}(-lj_{l+1}(kr)+(l+1)j_{l-1}(kr))(\vec \Psi_l^1(\theta,\phi)-\vec \Psi_l^{-1}(\theta,\phi))\\
&+\sqrt{\pi(2l+1)}(-\irm)^{l+1}j_l(kr)(\vec \Phi_l^1(\theta,\phi)+\vec \Phi_l^{-1}(\theta,\phi))\bigg]\\
\e^{-\irm kz}\vec e_y = \sum\limits_{l=1}^\infty\bigg[&-\sqrt{\frac{\pi l(l+1)}{2l+1}}(-\irm)^l(j_{l+1}(kr)+j_{l-1}(kr))(\vec Y_l^1(\theta,\phi)+\vec Y_l^{-1}(\theta,\phi))\\
&-\sqrt{\frac{\pi}{2l+1}}(-\irm)^l(-lj_{l+1}(kr)+(l+1)j_{l-1}(kr))(\vec \Psi_l^1(\theta,\phi)+\vec \Psi_l^{-1}(\theta,\phi))\\
&-\sqrt{\pi(2l+1)}(-\irm)^lj_l(kr)(\vec \Phi_l^1(\theta,\phi)-\vec \Phi_l^{-1}(\theta,\phi))\bigg]
\end{split}
\label{eq:expandPWtrans}
\eeq
As complicated as these expressions may seem, they greatly simplify more complicated calculations, especially of scattering problems as shown in Section \ref{sec:scatterNN}.
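Truncated versions of these expansions are convenient for numerical checks. The following Python sketch (NumPy/SciPy; truncation order and test point are ad hoc) evaluates the radial and polar components of the right-hand side of Equation (\ref{eq:expandPWlong}) and compares them with $\e^{-\irm kz}\vec e_z$; the derivative $\partial_\theta Y_l^0$ is expressed through the associated Legendre polynomial $P_l^1$:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn, lpmv, eval_legendre

kr, th = 2.4, 1.1          # arbitrary test point (k*r, polar angle)
lmax = 40
x = np.cos(th)

a_r, a_th = 0.0 + 0j, 0.0 + 0j
for l in range(lmax + 1):
    jp = spherical_jn(l + 1, kr)
    jm = spherical_jn(l - 1, kr) if l > 0 else 0.0  # times 0 for l = 0
    pref = np.sqrt(4*np.pi/(2*l + 1)) * (-1j)**(l + 1)
    Y = np.sqrt((2*l + 1)/(4*np.pi)) * eval_legendre(l, x)   # Y_l^0
    a_r += pref * ((l + 1)*jp - l*jm) * Y                    # e_r part
    if l > 0:
        dY = np.sqrt((2*l + 1)/(4*np.pi)) * lpmv(1, l, x)    # dY_l^0/dth
        # sqrt(l(l+1)) cancels against the Psi normalization:
        a_th += -pref * (jp + jm) * dY                       # e_theta part

lhs = np.exp(-1j * kr * x)
print(a_r, lhs * np.cos(th))       # should agree
print(a_th, lhs * (-np.sin(th)))   # should agree
\end{verbatim}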
\subsubsection{Solid scalar spherical harmonics}
\label{sec:solidharm}
\index{spherical harmonics!scalar, solid}
When a square-integrable field is expanded in terms of spherical harmonics, the expansion coefficients are generally functions of the radial coordinate $r$. If the field is a solution of the Laplace equation, then it is easy to show using Equation (\ref{eq:eigenY}) that the radial dependence can only have the two forms $r^l$ and $1/r^{l+1}$. Therefore, it is convenient to define so-called solid spherical harmonics, which directly incorporate $r$ into the expansion. A nice review of solid spherical harmonics can be found in \cite{StRu1973}. To introduce the solid spherical harmonics, we start with a well-known expansion of the inverse distance:
\beq
\frac{1}{|\vec r-\vec r\,'|}=\dfrac{1}{(r^2+(r')^2-2rr'\cos(\gamma))^{1/2}}= \frac{1}{r_>}\sum\limits_{l=0}^\infty\left(\frac{r_<}{r_>}\right)^lP_l(\cos(\gamma))
\label{eq:potexpand}
\eeq
where $r_>\equiv \max(r,r')$, $r_<\equiv \min(r,r')$, and $\gamma$ is the angle between the two vectors $\vec r,\,\vec r\,'$. This equation is known as the Laplace expansion\index{Laplace expansion} of the inverse distance between two points. The expansion was later generalized to arbitrary powers of the distance, which can often serve as a shortcut for calculations \cite{Sac1964a,Sac1964b,Sac1964c}. This equation is not always directly helpful since the two position vectors $\vec r,\vec r\,'$ are often defined in a coordinate system that does not allow us to provide a simple expression of the angle $\gamma$. This can make it very difficult to calculate integrals of this expansion over angular coordinates. Another important relation, known as the spherical-harmonic addition theorem, can solve this problem:
\beq
P_l(\cos(\gamma))=\frac{4\pi}{2l+1}\sum\limits_{m=-l}^l\left(Y_l^m(\theta',\phi')\right)^*Y_l^m(\theta,\phi),
\label{eq:spheraddition}
\eeq
where $\gamma$ is now reexpressed in terms of the angular spherical coordinates $(\theta,\phi)$ and $(\theta',\phi')$ of the two position vectors. Together with Equation (\ref{eq:potexpand}), the Laplace expansion can be rewritten as
\beq
\frac{1}{|\vec r-\vec r\,'|}=\sum\limits_{l=0}^\infty\sum\limits_{m=-l}^l\left(I_l^m(\vec r_>)\right)^*R_l^m(\vec r_<)
\eeq
with the solid spherical harmonics defined in Racah's normalization
\beq
R_l^m(\vec r\,)\equiv\sqrt{\dfrac{4\pi}{2l+1}}r^lY_l^m(\theta,\phi),\quad I_l^m(\vec r\,)\equiv\sqrt{\dfrac{4\pi}{2l+1}}\dfrac{Y_l^m(\theta,\phi)}{r^{l+1}}
\label{eq:solidharm}
\eeq
The functions $R_l^m(\cdot),\,I_l^m(\cdot)$ are the regular and irregular solid spherical harmonics, respectively. The explicit expressions of the first three degrees are listed in Table \ref{tab:solidreg}.
\begin{table}[ht!]
\caption{Regular and irregular solid harmonics in Racah normalization}
\label{tab:solidreg}
\renewcommand{\arraystretch}{2.5}
\centerline{
\begin{tabular}{|l|l|}
\hline
$R_0^0$ & $1$\\
\hline
$R_1^0$ & $r\cos(\theta)$\\
$R_1^{\pm 1}$ & $\mp\dfrac{r}{\sqrt{2}}\sin(\theta)\e^{\pm\irm\phi}$\\
\hline
$R_2^0$ & $\dfrac{r^2}{2}(3\cos^2(\theta)-1)$\\
$R_2^{\pm 1}$ & $\mp r^2\sqrt{\dfrac{3}{2}}\sin(\theta)\cos(\theta)\e^{\pm\irm\phi}$\\
$R_2^{\pm 2}$ & $\dfrac{r^2}{2}\sqrt{\dfrac{3}{2}}\sin^2(\theta)\e^{\pm 2\irm\phi}$\\
\hline
\end{tabular}
\begin{tabular}{|l|l|}
\hline
$I_0^0$ & $\dfrac{1}{r}$\\
\hline
$I_1^0$ & $\dfrac{1}{r^2}\cos(\theta)$\\
$I_1^{\pm 1}$ & $\mp\dfrac{1}{r^2\sqrt{2}}\sin(\theta)\e^{\pm\irm\phi}$\\
\hline
$I_2^0$ & $\dfrac{1}{2r^3}(3\cos^2(\theta)-1)$\\
$I_2^{\pm 1}$ & $\mp \dfrac{1}{r^3}\sqrt{\dfrac{3}{2}}\sin(\theta)\cos(\theta)\e^{\pm\irm\phi}$\\
$I_2^{\pm 2}$ & $\dfrac{1}{2r^3}\sqrt{\dfrac{3}{2}}\sin^2(\theta)\e^{\pm 2\irm\phi}$\\
\hline
\end{tabular}}
\end{table}
One could also define solid vector spherical harmonics, based on a definition of vector surface spherical harmonics different from Equation (\ref{eq:vectharm}), since in that case the surface harmonics need to be eigenfunctions of the Laplace operator. They will not be required in this review article.
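The Laplace expansion is straightforward to check numerically. The following Python sketch (NumPy/SciPy; the truncation at degree 79 and the test points are arbitrary choices) compares the truncated series of Equation (\ref{eq:potexpand}) with the exact inverse distance:
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

# Two arbitrary points in spherical coordinates (r, theta, phi).
r1, t1, p1 = 1.0, 0.3, 2.0
r2, t2, p2 = 2.5, 1.2, -0.7
v1 = r1*np.array([np.sin(t1)*np.cos(p1), np.sin(t1)*np.sin(p1), np.cos(t1)])
v2 = r2*np.array([np.sin(t2)*np.cos(p2), np.sin(t2)*np.sin(p2), np.cos(t2)])

cosg = np.dot(v1, v2) / (r1 * r2)           # cos(gamma)
rless, rgtr = min(r1, r2), max(r1, r2)
l = np.arange(80)
series = np.sum((rless/rgtr)**l * eval_legendre(l, cosg)) / rgtr
print(series, 1.0 / np.linalg.norm(v1 - v2))   # should agree
\end{verbatim}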
\subsection{Spherical multipole expansion}
\label{sec:multipole}
The expansion of scalar and vector fields into spherical harmonics is an example of a so-called multipole expansion\index{multipole expansion}. We will see interesting applications in Section \ref{sec:objects}, but a simple example is discussed in this section already to illustrate the method. In the context of calculating gravity perturbations between two objects, the goal is to provide the multipole expansion of their mass distributions. These expansions come in two forms. If the two objects are much smaller than the distance between them, then it is possible to solve the problem in terms of the so-called exterior multipole moments\index{multipole moments!exterior}
\beq
X_l^m\equiv \int\drm V\,\rho(\vec r\,)R_l^m(\vec r\,),
\label{eq:multiext}
\eeq
which require the regular solid harmonics. The moment $X_0^0$ is always equal to the total mass of the object. As outlined in Section \ref{sec:solidharm}, the coordinate vector $\vec r$ needs to be ``shorter'', in this case shorter than the distance between the two objects. However, since the length of the vector depends on the location of the origin of the coordinate system, and since only one of two distant objects can be close to the origin, a more complicated expansion scheme is required to make use of the exterior multipole moments of both objects. This problem is discussed in Section \ref{sec:momentinter}. Another possible scenario is that one mass is located inside another hollow mass. In this case, it is impossible to calculate their gravitational attraction using only exterior mass multipole moments. At least one mass distribution needs to be described in terms of its interior multipole moments\index{multipole moments!interior}
\beq
N_l^m\equiv \int\drm V\,\rho(\vec r\,)I_l^m(\vec r\,)
\label{eq:multiint}
\eeq
In the remainder of this section, the calculation of an example will highlight the effect of symmetry of mass distributions on their multipole moments. For this purpose, we consider $N$ point masses regularly distributed on a circle as shown in Figure \ref{fig:pointring}. The results could for example be used to approximate the mass multipole moments of a rotor. The mass density of a point mass $M_i$ at $\vec r_i=(r_i,\theta_i,\phi_i)$ can be written in spherical coordinates as
\beq
\rho(\vec r\,)=\frac{M_i}{r^2\sin(\theta)}\delta(r-r_i)\delta(\theta-\theta_i)\delta(\phi-\phi_i)
\eeq
We want to use this example to explore the effect of simple symmetries in multipole expansions. The masses are considered to lie on a circle with radius $R$, so that we can choose $r_i=R$ and $\theta_i=\pi/2$. Together with Equation (\ref{eq:pointrelY}), we find
\beq
\begin{split}
R_l^m(r=R,\theta=\pi/2,\phi) &= R^lK_l^m\e^{\irm m\phi}\\
K_l^m &= \begin{cases} 0 & l+m\quad\mbox{odd} \\ \frac{1}{2^l}(-1)^{(l+m)/2}\dfrac{\sqrt{(l+m)!(l-m)!}}{((l+m)/2)!((l-m)/2)!} & l+m\quad\mbox{even} \end{cases}
\end{split}
\eeq
Therefore, the exterior multipole moments of a point mass $M_i$ at $\vec r_i=(R,\pi/2,\phi_i)$ can be written
\beq
X_l^m(\vec r_i)=M_iR^lK_l^m\e^{\irm m\phi_i}
\eeq
This result means that all multipole moments of a point mass with odd $l+m$ vanish, whereas the moments with even $l+m$ are nonzero with magnitudes independent of $\phi_i$. Now we consider two point masses at antipodal locations $\phi_1=0,\,\phi_2=\pi$ at the same distance $R$ from the origin and with the same mass $M$.
The multipole moments are given by
\beq
X_l^m=MR^lK_l^m\left(1+(-1)^m\right)
\eeq
Therefore, $m$ needs to be even for non-vanishing multipole moments, which also means that $l$ needs to be even.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.3\textwidth]{ChpA-PointMasses.png}}
\caption[Point masses in a plane]{Symmetric configuration of point masses in a plane.}
\label{fig:pointring}
\end{figure}}
As a last example, we add two more point masses, so that the configuration now consists of four equal masses at $\phi_1=0,\,\phi_2=\pi/2,\,\phi_3=\pi,\,\phi_4=3\pi/2$. The multipole moments are
\beq
X_l^m=MR^lK_l^m\left(1+(-1)^m\right)\left(1+\irm^m\right)
\eeq
Now $m$ needs to be a multiple of 4 to generate a non-vanishing moment, and $l$ needs to be even as in the previous case. For $N$ point masses, we have
\beq
X_l^m=MR^lK_l^m\dfrac{1-\e^{2\pi\irm m}}{1-\e^{2\pi\irm m/N}}
\eeq
The fraction is equal to $N$ for $m$ being a multiple of $N$ (including $m=0$), and 0 otherwise. As before, $l+m$ needs to be even for non-vanishing $K_l^m$. The limit $N\rightarrow\infty$ turns the collection of point masses into a continuous ring, which can be obtained as a finite limit by expressing the individual point mass in terms of the total mass of the ring as $M=M_{\rm ring}/N$. In this case, only the $m=0$ moments do not vanish. This is a property of the multipole moments of all axially symmetric mass distributions provided that the angle $\theta$ is measured with respect to the symmetry axis. The only non-vanishing moment of a spherically symmetric mass distribution with total mass $M$ is $X_0^0=M$.

\subsection{Clebsch-Gordan coefficients}
\label{sec:clebsch}\index{Clebsch-Gordan coefficients}
Clebsch-Gordan coefficients $\langle l_1,m_1;l_2,m_2|L,M\rangle$ are required for the bipolar expansion discussed in Section \ref{sec:momentinter}. In general, they can be calculated recursively according to
\beq
\begin{split}
& C_\pm(L,M)\langle l_1,m_1;l_2,m_2|L,M\pm 1\rangle = \\
&\qquad C_\pm(l_1,m_1\mp 1)\langle l_1,m_1\mp 1;l_2,m_2|L,M\rangle+C_\pm(l_2,m_2\mp 1)\langle l_1,m_1;l_2,m_2\mp 1|L,M\rangle
\end{split}
\label{eq:recclebsch}
\eeq
where the integer parameters can assume the values $l_1\geq 0$, $l_2\geq 0$, $m_1=-l_1,\ldots, l_1$, $m_2=-l_2,\ldots, l_2$, $|l_1-l_2|\leq L\leq l_1+l_2$, $M=m_1+m_2$ and
\beq
C_\pm(l,m)\equiv\sqrt{l(l+1)-m(m\pm 1)},
\eeq
in the Condon-Shortley phase convention. The Clebsch-Gordan coefficients obey the orthogonality relation:
\beq
\sum\limits_{m_1+m_2=M}\langle L,M|l_1,m_1;l_2,m_2\rangle\langle l_1,m_1;l_2,m_2|L,M\rangle=1
\label{eq:normclebsch}
\eeq
A practical method to calculate the coefficients using the recursion relation is based on a graphical scheme, which we are going to outline with the help of Figure \ref{fig:clebsch} for the case of $l_1=l_2=1$.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.6\textwidth]{ChpA-ClebschGordan.pdf}}
\caption[Clebsch-Gordan coefficients]{Illustration of the recursion relation for Clebsch-Gordan coefficients: $l_1,\,l_2=1$.}
\label{fig:clebsch}
\end{figure}}
The diagram shows a table with row index $m_2=-1,0,1$, and column index $m_1=-1,0,1$. The points in the diagram represent Clebsch-Gordan coefficients, which are zero unless $M=m_1+m_2$. We know that the upper right corner must belong to $L=2$ since $M=2$. The two points marked with solid, red rings either belong to $L=2$ or $L=1$. Let us pick the value $L=1$ as an example.
Only the filled points represent possible coefficients in this case with $M=-1,0,1$. Now, inserting $M=1$ into Equation (\ref{eq:recclebsch}) and choosing the lower sign, the recursion relation links three points, for example the three marked with dashed, green rings. If the values of two points of a triangle are known, then the value of the third can be calculated. If we choose the point $m_1=1,\,m_2=0$ as the upper corner of such a triangle, then the recursion relation only involves two coefficients, since the lower-right corner of the triangle lies off the diagram and is therefore zero. Starting from there, one can fill in the values of all other points using the recursion relation. The orientation of the triangle formed by the green marked points can be flipped across the diagonal by using the other sign in Equation (\ref{eq:recclebsch}). We can set the value of one coefficient equal to 1, and later use Equation (\ref{eq:normclebsch}) to give all coefficients the correct normalization. Equation (\ref{eq:normclebsch}) says that the sum of squares of coefficients along a $M=\rm const$ diagonal is equal to 1. Note that all coefficients need to be recalculated for a different value of $L$. Nonetheless, the procedure is straight-forward, and one only needs to set up a new diagram for each combination of values of $l_1$, $l_2$.
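The coefficients obtained from the diagram can be cross-checked with a computer-algebra system. The following Python sketch (assuming SymPy is available; its \texttt{CG} class follows the standard Condon-Shortley convention used here) tabulates all non-zero combinations for $l_1=l_2=1$:
\begin{verbatim}
from sympy.physics.quantum.cg import CG

# Tabulate <1,m1;1,m2|L,M> for l1 = l2 = 1, to compare with the values
# obtained from the graphical scheme of the figure above.
for L in range(3):
    for M in range(-L, L + 1):
        for m1 in (-1, 0, 1):
            m2 = M - m1
            if abs(m2) > 1:
                continue
            c = CG(1, m1, 1, m2, L, M).doit()
            print(f"<1,{m1:+d};1,{m2:+d}|{L},{M:+d}> = {c}")
\end{verbatim}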
\subsection{Noise characterization in frequency domain}
\label{sec:noisefreq}
In this section, we give a brief introduction to frequency-domain functions used to characterize random processes. We will assume throughout this section that the random processes are Gaussian and stationary. Gaussianity implies that variances, correlations, and their spectral variants, i.~e.~power spectral densities and cross spectral densities, give a complete characterization of the noise. The role of stationarity is explained below. The presented equations can still be useful in practice when noise is non-stationary or non-Gaussian, but then one needs to be more careful than we want to be in this article. For stationary random processes the auto-correlation between measurements at two different times $t_1,\,t_2$ is only a function of the difference $\tau=t_2-t_1$. In this case, the power spectral density can be defined as the Fourier transform of the auto-correlation with respect to $\tau$:\index{spectral density}
\beq
S(x;\omega)=2\int\drm\tau \langle x(t)x(t-\tau)\rangle \e^{-\irm\omega\tau}
\label{eq:defspecdens}
\eeq
This equation assumes stationary noise $x(t)$. If noise is non-stationary, then the spectrum $S(x;\omega)$ explicitly depends on the time $t$. Another property of stationary noise is that Fourier amplitudes of the random process at different frequencies are uncorrelated:
\beq
\langle x(\omega)x^*(\omega')\rangle=2\pi \frac{1}{2}S(x;\omega)\delta(\omega-\omega')
\label{eq:gaussian}
\eeq
The left-hand side is an ensemble average over many noise realizations. Since a stationary random process has a constant expectation value of its noise power for all times $-\infty<t<\infty$, its Fourier transform does not exist strictly speaking. This is the reason why the right-hand side involves a $\delta$-distribution. A more suitable form is obtained by integrating the last equation over the frequency $\omega'$. Considering the product of Fourier amplitudes of two different random processes, one obtains
\beq
2\int\limits_0^\infty\frac{\drm\omega'}{2\pi}\langle x(\omega)y^*(\omega')\rangle= S(x,y;\omega)
\eeq
The cross spectral density\index{spectral density!cross} $S(x,y;\omega)$ is equal to the Fourier transform of the cross-correlation $\langle x(t)y(t-\tau)\rangle$. In this article, the cross spectral density will often be denoted as $\langle x(\omega),y(\omega)\rangle$ and referred to as correlation function. A typical case is that the two quantities $x(t),\,y(t)$ represent measurements of a field at two potentially different locations. In this case, the correlation function can be cast into the form
\beq
\langle x(\vec r_1,\omega),x(\vec r_2,\omega)\rangle= S(x;\omega)\mathpzc{r}(\vec r_1,\vec r_2\,)
\label{eq:twopoint}
\eeq
with $\mathpzc{r}(\vec r,\vec r\,)=1$. In practice, correlation functions are calculated based on plane-wave (or normal-mode) solutions. A field can then be represented as a superposition of plane waves, and field correlations are obtained by averaging the plane-wave correlations over wave parameters such as propagation directions $\vec e_k$ and polarizations $\vec p$. If the random field is isotropic, stationary, and unpolarized, then different modes $\vec k,\,\vec p$ are uncorrelated \cite{AlRo1999}. Consequently, we can focus on correlations between waves that are described by the same parameters:
\beq
\langle x_{\vec k,\vec p}(\vec r_1,\omega),y_{\vec k,\vec p}(\vec r_2,\omega)\rangle = S(x,y;\omega)\mathpzc{s}(\vec k,\vec p)\e^{\irm \vec k\cdot(\vec r_2-\vec r_1)}
\eeq
Note that this expression is evaluated for fixed wave parameters, and the only random quantities in this equation are the complex (scalar) amplitudes of the two waves. As a next step, we consider the field as a superposition of waves with random polarization and propagation direction. Averaging over directions and polarizations, we find
\beq
\langle x_k(\vec r_1,\omega),y_k(\vec r_2,\omega)\rangle = S(x,y;\omega)\frac{1}{4\pi P}\int\drm\vec p\int\drm\Omega_k\,\mathpzc{s}(\vec k,\vec p)\e^{\irm \vec k\cdot(\vec r_2-\vec r_1)}
\label{eq:twoproccorr}
\eeq
Here, $P\equiv\int\drm\vec p$ is the measure of the integral over all polarization parameters, and since the number of polarizations is finite, the integral can also be rewritten as a sum over polarizations. The last equation is formally identical to the definition of the so-called overlap reduction function, which describes correlations between measurements of a stochastic GW background at two different locations \cite{Chr1992,Fla1993}. If the two random processes represent measurements of the same (scalar) field, then together with Equation (\ref{eq:twopoint}), we have
\beq
\mathpzc{r}(\vec r_1,\vec r_2\,)=\frac{1}{4\pi P}\int\drm\vec p\int\drm\Omega_k\,\mathpzc{s}(\vec k,\vec p)\e^{\irm \vec k\cdot(\vec r_2-\vec r_1)}
\eeq
Two-point correlation functions can have rich structure if the two random processes in Equation (\ref{eq:twoproccorr}) represent projections of a vector or tensor field at two different locations. Examples of this case can be found in Section \ref{sec:cohcancel}.
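In practice, spectral densities are estimated from finite data segments. The following Python sketch (NumPy/SciPy; the two channels are synthetic white-noise processes sharing a common component, and all parameters are ad hoc) estimates one-sided spectral densities by Welch averaging and forms the coherence used later in Section \ref{sec:cohcancel}:
\begin{verbatim}
import numpy as np
from scipy import signal

# Two noisy channels sharing a common stationary component.
fs, n = 256.0, 2**18
rng = np.random.default_rng(0)
common = rng.standard_normal(n)
x = common + 0.3 * rng.standard_normal(n)
y = common + 0.3 * rng.standard_normal(n)

# One-sided spectral densities via Welch averaging; note that SciPy
# works with frequency f in Hz rather than angular frequency omega.
f, Sxx = signal.welch(x, fs=fs, nperseg=4096)
f, Syy = signal.welch(y, fs=fs, nperseg=4096)
f, Sxy = signal.csd(x, y, fs=fs, nperseg=4096)

coh = np.abs(Sxy) / np.sqrt(Sxx * Syy)   # coherence of the two channels
print(np.median(coh))                    # close to 1/1.09, about 0.92
\end{verbatim}
Conventions for one-sided densities and for the factor of 2 must of course be matched to Equation (\ref{eq:defspecdens}) when absolute spectral levels matter.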
\section{Newtonian-Noise Mitigation}
\label{sec:mitigate}
\index{mitigation!Newtonian noise}
In early sensitivity plots of GW detectors, Newtonian noise was sometimes included as infrastructure noise. This meant that it was considered a form of noise that cannot be mitigated in a straight-forward manner, except maybe by changing the detector site or applying other major changes to the infrastructure. Today however, some form of Newtonian-noise mitigation is part of every design study and planning for future generations of GW detectors, and it is clear that mitigation techniques will have a major impact on the future direction of ground-based GW detection. The first to mention strategies of seismic Newtonian-noise mitigation ``by modest amounts'' were Hughes \& Thorne \cite{HuTh1998}. Their first idea was to use arrays of dilatometers in boreholes and seismometers at the surface to monitor the seismic field, and to use the sensor data for a coherent subtraction of Newtonian noise. The seismic channels serve as input to a linear filter, whose output is then subtracted from the target channel (i.~e.~the data of a GW detector). The output of an optimal filter can be interpreted as the best possible linear estimate of gravity perturbations based on seismic data. This method will be discussed in Section \ref{sec:cohcancel}. The second idea was to construct narrow moats around the test masses that reflect incoming Rayleigh waves and therefore reduce seismic disturbances and associated gravity perturbations. As they already recognized in their paper, and as will be discussed in detail in Section \ref{sec:shieldseismic}, moats must be very deep (about 10\,m for the LIGO and Virgo sites). They are also less effective at reducing Newtonian noise from body waves. The idea of coherent cancellation of seismic Newtonian noise has gained popularity in the GW community, probably because it is based on techniques that have already been implemented successfully in GW detectors to mitigate other forms of noise \cite{GiEA2003,DrEA2012,DeEA2012}. These techniques are known as \emph{active} noise mitigation\index{mitigation!active}. It is mostly considered a means to reduce seismic Newtonian noise, but the same scheme may also be applied to atmospheric Newtonian noise (see especially Section \ref{sec:gravimeterNN}) and possibly also other forms of gravity perturbations. While, for example, active seismic isolation cancels seismic disturbances before they reach the final suspension stages of a test mass, gravity perturbations have to be cancelled in the data of the GW detector. Coherent cancellation comes without (known) ultimate limitations, which means that in principle any level of noise reduction can be achieved provided that the environmental sensors are sufficiently sensitive, and one can deploy as many sensors as required. The prediction by Hughes \& Thorne of a modest noise reduction rather reflects a vision of a practicable solution at the time the paper was written. The first detailed study of coherent Newtonian-noise cancellation was carried out by Cella \cite{Cel2000}, who studied the Wiener-filter scheme. Wiener filters are based on observed mutual correlations between environmental sensors and the target channel. The Wiener filter is the optimal linear solution to reduce variance in a target channel as explained in Section \ref{sec:Wiener}. The goal of a cancellation scheme can be different though, e.~g.~reduction of a stationary noise background in non-stationary data. The focus in Section \ref{sec:cohcancel} will also lie on Wiener filters, but limitations will be demonstrated, and the creation of optimal filters using real data is mostly an open problem.
Techniques to mitigate Newtonian noise without using environmental data are summarized under the category of \emph{passive} Newtonian-noise mitigation\index{mitigation!passive}. Site selection is the best understood passive mitigation strategy. The idea is to identify the quietest detector site in terms of seismic noise and possibly atmospheric noise, which obviously needs to happen before the construction of the detector as part of a site-selection process. The first systematic study was carried out for the Einstein Telescope \cite{BeEA2012}, comparing European underground sites. Other important factors play a role in site selection, and therefore one should not expect that future detector sites will be chosen to minimize Newtonian noise, but rather to reduce it to an acceptable level. The current understanding of site selection for Newtonian-noise reduction is reviewed in Section \ref{sec:siteselect}. Other passive noise-reduction techniques are based on building shields against disturbances that cause density fluctuations near the test masses, such as moats and recess structures against seismic Newtonian noise, which are investigated in Section \ref{sec:shieldseismic}.

\subsection{Coherent noise cancellation}
\label{sec:cohcancel}
\index{coherent noise cancellation}
Coherent noise cancellation, also known as \emph{active noise cancellation}, is based on the idea that the information required to model noise in data can be obtained from auxiliary sensors that monitor the sources of the noise. The noise model can then be subtracted from the data in real time or during post-processing with the goal of minimizing the noise. In practice, cancellation performance is limited for various reasons. Depending on the specific implementation, non-stationarity of data, sensor noise, and also signal and other noise in the target channel can limit the performance. Furthermore, the filter that represents the noise model (in the context of Newtonian-noise cancellation, a multiple-input-single-output (MISO) filter whose inputs are the reference channels and whose output is the noise model) also maps sensor noise into the noise model, which means that sensor noise is added to the target channel. It follows that the auxiliary sensors must provide information about the sources with sufficiently high signal-to-noise ratio. The best way to understand the noise-cancellation problem is to think of it as an optimization of extraction of information, subject to constraints. Constraints can exist for the maximum number of auxiliary sensors, for the possible array configurations, and for the amount of data that can be used to calculate the optimal filter. Also the type of filter and the algorithm used to calculate it can enforce constraints on information extraction. There is little understanding of how most of these constraints limit the performance. A well-explored cancellation scheme is based on \emph{Wiener filters} \cite{Vas2001}. Wiener filters are linear filters calculated from correlation measurements between reference and target channels. They are introduced in Section \ref{sec:Wiener}. In the context of seismic or atmospheric Newtonian-noise cancellation, the auxiliary sensors monitor a field of density perturbations, which means that correlation between auxiliary sensors is to be expected.
In this case, if the field is wide-sense stationary (defined in Section \ref{sec:Wiener}), if the target channel is wide-sense stationary, and if all forms of noise are additive, then the Wiener filter is known to be the optimal linear filter for a given configuration of the sensor array \cite{RVRe2013}. In Sections \ref{sec:arrayNNRay} to \ref{sec:arrayNNatm}, the problem is described for seismic and infrasound Newtonian noise. The focus lies on gravity perturbations from fluctuating density fields. Noise cancellation from finite-size sources is mostly a practical problem, and trivial from the theory perspective. The optimization of array configurations for noise cancellation is a separate problem, which is discussed in Section \ref{sec:optimarray}.

\subsubsection{Wiener filtering}
\label{sec:Wiener}
\index{Wiener filter}
A linear, time-invariant filter that produces an estimate of a random stationary (target) process minimizing the deviation between target and estimate is known as a Wiener filter \cite{BSH2008}. It is based on the idea that data from reference channels exhibit some form of correlation to the target channel, which can therefore be used to provide a coherent estimate of certain contributions to the target channel. Strictly speaking, the random processes only need to be wide-sense stationary\index{stationary!wide sense}, which means that noise moments are independent of time up to second order (i.~e.~variances and correlations). Without prior knowledge of the random processes, the Wiener filter itself needs to be estimated. In this section, we briefly review Wiener filtering, and discuss some of its limitations. Two main modes of Wiener filtering exist: filtering in time domain (real-valued) or frequency domain (complex-valued). Let us start with the time-domain filter. Wiener filters require random processes as inputs that are assumed to be correlated with the target process. We will call these reference channels, and collect them as components of a vector $\vec x_n$. The subindex $n$ represents time $t_n=t_0+n\Delta t$, where $\Delta t$ is the common sampling time of the random processes. With discretely sampled data, a straight-forward filter implementation is the convolution with a finite-impulse-response (FIR) filter. These filters are characterized by a filter order $N$. Assuming that we have $M$ reference channels, the FIR filter $\mathbf w$ is an $(N+1)\times M$ matrix with components $w_{nm}$. The convolution assumes the form
\beq
\begin{split}
\hat y_n &= \sum\limits_{k=0}^N \vec w_k\cdot\vec x_{n-k}\\
&\equiv \mathbf w\circ\vec x_n
\end{split}
\eeq
where the dot-product is with respect to the $M$ reference channels. This equation implies that there is only one target channel $y_n$, in which case the FIR filter is also known as a multiple-input-single-output (MISO) filter. We have marked the filter output with a hat to indicate that it should be interpreted as an estimate of the actual target channel. The coefficients of the Wiener filter can be calculated by demanding that the mean-square deviation $\langle (y_n-\hat y_n)^2\rangle$ between the target channel and filter output is minimized, which directly leads to the Wiener-Hopf equations\index{Wiener-Hopf equations}:
\beq
\mathbf C_{xx}\cdot\vec w(:)=\vec C_{xy}
\label{eq:wienerhopf}
\eeq
The Wiener-Hopf equations are a linear system of equations that determine the filter coefficients. Here, $\vec w(:)$ is the $(N+1)M$-dimensional vector that is obtained by concatenating the $M$ columns of the matrix $\mathbf w$.
The $(N+1)M\times (N+1)M$ matrix $\mathbf C_{xx}$ is the cross-correlation matrix between reference channels. Correlations must be evaluated between all samples of all reference channels whose sample times differ at most by $N\Delta t$. It contains the autocorrelations of each reference channel as $(N+1)\times (N+1)$ blocks on its diagonal:
\beq
\mathbf C_{xx}^{\rm auto}= \left(\begin{matrix} c_0 & c_1 & \cdots & c_N \\ c_1 & c_0 & \cdots & c_{N-1} \\ \vdots & \vdots & \ddots & \vdots \\ c_N & c_{N-1} & \cdots & c_0 \end{matrix}\right)
\eeq
with $c_k\equiv\langle x_n x_{n+k}\rangle$ for each of the $M$ reference channels. In this form it is a symmetric Toeplitz matrix. The $(N+1)M$-dimensional vector $\vec C_{xy}$ is a concatenation of correlations between each reference channel and the target channel. The components contributed by a single reference channel are
\beq
\vec C_{xy}^{\,\rm sgl}=\left(s_0,s_1,\ldots,s_N\right)
\eeq
with $s_k\equiv\langle x_n y_{n+k}\rangle$. Note that we do not assume independence of noise between different reference channels. This is important since there can be forms of noise correlated between reference channels, but uncorrelated with the target channel (e.~g.~shear waves in Newtonian-noise cancellation, see Section \ref{sec:arrayNNP}). In general, the correlations that determine the Wiener-Hopf equations are unknown and need to be estimated from measurements using data from reference and target channels. An elegant implementation of code that provides these estimates and solves the Wiener-Hopf equations can be found in \cite{Pep2007}. The residual of the target channel after subtraction of $\hat y_n$ is given by
\beq
r_n = y_n-\mathbf w\circ\vec x_n
\label{eq:reswiener}
\eeq
This equation summarizes the concept of coherent noise cancellation. In the context of Newtonian-noise subtraction, the target channel $y_n$ corresponds to the GW strain signal contaminated by Newtonian noise, and $\hat y_n$ is the estimate of Newtonian noise provided by the Wiener filter using reference data from seismometers or other sensors. Time-domain Wiener filters were successfully implemented in GW detectors for the purpose of noise reduction \cite{DrEA2012,DeEA2012}. Results from a time-domain simulation of Newtonian-noise cancellation using Wiener filters were presented in \cite{DHA2012}. Not all coherent cancellation schemes are necessarily implemented as Wiener filters. For example, in \cite{GiEA2003}, noise cancellation was optimized by solving a system-identification problem. A frequency-domain version of the Wiener filter can be obtained straight-forwardly by dividing the data into segments and calculating their Fourier transforms. Equation (\ref{eq:reswiener}) translates into a segment-wise noise cancellation where $n$ stands for a double index to specify the segment and the discrete frequency (also known as frequency bin). For stationary random processes, correlations between noise amplitudes at different frequencies are zero (keep in mind that a stationary random process does not have a Fourier transform in the strict sense, and therefore this statement requires a suitable definition of these amplitudes, see Section \ref{sec:noisefreq} and \cite{RoPi1955}). This means that coherent noise cancellation in frequency domain can be done on each frequency bin separately, which is numerically much less demanding, and more accurate since the dimensionality of the system of equations in Equation (\ref{eq:wienerhopf}) is reduced from $(N+1)M$ to $M$ for each frequency bin. In contrast, time-domain correlations $c_k,\,s_k$ can be large for small values of $k$. This can cause significant numerical problems when solving the Wiener-Hopf equations, and as observed in \cite{CoEA2014}, FIR filters of lower order can be more effective (even though theoretically, increasing the filter order should not make the cancellation performance worse).
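As an illustration of Equations (\ref{eq:wienerhopf}) and (\ref{eq:reswiener}), the following Python sketch (NumPy/SciPy; a synthetic single reference channel, with the ``plant'' response, noise level, and filter order chosen arbitrarily) estimates the correlations $c_k$ and $s_k$ from data, solves the Wiener-Hopf equations exploiting the Toeplitz structure, and evaluates the residual:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_toeplitz

# Synthetic data: target y is a filtered version of reference x plus noise.
rng = np.random.default_rng(1)
n, N = 2**16, 32                   # number of samples, FIR filter order
x = rng.standard_normal(n)
h = np.exp(-np.arange(8) / 2.0)    # "plant" response, unknown to the filter
y = np.convolve(x, h, mode='full')[:n] + 0.1 * rng.standard_normal(n)

# Empirical correlations c_k = <x_n x_{n+k}> and s_k = <x_n y_{n+k}>.
c = np.array([np.mean(x[:n-k] * x[k:]) for k in range(N + 1)])
s = np.array([np.mean(x[:n-k] * y[k:]) for k in range(N + 1)])

# Wiener-Hopf equations for M = 1: C_xx is symmetric Toeplitz.
w = solve_toeplitz(c, s)

yhat = np.convolve(x, w, mode='full')[:n]   # filter output
r = y - yhat                                # residual, Eq. (reswiener)
print(np.var(r) / np.var(y))                # residual power ratio << 1
\end{verbatim}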
It should be noted that coherence between channels needs to be very high even for ``modest'' noise cancellation. The ideal suppression factor $s(\omega)$ as a function of frequency in the case of a single reference channel is related to the reference-target coherence $c(\omega)$ via
\beq
s(\omega)=\frac{1}{\sqrt{1-c(\omega)^2}}
\label{eq:singlechcoh}
\eeq
where the coherence is defined in terms of the spectral densities (see Section \ref{sec:noisefreq}):
\beq
c(\omega) = \frac{S(x,y;\omega)}{(S(x;\omega)S(y;\omega))^{1/2}}
\eeq
If the coherence between reference and target channels at some frequency is $\{0.9, 0.99, 0.999\}$, then the residual amplitude spectrum at that frequency will ideally be reduced by factors $\{2.3, 7.1, 22\}$, respectively. These numbers clearly do not pose a limit to cancellation with multiple reference sensors. Trivially, cancellation using $M$ collocated, identical sensors leads at least to a $\sqrt{M}$ reduction of the sensor-noise limit. If the $M$ sensors monitor a field whose values at nearby points are dynamically correlated (i.~e.~the two-point spatial correlation is not just a $\delta$-peak), then further gain is to be expected, for example, from being able to distinguish between modes of the field that produce correlation with the target channel, and modes that do not. This will be discussed in detail in Section \ref{sec:arrayNNP} and Section \ref{sec:optimarray}.

\subsubsection{Cancellation of Newtonian noise from Rayleigh waves}
\label{sec:arrayNNRay}\index{active noise cancellation!Rayleigh waves}
As we have seen in Section \ref{sec:Wiener}, the correlations between reference channels and the target channel determine the Wiener filter. For seismic fields, correlations between reference channels (seismometers) can be measured, but we still need a model consistent with the seismic correlations that provides the correlation with the gravity channel. Obviously, as long as 2D arrays are used for the characterization of the seismic field, the predicted correlation with the gravity channel can be subject to systematic errors. For example, we will have to guess the types of seismic waves that contribute to the seismic correlations. In this section, the problem will be solved assuming that all seismic waves are Rayleigh waves. Here, we will also discuss the cancellation problem in low-frequency detectors explicitly since it is qualitatively different. It is assumed that the seismic correlation is known, and the problem is to calculate the correlation with the gravity channel. We will consider the case of a homogeneous, but not necessarily isotropic seismic field. In this case, we can choose to evaluate the gravity acceleration at $\vec\varrho_0=\vec 0$, and its correlation with the vertical surface displacement at $\vec \varrho=(x,y)$.
For the gravity acceleration along the $x$-axis measured at height $h$ above the surface, the correlation is given by (see Section \ref{sec:spatialcorr} for the spectral representation of noise)
\beq
\begin{split}
\langle \delta a_x(\vec 0,\omega), \xi_z(\vec \varrho\,,\omega)\rangle &= -2\pi\irm G\rho_0\gamma(\nu)\int\frac{\drm^2k}{(2\pi)^2}S(\xi_z;\vec k_\varrho,\omega)\frac{k_x}{k_\varrho}\e^{-hk_\varrho}\e^{\irm\vec k_\varrho\vec\varrho}\\
&=G\rho_0\gamma(\nu)\int\drm^2\varrho'\,C(\xi_z;\vec \varrho\,',\omega) \frac{x-x'}{(h^2+|\vec\varrho-\vec\varrho\,'|^2)^{3/2}}
\end{split}
\label{eq:kernelRayaxi}
\eeq
The two-dimensional kernel of the integral in the second line is plotted in Figure \ref{fig:kernelRayaxi}. The important coordinate range of the correlation function lies around the two extrema at $x'=x\pm h/\sqrt{2}$ and $y'=y$.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.8\textwidth]{Chp7-kernelRay_a_xi.png}}
\caption[Homogeneous displacement-gravity kernel for Rayleigh fields]{Homogeneous displacement-gravity kernel for Rayleigh fields according to Equation (\ref{eq:kernelRayaxi}).}
\label{fig:kernelRayaxi}
\end{figure}}
Next, we will consider the explicit example of an isotropic Rayleigh-wave field. The easiest way to obtain the result is to insert the known solution of the wavenumber spectrum, Equation (\ref{eq:specRayIso}), into the first line of Equation (\ref{eq:kernelRayaxi}), which gives:
\beq
\langle \delta a_x(\vec 0,\omega), \xi_z(\vec \varrho\,,\omega)\rangle =2\pi G\rho_0\gamma(\nu) S(\xi_z;\omega)\e^{-hk_\varrho^{\rm R}}\cos(\phi)J_1(k_\varrho^{\rm R}\varrho)
\label{eq:corraxiRayiso}
\eeq
Interestingly, the correlation between vertical displacement and the gravity perturbation vanishes for $\varrho = 0$. This is a consequence of the fact that any elastic perturbation of the ground must fulfill the wave equation. If instead the ground were considered as a collection of infinitely many point masses without causal link, then the displacement of the point masses nearest to the test mass would show the strongest correlation with the gravity perturbation. Since the purpose of this section is to evaluate and design a coherent noise cancellation of gravity perturbations in $x$-direction, one may wonder why the correlation with the vertical surface displacement is used, and not with the displacement along the direction of the $x$-axis. The reason is that, in general, horizontal seismic motion of a flat surface correlates weakly with gravity perturbations produced at the surface. Other waves such as horizontal shear waves can produce horizontal surface displacement without perturbing gravity. Vertical surface displacement always perturbs gravity, no matter by what type of seismic wave it is produced. The situation is different underground as we will see in Section \ref{sec:arrayNNP}. Notice that the results so far can only be applied to the case where Newtonian noise is uncorrelated between different test masses. In future GW detectors that measure signals below 1\,Hz, correlation of seismic Newtonian noise between two test masses can be very high since the seismic wavelength is much larger than the dimension of the detector.
Since the only position dependence in Equation (\ref{eq:RayleighNN}) is the phase term $\exp(\irm \vec k_\varrho\cdot\vec\varrho_0)$, the differential acceleration between two test masses is governed by the difference of the phase terms at the two test masses, which simplifies to $\irm \vec k_\varrho\cdot\vec L$ when the distance $L$ between the test masses is much smaller than the length of the Rayleigh wave. Considering the case in which the directions of acceleration and separation coincide, the correlation is given by
\beq
\begin{split}
\langle(\delta a_x(L\vec e_x,\omega)-&\delta a_x(\vec 0,\omega))/L, \xi_z(\vec \varrho\,,\omega)\rangle_{\rm low-f} \\
&=2\pi G\rho_0\gamma(\nu)\int\frac{\drm^2k}{(2\pi)^2}S(\xi_z;\vec k_\varrho,\omega)\frac{k_x^2}{k_\varrho}\e^{-hk_\varrho}\e^{\irm\vec k_\varrho\vec\varrho}\\
&=G\rho_0\gamma(\nu)\int\drm^2\varrho'\,C(\xi_z;\vec \varrho\,',\omega) \frac{h^2-3(x-x')^2+|\vec\varrho-\vec\varrho\,'|^2}{(h^2+|\vec\varrho-\vec\varrho\,'|^2)^{5/2}}
\end{split}
\label{eq:kernelRayaxiLow}
\eeq
The maximum of the kernel lies at the origin $x'=x$, $y'=y$, independent of the test-mass height. Now, for the homogeneous and isotropic field, the solution with respect to the strain acceleration reads
\beq
\begin{split}
\langle(\delta a_x(L\vec e_x,\omega)-&\delta a_x(\vec 0,\omega))/L, \xi_z(\vec \varrho\,,\omega)\rangle_{\rm low-f} \\
&=\pi G\rho_0\gamma(\nu) S(\xi_z;\omega)\e^{-hk_\varrho^{\rm R}}(J_0(k_\varrho^{\rm R}\varrho)-\cos(2\phi)J_2(k_\varrho^{\rm R}\varrho))
\end{split}
\label{eq:corraxiRayisoLow}
\eeq
Here, the correlation does not vanish in the limit $\varrho\rightarrow 0$. Also notice that the result is independent of the distance $L$. This is the typical situation for strain quantities at low frequencies since the differential signal is proportional to the distance, which then cancels in the strain variable when dividing by $L$. We have seen this already in Section \ref{sec:lowfNNRay}.
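The different behavior of the two correlation functions near $\varrho=0$ is easily visualized. The following Python sketch (SciPy Bessel functions; distances in units of the Rayleigh wavelength, azimuth $\phi=0$, and the common prefactor $2\pi G\rho_0\gamma(\nu)S(\xi_z;\omega)\e^{-hk_\varrho^{\rm R}}$ divided out) evaluates Equations (\ref{eq:corraxiRayiso}) and (\ref{eq:corraxiRayisoLow}):
\begin{verbatim}
import numpy as np
from scipy.special import jv

# Correlations along phi = 0 in units of the common prefactor;
# distances are given in units of the Rayleigh wavelength.
k = 2 * np.pi                                  # k_rho^R for lambda_R = 1
rho = np.linspace(0.0, 2.0, 9)
single = jv(1, k * rho)                        # Eq. (corraxiRayiso)
strain = 0.5 * (jv(0, k*rho) - jv(2, k*rho))   # Eq. (corraxiRayisoLow)
for r_, s_, g_ in zip(rho, single, strain):
    print(f"rho = {r_:4.2f}  single mass: {s_:+.3f}  strain: {g_:+.3f}")
\end{verbatim}
At $\varrho=0$ the single-test-mass correlation vanishes while the strain correlation is finite, as stated above.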
At this point, we have the required analytical expressions to evaluate the performance of Wiener filters. The goal is to derive equations that allow us to calculate the performance of the Wiener filter, given a specific array configuration and seismometer self noise. We also want to know whether it is possible to use the results to design optimal array configurations based on seismic correlation measurements alone. First, we continue with the specific example of a homogeneous and isotropic field, and a single test mass. Since the Wiener filter is based on measured correlations between seismometers and a gravity channel, we need to introduce the seismometer self noise. It is convenient to express the noise in terms of the signal-to-noise ratio $\sigma(\omega)$ with respect to measurements of seismic displacement. According to Equation (\ref{eq:corrRayiso}), the correlation between two seismometers at locations $\vec\varrho_i,\,\vec\varrho_j$ can then be written
\beq
C_{\rm SS}^{ij}(\xi_z;\omega)=S(\xi_z;\omega)\left(J_0(k_\varrho^{\rm R}|\vec\varrho_i-\vec\varrho_j|)+\frac{\delta_{ij}}{\sigma_i^2(\omega)}\right)
\eeq
Seismometer self noise is assumed to be uncorrelated among different seismometers and with the gravity channel. The correlations between all seismometers form (a frequency-domain version of) the correlation matrix $\mathbf C_{xx}$ in Equation (\ref{eq:wienerhopf}). The correlation of each seismometer with the gravity perturbation will be denoted as
\beq
C_{\rm SN}^i(\xi_z,\delta a_x;\omega)\equiv 2\pi G\rho_0\gamma(\nu) S(\xi_z;\omega)\e^{-hk_\varrho^{\rm R}}\cos(\phi_i)J_1(k_\varrho^{\rm R}\varrho_i)
\eeq
Subtracting the output of a Wiener filter leaves a residual, whose spectrum relative to the original gravity spectrum $C_{\rm NN}(\omega)=S(\delta a_x;\omega)$ is \cite{Cel2000}
\beq
R(\omega)=1-\frac{\vec C_{\rm SN}^{\,\rm T}(\omega)\cdot(C_{\rm SS}(\omega))^{-1}\cdot\vec C_{\rm SN} (\omega)}{C_{\rm NN}(\omega)}
\label{eq:residualNN}
\eeq
A simple question to answer is where a single seismometer should be placed to minimize the residual. In this case, the residual spectrum is given by
\beq
R_1(\omega) = 1-\frac{2\cos^2(\phi_1)J_1^2(k_\varrho^{\rm R}\varrho_1)}{1+1/\sigma_1^2(\omega)}
\label{eq:residualNNRay1}
\eeq
Since the fraction is always positive and smaller than 1, it needs to be maximized. This means that $\phi_1=0$ or $\pi$, and $\varrho_1$ is chosen to maximize the value of the Bessel function (see the numerical sketch at the end of this section). In the presence of $N>1$ seismometers, the optimization problem is non-trivial. The optimal array configuration fulfills the relation
\beq
\nabla^N(\vec C_{\rm SN}^{\,\rm T}(\omega)\cdot(C_{\rm SS}(\omega))^{-1}\cdot\vec C_{\rm SN} (\omega))=\vec 0,
\eeq
where $\nabla^N$ contains $2N$ derivatives with respect to the two horizontal coordinates of the $N$ seismometers. Already with a few seismometers, it becomes very challenging to find numerical solutions to this equation (see Section \ref{sec:optimarray}). An easier procedure that we want to illustrate now is a step-wise optimal placement of seismometers. In other words, one after the other, seismometers are added at the best locations, with all previous seismometers having fixed positions.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.3\textwidth]{Chp7-RaySub1.png} \includegraphics[width=0.3\textwidth]{Chp7-RaySub2.png} \includegraphics[width=0.3\textwidth]{Chp7-RaySub3.png}}
\caption[Step-wise optimal array for Rayleigh Newtonian noise]{Step-wise optimal placement of seismometers for Wiener filtering of Rayleigh Newtonian noise. Maxima indicate best placement. Left: optimal location(s) of first seismometer. Middle: optimal location of second seismometer. Right: optimal location of third seismometer.}
\label{fig:arrayRay}
\end{figure}}
The procedure can be seen in Figure \ref{fig:arrayRay}. The first seismometer must be placed at $x_1=\pm 0.3\lambda^{\rm R}$ and $y_1=0$. We choose the side with the positive $x$-coordinate. Assuming a signal-to-noise ratio of $\sigma=10$, the single-seismometer residual would be 0.38. The second seismometer needs to be placed at $x_2=-0.28\lambda^{\rm R}$ and $y_2=0$, with residual 0.09, and the third seismometer at $x_3=0.75\lambda^{\rm R}$ and $y_3=0$, with residual 0.07. The step-wise optimization described here works for a single frequency since the optimal locations depend on the length $\lambda^{\rm R}$ of a Rayleigh wave. In reality, the goal is to subtract over a band of frequencies, and the seismometer placement should be optimized for the entire band. The result is shown in the left panel of Figure \ref{fig:residualsRay} for a sub-optimal spiral array, and seismometers with frequency-independent $\sigma=100$. The Rayleigh-wave speed is a constant $c_{\rm R}=250\,$m/s. There are three noteworthy features. First, the minimal relative residual lies slightly below the value of the inverse seismometer signal-to-noise ratio.
This is a result of averaging the self noise of different seismometers. Second, the increase of the residuals with $1/\omega$ at low frequencies is a consequence of the finite array diameter; an array cannot analyze waves much longer than its diameter. Third, the residuals grow sharply towards higher frequencies. The explanation is that the array has a finite seismometer density, and therefore, waves shorter than the typical distance between seismometers cannot be analyzed. If the seismic speed is known, then the array diameter and the number of seismometers can be adjusted to meet a subtraction goal in a certain frequency range.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.45\textwidth]{Chp7-Residuals_Rayleigh.pdf} \includegraphics[width=0.45\textwidth]{Chp7-Residuals_Rayleigh_Grad.pdf}}
\caption[Broadband Rayleigh Newtonian-noise residuals]{Residuals from Wiener noise cancellation using spiral seismometer arrays. The curves show the residuals for different numbers $N$ of seismometers, and also different spiral radii. In all cases, the spirals have two full windings. Left: subtraction of uncorrelated test-mass noise. Right: subtraction of gravity-gradient noise typical for low-frequency detectors.}
\label{fig:residualsRay}
\end{figure}}
Residuals are also shown in the right plot of Figure \ref{fig:residualsRay} after subtraction of gravity-gradient noise (i.~e.~the low-frequency case). The sensor signal-to-noise ratio is the same as before. As we have seen in Section \ref{sec:lowfNNRay}, Newtonian noise is suppressed at low frequencies in gravity strainmeters due to common-mode rejection of correlated gravity perturbations between two test masses. However, as soon as coherent cancellation is required, one has to pay the price for this gain. Each seismometer measures seismic displacement that is similarly correlated with gravity perturbations at both test masses. This means that the dominant part of the seismic data is useless since the corresponding gravity perturbations are rejected as common mode. Therefore, the data provided by the seismic array must make it possible to distinguish between the common-mode and differential noise. The Wiener filter needs to cancel the common-mode noise in the seismic data by combining data from different seismometers. An underlying weak correlation with the differential gravity signal then needs to be sufficient to optimize the Wiener filter for noise cancellation. It can be seen that the common-mode rejection causes the residuals to be higher, but only if the number of seismometers lies below a critical value. With the $N=20$ arrays it is possible to distinguish the common-mode noise from differential noise, and subtraction residuals are similar to those of the standard Newtonian-noise cancellation. However, in all cases, suppression of common-mode noise becomes less efficient at long wavelengths. For this reason, the low-frequency slope of the residual spectra has an additional $1/\omega$, which causes the cancellation to be less broadband. Further results from this analysis can be found in \cite{Har2013}. In the future, it should be analyzed whether an inherently differential seismic sensor, such as a seismic strainmeter, naturally provides the required common-mode rejection of seismic data, leading to more efficient noise subtraction.
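As announced above, the single-seismometer placement is easy to reproduce numerically. The following Python sketch (SciPy; the signal-to-noise ratio is an assumed input, and the residual value follows directly from Equation (\ref{eq:residualNNRay1}) for this idealized single-frequency model) scans the residual over the seismometer distance at $\phi_1=0$:
\begin{verbatim}
import numpy as np
from scipy.special import jv

# Single-seismometer residual, Eq. (residualNNRay1), at phi_1 = 0.
sigma = 10.0                           # assumed seismometer SNR
rho = np.linspace(0.01, 1.5, 2000)     # units of the Rayleigh wavelength
R1 = 1 - 2*jv(1, 2*np.pi*rho)**2 / (1 + 1/sigma**2)
i = np.argmin(R1)
print(f"optimal rho_1 = {rho[i]:.3f} lambda_R, residual R_1 = {R1[i]:.3f}")
\end{verbatim}
The optimum lies at $\varrho_1\approx 0.29\lambda^{\rm R}$, consistent with the placement of the first seismometer in Figure \ref{fig:arrayRay}.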
\subsubsection{Cancellation of Newtonian noise from body waves}
\label{sec:arrayNNP}\index{active noise cancellation!compressional waves}
In this section, the focus lies on noise subtraction in infinite media. As we have seen in Sections \ref{sec:basicsgrav} and \ref{sec:halfspace}, any gravity perturbation can be divided into two parts, one that has the form of gravity perturbations from seismic fields in infinite space, and another that is produced by the surface. Subtraction of the surface part follows the scheme outlined in Section \ref{sec:arrayNNRay} using surface arrays. The additional challenge is that body waves can have a wide range of angles of incidence leading to a continuous range of apparent horizontal speeds, which could affect the array design. In this section, we will investigate the properties of coherent noise cancellation of the bulk contribution. This section is therefore of no relevance to low-frequency GW detectors. The reason is that a low-frequency detector (i.~e.~a sub-Hz detector) can always be considered to be located at the surface with respect to seismic Newtonian noise, since feasible detector depths are only a small fraction of the length of seismic waves. In other words, surface perturbations will always vastly dominate bulk contributions. Since we consider the high-frequency case, we can assume here that Newtonian noise is uncorrelated between test masses. In order to simplify the analysis, only homogeneous and isotropic body-wave fields are considered, without contributions from surface waves. The evaluation of Wiener-filter performance requires the calculation of two-point spatial correlation functions between seismic measurements and gravity measurements. Since gravity perturbations are assumed to be uncorrelated between test masses, we can focus on gravity perturbations at a single test mass. The test mass is assumed to be located underground inside a cavity. We know from Section \ref{sec:bodynoscatt} that gravity perturbations are produced by compressional waves through density perturbations of the medium, and by shear and compressional waves due to displacement of cavity walls. From the theory perspective, cancellation of noise from cavity walls is straightforward and will not be discussed here. More interesting is the cancellation of noise from density perturbations in the medium. A seismic measurement is represented by the projection $\vec e_n\cdot\vec \xi(\vec r,\omega)$, where $\vec e_n$ is the direction of the axis of the seismometer, and the gravity measurement by a similar projection $\vec e_n\cdot\delta\vec a(\vec r,\omega)$. Therefore, the general two-point correlation function depends on the directions $\vec e_1,\,\vec e_2$ of two measurements, and the unit vector $\vec e_{12}$ that points from one measurement location at $\vec r_1$ to the other at $\vec r_2$. The correlation functions are calculated using the formalism presented in \cite{Fla1993,AlRo1999}, developed for correlations between measurements of strain tensors representing gravitational waves. The first step is to obtain an expression of correlations from single plane waves characterized by a certain polarization and direction of propagation, and then to average over all directions. Here our goal is to calculate separate solutions for compressional and shear waves, which means that we only average over the two transverse polarizations for the case of shear waves.
We first calculate the two-point spatial correlation between two seismic measurements of a field composed of P-waves:
\beq
\begin{split}
\langle (\vec e_1\cdot\vec \xi^{\,\rm P}(\vec r_1,\omega)), (\vec e_2\cdot\vec \xi^{\,\rm P}(\vec r_2,\omega))\rangle &= \frac{3S(\xi_n^{\rm P};\omega)}{4\pi}\int\drm\Omega_k(\vec e_1\cdot\vec e_k)(\vec e_2\cdot\vec e_k)\e^{\irm \vec k^{\rm P}\cdot(\vec r_2-\vec r_1)}\\
&= \frac{3S(\xi_n^{\rm P};\omega)}{4\pi}\vec e_1\cdot\left(\int\drm\Omega_k(\vec e_k\otimes\vec e_k)\e^{\irm k^{\rm P}|\vec r_2-\vec r_1|(\vec e_k\cdot\vec e_{12})}\right)\cdot\vec e_2,
\end{split}
\eeq
where $\vec e_k$ is the direction of propagation of a P-wave. The factor 3 accounts for the isotropic distribution of P-wave energy among the three displacement directions: $S(\xi_n^{\rm P};\omega)=S(\xi^{\rm P};\omega)/3$. The integral is carried out easily in spherical coordinates $\theta,\,\phi$ by choosing the $z$-axis parallel to $\vec e_{12}$ so that $\vec e_k\cdot\vec e_{12}=\cos(\theta)$. Instead of writing down the explicit expression of $\vec e_k\otimes\vec e_k$ and evaluating the integral over all of its independent components, one can reduce the problem to two integrals only. The point is that the matrix that results from the integration can in general be expressed in terms of two ``basis'' matrices $\mathbf 1$ and $\vec e_{12}\otimes \vec e_{12}$. For symmetry reasons, it cannot depend explicitly on any other combination of the coordinate basis vectors $\vec e_x\otimes\vec e_x$, $\vec e_x\otimes\vec e_y$, $\ldots$ Expressing the integral as a linear combination of these basis matrices, $P_1(\Phi_{12})\mathbf 1+P_2(\Phi_{12})(\vec e_{12}\otimes \vec e_{12})$ with $\Phi_{12}\equiv k^{\rm P}|\vec r_2-\vec r_1|$, solutions for $P_1(\Phi_{12}),P_2(\Phi_{12})$ can be calculated as outlined in \cite{Fla1993,AlRo1999}, and the correlation function finally reads
\beq
\begin{split}
\langle (\vec e_1\cdot\vec \xi^{\,\rm P}(\vec r_1,\omega)), (\vec e_2\cdot\vec \xi^{\,\rm P}(\vec r_2,\omega))\rangle &= S(\xi_n^{\rm P};\omega)\left(P_1(\Phi_{12})(\vec e_1\cdot\vec e_2)+P_2(\Phi_{12})(\vec e_1\cdot\vec e_{12})(\vec e_2\cdot\vec e_{12})\right)\\[0.3cm]
P_1(\Phi_{12})&=j_0(\Phi_{12})+j_2(\Phi_{12})\\
P_2(\Phi_{12})&=-3j_2(\Phi_{12})
\end{split}
\label{eq:corrPP}
\eeq
A great advantage of this expression is that it is coordinate-independent. P-wave correlation is zero only if the two measurement directions are orthogonal to each other and to the separation vector. For small distances between the two seismometers, correlation is significant only when the two measurement directions are similar. For compressional waves, one also needs to calculate the correlation with gravity perturbations. For the bulk contribution, we can use the gradient of Equation (\ref{eq:gravP}). The analytic form of the correlation is identical to Equation (\ref{eq:corrPP}) since the gravity acceleration is simply a multiple of the seismic displacement. Therefore we can immediately write
\beq
\langle (\vec e_1\cdot\vec \xi^{\,\rm P}(\vec r_1,\omega)), (\vec e_2\cdot\delta\vec a(\vec r_2,\omega))\rangle = 4\pi G\rho_0\langle (\vec e_1\cdot\vec \xi^{\,\rm P}(\vec r_1,\omega)), (\vec e_2\cdot\vec \xi^{\,\rm P}(\vec r_2,\omega))\rangle
\label{eq:corrGravP}
\eeq
Consequently, also here, correlation does not necessarily vanish if gravity acceleration is measured along a direction orthogonal to the seismic displacement.
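As a quick numerical check of Equation (\ref{eq:corrPP}), the following sketch compares the closed-form correlation with a brute-force Monte-Carlo average over propagation directions; the example directions and the normalization by $S(\xi_n^{\rm P};\omega)$ are choices made here for illustration.
\begin{verbatim}
# Numerical check of Eq. (corrPP): closed form vs. direct average over
# P-wave propagation directions. Normalized by S(xi_n^P; omega).
import numpy as np
from scipy.special import spherical_jn

rng = np.random.default_rng(0)

def corr_closed_form(e1, e2, e12, phi12):
    p1 = spherical_jn(0, phi12) + spherical_jn(2, phi12)
    p2 = -3 * spherical_jn(2, phi12)
    return p1 * (e1 @ e2) + p2 * (e1 @ e12) * (e2 @ e12)

def corr_monte_carlo(e1, e2, e12, phi12, n=200000):
    ek = rng.normal(size=(n, 3))
    ek /= np.linalg.norm(ek, axis=1, keepdims=True)   # isotropic directions
    integrand = 3 * (ek @ e1) * (ek @ e2) * np.exp(1j * phi12 * (ek @ e12))
    return integrand.mean().real   # imaginary part vanishes by parity

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1 / np.sqrt(2), 0.0, 1 / np.sqrt(2)])
e12 = np.array([0.0, 0.0, 1.0])
for phi12 in (0.0, 1.0, np.pi):
    print(corr_closed_form(e1, e2, e12, phi12),
          corr_monte_carlo(e1, e2, e12, phi12))
\end{verbatim}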
In contrast to the Rayleigh-wave correlation in Equation (\ref{eq:corraxiRayiso}), the correlation between gravity perturbation and compressional-wave displacement is maximal for $\vec r_1=\vec r_2$, assuming that $\vec e_1=\vec e_2$. Nonetheless, due to the more complex form of the correlation functions of body-wave fields, there are more choices to make when optimizing array configurations. For example, should single-axis seismometers all measure along the relevant direction of gravity perturbations? What do we gain from multi-axis seismometers? These questions still need to be investigated in detail. Next, the S-wave correlation is calculated. Since shear waves can be polarized in two orthogonal transverse directions, we form two polarization matrices in terms of basis vectors of the spherical coordinate system, $\vec e_\theta\otimes\vec e_\theta$, $\vec e_\phi\otimes\vec e_\phi$, and average the integrals over these two matrices. The result is
\beq
\begin{split}
\langle (\vec e_1\cdot\vec \xi^{\,\rm S}(\vec r_1,\omega)), (\vec e_2\cdot\vec \xi^{\,\rm S}(\vec r_2,\omega))\rangle &= S(\xi_n^{\rm S};\omega)\left(S_1(\Phi_{12})(\vec e_1\cdot\vec e_2)+S_2(\Phi_{12})(\vec e_1\cdot\vec e_{12})(\vec e_2\cdot\vec e_{12})\right)\\[0.3cm]
S_1(\Phi_{12})&=j_0(\Phi_{12})-\frac{1}{2}j_2(\Phi_{12})\\
S_2(\Phi_{12})&=\frac{3}{2}j_2(\Phi_{12})
\end{split}
\label{eq:corrSS}
\eeq
Since shear waves do not produce gravity perturbations, they act as a noise contribution correlated between seismometers. A mixing ratio $\mathpzc{p}$ needs to be introduced that parameterizes the ratio of energy in the P-wave field over the total energy in P- and S-waves. The correlation between seismometers depends on $\mathpzc{p}$:
\beq
\begin{split}
\langle (\vec e_1\cdot\vec \xi(\vec r_1,\omega)), &(\vec e_2\cdot\vec \xi(\vec r_2,\omega))\rangle= \\
&S(\xi_n;\omega)\bigg(\frac{\mathpzc{p}}{S(\xi_n^{\rm P};\omega)}\langle (\vec e_1\cdot\vec \xi^{\,\rm P}(\vec r_1,\omega)), (\vec e_2\cdot\vec \xi^{\,\rm P}(\vec r_2,\omega))\rangle\\
&\hspace*{1.5cm}+\frac{1-\mathpzc{p}}{S(\xi_n^{\rm S};\omega)}\langle (\vec e_1\cdot\vec \xi^{\,\rm S}(\vec r_1,\omega)), (\vec e_2\cdot\vec \xi^{\,\rm S}(\vec r_2,\omega))\rangle\bigg),
\end{split}
\eeq
with $S(\xi_n;\omega)=S(\xi_n^{\rm P};\omega)+S(\xi_n^{\rm S};\omega)$. All required quantities are now available to evaluate the Wiener filter. In the case of a single seismometer, the residual spectrum defined in Equation (\ref{eq:residualNN}) is given by
\beq
R_1(\omega) = 1-\frac{\mathpzc{p}}{1+1/\sigma_1^2(\omega)}\left(P_1(\Phi_{12})(\vec e_1\cdot\vec e_2)+P_2(\Phi_{12})(\vec e_1\cdot\vec e_{12})(\vec e_2\cdot\vec e_{12})\right)^2
\eeq
The optimal placement of a single seismometer is independent of the mixing ratio. The minimal residual is achieved for $\Phi_{12}=0$, i.~e.~when the seismometer is placed at the test mass. The residual is solely limited by the mixing ratio and the signal-to-noise ratio. The case was different for Rayleigh waves, see Equation (\ref{eq:residualNNRay1}), where a limitation was also enforced by the correlation pattern of the seismic field. This is a great advantage of underground detectors. In fact, if the mixing ratio is $\mathpzc{p}=1$ (only P-waves), then it can be shown that the optimal placement of all seismometers would be at the test mass. With a single seismometer, a residual of $\approx 1/\sigma^2$ would be achieved over all frequencies (assuming that $\sigma$ is constant). However, the case is different for mixing ratios smaller than 1.
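The statements above follow directly from evaluating the residual formula. A minimal sketch (direction choices and parameter values are ours):
\begin{verbatim}
# Sketch: single-seismometer residual for bulk Newtonian noise (equation
# above). Both the seismometer axis e1 and the gravity projection e2 are
# chosen along x; e12 points from the seismometer to the test mass.
import numpy as np
from scipy.special import spherical_jn

def residual_single(phi12, e1, e2, e12, p=1/3, sigma=100.0):
    p1 = spherical_jn(0, phi12) + spherical_jn(2, phi12)
    p2 = -3 * spherical_jn(2, phi12)
    corr = p1 * (e1 @ e2) + p2 * (e1 @ e12) * (e2 @ e12)
    return 1 - p / (1 + 1 / sigma**2) * corr**2

ex = np.array([1.0, 0.0, 0.0]); ey = np.array([0.0, 1.0, 0.0])
print(residual_single(0.0, ex, ex, ex))   # ~2/3: seismometer at the test mass
print(residual_single(1.0, ex, ex, ex))   # displaced along the arm
print(residual_single(1.0, ex, ex, ey))   # displaced sideways
\end{verbatim}
For $\Phi_{12}=0$ and $\sigma\gg1$ the residual reduces to $1-\mathpzc{p}$, which is the value used in the next paragraph.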
Assuming a conservative mixing ratio of $\mathpzc{p}=1/3$ (P-waves are one out of three possible body-wave polarizations), the single-seismometer residual is about $2/3$ provided that $\sigma\gg 1$. As in Section \ref{sec:arrayNNRay}, we consider the step-wise optimized array configuration, which is illustrated in Figure \ref{fig:residualMixPS}.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.3\textwidth]{Chp7-ResidualMixPS1.png}
\includegraphics[width=0.3\textwidth]{Chp7-ResidualMixPS2.png}
\includegraphics[width=0.3\textwidth]{Chp7-ResidualMixPS3.png}}
\caption[Body-wave step-wise array optimization for noise cancellation]{Step-wise array optimization for noise cancellation of bulk Newtonian noise. The red maxima mark the optimal location of the next seismometer to be placed. Left: placement of first seismometer with residual 0.67. Middle: placement of second seismometer with residual 0.54. Right: placement of third seismometer with residual 0.44.}
\label{fig:residualMixPS}
\end{figure}}
The array is designed for cancellation of gravity perturbations along the $x$-axis. The plots only show one plane of possible seismometer placements, and all seismometers measure along the relevant direction of gravity acceleration. Ideally, optimization should be done in three dimensions, but for the first three seismometers, the 2D representation is sufficient. In these calculations, the P-wave speed is assumed to be a factor 1.8 higher than the S-wave speed. The mixing ratio is $1/3$, and the signal-to-noise ratio is 100. The optimal location of the second seismometer lies along the orthogonal direction at $x_2=z_2=0$ and $y_2=\pm 0.33\lambda^{\rm P}$. We choose the positive $y$-coordinate. In this case, the third seismometer needs to be placed at $x_3=z_3=0$ and $y_3=- 0.33\lambda^{\rm P}$. With three seismometers, a residual of $0.44$ can be achieved. The left plot in Figure \ref{fig:residualXiStrain} shows the subtraction residuals of bulk Newtonian noise using a 3D spiral array with all seismometers measuring along the relevant direction of gravity acceleration. The mixing ratio is $1/3$. The ultimate limit enforced by seismometer self noise, $1/(\sigma\sqrt{N})$, is not reached. Nonetheless, residuals are strongly reduced over a wide range of frequencies. Note that residuals do not approach 1 at the highest and lowest frequencies, since a single seismometer at the test mass already reduces residuals to $0.67$ at all frequencies assuming constant $\sigma=100$. Another idea is to use seismic strainmeters instead of seismometers\index{strainmeter!seismic}. Seismic strainmeters are instruments that measure the diagonal components of the seismic strain tensor \cite{Agn1986}. Off-diagonal components are measured by seismic tiltmeters\index{tiltmeter}. Strainmeters are also to be distinguished from dilatometers\index{dilatometer}, which are volumetric strainmeters measuring the trace of the seismic strain tensor. The advantage would be that strainmeters are ideally insensitive to shear waves. This means that the optimization of the array is independent of the mixing ratio $\mathpzc{p}$. The seismic strain field $\mathbf{h}(\vec r\,,t)$ produced by a compressional wave can be written as
\beq
\mathbf{h}(\vec r\,,t)=-\irm k^{\rm P}(\vec e_k\otimes\vec e_k)\xi^{\rm P}\e^{\irm(\omega t-\vec k^{\rm P}\cdot\vec r)},
\eeq
which is a $3\times 3$ strain tensor.
A seismic strainmeter measures differential displacement along a direction that coincides with the orientation of the strainmeter, which rules out any type of rotational measurement. In this case, the correlation between two seismic strainmeters measuring strain along directions $\vec e_1,\,\vec e_2$ is given by
\beq
\begin{split}
\langle (\vec e_1\cdot\mathbf{h}(\vec r_1,\omega)\cdot\vec e_1),& (\vec e_2\cdot\mathbf{h}(\vec r_2,\omega)\cdot\vec e_2)\rangle\\
&=\frac{5S(h_n^{\rm P};\omega)}{4\pi}\int\drm\Omega_k(\vec e_1\cdot\vec e_k)^2(\vec e_2\cdot\vec e_k)^2\e^{\irm k^{\rm P}|\vec r_2-\vec r_1|\vec e_{12}\cdot \vec e_k}
\end{split}
\eeq
The factor 5 accounts for the isotropic distribution of strain-wave energy among the five strain degrees of freedom: $S(h_n^{\rm P};\omega)=S(h^{\rm P};\omega)/5$, with $h^{\rm P}\equiv k^{\rm P}\xi^{\rm P}$. There are five degrees of freedom since the strain tensor is symmetric and its trace is a constant (note that any symmetric tensor with constant trace can be diagonalized, in which case the resulting tensor only has two independent components, but here we also need to include the three independent rotations). This integral can be solved fully analogously to the tensor calculation given in \cite{Fla1993,AlRo1999}, or more specifically, using the generalized result in \cite{CoHa2014b}. The required steps are to define a rank-4 polarization tensor $\vec e_k\otimes\vec e_k\otimes\vec e_k\otimes\vec e_k$ so that the projection along directions $\vec e_1,\,\vec e_2$ can be applied outside the integral, solve the integral, and then project the solution. As for the vector fields, due to symmetry, we can express the rank-4 tensor resulting from the integral as a sum over a relatively small number of basis tensors (in this case 5), and solve for the five expansion coefficients. It turns out (in the case of seismic strain measurements) that only 3 expansion coefficients are different, which means that the final solution can be expressed as a linear combination of three coefficients $T_1(\Phi_{12}),\,T_2(\Phi_{12}),\,T_3(\Phi_{12})$. The result is the following:
\beq
\begin{split}
\langle (\vec e_1\cdot\mathbf{h}(\vec r_1,&\omega)\cdot\vec e_1),(\vec e_2\cdot\mathbf{h}(\vec r_2,\omega)\cdot\vec e_2)\rangle\\
&=S(h_n^{\rm P};\omega)\bigg(T_1(\Phi_{12})(1+2(\vec e_1\cdot\vec e_2)^2) +T_2(\Phi_{12})((\vec e_1\cdot\vec e_{12})^2+(\vec e_2\cdot\vec e_{12})^2 \\
&\hspace*{2cm}+4(\vec e_1\cdot\vec e_2)(\vec e_{12}\cdot\vec e_1)(\vec e_{12}\cdot\vec e_2))+T_3(\Phi_{12})(\vec e_{12}\cdot\vec e_1)^2(\vec e_{12}\cdot\vec e_2)^2\bigg)\\[0.3cm]
T_1(\Phi_{12})&= \frac{1}{21}(7j_0(\Phi_{\rm 12})+10j_2(\Phi_{\rm 12})+3j_4(\Phi_{\rm 12}))\\
T_2(\Phi_{12})&= -\frac{5}{7}(j_2(\Phi_{\rm 12})+j_4(\Phi_{\rm 12}))\\
T_3(\Phi_{12})&= 5j_4(\Phi_{\rm 12})
\end{split}
\label{eq:straincorr}
\eeq
Even though this expression looks rather complicated, it is numerically straightforward to implement it in Wiener-filter calculations.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.99\textwidth]{Chp7-CorrelationT.png}}
\centerline{\includegraphics[width=0.99\textwidth]{Chp7-CorrelationL.png}}
\caption[Two-point correlation of seismic strain measurements]{Two-point correlation between seismic strain measurements. The direction $\vec e_1$ is kept constant. The components of $\vec e_2$ are represented in angular spherical coordinates, with $z$-axis parallel to $\vec e_{12}$ (i.~e.~the ``vertical'' direction in these plots parallel to the symmetry axes in the lower row).
From left to right, the value of $\Phi_{12}$ changes from 0 to $2\pi$ in equidistant steps. Upper row: $\vec e_1\cdot\vec e_{12}=0$. Lower row: $\vec e_1\cdot\vec e_{12}=1$.}
\label{fig:strainpatterns}
\end{figure}}
Spherical plots of the two-point spatial correlation are shown in Figure \ref{fig:strainpatterns}. The vector $\vec e_1$ is kept constant, while the vector $\vec e_2$ is expressed in spherical coordinates $\theta,\,\phi$. For each value of these two angles, the resulting correlation between the two strainmeters corresponds to the radial coordinate of the plotted surfaces. Since the focus lies on the angular pattern of the correlation function, each surface is scaled to the same maximal radius. It can be seen that there is a rich variety of angular correlation patterns, which even includes nearly spherically symmetric patterns (which means that the orientation of the second strainmeter weakly affects correlation). A similar calculation yields the correlation between seismic strainmeter and gravity perturbation:
\beq
\begin{split}
\langle (\vec e_1\cdot\mathbf{h}(\vec r_1,&\omega)\cdot\vec e_1),(\vec e_2\cdot\delta\vec a(\vec r_2,\omega))\rangle =4\pi G\rho_0S(h_n^{\rm P};\omega)\frac{1}{k^{\rm P}}\\
&\cdot\bigg(T_1(\Phi_{12})((\vec e_2\cdot\vec e_{12}) +2(\vec e_1\cdot\vec e_{12})(\vec e_1\cdot\vec e_2)) +T_2(\Phi_{12})(\vec e_{12}\cdot\vec e_1)^2(\vec e_{12}\cdot\vec e_2)\bigg)\\[0.3cm]
T_1(\Phi_{12})&= j_1(\Phi_{\rm 12})+j_3(\Phi_{\rm 12})\\
T_2(\Phi_{12})&= -5j_3(\Phi_{\rm 12})
\end{split}
\label{eq:corrNNstrain}
\eeq
The Wiener-filter cancellation using seismic strainmeters is independent of the mixing ratio. However, in contrast to the seismometer case, a strainmeter located at the test mass has zero correlation with the gravity perturbation. Therefore, a strainmeter located near the test mass can only have an indirect effect on the Wiener filter, such as improving the ability of a sensor array to disentangle shear and compressional waves. Without other seismic sensors, a strainmeter near the test mass is fully useless for the purpose of Newtonian-noise cancellation.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{
\includegraphics[width=0.45\textwidth]{Chp7-Residuals_P_Spiral_2phi_7theta.pdf}
\includegraphics[width=0.45\textwidth]{Chp7-Residuals_P_Spiral_Strain.pdf}}
\caption[Residual spectra using seismic displacement and strain sensors]{Residual spectra using seismic displacement and strain sensors in a 3D spiral array configuration. Left: seismometer array. Right: strainmeter array.}
\label{fig:residualXiStrain}
\end{figure}}
The subtraction residuals from a strainmeter array are shown in the right plot of Figure \ref{fig:residualXiStrain}. The array configuration is the same 3D spiral array used for the seismometer array in the left plot, but with twice the extent so that peak performance occurs at similar frequencies. All strainmeters are oriented parallel to the relevant direction of gravity perturbations. Apparently, there is no advantage in using strainmeter arrays even though the subtraction performance is independent of shear-wave content. It should be emphasized though that the subtraction performance of 3D arrays depends strongly on the array configuration. Therefore, optimized array configurations may perform substantially better, and it is also possible that orienting sensors along different directions and combining strainmeters with seismometers leads to lower subtraction residuals. This needs to be investigated in the future.
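Equations (\ref{eq:straincorr}) and (\ref{eq:corrNNstrain}) are indeed straightforward to implement. The sketch below (normalizations and example directions are our choices) also makes explicit that a strainmeter collocated with the test mass has vanishing correlation with the gravity perturbation:
\begin{verbatim}
# Sketch implementing Eqs. (straincorr) and (corrNNstrain), normalized by
# S(h_n^P; omega) (and by 4*pi*G*rho_0/k^P for the strain-gravity case).
import numpy as np
from scipy.special import spherical_jn

def strain_strain(phi12, e1, e2, e12):
    t1 = (7 * spherical_jn(0, phi12) + 10 * spherical_jn(2, phi12)
          + 3 * spherical_jn(4, phi12)) / 21
    t2 = -5 / 7 * (spherical_jn(2, phi12) + spherical_jn(4, phi12))
    t3 = 5 * spherical_jn(4, phi12)
    return (t1 * (1 + 2 * (e1 @ e2)**2)
            + t2 * ((e1 @ e12)**2 + (e2 @ e12)**2
                    + 4 * (e1 @ e2) * (e12 @ e1) * (e12 @ e2))
            + t3 * (e12 @ e1)**2 * (e12 @ e2)**2)

def strain_gravity(phi12, e1, e2, e12):
    t1 = spherical_jn(1, phi12) + spherical_jn(3, phi12)
    t2 = -5 * spherical_jn(3, phi12)
    return (t1 * ((e2 @ e12) + 2 * (e1 @ e12) * (e1 @ e2))
            + t2 * (e12 @ e1)**2 * (e12 @ e2))

ex = np.array([1.0, 0.0, 0.0])
print(strain_strain(0.0, ex, ex, ex))    # -> 1: normalized auto-correlation
print(strain_gravity(0.0, ex, ex, ex))   # -> 0: strainmeter at the test mass
print(strain_gravity(2.0, ex, ex, ex))   # nonzero at finite separation
\end{verbatim}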
\subsubsection{Cancellation of Newtonian noise from infrasound}
\label{sec:arrayNNatm}
Coherent cancellation of Newtonian noise from infrasound is substantially different from the seismic case. Seismic sensors are substituted by microphones, which have more complicated antenna patterns. Here we will assume that a microphone measures the pressure fluctuations at a point without being able to distinguish directions. This is an important difference to seismic sensing. Furthermore, it is infeasible to deploy a 3D array of microphones in the atmosphere. There may be other methods of sensing pressure fluctuations (e.~g.~some type of light/radar tomography of the atmosphere around the test masses), but it is unclear if they can be used to resolve the fast, relatively small-scale fluctuations produced by infrasound. So for now, we assume that pressure fluctuations can only be measured at the surface. We also want to stress that we have not yet succeeded in calculating the correlation functions for Wiener filtering in the case of an underground test mass. One can probably make progress in this direction starting with the scalar plane-wave expansion in Equation (\ref{eq:pwscalar}) and using the half-space integral in Equation (\ref{eq:intrelY}), but we will leave this as future work. In the following, we consider the test mass and all microphones to be located on the surface. In this case, the two-point spatial correlation is found to be
\beq
\begin{split}
\langle \delta p(\vec \varrho_1,\omega),\delta p(\vec \varrho_2,\omega)\rangle&= \frac{S(\delta p;\omega)}{4\pi}\int\drm\Omega_k\e^{\irm\vec k^{\rm P}\cdot(\vec \varrho_2-\vec \varrho_1)}\\
&= S(\delta p;\omega) j_0(k^{\rm P}|\vec \varrho_2-\vec \varrho_1|)
\end{split}
\label{eq:corrpress}
\eeq
This can be calculated starting with the plane-wave expansion in Equation (\ref{eq:pwscalar}), and using Equations (\ref{eq:spheraddition}) and (\ref{eq:normalY}). Note that it makes no difference for microphones at $z_0=0$ that sound waves are reflected from the surface (apart from a doubling of the amplitude). This also means that the direction average can be carried out over the full solid angle. For $z_0>0$, one has to be more careful, explicitly include the reflection of sound waves, and only average over propagation directions incident ``from the sky'' (assuming also that there are no sources of infrasound on the surface). The correlation between pressure fluctuations and the resulting gravity perturbations at the surface can be calculated using the negative gradient of Equation (\ref{eq:gravinfra}). Since the projection of $\delta \vec a$ onto the $x$-coordinate can technically be obtained by calculating the derivative $\partial_x$, with $x\equiv x_2-x_1$, we find using Equation (\ref{eq:corrpress})
\beq
\begin{split}
\langle \delta a_x(\vec \varrho_1,\omega),\delta p(\vec \varrho_2,\omega)\rangle&= -\frac{S(\delta p;\omega)}{(k^{\rm P})^2}\frac{G\rho_0}{\gamma\,p_0}\partial_x\int\drm\Omega_k\e^{\irm\vec k^{\rm P}_\varrho\cdot(\vec \varrho_2-\vec \varrho_1)}\\
&= 4\pi\frac{S(\delta p;\omega)}{k^{\rm P}}\frac{G\rho_0}{\gamma\,p_0}\frac{x_2-x_1}{|\vec \varrho_2-\vec \varrho_1|}j_1(k^{\rm P}|\vec \varrho_2-\vec \varrho_1|)
\end{split}
\eeq
The correlation vanishes for microphones collocated with the test mass. For this reason, the optimization of a microphone array is similar to the optimization of a surface seismometer array for Rayleigh-wave Newtonian-noise cancellation.
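As a small illustration (a sketch of ours, with all constants dropped), the optimal radius for a single microphone on the $x$-axis maximizes $j_1(k^{\rm P}\varrho)$; comparing with the ordinary Bessel function $J_1$ relevant for seismometers in the Rayleigh-wave case quantifies the difference discussed next:
\begin{verbatim}
# Sketch: optimal distance of a single microphone placed on the x-axis.
# The microphone-gravity correlation derived above is proportional to
# j1(k*rho) there; all normalization constants are dropped.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import spherical_jn, jv

res = minimize_scalar(lambda x: -spherical_jn(1, x),
                      bounds=(0.1, 4.0), method='bounded')
print(res.x / (2 * np.pi))   # optimal radius, ~0.33 sound wavelengths
# ordinary Bessel J1 (surface seismometer, Rayleigh case) for comparison:
res2 = minimize_scalar(lambda x: -jv(1, x), bounds=(0.1, 4.0),
                       method='bounded')
print(res2.x / (2 * np.pi))  # ~0.29 wavelengths
print(spherical_jn(1, res.x), jv(1, res2.x))  # peak correlations
\end{verbatim}
The spherical-Bessel maximum lies slightly farther out and is lower than the ordinary-Bessel maximum, which is the quantitative content of the remark below on weaker correlation.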
Formally, the difference is that spherical Bessel functions determine correlations of the infrasound field instead of ordinary Bessel functions, since the infrasound field is three-dimensional. This results in a slightly weaker correlation of microphones near the test mass with gravity perturbations.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.6\textwidth]{Chp7-Residuals_Infrasound.pdf}}
\caption[Residuals of infrasound Newtonian-noise cancellation]{Residual spectra after coherent subtraction of infrasound Newtonian noise. The signal-to-noise ratio of the microphones is assumed to be 100 over all frequencies. The spirals have two full windings around the test mass.}
\label{fig:residualSound}
\end{figure}}
The residual spectra using spiral surface arrays of microphones can be seen in Figure \ref{fig:residualSound}. The sensors have a signal-to-noise ratio of 100. It is important to realize that the arrays are very small, and therefore located completely or partially inside the buildings hosting the test masses. In this case the assumptions of an isotropic and homogeneous infrasound field may not be fulfilled. Nonetheless, based on detailed studies of infrasound correlation, it should always be possible to achieve similar noise residuals, potentially with a somewhat increased number of microphones. As a final remark, infrasound waves have properties that are very similar to compressional seismic waves, and the result of Section \ref{sec:arrayNNP} was that broadband cancellation of Newtonian noise from compressional waves can be achieved with primitive array designs, provided that the field is not mixed with shear waves. Air does not support the propagation of shear waves, so one might wonder why subtraction of infrasound Newtonian noise does not have these nice properties. The reason lies in the sensors. Microphones provide different information. In a way, their response is more similar to that of seismic strainmeters. According to Equation (\ref{eq:corrNNstrain}), correlations between a strainmeter and gravity perturbations also vanish if the strainmeter is located at the test mass. What this means though is that a different method to monitor infrasound waves may make a big difference. It is a ``game with gradients''. One could either monitor pressure gradients or the displacement of air particles due to pressure fluctuations. Both would restore correlations of sensors at the test mass with gravity perturbations.
\subsubsection{Demonstration: Newtonian noise in gravimeters}
\label{sec:gravimeterNN}
The problem of coherent cancellation of Newtonian noise as described in the previous sections is not entirely new. Gravimeters are sensitive to gravity perturbations caused by redistribution of air mass in the atmosphere \cite{Neu2010}. These changes can be monitored through their effect on atmospheric pressure. For this reason, pressure sensors are deployed together with gravimeters for a coherent cancellation of atmospheric Newtonian noise \cite{BaCr1999}. In light of the results presented in Section \ref{sec:arrayNNatm}, it should be emphasized that the cancellation is significantly less challenging in gravimeters since the pressure field is not a complicated average over many sound waves propagating in all directions. This does not mean though that modelling these perturbations is less challenging.
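In its simplest form, the cancellation scheme described below reduces to a single admittance coefficient between the pressure and gravimeter channels. A minimal sketch with synthetic data (all numbers arbitrary, chosen here for illustration only):
\begin{verbatim}
# Sketch of a "direct proportionality" subtraction: estimate one
# admittance coefficient alpha from the data and subtract alpha times
# the pressure channel. Synthetic data in arbitrary units.
import numpy as np

rng = np.random.default_rng(1)
n = 100000
pressure = rng.normal(size=n)                    # local pressure channel
gravity = -3e-9 * pressure + 1e-10 * rng.normal(size=n)  # + instrument noise

alpha = np.dot(gravity, pressure) / np.dot(pressure, pressure)
residual = gravity - alpha * pressure
print(np.std(gravity) / np.std(residual))        # reduction factor, ~30 here
\end{verbatim}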
Accurate calculations using Green's functions are based on spherical Earth models, and the model has to include the additional effect that a change in the mass of an air column changes the load on the surface, and thereby produces additional correlations with the gravimeter signal \cite{GuEA2004}. Nonetheless, from a practical point of view, the full result is similar to coherent relations such as Equation (\ref{eq:gravinfra}), which means that local sensing of pressure fluctuations should yield good cancellation performance. This is indeed the case as shown in Figure \ref{fig:gravimeterNN}. The median of the original gravity spectra is shown as the red curve. Using a very simple filter, which is based on direct proportionality of local pressure and gravity fluctuations, gravity noise can be reduced by about a factor 5 at 0.1\,mHz.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.8\textwidth]{Chp7-Residual_M1.png}}
\caption[Noise cancellation in gravimeters]{Spectral histogram of subtraction residuals using a local pressure sensor as reference channel. The two solid curves correspond to the medians of the spectral histograms before (red) and after (green) subtraction.}
\label{fig:gravimeterNN}
\end{figure}}
The subtraction residuals are close to the instrumental noise of the gravimeters, which means that the simple scheme based on proportionality of the data is already very effective at these frequencies. Especially at lower frequencies, the filter design needs to be more complicated to achieve good broadband cancellation performance. Typically, a frequency-domain version of Wiener filtering is applied in standard subtraction procedures \cite{Neu2010}. Due to non-Gaussianity and non-stationarity of the data, time-domain FIR Wiener filters as discussed in Section \ref{sec:Wiener} are less effective. We want to stress though that cancellation results are not this good in all gravimeters. Sometimes this can be explained by the data quality of the pressure sensors, but often it is not clear what the reasons are. It may well be that detailed knowledge of the gravimeter sites can provide ideas for explanations.
\subsubsection{Optimizing sensor arrays for noise cancellation}
\label{sec:optimarray}\index{array!optimization}
In the previous sections, we focused on the design and performance evaluation of an optimal noise-cancellation filter for a given set of reference sensors. In this section, we address the problem of calculating the array configuration that minimizes the noise residual given the sensor noise and a fixed number of sensors. The analysis will be restricted to homogeneous fields of density perturbations. The optimization can be based on modelled or measured two-point spatial correlations $C(\delta\rho;\vec r,\omega)$. We start with a general discussion and later present results for the isotropic Rayleigh-wave field. The optimization problem will be formulated as a minimization of the noise residual $R$ defined in Equation (\ref{eq:residualNN}) as a function of sensor locations $\vec r_i$. Accordingly, the optimal sensor locations fulfill the equation
\beq
\nabla_k R=\vec 0,
\label{eq:noisemin}
\eeq
where the derivatives are calculated with respect to the coordinates of each of the $M$ sensors, i.~e.~$k\in 1,\ldots,M$.
In homogeneous fields, the Newtonian-noise spectrum and seismic spectrum are independent of sensor location,
\beq
\nabla_k C_{\rm NN}=\vec 0,\quad \nabla_k C_{\rm SS}^{kk}=\vec 0,
\eeq
which allows us to simplify Equation (\ref{eq:noisemin}) into
\beq
\begin{split}
&\nabla_k\vec C^{\,\rm T}_{\rm SN}\cdot \vec w^{\,\rm T}+\vec w\cdot \nabla_k \vec C_{\rm SN}-\vec w\cdot \nabla_k\mathbf C_{\rm SS}\cdot\vec w^{\,\rm T}=\vec 0, \\
&2\vec w\cdot \nabla_k\vec C_{\rm SN}=\vec w\cdot \nabla_k\mathbf C_{\rm SS}\cdot\vec w^{\,\rm T},
\end{split}
\eeq
where we have introduced the Wiener filter $\vec w=\vec C^{\,\rm T}_{\rm SN}\cdot {\mathbf C}^{-1}_{\rm SS}$. For the following steps, let us use a slightly different notation. We will write the sensor cross-correlation matrix $\mathbf C_{\rm SS}=\mathbf C(\vec s;\vec s)$, and the correlations between sensor and target channels as $\vec C_{\rm SN}=\vec C(\vec s;n)$. Only the component $k$ of the vector $\vec C(\vec s;n)$ and the $k$th row and column of $\mathbf C(\vec s;\vec s)$ depend on the coordinates of the sensor $k$. This means that the derivative $\nabla_k$ produces many zeros in the last equation, which allows us to simplify it into the following form:
\beq
\nabla_kC(s_k;n)-\vec w\cdot \nabla_k\vec C(\vec s;s_k)=0.
\label{eq:optimal}
\eeq
The optimal array fulfills this equation for derivatives with respect to the coordinates of all $M$ sensors. Solutions to this equation need to be calculated numerically. Optimization of arrays using Equation (\ref{eq:optimal}) produces accurate solutions more quickly than traditional optimization methods, which directly attempt to find the global minimum of the residual $R$. Traditional codes (nested sampling, particle swarm optimization) produce solutions that converge to the ones obtained by solving Equation (\ref{eq:optimal}). In the following, we will present optimization results for a homogeneous and isotropic Rayleigh-wave field. The correlation functions are given in Equations (\ref{eq:corrRayiso}) and (\ref{eq:corraxiRayiso}). The filled contour plot in Figure \ref{fig:optRayNN} shows the residual $R$ as a function of sensor coordinates for a total of 1 to 3 sensors, from left to right. In the case of a single sensor, the axes represent its $x$ and $y$ coordinates. For more than one sensor, the axes correspond to the $x$ coordinates of two sensors. All coordinates not shown in these plots assume their optimal values.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{
\includegraphics[width=0.32\textwidth]{Chp7-AnalyticalNN_1S.png}
\includegraphics[width=0.32\textwidth]{Chp7-AnalyticalNN_2S.png}
\includegraphics[width=0.32\textwidth]{Chp7-AnalyticalNN_3S.png}}
\caption[Array optimization for cancellation of Rayleigh Newtonian noise]{Array optimization for cancellation of Rayleigh Newtonian noise. The array is optimized for cancellation at a single frequency using 1 to 3 sensors (left to right) with $\rm SNR=100$. The curves represent Equation (\ref{eq:optimal}) for the coordinates in the axis labels. The filled contour plots show the noise residual $R$. All coordinates not shown assume their optimal values.}
\label{fig:optRayNN}
\end{figure}}
The green and orange curves represent Equation (\ref{eq:optimal}) either for the derivatives $\partial_x,\,\partial_y$ or $\partial_{x_1},\,\partial_{x_2}$. These curves need to intersect at the optimal coordinates. It can be seen that they intersect multiple times.
The numerical search for the optimal array needs to find the intersection that belongs to the minimum value of $R$. For the isotropic case, it is not difficult though to tune the numerical search such that the global minimum is found quickly. The optimal intersection is always the one closest to the test mass at the origin. While it is unclear if this holds for all homogeneous seismic fields, it seems intuitive at least that one should search for intersections close to the test mass in general. In order to find optimal arrays with many sensors, it is recommended to build these solutions gradually from optimal solutions with one fewer sensor. In other words, for the initial placement, one should use the locations of the $M-1$ optimal array, and then add another sensor randomly near the test mass. The search relocates all sensors, but it turns out that sensors of an optimal array with a total of $M-1$ sensors only move slightly to take their optimal positions in an optimal array with $M$ sensors. So choosing initial positions in the numerical search wisely significantly decreases computation time, and greatly reduces the risk of getting trapped in local minima.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{
\includegraphics[width=0.6\textwidth]{Chp7-Residuals_SNR100.pdf}}
\caption[Minimized noise residuals from Rayleigh Newtonian-noise cancellation]{Minimized noise residuals from Rayleigh Newtonian-noise cancellation. The dashed line marks the sensor-noise limit.}
\label{fig:optResiduals}
\end{figure}}
Figure \ref{fig:optResiduals} shows the minimized noise residuals for Newtonian noise from an isotropic Rayleigh-wave field using optimal arrays with 1 to 6 sensors and sensor $\rm SNR=100$. The residuals are compared with the sensor-noise limit $1/({\rm SNR}\sqrt{M})$ (dashed curve). Arrays with $M>3$ yield residuals that are close to a factor $\sqrt{2}$ above the sensor-noise limit. The origin of the factor $\sqrt{2}$ has not been explained yet. It does not appear in all noise residuals; for example, the noise residual of a Wiener filter using a single reference channel perfectly correlated with the target channel, see Equation (\ref{eq:singlechcoh}), is given by $1/{\rm SNR}$. In many situations, it will not be possible to model the correlations $\mathbf C_{\rm SS}$ and $\vec C_{\rm SN}$. In this case, observations of seismic correlations $\mathbf C_{\rm SS}$ can be used to calculate $\vec C_{\rm SN}$, see Equation (\ref{eq:kernelRayaxi}), and also $C_{\rm NN}$, see Equation (\ref{eq:homNNC}). Seismic correlations are observed with seismometer arrays. It is recommended to choose a number of seismometers for this measurement that is significantly higher than the number of seismometers foreseen for the noise cancellation. Otherwise, aliasing effects and resolution limits can severely impact the correlation estimates. Various array-processing algorithms are discussed in \cite{KrVi1996}. Table \ref{tab:residualRf} summarizes the noise residuals from optimized arrays of 1 to 6 sensors with SNR = 100, which may serve as reference values for alternative optimization methods. The $N=7$ array is the first optimal array that requires two seismometers placed on top of each other. Consequently, the broadband performance of the $N=6$ array is similar to that of the $N=7$ array. Residuals of optimal arrays can be compared with the stepwise optimized arrays as discussed in Section \ref{sec:arrayNNRay}, taking into account that $\rm SNR = 10$ was used in Section \ref{sec:arrayNNRay}.
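As a hedged cross-check of these results, one can minimize $R$ directly over sensor coordinates. The sketch below reuses the normalized correlation model introduced in the earlier Rayleigh-wave sketch (an assumption of ours) and approximately reproduces the first two rows of Table \ref{tab:residualRf}.
\begin{verbatim}
# Cross-check of the tabulated optimal arrays: minimize R directly over
# sensor positions (coordinates in units of the wavelength), using the
# normalized model C_SS = J0(k d) + delta_ij/SNR^2 and
# C_SN ~ sqrt(2) cos(phi) J1(k rho).
import numpy as np
from scipy.optimize import minimize
from scipy.special import j0, j1

def residual(flat_pos, snr=100.0):
    pos = flat_pos.reshape(-1, 2)
    k = 2 * np.pi
    rho = np.linalg.norm(pos, axis=1)
    phi = np.arctan2(pos[:, 1], pos[:, 0])
    c_sn = np.sqrt(2) * np.cos(phi) * j1(k * rho)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    c_ss = j0(k * d) + np.eye(len(pos)) / snr**2
    return 1 - c_sn @ np.linalg.solve(c_ss, c_sn)

# good starting points matter (see the remarks above on initial placement)
r1 = minimize(residual, x0=np.array([0.25, 0.05]))            # expect (0.293, 0)
r2 = minimize(residual, x0=np.array([0.1, 0.02, -0.1, -0.02]))  # (+-0.087, 0)
for r in (r1, r2):
    print(r.x.round(3), np.sqrt(max(r.fun, 0.0)))   # sqrt(R): ~0.57, ~0.02
\end{verbatim}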
\begin{table}[htbp]
\caption{Cancellation of Newtonian noise from isotropic Rayleigh-wave fields at wavelength $\lambda$. Shown are the optimal arrays for 1 to 6 sensors with SNR = 100.}
\label{tab:residualRf}
\renewcommand{\arraystretch}{1.5}
\centerline{
\begin{tabularx}{0.7\textwidth}{|X|l|}
\hline
Sensor coordinates $[\lambda]$ & Noise residual $\sqrt{R}$ \\
\hline
(0.293,0) & 0.568\\
(0.087,0), (-0.087,0) & $2.28\times 10^{-2}$\\
(0.152,-0.103), (0.152,0.103), (-0.120,0) & $1.24\times 10^{-2}$\\
(0.194,0.112), (0.194,-0.112), (-0.194,0.112), \newline(-0.194,-0.112) & $7.90\times 10^{-3}$\\
(0.191,0.215), (0.299,0), (0.191,-0.215), \newline(-0.226,0.116), (-0.226,-0.116) & $6.69\times 10^{-3}$\\
(0.206,0.196), (0.295,0), (0.206,-0.196), \newline(-0.206,0.196), (-0.295,0), (-0.206,-0.196) & $6.04\times 10^{-3}$\\
\hline
\end{tabularx}}
\end{table}
The noise residuals of the stepwise optimization were $R=0.38$, 0.09, and 0.07 for the first three seismometers, while the fully optimized residuals are $R=0.38$, 0.014, and 0.0074, i.~e.~much lower for $N\geq2$.
\subsubsection{Newtonian noise cancellation using gravity sensors}
\label{sec:gravsub}
In the previous sections, we have investigated Newtonian-noise cancellation using auxiliary sensors that monitor density fluctuations near the test masses. An alternative that has been discussed in the past is to use gravity sensors instead. One general concern about this scheme is that a device able to subtract gravity noise can also cancel GW signals. This fact indeed limits the possible realizations of such a scheme, but it is shown in the following that at least Newtonian noise in large-scale GW detectors from a Rayleigh-wave field can be cancelled using auxiliary gravity sensors. However, it will become clear as well that it will be extremely challenging to build a gravity sensor with the required sensitivity. In the following discussion, we will focus on cancellation of gravity noise from isotropic Rayleigh-wave fields. Most of the results can be obtained from the two-point spatial correlation of gravity fluctuations, \index{correlation!gravity-gravity}
\beq
\langle \delta a_x(\vec 0,\omega), \delta a_x(\vec \varrho,\omega)\rangle =\big(2\pi G\rho_0 \gamma \e^{-hk_\varrho}\big)^2\frac{1}{2}S(\xi_z;\omega)\cdot\left[J_0(k_\varrho \varrho)-\cos(2\phi)J_2(k_\varrho \varrho)\right],
\label{eq:corrNNacc}
\eeq
evaluated at a specific frequency. Here, $\vec\varrho=\varrho(\cos(\phi),\sin(\phi))$, and $k_\varrho$ is the wavenumber of a Rayleigh wave. This result turns into Equation (\ref{eq:specNNiso}) for $\varrho\rightarrow 0$. The only (conventional) type of gravity sensor that can be used to cancel Newtonian noise in GW detectors is the gravity strainmeter or gravity gradiometer\footnote{Here, we do not consider using seismic data from a gravimeter for Newtonian-noise cancellation.}. As we have discussed in Section \ref{sec:superg}, the sensitivity of gravimeters is fundamentally limited by seismic noise, and any attempt to mitigate seismic noise in gravimeters inevitably transforms their response into that of a gravity gradiometer. So in the following, we will only consider gravity strainmeters/gradiometers as auxiliary sensors. Let us first discuss a few scenarios where noise cancellation cannot be achieved. If two identical large-scale GW detectors are side-by-side, i.~e.~with test masses approximately at the same locations, then Newtonian-noise cancellation by subtracting their data inevitably means that GW signals are also cancelled.
Let us make the arms of one of the two detectors shorter, with both detectors' test masses at the corner station staying collocated. If one detector is shorter than the other by just a few meters, Newtonian-noise correlation between the two detectors is already substantially reduced. The reason is that correlation of gravity fluctuations between the end test masses falls rapidly with distance according to Equation (\ref{eq:corrNNacc}). It can be verified that subtracting data of these two detectors to cancel at least gravity perturbations of the inner test masses does not lead to sensitivity improvements. Instead, it effectively changes the arm length of the combined detector to $\Delta L$, where $\Delta L$ is the difference of arm lengths of the two detectors, and correspondingly increases Newtonian noise. If Newtonian noise is uncorrelated between two test masses of one arm, then decreasing arm length increases Newtonian strain noise. However, as shown in Figure \ref{fig:rayResp}, if the detector becomes shorter than a seismic wavelength and Newtonian noise starts to be correlated between test masses, Newtonian strain noise does not increase further. Compared to the Newtonian noise in a large-scale detector with arm length $L$, Newtonian noise in the short detector is greater by (up to) a factor $k_\varrho L$. In this regime, the small gravity strainmeter is better described as a gravity gradiometer. The common-mode suppression of Newtonian noise in the gradiometer due to correlation between test masses greatly reduces Newtonian-noise correlation between the gradiometer and the inner test masses of the large-scale detector. Consequently, a gravity gradiometer cannot be used for noise cancellation in this specific configuration. It turns out though that there is a class of gravity gradiometers, known as full-tensor gradiometers, that can be used for cancellation of Newtonian noise from Rayleigh waves. The key is to understand that gravity gradients $\partial_z\delta a_x = \partial_x\delta a_z$, where $\delta\vec a$ are the fluctuations of gravity acceleration, and $x$ points along the arm of the large-scale detector, are perfectly correlated with $\delta a_x$. This can be seen from Equation (\ref{eq:RayleighS}), since derivatives of the acceleration $\delta a_x$ with respect to $z$, i.~e.~the vertical direction, do not change the dependence on the direction $\phi$. The coherence (normalized correlation) between $\delta a_x$ and $\partial_z\delta a_x$ is shown in the left panel of Figure \ref{fig:gradLIGOCancel}, making use of $\langle\delta a_x(\vec 0\,),\partial_z\delta a_x(\vec \varrho\,)\rangle_{\rm norm}=\langle\delta a_x(\vec 0\,),\delta a_x(\vec \varrho\,)\rangle_{\rm norm}$.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{
\includegraphics[width=0.48\textwidth]{Chp7-CorrelationNN.pdf}
\includegraphics[width=0.48\textwidth]{Chp7-GradLargeNN.pdf}}
\caption[Newtonian-noise cancellation using gravity gradiometers]{Left: coherence of Newtonian noise between two test masses according to Equation (\ref{eq:corrNNacc}) as a function of distance $\varrho$ in units of seismic wavelength ($\phi=0$). Right: maximal noise reduction that can be achieved with the channel $\partial_z\delta a_x$ of a gravity gradiometer as a function of maximal distance between test masses of the gravity gradiometer and the large-scale detector.}
\label{fig:gradLIGOCancel}
\end{figure}}
The idea is now to place one full-tensor gravity gradiometer at each test mass of the large-scale detector, and to cancel Newtonian noise of each mass.
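The content of Figure \ref{fig:gradLIGOCancel} can be sketched numerically. Here we assume the noiseless single-channel limit of Equation (\ref{eq:singlechcoh}), i.~e.~a maximal suppression of $1/\sqrt{1-c^2}$ at coherence $c$; the frequency and wave speed below match the example discussed next.
\begin{verbatim}
# Sketch for Figure (gradLIGOCancel): coherence of delta a_x between two
# points separated along the arm (phi = 0), from Eq. (corrNNacc), and the
# maximal suppression in the noiseless single-channel limit (assumed:
# s = 1/sqrt(1 - c^2)).
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

f, c_r = 10.0, 250.0                  # frequency [Hz], Rayleigh speed [m/s]
k = 2 * np.pi * f / c_r

def coherence(r):                     # normalized correlation at phi = 0
    return jv(0, k * r) - jv(2, k * r)

def suppression(r):
    return 1 / np.sqrt(1 - coherence(r)**2)

# distance below which a factor-5 noise reduction is possible (~0.9 m):
print(brentq(lambda r: suppression(r) - 5.0, 0.01, 5.0))
\end{verbatim}
This quantifies the right panel of Figure \ref{fig:gradLIGOCancel}.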
In this way, it is also impossible to cancel GW signals since GW signals of the gradiometers cancel each other. The limitations of this scheme are determined by the distance between the test mass of the large-scale detector and the test masses of the gravity gradiometer. The smaller the distance, the better the correlation and the higher the achievable noise reduction. Using Equation (\ref{eq:singlechcoh}), the maximal noise reduction can be calculated as a function of the coherence. In the right plot of Figure \ref{fig:gradLIGOCancel}, the achievable noise suppression is shown as a function of distance between test masses. For example, at 10\,Hz, and assuming a Rayleigh-wave speed of 250\,m/s, the distance needs to be smaller than 1\,m for a factor 5 noise reduction. This also means that the size of the gradiometer must be of order 1\,m. Let us calculate the required sensitivity of the gradiometer. From Equations (\ref{eq:singlechcoh}) and (\ref{eq:corrNNacc}), we find that the maximal noise-suppression factor is given by
\beq
s \sim \frac{1}{k_\varrho r},
\eeq
where $r\ll\lambda$ is the distance between test masses of the large-scale detector and the gradiometer, which one can also interpret as the maximal size of the gradiometer to achieve a suppression $s$. A numerical factor of order unity is omitted. Given a Newtonian strain noise $h_{\rm NN}$ of the large-scale detector with arm length $L$, the gradiometer observes
\beq
h_{\rm NN}^{\rm grad} = h_{\rm NN} k_\varrho L=\xi_{\rm NN} k_\varrho.
\eeq
Here, $\xi_{\rm NN}$ denotes the relative displacement noise in the large-scale detector. Now, the relative displacement noise in the gradiometer is
\beq
\xi_{\rm NN}^{\rm grad}=\frac{1}{k_\varrho s}\xi_{\rm NN} k_\varrho=\frac{\xi_{\rm NN}}{s}.
\eeq
While the gravity gradiometer observes much stronger Newtonian noise in units of strain, its displacement sensitivity needs to match the displacement sensitivity of the large-scale detector, and even exceed it by a factor $s$. One could raise well-justified doubts at this point whether a meter-scale detector can achieve the displacement sensitivity of large-scale GW detectors. Nonetheless, the analysis of this section has shown that Newtonian-noise cancellation using gravity sensors is in principle possible.
\subsection{Site selection}
\label{sec:siteselect}
An elegant way to reduce Newtonian noise is to select a detector site with weak gravity fluctuations. It should be relatively straightforward to avoid proximity to anthropogenic sources (except maybe for the sources that are necessarily part of the detector infrastructure), but it is not immediately obvious how effective this approach is in mitigating seismic or atmospheric Newtonian noise. With the results of Sections \ref{sec:ambient} and \ref{sec:atmos}, and using numerous past observations of infrasound and seismic fields, we will be able to predict the possible gain from site selection. The aim is to provide general guidelines that can help to make a site-selection process more efficient, and help to identify suitable site candidates, which can be characterized in detail with follow-up measurements. These steps have been carried out recently in Europe as part of the design study of the Einstein Telescope \cite{BeEA2012,BBR2015}, and promising sites were indeed identified. Even with respect to the minimization of Newtonian noise alone, site selection is a complicated process. One generally needs to distinguish between site selection for gravity measurements at low and at high frequencies.
The boundary between these two regimes typically lies at a few Hz. The point here is that at sufficiently low frequencies, gravity perturbations produced at or above the surface are negligibly suppressed at underground sites with respect to surface sites. At higher frequencies, a detailed site-specific study is required to quantify the gain from underground construction since it strongly depends on local geology. In general, sources of gravity perturbations have different characteristics at lower and higher frequencies. Finally, to complicate the matter even further, one may also be interested in identifying a site where one can expect to achieve high noise cancellation through Wiener filtering or similar methods.
\subsubsection{Global surface seismicity}\index{seismic noise!surface}
We start with the assessment of ambient seismicity. Today this can be done systematically and easily for many surface locations since publicly available data from a global network of seismometers are continuously recorded and archived on servers. For example, Coughlin and Harms have characterized thousands of sites world-wide in this way, processing years of data from broadband seismometers \cite{CoHa2012b}. Among others, the data are provided by the US-based IRIS Data Management Service (archiving global seismic data), \url{http://www.iris.edu/ds/nodes/dmc/}, and the Japanese seismic broadband network F-Net operated by NIED, \url{http://www.fnet.bosai.go.jp/}. Seismic data cannot be easily obtained from countries that have not signed the Comprehensive Nuclear-Test-Ban Treaty (which are few though). The results of their analysis were presented in the form of spectral histograms for each site, accessible through a Google Earth kmz file. An example is shown in Figure \ref{fig:seismicityUS} for a seismic station in the US.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.8\textwidth]{Chp7-USmap.png}}
\caption[US seismicity on Google Earth]{Information on world-wide ambient seismicity as a function of frequency was made available as Google Earth kmz files by Coughlin and Harms. The files can be downloaded at \url{http://www.ligo.caltech.edu/~jharms/data/GoogleEarth/}.}
\label{fig:seismicityUS}
\end{figure}}
The colors of the markers on the map signify the median of the spectral histograms at a specific frequency. The frequency can be changed with a video slider. Clicking on a marker pops up additional site-specific information. Studying these maps gives an idea of where to find quiet places on Earth, and helps to recognize generic patterns such as the influence of mountain ranges and the proximity to oceans. A more detailed analysis based on these data can be found in \cite{CoHa2012b}. It should be noted that especially in Japan, many seismic stations used in this study are built a few meters underground, which may lead to substantial reduction of observed ambient seismicity above a few Hz with respect to surface sites. Nonetheless, there are regions on all continents with very low surface seismicity above 1\,Hz, approaching a global minimum often referred to as the global low-noise model \cite{BDE2004,CoHa2012b}. This means that one should not expect that a surface or underground site can be found on Earth that is significantly quieter than the identified quietest surface sites.
Of course, underground sites may still be attractive since the risk is lower that seismicity will change in the future, while the seismicity at surface sites can in principle change over the course of many years because of construction or other developments. For the same reason, it may be very challenging to find quiet surface sites in densely populated countries. As a rule of thumb, a site that is at least 50\,km away from heavy traffic and seismically active faults, and at least 100\,km away from the ocean, has a good chance to show low ambient seismicity above a few Hz. To be specific here, ambient seismicity should be understood as the quasi-stationary noise background, which excludes for example the occasional strong earthquake. Larger distances to seismically active zones may be necessary for reasons such as avoiding damage to the instrument. Below a few Hz, ambient seismicity is more uniform over the globe. Oceanic microseisms between 0.1\,Hz and 1\,Hz are stronger within 200\,km of the coast, and decrease weakly in amplitude at larger distances. This implies that it is almost impossible to find sites with a low level of oceanic microseisms in countries such as Italy and Japan. At even lower frequencies, it seems that elevated seismic noise can mostly be explained by proximity to seismically active zones, or extreme proximity to cities or traffic. Here one needs to be careful though with the interpretation of data, since the quality of low-frequency data strongly depends on the quality of the seismic station. A less protected seismometer exposed to wind and other weather phenomena can have significantly increased low-frequency noise. In summary, the possibility to find low-noise surface sites should not be excluded, but underground sites are likely the only seismically quiet locations in most densely populated countries (which includes most countries in Europe).
\subsubsection{Underground seismicity}\index{seismic noise!underground}
Seismologists have been studying underground seismicity at many locations over decades, and found that high-frequency seismic spectra are all significantly quieter than at typical surface sites. This can be explained by the exponential fall-off of Rayleigh-wave amplitudes according to Equation (\ref{eq:Rayfield}), combined with the fact that high-frequency seismicity is typically generated at the surface, and most surface sites are covered by a low-velocity layer of unconsolidated ground. The latter means that wave amplitudes decrease over relatively short distances below the surface. Seismic measurements have been carried out in boreholes \cite{Dou1964,SaHa1964}, and specifically in the context of site characterization for future GW detectors at former or still active underground mines \cite{HaEA2010,BeEA2012,NaEA2014,BBR2015}. There are however hardly any underground array measurements to characterize the seismic field in terms of mode composition. This is mostly due to the fact that these experiments are very costly, and seismic stations have to be maintained under unusual conditions (humidity, temperature, dust, \ldots). Currently, a larger seismic array is being deployed for this purpose as part of the DUGL (Deep Underground Gravity Laboratory) project at the former Homestake mine, now known as the Sanford Underground Research Facility, equipped with broadband seismometers, state-of-the-art data acquisition, and auxiliary sensors such as infrasound microphones.
As a consequence of the high cost, the effort could only be realized as a collaboration between several groups involving seismologists and GW scientists. The picture seems to be very simple. Underground seismicity above a few Hz is generally very weak, approaching the global low-noise model. Variations can however be observed, and have in some cases been identified as anthropogenic noise produced underground \cite{HaEA2010}. Therefore, it is important to evaluate how much noise is produced by the underground infrastructure that is either already in place, or is brought to the site for the underground experiment itself. Pumps and ventilation are required for the maintenance of an underground site, which may lead to excess noise. Measurements were carried out in the context of the design study of the Einstein Telescope in Europe \cite{ET2011}. Some of the collected seismic spectra were presented in \cite{BeEA2012}, which are shown again here in Figure \ref{fig:seismicityET}.
\epubtkImage{}{%
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.7\textwidth]{Chp7-ET_seismic.pdf}}
\caption[Underground seismicity in Europe]{Spectral densities and typical variation of ambient seismic noise at underground sites in Europe. The depths of the seismometers are indicated in the legend. Courtesy of Beker et al.\ \cite{BeEA2012}.}
\label{fig:seismicityET}
\end{figure}}
The underground sites have similar seismic spectra above about 1\,Hz, which are all lower by orders of magnitude compared to the surface spectrum measured inside one of the Virgo buildings. The Virgo spectrum however shows strong excess noise even for a surface site. This can be seen immediately since the spectrum exceeds the global high-noise model, drawn as a dashed curve, between 1\,Hz and 3\,Hz, which means that there is likely no natural cause for the seismic energy in this range. The Virgo infrastructure may have enhanced response to ambient noise at these frequencies, or the seismic sources may be part of the infrastructure. The Netherlands spectrum is closer to spectra from typical surface locations, though with a somewhat decreased noise level above a few Hz since the measurement was taken 10\,m underground. Nevertheless, the reduction of seismic Newtonian noise to be expected by building a GW detector underground relative to typical surface sites is about 2 orders of magnitude, which is substantial. Whether the reduction is sufficient to meet the requirements set by the ET sensitivity goal is not clear. It depends strongly on the noise models. While results presented in \cite{Bek2012} indicate that the reduction is sufficient, results in \cite{Har2013b} show that further reduction of seismic Newtonian noise would still be necessary.
\epubtkImage{}{%
\begin{figure}[htb]
\centerline{\includegraphics[width=0.7\textwidth]{Chp7-ET_NN_z200.png}}
\caption[Newtonian noise budget for the Einstein Telescope]{Newtonian-noise budget together with the ET-C instrumental-noise model \cite{Har2013b}. The detector is located 200\,m underground. Surface (Rayleigh NN) and underground (compressional-wave NN) seismic spectra were measured at different sites. The surface data are from a seismometer at a quiet site in the US: TA-V34A. The underground measurement was carried out at the 4100\,ft level of the former Homestake mine in South Dakota.}
\label{fig:NNET}
\end{figure}}
The plot presented in \cite{Har2013b} is shown in Figure \ref{fig:NNET}.
Seismic Newtonian noise from the surface, Equation (\ref{eq:RayNN}), and also infrasound Newtonian noise, Equation (\ref{eq:gravinfra}), are sufficiently suppressed according to these results. The body-wave Newtonian noise however lies above the targeted noise level (according to the ET-C model). In addition, spectra are shown for gravity perturbations from a car passing right above one test mass of the detector at 180\,km/h using Equation (\ref{eq:movsingle}). Finally, based on the first 5\,s of the simple model in Equation (\ref{eq:earlytime}), a signal spectrum of a magnitude 5 earthquake is also plotted. It may be possible to find underground sites that are seismically quieter than Homestake, but not by a large factor. According to these results, it is likely that some form of noise cancellation is still required, but only by a modest factor, which, according to Section \ref{sec:arrayNNP}, should be easier to achieve underground than at surface sites. \subsubsection{Site selection criteria in the context of coherent noise cancellation} \label{sec:sitecancel} An important aspect of the site selection that has not been considered much in the past is that a site should offer the possibility for efficient coherent cancellation of Newtonian noise. From Section \ref{sec:cohcancel} we know that the efficiency of a cancellation scheme is determined by the two-point spatial correlation of the seismic field. We have seen that efficient cancellation would be possible if the field is well approximated by idealized models. However, if scattering is significant, or many local sources contribute to the seismic field, then correlation can be strongly reduced, and a seismic array consisting of a potentially large number of seismometers needs to be deployed. The strongest scatterer of seismic waves above a few Hz is the surface with rough topography. This problem was investigated analytically in numerous publications, see for example \cite{GiKn1960,Abu1962,Hud1967,Ogi1987}. If the study is not based on a numerical simulation, then some form of approximation needs to be applied to describe topographic scattering. The earliest studies used the Born approximation, which means that scattering of scattered waves is neglected. In practice, it leads to accurate descriptions of seismic fields when the seismic wavelength is significantly longer than the topographic perturbation, and the slope of the topography is small in all directions. With this approximation, a systematic evaluation of sites in the US was carried out \cite{CoHa2012}. A topographic map of the US was divided into 10\,km $\times$ 10\,km squares. The elevation rms was calculated for each square. The rms map is shown in Figure \ref{fig:topoUS}. The hope was that flat squares could be found in low-seismicity regions, which would combine the requirements on scattering and seismicity. High-elevation sites typically show weak seismic noise (above a few Hz), most likely because of lower population density.
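The block-rms statistic itself is easy to reproduce; the following Python sketch computes it on placeholder elevation data (the grid spacing and statistics are assumptions for illustration, not the data of the cited study).
\begin{verbatim}
# Sketch of the per-square elevation rms described above: a digital
# elevation model (random placeholder data on an assumed 1 km grid) is
# divided into 10 km x 10 km squares, and the rms of the mean-removed
# elevation is computed for each square.
import numpy as np

dem = np.random.default_rng(0).normal(0.0, 50.0, (500, 500))  # elevation [m]
n = 10                                    # 10 cells of 1 km = 10 km squares
ny, nx = (s // n for s in dem.shape)
blocks = dem[:ny*n, :nx*n].reshape(ny, n, nx, n).swapaxes(1, 2)
rms = np.sqrt(((blocks - blocks.mean(axis=(2, 3), keepdims=True))**2)
              .mean(axis=(2, 3)))         # rms map, one value per square
print(rms.shape, rms.min(), rms.max())
\end{verbatim}
\epubtkImage{}{% \begin{figure}[htb] \centerline{\includegraphics[width=0.63\textwidth]{Chp7-TopoUS.pdf} \includegraphics[width=0.37\textwidth]{Chp7-TopoScatt.pdf}} \caption[Topographic site selection]{Investigation of topographic scattering for site selection. The map in the left plot shows the rms of topographies evaluated on 10\,km $\times$ 10\,km squares.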
Scattering coefficients of incident Rayleigh waves for a high-rms site in Montana (station F13A) are shown in the contour plot on the right.} \label{fig:topoUS} \end{figure}} Combining the rms map with knowledge of ambient seismicity, it was in fact possible to find many sites fulfilling the two requirements. Figure \ref{fig:topoUS} shows the scattering coefficients for incident Rayleigh waves at a high-rms site in Montana. Excluding the Rayleigh-to-Rayleigh scattering channel (which, as explained in the study, does not increase the complexity of a coherent cancellation), a total integrated scatter of 0.04 was calculated. Given that scattering coefficients for body waves are expected to be even higher, this value is large enough to influence the design of seismic arrays used for noise cancellation. Also, it is important to realize that the seismic field in the vicinity of the surface is poorly represented by the Born approximation (which is better suited to represent the far field produced by topographic scattering), which means that spatial correlation at the site may exhibit more complicated patterns not captured by that study. As a consequence, at a high-rms site a seismic array would likely have to be 3D and relatively dense to observe sufficiently high correlation between seismometers. Heterogeneous ground may further add to the complexity, but we do not yet have the theoretical framework to address this problem quantitatively. For this, it will be important to further develop the scattering formalism introduced in Section \ref{sec:scatterNN}. Underground sites that were and are being studied by GW scientists are all located in high-rms regions. This is true for the sites presented in the ET design study, for the Homestake site that is currently hosting the R\&D efforts in the US, and also for the Kamioka site in Japan, which hosts the KAGRA detector. Nonetheless, a careful investigation of spatial correlation and Wiener filtering at high-rms sites has never been carried out, and therefore our understanding of seismic scattering needs to be improved before we can draw final conclusions. \subsection{Noise reduction by constructing recess structures or moats} \label{sec:shieldseismic} Hughes and Thorne suggested that one way to reduce Newtonian noise at a surface site may be to dig moats at some distance around the test masses \cite{HuTh1998}. The purpose is to reflect incident Rayleigh waves and thereby create a region near the test masses that is seismically quieter. The reflection coefficient depends on the depth of the moat \cite{MaKn1965,FuMa1980,BDV1986}. If the moat depth is half the length of a Rayleigh wave, then the wave amplitude behind the moat is weakened by more than a factor of 5. Substantially better reduction can only be achieved if the moat depth exceeds a full Rayleigh wavelength. If the distance of the moat to the test mass is sufficiently large, then the reduction factor in wave amplitude should translate approximately into the same reduction of Newtonian noise from Rayleigh waves. There are two practical problems with this idea. First, the length of Rayleigh waves at 10\,Hz is about 20\,m (at the LIGO sites), which means that the moat needs to be very deep to be effective. It may also be necessary to fill moats of this depth with a light material, which can slightly degrade the isolation performance.
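The required dimensions follow directly from the Rayleigh-wave speed, as the following sketch illustrates (the listed speeds are assumed values spanning a plausible range).
\begin{verbatim}
# Worked numbers for the moat criterion above: a half-wavelength moat
# reduces the transmitted Rayleigh-wave amplitude by roughly a factor
# of 5. The speeds are assumed values; about 200 m/s reproduces the
# 20 m wavelength at 10 Hz quoted for the LIGO sites.
f = 10.0                           # frequency [Hz]
for c in (200.0, 400.0, 600.0):    # assumed Rayleigh-wave speed [m/s]
    lam = c / f                    # Rayleigh wavelength [m]
    print(f"c = {c:5.0f} m/s: wavelength = {lam:5.1f} m, "
          f"half-wavelength moat depth = {lam/2:5.1f} m")
\end{verbatim}
The second problem is that the scheme requires that Rayleigh waves are predominantly produced outside the protected area.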
This seems unlikely for the existing detector sites, but it may be possible to design the infrastructure of a new surface site such that sources near the test masses can be avoided. For example, fans, pumps, building walls set into vibration by wind, and the chambers being connected to the arm vacuum pipes are potential sources of seismicity in the vicinity of the test masses. The advantage is that the moats do not have to be wide, and therefore the site infrastructure is not strongly affected once the moats are built. Another potential advantage, which also holds for the recess structures discussed below, is that the moat can host seismometers, which may facilitate coherent cancellation schemes since 3D information about the seismic field is obtained. This idea certainly needs to be studied quantitatively since seismic scattering from the moats could undo this advantage. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.8\textwidth]{Chp7-LIGO_NN_sketch.pdf}} \caption[Recess structure to reduce Newtonian noise]{Recess structures around test masses reduce mass, which would otherwise carry seismic waves that act as sources of gravity perturbations.} \label{fig:recessLIGO} \end{figure}} Another approach is to dig recess structures around the test masses \cite{HaHi2014}. Here the primary goal is not to reflect Rayleigh waves, but to remove mass around the test masses that would otherwise be perturbed by seismic fields and produce Newtonian noise. A sketch of how a recess structure might look at a detector site is shown in Figure \ref{fig:recessLIGO}. A central pillar needs to be left to support the test-mass chambers. The recess should have a depth of about 4\,m, provided that the speed of Rayleigh waves is about 250\,m/s at 10\,Hz \cite{HaOR2011}. If the speed is higher by a factor of 2, then the recess dimensions in all three directions need to be increased by a factor of 2 to maintain the same noise reduction. This means that it is infeasible to construct effective recesses at sites with much higher Rayleigh-wave speeds (at Newtonian-noise frequencies). For a 4\,m deep recess and horizontal dimensions as shown in Figure \ref{fig:recessLIGO}, the reduction factor is plotted in the left of Figure \ref{fig:recessNN}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.45\textwidth]{Chp7-RecessResponse.pdf} \includegraphics[width=0.45\textwidth]{Chp7-RecessNN.pdf}} \caption[Noise reduction performance of a recess structure]{The plot to the left shows the noise reduction factor from a recess. The red regime marks frequencies where significant seismic scattering from the recess may occur. The plot to the right shows the corresponding Newtonian-noise spectrum together with sensitivity models for aLIGO and a possible future version of LIGO.} \label{fig:recessNN} \end{figure}} Even though the primary purpose of the recess is not to reflect Rayleigh waves, seismic scattering can be significant. Due to the methods chosen by the authors, scattering could not be simulated, and the validity of this approximation had to be argued for. Above some frequency, the wavelength is sufficiently small so that scattering from a 4\,m deep recess is significant. This regime is marked red in the plot, and the prediction of noise reduction may not be accurate. Above 20\,Hz, it can be seen that the reduction gets weaker. This is because the gravity perturbation starts to be dominated by density perturbations of the central pillar.
It is possible that the recess already acts as a moat at these frequencies, and that the central pillar carries less seismic noise than simulated in that study. A detailed simulation of scattering from the recess structure using dynamical finite-element methods is necessary to estimate the effect (see Section \ref{sec:numsim} for details). The Newtonian-noise spectrum calculated from the reduction curve is shown in the right of Figure \ref{fig:recessNN}. The green curve models the sensitivity of a possible future version of a LIGO detector. Without noise reduction, it would be strongly limited by Newtonian noise. With the recess, Newtonian noise limits the sensitivity only modestly, and the implementation of coherent noise cancellation should provide the missing noise reduction. It is to be expected that the idea of removing mass around test masses only works at the surface. The reason is that seismic speeds are much larger underground (at least a factor of 10 larger than 250\,m/s). The idea would be to place test masses at the centers of huge caverns, but Figure \ref{fig:cavNNr} tells us that the radius of such a cavern would have to be extremely large (of the order of 100\,m for a factor of 2 Newtonian-noise reduction at 10\,Hz). \subsection{Summary and open problems} \label{sec:mitisummary} In this section, we have described Newtonian-noise mitigation schemes including coherent noise cancellation using Wiener filters, and passive mitigation based on recess structures and site selection. While some of the mitigation strategies are well understood (for example, coherent cancellation of Rayleigh-wave Newtonian noise, or site selection with respect to ambient seismicity), others still need to be investigated in more detail. The coherent cancellation of Newtonian noise from seismic body waves, in particular, depends on many factors, and in this section we could only develop the tools to address this problem systematically. The role of S-waves as a coherent noise contribution among seismic sensors serving as reference channels in Wiener filtering has been described in Section \ref{sec:arrayNNP}. Since the cancellation performance presented in Figure \ref{fig:residualXiStrain} is relatively poor and possibly insufficient for future GW detectors that rely on substantial reduction of Newtonian noise, it can be said that developing an effective scheme is one of the top priorities of future investigations in this field. Possible solutions may be to combine seismometers and strainmeters in sensor arrays, and to use multi-axes sensors instead of the single-axis sensors modelled here. Nonetheless, it is remarkable that a simple approach does not lead to satisfactory results here, as it does for the cancellation of Rayleigh-wave and infrasound Newtonian noise in Figures \ref{fig:residualsRay} and \ref{fig:residualSound}. However, we have also been conservative with the body-wave modelling in the sense that we assumed isotropic fields and relatively low P-wave content. Since P-waves experience weaker damping compared to S-waves, it may well be possible that the P-wave content of seismic fields is higher. We have also reviewed our current understanding of site-dependent effects on coherent noise cancellation in Section \ref{sec:sitecancel}, which adds to the complexity of the site-selection process. In this context, sites should be avoided where significant seismic scattering can be expected. This is generally the case in complex topographies typical of mountainous terrain.
It should be emphasized though that an extensive and conclusive study of the impact of scattering on coherent cancellation has not been carried out so far. Concerning passive mitigation strategies, site selection is the preferred option and should be part of any design study of future GW detectors. The potential gain in low-frequency noise can amount to orders of magnitude, which cannot be guaranteed with any other mitigation strategy. This fact is of course well recognized by the community, as demonstrated by the detailed site-selection study for the Einstein Telescope and by the decision to construct the Japanese GW detector KAGRA underground. Alternative passive mitigation schemes such as the construction of recess structures around test masses are likely effective at surface sites only, as explained in Section \ref{sec:shieldseismic}. The impact of these structures strongly depends on the ratio of structure size to seismic wavelength. Newtonian noise at underground sites is dominated by contributions from body waves, which can have lengths of hundreds of meters even at frequencies as high as 10\,Hz. At the surface, smaller-scale structures may turn out to be sufficient since Rayleigh-wave lengths at 10\,Hz can be a factor of 10 smaller than the lengths of body waves underground. Results from finite-element simulations are indeed promising, and more detailed follow-up investigations should be carried out to identify possible problems with this approach. \section{Gravity Perturbations from Objects} \label{sec:objects}\index{Newtonian noise!objects} In the previous sections, Newtonian-noise models were developed for density perturbations described by fields in infinite or half-infinite media. The equations of motion that govern the propagation of disturbances play an important role since they determine the spatial correlation functions of the density field. In addition, gravity perturbations can also be produced by objects of finite size, which is the focus of this section. Typically, the objects can be approximated as sufficiently small, so that excitation of internal modes does not play a role in calculations of gravity perturbations. The formalism that is presented can in principle also be used to calculate gravity perturbations from objects that experience deformations, but this scenario is not considered here. In the case of deformations, it is advisable to make use of a numerical simulation. For example, to calculate gravity perturbations from vibrations of vacuum chambers that surround the test masses in GW detectors, Pepper used a numerical simulation of chamber deformations \cite{Pep2007}. A first analytical study of gravity perturbations from objects was performed by Thorne and Winstein, who investigated disturbances of anthropogenic origin \cite{ThWi1999}. The paper by Creighton contains a section on gravity perturbations from moving tumbleweeds, which were considered potentially relevant to the LIGO Hanford detector \cite{Cre2008}. Interesting results were also presented by Lockerbie \cite{Loc2012}, who investigated corrections to gravity perturbations related to the fact that the test masses are cylindrical and not, as typically approximated, point masses. Section \ref{sec:thumb} presents rules of thumb that make it possible to estimate the relevance of perturbations from an object ``by eye'' before carrying out any calculation.
Sections \ref{sec:objectline} and \ref{sec:oscobj} review well-known results on gravity perturbations from objects in uniform motion, and oscillating objects. A generic analytical method to calculate gravity perturbations from oscillating and rotating objects based on multipole expansions is presented in Sections \ref{sec:vibobj} and \ref{sec:rotobj}. \subsection{Rules of thumb for gravity perturbations} \label{sec:thumb} From our modelling effort so far, we conclude that seismic fields produce the dominant contribution to Newtonian noise above a few Hz. In terms of test-mass acceleration, seismic Newtonian noise is proportional to the displacement $\xi$, and ground density $\rho$: \beq \delta a\sim G\rho\xi \label{eq:scaleg} \eeq This relation is true for any type of seismic field, underground and above surface, with or without scattering. To make this equation exact, it needs to be multiplied by a numerical factor, which, in the cases studied so far, should realistically lie within the range 1 -- 10. This was one of the results of Section \ref{sec:ambient}. Other forms of Newtonian noise would be deemed relevant if they lay within a factor of 10 of seismic Newtonian noise (this number can increase in the future with improving noise-cancellation performance). The question that we want to answer now is under which circumstances an object would produce gravity perturbations comparable to perturbations from seismic fields. Intuitively, one might think that an object only needs to be close enough to the test mass, but this is insufficient unless the object almost touches the test mass, as will be shown in the following. Let us consider the gravity perturbation from a small mass of volume $\delta V$ and density $\rho_0$ at distance $r$ from the test mass, oscillating with amplitude $\xi(t)\ll r$. We can use the dipole form in Equation (\ref{eq:dipoleacc}) to calculate the gravity perturbation at $\vec r_0=\vec 0$: \beq \delta\vec a(t) = G\rho_0\frac{\delta V}{r^3}\left(\vec\xi(t)-3(\vec e_r\cdot\vec\xi(t))\vec e_r\right) \eeq Acceleration produced by a point mass scales similarly to acceleration from seismic fields according to Equation (\ref{eq:scaleg}), but the amplitude is reduced by $\delta V/r^3$. In numbers, a solid object of about 1\,m diameter at a distance of 5\,m, oscillating with an amplitude equal to seismic amplitudes and with a density equal to that of the ground, would produce Newtonian noise about a factor of 100 weaker than seismic Newtonian noise. Infrastructure at GW detectors near test masses includes neighboring chambers, which can have diameters of several meters, but the effective density is low since the mass is concentrated in the chamber walls. If the distance $r$ is decreased to its minimum when the test mass and the perturbing mass almost touch, then the factor $\delta V/r^3$ is of order unity. It is an interesting question whether there exist geometries of disturbing mass and test mass that minimize or maximize the gravitational coupling of small oscillations. An example of a minimization problem that was first studied by Lockerbie \cite{Loc2012} is presented in Section \ref{sec:momentinter}. The maximization of gravitational coupling by varying object and test-mass geometries could be interesting in some experiments. Maybe it is possible to base a general theorem on the multipole formalism for small oscillations introduced in Section \ref{sec:vibobj}.
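The simple scaling behind the rule of thumb above is made explicit by the following sketch (the object volume of about 1\,m$^3$ and the distances are assumed round numbers).
\begin{verbatim}
# Numerical version of the rule of thumb: the gravity perturbation of a
# small oscillating object is suppressed relative to seismic Newtonian
# noise roughly by the factor dV / r^3. The volume dV ~ 1 m^3 stands for
# a solid object of about 1 m extent; the distances are assumed examples.
dV = 1.0                      # object volume [m^3]
for r in (2.0, 5.0, 10.0):    # distance to the test mass [m]
    print(f"r = {r:4.1f} m: suppression dV/r^3 ~ {dV/r**3:.1e}")
\end{verbatim}
One mechanism that could potentially boost gravity perturbations from objects is internal resonances.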
It is conceivable that vibration amplitudes are amplified by factors up to a few hundred on resonance, and therefore it is important to carefully investigate the infrastructure close to the test masses. There is ongoing work on this for the Virgo detector, where handles attached to the ground are located within half a meter of the test masses. While the rule of thumb advocated in this section rules out any significant perturbation from the handles, handle resonances may boost the gravity perturbations to a relevant level. Finally, we want to emphasize that the rule of thumb only applies to perturbative motion of objects. Objects that change location, or rotating objects, do not fall under this category. \subsection{Objects moving with constant speed} \label{sec:objectline} Objects moving at constant speed produce gravity perturbations through changes in distance from a test mass. It is straightforward to write down the gravitational attraction between test mass and object as a function of time. The interesting question is rather what the perturbation is as a function of frequency. While gravity fluctuations from random seismic or infrasound fields are characterized by their spectral densities, gravity changes from moving objects need to be expressed in terms of their Fourier amplitudes, which are calculated in this section. Since the results should also be applicable to low-frequency detectors where the test masses can be relatively close to each other, the final result will be presented as strain amplitudes. We consider the case of an object of mass $m$ that moves at constant speed $v$ along a straight line whose distances of closest approach to the two test masses of an arm are $r_1,\,r_2$. The vectors $\vec r_1,\,\vec r_2$ pointing from the test mass to the points of closest approach are perpendicular to the velocity $\vec v$. The closest approach to the first test mass occurs at time $t_1$, and to the second test mass at $t_2$. As a function of time, the acceleration of test mass 1 caused by the uniformly moving object reads \beq \delta \vec a_1(t)=-\frac{Gm}{\left(r_1^2+v^2(t-t_1)^2\right)^{3/2}}\left(\vec r_1+\vec v(t-t_1)\right) \label{eq:movatime} \eeq The Fourier transform of $\delta \vec a_1(t)$ can be directly calculated with the result \beq \delta \vec a_1(\omega)=-\frac{2Gm\omega}{v^2}\Big(K_1(r_1\omega/v)\vec r_1/r_1+\irm K_0(r_1\omega/v)\,\vec v/v\Big)\e^{\irm\omega t_1} \label{eq:movsingle} \eeq with $K_n(x)$ being the modified Bessel function of the second kind. This equation already captures the most important properties of the perturbation in the frequency domain. The ratio $v/r_1$ marks a threshold frequency. Above this frequency, the argument of the modified Bessel functions is large and we can apply the approximation \beq K_n(x)\approx \sqrt{\frac{\pi}{2x}}\e^{-x}\left(1+\frac{4n^2-1}{8x}+\ldots\right), \label{eq:approxmov} \eeq which is valid for $x\gg|n^2-1/4|$. We see that the Fourier amplitudes are exponentially suppressed above $v/r_1$. The expression in Equation (\ref{eq:movsingle}) has the same form for the second test mass. We can however eliminate $t_2$ in this equation since the distance travelled by the object between $t_1$ and $t_2$ is $L(\vec e_{12}\cdot\vec v)/v$, where $L$ is the distance between the test masses, and $\vec e_{12}$ is the unit vector pointing from test mass 1 to test mass 2, and so $t_2=t_1+L(\vec e_{12}\cdot\vec v)/v^2$.
Another substitution that can be made is \beq \vec r_2=\vec r_1-L\vec e_{12}+L(\vec e_{12}\cdot\vec v)\vec v/v^2 \eeq The strain amplitude is then simply given by \beq h(\omega)=-\vec e_{12}\cdot(\delta \vec a_2(\omega)-\delta \vec a_1(\omega))/(\omega^2L) \eeq Let us consider a simplified scenario. The test masses are assumed to be underground at depth $D$, and a car is driving directly above the test masses with $\vec v$ parallel to $\vec e_{12}$ and perpendicular to $\vec r_1$. Therefore, $\vec r_1=\vec r_2$, and $t_2-t_1=L/v$. The corresponding strain amplitude is \beq h(\omega)=\frac{2Gm}{v^2\omega L}\irm K_0(\omega D/v)\Big(e^{\irm\omega L/v}-1\Big)e^{\irm\omega t_1} \eeq Notice that the strain amplitude is independent of the test-mass separation $L$ at frequencies $\omega\ll v/L$. The plots in Figure \ref{fig:movobj} show the strain amplitudes with varying speeds $v$ and arm lengths $L$. In the former case, the arm length is kept constant at $L=500\,$m; in the latter case, the speed is kept constant at $v=20\,$m/s. The mass of the car is 1000\,kg, and the depth of the test masses is 300\,m. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.5\textwidth]{Chp6-LinObject_vv.pdf} \includegraphics[width=0.5\textwidth]{Chp6-LinObject_LL.pdf}} \caption[Gravity perturbations from uniformly moving point mass]{Gravity perturbations from a uniformly moving point mass. In the left plot, the distance between test masses is kept constant at $L=500\,$m, while in the right plot the speed is kept constant at $20\,$m/s.} \label{fig:movobj} \end{figure}} While this form of noise is irrelevant to large-scale GW detectors sensitive above 10\,Hz, low-frequency detectors could be strongly affected. According to the left plot, a speed limit below 10\,m/s should be enforced on cars if the goal is to achieve good sensitivity around 0.1\,Hz.
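The strain amplitude is straightforward to evaluate numerically, as in the following Python sketch, which reproduces the parameter values quoted above.
\begin{verbatim}
# Sketch evaluating the magnitude of the strain amplitude h(omega) for a
# car passing over an underground detector, using the K_0 expression
# derived above. Parameter values follow the text: m = 1000 kg, D = 300 m,
# L = 500 m, v = 20 m/s.
import numpy as np
from scipy.special import kn   # modified Bessel function, second kind

G = 6.674e-11                  # gravitational constant [m^3/(kg s^2)]
m, D, L, v = 1000.0, 300.0, 500.0, 20.0

f = np.logspace(-2, 0, 200)    # frequency [Hz]
w = 2 * np.pi * f
h = 2*G*m / (v**2 * w * L) * kn(0, w*D/v) * np.abs(np.exp(1j*w*L/v) - 1)
print(f"|h| at 0.1 Hz: {np.interp(0.1, f, h):.2e}")
\end{verbatim}
Another application of these results is to calculate Newtonian noise from uniformly advected atmospheric temperature fields as discussed in Section \ref{sec:quasitemp}. For uniform airflow, the remaining integrals in Equation (\ref{eq:advectNN}) are the Fourier transform of Equation (\ref{eq:movatime}), whose solution was given in this section. \subsection{Oscillating point masses} \label{sec:oscobj} Oscillating masses can be a source of gravity perturbations, where we understand oscillation as a periodic change in the position of the center of mass. As we have seen in Section \ref{sec:thumb}, it is unlikely that these perturbations are dominant contributions to Newtonian noise, but in the case of strongly reduced seismic Newtonian noise (for example, due to coherent noise cancellation), perturbations from oscillating objects may become significant. For an accurate calculation, one also needs to model disturbances resulting from the reaction force on the body that supports the oscillation. In this section, we neglect the reaction force. Oscillation is only one of many possible modes of object motion that can potentially change the gravity field. A formalism that can treat all types of object vibrations and other forms of motion is presented in Section \ref{sec:momentinter}. The goal is to calculate the gravity perturbation as strain noise between two identical point masses $m$ at distance $L$ to each other, separated along the direction of the unit vector $\vec e_{12}$. While the direction of oscillation is assumed to be constant, the amplitude is random and therefore characterized by a spectral density.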
As usual, we will denote the amplitudes of oscillation by $\xi(\omega)$, keeping in mind that these only have symbolic meaning and need to be translated into spectral densities. We only allow for small oscillations, i.~e.~with $\xi$ being much smaller than the distance of the object to the two test masses. The acceleration of the first test mass has the well-known dipole form \beq \delta \vec a_1(\omega)=\frac{Gm}{r_1^3}\,\left(\vec\xi(\omega)-3(\vec\xi(\omega)\cdot\vec e_{r_1})\vec e_{r_1}\right) \eeq where $\vec e_{r_1}$ is the unit vector pointing from the first test mass to the object, and $r_1$ is the distance between them. The acceleration of the second test mass has the same form, and we can substitute $\vec e_{r_2}=(\vec e_{r_1}-\lambda\vec e_{12})/\delta$ and $r_2=r_1\delta$ with $\delta\equiv(1+\lambda^2-2\lambda(\vec e_{r_1}\cdot\vec e_{12}))^{1/2}$ and $\lambda\equiv L/r_1$. Let us consider the case of an object oscillating along the direction $\vec e_{12}$, and $\vec e_{12}$ being perpendicular to $\vec e_{r_1}$. Then we can write for the strain noise \beq h_{\|}(\omega)=\vec e_{12}\cdot(\delta \vec a_2(\omega)-\delta \vec a_1(\omega))/(L\omega^2)=\frac{Gm\xi(\omega)}{r_1^4\omega^2}\frac{1}{\lambda}\left(\frac{1-2\lambda^2}{(1+\lambda^2)^{5/2}}-1\right) \label{eq:oscpara} \eeq Changing the direction of oscillation from $\vec e_{12}$ to $\vec e_{r_1}$, the strain noise reads \beq h_{\perp}(\omega)=\frac{Gm\xi(\omega)}{r_1^4\omega^2}\frac{1}{\lambda}\frac{3\lambda}{(1+\lambda^2)^{5/2}} \label{eq:oscperp} \eeq While $h_{\|}(\omega)$ becomes arbitrarily small with decreasing $\lambda$, $h_{\perp}(\omega)$ approaches a constant value. Towards larger values of $\lambda$, $h_{\perp}$ falls rapidly since there is no force along $\vec e_{12}$ on the first test mass, the distance of the object to the second test mass increases with growing $\lambda$, and also the projection of the gravity perturbation at the second test mass onto $\vec e_{12}$ becomes smaller. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.5\textwidth]{Chp6-OscResp_Para.png} \includegraphics[width=0.5\textwidth]{Chp6-OscResp_Perp.png}} \caption[Strain response to oscillating objects]{Strain response to gravity perturbations from oscillating objects.} \label{fig:oscresp} \end{figure}} Figure \ref{fig:oscresp} shows the gravity strain response to an oscillating object for oscillations parallel to $\vec e_{12}$ (left) and perpendicular to $\vec e_{12}$ (right). The position of the object is parameterized by the angle $\phi={\rm arccos}(\vec e_{12}\cdot\vec e_{r_1})$ (which we call polar angle). Equations (\ref{eq:oscpara}) and (\ref{eq:oscperp}) correspond to the response on the lines $\phi=\pi/2$. The response grows to infinity for $\lambda=1$ and polar angle $\phi=0$ since the object is then collocated with the second test mass. Note that $\lambda>1$ and $\phi=0$ means that the object lies between the two test masses.
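The limiting behavior of Equations (\ref{eq:oscpara}) and (\ref{eq:oscperp}) is easily checked numerically, as in the following sketch.
\begin{verbatim}
# Evaluation of the two strain responses in units of
# G m xi(omega) / (r_1^4 omega^2): h_parallel vanishes for small lambda,
# while h_perp approaches the constant value 3 and falls off rapidly
# towards larger lambda.
for lam in (0.01, 0.1, 1.0, 10.0):
    h_par = (1/lam) * ((1 - 2*lam**2) / (1 + lam**2)**2.5 - 1)
    h_perp = 3 / (1 + lam**2)**2.5
    print(f"lambda = {lam:6.2f}: h_par = {h_par:+.3f}, "
          f"h_perp = {h_perp:.3e}")
\end{verbatim}
\subsection{Interaction between mass distributions} \label{sec:momentinter} In the following, we discuss gravitational interaction between two compact mass distributions. We consider the case where the distance $R_{AB}$ between the two centers of mass is greater than the object diameters at largest extent. The so-called bipolar expansion allows us to express the gravitational force in terms of mass multipole moments\index{bipolar expansion}. The idea is to split the problem into three separate terms.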
One term depends on the vector $\vec R_{\rm AB}$ that points from the center of mass $A$ to the center of mass $B$. Each individual mass is expanded into its multipoles according to Equation (\ref{eq:multiext}) calculated in identically oriented coordinate systems, but with their origins corresponding to the two centers of mass. The situation is depicted in Figure \ref{fig:twocenter}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp6-Donut.png}} \caption[Bipolar multipole expansion]{Bipolar multipole expansion.} \label{fig:twocenter} \end{figure}} Technically, the origins do not have to be the centers of mass, but in many cases it is certainly the preferred choice. The calculation of the bipolar expansion of the interaction energy between two charge distributions is outlined in \cite{SoEA2007}. The result can either be expressed in terms of the Wigner 3-j symbols or Clebsch-Gordan coefficients. We will use Clebsch-Gordan coefficients (see Appendix \ref{sec:clebsch}): \beq \begin{split} U_{AB}(\vec R_{AB}\,)=-G\sum\limits_{l_1=0}^\infty&\sum\limits_{l_2=0}^\infty(-1)^{l_2}{2L\choose 2l_1}^{1/2}\\ &\times\sum\limits_{m_1=-l_1}^{l_1}\sum\limits_{m_2=-l_2}^{l_2}\left(I_L^M(\vec R_{AB}\,)\right)^* X_{l_1}^{m_1,\,A}X_{l_2}^{m_2,\,B}\langle l_1,m_1;l_2,m_2|L,M\rangle, \end{split} \label{eq:multipotential} \eeq where $I_L^M(\cdot)$ are the irregular solid spherical harmonics defined in Equation (\ref{eq:solidharm}), and $L\equiv l_1+l_2,\,M\equiv m_1+m_2$. It is not very difficult to generalize this equation for arbitrary mass distributions (one object inside another hollow object, etc.), but we will leave this for the reader. The method is essentially an exchange of irregular and regular solid spherical harmonics in Equation (\ref{eq:multipotential}) together with Equations (\ref{eq:multiext}) and (\ref{eq:multiint}). Also, in general it may be necessary to divide the multipole integral in Equation (\ref{eq:multiext}) into several integrals over regular and irregular harmonics. A practical method to calculate the Clebsch-Gordan coefficients $\langle l_1,m_1;l_2,m_2|L,M\rangle$ is outlined in Section \ref{sec:clebsch}. As a first example, we apply the formalism to calculate the gravity force between a point mass and a cylindrical mass. This scenario was first considered by Lockerbie \cite{Loc2012} to investigate whether the typical approximation of the test mass as a point mass is valid. The only non-vanishing multipole moment of a point mass $M$ in a coordinate system centered on its position is $X_0^0=M$. Therefore the interaction energy can be written \beq U_{AB}(\vec R_{AB}\,)=-GM\sum\limits_{l=0}^\infty\sum\limits_{m=-l}^{l}\left(I_l^m(\vec R_{AB}\,)\right)^* X_l^{m\,B} \label{eq:pointinter} \eeq Let us consider the specific example of a point mass interacting with the quadrupole moment of a cylindrical mass of uniform density (the dipole moment of the cylinder is zero). The cylinder of mass $M_{\rm c}$ has a radius $R$ and a height $H$. Aligning the $z$-axis of the coordinate system with the symmetry axis of the cylinder, the only non-vanishing moments of the cylinder have $m=0$ due to axial symmetry.
Therefore, the relevant solid spherical harmonic expressed in cylindrical coordinates is given by \beq R_2^0=\frac{1}{2}(2z^2-\rho^2) \eeq According to Equation (\ref{eq:multiext}), the corresponding quadrupole moment with respect to the center of mass is \beq X_2^0=\frac{M_{\rm c}}{12}\left(H^2-3R^2\right) \eeq Since the $z$-axis is defined parallel to the symmetry axis of the cylinder, the spherical angular coordinate $\theta$ in $I_2^0(\vec R_{AB}\,)$ represents the angle between the symmetry axis and the separation vector $\vec R_{AB}$. The interaction energy of the quadrupole term can be written \beq U_{AB}(\vec R_{AB}\,)=-\frac{GMM_{\rm c}}{24R_{AB}^3}\left(H^2-3R^2\right)(3\cos^2(\theta)-1) \label{eq:exquadpoint} \eeq Next we outline briefly how to calculate the gravitational force between two bodies based on the bipolar expansion involving one point mass $M$. The gravitational force is the negative gradient of the interaction energy, which can be calculated using Equation (\ref{eq:gradientscalar}) \beq \begin{split} \vec F(\vec R_{AB}\,)&=GM\sum\limits_{l=0}^\infty\sum\limits_{m=-l}^{l}\left(\nabla I_l^m(\vec R_{AB}\,)\right)^* X_{l}^{m,\,B}\\ &=GM\sum\limits_{l=0}^\infty\sqrt{\dfrac{4\pi}{2l+1}}\dfrac{1}{R_{AB}^{l+2}}\sum\limits_{m=-l}^{l}\left(-(l+1)\vec Y_l^m(\vec R_{AB}\,)+\vec \Psi_l^m(\vec R_{AB}\,)\right)^* X_{l}^{m,\,B}, \end{split} \eeq which involves the vector spherical harmonics defined in Equation (\ref{eq:vectharm}). Interestingly, the quadrupole moment of the cylinder, and the associated interaction energy and force, are zero when $H=\sqrt{3}R$. Since the quadrupole moment can be considered to describe the lowest-order correction to the monopole gravitational force, a cylinder with vanishing quadrupole moment behaves very much like a point mass in interactions with nearby point masses. An interesting application of this result is presented in \cite{Loc2002}. The interaction of the point mass with the monopole and quadrupole moments of the cylinder can also lead to a cancellation of certain components of the force. For example, calculating the sum of the monopole and quadrupole terms of the last equation in the radial direction, we have \beq \begin{split} 0&=-\frac{GMM_{\rm c}}{R_{AB}^2}-\dfrac{3GMM_{\rm c}}{R_{AB}^4}\dfrac{1}{2}(3\cos^2(\theta)-1) \frac{1}{12}\left(H^2-3R^2\right)\\ R_{AB}^2&=\dfrac{\left(H^2-3R^2\right)}{8}(1-3\cos^2(\theta)) \end{split} \eeq Obviously, cancellation is impossible if $H=\sqrt{3}R$. Conversely, the quadrupole moment can also lead to an enhancement of components of the gravitational force relative to the monopole term.
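The expression for the quadrupole moment $X_2^0$ is easily verified numerically, for example by Monte-Carlo integration as in the following sketch.
\begin{verbatim}
# Monte-Carlo check of the cylinder quadrupole moment
# X_2^0 = (M_c/12) (H^2 - 3 R^2): direct integration of
# R_2^0 = (2 z^2 - rho^2)/2 over a uniform cylinder. The moment
# vanishes (within sampling error) for H = sqrt(3) R.
import numpy as np

rng = np.random.default_rng(1)
Mc, R, n = 1.0, 1.0, 200000
for H in (1.0, np.sqrt(3.0), 3.0):
    rho = R * np.sqrt(rng.random(n))   # radius sampled with density ~ rho
    z = H * (rng.random(n) - 0.5)      # uniform along the cylinder axis
    X20_mc = Mc * np.mean((2*z**2 - rho**2) / 2)
    X20_exact = Mc / 12 * (H**2 - 3*R**2)
    print(f"H = {H:5.3f}: MC = {X20_mc:+.4f}, exact = {X20_exact:+.4f}")
\end{verbatim}
\subsection{Oscillating objects} \label{sec:vibobj}\index{object!oscillation} In the previous section, we introduced the formalism of bipolar expansion to calculate gravitational interactions between two bodies. However, what we typically want is something more specific such as the change in gravity produced by translations and rotations of bodies. Translations in the form of small oscillations will be studied in this section, rotations in the following section. We emphasize that the same formalism can also be used to describe changes in the gravity field due to arbitrary vibrations of bodies by treating these as changes in the coefficients of a multipole expansion. In this section, we assume that the orientation of the two bodies does not change, while the separation between them changes.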
This can either be incorporated into the formalism as a change of $\vec R_{\rm AB}$, which, according to Equation (\ref{eq:multipotential}), requires a transformation rule for the irregular solid spherical harmonics under translation. Alternatively, we could also treat $\vec R_{\rm AB}$ as constant, but translate one of the bodies inside its own coordinate system, leading to changes in its multipole moments. In either case, the effect of translation can be accounted for using the transformation rules on the regular or irregular solid spherical harmonics in the form of addition theorems. In the case of regular solid spherical harmonics, the result can be written in terms of Clebsch-Gordan coefficients \beq \begin{split} R_l^m(\vec r_1+\vec r_2\,) &= \sum\limits_{l'=0}^l\sum\limits_{m'=-l'}^{l'}\mathcal{R}_{lm}^{l'm'}R_{l'}^{m'}(\vec r_1\,)R_{l-l'}^{m-m'}(\vec r_2\,)\\ \mathcal{R}_{lm}^{l'm'} &\equiv {2l\choose 2l'}^{1/2}\langle l',m';l-l',m-m'|lm\rangle \end{split} \label{eq:transregharm} \eeq The addition theorem for the irregular solid spherical harmonics reads \beq \begin{split} I_l^m(\vec r_1+\vec r_2\,) &= \sum\limits_{l'=0}^\infty\sum\limits_{m'=-l'}^{l'}\mathcal{I}_{lm}^{l'm'}R_{l'}^{m'}(\vec r_<\,)I_{l+l'}^{m-m'}(\vec r_>\,)\\ \mathcal{I}_{lm}^{l'm'} &\equiv {2l+2l'+1\choose 2l'}^{1/2}\langle l',m';l+l',m-m'|lm\rangle \end{split} \label{eq:transirregharm} \eeq where $\vec r_>$ is the longer of the two vectors $\vec r_1,\,\vec r_2$, and $\vec r_<$ is the shorter one. These addition theorems can be found in different forms \cite{StRu1973,Cao1978}. We chose the Clebsch-Gordan variant since it bears some similarity to the bipolar expansion. In the following, we describe oscillations of a body as a small change in $\vec R_{\rm AB}$, which means that we need to apply the addition theorem of irregular harmonics. In Equation (\ref{eq:transirregharm}), we set $\vec r_1=\vec R_{\rm AB}$ and $\vec r_2=\vec \xi$. Since the oscillation amplitude $\xi$ is assumed to be small, we only keep terms up to linear order in $\xi$: \beq \begin{split} I_L^M(\vec R_{AB}+\vec \xi\,) &= \sum\limits_{l'=0}^\infty\sum\limits_{m'=-l'}^{l'}\mathcal{I}_{LM}^{l'm'}R_{l'}^{m'}(\vec \xi\,)I_{L+l'}^{M-m'}(\vec R_{AB}\,)\\ &\approx \sum\limits_{l'=0}^1\sum\limits_{m'=-l'}^{l'}\mathcal{I}_{LM}^{l'm'}R_{l'}^{m'}(\vec \xi\,)I_{L+l'}^{M-m'}(\vec R_{AB}\,)\\ &= I_{L}^{M}(\vec R_{AB}\,)+{\mathcal I}_{L,M}^{1,-1}I_{L+1}^{M+1}(\vec R_{AB}\,)R_{1}^{-1}(\vec \xi\,)\\ &\qquad+{\mathcal I}_{L,M}^{1,0}I_{L+1}^{M}(\vec R_{AB}\,)R_{1}^{0}(\vec \xi\,)+{\mathcal I}_{L,M}^{1,1}I_{L+1}^{M-1}(\vec R_{AB}\,)R_{1}^{1}(\vec \xi\,) \end{split} \eeq Let us illustrate this result with an example. We apply the linearized addition theorem to the case of an interaction between a quadrupole moment of a cylinder and an oscillating point mass. The cylinder is meant to represent a test mass of a GW detector, while the point mass can represent part of a larger vibrating object in the vicinity of the test mass.
Using the notation of the example in the previous section, the perturbed interaction energy can be written as \beq \begin{split} U_{AB}(\vec R_{AB}+\vec \xi\,) &=-GM\left(I_2^0(\vec R_{AB}+\vec \xi\,)\right)^* X_2^{0\,B}\\ &= -\frac{GMM_{\rm c}}{12}(H^2-3R^2)\\ &\qquad\cdot\left[I_{2}^{0}(\vec R_{AB}\,)+2\sqrt{6}\Re[I_{3}^{1}(\vec R_{AB}\,)R_{1}^{-1}(\vec \xi\,)]-3I_{3}^{0}(\vec R_{AB}\,)R_{1}^{0}(\vec \xi\,)\right] \\ &=-\frac{GMM_{\rm c}}{24R_{AB}^3}(H^2-3R^2)\\ &\qquad\cdot\left[3(\vec e_{\rm AB}\cdot\vec e_z)^2-1-\frac{3\xi}{R_{\rm AB}}((5(\vec e_{\rm AB}\cdot\vec e_z)^2-1)(\vec e_{\rm AB}\cdot\vec e_\xi)-2(\vec e_{\rm AB}\cdot\vec e_z)(\vec e_\xi\cdot\vec e_z))\right] \end{split} \eeq where $\vec e_z$ is the symmetry axis of the cylinder, $\vec e_{\rm AB}\equiv \vec R_{AB}/R_{AB}$, and $\vec e_\xi\equiv\vec \xi/\xi$. In the case of a point mass being displaced along the radial direction, parallel to $\vec R_{AB}$, the perturbed interaction potential simplifies to \beq \begin{split} U_{AB}(\vec R_{AB}+\vec \xi\,)&=-\frac{GMM_{\rm c}}{24R_{AB}^3}(H^2-3R^2)(3(\vec e_{\rm AB}\cdot\vec e_z)^2-1)\left(1-\frac{3\xi}{R_{AB}}\right) \end{split} \eeq This result can also be derived directly from Equation (\ref{eq:exquadpoint}). For displacements perpendicular to the radial direction $\vec R_{AB}$, the interaction potential simplifies to \beq \begin{split} U_{AB}(\vec R_{AB}+\vec \xi\,) &= -\frac{GMM_{\rm c}}{24R_{AB}^3}(H^2-3R^2)\bigg[3(\vec e_{\rm AB}\cdot\vec e_z)^2-1+\frac{6\xi}{R_{AB}}(\vec e_{\rm AB}\cdot\vec e_z)(\vec e_\xi\cdot\vec e_z)\bigg] \end{split} \eeq In this scenario, the displacement $\vec \xi$ of the point mass should be considered a function of time. The result describes the lowest-order correction to the time-varying monopole--monopole interaction between a point mass and a cylinder. We see that the quadrupole contribution is suppressed by a factor $\xi/R_{\rm AB}$ (the vibration amplitude is at most a few millimeters). Therefore it is clear that corrections from higher-order moments only matter if the gravitational interaction is measured very precisely, or if the vibrating point mass is very close to the cylinder. \subsection{Rotating objects} \label{sec:rotobj}\index{object!rotation} Gravity perturbations can be generated by rotating objects such as exhaust fans or motors. We will again use the formalism of the bipolar expansion to calculate the gravitational interaction. In analogy to the previous section, a transformation rule is required for solid spherical harmonics under rotations. For this, we need to work in two coordinate systems. One coordinate system is body-fixed: when the body rotates, this coordinate system rotates with it. For the bipolar expansion, we also need to define a coordinate system of the ``laboratory frame'', and the purpose of the rotation transformation is to describe the relative orientation of a body-fixed coordinate system with respect to the laboratory frame. Rotations are easier to describe than translations since we have chosen to work with spherical multipole expansions in this article. According to Equation (\ref{eq:solidharm}), if we understand the transformation of scalar surface spherical harmonics under rotations, then we automatically have the transformation of solid spherical harmonics.
The transformation of surface spherical harmonics under rotations can be written in terms of the Wigner D-matrices \cite{RoKr2007,StRu1973}: \beq Y_l^m(\theta,\phi)=\sum\limits_{m'=-l}^lY_l^{m'}(\theta',\phi')D_{m',m}^{(l)}(\alpha,\beta,\gamma) \eeq and since the transformation is unitary: \beq Y_l^m(\theta',\phi')=\sum\limits_{m'=-l}^lY_l^{m'}(\theta,\phi)D_{m,m'}^{(l)^*}(\alpha,\beta,\gamma) \eeq Primed coordinates stand for the body-fixed system, while coordinates without prime belong to the laboratory frame. Rotations preserve the degree $l$ of spherical harmonics. The rotation is defined in terms of the Euler angles $\alpha,\,\beta,\,\gamma$ around three axes derived from the body-fixed system. The first rotation is by $\alpha$ around the $z$-axis of the body-fixed system, then by $\beta$ around the $y'$-axis of the once-rotated coordinate system (following the convention in \cite{StRu1973}), and finally by $\gamma$ around the $z''$-axis of the twice-rotated coordinate system. Rotations around the $z$-axes lead to simple complex phase factors multiplying the spherical harmonics. The rotation around the $y'$-axis is more complicated, and the general, explicit expressions for the components $D_{m',m}^{(l)}(\alpha,\beta,\gamma)$ of the rotation matrix are given by \cite{StRu1973}: \beq \begin{split} D_{m',m}^{(l)}(\alpha,\beta,\gamma) &= \e^{-\irm m'\alpha}d_{m',m}^{(l)}(\beta)\e^{-\irm m\gamma}\\ d_{m',m}^{(l)}(\beta) &= \sqrt{\frac{(l+m')!(l-m')!}{(l+m)!(l-m)!}}(-1)^{m'-m}\\ & \qquad\cdot\sum\limits_k(-1)^k{l+m\choose k}{l-m\choose l-m'-k}\\ & \qquad\qquad\cdot(\cos(\beta/2))^{2l-m'+m-2k}(\sin(\beta/2))^{m'-m+2k} \end{split} \eeq where the sum is carried out over all values of $k$ that give non-negative factorials in the two binomial coefficients: $\max(0,m-m')\leq k\leq\min(l-m',l+m)$. In the remainder of this section, we apply the rotation transformation to the simple case of a rotating ring of $N$ point masses. Its multipole moments have been calculated in Section \ref{sec:multipole}. The goal is to calculate the gravity perturbation produced by the rotating ring, assumed to have its symmetry axis pointing towards the test mass that is now modelled as a point mass. In this case, we can take Equation (\ref{eq:pointinter}) as a starting point. The rotation transforms the exterior multipole moments $X_l^{m,\rm B}$. We have seen that the multipole moments of the ring vanish unless $m=0,N,2N,\ldots$, and $l+m$ must be even. Only the first (or last) Euler rotation by an angle $\alpha=\omega t$ is required, which yields \beq U_{\rm AB}(\vec R_{\rm AB},t)=-GM\sum\limits_{l=0}^\infty\sum\limits_{m=-l}^{l}\left(I_l^m(\vec R_{\rm AB}\,)\right)^* X_l^{m,\rm B}\e^{-\irm m\omega t} \eeq This result is obtained immediately using $d_{m',m}^{(l)}(0)=\delta_{m',m}$. It could also have been guessed directly by noticing that the azimuthal angle $\phi_k$ that determines the position of a point mass on the ring appears in the spherical harmonics as a phase factor $\exp(\irm m\phi_k)$. When the ring rotates, all azimuthal angles change according to $\phi_k(t)=\phi_k(0)-\omega t$. Since $X_l^m$ vanishes unless $m=0,N,2N,\ldots$, only specific multiples of the rotation frequency $\omega$ can be found in the time-varying gravity field.
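This up-conversion is easy to demonstrate numerically, as in the following Python sketch; note that the observation point is placed slightly off the symmetry axis (exactly on the axis, only the static $m=0$ term survives), and all parameter values are assumed examples.
\begin{verbatim}
# Demonstration of frequency up-conversion by a rotating ring of N equal
# point masses: the gravity potential at a fixed observation point
# contains only multiples of N times the rotation frequency. Parameters
# are assumed examples; the point sits slightly off the symmetry axis.
import numpy as np

G, M, N = 6.674e-11, 1.0, 4        # N point masses of mass M [kg]
a, f_rot = 0.3, 5.0                # ring radius [m], rotation rate [Hz]
r0 = np.array([1.0, 0.0, 0.2])     # observation point [m]
t = np.linspace(0.0, 1.0, 4096, endpoint=False)

phi = 2*np.pi*(np.arange(N)/N + f_rot*t[:, None])   # mass angles vs time
pos = np.stack([a*np.cos(phi), a*np.sin(phi), np.zeros_like(phi)],
               axis=-1)
U = -G*M*np.sum(1.0/np.linalg.norm(r0 - pos, axis=-1), axis=-1)

spec = np.abs(np.fft.rfft(U - U.mean()))
freqs = np.fft.rfftfreq(t.size, t[1] - t[0])
peaks = np.sort(freqs[np.argsort(spec)[-3:]])
print("dominant frequencies [Hz]:", peaks)   # -> 20, 40, 60 = k*N*f_rot
\end{verbatim}
The number $N$ of point masses on the ring quantifies the level of symmetry of the ring, and acts as an up-conversion factor of the rotation frequency.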
Therefore, if gravity perturbations are to be estimated from rotating bodies such as a rotor, then the level of symmetry is important. However, the higher the up-conversion, the stronger is the decrease of the perturbation with distance from the ring. It would of course be interesting to study the effect of asymmetries of the ring on gravity perturbations. For example, the point masses can be slightly different, and their spacing along the ring may not be uniform. It is not a major effort to generalize the study of the symmetric ring to calculate the effect of such deviations. \subsection{Summary and open problems} \label{sec:objectsummary} In this section, we reviewed the theoretical framework to calculate gravity perturbations produced by finite-size objects. Models have been constructed for uniformly moving, oscillating, and rotating objects. In all examples, the object was assumed to be rigid, but expanding a mass distribution into multipole moments can also facilitate simple estimates of gravity perturbations from excited internal vibration modes. An ``external'' vibration in the sense of an isolated oscillation does not exist, strictly speaking, since there must always be a physical link to another object to compensate for the momentum change, but it is often possible to identify a part of a larger object as the main source of gravity perturbations and to apply the formalism for oscillating masses. Many forms of object Newtonian noise have been estimated \cite{Pep2007,DHA2012}. So far, none of the potential sources turned out to be relevant. In Section \ref{sec:thumb}, we learned why it is unlikely that object Newtonian noise dominates over seismic Newtonian noise. Still, one should not take these rules of thumb as a guarantee. Strong vibration, i.~e.~with amplitudes much larger than ground motion, can in principle lead to significant noise contributions, especially if the vibration is enhanced by internal resonances of the objects. Any form of macroscopic motion including rotations (in contrast to small-amplitude vibrations) should of course be avoided in the vicinity of the test masses. A Newtonian-noise budget based on an extensive study of potential sources at the LIGO sites was published in \cite{DHA2012}. The result is shown in Figure \ref{fig:LIGONN}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.75\textwidth]{Chp6-LIGO_NN.pdf}} \caption[Newtonian-noise budget for the LIGO sites]{Newtonian-noise budget for the LIGO sites as published in \cite{DHA2012}. Gravity perturbations from the wall panels, building, and fan were estimated based on equations from this section.} \label{fig:LIGONN} \end{figure}} The curves are based on seismic, sound, and vibration measurements. The seismic Newtonian-noise curves are modelled using Equation (\ref{eq:RayleighS}), the sound Newtonian noise using Equation (\ref{eq:atmstrainNN}), and estimates of gravity perturbations from wall panels, the buildings, and fans are modelled using equations from this section. Gravity perturbations from the buildings assume a rocking motion of walls and roof. The exhaust fan vibrates strongly due to asymmetries of the rotating parts, and this vibration was taken as the source of gravity perturbations. Finally, panels attached to the structure of the buildings show relatively high amplitudes of a membrane-like vibration. Nonetheless, these sources, even though very massive, do not contribute significantly to the noise budget.
Greater care is required when designing future GW detectors with target frequencies well below 10\,Hz. These will rely on some form of Newtonian-noise mitigation (passive or active), which increases the relative contribution of other forms of gravity perturbations. Also, in some cases, as for the uniform motion discussed in Section \ref{sec:objectline}, there is a link between the shape of the gravity perturbation spectrum and the distance between object and test mass. These classes of gravity perturbations (and we have identified only one of them so far) can be much stronger at lower frequencies. Future work on object Newtonian noise certainly includes a careful study of this problem for low-frequency GW detectors. In general, it would be beneficial to set up a catalogue of potential sources and corresponding gravity models to facilitate the process of estimating object Newtonian noise in new detector designs. Another interesting application of the presented formalism could be in the context of experiments designed to be sensitive to gravity perturbations produced by an object (such as the quantum-gravity experiment proposed by Feynman \cite{Zeh2011}). The formalism presented in this section may help to optimize the geometrical design of such an experiment. \section{Gravity Perturbations from Seismic Point Sources} \label{sec:pointsources} In Section \ref{sec:ambient}, we have reviewed our understanding of how seismic fields produce gravity perturbations. However, we did not pay attention to the sources of the seismic field. In this section, gravity perturbations will be calculated based on models of seismic sources, instead of the seismic field itself. This can serve two purposes. First, a seismic source can be easier to characterize than the seismic field itself, since characterization of a seismic field requires many seismometers, in general deployed in a 3D array configuration. Second, it is conceivable to obtain information about a seismic source based on observations of gravity perturbations. For example, it was suggested to promptly detect and characterize fault ruptures leading to earthquakes using low-frequency gravity strain meters \cite{HaEA2015}. In this case, the analysis of gravity data from high-precision gravity strain meters can be understood as a new development in the field of terrestrial gravimetry. Until today, only observations of very slow changes in the terrestrial gravity field (below about 1\,mHz) were possible, using networks of ground-based gravimeters \cite{CrHi2010} or the satellite mission GRACE \cite{WaEA2004}, with applications in hydrology, seismology, and climate research. Also co-seismic gravity changes, i.~e.~changes following large earthquakes, were observed with gravimeters\index{gravimeter} \cite{ImEA2004} as well as with GRACE (see for example \cite{WSJ2012,CaSa2013}). These observations were predicted based on a theory of static gravity perturbations from fault rupture first developed by Okubo in \cite{Oku1991,Oku1992,Oku1993}. Only lasting gravity changes can be detected with these instruments, and it is to be expected that high-precision gravity strain meters will contribute significantly to this field by opening a window to gravity changes at higher frequencies. New models need to be constructed that describe time-varying gravity changes from seismic fields produced by various seismic sources. The first steps are outlined in the following.
We also want to point out that the same formalism can be applied to point sources of sound waves as shown in Section \ref{sec:shockNN}. We emphasize that all known time-domain models of gravity perturbations from seismic sources are for infinite media. The inclusion of surface effects, which is not always necessary, is one of the important calculations that still needs to be done. We will give some ideas how to approach this problem in Section \ref{sec:pointsummary}. According to the title of this section, the models presented here are for point sources only. It is however numerically trivial to combine point-source solutions to represent an extended source. Also, certain analytical calculations of gravity perturbations from extended sources should be feasible. \subsection{Gravity perturbations from a point force} \label{sec:forcegrav} Point forces can be a good model of various real sources such as vibrating engines or impacts of small objects on the ground. A point force is modelled as force density according to \beq \vec f(\vec r,t) = F(t)\vec e_f\delta(\vec r\,) \eeq with source function \index{source function}$F(t)=0$ for $t<0$, and $\vec e_f$ being the unit vector pointing along the direction of the force. Such a force generates a complicated seismic field that is composed of a near field, and shear and compressional waves propagating in the intermediate and far field \cite{AkRi2009}, all components with different radiation patterns (explicit expressions for a point shear dislocation are given in Section \ref{sec:disdensity}). However, Equation (\ref{eq:gravP}) can be applied here, which means that we only need to know the potential of compressional waves in infinite media to simply write down the corresponding gravity perturbation. It is not too difficult to calculate the seismic potential, but one can also find the solution in standard textbooks \cite{AkRi2009}. The solution for the perturbed gravity acceleration reads \beq \delta \vec a(\vec r_0,t)=\frac{G}{r_0^3}(\vec e_f-3(\vec e_f\cdot\vec e_{r_0})\vec e_{r_0})\int\limits_0^{r_0/\alpha}\drm\tau\,\tau F(t-\tau)+\frac{G}{r_0\alpha^2}(\vec e_f\cdot\vec e_{r_0})\vec e_{r_0}F(t-r_0/\alpha) \eeq with the source being located at the origin. This perturbation is based on the full seismic field produced by the point force. The solution consists of a component proportional to an integral over the source function, and another component proportional to the retarded source function. At early times, when $t<r_0/\alpha$, i.~e.~when the seismic waves produced by the source have not yet reached the location $\vec r_0$, the second term vanishes while the integral can be rewritten as a double time integral \beq \int\limits_0^t\drm\tau\,\tau F(t-\tau)=\int\limits_0^t\drm\tau\int\limits_0^\tau\drm\tau'\,F(\tau')\equiv \mathcal{I}_2[F](t) \eeq The acceleration simplifies to \beq \delta \vec a(\vec r_0,t)=\frac{G}{r_0^3}(\vec e_f-3(\vec e_f\cdot\vec e_{r_0})\vec e_{r_0})\mathcal{I}_2[F](t),\,\mbox{for}\;t<r_0/\alpha \label{eq:earlyF} \eeq Interestingly, the early-time solution is independent of any geophysical parameters such as ground density and seismic speeds (assuming that the ground is homogeneous). The gravity perturbation from a point force can be used to model the contribution of local sources to Newtonian noise based on a measured source time function $F(t)$.
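Numerically, the early-time solution only requires two cumulative time integrations of the source function, as in the following Python sketch (the source model and all parameter values are assumed examples).
\begin{verbatim}
# Sketch of the early-time gravity perturbation: before wave arrival,
# the source function F(t) enters only through the double time integral
# I_2[F](t), so high-frequency content is strongly suppressed. The
# geometric dipole factor, of order unity, is set to 1 here.
import numpy as np
from scipy.integrate import cumulative_trapezoid

G = 6.674e-11                        # gravitational constant
r0, alpha = 2000.0, 4000.0           # distance [m], P-wave speed [m/s]
t = np.linspace(0.0, r0/alpha, 1000)            # early times only
F = 1e6 * np.sin(2*np.pi*10.0*t)**2             # assumed source force [N]

I1 = cumulative_trapezoid(F, t, initial=0.0)    # first time integral
I2 = cumulative_trapezoid(I1, t, initial=0.0)   # second time integral
da = G / r0**3 * I2                             # early-time acceleration
print(f"|delta a| just before wave arrival: {da[-1]:.2e} m/s^2")
\end{verbatim}
In this section, we use it to present another interesting result.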
It has often been conjectured that a transient source of seismic vibrations would pose a problem for coherent mitigation schemes, since the gravity perturbation starts to be significant before any of the seismometers can sense the first ground motion produced by this source. Therefore, it would be impossible to coherently remove a significant contribution to Newtonian noise using seismic data. Some evidence speaking against this conjecture was already found in numerical simulations of approaching wavefronts from earthquakes \cite{HaEA2009b}, but there was no analytical explanation of the results. We can make up for this now. Let us consider the following Gedankenexperiment. Let us assume that all seismic noise is produced by a single source, and that this source is switched on at time $t=0$. Before this time, the entire seismic field is zero. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp4-PointForce_a.png}} \caption[Gravity perturbation point force]{Gravity perturbation from a point force assumed to be the only source of seismic noise, and switching on at $t=0$.} \label{fig:forcepoint} \end{figure}} Now the source starts to radiate seismic waves. The waves do not reach the test mass before $t=r_0/\alpha$, where $r_0$ is the distance between the source and the test mass. The situation is illustrated in Figure \ref{fig:forcepoint}. The dashed line marks the arrival of seismic waves. From that time on, we have the usual Newtonian noise from ambient seismic fields. More interesting, however, is what happens before the arrival. On this scale, the gravity perturbation is hardly visible. Therefore, an inset plot was added to show the gravity perturbation before wave arrival. Not only is the rms of the gravity perturbation much lower, but, as expected, it also evolves much more slowly than the Newtonian noise from ambient seismic fields. Equation (\ref{eq:earlyF}) says that the source function is filtered by a double integrator to obtain the gravity acceleration. Another double integrator needs to be applied to convert gravity acceleration into test-mass displacement. Therefore, whatever the source function $F(t)$ and the corresponding source spectrum $F(\omega)$\index{source spectrum} may be, gravity perturbations will be strongly suppressed at high frequencies. Due to the transient character of this effect, it is difficult to characterize the problem in terms of Newtonian-noise spectra, but it should be clear that a seismic source would have to be very peculiar (i.~e.~radiating very strongly at high frequencies and weakly at low frequencies) to cause a problem for coherent Newtonian-noise cancellation without causing other problems to the detector, such as a loss of cavity lock due to low-frequency ground disturbances. \subsection{Density perturbation from a point shear dislocation in infinite homogeneous media} \label{sec:disdensity} In this subsection, we briefly review the known solution of the seismic field produced by a shear dislocation\index{shear dislocation}. The shear dislocation is modelled as a double couple\index{double couple}, which consists of two perpendicular pairs of forces pointing against each other with infinitesimal offset. The coordinate system used in the following is shown in Figure \ref{fig:pointshear}. Its origin coincides with the location of the shear dislocation, with the $z$-axis being parallel to the slip direction, and the $x$-axis perpendicular to the fault plane.
Spherical coordinates $r,\theta,\phi$ will be used in the following, which are related to the Cartesian coordinates via $x=r\sin(\theta)\cos(\phi)$, $y=r\sin(\theta)\sin(\phi)$, $z=r\cos(\theta)$, with $0<\theta<\pi$, and $0<\phi<2\pi$. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.5\textwidth]{Chp4-DoubleCouple.png}} \caption[Point shear dislocation]{Definition of the coordinate system used to describe a point shear dislocation.} \label{fig:pointshear} \end{figure}} The double couple drives a displacement field that obeys conservation of linear and angular momenta. Its explicit form is given in Aki \& Richards \cite{AkRi2009}. It consists of a near-field component: \beq \begin{split} \vec\xi_{\rm N}(\vec r\,,t) &= \frac{1}{4\pi\rho_0}\vec A_{\rm N}\frac{1}{r^4}\int\limits_{r/\alpha}^{r/\beta}\drm\tau\,\tau M_0(t-\tau),\\ \vec A_{\rm N} &\equiv 9\sin(2\theta)\cos(\phi)\vec e_r-6(\cos(2\theta)\cos(\phi)\vec e_\theta-\cos(\theta)\sin(\phi)\vec e_\phi), \end{split} \eeq an intermediate-field component \beq \begin{split} \vec\xi_{\rm I}(\vec r\,,t) &= \frac{1}{4\pi\rho_0\alpha^2}\vec A_{\rm IP}\frac{1}{r^2} M_0(t-r/\alpha)+\frac{1}{4\pi\rho_0\beta^2}\vec A_{\rm IS}\frac{1}{r^2} M_0(t-r/\beta),\\ \vec A_{\rm IP} &\equiv 4\sin(2\theta)\cos(\phi)\vec e_r-2(\cos(2\theta)\cos(\phi)\vec e_\theta-\cos(\theta)\sin(\phi)\vec e_\phi),\\ \vec A_{\rm IS} &\equiv -3\sin(2\theta)\cos(\phi)\vec e_r+3(\cos(2\theta)\cos(\phi)\vec e_\theta-\cos(\theta)\sin(\phi)\vec e_\phi), \end{split} \eeq and a far-field component \beq \begin{split} \vec\xi_{\rm F}(\vec r\,,t) &= \frac{1}{4\pi\rho_0\alpha^3}\vec A_{\rm FP}\frac{1}{r} \dot M_0(t-r/\alpha)+\frac{1}{4\pi\rho_0\beta^3}\vec A_{\rm FS}\frac{1}{r}\dot M_0(t-r/\beta),\\ \vec A_{\rm FP} &\equiv \sin(2\theta)\cos(\phi)\vec e_r,\\ \vec A_{\rm FS} &\equiv \cos(2\theta)\cos(\phi)\vec e_\theta-\cos(\theta)\sin(\phi)\vec e_\phi, \end{split} \eeq which have to be added to give the total displacement field $\vec\xi(\vec r\,,t)$. The source function $M_0(t)$ of the double couple is called the moment function. As for the point force, we assume again that the source function is zero for $t<0$. If a double couple is used to represent fault rupture, then the source function increases continuously as long as the fault rupture lasts. In contrast to the intermediate- and far-field terms, the near-field term does not describe a propagating seismic wave. The far field is the only component that generally vanishes for $t\rightarrow\infty$. According to Equation (\ref{eq:densinh}), density perturbations in infinite, homogeneous media can only be associated with compressional waves, since the divergence of the shear field is zero. This is confirmed by inserting the total displacement field into Equation (\ref{eq:densinh}). One obtains the density change \beq \begin{split} \delta\rho(\vec r\,,t) &= -\rho_0\nabla\cdot\vec\xi(\vec r\,,t)\\ &= \frac{3\cos(\phi)\sin(2\theta)}{4\pi r^3\alpha^2}\left(M_0(t-r/\alpha)+\frac{r}{\alpha}\dot M_0(t-r/\alpha)+\frac{r^2}{3\alpha^2}\ddot M_0(t-r/\alpha)\right)\\ &\equiv \cos(\phi)\sin(2\theta)R(r,t) \end{split} \label{eq:densdis} \eeq The density perturbation assumes a much simpler form than the seismic field. The perturbation propagates with the speed of compressional waves, and has a quadrupole radiation pattern. A lasting density change builds up in proportion to the final moment $M_0(t\rightarrow\infty)$ of the shear dislocation.
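To make Equation (\ref{eq:densdis}) concrete, the following minimal sketch (Python) evaluates the density perturbation for a source function of the same form as used in Figure \ref{fig:densdouble}, $M_0(t)=M_0\tanh(t/\tau)$; the P-wave speed, final moment, and rise time are assumed example values, not the parameters used to produce the figure:
\begin{verbatim}
import numpy as np

# Density perturbation of a point double couple, Eq. (densdis):
# drho = cos(phi) sin(2 theta) * 3/(4 pi r^3 alpha^2)
#        * ( M0 + (r/alpha) M0' + r^2/(3 alpha^2) M0'' ),
# with all moment terms evaluated at retarded time t - r/alpha.
alpha = 6000.0              # P-wave speed [m/s], assumed
M0max, tau = 1.0e17, 1.0    # final moment [N m] and rise time [s], assumed

def M0(t):
    return M0max * np.tanh(np.maximum(t, 0.0) / tau)   # zero for t < 0

def M0_dot(t):
    x = np.clip(t / tau, 0.0, 50.0)                    # clip avoids cosh overflow
    return np.where(t > 0, M0max / tau / np.cosh(x)**2, 0.0)

def M0_ddot(t):
    x = np.clip(t / tau, 0.0, 50.0)
    d = -2 * M0max / tau**2 * np.tanh(x) / np.cosh(x)**2
    return np.where(t > 0, d, 0.0)

def delta_rho(r, theta, phi, t):
    tr = t - r / alpha                                 # retarded time
    radial = 3.0 / (4 * np.pi * r**3 * alpha**2) * (
        M0(tr) + r / alpha * M0_dot(tr)
        + r**2 / (3 * alpha**2) * M0_ddot(tr))
    return np.cos(phi) * np.sin(2 * theta) * radial

# Evaluate along the radiation maximum theta = pi/4, phi = 0
r = np.linspace(1.0e3, 6.0e4, 400)
rho_perturbation = delta_rho(r, np.pi / 4, 0.0, t=5.0)
\end{verbatim}
For $t\rightarrow\infty$, only the first term in the bracket survives, reproducing the lasting density change proportional to $M_0(t\rightarrow\infty)$ discussed above.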
\epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.7\textwidth]{Chp4-DensityField.png}} \caption[Density perturbation of double couple]{Density perturbation produced by a double couple.} \label{fig:densdouble} \end{figure}} In Figure \ref{fig:densdouble}, the density perturbation is shown for $\theta = \pi/4,\,\phi=0$. The source function is $M_0(t)=M_0\tanh(t/\tau)$ for $t>0$ and zero otherwise. A log-modulus transform is applied to the density field since its value varies over many orders of magnitude \cite{JoDr1980}. This transform preserves the sign of the function it is applied to. A transient perturbation carried by compressional waves propagates parallel to the line $t=r/\alpha$. A lasting density change, which quickly decreases with distance to the source, forms after the transient has passed. \subsection{Gravity perturbations from a point shear dislocation} \label{sec:disgravity} \index{earthquakes} Fault slip generates elastodynamic deformation (static and transient), including compression and dilation that induce local perturbations of the material density. These in turn lead to global perturbations of the gravity field. In this section, we consider an elementary problem: we develop an analytical model of time-dependent gravity perturbations generated by a point shear dislocation in an infinite, elastic, and homogeneous medium. We are interested in frequencies higher than $0.01$\,Hz, for which we can ignore the effects of self-gravitation \cite{DaTr1998}: we compute the gravity changes induced by mass redistribution caused by elastic deformation, but ignore the effect of gravity force fluctuations on the deformation. The results in this subsection were published in \cite{HaEA2015}. The gravity perturbation can either be obtained analogously to the case of a point force, by seeking a known solution of the P-wave potential and rewriting it as a gravity potential, or by attempting a direct integration of density perturbations. First, we will show how to carry out the direct integration. The density perturbation $\delta\rho(\vec r\,,t)$ caused by the displacement field $\vec\xi(\vec r\,,t)$ was presented in Equation (\ref{eq:densdis}). The perturbation of the gravity potential at some point $\vec r_0$ is obtained by integrating over the density field according to \beq \delta\psi(\vec r_0\,,t)=-G\int{\rm d} V\,\frac{\delta\rho(\vec r\,,t)}{|\vec r-\vec r_0|}. \label{eq:potential} \eeq The integration can be carried out using a multipole expansion of the gravity potential. This requires us to divide the integration over the radial coordinate $r$ into two intervals: $0<r<r_0$ and $r_0<r$. Over the first interval, one obtains the exterior multipole expansion:\index{multipole expansion} \beq \delta\psi_{\rm ext}(\vec r_0\,,t) = \sum\limits_{l=0}^\infty\sum\limits_{m=-l}^l I_l^m(\vec r_0)^* \cdot\int\limits_0^{r_0}{\rm d} r\,r^2\int{\rm d}\Omega\,\delta\rho(\vec r,t)R_l^m(\vec r) \label{eq:exterior} \eeq The corresponding expression for the interior multipole expansion is given by \beq \delta\psi_{\rm int}(\vec r_0\,,t) = \sum\limits_{l=0}^\infty\sum\limits_{m=-l}^l R_l^m(\vec r_0)^* \cdot\int\limits_{r_0}^\infty{\rm d} r\,r^2\int{\rm d}\Omega\,\delta\rho(\vec r,t)I_l^m(\vec r), \label{eq:interior} \eeq where we used the solid spherical harmonics defined in Equation (\ref{eq:solidharm}).
The two integrals in Equations (\ref{eq:exterior}) and (\ref{eq:interior}) are readily solved by expressing the radiation pattern in Equation (\ref{eq:densdis}) in terms of the surface spherical harmonics (see Table \ref{tab:sphereharm}), \beq \sin(2\theta)\cos(\phi) =2\sqrt{2\pi/15}\,\left(Y_2^{-1}(\theta,\phi)-Y_2^1(\theta,\phi)\right), \eeq and subsequently making use of the orthogonality relation in Equation (\ref{eq:normalY}). For example, inserting the density perturbation into the exterior multipole expansion of the gravity potential, we have \beq \begin{split} \delta\psi_{\rm ext}(\vec r_0\,,t) &= \sum\limits_{l=0}^\infty\sum\limits_{m=-l}^l I_l^m(\vec r_0)^* \cdot\int\limits_0^{r_0}{\rm d} r\,r^2R(r,t)\int{\rm d}\Omega\,\sin(2\theta)\cos(\phi)R_l^m(\vec r)\\ &= \frac{4\pi}{5}\frac{1}{r_0^3}\sum\limits_{m=-2}^2 Y_2^m(\theta_0,\phi_0)^* \cdot\int\limits_0^{r_0}{\rm d} r\,r^4R(r,t)\int{\rm d}\Omega\,\sin(2\theta)\cos(\phi)Y_2^m(\theta,\phi) \end{split} \eeq The integral over angles can be carried out using Equation (\ref{eq:conjugateY}). The result is again a quadrupole radiation pattern. The integral over the radius can be simplified considerably by integration by parts. The solution $\delta \psi(\vec r_0\,,t)=\delta \psi_{\rm ext}(\vec r_0\,,t)+\delta \psi_{\rm int}(\vec r_0\,,t)$ for the gravity potential perturbation can then be written in the form \beq \delta \psi(\vec r_0\,,t)=G \sin(2\theta_0)\cos(\phi_0)\left[\frac{1}{r_0\alpha^2}M_0(t-r_0/\alpha)-\frac{3}{r_0^3}\int\limits_0^{r_0/\alpha}{\rm d} u\,u M_0(t-u)\right] \label{eq:potentialext} \eeq The second, more elegant approach to solving Equation\;(\ref{eq:potential}) is again based on Equation (\ref{eq:gravP}). Given the known solution for seismic potentials from a point force in infinite media \cite{AkRi2009}, one can derive the corresponding expression for a double couple by applying derivatives to the gravity potential with respect to the source coordinates along two orthogonal directions, and rescaling it according to Equation (\ref{eq:potential}) to obtain the expression for the perturbed gravity potential given in Equation (\ref{eq:potentialext}). As for the point force, the gravity potential perturbation from a double couple has a particularly simple structure at early times, $t<r_0/\alpha$, i.e. before the arrival of P waves at $\vec r_0$: \beq \begin{split} \delta \psi(\vec r_0\,,t) &= -\frac{3G}{r_0^3} \sin(2\theta_0)\cos(\phi_0)\mathcal{I}_2[M_0](t)\\ &= -\frac{6G}{r_0^3} (\vec e_{r_0}\cdot\vec e_x)(\vec e_{r_0}\cdot\vec e_z)\mathcal{I}_2[M_0](t) \end{split} \label{eq:psiearly} \eeq The early gravity potential perturbation appears to emerge from the acausal component of the P-wave potential, whose contribution to the seismic wavefield is cancelled out by a similar contribution from the S-wave potential. Finally, we also give the early-time solution for the gravity acceleration: \beq \delta \vec a(\vec r_0\,,t) = \frac{6G}{r_0^4} ((\vec e_{r_0}\cdot\vec e_z)\vec e_x+(\vec e_{r_0}\cdot\vec e_x)\vec e_z-5(\vec e_{r_0}\cdot\vec e_x)(\vec e_{r_0}\cdot\vec e_z)\vec e_{r_0})\mathcal{I}_2[M_0](t) \label{eq:accearly} \eeq Written in this form, the expression becomes frame independent. The directions $\vec e_x,\vec e_z$ are physical directions denoting the fault normal and the slip direction. They can be reexpressed in any other coordinate system, as for example an Earth coordinate system.
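Equation (\ref{eq:accearly}) transcribes directly into code. The following Python sketch (an illustration only; vectors are NumPy arrays, and the moment input is the precomputed double integral) evaluates the early-time acceleration:
\begin{verbatim}
import numpy as np

G = 6.674e-11   # [m^3 kg^-1 s^-2]

def doublecouple_early_acceleration(I2_M0, e_x, e_z, r0_vec):
    """Early-time gravity acceleration of a point shear dislocation,
    Eq. (accearly); valid while alpha * t < |r0_vec|.

    I2_M0  : double time integral of the moment function [N m s^2]
    e_x    : unit normal of the fault plane
    e_z    : unit vector along the slip direction
    r0_vec : position of the test mass relative to the source [m]
    """
    r0 = np.linalg.norm(r0_vec)
    n = r0_vec / r0
    pattern = (np.dot(n, e_z) * e_x + np.dot(n, e_x) * e_z
               - 5.0 * np.dot(n, e_x) * np.dot(n, e_z) * n)
    return 6.0 * G / r0**4 * I2_M0 * pattern
\end{verbatim}
For a moment function rising linearly at rate $\dot M_0$, one has $\mathcal{I}_2[M_0](t)=\dot M_0\,t^3/6$, which is a convenient input for quick estimates.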
\subsubsection{Gravity-gradient tensor} The gravity-gradient tensor $\ddot h(\vec r_0\,,t)$, whose components can be measured by torsion-bar antennas or atom interferometers, is obtained by calculating \beq \ddot h(\vec r_0\,,t)=-(\nabla\otimes\nabla)\delta \psi(\vec r_0\,,t) \label{eq:hdddef} \eeq where '$\otimes$' denotes the tensor product (also known as the dyadic or outer product). For arbitrary $t$, the result is a symmetric tensor that can be divided into four distinct parts. The first part is proportional to the density perturbation at $\vec r_0$: \beq \ddot h_1(\vec r_0\,,t)=-4\pi G\,\delta\rho(\vec r_0\,,t)\vec e_r\otimes\vec e_r \label{eq:hddpart1} \eeq It is the only contribution with non-vanishing trace. Using ${\rm Tr}(\vec a\otimes\vec b\,)=\vec a\cdot\vec b$, one obtains \beq {\rm Tr}(\ddot h_1(\vec r_0,t))=-4\pi G\,\delta\rho(\vec r_0,t), \eeq consistent with the Poisson equation. The second part can be cast into the form \beq \ddot h_2 (\vec r_0\,,t) = -\frac{6 G}{r_0^5}S(\theta,\phi)\int\limits_0^{r_0/\alpha}{\rm d} u\,u M_0(t-u) \label{eq:hddpart2} \eeq where \beq \begin{split} S(\theta,\phi) &= 5(\vec e_x\cdot\vec e_r)(\vec e_z\cdot\vec e_r)(3 {\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt \vrule depth 0pt height 0.3pt width 0.12em$}} -7\vec e_r\otimes\vec e_r)\\ &\quad+4(\vec e_x\otimes\vec e_z)_{\rm sym}+5((\vec e_x\times\vec e_r)\otimes(\vec e_z\times\vec e_r))_{\rm sym}. \end{split} \eeq Here $(\vec a\otimes\vec b)_{\rm sym}\equiv\vec a\otimes\vec b+\vec b\otimes\vec a$. The third part is given by \beq \ddot h_3(\vec r_0\,,t) = \frac{2 G}{5 r_0^3\alpha^2}\left(6M_0(t-r_0/\alpha)+\frac{r_0}{\alpha}\dot M_0(t-r_0/\alpha)\right)\left(S(\theta,\phi)+(\vec e_x\otimes\vec e_z)_{\rm sym}\right), \label{eq:hddpart3} \eeq and the last part is proportional to the moment function \beq \ddot h_4(\vec r_0\,,t)=-\frac{2 G}{\alpha^2r_0^3} M_0(t-r_0/\alpha) \cdot(\vec e_x\otimes\vec e_z)_{\rm sym}. \label{eq:hddpart4} \eeq Note that the unit vectors $\vec e_x,\,\vec e_z$ are not arbitrary coordinate axes, but have a physical interpretation, being normal to the shear plane and along the shear direction, respectively. The full gravity gradient is simply the sum of these four contributions. The first and last two contributions vanish for $\alpha t<r_0$ since $M_0(t)=0$ for $t<0$, and the integral in the second contribution can then be rewritten, so that at early times \beq \ddot h (\vec r_0\,,t)=-\frac{6 G}{r_0^5}S(\theta,\phi)\, \mathcal{I}_2[M_0](t). \label{eq:earlytime} \eeq None of the four contributions vanishes for $t\rightarrow\infty$. Instead, the time derivatives of the moment function go to zero, and the moment function itself can be substituted by its final value $M_0(t\rightarrow\infty)$. The result is a gravity-gradient tensor whose components decrease with $1/r_0^3$. In addition, the gravity gradient for $t\rightarrow \infty$ is identical to the static gravity perturbation found by Okubo \cite{Oku1991} for shear dislocations in a half space, provided that his result is evaluated for an event far from the surface (so that surface effects are suppressed). For small times $\alpha t<r_0$, the gravity-gradient perturbation is not delayed by $r_0/\alpha$. This delay only emerges once the P waves have reached the point $\vec r_0$. In other words, the point-shear dislocation behaves as a point source of gravity perturbations for $\alpha t<r_0$ even though the actual source is an expanding wavefront of seismic compressional waves. In this case, the effective (point) source function of gravity-gradient perturbations is the fourth time integral of the moment function, which also entails that contributions to the gravity gradient from higher-frequency components of the moment spectrum are strongly suppressed.
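The early-time gradient in Equation (\ref{eq:earlytime}) can be assembled with a few outer products. A minimal Python sketch (illustrative only; inputs as in the previous sketch):
\begin{verbatim}
import numpy as np

def sym(a, b):
    """Symmetrized dyadic product (a (x) b)_sym = a b^T + b a^T."""
    return np.outer(a, b) + np.outer(b, a)

def S_tensor(e_x, e_z, n):
    """Angular tensor S(theta, phi) of the early-time gravity gradient."""
    xn, zn = np.dot(e_x, n), np.dot(e_z, n)
    return (5.0 * xn * zn * (3.0 * np.eye(3) - 7.0 * np.outer(n, n))
            + 4.0 * sym(e_x, e_z)
            + 5.0 * sym(np.cross(e_x, n), np.cross(e_z, n)))

def hddot_early(I2_M0, e_x, e_z, r0_vec, G=6.674e-11):
    """Early-time gravity-gradient tensor, Eq. (earlytime)."""
    r0 = np.linalg.norm(r0_vec)
    n = r0_vec / r0
    return -6.0 * G / r0**5 * S_tensor(e_x, e_z, n) * I2_M0
\end{verbatim}
A quick consistency check is that ${\rm Tr}(S)=0$ for perpendicular $\vec e_x,\vec e_z$, as required before the arrival of P waves, when the local density perturbation at $\vec r_0$ still vanishes.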
\subsection{Gravity perturbation from the Tohoku earthquake} \label{sec:Tohoku} \index{focal mechanism} In this section, the specific example of the 2011 Tohoku earthquake will be used to estimate gravity perturbations. The Tohoku event had a magnitude of 9.0, and ruptured a fault of width and length of several hundred kilometers \cite{AmEA2011}. The hypocenter was located at latitude N37.52 and longitude E143.05. The estimate of the gravity perturbation will be based on the early-time approximation given in Equation (\ref{eq:psiearly}). The result should not be expected to be accurate, since the source cannot be approximated as a point source, and the influence of the surface on the gravity perturbation may be substantial, but, nonetheless, it serves as an order-of-magnitude estimate. With these simplifications, the main work is to understand the specifications of focal mechanisms published by seismological institutions such as the USGS\footnote{\url{http://earthquake.usgs.gov/}}, and to translate them into the fault-slip oriented coordinate system of Section \ref{sec:disgravity}. The focal mechanism can be specified by three angles: the strike angle $\gamma_{\rm s}$, the dip angle $\gamma_{\rm d}$, and the rake angle $\gamma_{\rm r}$. The strike angle is subtended by the intersection of the fault plane with the horizontal plane, and the North cardinal direction. The dip is the angle between the fault plane and the horizontal plane. Finally, the rake is subtended by the slip vector and the horizontal direction on the fault plane. The fault geometry is displayed in Figure \ref{fig:fault}. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Chp4-Fault.pdf}} \caption[Focal mechanism]{Focal mechanism.} \label{fig:fault} \end{figure}} For the Tohoku earthquake, the strike angle is $\gamma_{\rm s}=3.54$, the dip angle $\gamma_{\rm d}=0.17$, and the rake angle $\gamma_{\rm r}=1.54$ (all in radians). In the coordinate system shown in Figure \ref{fig:pointshear}, the normal vector of the fault defines the direction of the $x$-axis, and the slip vector defines the direction of the $z$-axis. In this coordinate system, the gravity perturbation is given by Equation (\ref{eq:accearly}): \beq \delta \vec a(\vec r_0\,,t) = \frac{6G}{r_0^4} \left[(\vec e_x\cdot\vec e_{r_0})\vec e_z+(\vec e_z\cdot\vec e_{r_0})\vec e_x-5(\vec e_x\cdot\vec e_{r_0})(\vec e_z\cdot\vec e_{r_0})\vec e_{r_0}\right]\, \mathcal{I}_2[M_0](t). \eeq Now vectors are to be expressed in a new coordinate system whose axes correspond to the cardinal directions $\vec e_{\rm E},\,\vec e_{\rm N}$, and the normal vector of Earth's surface, $\vec e_{\rm V}$. Based on the geometry shown in Figure \ref{fig:fault}, the following relations can be found \beq \begin{split} \vec e_x &= R(\vec e_{\rm V},-\gamma_{\rm s})\cdot R(\vec e_{\rm N},-\gamma_{\rm d})\cdot\vec e_{\rm V},\\ \vec e_z &= R(\vec e_{\rm V},-\gamma_{\rm s})\cdot R(\vec e_{\rm N},-\gamma_{\rm d})\cdot R(\vec e_{\rm V},\gamma_{\rm r})\cdot\vec e_{\rm N}, \end{split} \eeq and $\vec e_y=\vec e_z\times\vec e_x$. A matrix $R(\vec a,\alpha)$ describes a rotation around the axis $\vec a$ by an angle $\alpha$.
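Both the construction of $\vec e_x,\vec e_z$ from strike, dip, and rake and the early-time estimate can be scripted. In the following Python sketch, the rotation matrices are implemented via Rodrigues' formula; only the focal angles, the total seismic moment, and the approximate rupture time are taken from the text, while the constant moment rate and the source-to-station geometry are rough assumptions for illustration:
\begin{verbatim}
import numpy as np

G = 6.674e-11

def rot(axis, angle):
    """Rotation matrix R(a, angle) around unit axis a (Rodrigues' formula)."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

e_E, e_N, e_V = np.eye(3)          # East, North, vertical basis (assumed order)
g_s, g_d, g_r = 3.54, 0.17, 1.54   # Tohoku strike, dip, rake [rad]

e_x = rot(e_V, -g_s) @ rot(e_N, -g_d) @ e_V                   # fault normal
e_z = rot(e_V, -g_s) @ rot(e_N, -g_d) @ rot(e_V, g_r) @ e_N   # slip direction

def accel_early(I2_M0, r0_vec):
    """Early-time gravity acceleration, Eq. (accearly)."""
    r0 = np.linalg.norm(r0_vec)
    n = r0_vec / r0
    pat = (np.dot(n, e_z) * e_x + np.dot(n, e_x) * e_z
           - 5 * np.dot(n, e_x) * np.dot(n, e_z) * n)
    return 6 * G / r0**4 * I2_M0 * pat

# Crude stand-in for the source: constant moment rate 5e22 Nm / 120 s,
# so I2[M0](t) ~ Mdot t^3 / 6 (assumption), evaluated at t = 60 s
t = 60.0
I2 = 5.0e22 / 120.0 * t**3 / 6.0
# Source-to-station unit vector and ~520 km distance (assumed geometry)
r0_vec = 5.2e5 * (-0.97 * e_E - 0.23 * e_N + 0.02 * e_V)
print(accel_early(I2, r0_vec)[2])   # vertical component [m/s^2]
\end{verbatim}
The result is only meaningful before the P-wave arrival (about 68\,s in this example), and it reproduces the $10^{-8}\,\rm m/s^2$ order of magnitude discussed below.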
Sensors designed to monitor changes in gravity acceleration are called gravimeters\index{gravimeter}. For example, networks of gravimeters have been used in the past to detect coseismic gravity changes following large earthquakes \cite{ImEA2004}. However, these were comparisons of DC gravity changes before and after the event. A prompt detection of a coseismic gravity perturbation using gravimeters has not been achieved yet. \epubtkImage{}{% \begin{figure}[htbp] \centerline{\includegraphics[width=0.49\textwidth]{Chp4-TimeSeries_KA.pdf} \includegraphics[width=0.49\textwidth]{Chp4-Tohoku_early.pdf}} \caption[Modelled Tohoku gravity perturbation]{The left plot shows a gravimeter time series at the Kamioka site, high-passed at 2\,mHz, during the days before the Tohoku earthquake. The right plot shows the modelled gravity perturbation in vertical direction from the Tohoku earthquake at the Kamioka site.} \label{fig:Tohoku} \end{figure}} The results of this and the previous subsection can be used to make quantitative predictions of gravity perturbations from earthquakes. Since the model of Equation (\ref{eq:potentialext}) is valid for fault ruptures in infinite space, a prediction of gravity perturbations for sources buried in half spaces can only be valid as long as the seismic waves have not reached the surface. In reality, this typically allows us to model up to a few seconds of the time series of an earthquake, but it was shown in \cite{HaEA2015} using numerical simulations that the duration of the modelled gravity perturbation can be extended for some time without causing major deviations from the half-space signal. It would, of course, be useful to have the analytical half-space solution in hand. This said, it should nevertheless be true that the infinite-space solution provides a useful order-of-magnitude estimate of the gravity perturbation, even beyond the duration validated by numerical simulations, at least as long as seismic waves have not yet reached the location of the gravity sensor. Figure \ref{fig:Tohoku} shows the result for the perturbation of gravity acceleration in vertical direction. The first 60\,s of the signal are simulated for a gravimeter at the Kamioka station at latitude N36.42 and longitude E137.31, about 500\,km away from the hypocenter of the earthquake. The curve uses the estimated source function of the Tohoku earthquake\footnote{\url{http://www.tectonics.caltech.edu/slip_history/2011_tohoku_joint/index.html}}, which had a total rupture duration of about 300\,s, with almost all of the total seismic moment, $5\times 10^{22}$\,Nm, already released after 120\,s. After 68\,s, the first seismic waves reach the gravimeter, which makes gravity measurements impossible for more than a day. A signal of about $-10^{-8}\,\rm m/s^2$ is substantial. The rms of the data between 2\,mHz and 0.5\,Hz is about $5\times10^{-9}\,\rm m/s^2$ during relatively quiet times, and gravimeter data are highly non-stationary (mostly due to direct seismic perturbation of the instrument). It may nevertheless be possible to detect this signal before the arrival of the seismic waves, based on a fit to the predicted gravity perturbation and integrating over the available 68\,s of data. \subsection{Seismic sources in a homogeneous half space} \label{sec:sourcehalf} We now consider the case of gravity perturbations above surface produced by seismic fields in a homogeneous half space. There are two major differences compared to the case of infinite space. First, the explicit solution of the seismic field produced by point sources contains an integral, which is impossible to solve except for the easiest source time functions.
Sophisticated analytical techniques known as Cagniard--de Hoop methods had to be invented to obtain these solutions \cite{dHo1961,AkRi2009}. Second, even when the seismic field is left unspecified, the explicit solution of the gravity perturbation involves other integrals, while the infinite-space solution of Equation (\ref{eq:gravP}) was free of integrals. This is at least the conclusion of the preliminary investigation presented in the following. The purpose of this section is to introduce a suitable theoretical framework and to simplify the expression for the gravity perturbation as much as possible. The starting point is Equation (\ref{eq:gravHelm}) without the last term, since it vanishes above surface. The $z$-axis is chosen as the surface normal: $\vec n=(0,0,1)$. The goal is to calculate a gravity perturbation directly above the free surface at $z=0$. In general, the gravity potential above surface takes the form \beq \begin{split} \delta\phi_{\rm surf}(\vec r_0,t)&=-G\rho_0\int\drm S\,\vec n\cdot\left[\vec\psi_{\rm s}(\vec r,t)\times\nabla\frac{1}{|\vec r-\vec r_0|}+\phi_{\rm s}(\vec r,t)\nabla\frac{1}{|\vec r-\vec r_0|}\right]\\ &=-G\rho_0\int\drm S\,(\psi_{\rm s}^x(\vec r,t)\partial_y-\psi_{\rm s}^y(\vec r,t)\partial_x+\phi_{\rm s}(\vec r,t)\partial_z)\frac{1}{|\vec r-\vec r_0|}\\ &= G\rho_0\left[\partial_{y_0}\int\drm S\,\frac{\psi_{\rm s}^x(\vec r,t)}{|\vec r-\vec r_0|}-\partial_{x_0}\int\drm S\,\frac{\psi_{\rm s}^y(\vec r,t)}{|\vec r-\vec r_0|}+\partial_{z_0}\int\drm S\,\frac{\phi_{\rm s}(\vec r,t)}{|\vec r-\vec r_0|}\right] \end{split} \label{eq:surfpot} \eeq Seismic fields in half spaces can be elegantly represented as an expansion into cylindrical harmonics (see Section \ref{sec:cylindrical}). A good review of the theory with application to seismology can be found in \cite{Kim1989}. Looking at Equation (\ref{eq:surfpot}), we see that in addition to an expansion of the seismic potentials, one also needs an expansion of the inverse distance into cylindrical harmonics. Evaluating seismic fields on the surface $z=0$, and gravity above surface so that $z_0>0$, the inverse distance can be expanded according to \beq \frac{1}{|\vec r-\vec r_0|}=\sum\limits_{n=0}^\infty \int\limits_0^\infty\drm k\,(2-\delta_{n0})J_n(k\varrho_0)J_n(k\varrho)\cos(n (\phi-\phi_0))\e^{-kz_0}. \eeq The expansions of the scalar potential components in Equation (\ref{eq:surfpot}) are given by \cite{Kim1989} \beq \begin{split} \psi^x_{\rm s}(\varrho,\phi,z)&=\e^{\irm\omega t}\sum\limits_{m=-\infty}^\infty\int\limits_0^\infty\drm p\,J_m(p\varrho)\e^{\irm m\phi}\left(a_m^{x,1}(p)\e^{zk_z^{\rm S}(p)}+a_m^{x,2}(p)\e^{-zk_z^{\rm S}(p)}\right)\\ \psi^y_{\rm s}(\varrho,\phi,z)&=\e^{\irm\omega t}\sum\limits_{m=-\infty}^\infty\int\limits_0^\infty\drm p\,J_m(p\varrho)\e^{\irm m\phi}\left(a_m^{y,1}(p)\e^{zk_z^{\rm S}(p)}+a_m^{y,2}(p)\e^{-zk_z^{\rm S}(p)}\right)\\ \phi_{\rm s}(\varrho,\phi,z)&=\e^{\irm\omega t}\sum\limits_{m=-\infty}^\infty\int\limits_0^\infty\drm p\,J_m(p\varrho)\e^{\irm m\phi}\left(b_m^{1}(p)\e^{zk_z^{\rm P}(p)}+b_m^{2}(p)\e^{-zk_z^{\rm P}(p)}\right) \end{split} \label{eq:cylexpand} \eeq The integration variable $p$ can be interpreted as the horizontal wavenumber of the harmonics that constitute the seismic field, while the vertical wavenumbers $k_z^{\rm P}(p),\,k_z^{\rm S}(p)$ have the form given in Equation (\ref{eq:wavek}). Explicit expressions for the amplitudes $a_m^{x,i}(p),\,a_m^{y,i}(p),\,b_m^i(p)$ depend on the nature of the seismic source, and can be found for a few important cases in \cite{Kim1989}.
They also depend on the depth $z_{\rm s}$ of the seismic source. The evaluation of the surface integrals is analogous for the three potentials. We outline the calculation for the P-wave potential $\phi_{\rm s}$: \beq \begin{split} \partial_{z_0}\int\drm S\,\frac{\phi_{\rm s}(\vec r,t)}{|\vec r-\vec r_0|} &= \partial_{z_0}\int\limits_0^{2\pi}\drm\phi\int\limits_0^\infty\drm\varrho\,\varrho\sum\limits_{n=0}^\infty \int\limits_0^\infty\drm k\,(2-\delta_{n0})J_n(k\varrho_0)J_n(k\varrho)\cos(n (\phi-\phi_0))\e^{-kz_0}\\ &\hspace{4cm}\cdot\e^{\irm\omega t}\sum\limits_{m=-\infty}^\infty\int\limits_0^\infty\drm p\,(b_m^1(p)+b_m^2(p))J_m(p\varrho)\e^{\irm m\phi}\\ &=2\pi\e^{\irm\omega t}\partial_{z_0}\sum\limits_{m=-\infty}^\infty \int\limits_0^\infty\drm k\,(2-\delta_{m0})J_m(k\varrho_0)\e^{-kz_0}\e^{\irm m\phi_0}\int\limits_0^\infty\drm p\,(b_m^1(p)+b_m^2(p))\\ &\hspace{4cm}\cdot\int\limits_0^\infty\drm\varrho\,\varrho J_m(p\varrho)J_m(k\varrho)\\ &=2\pi\e^{\irm\omega t}\partial_{z_0}\sum\limits_{m=-\infty}^\infty \int\limits_0^\infty\drm k\,(2-\delta_{m0})\frac{b_m^1(k)+b_m^2(k)}{k}J_m(k\varrho_0)\e^{-kz_0}\e^{\irm m\phi_0}\\ &=-2\pi\e^{\irm\omega t}\sum\limits_{m=-\infty}^\infty \int\limits_0^\infty\drm k\,(2-\delta_{m0})(b_m^1(k)+b_m^2(k))J_m(k\varrho_0)\e^{-kz_0}\e^{\irm m\phi_0} \end{split} \eeq The last equation has strong similarity with the expansion in Equation (\ref{eq:cylexpand}) of the P-wave potential itself. The important difference is that the vertical wavenumber $k_z^{\rm P}(p)$ is now substituted by the horizontal wavenumber $k$. We have already seen this happen in the explicit solutions for Rayleigh and plane body waves, where the seismic amplitude changes with depth $z$ in terms of a vertical wavenumber, but the gravity perturbation changes with the horizontal wavenumber (see Sections \ref{sec:bodyhalf} \& \ref{sec:gravRayleigh}). For this reason, it is unfortunately impossible to express the P-wave contribution to the gravity potential directly in terms of the P-wave potential. However, the solution simplifies if the gravity field is to be calculated directly above surface at $z_0=0$: \beq \partial_{z_0}\int\drm S\,\frac{\phi_{\rm s}(\vec r,t)}{|\vec r-\vec r_0|}\bigg|_{z_0=0} = 2\pi\e^{\irm\omega t}\int\limits_0^\infty\drm k\,(b_0^1(k)+b_0^2(k))J_0(k\varrho_0)-4\pi\phi_{\rm s}(\varrho_0,\phi_0,z_0=0,t) \label{eq:halfgravP} \eeq The gravity perturbation contributed by the P-wave potential consists of a part that has the same form as the infinite-space solution in Equation (\ref{eq:gravP}), and a second part that is a simple integral involving only the zero-order Bessel function. It may be possible to carry out the remaining integral for specific seismic sources, and possibly also to carry out the inverse Fourier transform explicitly to obtain a full time-domain solution. It should also be kept in mind that the seismic potentials in half space are considerably more complicated than in infinite space, and therefore the solution in Equation (\ref{eq:halfgravP}) has only formal similarity with the infinite-space solution. We leave it to the reader to repeat the exercise for the components of the shear potential, which can be simplified by realizing that the horizontal components of the shear potential can be obtained from a single scalar potential using the identity $\vec\psi_{\rm s}(\vec r\,)=\nabla\times(0,0,\Lambda_{\rm s}(\vec r\,))+(0,0,\psi_{\rm s}^z(\vec r\,))$.
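The inverse-distance expansion used above is easy to verify numerically, which is a useful sanity check when implementing the half-space formalism. A small Python sketch using SciPy's Bessel functions (all coordinate values are arbitrary test numbers):
\begin{verbatim}
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

# Check the cylindrical-harmonic expansion of 1/|r - r0| for a field
# point on the surface (z = 0) and an observation point at z0 > 0.
rho, phi = 3.0, 0.7              # surface point
rho0, phi0, z0 = 2.0, 0.1, 1.5   # observation point

exact = 1.0 / np.sqrt(rho**2 + rho0**2
                      - 2 * rho * rho0 * np.cos(phi - phi0) + z0**2)

series = 0.0
for n in range(40):                    # truncate the harmonic sum
    pref = 1.0 if n == 0 else 2.0      # factor (2 - delta_n0)
    integrand = lambda k: jv(n, k * rho0) * jv(n, k * rho) * np.exp(-k * z0)
    I, _ = quad(integrand, 0.0, 50.0, limit=400)  # e^{-k z0} tames the tail
    series += pref * I * np.cos(n * (phi - phi0))

print(exact, series)   # the two values should agree to high accuracy
\end{verbatim}
The exponential factor $\e^{-kz_0}$ guarantees convergence of the wavenumber integral as long as $z_0>0$; the same factor produces the low-pass behavior of gravity perturbations with increasing height above ground.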
As a final note, in order to translate the gravity model into a gravimeter signal, one also needs to take into account the self-gravity effect described in Section \ref{sec:gravground}, which means that gravity fluctuations induce surface motion. Whether corrections from self-gravity effects are significant depends on the distance of the gravimeter to the source as well as on the spectrum of the gravity fluctuations \cite{Run1980,Wan2005}. \subsection{Summary and open problems} \label{sec:pointsummary} We have shown how to calculate gravity perturbations based on models of seismic sources. The general expressions for these perturbations can be complicated, but especially when neglecting surface effects, the gravity perturbations assume a very simple form due to a fundamental equivalence between seismic and gravity potentials according to Equation (\ref{eq:gravP}). We have seen demonstrations of this principle in Section \ref{sec:forcegrav} for point forces, and in Section \ref{sec:disgravity} for point shear dislocations. The solution for the point force was used to highlight the difference between locally generated gravity perturbations, i.~e.~at the test mass, and perturbations from an incident seismic wavefront. It was shown that, due to the strong low-pass filtering of gravity perturbations from distant seismic wavefronts, seismic sources need to have very peculiar properties to produce significant, instantaneous gravity perturbations at the test mass. Consequently, gravity perturbations from distant seismic wavefronts are more likely to play a role in sub-Hz GW detectors, and even there the seismic event producing the wavefront needs to be very strong. As an example, we have presented the formalism to estimate perturbations from earthquakes in Sections \ref{sec:disdensity} and \ref{sec:Tohoku}. These results also have important implications for coherent Newtonian-noise cancellation schemes. It was argued in the past that seismic sensors deployed around the test mass can never provide information about gravity perturbations from incident seismic disturbances that have not yet reached the seismic array. Therefore, there would be a class of gravity perturbations that cannot be subtracted with seismic sensors. While the statement is generally correct, we now understand that these gravity perturbations are significant only well below the GW detection band (of any $>1\,$Hz GW detector), unless the source of the seismic wavefront has atypically strong high-frequency content. The theory of gravity perturbations from seismic point sources has just begun to be explored. In particular, a thorough analysis of surface effects is essential for future developments. In Section \ref{sec:sourcehalf}, a first calculation of gravity perturbations from point sources in half spaces was outlined. The full solution still needs to be analyzed in detail. Open questions are how the Rayleigh waves generated in half spaces affect gravity perturbations at larger distances, and also how the contribution of body waves is altered by reflection from the surface. In light of the possible applications of low-frequency GW detectors in geophysics, further development of the theory may significantly influence future directions in this field. \section{Acknowledgements} \label{sec:acknowledgements} I was lucky to have been given the opportunity to enter the field of Newtonian noise and terrestrial gravity perturbations at a time when outstanding experimental problems had to be addressed for future GW detector concepts.
I took my first steps in this field as part of the group of Prof Vuk Mandic at the University of Minnesota, Twin Cities. I have to thank Prof Mandic for his continuous support, and especially for taking the time to return comments on this manuscript. With his DUGL project currently progressing rapidly, his time is very precious. During these first two years, I started to collaborate with Prof Giancarlo Cella, who by that time had already written seminal papers on Newtonian-noise modelling and mitigation. I thank Prof Cella for the many discussions on Newtonian noise, and also for pointing out important past work on Newtonian noise missing in an earlier version of the manuscript. While working on the experimental realization of an underground seismic array at the former Homestake mine as part of the DUGL project, I had the privilege to collaborate with and learn from my colleagues Dr Riccardo DeSalvo and Dr Mark Beker (at that time a graduate student). Their motivation to do science at its best, without hesitation in complicated situations, has inspired me ever since. I thank both of them for comments and contributions to this manuscript. Starting in 2010, I was given the opportunity at Caltech to apply my experience with seismic fields and gravity modelling to investigate Newtonian noise for the LIGO detectors. I have to thank Prof Rana Adhikari for supporting me not only with my LIGO work, but also for making sure that I keep an open mind and broad view on science. I am especially thankful that I could work with one of Prof Adhikari's graduate students, Jennifer Driggers, with whom I was able to lay the foundation for future work on Newtonian noise at the LIGO sites. I thank Jenne for her dedication and for keeping me focussed on the important problems. I am currently supported by a Marie-Curie Fellowship (FP7-PEOPLE-2013-IIF) at the Universit\`a di Urbino, which gives me the freedom to contribute to the development of the field in any possible direction. Therefore, I want to thank the committee of the European Commission who evaluated my past accomplishments in a favorable way. I want to thank Prof Flavio Vetrano and my colleagues at the INFN Firenze, who involved me in exciting experimental developments in Europe on low-frequency gravity sensing, especially atom interferometry. As I could hopefully demonstrate in this paper, terrestrial gravity perturbations pose a complex problem, which means that observations in the future should be expected to hold surprises for us, and unexpected applications may emerge. Last but not least, I want to thank Marica Branchesi, who made sure that I never lost motivation to write this article, and whose dedication to science and people is always an inspiration to me. \\[1cm] I acknowledge the use of Mathematica and Matlab for the generation of the plots in this paper, and as an aid in some analytical studies. \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
9,127
Q: Can't get JSON data from array inside of array using for loop in javascript/jquery

I have a problem with getting some data in JSON format. I already got most of the data, but there is data which is an array inside of an array, and that is more complicated. I can't seem to find out what I should do. The JSON file looks like this (it's a part of it; I can't change it):

    {
        "smth": [
            {
                "part": { "d": 295.15 },
                "rock": [
                    { "description": "heavy", "icon": "22sd" }
                ],
                "song": 10
            },
            {
                "part": { "d": 295.15 },
                "rock": [
                    { "description": "soft", "icon": "33sd" }
                ],
                "song": 10
            }
        ]
    }

My code, the part which doesn't work as I want:

    var icon = [];
    var desc = [];
    $.getJSON(url, function(json) {
        $.each(json.smth, function() {
            $.each(this.rock, function() {
                for (var i in json.smth) {
                    main[i] = this.description;
                    icon[i] = this.icon;
                }
            });
        });
        document.getElementById('icon1').src = '..//img/w/' + icon[0] + '.png';
        document.getElementById('icon2').src = '..//img/w/' + icon[1] + '.png';
        document.getElementById('icon3').src = '..//img/w/' + icon[2] + '.png';
        document.getElementById('icon4').src = '..//img/w/' + icon[3] + '.png';
        document.getElementById('icon5').src = '..//img/w/' + icon[4] + '.png';
        document.getElementById('main1').innerHTML = desc[0];
        document.getElementById('main2').innerHTML = desc[1];
        document.getElementById('main3').innerHTML = desc[2];
        document.getElementById('main4').innerHTML = desc[3];
        document.getElementById('main5').innerHTML = desc[4];
    });

I need to get the data as an array, but I can't write that part like `this[i].description`. The other data worked well like this: `rain[i] = json.smth[i].part.d;` — there I didn't need to use `$.each(json.list, function()`. Also, I don't know if it's somehow possible for the last 10 rows in my code to use a loop. Does anyone have any suggestions? I'd be really grateful.

A: Try this

    $.getJSON(url, function (json) {
        $.each(json.smth, function () {
            $.each(this.rock, function (idx) {
                document.getElementById('icon' + (idx + 1).toString()).src = '..//img/w/' + this.icon + '.png';
                document.getElementById('main' + (idx + 1).toString()).innerHTML = this.description;
            });
        });
    });

A: So if I'm understanding what you are trying to do correctly, then you should be able to do this:

    $.each(json.smth, function() {
        $.each(this.rock, function() {
            icon.push(this.icon);
            desc.push(this.description);
        });
    });

And you'll end up with your two arrays populated. Personally, if those two items, icon and description, are associated together, I'd rather keep them together in a single object. Something like:

    var foo = [];
    $.each(json.smth, function() {
        $.each(this.rock, function() {
            foo.push({ icon: this.icon, desc: this.description });
        });
    });

which lets you access each component as:

    foo[0].icon
    foo[0].desc

Now, to do your DOM manipulation, you can just loop through your new array (or arrays, if you keep them separate):

    for (var i = 0; i < foo.length; i++) {
        $("#icon" + (i + 1)).prop("src", "..//img/w/" + foo[i].icon + ".png");
        $("#main" + (i + 1)).html(foo[i].desc);
    }

Or, as @malkam suggested, if you don't actually need the arrays, you can just do the DOM manipulation right inside your JSON processing loop.
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,309
Q: Using Actionscript 2.0 to access value in XHTML file

I have a (.xhtml) file, and I need to access a particular <div> from my Flash movie using ActionScript 2.0. My (.xhtml) file is roughly in this format:

    <html>
    <head>
        <title>Landing</title>
    </head>
    <body>
        <div id="mainArea">
            <div id="content">
                <div id="calloutContent">
                    <div id="callout1">Search Our QA Database</div>
                </div>
                <div id="TitleContent">
                    <div id="title1">Heading One</div>
                    <div id="title2">Heading Two</div>
                </div>
            </div>
        </div>
    </body>
    </html>

I need to access the value of <div id="title1">, which would be the text "Heading One", then assign that to a variable in my Flash movie and display it in a text field. I have looked at "XPathAPI", but I just cannot get it working, as the file is not pure XML. I have looked at loading XML and accessing nodes within that, which works fine, but when I try (.xhtml) in the format above, I just cannot get it to work. Any help would be greatly appreciated. If you need any further information, please let me know. Thanks, I am a bit desperate. Alan...

A: Using meta tags, you can send variables into a Flash file when embedding the SWF file onto a page. I don't remember exactly how, but a quick Google search should reveal it. After that, if the value in the div will be changing, just use JavaScript (or PHP) to keep up with it.
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,288
{"url":"https:\/\/chemistry.tutorvista.com\/inorganic-chemistry\/mole-chemistry.html","text":"Top\n\n# Mole Chemistry\n\nAtoms and molecules are very, very tiny. We cannot count the individual particles of a molecule. Hence, there are measures such as atomic mass, atomic weight, molecular count to estimate the number of atoms or molecules. John Dalton and Thomson first determined atomic weights between 1803 and 1805. Atomic weight was originally defined relative to that of the lightest element hydrogen taken as 1.00. Dalton took 1\/16th of the mass of an atom of oxygen as the unit of atomic mass.\n\nNowadays, the carbon-12 isotope is considered as the standard atomic mass unit as a reference for measuring atomic masses. One atomic mass unit can be defined as the mass unit that is equal to 1\/12th of the mass of 1 atom of C-12. The unit of atomic mass or amu can be defined as the mass of 1 H-atom. Because the relative weights of hydrogen, carbon and nitrogen are 1.01, 12.01 and 14.01, we can also say that the weight of hydrogen is 1.01 amu per atom, the weight of carbon is 12.01 amu per atom and the weight of nitrogen is 14.01 amu per atom. Another way to represent atomic and molecular mass is \"Mole concept\".\u00a0 Let\u2019s discuss about mole concept and its application.\n\n Related Calculators conversion of grams to moles calculator convert moles to grams calculator Mole Fraction Calculator\n\n## What is a Mole in Chemistry?\n\nOne mole of a chemical species contains the same number of particles as there are atoms in 12 grams of the isotope of Carbon \u2013 12.\nThus,\n1 mole of a substance = 1 gram molecular mass = GMW\n= 1 gram molecule = 6.02 x 1023 molecules\n= Atomicity x 6.023 x 1023 atoms\n= Atomicity x 1 gm atom\n= 22.4 liters of gaseous substance at S.T.P. (GMW)\n\nUse the below widget to calculate the mole fraction.\n\n## Mole Definition\n\nThe mole, in chemistry, is a measure of the amount of substance. It is defined according to the number of particles that the substance contains. In particular, we use the following definition.\nA mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon-12, where the carbon-12 atoms are unbound, at rest and in their ground state. The number of atoms in 0.012 kilogram of carbon is known as Avogadro number, and is determined empirically. The currently accepted value is 6.0221415 x 1023 mol-1.\nExplanation\n\nMole of oxygen = 16 + 16 = 32 grams of Oxygen.\n= Weight of 6.02 x 1023 molecules of Oxygen\n= Weight of 2 x 6.02 x 1023 atoms of Oxygen\n= 2 x 1 gram atom of Oxygen\n= Weight of 22.4 liters of Oxygen at S.T.P\n\n6.023 x 1023 is called the Avogadro numberThe number of particles in a mole is called Avogadro's constant or Avogadro's number. This number may be used to measure any kind of particle, such as atoms, ions or molecules. The number of atoms in 12 grams of Carbon \u2013 12 is approximately equal to 6.023 x 1023.\n\nThis is the number of particles per mole of Carbon -12 and it gives us our definition of Avogadro's number or Avogadro's constant.\n\nSo, 12.0 grams of Carbon contains 6.023 x 1023 atoms, therefore, 16.0 grams of oxygen will also contain 6.023 x 1023 atoms of oxygen. 
Similarly, one gram \u2013 atom of any other element will also contain 6.023 x 1023 atoms of that element and one molecule of any substance will contain 6.023 x 1023 molecules of that compound.\n\nMolar mass and moles\n\nMolar mass of an element is numerically equal to the element's atomic mass and has units grams per mole. Thus molar masses are conversion factors for moles and mass of a compound or an element.\nMass of one mole of particles is called Molar mass. In the case of atoms molar mass is equal to the atomic mass and in the case of molecules, molar mass is equal to the gram molar mass. So, Mass of 6.023 x 1023 molecules of any substance (element or compound) is equal to its gram molecular mass or one gram molecule.\n\nMole concept can be applied even to an ionic compound. One mole of an ionic compound represents one mole (6.023 x 1023) formula units of that compound.\n1 mole of atoms or ions = 6.023 x 1023 atoms or ions.\n1 mole of electrons = 6.023 x 1023 electrons\n\nMoles and chemical equations:\n\nA chemical equation is an extremely useful way of summarizing a lot of information. The equation should show the amount of each substance involved in the reaction and its state. Here is an example:\n\n2H2(g) + O2(g) \u2192 H2O(l)\n\n2 moles of hydrogen molecules react with 1 mole of oxygen molecules to form 2 moles of water, all in gaseous state.\n\n## Mole Concept and Stoichiometry\n\nSome Points about Mole Concept and Stoichiometry:\n\u2022 Calculation of the mass of an atom of an element\n= Gram atomic mass \/ Avogadro number\n\u2022 Calculation of the mass of one molecule of a substance:\n= Gram molecular mass \/ Avogadro number\n\u2022 Calculation of the number of atoms in a given mass of an element:\n= Mass of element in grams \/ gram atomic mass x N0\n\u2022 Number of molecules in a given mass of the substance\n= Mass of substance in grams \/ gram molecular mass x N0\n\u2022 Calculation of the number of molecules present in V liters of gas at S.T.P\n= Volume of the gas in liters \/ 22.4 liters x N0\n\n## Mole Calculations\n\nThe formula used for all the mole calculations is :\nNumber of moles of a substance = Mass of the substance in grams \/ Molar mass of the substanceExample:\n\nMoles of a compound\n\nThe formula for water says that 1 mole of water contains 2 moles of hydrogen and 1 mole of oxygen atoms. Similarly, the formula for ammonia which is NH3, tells us the molar composition of the gas:\n1 mole of nitrogen atoms combines with 3 moles of hydrogen atoms to form ammonia.\n\nOnce we have established the number of moles of each element present in a covalent substance like water or ammonia, we can work out its molar mass.\n\nM(H2O) = 2 x M(H) + 1 x M(O)\n= 2 x 1 g\/mol + 1 x 16 grams\/mol\n= 18 grams \/ mole\n(Oxygen has an atomic mass of 16 and Hydrogen has an atomic mass of 1)\n\nTherefore, say for example, we need to find the mass of 1 mole of water.\nMass of 1 mole of water = 18 grams\/mol x 1 mol = 18 grams\n\n## Chemistry Mole Problems\n\n### Solved Examples\n\nQuestion\u00a01: A bottle contains 6 grams of Magnesium ribbon. 
How many moles of Mg are present?\nSolution:\n\nNumber of moles of magnesium\n\n= $\\frac{Mass\\ in\\ grams\\ of\\ Magnesium} {Atomic\\ mass\\ of\\ Mg}$\n\n= $\\frac{6 g}{24 g\/ mol}$\n\n= 0.25 mole of Mg.\n\nQuestion\u00a02: Calculate the number of moles in 25 g of Calcium Carbonate.\nSolution:\n\nMolecular mass of CaCO3 = 100\nNumber of moles of Calcium Carbonate\n=$\\frac{ Mass\\ of\\ Calcium\\ carbonate\\ in\\ grams}{Molar\\ mass\\ of\\ calcium\\ carbonate}$\n\n= $\\frac{25 g}{100 g. mol^{-1}}$ = 0.25 mole of calcium carbonate.\n\nQuestion\u00a03: Calculate the number of moles in 11 grams of CO2 3.01 x 1023 molecules of CO2\nSolution:\n\n1. Molecular mass of CO2 = 44 grams\/mole.\n\nMoles of CO2\n=$\\frac{ Mass\\ of\\ Carbon\\ dioxide }{ molar\\ mass\\ of\\ carbon\\ dioxide}$\n\n= $\\frac{11 grams}{44 grams .mol^{-1}}$\n\n= 0.25 moles of carbon dioxide.\n\n2. 6.02 x 1023 molecules of Carbon dioxide = 1 mole of CO2\n\n3.01 x 1022 molecules of CO2\n\n=$\\frac{ 1 }{ 6.02 \\times 1023 \\times 3.01\\times 1022 }$moles\n\n= 0.05 moles of Carbon dioxide.\n\nQuestion\u00a04: Calculate the mass of half a mole of Nitrogen.\nSolution:\n1 mole of Nitrogen = 2 x 1gram atom of N2 = 2 x 14 = 28 grams.\n(ATOMIC MASS OF N IS 14)\nTherefore, half a mole of Nitrogen = 28 x \u00bd = 14 grams.\n\nQuestion\u00a05: How many atoms of oxygen are present in 300 grams of CaCO3?\nSolution:\n\nGram molecular mass of CaCO3 = 100 g\nNow, one mole of CaCO3 contains = 3 moles of Oxygen atoms.\nOr, 100 grams of CaCO3 contains = 3 x 6.02 x 1023 oxygen atoms.\n\nTherefore,\nOxygen atom in 300 grams of CaCO3\n\n= $\\frac{3 \\times 6.02 \\times 10^{23}}{100\\times 300 }$= 54.207 x 1023 oxygen atoms.\n\n More topics in\u00a0Mole Chemistry Molar Mass Molar Volume Molar Concentration\n NCERT Solutions NCERT Solutions NCERT Solutions CLASS 6 NCERT Solutions CLASS 7 NCERT Solutions CLASS 8 NCERT Solutions CLASS 9 NCERT Solutions CLASS 10 NCERT Solutions CLASS 11 NCERT Solutions CLASS 12\n Related Topics Chemistry Help Chemistry Tutor\n*AP and SAT are registered trademarks of the College Board.","date":"2019-05-23 09:40:46","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.39832550287246704, \"perplexity\": 1311.8259397816096}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-22\/segments\/1558232257197.14\/warc\/CC-MAIN-20190523083722-20190523105722-00106.warc.gz\"}"}
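The arithmetic in these examples reduces to the single formula n = m/M (or n = N/N_A). A minimal Python sketch using the rounded molar masses from the examples above (an illustration only, not part of the original tutorial):

    # Rounded molar masses in g/mol, as used in the examples above
    MOLAR_MASS = {"Mg": 24.0, "CaCO3": 100.0, "CO2": 44.0, "N2": 28.0, "H2O": 18.0}
    AVOGADRO = 6.02e23  # particles per mole (rounded)

    def moles_from_mass(substance, mass_g):
        """n = m / M"""
        return mass_g / MOLAR_MASS[substance]

    def moles_from_particles(n_particles):
        """n = N / N_A"""
        return n_particles / AVOGADRO

    print(moles_from_mass("Mg", 6.0))        # 0.25
    print(moles_from_mass("CO2", 11.0))      # 0.25
    print(moles_from_particles(3.01e22))     # 0.05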
Boscia angustifolia is a species of caper plant first described by Achille Richard. Boscia angustifolia belongs to the genus Boscia and the caper family (Capparaceae). In addition to the nominate form, there is also the subspecies B. a. corymbosa.
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,645
Q: Angular 2.3 Component Inheritance

I am trying to put some common code, which adds server errors to the form etc. and would be used in all my form components, in a base form component. I have simplified the example code to demonstrate my problem.

    import { Component } from '@angular/core';
    import { Validators } from '@angular/forms';
    import { FormGroup, FormBuilder } from '@angular/forms';

    export abstract class BaseFormComponent {
      form: FormGroup;
      fb = new FormBuilder();
      submitted = false;
      busy: boolean;

      onSubmit() {
        this.submitted = true;
        this.busy = true;
      }
    }

    @Component({
      selector: 'my-form',
      template: `
        <form [formGroup]="form" (ngSubmit)="onSubmit()" novalidate>
          <div class="form-group">
            <label>Name</label>
            <input formControlName="name" class="form-control">
          </div>
          <button type="submit" [disabled]="busy" class="btn btn-primary">Submit</button>
        </form>
      `
    })
    export class FormComponent extends BaseFormComponent {
      constructor() {
        super();
        this.createForm();
      }

      protected createForm() {
        this.form = this.fb.group({
          name: ['', Validators.compose([Validators.required, Validators.maxLength(10)])],
        });
      }
    }

For some reason I am getting errors that the template cannot find the base class properties:

    [21, 24]: The property "form" that you're trying to access does not exist in the class declaration.
    [21, 42]: The method "onSubmit" that you're trying to access does not exist in the class declaration.
    [26, 41]: The property "busy" that you're trying to access does not exist in the class declaration.

Comparing my code to this article, it seems pretty much identical, except I am not decorating the base class as a component. Decorating the base class does not seem to make any difference anyway.

I am using:

* typescript@2.3.4
* @angular/common@2.4.7
* webpack@2.2.1

A: To answer my own question: as @estus mentioned, there is nothing wrong with the code, and the issue is related to Codelyzer. The issue can be found here: https://github.com/mgechev/codelyzer/issues/191

A temporary fix is to simply ignore the warning by placing the following comment at the top of the file, until the issues between tslint and codelyzer have been sorted out:

    // tslint:disable:no-access-missing-member
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,647
In translation studies, localization is a process of cultural adaptation, of a semiotic-linguistic nature, of a product, device or text (typically, the translation of a website or a piece of software), intended to make it usable by the speakers of a given country (especially in view of local social and communicative differences). Owing to its interdisciplinary nature, this process also draws on design, engineering, marketing and, in general, every discipline needed to achieve such adaptation. The process may involve even profound modification of the initial text or product, in line with theories of linguistic acceptability and usability, and it requires the application of specialist techniques as well as cultural and translation competence (regarding both the source language and area, and those of the target). Many kinds of products can undergo this process: communication campaigns and advertising (television, print), films and television series, software (operating systems, applications and programs), websites, user manuals, medical and scientific publications, product labels, and so on.

Elements of application

Localization applies to whatever one intends to distribute in a particular foreign market. Some of the elements to which the localization process applies include:

Language
- alphabets and writing systems;
- numeral systems;
- writing direction (e.g., from left to right or from right to left).

Culture
- meanings and signifiers
- senses and double meanings (and, where necessary, irony and sarcasm)
- metatext and metadiscourse
- semantic maps
- usability and accessibility
- dates and times: formats, calendars and time zones
- currency and systems of weights and measures
- images and colours: questions of comprehensibility and cultural appropriateness
- names, titles, identity documents, passports, telephone numbers, addresses and international postal codes
- transport: bilingual signage, driving position (on the right in the United Kingdom and in Commonwealth countries)

The concept of locale

A "locale" is the group of parameters that defines the language, the country and any other specific variant. Normally, in particular in software or on websites, a locale identifier is formed from a language identifier (in lower case) and a region identifier (in upper case). The region identifier is particularly important, for example, for applications that make use of the time zone.

Related entries: translation studies, translation, acceptability (translation), computational linguistics, international economics.
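As an illustration of how locale identifiers are consumed in software, here is a minimal Python sketch (which locale strings are installed depends on the operating system, and the identifiers used here are assumed examples):

    import locale

    # A locale identifier combines a lowercase language code and an
    # uppercase region code, e.g. "en_US" vs. "en_GB", "pt_BR" vs. "pt_PT".
    try:
        locale.setlocale(locale.LC_ALL, "en_US.UTF-8")
        print(locale.currency(1234.5))    # e.g. $1234.50
        locale.setlocale(locale.LC_ALL, "de_DE.UTF-8")
        print(locale.currency(1234.5))    # e.g. 1234,50 with the euro sign
    except locale.Error:
        print("Requested locale not installed on this system")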
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,720
{"url":"https:\/\/math.stackexchange.com\/questions\/1017316\/understanding-the-expression-of-fractal-dimension-in-plants","text":"# Understanding the expression of fractal dimension in plants\n\nI just finished a small, demo exercise on fractal dimension of a plant by using MATLAB and box-count method. There were two different treatments. A plant treated with a specific hormone and a plant without treated with anything.\n\nAbove there are the initial pictures of these plants [treated and untreated].\n\nHere are the logarithmic plots for these two treatments\n\nFinally i got the derivatives of ln(N)\/ln(R) and as you can see :\n\nSo after all these photos the fractal dimension of treated plant was 1.8853 and the dimension of the untreated was 1.9322\n\nSo i would like to understand what these two number expressing. Can we get a conclusion about these two different treatments ?\n\nAlso in the derivative plots. What the (kind-of) lineal part of the function means ? Can we say that in these regions our image acts like a fractal?\n\nWhen the log-log plot of $n(r)$ is approximately linear, this means that the number of boxes covering the set grows as some power of $r$, that is $n(r)\\approx Cr^{-d}$. This is an indication of the set having box-counting (Minkowski) dimension equal to $d$. Yes, you can say that in this range of scales $r$ the image looks like a fractal.","date":"2019-09-17 16:19:39","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8526062965393066, \"perplexity\": 235.60193492353704}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-39\/segments\/1568514573098.0\/warc\/CC-MAIN-20190917161045-20190917183045-00050.warc.gz\"}"}
\section{Introduction} \label{sec:introduction} The numerical solution of Einstein's equations has made great progress in recent years. Orbits and mergers of binary systems of black holes and neutron stars are now routinely published by a number of research groups, using independent codes and methodologies~\cite{Pretorius:2005gq, Baker:2005vv, Campanelli:2005dd, Scheel:2008rj}. A number of important astrophysical phenomena associated with binary black hole mergers have been studied in some detail. In particular, the recoil of the merger remnant has been studied for a number of different initial data models~\cite{Gonzalez:2006md, Gonzalez:2007hi, Campanelli:2007ew, Campanelli:2007cg, Herrmann:2007ac, Koppitz:2007ev, Pollney:2007ss, Lousto:2008dn}, and its final mass and spin have been mapped out for fairly generic merger models involving spinning and unequal mass black holes~\cite{Rezzolla:2007xa, Rezzolla:2007rd, Rezzolla:2007rz, Tichy:2008, Lousto:2007mh, Barausse:2009uz}. Since these quantities are determined by the last few quasi-circular orbits before merger, they can be calculated to good approximation with fairly short evolutions, and with minimal influence of an artificial outer boundary. Of particular topical relevance, however, is the construction of long waveforms which can be used for gravitational-wave analysis of the binary~\cite{Reisswig:2009vc}, and also to construct a family of templates~\cite{Ajith:2007qp,Ajith:2007kx,Ajith:2007xh,Ajith:2009bn}, so as to inform and improve gravitational wave detection algorithms. Here the requirements are particularly challenging for numerical simulations, requiring waveforms which are accurate in phase and amplitude over multiple cycles to allow for an unambiguous matching to post-Newtonian waveforms at large separation. Some recent studies have shown very promising results in this direction for particular binary black hole models~\cite{Baker:2006ha, Buonanno06imr, Hannam:2007ik, Hannam:2007wf, Damour:2007vq, Buonanno:2007pf, Damour:2008te, Buonanno:2009qa, Boyle:2007ft}. However, they have also highlighted the problems associated with producing long waveforms of sufficient accuracy. First of all, for binaries with a larger separation, systematic errors associated with gravitational waveform extraction at a finite radius become more pronounced. Typically a number of extraction radii are used, and the results extrapolated to infinite radius (assuming such a consistent extrapolation exists given potential issues of gauge). In order to have some confidence in the results, the outermost ``extraction sphere'' needs to be at a large radius, say on the order of $150-200M$ (where $M$ is the mass of the system and sets the fiducial length scale). Even at this radius, the amplitude of the extrapolated waveform differs significantly from the measured waveform. Unfortunately, extracting at larger radii comes at a computational expense. One of the standard methods in use today is finite differencing in conjunction with ``mesh refinement'', in which the numerical resolution is chosen based on the length scale of the problem. A minimum number of discrete data points are required to resolve a feature of a given size accurately, which sets a limit on the minimum resolution which should be applied in a region. Thus, even with mesh refinement there is a limit on the coarseness of the grid which can be allowed in the wave-zone.
For a Cartesian grid, the number of computational points scales as $r^3$, so that requiring a sufficient resolution out to $200M$ already comes at significant expense, and increasing this distance further becomes impractical. An additional difficulty arises from the requirement that the outer boundary have minimal influence on the interior evolution, since it is (in all current binary black hole codes) an artificial boundary. This places an additional requirement on the size of the computational grids, so that even outside the wave-zone region where the physics is accurately resolved, it is conventional to place several even coarser grids. This is done in the knowledge that the physical variables can not be resolved in these regions, but the grids are helpful as a numerical buffer between the outer boundary and interior domain. Again, adding these outer zones comes at a computational expense. The boundaries with under-resolved regions also lead to unphysical reflections which can contaminate the solution. The problem of increasing the grid size can be significantly reduced if, rather than a Cartesian coordinate system, one uses a discretisation which has a radial coordinate. Then, for a fixed angular resolution, the number of points on the discrete grid increases simply as a linear function of $r$, rather than the $r^3$ of the Cartesian case. This has two advantages. The gravitational wave-zone has spherical topology, and therefore a numerical approximation would be most efficiently represented by employing a spherical grid. A further computational motivation comes from the fact that non-synchronous mesh-refinement (such as the Berger-Oliger algorithm) can greatly complicate the parallelisation of an evolution scheme, and thus having many levels of refinement clearly has an impact on the efficiency of large scale simulations. This will become particularly relevant for the coming generations of peta-scale machines which strongly emphasise parallel execution (possibly over several thousand cores) over single processor performance. The use of spherical-polar coordinates has largely been avoided in 3-dimensional general relativity due to potential problems associated with the coordinate singularity at the poles. Additionally, even if regularisation were simple, the inhomogeneous areal distribution of latitude-longitude grid points over the sphere makes spherical-polar coordinates sub-optimal. A number of alternative coordinate systems have been proposed and implemented for studies of black holes in 3D. The Pittsburgh null code avoids the problem of regularisation at the poles by implementing a 2D stereographic patch system~\cite{Bishop97b}. Cornell/Caltech have developed a multipatch system which has been used for long binary black hole evolutions~\cite{Scheel:2006gg, Scheel:2008rj}~\footnote{Multi-domain spectral methods have previously been applied to the problem of generating initial data for binaries in~\cite{Gourgoulhon:2000nn, Gourgoulhon02, Grandclement:2001ed}.}. This code, using spectral spatial differentiation, uses an intricate patch layout in order to reduce the overall discretisation error. The boundary treatment between patches is based on the transfer of characteristic variables.
A similar approach was implemented by the LSU group, for the case of finite differences with penalty boundary conditions~\cite{Schnetter:2006pg}, and used to successfully evolve single perturbed black holes on a fixed background~\cite{Dorband:2006gg}; evolutions of binary black hole systems with this method have also recently been attempted~\cite{Pazos:2009vb}. In this paper we describe a binary black hole evolution code based on adapted radial coordinates in the wave zone, for generic evolution systems. In particular, we demonstrate stable and accurate binary black hole evolutions using BSSNOK in conjunction with this coordinate system. The grids in the wave zone follow a prescription which was first used by Thornburg~\cite{Thornburg:2004dv}, in which six regular patches cover the sphere, and data at the boundaries of the patches are filled by interpolation. The six patch wave zone is coupled to an interior Cartesian code, which covers the domain in which the bodies move, and optionally allows for mesh refinement around each of the individual bodies. The resulting code has the advantages of making use of established techniques for moving puncture evolutions on Cartesian grids, while having excellent efficiency (and consequently accuracy) in the wave zone due to the use of adapted radially-oriented grids. In the following sections we detail the coordinate structures which we use. We then describe our Einstein evolution code, which is largely based on conventional techniques common to Cartesian puncture evolutions. Finally we perform evolutions of a binary black hole system in order to validate the code against known results, as well as demonstrate the ability to extract accurate waves at a large radius with comparatively low computational cost. \section{Spacetime Discretisation} \label{sec:multipatch} This section describes the implementation of a generic code infrastructure for evolving spacetimes which are covered by multiple overlapping grid patches. A key feature of our approach is its flexibility. It is not restricted to any particular formulation of the Einstein equations; the mechanism for passing data between patches (interpolation) is also formulation independent (though characteristic~\cite{Pfeiffer:2002wt} or penalty-patch boundaries~\cite{Carpenter1994a, Diener05b1, Pazos:2009vb} are also an option); the size, placement and local coordinates of individual patches are completely adaptable, provided that there is sufficient overlap between neighbours to transfer boundary data. Further, each patch is a locally Cartesian grid with the ability to perform mesh-refinement to better resolve localised steep gradients, if necessary. The particular application demonstrated in this paper is to provide a more efficient covering of the wave-zone of an isolated binary black hole inspiral. At the same time, we would like to take advantage of the fact that black hole evolutions via the ``moving puncture'' approach are well established as a simple and effective method for stably evolving black hole spacetimes~\cite{Baker:2005vv, Campanelli:2005dd}. By this method, gauge conditions are applied which prevent the evolved slices from reaching the curvature singularity, so that an artificial boundary is not required within the horizons~\cite{Hannam:2006vv}.
The usual approach is to discretise using Cartesian grids which cover the black holes with an appropriate resolution, without special treatment or boundary conditions for the black hole interiors, relying rather on the causal structure of the evolution system to prevent error modes from emerging~\cite{Brown:2009ki}. The Cartesian grids are extended to cover the wave zone (at reduced resolution for the sake of efficiency), extending to a cubical grid outer boundary where an artificial condition is applied. A principal difficulty faced by this method is that the discretisation is not well suited to model radial waves at large radii. In order to resolve the wave profile, a certain minimum radial resolution is required and must be maintained as the wave propagates to large radii. The angular resolution, however, can remain fixed -- if a wave is resolved at a certain angular resolution at small radii, then it is unlikely to develop significant angular features as it propagates to large distances from the isolated source. Cartesian grids with fixed spacing, however, resolve spheres with an angular resolution which scales according to $r^2$. Thus, to maintain a given required radial resolution, the angular directions become extremely over-resolved at large radii, and this comes at a large computational cost. Namely, for a Cartesian grid to extend in size or increase its resolution by a factor $n$, the cost in memory and number of computations per timestep increases by a factor of $n^3$, while for a radial grid with fixed angular resolution, the increase is linear in $n$~\footnote{Note that the Courant limit introduces an additional factor of $n$ in each case due to the requirement of a reduced timestep with increasing resolution.}. For the near-zone, in the neighbourhood of the orbits of the individual bodies, the geometrical situation is not as straightforward, since there is no clearly defined radial propagation direction between the bodies. If the local geometry is reasonably well known (for instance, the location of horizon surfaces), adapted coordinates can also be considered in this regime. The technical requirements of such coordinate systems can, however, be high. Since the bodies are moving, the grids must move with them, or dynamical gauges chosen such that the bodies remain in place relative to the numerical coordinates. Potential problems arise from the coordinate singularity if the grids are extended to $r=0$, as is the case with the standard puncture approach. Thus, in the near-zone, Cartesian coordinates can provide significant simplification to the overall infrastructure requirements, while the previously mentioned drawbacks of Cartesian coordinates are less prevalent, as it is useful to have homogeneous resolution in each direction in situations where there is no obvious symmetry. The evolution code which we have constructed for the purpose of modelling waveforms from an isolated system is based on a hybrid approach, involving a Cartesian mesh-refined region covering the near zone in which the bodies orbit, and a set of adapted radial grids which efficiently cover the wave zone. The overall structure is illustrated in Fig.~\ref{fig:7-patch-with-ghost-zones} (top), which shows an equatorial slice of the numerical grid. Computations on each local patch are carried out in a globally Cartesian coordinate system.
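As a rough illustration of this scaling argument, the following sketch (in Python; the grid parameters are hypothetical, chosen only for the purpose of the example) compares the number of points required to cover a wave zone of radius $R$ with a uniform Cartesian grid against a set of six radially-oriented patches with fixed angular resolution:
\begin{verbatim}
# Rough point-count comparison for covering a wave zone of
# radius R at radial resolution dr (hypothetical parameters).
R, dr = 1000.0, 0.64   # outer radius and grid spacing, in units of M
n_ang = 31             # angular points per direction on each patch

# Uniform Cartesian cube enclosing the sphere of radius R:
n_cartesian = (2 * R / dr) ** 3

# Six radially-oriented patches: fixed angular grid, linear in r:
n_radial = 6 * n_ang**2 * (R / dr)

print(f"Cartesian: {n_cartesian:.2e} points")   # ~3e10
print(f"6-patch:   {n_radial:.2e} points")      # ~9e6
\end{verbatim}
At this radius the two estimates differ by more than three orders of magnitude, which is the essential motivation for the hybrid grid structure described below.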
In the particular implementation considered here, the grids overlap by some distance so that data at the boundaries between each local coordinate patch can be communicated by interpolation from neighbouring patches. The resulting code is both efficient and simple in structure, and is able to take advantage of well established methods for evolving moving puncture black holes. If suitable interpolation methods are used, then such a system can also be used for solutions with discontinuities and shocks as are present in hydrodynamics. The code has been implemented within the \texttt{Cactus} framework~\cite{Goodale02a, cactusweb1} via extensions to the \texttt{Carpet} driver~\cite{Schnetter-etal-03b, Schnetter06a, carpetweb}, which handles parallelisation via domain decomposition of grids over processors, as well as providing the required interpolation operators for boundary communication and analysis tools. \subsection{Coordinate systems} \label{sec:coords} The configuration displayed in Fig.~\ref{fig:7-patch-with-ghost-zones} consists of seven local coordinate patches: an interior Cartesian grid, and six outer patches corresponding to the faces of the interior cube. Each patch consists of a uniformly spaced (in local coordinates) grid which can be refined (though in practice we will only use this feature for the interior grid). The outer patches have a local coordinate system $(\rho,\sigma, R)$ corresponding to the ``inflated cube'' coordinates which have previously been used in relativity for single black hole evolutions~\cite{Thornburg:2004dv}, and are displayed in the lower plot of Fig.~\ref{fig:7-patch-with-ghost-zones}. The local angular coordinates $(\rho,\sigma)$ range over $(-\pi/4,+\pi/4)\times(-\pi/4,+\pi/4)$ and can be related to global angular coordinates $(\mu,\nu,\phi)$ which are given by \begin{subequations} \begin{align} \mu \equiv \text{rotation angle about the x-axis} &= \arctan (y/z), \\ \nu \equiv \text{rotation angle about the y-axis} &= \arctan (x/z), \\ \phi \equiv \text{rotation angle about the z-axis} &= \arctan (y/x). \end{align} \end{subequations} For example, on the $+z$ patch, the mapping between the local $(\rho,\sigma,R)$ and Cartesian $(x,y,z)$ coordinates is given by: \begin{subequations} \begin{align} \rho \equiv \nu &= \arctan (x/z), \\ \sigma \equiv \mu &= \arctan (y/z), \\ R &= f(r), \end{align} \end{subequations} with appropriate rotations for each of the other cube faces, and where $r=\sqrt{x^2 + y^2 + z^2}$. As emphasised by Thornburg~\cite{Thornburg:2004dv}, in addition to avoiding pathologies associated with the axis of standard spherical polar coordinates, this choice of local coordinates has a number of advantages. In particular, the angular coordinates on neighbouring patches align so that interpolation is only 1-dimensional, along a line parallel to the face of the patch. This potentially improves the efficiency of the interpolation operation as well as the accuracy. The coordinates also cover the sphere more uniformly than, say, a stereographic 2-patch system. \begin{figure}[ht!] \begin{center} \includegraphics[width=\linewidth,trim=220 100 60 8,clip=true] {figures/seven-amr0000-with-lines} \includegraphics[width=\linewidth,trim=100 0 110 0,clip=true] {figures/6patchsphere.png} \end{center} \vspace{-2cm} \caption{A schematic view of the $z=0$ slice of the grid setup that is used. The upper plot shows the central Cartesian grid surrounded by six ``inflated-cube'' patches (the four equatorial patches are shown, shaded).
The boundaries of the nominal grids owned by each patch are indicated by thick lines. The lower plot shows an $r=\textrm{constant}$ surface of the exterior patches, indicating their local coordinate lines.} \label{fig:7-patch-with-ghost-zones} \end{figure} The local radial coordinate, $R$, is determined as a function of the global coordinate radius, $r$. We can use this degree of coordinate freedom to increase the physical (global) extent of the wave-zone grids, at the cost of some spatial resolution. In practice, we use a function of the form \begin{subequations} \begin{equation} f(r) = A (r - r_0) + B \sqrt{1+(r-r_0)^2/\epsilon}, \end{equation} with \begin{equation} R = f(r) - f(0). \end{equation} \label{eq:radial_stretch} \end{subequations} in order to transition between two almost constant resolutions (determined by the parameters $A$ and $B$) over a region whose width is determined by $\epsilon$, centred at $r_0$. The effect of the radial transformation is illustrated in Fig.~\ref{fig:radial_stretch}. The coordinate $R$ is a nearly constant rescaling of $r$ at small and large radii. The change in the scale factor is largely confined to a transition region. Note that since the analysis tools apply the same global derivative operators (described below) as are used for the evolution, it is possible to do analysis (e.g., waveform measurement, horizon finding) within regions where the radial coordinate is non-uniform. The regions of near-constant resolution are, however, useful in order to make comparisons of measurements at different radii without the additional complication of varying numerical error due to the underlying grid spacing. \begin{figure} \begin{center} \includegraphics[width=\linewidth,trim=0 50 0 30]{figures/radial_stretch} \end{center} \caption{The local radial coordinate, $R$ (solid line), can be stretched as a function of the global coordinate, $r$, in order to increase the effective size of the grid. The function specified by Eqs.~\eqref{eq:radial_stretch} transitions between two almost constant radial resolutions over a distance $\epsilon$ centred at $r_0$.} \label{fig:radial_stretch} \end{figure} Data on each patch are evaluated at grid-points which are placed at uniformly spaced points of a Cartesian grid. Thus, local derivatives can be calculated via standard finite difference techniques. These are then transformed to a common underlying Cartesian coordinate system by applying an appropriate Jacobian which relates the local to global coordinates. That is, the global (Cartesian) coordinates, $x_i$, are related to the local coordinates, $a_i$, by \begin{equation} x_i=x_i(a_j), \qquad i,j=0,1,2.
\end{equation} and derivatives, $\ensuremath{\partial}/\ensuremath{\partial} a_i$, of fields are determined using finite differences in the regularly spaced $a_i$ coordinates, which are then transformed using \begin{subequations} \begin{align} \frac{\ensuremath{\partial}}{\ensuremath{\partial} x_i} &= \left(\frac{\ensuremath{\partial} a_j}{\ensuremath{\partial} x_i}\right)\frac{\ensuremath{\partial}}{\ensuremath{\partial} a_j}, \\ \frac{\ensuremath{\partial}^2}{\ensuremath{\partial} x_i \ensuremath{\partial} x_j} &= \left(\frac{\ensuremath{\partial}^2 a_k}{\ensuremath{\partial} x_i \ensuremath{\partial} x_j}\right) \frac{\ensuremath{\partial}}{\ensuremath{\partial} a_k} + \left(\frac{\ensuremath{\partial} a_k}{\ensuremath{\partial} x_i} \frac{\ensuremath{\partial} a_l}{\ensuremath{\partial} x_j}\right) \frac{\ensuremath{\partial}^2}{\ensuremath{\partial} a_k \ensuremath{\partial} a_l}, \end{align} \end{subequations} in order to determine their values in the global frame. We store and evaluate tensor components and their evolution equations in the common global frame, so that there is no need to apply transformations when relating data across patch boundaries. In addition to the obvious simplification of the inter-patch boundary treatment, this has a number of other advantages, particularly when it comes to analysis tools (surface finding, gravitational wave measurements, visualisation) which may reference data on multiple patches. Since the data is represented in the common global basis, these tools do not need to know anything about the local grid structures or coordinates. \subsection{Inter-patch interpolation} \label{sec:interpolation} Data is communicated between patches by interpolating onto overlapping points. Each patch, $p$, is responsible for determining the numerical solution for a region of the spacetime. The boundaries of these patches can overlap neighbouring patches, $q$ (and in fact must do so for the case of the interpolating boundaries considered here), creating regions of the spacetime which are covered by multiple patches. We define three sets of points on a patch. The \emph{nominal} regions, $\mathcal{N}_p$, contain the points where the numerical solution is to be determined. The nominal regions of the patches do not overlap, $\bigcap_p\mathcal{N}_p=\emptyset$, so that if data is required at any point in the spacetime, it can be obtained without ambiguity by referencing the single patch in whose nominal region it resides. A patch, $p$, is bounded by a layer of \emph{ghost} points, $\mathcal{G}_p$, which overlap the nominal zones of neighbouring patches, $q$, $\mathcal{G}_p\cap\bigcup_q\mathcal{N}_q=\mathcal{G}_p$, and are filled by interpolation. (These points are conceptually similar to the inter-processor ghost-zones used by domain decomposition parallelisation algorithms in order to divide grids over processors.) The size of these regions is determined by the width of the finite difference stencil to be used in finite differencing the evolution equations on the nominal grid.
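As an aside before completing the description of the boundary point sets, the coordinate construction of Sec.~\ref{sec:coords} can be made concrete in a minimal sketch. The following Python fragment is purely illustrative; it is not the \texttt{Carpet} implementation, and the parameter values are hypothetical.
\begin{verbatim}
import numpy as np

def radial_stretch(r, A=1.0, B=4.0, r0=1000.0, eps=500.0):
    """Local radial coordinate R = f(r) - f(0), as in the radial
    stretch above; the asymptotic slopes are A -/+ B/sqrt(eps) on
    either side of r0.  Parameter values here are illustrative."""
    f = lambda s: A * (s - r0) + B * np.sqrt(1.0 + (s - r0)**2 / eps)
    return f(r) - f(0.0)

def plus_z_local_coords(x, y, z):
    """Local (rho, sigma, R) coordinates on the +z patch for a
    global Cartesian point (x, y, z) with z > 0."""
    rho = np.arctan2(x, z)      # = arctan(x/z) for z > 0
    sigma = np.arctan2(y, z)    # = arctan(y/z) for z > 0
    r = np.sqrt(x**2 + y**2 + z**2)
    return rho, sigma, radial_stretch(r)

print(plus_z_local_coords(10.0, 20.0, 100.0))
\end{verbatim}
Derivatives evaluated by finite differences in these uniform local coordinates would then be mapped to the global frame using the Jacobians given above.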
Finally, an additional layer of \emph{overlap} points, $\mathcal{O}_q$, is required: i) to ensure that the set of stencil points, $\mathcal{S}_q\subset\mathcal{O}_q\cup\mathcal{N}_q$, used to interpolate to the ghost zone does not itself originate from the ghost zone of the interpolating patch, $\mathcal{S}_q\cap\mathcal{G}_q=\emptyset$, $\mathcal{O}_q\cap\bigcup_p\mathcal{N}_p=\mathcal{O}_q$; and ii) to compensate for any difference in the grid spacing between the local coordinates on the two patches. An illustration of the scheme in one dimension is provided in Fig.~\ref{fig:overlap}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/GZ_interp} \caption{Schematic of interpolating patch boundaries in 1-dimension, assuming 4-point finite difference and interpolation stencils. Points in the nominal zones, $\mathcal{N}_{p,q}$, are indicated by filled circles, points in ghost zones, $\mathcal{G}_{p,q}$, by open squares, and points in overlap zones, $\mathcal{O}_{p,q}$, by closed squares. The vertical dotted line demarcates the boundary between nominal zones on each patch. Ghost points on patch $p$ are evaluated by centred interpolation operations from points in $\mathcal{S}_q$ on the overlapping patch (arrows) and \emph{vice versa}.} \label{fig:overlap} \end{figure} Note that points in $\bigcup_q\mathcal{O}_q\subset\bigcup_p\mathcal{N}_p$ are not interpolated, but rather are evolved in the same way as nominal grid points within $\bigcup_p\mathcal{N}_p$. That is, in these regions points on each grid are evolved independently, and the solution there is in principle multi-valued. However, since the union of the sets of nominal points on each patch $\bigcup_p\mathcal{N}_p$ uniquely and unambiguously covers the entire simulation domain, \textit{i.e.\ } $\bigcap_p\mathcal{N}_p=\emptyset$, and since the overlap regions are a subset of the nominal grid, if data is required at a point within these overlap zones, there is exactly one patch owning this point on its nominal grid, and it will be returned uniquely from this patch. The differences between evolved field values evaluated in these overlap points converge away with the finite difference order of the evolution scheme. The use of additional overlap points makes this inter-patch interpolation algorithm somewhat simpler than the one implemented by Thornburg in~\cite{Thornburg:2004dv}. That algorithm required inter-patch boundary conditions to be applied in a specific order to ensure that all interpolation stencils are evaluated without using undefined grid points, and required off-centring interpolation stencils under certain circumstances, which is not necessary when overlap points are added. It also relies on the particular property of the inflated-cube coordinates which ensured that the ghost-zones could be filled using 1-dimensional interpolation in a direction orthogonal to the boundary. This property would be non-trivial (and often impossible) to generalise to match arbitrary patch boundaries, such as that between the Cartesian and radially oriented grids of Fig.~\ref{fig:7-patch-with-ghost-zones}. Another significant difference between Thornburg's approach and the approach presented here is that the former stores tensor components in the patch-local frame, while we store them in the global coordinate frame. Evaluating components in the patch-local frame requires a basis transformation while interpolating.
This is further complicated in the case of non-tensorial quantities (such as the $\tilde\Gamma^i$ of the BSSNOK formulation) which have quite complicated basis transformation rules involving spatial derivatives. Instead, we store tensor components in the global coordinate frame, which requires no basis transformation during inter-patch interpolations. The number of ghost points in $\mathcal{G}_p$ can be reduced using finite difference stencils which become lop-sided towards the boundaries of the patch, which may provide an important optimisation since interpolation between grids can be expensive, particularly if processor communication is involved. However, this tends to be at the cost of increased numerical error in the finite difference operations towards the grid boundaries. We have generally found it preferable to use centred stencils throughout the nominal, $\mathcal{N}_p$, and overlap, $\mathcal{O}_p$, zones and have applied certain optimisations to the interpolation operators as described below. Another optimisation can be achieved by using lower order interpolation so that it is possible to reduce the number of overlapping points in $\mathcal{O}_p$. The interpolation scheme for evaluating data in ghost zones is based on Lagrange polynomials using data from the overlapping patch. In one dimension, the Lagrange interpolation polynomial can be written as \begin{subequations} \begin{equation} \mathcal{L}_x[\phi](x) = \sum_{i}^{N} b_{i}(x)\, \phi_{i}\,, \end{equation} where the coefficients are \begin{equation} b_{i}(x) = \prod_{k\neq i}\frac{(x-x_{k})} {(x_{i}-x_{k})}\,. \label{eq:interp-coeff} \end{equation} \end{subequations} In these expressions, $x\in\mathcal{G}_p$ is the coordinate of the interpolation point and $\phi_i\in\mathcal{S}_q\subset\mathcal{N}_q\cup\mathcal{O}_q$ are the values at grid-points in the interpolation molecule surrounding $x$. The number of grid-points in the interpolation molecule, $N$, determines the interpolation order, and interpolation of $n$-th order accuracy is given by $N=n+1$ stencil points in the molecule. For interpolation in $d$-dimensions, the interpolation polynomial can be constructed as a tensor product of 1-dimensional Lagrange interpolation polynomials along coordinate directions, $\bold{x}=(x^1,...,x^d)$: \begin{align} \mathcal{L}[\phi](\bold{x}) & = \mathcal{L}_{x^1}[\phi](x^1)\otimes\ldots\otimes\mathcal{L}_{x^d} [\phi](x^d) \nonumber \\ & = \left(\sum_{i}^{N} b_{i}(x^1)\, \phi_{i}\,,\right) \cdots\left(\sum_{j}^{N} c_{j}(x^d)\,\phi_{j}\right)\,. \end{align} Therefore, for $d$-dimensional interpolation of order $n$, one has to determine $N^d$ neighbouring stencil points and associated interpolation coefficients, Eq.~\eqref{eq:interp-coeff}, \emph{for each} point in the ghost-zone of a given patch. Most generally, full 3-dimensional interpolation is required, though in particular cases coordinates between two patches can be constructed such that they align locally so that only 1-dimensional interpolation is needed. This is, for instance, the case for the overlap region between the inflated-cube spherical patches used here (see Fig.~\ref{fig:7-patch-with-ghost-zones}). We have optimised the current code to automatically take advantage of this. In order to interpolate to a point whose coordinates $a^p_i$ are given in the basis of patch $p$, we need to know the patch owning the nominal region containing this point.
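As a minimal illustration of this interpolation (a sketch assuming a 1-dimensional stencil of $N$ uniformly spaced nodes; it is not the optimised \texttt{Carpet} operator), the coefficients of Eq.~\eqref{eq:interp-coeff} and the resulting interpolant can be written as:
\begin{verbatim}
import numpy as np

def lagrange_coefficients(x, xs):
    """Coefficients b_i(x) = prod_{k != i} (x - x_k) / (x_i - x_k)
    for interpolation to the point x from stencil nodes xs."""
    b = np.ones(len(xs))
    for i, xi in enumerate(xs):
        for k, xk in enumerate(xs):
            if k != i:
                b[i] *= (x - xk) / (xi - xk)
    return b

def interpolate_1d(x, xs, phis):
    """N-point Lagrange interpolation of samples phis at nodes xs."""
    return np.dot(lagrange_coefficients(x, xs), phis)

# 4-point stencil on a uniform grid, as in the figure above; the
# result is exact for cubic data: 1.5**3 = 3.375.
xs = np.array([0.0, 1.0, 2.0, 3.0])
print(interpolate_1d(1.5, xs, xs**3))
\end{verbatim}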
To find this patch, we first convert $a^p_i$ to the global coordinate basis, $x_i$, then determine which patch $q$ owns the corresponding nominal region, $\mathcal{N}_q$, and then convert $x_i$ to the local coordinate basis of this patch, $a^q_i$. By construction, patch $q$ has sufficient overlap points to evaluate the interpolation stencil there: \begin{subequations} \begin{align} x_i & := \textrm{local-to-global}_p (a^p_i)\,, \\ q & := \textrm{owning-patch} (x_i)\,, \\ a^q_i & := \textrm{global-to-local}_q (x_i) \,. \end{align} \end{subequations} The three operations ``$\textrm{local-to-global}$'', ``$\textrm{owning-patch}$'', and ``$\textrm{global-to-local}$'' depend on the patch system and their local coordinate systems. We can then find the points of patch $q$ that are closest to the interpolation point $a^q_i$ in the local coordinates of this patch. In order to find these points, we exploit the uniformity of the grid in local coordinates. The grid indices of the stencil points in a given direction are determined via \begin{equation} i \in\left\{ \text{floor}(j+k)\; \middle| \; j=\frac{x-x_0}{\Delta x},\; k=-\frac{n}{2},\cdots,\frac{n}{2} \right\}\,, \end{equation} where $x_0$ is the origin of the local grid, $n$ is the interpolation order, and ``floor'' denotes rounding downwards to the nearest integer. It remains to determine the refinement level which contains the region surrounding the interpolation point, as well as the processor that owns this part of the grid. For this purpose, an efficient tree-search algorithm has been implemented. In this algorithm, the individual patches and refinement levels are defined as ``super-regions'', i.e., bounding boxes that delineate the global grid extent before processor decomposition. Each of these super-regions is recursively split into smaller regions. The leaves of the resulting tree structure represent the individual local components of the processor decomposition. Locating a grid point in this tree structure requires $O(\log n)$ operations on $n$ processors, whereas a linear search (that would be necessary without a tree structure) would require $O(n)$ operations. While the corresponding tree structure is generic, the actual algorithm used in \texttt{Carpet} splits the domain into a $kd$-tree of depth $d$ in $d=3$ dimensions. That is, the domain is first split into $k$ sub-domains in the $x$ direction, each of these sub-domains is then independently split into several in the $y$ direction, and each of these is then split in the $z$ direction. This leads to cuboid sub-domains for each processor, where the sub-domains do not overlap, and where each sub-domain can have a different shape. \texttt{Carpet} balances the load so that each processor receives approximately the same number of grid points, while keeping the sub-domains' shapes as close to a cube as possible. Our implementation pre-calculates and stores most of the above information when the grid structure is set up, saving a significant amount of time during interpolation. In particular, the following are stored: \begin{itemize} \item For each ghost-point, the source patch (where the interpolation is performed), and the local coordinates on this patch; \item For each ghost-point, the interpolation stencil coefficients (\ref{eq:interp-coeff}); \item For each processor, the communication schedule specifying which interpolation points need to be sent to which other processors.
\end{itemize} When the grid structure changes, for example, when a mesh-refinement grid is moved or resized, these quantities have to be recalculated. Altogether, the inter-patch interpolation therefore consists of applying processor-local interpolation stencils, sending the results to other processors, receiving results from other processors, and storing these results in the local ghost-points. These are all operations requiring no look-up in complex data structures, and they consequently execute very efficiently on modern hardware. \subsection{Finite differencing} Spatial derivatives are computed using standard finite difference stencils, which have currently been implemented up to 8th-order~\cite{Diener05b1}. The stencils are centred, except for the terms corresponding to an advection by the shift vector, of the form $\beta^i\partial_i u$ (see Sec.~\ref{sec:evolution}, below). These derivatives are calculated using an ``upwind'' stencil which is shifted by one point in the direction of the shift, and of the same order. We find that these upwind stencils provide a significant increase in the numerical accuracy of the puncture motion at a given resolution (see Appendix~\ref{sec:upwind}). The particular stencils which we use can be generated via a recursion algorithm, as outlined in~\cite{Fornberg-1988}. On each patch we allow the local grids to be refined in order to increase the accuracy of computations in localised regions. For the application of the evolution of an isolated binary considered here, we only refine the central Cartesian grid in the neighbourhood of the bodies. This is done using standard $2:1$ Berger-Oliger mesh refinement techniques via the \texttt{Carpet} infrastructure~\cite{Schnetter-etal-03b, Schnetter06a, carpetweb}. The time step for the outer patches is taken to be the same as the coarse grid step of the interior patch, so that no time-interpolation is required at inter-patch boundaries. Time integration is carried out using standard method-of-lines integrators. We find that for the time resolution we are using, a 4th-order Runge-Kutta (RK4) method provides a good compromise between sufficient accuracy and a low memory footprint. We set the time resolution of the outer grids according to the timestep of the coarsest Cartesian grid, which is limited by the Courant condition at the specified spatial resolution. By placing the Cartesian-spherical boundary at a small radius (and thus extending to finer Cartesian grids) we attain a high time resolution in the wave zone, potentially important for determining higher harmonics. \subsection{Surface integration and harmonic decomposition} A number of quantities of physical interest are measured by projecting them onto closed surfaces surrounding the source. In particular, gravitational wave measurements rely on computations on constant coordinate spheres $S^2$, parameterized by local spherical-polar coordinates $(\theta, \phi)$ with constant coordinate radius $r$. In principle, it would be possible to construct coordinates on these 2-dimensional spheres which correspond to the underlying grid coordinates of the evolution, for instance as portrayed in the lower figure of Fig.~\ref{fig:7-patch-with-ghost-zones}. In this case, data can be mapped directly onto the spheres. More generally, however, interpolation can be used to obtain data at points on the measurement spheres, according to the procedure outlined in Sec.~\ref{sec:interpolation}, above.
For the purpose of analysis, it is often convenient to decompose the data on $S^2$ in terms of (spin-weighted) spherical harmonic modes, \begin{equation} A_{\ell m}=\int d\Omega\, \sqrt{g}\, A(\Omega)\, {}_s\bar{Y}_{\ell m}(\Omega)\,, \label{eq:harmonic} \end{equation} where $g$ is the determinant of the surface metric and $\Omega$ denotes the angular coordinates. In standard spherical-polar coordinates $(\theta, \phi)$, \begin{equation} \sqrt{g} = \sin\theta\,. \end{equation} The integral, Eq.~\eqref{eq:harmonic}, is evaluated numerically as follows. In the spherical polar case, we can take advantage of a highly accurate Gauss quadrature scheme which is exact for polynomials of order up to $2N-1$, where $N$ is the number of gridpoints. More specifically, we use Gauss-Chebyshev quadrature. The scheme can be written out as \begin{equation} \int d\Omega f(\Omega) = \sum_i^{N_\theta}\sum_j^{N_\phi} f_{ij} w_{j} + \mathcal{O}(N_\theta)\,, \label{eq:poly-int} \end{equation} where $N_\theta$ and $N_\phi$ are the number of angular gridpoints and $w_j$ are weight functions~\cite{Driscoll94, Bateman55}, \begin{eqnarray} w_{j} &=& \frac{2\pi}{N_\phi} \frac{1}{N_\theta\sqrt{2\pi}} \sum_{l=0}^{N_\theta/2-1}\frac{1}{2l+1}\sin\left([2l+1]\frac{\pi j}{N_\theta}\right)\,, \nonumber \\ & & \qquad j=0,...,N_\theta-1\,. \end{eqnarray} In our implementation, the weight functions are pre-calculated for fast surface integration. \section{Evolution system} \label{sec:evolution} We evolve the spacetime using a variant of the ``BSSNOK'' evolution system~\cite{Nakamura87, Shibata95, Baumgarte99, Alcubierre99d} and a specific set of gauges \cite{Alcubierre02a, vanMeter:2006vi}, which have been shown to be effective at treating the coordinate singularities of Bowen-York initial data. The geometry of a spacelike slice $\Sigma$ (with timelike normal, $n^\alpha$) is determined by its intrinsic 3-metric, $\gamma_{ab}$, and extrinsic curvature, $K_{ab}$, as well as a scalar lapse function, $\alpha$, and shift vector, $\beta^a$, which determine the coordinate propagation. The standard BSSNOK system defines modified variables by performing a conformal transformation on the 3-metric, \begin{equation} \label{eq:def_g} \phi := \frac{1}{12}\ln \det \gamma_{ab}, \qquad \tilde\gamma_{ab} := e^{-4\phi} \gamma_{ab}, \end{equation} subject to the constraint \begin{equation} \label{eq:phi_constraint} \det\tilde\gamma_{ab} = 1, \end{equation} and by removing the trace of $K_{ab}$, \begin{align} \label{eq:def_K} K & := \tr K_{ij} = \gamma^{ij} K_{ij}, \\ \tilde A_{ij} & := e^{-4\phi} (K_{ij} - \frac{1}{3}\gamma_{ij} K), \end{align} with the constraint \begin{equation} \label{eq:A_constraint} \tilde A := \tilde\gamma^{ij}\tilde A_{ij} = 0. \end{equation} Additionally, three new variables are introduced, defined in terms of the Christoffel symbols of $\tilde{\gamma}_{ab}$ by \begin{equation} \label{eq:def_Gamma} \tilde\Gamma^a := \tilde\gamma^{ij}\tilde\Gamma^a_{ij}. \end{equation} In principle the $\tilde\Gamma^a$ can be determined from the $\tilde\gamma_{ab}$ on a slice; however, their introduction is key to establishing a strongly hyperbolic (and thus stable) evolution system. In practice, only the constraint Eq.~(\ref{eq:A_constraint}) is enforced during evolution~\cite{Alcubierre99e}, while Eq.~(\ref{eq:phi_constraint}) and Eq.~(\ref{eq:def_Gamma}) are simply monitored as indicators of numerical error.
Thus, the traditional BSSNOK prescription evolves the variables \begin{equation} \phi,\quad \tilde\gamma_{ab},\quad K,\quad \tilde A_{ab},\quad \tilde\Gamma^a, \end{equation} according to evolution equations which have been written down a number of times (see the reviews~\cite{Baumgarte:2002jm, Alcubierre:2008}). In the context of puncture evolutions, it has been noted that alternative scalings of the conformal factor may exhibit better numerical behaviour in the neighbourhood of the puncture as compared with $\phi$. In particular, a variable of the form \begin{equation} \hat{\phi}_\kappa := (\det\gamma_{ab})^{-1/\kappa}, \end{equation} has been suggested~\cite{Campanelli:2005dd, Marronetti:2007wz}. In~\cite{Campanelli:2005dd}, it is noted that certain singular terms in the evolution equations for Bowen-York initial data can be corrected using $\chi := \hat{\phi}_3$. Alternatively,~\cite{Marronetti:2007wz} notes that $W := \hat{\phi}_6$ has the additional benefit of ensuring $\gamma$ remains positive, a property which needs to be explicitly enforced with $\chi$. In terms of $\hat{\phi}_\kappa$, the BSSNOK evolution equations become: \begin{subequations} \begin{align} \ensuremath{\partial}_t \hat{\phi}_\kappa = & \frac{2}{\kappa} \hat{\phi}_\kappa \alpha K + \beta^i\ensuremath{\partial}_i\hat{\phi}_\kappa - \frac{2}{\kappa} \hat{\phi}_\kappa\ensuremath{\partial}_i\beta^i, \label{eq:evo_phi} \\ \ensuremath{\partial}_t \tilde\gamma_{ab} = & -2 \alpha \tilde A_{ab} + \beta^i\ensuremath{\partial}_i\tilde\gamma_{ab} + 2 \tilde\gamma_{i(a}\ensuremath{\partial}_{b)}\beta^i \label{eq:evo_tg} \\ & - \frac{2}{3}\tilde\gamma_{ab} \ensuremath{\partial}_i\beta^i, \nonumber \\ \ensuremath{\partial}_t K = & -D_iD^i \alpha + \alpha (A_{ij}A^{ij} + \frac{1}{3} K^2) + \beta^i\ensuremath{\partial}_iK, \label{eq:evo_K} \\ \ensuremath{\partial}_t \tilde A_{ab} = & (\hat{\phi}_\kappa)^{\kappa/3} (-D_aD_b\alpha + \alpha R_{ab})^\text{TF} + \alpha (K \tilde A_{ab} - 2 \tilde A_{ai}\tilde A^i{}_b) \label{eq:evo_tA} \\ & + \beta^i\ensuremath{\partial}_i\tilde A_{ab} + 2\tilde A_{i(a}\ensuremath{\partial}_{b)}\beta^i - \frac{2}{3}\tilde A_{ab}\ensuremath{\partial}_i\beta^i, \nonumber \\ \ensuremath{\partial}_t \tilde\Gamma^a = & \tilde\gamma^{ij}\ensuremath{\partial}_i\ensuremath{\partial}_j\beta^a + \frac{1}{3} \tilde\gamma^{ai}\ensuremath{\partial}_i\ensuremath{\partial}_j\beta^j + \beta^i\ensuremath{\partial}_i\tilde\Gamma^a - \tilde\Gamma^i\ensuremath{\partial}_i\beta^a \label{eq:evo_tG} \\ & + \frac{2}{3}\tilde\Gamma^a \ensuremath{\partial}_i\beta^i - 2\tilde A^{ai} \ensuremath{\partial}_i\alpha \nonumber \\ & + 2\alpha (\tilde\Gamma^a_{ij} \tilde A^{ij} - \frac{\kappa}{2} \tilde A^{ai}\frac{\ensuremath{\partial}_i\hat{\phi}_\kappa}{\hat{\phi}_\kappa} - \frac{2}{3} \tilde\gamma^{ai}\ensuremath{\partial}_iK), \nonumber \end{align} \label{eq:bssn} \end{subequations} where $D_a$ is the covariant derivative determined by $\gamma_{ab}$, and ``TF'' indicates that the trace-free part of the bracketed term is used. We have implemented the traditional $\phi$ form of the BSSNOK evolution equations, as well as the $\chi$ and $W$ variants implicit in the evolution system, Eqs.~\eqref{eq:bssn}. All three evolution systems produce stable evolutions of binary black holes, though the $\chi$ variant requires some special treatment if, due to numerical error particularly in the neighbourhood of the punctures, $\hat{\phi}_3\le 0$~\cite{Brugmann:2008zz}.
Generally we find that the advection of the puncture (and thus the phase accuracy of the simulation) exhibits lower numerical error when using the $\chi$ and $W$ variants (see Appendix~\ref{sec:conformal_factor}). Convergence properties of physical variables (such as measured gravitational waves, or constraints measured outside of the horizons), however, are not affected by the choice of conformal variable. The Einstein equations are completed by a set of four constraints which must be satisfied on each spacelike slice: \begin{subequations} \begin{align} \label{eq:einstein_ham_constraint} \mathcal{H} &\equiv R^{(3)} + K^2 - K_{ij} K^{ij} = 0, \\ \label{eq:einstein_mom_constraints} \mathcal{M}^a &\equiv D_i(K^{ai} - \gamma^{ai}K) = 0. \end{align} \end{subequations} Again, we do not actively enforce these equations, but rather monitor their magnitude in order to determine whether our numerical solution is deviating from a solution to the Einstein equations. The gauge quantities, $\alpha$ and $\beta^a$, are evolved using the prescriptions that have been commonly applied to BSSNOK black hole, and particularly puncture, evolutions. For the lapse, we evolve according to the ``$1+\log$'' condition~\cite{Bona95b}, \begin{equation} \partial_t \alpha - \beta^i\partial_i\alpha = -2 \alpha K, \label{eq:one_plus_log} \end{equation} while the shift is evolved using the hyperbolic ``$\tilde\Gamma$-driver'' equation~\cite{Alcubierre02a}, \begin{subequations} \begin{align} \partial_t \beta^a - \beta^i \partial_i \beta^a & = \frac{3}{4} \alpha B^a\,, \\ \partial_t B^a - \beta^i \partial_i B^a & = \partial_t \tilde\Gamma^a - \beta^i \partial_i \tilde\Gamma^a - \eta B^a\,, \end{align} \end{subequations} where $\eta$ is a parameter which acts as a (mass dependent) damping coefficient, and is typically set to values on the order of unity for the simulations carried out here. The advective terms in these equations were not present in the original definitions of~\cite{Alcubierre02a}, where co-moving coordinates were used, but have been added following the experience of more recent studies using moving punctures~\cite{Baker:2005vv, vanMeter:2006vi}. \subsection{Wave extraction} The Newman-Penrose formalism \cite{Newman62a} provides a convenient representation for a number of radiation related quantities as spin-weighted scalars. In particular, the curvature component \begin{equation} \psi_4 \equiv -C_{\alpha\beta\gamma\delta} n^\alpha \bar{m}^\beta n^\gamma \bar{m}^\delta, \label{eq:psi4def} \end{equation} is defined as a particular component of the Weyl curvature, $C_{\alpha\beta\gamma\delta}$, projected onto a given null frame, $\{\boldsymbol{l}, \boldsymbol{n}, \boldsymbol{m}, \bar{\boldsymbol{m}}\}$. The identification of the Weyl scalar $\psi_4$ with the gravitational radiation content of the spacetime is a result of the peeling theorem \cite{Sachs61, Newman62a, Penrose:1963}, which states that in an appropriate frame and for sufficiently smooth and asymptotically flat initial data near spatial infinity, the $\psi_4$ component of the curvature has the slowest fall-off with radius, $\mathcal{O}(1/r)$. The most straightforward way of evaluating $\psi_4$ in numerical (Cauchy) simulations is to define an orthonormal basis in the 3-space, $(\hat{\boldsymbol{r}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\phi}})$, centered on the Cartesian grid center and oriented with poles along $\hat{\boldsymbol{z}}$.
The normal to the slice defines a time-like vector $\hat{\boldsymbol{t}}$, from which we construct the null frame \begin{equation} \label{numframe} \boldsymbol{l} = \frac{1}{\sqrt{2}}(\hat{\boldsymbol{t}} - \hat{\boldsymbol{r}}),\quad \boldsymbol{n} = \frac{1}{\sqrt{2}}(\hat{\boldsymbol{t}} + \hat{\boldsymbol{r}}),\quad \boldsymbol{m} = \frac{1}{\sqrt{2}}(\hat{\boldsymbol{\theta}} - {\mathrm i}\hat{\boldsymbol{\phi}}) \ . \end{equation} Note that in order to make the vectors $\{\boldsymbol{l}, \boldsymbol{n}, \boldsymbol{m}, \bar{\boldsymbol{m}}\}$ null, $(\hat{\boldsymbol{r}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\phi}})$ have to be orthonormal relative to the spacetime metric. In practice, we fix $\hat{\boldsymbol{r}}$ and then apply a Gram-Schmidt orthonormalization procedure to determine $\hat{\boldsymbol{\theta}}$ and $\hat{\boldsymbol{\phi}}$~\footnote{Alternative frame constructions have also been used, such as orthonormalising relative to one of the angular basis vectors~\cite{Baker:2001sf}, or omitting the orthonormalisation altogether~\cite{Scheel:2008rj}. We have generally found that these modifications do not lead to significantly different measurements.}. It is then possible to calculate $\psi_4$ via a reformulation of (\ref{eq:psi4def}) in terms of the geometrical variables on the slice~\cite{Shinkai94}, using the electric and magnetic parts of the Weyl tensor, \begin{equation} \psi_4 = C_{ij} \bar{m}^i \bar{m}^j\,, \label{eq:psi4_adm} \end{equation} where \begin{equation} C_{ij} \equiv E_{ij}-iB_{ij} = R_{ij} - K K_{ij} + K_i{}^k K_{kj} - {\rm i}\epsilon_i{}^{kl} \nabla_l K_{jk}\,. \label{eq:cij} \end{equation} The remaining Weyl scalars can be similarly obtained and read \begin{subequations} \begin{eqnarray} \psi_3 &=& \frac{1}{\sqrt{2}}C_{ij}\bar{m}^i e_r^j\,, \\ \psi_2 &=& \frac{1}{2}C_{ij}e_r^i e_r^j\,, \\ \psi_1 &=& -\frac{1}{\sqrt{2}}C_{ij}m^i e_r^j\,, \\ \psi_0 &=& C_{ij}m^i m^j\,, \label{eq:psi0_adm} \end{eqnarray} \end{subequations} where $(e_r^j)\equiv \hat{\boldsymbol{r}}$ is the unit radial vector. In relating $\psi_4$ to the gravitational radiation, one is limited by the fact that the measurements have been taken at a finite radius from the source. Local coordinate and frame effects can complicate the interpretation of $\psi_4$. These problems can largely be alleviated by taking measurements at several radii and performing polynomial extrapolations to $r\rightarrow\infty$. Procedures for doing so have been studied in~\cite{Boyle:2009vi, Pollney:2009MP-unpublished}. In~\cite{Pollney:2009MP-unpublished} we have shown that if a sufficiently large outermost extrapolation radius is used, the variation in this procedure is of the order $\Delta A=0.03\%$ and $\Delta\phi = 0.003\,\ensuremath{\text{rad}}$ in amplitude and phase respectively, and is consistent throughout the evolution, including inspiral, merger and ringdown. The extrapolation error is larger than the numerical error determined in Sec.~\ref{sec:accuracy}, below, even if it is performed using data at $r=1000M$ distant from the source, highlighting the need for measurements at large radii. For the ``extrapolated'' data plotted in this paper, we have performed polynomial extrapolations as detailed in~\cite{Pollney:2009MP-unpublished}, using the six outermost measurements at $r=\{280M,300M,400M,500M,600M,1000M\}$. In a companion paper~\cite{Reisswig:2009us}, we use the same dataset to calculate $\psi_4$ directly at $\ensuremath{\mathcal{J}^+}$ using characteristic extraction~\cite{Bishop98b, Babiuc:2009}.
There, the traditional approach presented here (which is gauge dependent and has a finite-radius cut-off error) is replaced by a characteristic formulation of the Einstein equations in order to determine the fields out to future null infinity. In this paper, we restrict ourselves to a discussion of the numerical error inherent in the evolution procedure via the multi-patch code, and will report in more detail on systematic measurement errors due to finite radius effects and the characteristic extraction procedure elsewhere~\cite{Reisswig:2009us, Reisswig:2009-cce-long}. \section{Code verification} \label{sec:convergence} \subsection{Initial data} \label{sec:initial_data} To demonstrate the efficacy of the infrastructure described in the previous sections, we have carried out an evolution of the by now well-studied case of the late-inspiral and merger of a pair of non-spinning equal-mass black holes (see, for example, \cite{Hannam:2009hh} for an extensive discussion of numerical results involving this model). The particular numerical evolution which we have carried out starts from an initial separation $d/M=11.0$ and goes through approximately 8 orbits (a physical time of around $1360M$), merger and ring-down. The masses of the punctures are set to $m=0.4872$ and are initially placed on the $x$-axis with momenta $p=(\pm0.0903, \mp 0.000728, 0)$, giving the initial slice an ADM mass $M_{\rm ADM}=0.99051968 \pm 2\times 10^{-8}$. These initial data parameters were determined using a post-Newtonian evolution from large initial separation, following the procedure outlined in~\cite{Husa:2007rh}, with the conservative part of the Hamiltonian accurate to 3PN, and radiation-reaction to 3.5PN; this procedure yields orbits with a measured eccentricity of $e=0.004\pm0.0005$. \subsection{Grid setup} The binary black hole evolution was carried out on a 7-patch grid structure, as described in Sec.~\ref{sec:multipatch}, incorporating a Cartesian mesh-refined region which covers the near-zone, and six radially oriented patches covering the wave-zone. The inner boundary of the radial grids was placed at $r_t=35.2M$ relative to the centre of the Cartesian grid. As a general rule, this boundary should be made as small as possible to improve efficiency in terms of memory usage. However, other factors may make it preferable to move it further out. In particular, since we do not perform time interpolation at grid boundaries, the time step $dt$ of the coarsest Cartesian grid determines the timestep of the radial grids, and thus of the wave zone. Updates of the radial grids tend to be expensive, as they are large, so that if $dt$ is too small, computation time may be spent over-resolving (in time) the wave zone. Particularly if the principal interest is in the lower order wave modes, it may be optimal to add an additional Cartesian mesh-refinement grid with a coarser time-step, and thus move $r_t$ outwards. The outer boundary for the spherical grids was chosen based on the expected time duration of the measurement and radius of the furthest detector, in order to remove any influence of the artificial outer boundary condition. In particular, given that the evolution takes a time $T_m$ for the entire inspiral, merger and ringdown, and that gravitational wave measurements are taken at a finite radius $r_d$, we would like to ensure that a disturbance travelling at the speed of light from the outer boundary does not reach the measurement radius (see Fig.~\ref{fig:causal_bc}).
That is, noting that the physical modes travel at the speed of light, $c=1$~\cite{Brown2007a, Brown2007b}\footnote{The $1+\log$ slicing condition which we use propagates at $\sqrt{2}c$~\cite{Bona95b}, however this is a gauge mode and empirically we find it to have negligible effect on measurements.}, we place the boundary at \begin{equation} r_b > T_m + 2 r_d. \end{equation} For the particular evolution considered here, $T_m\simeq 1350M$, and our outermost measurements are taken at $r_d=1000M$. We have placed the outer boundary of the evolution domain at $r_b=3600M$. The near-zone grids incorporate 5 levels of 2:1 mesh refinement, covering regions centred around each of the black holes. For the highest resolution we have considered here, the finest grid (covering the black hole horizon) has a grid spacing of $dx=0.02M$. The wave-zone grids have an inner radial resolution which is commensurate with the coarse Cartesian grid resolution, $dr=0.64M$ in this case. This resolution is maintained essentially constant to the outermost measurement radius ($r=1000M$), at which point we apply a gradual decrease in resolution (as described in Sec.~\ref{sec:coords}) over a distance of $500M$, between $r=1000M$ and $r=1500M$. From $r=1500M$ to the outer boundary, we maintain a resolution of $dx=2.56M$, sufficient to resolve the inspiral frequencies of the dominant $(\ell,m)=(2,2)$ mode of the gravitational wave signal. The angular grids have 31 points (30 cells) in each of the two angular directions on each of the six patches. The time-step of the wave-zone grids is $dt=0.144M$, and we take wave measurements at each iteration. We have carried out evolutions at three resolutions in order to estimate the convergence of our numerical methods. The grid described above is labelled $h_{0.64}$. The lower resolutions, labelled $h_{0.80}$ and $h_{0.96}$, have each of the specified grid spacings scaled by $0.80/0.64$ and $0.96/0.64$, respectively. \begin{figure} \begin{center} \includegraphics[width=\linewidth,clip,trim=0 0 0 0]{figures/causal-bc} \end{center} \caption{Schematic of the causal propagation of information during the evolution. The gravitational wave source is located in the vicinity of $r=0$, with waves propagating outward at the speed of light $c=1$, and measured at radius $r_d$ for a time of interest which would include the inspiral, merger and ringdown of a binary system. The unphysical outer boundary of the grid is located at $r_b$, which is chosen to be sufficiently far removed that the future Cauchy horizon of the domain of dependence of the initial slice does not reach $r_d$ until the measurement is complete.} \label{fig:causal_bc} \end{figure} \subsection{Results} \begin{figure*} \begin{center} \includegraphics[width=86mm,clip,trim=10 0 0 0]{figures/psi4_waves} \includegraphics[width=86mm,clip,trim=5 0 0 0]{figures/psi4_amp_freq} \end{center} \vspace{-0.5cm} \caption{The dominant spherical harmonic modes of $\psi_4$ for $\ell=2,4,6,8$, measured at $r=200M$ from the coordinate centre. The plots on the right show amplitude and frequency evolution during the late inspiral, merger and ringdown.} \label{fig:psi4-modes} \end{figure*} The binary black hole initial data described in Sec.~\ref{sec:initial_data} evolves for about 8 orbits ($\simeq 1350M$) before merger. Various $(\ell,m)$ modes of $\psi_4$ are plotted in Fig.~\ref{fig:psi4-modes}.
We find that for the grids we have used, the modes up to and including $(\ell,m)=(4,4)$ are quite well resolved throughout the evolution. The $(6,6)$ mode is also measurable, and shows a clear signal, particularly during ringdown. The $(8,8)$ mode is dominated by noise for most of the inspiral, though during the merger and ringdown phase, a clear signal is present and the amplitude and frequency can be estimated. Tests with an analytical solution confirm that the angular resolutions which we have used are at best marginal for resolving this mode. In the following sections, we report results regarding the convergence and accuracy of these measurements, and determine the parameters of the merger remnant. By analysing the ring-down behaviour of the waves we conclude that the remnant is indeed a Kerr black hole (see Sec.~\ref{sec:ringdown}, below). \subsubsection{Numerical convergence} We can establish the consistency of our discretisation by showing that it does indeed converge to a unique solution in the continuum limit. Ideally, an exact solution can be used to test this. However, since there are no exact solutions which adequately model the physical scenario which we wish to consider (inspiralling black hole binaries), an alternative is to evaluate numerical solutions at several (at least three) different resolutions and establish that the differences decrease as resolution is increased. For an implementation in which all of the discrete operations are carried out with the same order of accuracy, the convergence test should yield a clear exponent corresponding to that order. The evolution code incorporates a number of discrete operations which, for various practical reasons, are carried out to different orders of accuracy. These are listed in Table~\ref{tab:conv-order}. The primary operation which is carried out over the bulk of the grid is the computation of finite difference derivative operations in order to evaluate the right-hand side of the evolution equations~\eqref{eq:evo_phi}--\eqref{eq:evo_tG}. For the tests carried out in this paper, 8th-order stencils are used for this operation, including the upwinded advection terms. It is common to apply a small amount of artificial dissipation in order to smooth high-frequency effects. This is done at one higher order (9th) than the interior finite differencing in order to maintain the correct continuum limit. (In our experiments, however, we have noted that dissipation at this high order has a negligible impact on the solution, and can effectively be omitted.) Various boundary operations (inter-patch boundary communication, mesh-refinement boundaries) are carried out at lower order. This is done largely for efficiency reasons, as the communication involved in boundary interpolation can be time-consuming if the stencil widths are large. Intuitively, the numerical error associated with these operations may have reduced influence in any case, as they are applied only at 2D interfaces. In practice this does seem to be the case: for instance, experiments with 4th and 5th-order interpolation operators between patches show similar accuracy in the final solution. Similarly, operations involving different time-levels are carried out at lower order, again for efficiency reasons. The time resolution of our evolutions tends to be high enough that the error coefficient of the RK4 integrator can be expected to be small. The lowest order operation which we use is the 2nd-order time interpolation at mesh-refinement boundaries.
Applying higher order here would require keeping more time levels in memory (currently we store three). Our results are consistent with previous studies using mesh-refinement for black hole evolution, which suggest that the influence of the low order time-interpolation boundary conditions is negligible for the time resolutions which we apply (see, for example, \cite{Brugmann:2008zz}). \begin{table} \begin{ruledtabular} \begin{tabular}{lc} Numerical method & Order \\ \hline Grid interior finite differencing & 8 \\ Inter-patch interpolation & 5 \\ Kreiss-Oliger dissipation & 9 \\ Time integration (RK4) & 4 \\ Mesh-refinement: & \\ \hspace{5mm} Spatial prolongation & 5 \\ \hspace{5mm} Spatial restriction & n/a \\ \hspace{5mm} Time interpolation & 2 \\ Analysis tools: & \\ \hspace{5mm} Interpolation & 4 \\ \hspace{5mm} Finite differencing & 8 \\ \hspace{5mm} Surface integration & $2N-1$ \\ \end{tabular} \end{ruledtabular} \caption{\label{tab:conv-order}Convergence order of the various numerical methods used by the evolution code. Spatial restriction is carried out by a direct copy. The surface integration is exact for polynomials up to degree $2N-1$, where $N$ is the number of grid-points along one direction on the sphere.} \end{table} For test cases involving a single non-spinning black hole, we in fact find 8th-order convergence in the Hamiltonian constraint. This is likely due to the relatively constant values (except for some gauge evolution) maintained by the evolution variables during the evolution, which minimises the error due to time integration or propagation across boundaries. A more relevant situation is that of a binary black hole inspiral, which we have tested using the parameters described above in Sec.~\ref{sec:initial_data}. For this model, we have measured the gravitational waveform, $\psi_4$, integrated over spheres at radii from $r=100M$ to $r=1000M$, at the three resolutions $h_{0.96}$, $h_{0.80}$ and $h_{0.64}$. Results for the $(\ell,m)=(2,2)$ mode are shown in Fig.~\ref{fig:d550_convergence_22}. The evolution lasts for about $1350M$ before merger, and the plots encompass the inspiral, merger (at $t=0M$ on this time axis), and ringdown. The figure plots the differences in phase, $\Delta\phi$, and relative amplitude, $\Delta A$, of the $(2,2)$ mode extracted at $r=100M$ and $r=1000M$: between the low $h_{0.96}$ and medium $h_{0.80}$ resolutions, and between the medium $h_{0.80}$ and high $h_{0.64}$ resolutions in the wave-zone. The latter difference is scaled such that the curves will overlap in the case of a 4th-order convergent solution. At both radii, we find that during the inspiral phase the rescaled error of the higher resolutions lies below that of the lower resolution, suggesting better than 4th-order convergence (in fact, closer to 8th-order over significant portions of the plot). At later times, around the peak of the waveform, the curves are more closely aligned, indicating 4th-order convergence. The plot suggests that during the very dynamical late stages of the inspiral, the lower order boundary conditions and/or the time integration play a more important role than in the early inspiral phase of the evolution, where the convergence order is closer to that of the interior finite differencing. The results are, however, convergent over the entire evolution (including merger and ringdown). As we will see in the next section, the accuracy is excellent for these resolutions, so that the rate of convergence is not a particular issue.
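For reference, the rescaling applied in these convergence plots is the standard factor implied by the grid spacings (a worked evaluation, quoting spacings in units of $M$): under exact $n$th-order convergence,
\begin{equation*}
\frac{u_{h_{0.96}}-u_{h_{0.80}}}{u_{h_{0.80}}-u_{h_{0.64}}}
 = \frac{0.96^n-0.80^n}{0.80^n-0.64^n}
 \approx \begin{cases} 1.82\,, & n=4\,,\\ 3.97\,, & n=8\,,\end{cases}
\end{equation*}
so that multiplying the higher-resolution difference by this factor brings the two curves into overlap.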
We have verified convergence for a number of different modes of the $\psi_4$ waveform at different radii. For instance, Fig.~\ref{fig:d550_convergence_66} shows similar results for the $(\ell,m)=(6,6)$ mode, which is some two orders of magnitude smaller in peak amplitude than the $(\ell,m)=(2,2)$ mode (see Fig.~\ref{fig:psi4-modes}). During the early inspiral, it is difficult to evaluate a convergence order due to high frequency noise which is large relative to the waveform amplitude. However, a measurable signal is clear in the last orbit, merger and ringdown phase, and converges at a clear 3rd order. \begin{figure*} \begin{center} \includegraphics[width=190mm,clip,trim=100 0 50 0] {figures/d550_convergence_22} \end{center} \vspace{-\baselineskip} \caption{Convergence in amplitude (top) and phase (bottom) of the $(\ell,m)=(2,2)$ mode of $\psi_4$ for detectors at $r=100M$ and $r=1000M$. The higher resolution difference, $h_{0.80}-h_{0.64}$, is scaled for 4th-order convergence.} \label{fig:d550_convergence_22} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=1.\linewidth,clip,trim=25 0 25 0] {figures/d550_convergence_66} \end{center} \vspace{-\baselineskip} \caption{Convergence in amplitude (top) and phase (bottom) of the $(\ell,m)=(6,6)$ mode of $\psi_4$ for the detector at $r=100M$ during the late inspiral through merger. The higher resolution difference, $h_{0.80}-h_{0.64}$, is scaled for 3rd-order convergence.} \label{fig:d550_convergence_66} \end{figure} \subsubsection{Accuracy} \label{sec:accuracy} We estimate the numerical phase and amplitude error by means of a Richardson expansion at a given resolution $\Delta$, \begin{equation} u_\Delta(t,x)=u(t,x)+\Delta e_1(t,x)+\Delta^2 e_2(t,x)+\cdots\,, \label{eq:Richardson-expand} \end{equation} where $u(t,x)$ is the solution of the original differential equation, and the $e_i(t,x)$ are error terms at different orders in $\Delta$. Assuming convergence at a fixed order $n$, the error terms $e_i$ with $i<n$ vanish. Using solutions, $u$, obtained at two resolutions, $\Delta_1$ and $\Delta_2$, the Richardson expansion implies \begin{align} u_{\Delta_1}-u_{\Delta_2} & = e_n(\Delta_1^n-\Delta_2^n) + \mathcal{O}(\Delta^{n+1}) \nonumber \\ & = e_n\Delta_2^n(C^n-1) + \mathcal{O}(\Delta^{n+1}) \nonumber \\ & \sim \epsilon_{\Delta_2}(C^n-1)\,, \end{align} where $\epsilon_{\Delta_2}$ is the estimated solution error on the higher resolution grid, and where \begin{equation} C^n := \left(\frac{\Delta_1}{\Delta_2}\right)^n\,. \end{equation} We thus obtain an estimate for the solution error that is accurate up to corrections of order $n+1$, \begin{equation} \epsilon_{\Delta_2}\sim\frac{1}{C^n-1}(u_{\Delta_1}-u_{\Delta_2})\,, \label{eq:numerical-error} \end{equation} which we use as an estimate of the numerical error in our solutions. During the inspiral phase (which for this purpose we regard as being the period $t\leq-100M$), we have found roughly 8th-order convergence in the amplitude and phase, as described above. The remaining relative error for the $(\ell,m)=(2,2)$ mode can be estimated as \begin{subequations} \begin{align} \max_{T\in[-1350,-100]}{\text{err}(A)}_{\rm inspiral} & = 0.090\%\,, \\ \max_{T\in[-1350,-100]}{\text{err}(\phi)}_{\rm inspiral} & = 0.010\%\,, \end{align} \end{subequations} where $\text{err}(A) := \Delta A/A$ and $\text{err}(\phi) := \Delta\phi/\phi$, i.e., the errors relative to the local amplitude and the accumulated phase.
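Here and in the merger-phase estimates below, the prefactor in Eq.~\eqref{eq:numerical-error} takes a simple numerical value (a worked evaluation for the resolutions used in this paper): with $(\Delta_1,\Delta_2)=(h_{0.80},h_{0.64})$ the resolution ratio is $C = 0.80/0.64 = 1.25$, so that
\begin{equation*}
C^4 - 1 \approx 1.44\,, \qquad C^8 - 1 \approx 4.96\,;
\end{equation*}
i.e., the point-wise difference between the medium and high resolution runs is divided by roughly $4.96$ where 8th-order convergence is assumed, and by $1.44$ where 4th-order convergence is assumed, to estimate the error of the $h_{0.64}$ run.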
During merger and ring-down ($t>-100M$), we observe 4th-order convergence in the amplitude, while maintaining 8th-order convergence in the phase. This results in the estimate \begin{subequations} \begin{align} \max_{T\in(-100,150]}{\text{err}(A)}_{\rm merger} & = 0.153\%\,, \\ \max_{T\in(-100,150]}{\text{err}(\phi)}_{\rm merger} & = 0.003\%\,. \end{align} \end{subequations} The time evolution of the numerical error in phase and amplitude is shown in Fig.~\ref{fig:error}. We note that these errors are of comparable order to the errors inherent in the extrapolation~\cite{Pollney:2009MP-unpublished}. Moreover, as is pointed out in~\cite{Reisswig:2009us}, the error between extrapolated waveforms and those determined at future null infinity, $\ensuremath{\mathcal{J}^+}$, by characteristic extraction, is an order of magnitude larger than the numerical error determined here. This highlights the importance of reducing systematic errors inherent in finite radius measurements of $\psi_4$. \begin{figure} \begin{center} \includegraphics[width=1.\linewidth,clip,trim=25 0 25 0] {figures/error} \end{center} \vspace{-\baselineskip} \caption{Absolute numerical error in the amplitude (top) and phase (bottom) accumulated over the course of the evolution for the highest resolution run, determined according to Eq.~\eqref{eq:numerical-error} for the point-wise differences in amplitude and phase between medium and high resolution runs. For the phase we assume the measured 8th-order convergence over the entire evolution, while for the amplitude we use 8th-order for $t\leq-100M$, and 4th-order thereafter (see text).} \label{fig:error} \end{figure} \subsubsection{Properties of the merger remnant} The merger remnant can be measured with high accuracy, using either the isolated horizon formalism~\cite{Dreyer02a, Ashtekar:2004cn}, or geometrical measures of the apparent horizon~\cite{Brandt94c, Alcubierre:2004hr}. Some results are reported in Table~\ref{tab:remnant}, along with estimated numerical errors. The results agree well with previous high-accuracy measurements, such as those obtained by spectral evolution~\cite{Scheel:2008rj, Hannam:2009hh}, with the spin and irreducible mass agreeing to three and four decimal places, respectively. While this is larger than the reported errors, we note that we have evolved a different initial data set than~\cite{Scheel:2008rj}. As reported in Sec.~\ref{sec:initial_data}, our evolution has somewhat more eccentricity, and the level of agreement can be used to judge the influence of small amounts of eccentricity on the result. By comparing the properties of the merger remnant with the integrated radiated energy, $E_{\rm rad}$, and angular momentum, $J_{\rm rad}$, determined from the gravitational waveforms, we find the residuals \begin{subequations} \begin{align} |M_f + E_{\rm rad} - M_{\rm ADM}| & = 2.6 \times 10^{-4}, \\ |S_f + J_{\rm rad} - J_{\rm ADM}| & = 3.1 \times 10^{-4}. \end{align} \end{subequations} Here we have used the extrapolations of the gravitational waveforms to $r\rightarrow\infty$ based on the 6 outermost measurement radii. A more detailed discussion of this procedure is given in~\cite{Pollney:2009MP-unpublished}. The results can be further improved through taking measurements at $\ensuremath{\mathcal{J}^+}$, as outlined in~\cite{Reisswig:2009us, Reisswig:2009-cce-long}.
\begin{table} \begin{ruledtabular} \begin{tabular}{lD{.}{.}{2.20}} Total ADM mass, $M_{\rm ADM}$ & 0.99051968 \pm 20\times 10^{-9} \\ Total ADM angular momentum, $J_{\rm ADM}$ & 0.99330000 \pm 10\times 10^{-17} \\ \hline Irreducible mass, $M_{\rm irr}$& 0.884355 \pm 20\times 10^{-6} \\ Spin, $S_f/M_f^2$ & 0.686923 \pm 10\times 10^{-6} \\ Christodoulou mass, $M_{\rm f}$& 0.951764 \pm 20\times 10^{-6} \\ Angular momentum, $S_f$ & 0.622252 \pm 10\times 10^{-6} \\ \hline Radiated energy, $E_{\rm rad}$ & 0.038546 \pm 51\times 10^{-6} \\ Radiated angular momentum, $J_{\rm rad}$ & 0.370391 \pm 17\times 10^{-6} \\ \end{tabular} \end{ruledtabular} \caption{Properties of the merger remnant as measured on the apparent horizon ($M_{\rm irr}$, $S_f/M_f^2$) and from the gravitational radiation ($E_{\rm rad}$, $J_{\rm rad}$). Ranges indicate the estimated numerical error. For the error in $J_{\rm ADM}$, we have simply quoted machine precision (it is an analytical expression of the input momenta on the conformally flat initial slice).} \label{tab:remnant} \end{table} \subsubsection{Quasi-normal modes of the merger remnant} \label{sec:ringdown} In Fig.~\ref{fig:psi4-modes}, we have shown the late-time behaviour of the amplitude and frequency for the dominant spherical harmonic modes of $\psi_4$, up to $(\ell,m)=(8,8)$. We note that during ring-down, the frequencies settle to a constant value. If the final black hole is a Kerr black hole, these frequencies are given by the quasi-normal modes of a Kerr black hole with given spin $a$. As reported in the previous section, our evolution leads to a merger remnant with $a=0.686923\pm 1\times 10^{-5}$ (see Table~\ref{tab:remnant}), as measured on the horizon. The real part of the prograde quasi-normal mode (QNM) frequencies for modes up to $(\ell,m)=(7,7)$ can be found tabulated in~\cite{Berti06c}. For example, $M_f\omega_{22}=0.526891$ for the $(\ell,m)=(2,2)$ mode, given a final black hole of the measured mass $M_f$ and spin $S_f$. At this point it is worth noting that the QNMs determined from perturbations of a Kerr black hole are most naturally expressed in terms of a basis of spin-weighted \textit{spheroidal} harmonics. By contrast, our waveforms have been decomposed relative to a basis of spin-weighted \textit{spherical} harmonics, which are easily calculated via Legendre functions. In order to compare these modes with the perturbative results, we need to apply a transformation to the wave-modes. We have \begin{equation} \hat{\psi}_4^{\ell'm'} = \sum_{\ell,m} \psi_4^{\ell m} \langle \ell,m| \ell',m' \rangle\,, \end{equation} where a prime denotes labelling of the spheroidal harmonic modes, and $\langle \ell,m| \ell',m' \rangle$ is the overlap defined by \begin{equation} \langle \ell,m| \ell',m' \rangle = \int_\Omega d\Omega {}_{-2}\bar{S}_{\ell' m'}(c_{\ell'm'}) {}_{-2}Y_{\ell m}\,. \end{equation} The spheroidal harmonic parameter $c_{\ell'm'}=a \omega_{\ell' m'}$ depends on the spin $a$ of the black hole and the corresponding prograde or retrograde QNM frequency $\omega_{\ell' m'}$ of the $(\ell' m')$ spheroidal harmonic mode\footnote{We restrict attention to the $N=0$ overtone only.}. If $c=0$ (as is the case for non-spinning black holes), the spheroidal harmonics reduce to the spherical harmonics. The spin-weighted spheroidal harmonics used here have been implemented following Leaver~\cite{Leaver85} and are reviewed in \cite{Berti06c}.
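For orientation, the size of the spheroidicity parameter can be evaluated directly from the measured remnant parameters (a worked number using Table~\ref{tab:remnant} and the conventions of~\cite{Berti06c}): for the $(2,2)$ mode,
\begin{equation*}
c_{22} = a\,\omega_{22} = (S_f/M_f^2)\,(M_f\,\omega_{22}) \approx 0.6869 \times 0.5269 \approx 0.36\,,
\end{equation*}
a significant departure from the spherical limit $c=0$, consistent with the visible mode mixing discussed below.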
The frequencies measured during the ringdown are plotted in Fig.~\ref{fig:psi4_amp_freq_modes-ringdown} for the modes $(\ell,m)=(2,2)$, $(4,4)$ and $(6,6)$. We have plotted data for the $r=1000M$ measurement, as well as the value obtained by extrapolating the waveforms extracted at the outermost 6 measurement spheres to $r\rightarrow\infty$, and find that in fact the extrapolation has little effect on the frequency of the lower order modes at these distances from the source. We note that there is a modulation of the ringdown frequency, particularly apparent in the $(2,2)$ mode. This is a result of mode mixing, which stems from the use of the spherical harmonic basis for the $\psi_4$ measurements. By transforming the $r=1000M$ result to spheroidal harmonics, this modulation visible in the $t<40M$ signal is largely removed (dashed line). As the amplitude of the wave declines exponentially to the level of numerical error, the frequencies become difficult to measure accurately. We estimate the ringdown frequency for each mode by performing a least-squares fit of a horizontal line through the measured spheroidal harmonic frequency over the range $t\in[40,80]M$ (dotted line), with the standard deviation of the fit as a gauge of the error (grey region). These constant lines represent the estimated frequencies of the associated QNMs, and are tabulated as $\omega^{\rm NR}$ in Table~\ref{tbl:ringdown}. They agree to high precision with the prograde QNM frequencies, $\omega^{\rm lit.}$, determined for Kerr black holes by perturbative methods~\cite{Berti06c}. We conclude that the merger remnant is compatible with a Kerr black hole within the given error estimates. \begin{table} \begin{ruledtabular} \begin{tabular}{lccc} $(\ell,m)$ & $M_f\omega^{\rm lit.}$ & $M_f\omega^{\rm NR}$ & $|M_f\omega^{\rm NR}-M_f\omega^{\rm lit.}|$ \\ \hline $(2,2)$ & $0.526891$ & $0.5267 \pm 0.0011$ & $1.9\times10^{-4}$ \\ $(4,4)$ & $1.131263$ & $1.1312 \pm 0.0028$ & $6.3\times10^{-5}$ \\ $(6,6)$ & $1.707630$ & $1.7074 \pm 0.0662$ & $2.3\times10^{-4}$ \end{tabular} \end{ruledtabular} \caption{Prograde $N=0$ QNM frequencies for different modes and spin $a=0.6869$ as determined by perturbative methods~\cite{Berti06c}, $\omega^{\rm lit.}$, and as measured during ringdown in the numerical relativity simulation, $\omega^{\rm NR}$.} \label{tbl:ringdown} \end{table} \begin{figure} \begin{center} \includegraphics[width=86mm,clip,trim=5 50 0 40] {figures/ringdown_freq} \end{center} \vspace{-\baselineskip} \caption{The ringdown frequencies for the dominant $\psi_4$ modes to $\ell=6$ of the merger remnant. From top to bottom, the plots show the frequencies of the $(\ell,m)=(2,2)$, $(4,4)$ and $(6,6)$ modes respectively, over a timescale from the $(2,2)$ waveform peak to $100M$ later, at which point the waveform amplitude is too small to measure an accurate frequency. The $\psi_4$ data measured at $r=1000M$ is plotted, in addition to the value extrapolated to $r\rightarrow\infty$, and the transformation to spheroidal harmonics. The expected quasi-normal mode frequency is plotted as a dotted line, as well as a fit to the spheroidal harmonic data over the range $t\in[40M,80M]$, with error-bars determined by the standard deviation of the fit.} \label{fig:psi4_amp_freq_modes-ringdown} \end{figure} \section{Discussion} \label{sec:discussion} The results of this paper provide a demonstration of the usefulness of adapted coordinates in numerical relativity simulations.
The precision of the calculations has allowed us to obtain convergent modes to $\ell=6$, through merger and ringdown, with accurate predictions of the quasi-normal ringdown frequencies of the remnant. Our implementation of non-singular radially adapted coordinates for the wave zone is based on the use of multiple grid patches with interpolating boundaries, coupled to a BSSNOK evolution code. Thornburg~\cite{Thornburg:2004dv} first demonstrated that such a setup could lead to stable evolutions in the case of a spinning black hole in Kerr-Schild coordinates. We have demonstrated that the approach is also effective and robust for dynamical puncture evolutions, and in particular the problem of binary black holes. The implementation described here has a number of advantages, principal among them being its flexibility. While we have presented results for a particular grid structure adapted to radially propagating waves, there are no problems in principle with restructuring the grids to cover any required domain, for instance adapted to excision boundaries or toroidal fields. Since data is stored in the underlying Cartesian basis, and passed by interpolation across boundaries, the coordinates used on each patch are largely independent of the others, and there is no need for numerical grid generating schemes. While we have used the BSSNOK formalism to evolve the Einstein equations, in principle any stable strongly hyperbolic system can be substituted. The BSSNOK system has, however, proven particularly useful for evolving black holes via the puncture approach, which itself has proven to be a very flexible methodology. While we have demonstrated results for the most well-studied test case, non-spinning equal-mass black holes, the same techniques can be applied to different mass ratios and spinning black holes, simply by changing the physical input parameters. (The appendices include some examples of spinning black hole evolutions.) Finally, we emphasise again the accuracies which can be attained by this approach. Our finite difference results show numerical error estimates which are on par with those achieved using spectral spatial discretisation~\cite{Scheel:2008rj}. The adapted radial coordinate allows us to take measurements at radii much larger than have been used before, as well as obtain accurate measurements of higher $\ell$ modes during merger, which have an amplitude more than two orders of magnitude smaller than the dominant $(\ell,m)=(2,2)$ mode. One of the aspects which makes this possible is the fact that we are able to extend our grids to a distance such that the measurements are included in the future domain of dependence of the initial data (causally disconnected outer boundaries), and the waves are reasonably well resolved over this entire domain so that internal reflections are minimised. Further, we note that our results are consistent with other puncture-method calculations in that the results are convergent and can be consistently extrapolated to $r\rightarrow\infty$ throughout the entire evolution, including late inspiral and ringdown~\cite{Pollney:2009MP-unpublished}, where other approaches have had difficulties. The absence of artificial boundaries, as well as dissipative regions in the wave zone, removes an important source of potential error in solving the Einstein equations as an initial-boundary value problem. The remaining errors can be categorised into three forms. First, numerical error due to the discretisation.
This can be reduced through the use of higher order methods for the operations performed in various parts of the code, and fortunately is also easy to quantify by performing tests at multiple resolutions. We note that for finite differences, the largest improvement in accuracy occurs in going from 2nd to 4th-order for the interior computations, and beyond that there are diminishing returns~\cite{Gustafsson95}. While it does not yet seem to be a limiting factor, except possibly during the merger, the RK4 time-stepping will at some level of resolution be a determining factor in the accuracy regardless of the spatial order (and this is also the case for current implementations of spectral methods). The second source of error is a physical error, inherent in the choice of initial data parameters for the binary evolution. At the separations which are practical for numerical relativity (say $d<20M$), the physical model is expected to have shed all of its eccentricity. We have used post-Newtonian orbital parameters to attempt to place our black holes in low eccentricity trajectories, and this is quite effective. Alternative approaches, involving iteratively correcting the initial data parameters until a tolerable eccentricity has been reached, are able to reduce the eccentricity still further~\cite{Pfeiffer:2007yz}. This technique can in principle also be adapted to the moving puncture approach. The final source of error arises in the measurement of $\psi_4$, which is done at a finite radius, and then extrapolated to $r\rightarrow\infty$ by some procedure. We have attempted to minimise this error by placing detectors at large radii, well into the region where the perturbations are linear, and have shown that the extrapolations are consistent with measurements at larger radii, as well as with each other in the $r\rightarrow\infty$ limit~\cite{Pollney:2009MP-unpublished}. However, there remain ambiguities particularly in gauge-dependent quantities such as the choice of surface on which measurements are taken, and the definition of time and radial distance to be used in the extrapolation. In a companion paper~\cite{Reisswig:2009us}, we have demonstrated that these ambiguities can be removed entirely by the procedure of \emph{characteristic extraction}, whereby evolution data on a world-tube is used as an inner boundary condition for a fully relativistic characteristic evolution, extending to null infinity, $\ensuremath{\mathcal{J}^+}$. The results suggest that systematic errors inherent in finite radius measurements of $\psi_4$ are more than an order of magnitude larger than the numerical errors reported here. \begin{acknowledgments} We dedicate this paper to the memory of Thomas Radke, who has made invaluable contributions to the development and optimisation of Cactus, Carpet and the code described here. The authors are pleased to thank: Ian Hinder, Sascha Husa, Badri Krishnan, Philipp Moesta, Christian D. Ott, Luciano Rezzolla, Jennifer Seiler, Jonathan Thornburg, and Burkhard Zink for their helpful input; the developers of Cactus \cite{Goodale02a, cactusweb1} and Carpet \cite{Schnetter-etal-03b, Schnetter06a, carpetweb} for providing an open and optimised computational infrastructure on which we have based our code; Nico Budewitz for optimisation work with our local compute cluster, \texttt{damiana}; support from the DFG SFB/Transregio~7, the VESF, and by the NSF awards no.~0701566 \emph{XiRel} and no.~0721915 \emph{Alpaca}. 
Computations were performed at the AEI, at LSU, on LONI (numrel03), on the TeraGrid (TG-MCA02N014), and the Leibniz Rechenzentrum M\"unchen (h0152). \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
8,562
America Needs Money to Defend Against Hypersonic Threats (Denisismagilov/Dreamstime.com) By Henry F. Cooper, Monday, 08 April 2019 03:05 PM Loren Thompson writes in his April 5 Forbes article "Hypersonic Weapons Are Coming" that the Pentagon needs to spend more on defending against this growing threat from Russia and China. I could not agree more — moreover, considerably more spending is well justified. Thompson reports that the Pentagon is spending a lot of money to develop U.S. hypersonic weapons that can fly at speeds up to 20-25 times the speed of sound. The proposed funding for 2020 is $2.6 billion — to catch up with and surpass the growing threat, as Defense Under Secretary for Research and Engineering Michael Griffin has said is his top priority. Meanwhile, Thompson reports that the administration proposes only $170 million to support efforts to defend against this growing threat — which Vladimir Putin claimed last year he already had mastered. He could have been exaggerating, but on the other hand… Russia's Avangard hypersonic weapon can be launched on one of many Soviet intercontinental ballistic missiles (ICBMs), "boosted" toward space and then released high in or above the atmosphere. It then could maneuver as it "glides" through the atmosphere at orbital velocities — 20-25 times the speed of sound — to evade intercept by any of our current ballistic missile defense (BMD) system interceptors. It should be understood that hypersonics is not a new idea. Sandia National Laboratories was testing such a "boost-glide" system in 2012, based on decades of previous work. It could evade defenses to attack tactical systems, like important naval ships. Or it potentially could carry nuclear weapons to evade BMD systems, as Putin emphasized was Avangard's objective. It certainly is appropriate again to advance our understanding of such hypersonic missiles, and even to exceed Russian and Chinese capabilities. But, is it wise to invest much less than 10 percent as much on defending against this important, potentially existential, threat? Moreover, we should remember that three decades ago we spent much more actively exploring defenses to defeat such a threat — we were successful and had an answer that especially now should be reconsidered. President Reagan's Strategic Defense Initiative (SDI) actively explored a space-based interceptor system, Brilliant Pebbles, that could defeat Russia's Avangard hypersonic missile — had we not abandoned that effort in 1993 for purely political reasons. The Brilliant Pebbles effort began at Lawrence Livermore National Laboratory (LLNL) as a classified program in the late 1980s — and was made public when President Reagan vetoed the 1989 National Defense Authorization Act because it imposed a major cut to SDI spending on space-based interceptors. That no doubt made enemies of the powerful Chairman of the House Armed Services Committee (HASC) and his counterpart in the Senate. Once Brilliant Pebbles was exposed, its development came under additional scrutiny. The second SDI Director formed a separate Task Force to manage the effort through a major series of technical reviews by the Defense Science Board, JASON (a group of notable outside technology experts from academia), and the Pentagon's acquisition authorities.
They found "no showstoppers," and Brilliant Pebbles in 1990 was approved by the Pentagon's top acquisition authority to enter a formal Demonstration and Validation (DemVal) program. A competition between five contractor teams led to a down-selection to two, led by Martin Marietta and TRW-Hughes — both primes have since been subsumed in other companies and the original industrial teams have dispersed. But back then, the expected costs for development, deployment and operations for 20 years was $10 billion in 1988 dollars — now inflated to about $20 billion. SDI Directors from that era (1983-93) consider Brilliant Pebbles to have been the SDI's most cost-effective product. But at the time, the SASC and HASC Chairmen continued to oppose Brilliant Pebbles and cut its funding requests causing major disruption and waste in SDI efforts. As SDI Director during this era, I am most familiar with this history that led to a major cut in the appropriations for Fiscal Year 1993 — to $300 million, now inflated to over $500 million. But compare that level of appropriated funding for 1993 with the $170 million the Pentagon reportedly is now requesting for 2020 to study defenses against hypersonic missiles, according to Loren Thompson's Forbes article. About three times as much. Whatever, when Les Aspin became President Clinton's Secretary of Defense, the former HASC Chairman "took the stars out of Star Wars" (as he said) — gutting the SDI program and completely dispersing the entire Brilliant Pebbles team. It is long past time for reviving this most important Brilliant Pebbles space-based system concept, that should fit with the Trump administration's efforts to exploit the private sector's technology in building small satellites to accomplish other missions. Those missions include tracking attacking ballistic missiles and providing that information to our ground-based interceptors, like THAAD as Thompson mentioned in his Forbes' article. Why not, as Brilliant Pebbles planned, also employ miniature thrusters on those small satellites to enable boost-phase intercept capabilities from space? It would beat the hypersonic threat. A question perhaps in Thursday's SASC Hearing on President Trump's Space Force initiative, don't you think? Ambassador Henry F. (Hank) Cooper, Chairman of High Frontier and an acknowledged expert on strategic and space national security issues, was President Ronald Reagan's Chief Negotiator at the Geneva Defense and Space Talks with the Soviet Union and Strategic Defense Initiative (SDI) Director during the George H.W. Bush administration. Previously, he served as the Assistant Director of the Arms Control and Disarmament Agency, Deputy Assistant USAF Secretary and Science Advisor to the Air Force Weapons Laboratory. In the private sector he was Chairman of Applied Research Associates, a high technology company; member of the technical staff of Jaycor, R&D Associates and Bell Telephone Laboratories; a Senior Associate of the National Institute for Public Policy; and Visiting Fellow at the Heritage Foundation. He received B.S. and M.S. degrees from Clemson and a PhD from New York University, all in Mechanical Engineering. To read more of his reports — Click Here Now. Posts by Henry F. Cooper America Takes Mere Cybersecurity Stutter-Steps Beware Threatening 'Black Swan' Events View More Posts by Henry F. Cooper HenryFCooper It certainly is appropriate again to advance our understanding of such hypersonic missiles, and even to exceed Russian and Chinese capabilities. 
;(function(run, global, undefined){ "use strict"; var modules = {}; var factories = {}; var original = {}; /** * This converts relative paths into absolute module names */ function normalize(id, base){ // Some code golf to get the number of "directories" of the id back. var dots = /(\.?\.\/?)*/.exec(id)[0]; var dirCount = Math.floor(dots.replace(/\//g, '').length/1.9 + 0.99); if(dirCount){ // Reduce base by found number of "directories" var reduced = base.split('/'); reduced = reduced.slice(0, reduced.length - dirCount); reduced = reduced.join('/'); if(reduced){ id = reduced + '/' + id.slice(dots.length); } } return id.replace(/\/$/, ''); } /** * @Public */ var define = run.define = function(id, factory) { var newid = id.replace(/\/index$/, ''); // store mapped id to resolve paths relative to the old path if(newid !== id) original[newid] = id; // store factory under both new and old ids factories[id] = factories[newid] = factory; }; /** * @Public */ var require = run.require = function(id) { if (!modules[id]){ var message = "Could not load module: '"+id+"'"; if(!factories[id])throw new Error(message); // create a custom require function that resolves ids relative to this module var customRequire = function(newid){ return require(normalize(newid, original[id] || id)); }; // This stops infinite recursion with circular dependencies var module = {exports: modules[id] = {}}; // Build module factories[id](customRequire, module.exports, module); modules[id] = module.exports; } return modules[id]; }; ;define("src/belt", function(require, exports, module){/* get(...) -> Intermediate get -> Intermediate Intermediate.has(...) -> Intermediate Intermediate.json(...) -> Intermediate Intermediate.toJSON(...) -> JSON Intermediate.immediate(options) Intermediate.each(...) -> Intermediate Node.iterate() -> Runs directly Node.get() -> Intermediate Node.append(newnode) Node.before(childnode, newnode) Node.after(childnode, newnode) Node.detach(childnode) -> Root Node.wrap(childnode, newnode) Node.replace(childnode, newnode) -> childnode Node.removeAll() -> Stream???? could potentially replace unwrap Node.unwrap(childnode) -> wrappingnode Node.clone(childnode) -> ROOT Node.prop(...) Node.attr(...) Node.tags(...) Node.path(...) Node.text(...) */ }); ;define("src/child", function(require, exports, module){ // A static collection of elements which can be extended to serve data from ajax // and other asynchronous sources. A stream does not provide a mechanism to // model mutable or changing data. Hence any call to a stream with the same // arguments must always return the same result. This initial implementation of // streams serves an array of elements. /** * A possibly infinite stream of elements. * @constructor * @param {Array} array - array of elements to serve * @param {function=} mapper - all elements are lazily mapped with this function */ function Stream(array, mapper){ this.elements = array || []; this.mapper = mapper; } /** * Asynchronously return the next element that may match the assertions. This may * be overridden to do more concrete enhancements that take the assertions into * account while traversing.
* @param {number} minindex - the result must be at least this * @param {DNF|Assertion} assertions - may be used to improve iteration * @param {function} done - invoked with the result of the method */ Stream.prototype.next = function(minindex, assertions, done){ if(minindex >= this.elements.length){ done(null, minindex, null, {length: this.elements.length}); }else{ var element = this.elements[minindex]; if(this.mapper)element = this.mapper(element); done(null, minindex, element); } }; // This can be used to iterate over the indexes in the stream. Assertions may be // used by a concrete implementation to enhance the iteration. // ``` // stream.each(function(index, element, next){ // // use index and element // next(); // }, assertions); // ``` /** * Iterate over each element that may match the assertions beginning at the * start index. * @param {function} each - called for every element found * @param {DNF|Assertion} assertions - assertion that may filter elements * @param {number} start - starting index where the iteration begins * @param {function} done - called when no more elements can be found. */ Stream.prototype.each = function(each, assertions, start, done){ var self = this; self.next(start||0, assertions, function(err, index, element, ended){ if(err)return done(err); else if(ended) return done && done(); each(index, element, function(){ self.each(each, assertions, index+1, done); }); }); }; /** * Next element at index, with all indexes offset by a given number. * @param {Stream|Children} target - stream the element is read from * @param {function} cb - callback receiving (err, index, element, ended) * @param {number} index - index requested by the caller * @param {DNF|Assertion} assertions - passed through to the target stream * @param {number} offset - amount by which all indexes are shifted */ function next(target, cb, index, assertions, offset){ if(!offset)offset = 0; function offsetCb(err, i, element, ended){ if(ended)ended.length -= offset; cb(err, i - offset, element, ended); } target.next(index + offset, assertions, offsetCb); } /** * Build a stream that is a concatenation of two existing streams. * @constructor */ function AppendStream(original, stream){ this.next = function(minindex, assertions, done){ next(original, function(err, i, element, ended){ if(err)return done(err); if(!ended){ done(err, i, element, null); }else{ next(stream, done, minindex, assertions, -ended.length); } }, minindex, assertions); }; } /** * Build a substream of an existing stream. * @constructor */ function SubStream(original, index, count, fill){ this.next = function(minindex, assertions, done){ if(minindex >= count && fill){ return done(null, null, null, {length: count}); } next(original, function(err, i, element, ended){ if(err)return done(err); if((!ended || fill) && i >= count){ done(err, null, null, {length: count}); }else{ if(fill && ended){ done(err, i, undefined, null); }else{ done(err, i, element, ended); } } }, minindex, assertions, index); }; } var infinity = Number.POSITIVE_INFINITY; /** * Children class that contains a collection of nodes and exposes all basic * transformations that can be executed on the collection.
*/ function Children(initial){ if(!(initial instanceof Stream)){ initial = null; } this.stream = initial || new Stream(); } Children.prototype = new Stream(); Children.prototype.get = function(index, done){ // forward the element (and any error) at the given index to `done` return this.stream.next(index, null, function(err, i, element){ done(err, element); }); }; Children.prototype.next = function(minindex, assertions, done){ return this.stream.next(minindex, assertions, done); }; Children.prototype.each = function(each, assertions, start, done){ return this.stream.each(each, assertions, start, done); }; Children.prototype.append = function(element){ if(!(element instanceof Stream)){ element = new Stream([element]); } this.stream = new AppendStream(this.stream, element); }; Children.prototype.insert = function(index, element){ if(!(element instanceof Stream)){ element = new Stream([element]); } var head = new SubStream(this.stream, 0, index, true); var tail = new SubStream(this.stream, index, infinity); tail = new AppendStream(element, tail); this.stream = new AppendStream(head, tail); }; Children.prototype.detach = function(index, length){ var head = new SubStream(this.stream, 0, index, true); var tail = new SubStream(this.stream, index+length, infinity); var mid = new SubStream(this.stream, index, length); this.stream = new AppendStream(head, tail); return mid; }; module.exports = Stream; Stream.Append = AppendStream; Stream.Sub = SubStream; }); ;define("src/curry", function(require, exports, module){ // Identify the underscore variable var _curry = (global._ || (global._ = {})).runid = { equals: function(target){ return target && this === target.runid; } }; function curry(func, enableUncurry){ var uncurry = false; var curryable = function(){ var args = Array.prototype.slice.call(arguments, 0); var self = this; var posCurry = [], last = false; if(!uncurry){ for(var i = 0; i < args.length; i++){ if(_curry.equals(args[i])){ posCurry.push(i); } } last = _curry.equals(args[args.length-1]); } if(posCurry.length === 0 || uncurry){ return func.apply(self, args); } return curry(function(){ var cargs = Array.prototype.slice.call(arguments, 0); var expected = posCurry.length - (last ? 1 : 0); if(cargs.length < expected){ throw new Error("Expect at least "+expected+" arguments."); } if(last)args.pop(); for(var i = 0; i < cargs.length; i++){ if(i < expected)args[posCurry[i]] = cargs[i]; else args.push(cargs[i]); } return func.apply(self, args); }, true); }; curryable.uncurry = function(){ if(!enableUncurry)throw new Error("Uncurry disabled."); uncurry = true; return this; }; return curryable; } module.exports = curry; }); ;define("src/emitter", function(require, exports, module){ }); ;define("src/get", function(require, exports, module){}); ;define("src/index", function(require, exports, module){var curry = require('./curry'); /** * Map Shim - https://gist.github.com/jed/1031568 */ if(![].map)Array.prototype.map = function(func){ var self = this; var length = self.length; var result = []; for (var i = 0; i < length; i++){ if(i in self){ result[i] = func.call( arguments[1], // an optional scope self[i], i, self ); } } result.length = length; return result; };}); ;define("src/input", function(require, exports, module){// https://github.com/ichord/Caret.js/blob/master/src/jquery.caret.js // http://stackoverflow.com/questions/6930578/get-cursor-or-text-position-in-pixels-for-input-element // http://stackoverflow.com/questions/2897155/get-cursor-position-in-characters-within-a-text-input-field // Bounding client rect.
// http://stackoverflow.com/questions/11955345/function-to-get-position-of-an-element-relative-to-the-top-most-window // http://stackoverflow.com/questions/12194113/how-to-get-range-of-selected-text-of-textarea-in-javascript function getTextSelection(el) { var start = 0, end = 0, normalizedValue, range, textInputRange, len, endRange; if (typeof el.selectionStart == "number" && typeof el.selectionEnd == "number") { start = el.selectionStart; end = el.selectionEnd; } else { range = document.selection.createRange(); if (range && range.parentElement() == el) { len = el.value.length; normalizedValue = el.value.replace(/\r\n/g, "\n"); // Create a working TextRange that lives only in the input textInputRange = el.createTextRange(); textInputRange.moveToBookmark(range.getBookmark()); // Check if the start and end of the selection are at the very end // of the input, since moveStart/moveEnd doesn't return what we want // in those cases endRange = el.createTextRange(); endRange.collapse(false); if (textInputRange.compareEndPoints("StartToEnd", endRange) > -1) { start = end = len; } else { start = -textInputRange.moveStart("character", -len); start += normalizedValue.slice(0, start).split("\n").length - 1; if (textInputRange.compareEndPoints("EndToEnd", endRange) > -1) { end = len; } else { end = -textInputRange.moveEnd("character", -len); end += normalizedValue.slice(0, end).split("\n").length - 1; } } } } return {start: start, end: end}; } }); ;define("src/node", function(require, exports, module){ /** * Global id generator to return ascending ids. */ var ID = { current: 1, ascending: function(){ return ID.current++; } }; /** * Any cloned node can have multiple simultaneous versions */ function VersionedNode(initial){ this[initialVersion] = initial; } // Global initial Version var initialVersion = ID.ascending(); // Used during iteration VersionedNode.prototype.getVersion = function(id){ if(!this[id]){ throw new Error('Sanity Check Failed: Wrong Version'); }else if(this[id] instanceof Node){ return this[id]; }else{ return this[this[id]]; } }; // Internal operation on the node VersionedNode.prototype.cloneVersion = function(original, id){ if(this[original] instanceof Node){ this[id] = original; }else if(this[this[original]] instanceof Node){ this[id] = this[original]; }else{ throw new Error('Sanity Check Failed: No Original Version'); } }; // Used during iteration when a match was found VersionedNode.prototype.changeVersion = function(id, clonefunc){ // stored redirects are primitive numbers, so test with typeof rather than // `instanceof Number` (which is never true for number primitives) if(typeof this[id] === 'number'){ this[id] = clonefunc(this[this[id]]); } }; /** * Versioned root */ function VersionedRoot(vnode, version){ this.getRoot = function(){ return vnode.getVersion(version || initialVersion); }; } // Intermediate.each(...) -> Intermediate // Node.iterate() -> Runs directly function Node(){ } function Root(){ var version; } function extend(original, param, mapping){ for(var key in param){ if (param.hasOwnProperty(key) && !original.hasOwnProperty(key)){ original[key] = mapping ?
mapping(param[key]) : param[key]; } } return original; } function KeyValue(initial){ var store = initial || {}; // The result of this constructor var result = function(key){ if(arguments.length > 1){ store[key] = arguments[1]; } return store[key]; }; result.clone = function(){ return new KeyValue(extend({}, store)); }; result.extend = function(values){ extend(store, values); }; return result; } function InternalNode(){ this.attr = {}; this.prop = {}; this.path = {}; this.tags = {}; this.text = {}; this.children = null; } var get = {}; // tags.index = [0,4,1] // tags.name = 'xyz' // tags.root = true // prop.1 = 1 var Stream = require('./child'); get.json = function(object, options){ var flatten = (options||{}).flatten; var root = makenode(object); if(!root){ throw new Error('Expected toplevel array or object.'); } root.tags['root'] = true; return root; function makenode(object){ var result = new InternalNode(); var children = []; function iterator(index, found, tag){ var node = makenode(found); if(node){ node.tags[tag] = index; children.push(node); }else{ result.prop[index] = found; } } if(typeof object.length === 'number'){ result.tags['array'] = true; arraysearch(object, [], iterator); }else if(object.constructor === Object){ objectsearch(object, iterator); }else{ return null; } result.children = new Stream(children); return result; } function objectsearch(object, cb){ for(var i in object){ if(object.hasOwnProperty(i)){ cb(i, object[i], 'name'); } } } function arraysearch(array, index, cb){ for(var i = 0; i < array.length; i++){ // copy the index path before appending; Array#push returns the new // length, so its result must not be used as the array itself var nindex = index.slice(0); nindex.push(i); if(flatten && array[i].length){ arraysearch(array[i], nindex, cb); }else{ cb(nindex, array[i], 'index'); } } } } // TODO: convert(object, ...) is an unfinished stub in the source // Add Children from child.js // build operation object //tree.get('').each(function(){}).then(...); //tree.get('').live(function(){}).then(...); // read in js, build tree // get.json() -> Root }); ;define("src/parse", function(require, exports, module){/** * A parsable stream of tokens * @constructor */ function Parsable(original){ this.original = original; } /** * Create a stream of tokens from a regex */ Parsable.tokenize = function(string, regex){ var pos = 0; return new Parsable(string.match(regex).map(function(token){ var result = { data: token, pos: pos, assert: function(expected, returnResult){ var token = this.data; if(expected === 'name'){ if(token.name || '#:[]=!<>.*{}'.indexOf(token[0])===-1){ return true; } }else if(token === expected ){ return true; } if(returnResult)return false; throw new Error([ 'Unexpected symbol "'+token+'" ', 'expected "'+expected+'" ', 'at column: '+this.pos ].join('')); } }; pos += token.length; return result; })); }; /** * Map a stream of tokens. */ Parsable.prototype.parse = function(each){ var current, state = 0; var parsable = new Parsable(this.original.map(function(token, i){ if(!token)return token; var removed = false; var actions = { // Modify current state setState: function(s){ state = s; }, setCurrent: function(c){ current = c; }, // Remove the current token from the parsable stream remove: function(){ removed = true; }, // Validate the type of the current token expected: function(expected, returnResult){ return token.assert(expected, returnResult); } }; var result = each.call(actions, token.data, state, current); return removed ?
null : { data: result, pos: token.pos, assert: token.assert }; })); if(state > 0)throw new Error('Unexpected ending.'); return parsable; }; exports.parse = function(string){ var parsed = Parsable.tokenize(string, /\#|\:|\[|\]|[\=\!<>]+|\{|\}|\*+|\.|[^\#\:\[\]\=\!<>\{\}\*\.]+/g ).parse(function(token, state, current){ var name; // 0 -> current may be falsy or the current name array // 1 -> {{ // 2 -> {{temp // 3 -> {{temp} // 4 -> {{temp}} switch (state) { case 0: if(token[0] === '*'){ name = {wildcard: token.length}; break; }else if(this.expected('name', true)){ name = {constant: token}; break; }else if(token === '{'){ this.setState(1); }else{ this.setCurrent(null); return token; } break; case 1: this.expected('{'); this.setState(2); break; case 2: this.expected('name'); name = {breakets: token}; this.setState(3); break; case 3: this.expected('}'); this.setState(4); break; case 4: this.expected('}'); this.setState(0); break; } if(!current){ this.setCurrent(current = (name ? [name] : [])); return {name: current}; }else{ this.remove(); if(name)current.push(name); } }).parse(function(token, state, current){ var newCurrent; // -1 -> possibly [ after : // 0 -> default // 1 -> name after # // 2 -> name after : // 3 -> already found [ // 4 -> already found [name // 5 -> already found [name= // 6 -> expect ] switch (state) { case -1: if(token === '['){ this.setState(3); break; } /* falls through */ case 0: if(token.name){ newCurrent = {type: '_', name: token.name}; }else{ switch (token) { case '.': return token; case '#': this.setState(1); break; case ':': this.setState(2); break; case '[': this.setState(3); break; default: this.expected('#, : or ['); } newCurrent = {type: token}; } break; case 1: case 2: this.expected('name'); current.name = token.name; this.setState(state === 1 ? 0 : -1); break; case 3: this.expected('name'); current.prop = token.name; this.setState(4); break; case 4: if('=!<>'.indexOf(token[0])!==-1){ current.assert = token; this.setState(5); }else{ this.expected(']'); this.setState(0); } break; case 5: this.expected('name'); current.value = token.name; this.setState(6); break; case 6: this.expected(']'); this.setState(0); break; } if(newCurrent){ this.setCurrent(newCurrent); return newCurrent; }else{ this.remove(); } }); var result = [[]]; parsed.parse(function(token){ if(token === '.'){ result.push([]); }else{ result[result.length-1].push(token); } }); return result; }; }); ;define("src/query", function(require, exports, module){ /** * A query containing multiple selector traces. * @constructor */ function Query(parameters){ } Query.prototype.concat = function(parameters, has){ }; Query.prototype.toStateMachine = function(){ }; // plugin system }); ;define("src/state", function(require, exports, module){// ** This file describes the state machine that underlies runjs selectors. They // are specified in a declarative manner. ** // Disjunctive normal form (DNF) is a normalized format that any boolean logic // formula can be transformed to. This may in some cases incur an exponential // growth of the resulting DNF formula. Runjs uses DNF because it allows boolean // expressions to be more easily reasoned about. // // DNF<exp> := [term] // term := {truthy: [exp], falsey: [exp]} // // A term contains expressions that must all be truthy and falsey respectively // for the term to evaluate true. For a DNF to evaluate true only a single term // in the array has to be true. So the outer array constitutes OR expressions // and the inner arrays are AND expressions.
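// Example (an illustrative sketch): the formula (a AND NOT b) OR c maps onto
// this encoding, using the DNF constructor and methods defined below, as
//
//   var formula = new DNF([a], [b]).or(new DNF([c]));
//   formula.resolve(node); // true when (a && !b) || c holds for `node`
//
// where a, b and c are any objects exposing the resolve(node) method (for
// instance Assertion instances).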
/** * Constructs a DNF term. * @constructor * @param {Array=} truthy - the DNF term is true only if these objects resolve to truthy * @param {Array=} falsey - the DNF term is true only if these objects resolve to falsey */ function DNF(truthy, falsey){ this.terms = [{ truthy: truthy || [], falsey: falsey || [] }]; } /** * Concatenate two DNF expressions using or. * @param {DNF} target - these DNF terms are added to this terms. * @returns {DNF} - this */ DNF.prototype.or = function(target){ if(!(target instanceof DNF))target = new DNF([target]); this.terms = this.terms.concat(target.terms); return this; }; // Assertions are used to filter node objects. Attributes, properties and meta // information are asserted using predicate functions. /** * Construct an assertion * @constructor * @param {string} type - one of 'attr', 'prop', 'tags' and 'meta' * @param {string} name - the key of the property to be tested * @param {function(?,?)} predicate - evaluate property and return a boolean * @param {?} value - the reference value to be tested against */ function Assertion(type, name, predicate, value){ /** * Test a node against the assertion * @param {Object} node - this is the node that is asserted * @returns {boolean} - true only when the assertion is true */ this.resolve = function(node){ var expected = (node[type] || {})[name]; return predicate(value, expected); }; } // Now we extend DNF to include the same resolve method that Assertion uses. /** * Resolve if a DNF expression is true or false by calling resolve on the * objects in the terms. * @param {Object=} node - this node is passed to every resolve method * @returns {boolean} - true when at least one DNF term resolves to true */ DNF.prototype.resolve = function(node){ var result = false; for(var i = 0; i < this.terms.length; i++){ var termResult = true; var t = this.terms[i].truthy; var f = this.terms[i].falsey; for(var j = 0; j < t.length; j++){ termResult &= t[j].resolve(node); } for(var k = 0; k < f.length; k++){ termResult &= !f[k].resolve(node); } if(termResult)result = true; } return result; }; // The States object contains both the state machine and all active states. When // a node needs to be matched `transition` is called to get a new States object // with the same state machine but possibly different active states. /** * State objects contain both a state machine and all active states. * @constructor * @param {Array.<number>=} states - the active states * @param {Object=} transitions - transitions between states * @param {Object=} endStates - the end states of the state machine */ function States(states, transitions, endStates){ states = states || [0]; transitions = transitions || {}; endStates = endStates || {}; /** * Add a transition between states to the state machine. * @param {number} from - transition from this state * @param {number} to - transition to this state * @param {DNF|Assertion} assertions - assertions for this transition */ this.addTransition = function(from, to, assertions){ if(!transitions[from])transitions[from] = []; transitions[from].push({ next: to, dnfa: assertions }); }; /** * Set a state to be an end state. * @param {number} state - the state that becomes an end state */ this.setEndState = function(state){ endStates[state] = true; }; /** * Test if the active states contain end states.
* @returns {boolean} - true only when an end state was reached */ this.resolve = function(){ for(var i = 0; i < states.length; i++){ if(endStates[states[i]])return true; } return false; }; /** * Build a new States object with active states corresponding to transitions * that are resolved with DNF assertions. * @param {Object} node - the node that is used to transition the states * @returns {States} - new object with possibly different active states */ this.transition = function(node){ if(states.length === 0)return this; var added = {}; var result = []; for(var i = 0; i < states.length; i++){ var trans = transitions[states[i]] || []; for(var j = 0; j < trans.length; j++){ if(trans[j].dnfa.resolve(node)){ var newValue = trans[j].next; if(!added[newValue]){ added[newValue] = true; result.push(newValue); } } } } return new States(result, transitions, endStates); }; } // We already defined a resolve method that can be used to evaluate a DNF // expression of States. Now we extend DNF to also include the same transition // method that States uses. /** * Transition all States objects in a DNF expression given the node. * @param {Object} node - node that is matched in the state transition * @returns {DNF} - same terms as this but transitioned */ DNF.prototype.transition = function(node){ function copy(array){ var result = []; for(var i = 0; i < array.length; i++){ result[i] = array[i].transition(node); } return result; } var result = new DNF(); for(var i = 0; i < this.terms.length; i++){ result.terms[i] = { truthy: copy(this.terms[i].truthy), falsey: copy(this.terms[i].falsey) }; } return result; }; module.exports.States = States; module.exports.DNF = DNF; module.exports.Assertion = Assertion;}); ;define("src/toml", function(require, exports, module){ /** * Filters comments from a single line of toml * @expose uncomment * @examples * uncomment.exec('code# comment')[1] // 'code' * uncomment.exec('test# # comment')[1] // 'test' * uncomment.exec('"test#" # comment')[1] // '"test#" ' */ var uncomment = (function(){ var comment = '(?:#.*)?'; var string = '("([^\\\\"]|(\\\\.))*("|$))'; var single = string.replace(/\"/g, '\''); var regexp = string.replace(/\"/g, '/'); var other = '([^"\\/\\\'#]*)'; var isline = [string, single, regexp, other].join('|'); return new RegExp('^(('+isline+')*)'+comment+'$'); }()); /** * RegExp to find and parse table entities * @expose isTable * @examples * isTable.test('[table]') // true * isTable.exec(' [[table]]')[1] // ' ' * isTable.exec(' [[table]]')[2] // '[' * isTable.exec(' [[table]]')[3] // 'table' */ var isTable = /^(\s*)\[(\[?)([^\[\]]*)\]\]?\s*(?:#.*)?$/; /** * Attribute lines always start with a name followed by an equals sign.
* @expose canBeLine * @examples * canBeLine.test('test=xyz') // true * canBeLine.exec('test=xyz')[1] // 'test' * canBeLine.exec('test=xyz')[2] // 'xyz' */ var canBeLine = /^\s*([^ \t\[\]]*)\s*=(.*)$/; exports.parse = function (code, walker){ // Split into lines and normalize whitespace code = code.replace(/\r/g, '').split('\n'); for(var i = 0; i < code.length; i++){ var table = isTable.exec(code[i]); if(!table){ // Remove all comments (table lines handle their own comments in isTable) code[i] = code[i].replace(uncomment,'$1'); } code[i] = code[i].trim(); if(!code[i])continue; var line = canBeLine.exec(code[i]); parseLine(table, line, i, code[i], false, walker); } parseLine(null, null, code.length, null, true, walker); return walker.result(); }; // Current expression (can span multiple lines) var lastExpr = ''; // Current table var lastTable = null; // Current attribute name var lastAttr = null; // Current attributes store var lastAttrs = null; function parseLine(table, line, i, current, end, walker){ var valid = walker.parseExpression(unescape(lastExpr)); if(lastAttr){ // has a property that needs to be valid. if((table || end) && !valid){ walker.error('Expression invalid', i, lastExpr); // if walker didn't throw an exception lastAttr = null; lastExpr = ''; }else if((table || end || line) && valid){ if(!lastAttrs)lastAttrs = {}; lastAttrs[lastAttr] = valid.value; lastAttr = null; lastExpr = ''; }else{ lastExpr += '\n' + current; } } if(!lastAttr){ if(table || end){ if(lastTable){ var tabspace = walker.smallTabs ? ' ' : ' '; var indent = lastTable[1].replace(/\t/g, tabspace); var isDouble = !!lastTable[2].length; var name = lastTable[3].trim(); name = name.replace(/(^\.+)|(\.+$)/g, ''); name = unescape(name.replace(/\.+/g, '.')); if(name !== lastTable[3]){ walker.error('Invalid tablename', i, lastTable[3]); } sendTable(indent.length, isDouble, name, lastAttrs, walker); }else if(lastAttrs){ walker.root(lastAttrs); } lastTable = table; lastAttrs = {}; }else if(line){ lastAttr = line[1].trim(); lastExpr = line[2]; }else{ walker.error('Can not parse', i, current); } } } // Previously indented tables var indentions = []; // Keys found in the root var rootkeys = {}; /** * The walker must at least support these methods * (as actually called from exports.parse, parseLine and sendTable above): * - parseKey(name) -> {key: , attr: {}} * - push(key, attr, leaf, duplicate, isArray) * - pop() * - parseExpression(expr) -> {value: } or false * - error(message, linenumber, detail) * - root(attrs) * - result() * - smallTabs = false */ function sendTable(indent, isArray, name, attr, walker){ while((indentions[indentions.length-1]||{}).indent >= indent){ var count = indentions.pop().level; for(var i = 0; i < count; i++){ walker.pop(); } } name = name.split('.'); var childreen; if(indentions.length > 0){ childreen = indentions[indentions.length-1].childreen; }else{ childreen = rootkeys; } indentions.push({ level: name.length, indent: indent, childreen: {} }); var keys = ''; for(var k = 0; k < name.length; k++){ var key = walker.parseKey(name[k]); keys += '.'+key.key; var duplicate = !!childreen[keys]; childreen[keys] = true; walker.push(key, attr, k === name.length - 1, duplicate, isArray); } } }); ;define("src/traverse", function(require, exports, module){ /** * Global id generator to return ascending ids.
*/ var ID = { current: 1, ascending: function(){ return ID.current++; } }; /** * Represents a tree node */ function Node(){ this.attr = {}; this.prop = {}; this.tag = null; this.text = null; this.childreen = new Childreen(); this.pending = []; // Array of pending operations to be applied on children this.previous = {}; // Operations that are pending or true when already run this.minimum = -1; // Pending can't have an id less than this // TODO write methods to transform pending and operation.count } Node.prototype.transform = function(){ // operations can only change children, attributes or properties // detach, append, insert, dowrap, unwrap, maping, seting, hassub // Traverse over subtree and produce result - hassub // operations that "deal" with a child like "detach child at index n" // means that every previous operation like "wrap child at index n" has // to be executed or added to pending operation on child n first. // This function can call resolve for this and every child of this. }; /** * Propagate transformations of id <= maximum to the child at index * This is called on traversal to pass operations to children */ Node.prototype.propagate = function(index, maximum){ // copy only to children if this transformation was not applied to parent // xyz.** -> detach does not detach all descendants of xyz, only the children // But an operation can specifically attach new operations to this. // They are not run on this node but are pending to be applied to the children. }; }); ;define("src/usage", function(require, exports, module){ var interoperability = { "ResultSet": { "totalResultsAvailable": "1827221", "totalResultsReturned": 2, "firstResultPosition": 1, "Result": [ { "Title": "potato jpg", "Summary": "Kentang Si bungsu dari keluarga Solanum tuberosum L ini ternyata memiliki khasiat untuk mengurangi kerutan jerawat bintik hitam dan kemerahan pada kulit Gunakan seminggu sekali sebagai", "Url": "http://www.mediaindonesia.com/spaw/uploads/images/potato.jpg", "ClickUrl": "http://www.mediaindonesia.com/spaw/uploads/images/potato.jpg", "RefererUrl": "http://www.mediaindonesia.com/mediaperempuan/index.php?ar_id=Nzkw", "FileSize": 22630, "FileFormat": "jpeg", "Height": "362", "Width": "532", "Thumbnail": { "Url": "http://thm-a01.yimg.com/nimage/557094559c18f16a", "Height": "98", "Width": "145" } }, { "Title": "potato jpg", "Summary": "Introduction of puneri aloo This is a traditional potato preparation flavoured with curry leaves and peanuts and can be eaten on fasting day Preparation time 10 min", "Url": "http://www.infovisual.info/01/photo/potato.jpg", "ClickUrl": "http://www.infovisual.info/01/photo/potato.jpg", "RefererUrl": "http://sundayfood.com/puneri-aloo-indian-%20recipe", "FileSize": 119398, "FileFormat": "jpeg", "Height": "685", "Width": "1024", "Thumbnail": { "Url": "http://thm-a01.yimg.com/nimage/7fa23212efe84b64", "Height": "107", "Width": "160" } } ] } }; // Only global object is get // get.func -> Type: FunctionWrapper // get.json -> Type: Pointer // get.toml -> Type: Pointer // get('path') -> Type: Pointer // pointer.get(['path', 'or']) // pointer.filter(); // pointer.json(); // or read??
// pointer.toml(); // pointer.prop(); // pointer.attr(); // pointer.tags(); // pointer.path(); // pointer.each(); // pointer.text(); // pointer.toJSON({}) // separate from read so that options can be passed // pointer.version(); // pointer.clone(); // pointer.restore(); // pointer.extend(); // (clone could be version + extend) // pointer.component(); // Pointer only stores: roots, paths and operations // Never stores any nodes // Immediate runs these directly but doesn't store found nodes // Except => detached roots // pointer.immediate(); // pointer.prop('propname').immediate() -> returns result; // When not run, it is used as an immediate filter for propname // apply operation op with timestamp to node n // 1. Check that n has never run this op -> otherwise skip all // 2. When node was detached after op was issued -> increase counter // 3. Prepare meta data // 4. Match selector and if it matches run op // 5. Add timestamp to op.previous = {} // 6. Add operation to op.pending = Queue if op is still matchable // 7. If n is not a root, increase the op counter on the parent // Whenever any end is reached -> cleanup finished operations // Objects: an Operation is passed to n; this has a counter which is increased // Operation is matched and cloned (reset counter) and stored in n // pointer.append(); === each(function(){this.append();}) // pointer.before(); // pointer.after(); // pointer.detach(); // pointer.wrap(); // pointer.replace(); // pointer.unwrap(); // TODO thinking // 1. Can we have an operation that runs for every newly attached node (e.g. set every node to span) // 2. Can we bind to mutations in node and execute action when change // in attr or e.g. children was detected // 3. meta information // - index of current node // - depth in tree (stored for every op and increased on cloning) // - NO contextual information or scopes // 4. how does extend work lazily // - maybe node.intercept(...) can intercept all calls to children, attr, ... // TODO // 1. path expression parser // 2. child.js rethink use of indexes!!!! // titles.each(function(node, next){ // node may only be accessed until next is called -> afterwards it throws exception // }); // detached nodes do not expire // wrap called on a detached node should make the formerly detached node expire.
// => Explicit Root Type on which root may be called without expiring -> just the node in .root is changed and expired // Why no two-way binding // Because of shared state -> far more versatile and concise // html templates have no for or ifs and only very basic {{binding}} // -> because they are only useful to avoid a flash of unstyled content // -> when all data has arrived we can draw from json anyway /* var titles = get('ResultSet.Result.[Title]').json(interoperability, { // ResultSet checks tags.name and then tags.prop // [Title] checks attr first and then prop // ResultSet.Result.:prop(Title) // ResultSet.Result.* // ResultSet.Result.:tags(type=='Object') // :tags(prop=='ResultSet').Result.:tags(prop falsey) flatten: false // arrays in arrays become node and are not flattened }); // Could also do: get.read(interoperability).get(''); titles.prop('Title', 'New title'); titles.prop('Title', function(value){ return value + ' dom link'; }); titles.each(function(){ this.prop('Title', this.prop('Title') + " dom link"); }); titles.prop('Title'); // Error: Can only get prop of resolved nodes titles.immediate().prop('Title'); // Set function titles.prop('Title', get.func(alert)); titles.prop('Title'); // -> alert titles.each(function(node){ this.append({ name: 'title', text: this.prop('Title'), attr: { 'class': ['custome', 'title'] } }); }); titles.each('[Title]',function(){ var node = this.append({}); node.text(this.prop('Title')); }); titles.filter(':single').each(function(){ // ResultSet.Result.[Title]:single // Returns a single instance - which may not be the first instance in the document // !== titles.each(':single',function(){ this filters just before each is run }); titles.each(function(){ // Special cases: here first and each work differently var node = this.first('title'); // actual node this.each('title', function(){ // runs immediately and in order //this.detach(); // is this possible }); this.before('selector:any', { // usual case. Doesn't support any . or ** selectors... text: 'text' }); // !!!! might not be the same position as when gotten... this.after(node, { // most imperative control text: 'text' }); // ResultSet.Result.[Title]:any }); // When resolved with .immediate() meta data contains the position?? // Difference between node in each loop or pointer??? // Global plugin setTag: titles.setTag({ 'div:c1': 'ResultSet.Result.[Title].title' }); // Components titles.append({ name: '' }); get('ResultSet.Result').make(new XYComponent({ 'test': 'value', 'test2.path': 'path' })); // virtual paths // path : { 'xy', 'path'} // called with test.xy. get('xyz').toml('# and no newlines and maxlength', { walker: custome_walker }); var old = titles.version({recurse: false}); old.restore(); // Maybe not needed get('xyz').html('<div> </div>'); // extend, ... // How can the elements found under #result be merged into titles? titles.draw('#result'); // close tree insertion titles.each(function(node){ other.append(node.detach()); }).immediate(); // Get syntax get('{{title}}.name.{{other}}', { title: 'title', other: 'other' }); get('title').get(1).get('x.y').get(-1) get('title1', 'title2', '{{title}}', { title: 'title3' }); get(['title1', 'title2', '{{title}}', { title: 'title3' }]); // !!! There is a difference between first child element and first match // !!! get(1) works but get('title:1.xyz') does not!
// While in an each, there is a difference between each and get titles.each(function(){ this.each('filter',...); // runs immediate this.get('filter').each(...); // runs later this.first(); // runs immediate this.nth(3); // runs immediate this.nth(-1); // runs immediate this.last(); // runs immediate }); // Children Length ?? // nth(-1) & last() & get(-1) can only run when the size of // children length is known or a search is started... // Global matches // Components may 'require' global matches // If a component is added to a root so are the global path matchers // The component can then access them. // Is div / span inline separator also component ?? */ // clone difficulty // Strategy: // 1. get('...').clone(); // 2. Subtree is duplicated lazily // 3. clone the get node // 4. iterate over every node below and get('**').cloneMark() them. // => whenever change happens the matched node is duplicated // 5. since child streams are constant the child collection can simply be duplicated // 6. Any change will duplicate the full node: child collection, attr, prop... // The problem is the link the parent of the node has to the original node // => Store the duplicate and the original in the same VersionedNode object // => During iteration get knows which node to take, the duplicate or the original // => If the original node was already cloned but not yet duplicated -> the original // reference should point directly to the original of the original // => Get knows to take the original or the duplicate based on an id that is in the root // and is attached to the current get state machine state // => get('**') will reach exactly the nodes that were present when the operation was // attached (This is guaranteed by get) // ?? Attach different version to subtree -> add operation that changes version number?? // ?? Clone tree and then attach it to the original ?? // => root.node contains the current active version even after attachment // Make subtree readonly}); ;define("src/utils", function(require, exports, module){exports.warn = function(message){ };}); run.require("src"); }((typeof exports === "undefined" ? window.run={} : exports),Function("return this")()));
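// Usage sketch for src/toml (hypothetical, not part of the library): a minimal
// walker implementing the interface documented in src/toml. Note that sendTable
// reads key.key from the object returned by parseKey, and that exports.parse
// also calls walker.root(attrs) and walker.result(), so both are included.
//
// var walker = {
//   smallTabs: false,
//   tables: [],
//   parseKey: function(name){ return {key: name, attr: {}}; },
//   push: function(key, attr, leaf, duplicate, isArray){ this.tables.push(key.key); },
//   pop: function(){},
//   parseExpression: function(expr){
//     expr = expr.trim();
//     return expr ? {value: expr} : false;
//   },
//   error: function(message, line, detail){ throw new Error(message + ' (line ' + line + ')'); },
//   root: function(attrs){ this.rootAttrs = attrs; },
//   result: function(){ return this.tables; }
// };
// // e.g., with a handle `toml` on the "src/toml" module:
// // toml.parse('a = 1\n[table]\nb = 2', walker) -> ['table'], with walker.rootAttrs = {a: '1'}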
{ "redpajama_set_name": "RedPajamaGithub" }
Congratulations! If you were directed here, it means that you have an intake meeting scheduled with Human Resources. Prior to your interview, please make sure that your application is completed on Applicant Tracking and that you have gathered your reference letters and all other required documents. Take some time to print and sign the designated forms, take the required health forms to your doctor (if possible; this is not required before your intake meeting), and review the Read Only documents. Official Transcripts [in a sealed envelope] (required of all employees): if you are unable to bring them to your intake meeting, please be aware that you will be required to request that they be sent to HR as soon as possible. Print and use the check-off list to keep track of your documents. It is preferred that you bring completed documents to your meeting. However, if returning them by scan/email, please send them as PDFs to hardoid@dearbornschools.org. Please do not scan your documents as one large file; scan and email each document as a separate file, all attached to one email. Print the Acknowledgement form and sign it only after you have read the listed online Policies. New teachers are encouraged to visit School Improvement/Leadership Coaching/Dearborn Teacher University to learn more about professional development requirements and opportunities, the Mentor/Mentee program, etc. Within a few days after your intake meeting with the Human Resources Director, HR will assign you an employee ID # and you will be able to contact the Media & Information Technology Department to obtain your computer/email access. Please refer to this link for more information: Accessing Email. If you have any questions, you may contact David Hardoin (Secretary, Instructional) either by email at hardoid@dearbornschools.org or by phone at (313) 827-3069.
{ "redpajama_set_name": "RedPajamaC4" }
\section{Introduction} The aim of this work is to derive von-Kármán plate theory from nonlinear, three-di\-men\-sion\-al, atomistic models in a certain energy scaling as the interatomic distance $\varepsilon$ and the thickness of the material $h$ both tend to zero. The passage from atomistic interaction models to continuum mechanics (i.e., the limit $\varepsilon \to 0$) has been an active area of research over the last years. In particular, this limit has been well studied for three-dimensional elasticity, cf., e.g.,\ \cite{BLL:02, alicandrocicalese, schmidtlinelast, braunschmidt13, emingstatic, ortnertheil13, BraunSchmidt16, Braun17}. At the same time, there have emerged rigorous results deriving effective thin film theories from three-dimensional nonlinear (continuum) elasticity in the limit of vanishing aspect ratio (i.e., the limit $h \to 0$), cf.\ \cite{LeDretRaoult:95,FJM:02,FJM:06,ContiMaggi2008,olbermannruna17}. First efforts to combine these passages and investigate the simultaneous limits $\varepsilon\to0$ and $h\to0$ were made in \cite{FJ:00,schmidt-membrane,Sch:08b} for membranes (whose energy scales as the thickness $h$) and in \cite{Sch:06} for Kirchhoff plates (whose energy scales like $h^3$). In particular, this left open the derivation of the von-Kármán plate theory, which describes plates subject to small deflections with energy scale $h^5$ and might even be the most widely used model for thin structures in engineering. We do, however, want to mention \cite{bartels17} for a result regarding a discrete von-Kármán plate theory that is motivated numerically rather than physically. Our first aim is to close this gap. For thin films consisting of many atomic layers one expects the scales $\varepsilon$ and $h$ to separate so that the limit $\varepsilon,h\to0$ along $\frac{h}{\varepsilon} \to \infty$ is equivalent to first passing to the continuum limit $\varepsilon\to0$ and reducing the dimension from 3d to 2d in the limit $h\to0$. We will show in Theorem~\ref{thm:Gammalimit}a) that this is indeed true. By way of contrast, for {\em ultrathin} films consisting of only a few atomic layers, more precisely, if $\varepsilon,h\to0$ such that the number of layers $\nu = \frac{h}{\varepsilon}+1$ remains bounded, the classical von-Kármán theory turns out to capture the energy only to leading order in $\frac{1}{\nu}$. The next aim is thus to derive a new finite layer version of the von-Kármán plate theory featuring additional explicit correction terms, see Theorem~\ref{thm:Gammalimit}b). In view of the fabrication of extremely thin layers, such an analysis might be of some interest also in engineering applications. An interesting question related to such applications, which we do not address here, would be to extend our analysis to heterogeneous structures as in \cite{DeBenitoSchmidt:19a,DeBenitoSchmidt:19b}. Our third aim concerns a more fundamental modelling point of view which is based on the very low energy of the von-Kármán scaling: If the plate is not too thick (more precisely, if $\frac{h^5}{\varepsilon^3} \to 0$), we strengthen the previous results to allow for a much wider range of interaction models, which are much more physically realistic (compared to \cite{FJM:02,FJM:06}), as they can now be invariant under reflections and no longer need to satisfy growth assumptions at infinity, see Theorems~\ref{thm:Gammalimit2} and \ref{thm:Gammalimit3}. In particular, this includes Lennard-Jones-type interaction models; see Example~\ref{ex:mass-spring-LJ}.
Finally, on a technical note, the proof of our main result set forth in Section~\ref{section:Proofs} elucidates the appearance and structure of the correction terms in the ultrathin film regime. Both in \cite{Sch:06} and the present contribution, at the core of the proof lies the identification of the limiting strain, which in the discrete setting can be seen as a $3 \times 8$ matrix rather than a $3 \times 3$ matrix. In \cite{Sch:06} this has been accomplished with the help of ad hoc techniques that allowed to compare adjacent lattice unit cells. Now, for the proof of Proposition~\ref{prop:limiting-strain} we introduce a more general and flexible scheme to capture discreteness effects by splitting the deformation of a typical lattice unit cell into affine and non-affine contributions and passing to weak limits of tailor-made finite difference operators. While for $h \gg \varepsilon$ these operators will tend to a differential operator in the limit, if $h \sim \varepsilon$, finite differences in the $x_3$ direction will not become infinitesimal and lead to lower-order corrections in $\frac{1}{\nu}$. This work is organized as follows: In Section \ref{section:Models-and-Results}, we first describe the atomistic interaction model and then present our results. Our main theorem, Theorem \ref{thm:Gammalimit}, details the $\Gamma$-limits for both the \emph{thin} ($\nu \to \infty$) and \emph{ultrathin} ($\nu$ bounded) cases. Theorems \ref{thm:Gammalimit2} and \ref{thm:Gammalimit3} then extend these results to more general and more physically realistic models. Section \ref{sec:preparations} contains a few technical tools to circumvent rigidity problems at the boundary and to compare continuous with discrete quantities. Using these tools we then prove our results in Section \ref{section:Proofs}. \section{Models and Results}\label{section:Models-and-Results} \subsection{Atomic Model} Let $S \subset \mathbb{R}^2 = \mathbb{R}^2 \times \{ 0 \} \subset \mathbb{R}^3$ be an open, bounded, connected, nonempty set with Lipschitz boundary. To keep the notation simple we will only consider the cubic lattice. Let $\varepsilon>0$ be a small parameter describing the interatomic distance; we then consider the lattice $\varepsilon \mathbb{Z}^3$. We denote the number of atom layers in the film by $\nu \in \mathbb{N}$, $\nu \geq 2$, and the thickness of the film by $h=(\nu-1) \varepsilon$. In the following let us consider sequences $h_n, \varepsilon_n, \nu_n$, $n \in \mathbb{N}$, such that $\varepsilon_n, h_n \to 0$. The macroscopic reference region is $\Omega_n = S \times (0,h_n)$ and so the (reference) atoms of the film are $\Lambda_n = \overline{\Omega}_n \cap \varepsilon_n \mathbb{Z}^3$. We will assume that the energy can be written as a sum of cell energies. More precisely, as in \cite{Sch:06} we let $z^1, \dots, z^8$ be the corners of the unit cube centered at $0$ and write \[Z=(z^1, \dots, z^8)= \frac{1}{2}\begin{pmatrix*}[r] -1&1&1&-1&-1&1&1&-1\\-1&-1&1&1&-1&-1&\phantom{-}1&1\\-1&-1&-1&-1&1&1&1&1 \end{pmatrix*}.\] Furthermore, by $\Lambda_n'= \big( \bigcup_{x \in \Lambda_n} (x + \varepsilon_n \{ z^1, \ldots, z^8 \}) \big) \cap \big( \mathbb{R}^2 \times (0, h_n) \big)$ we denote the set of midpoints of lattice cells $x + [-\varepsilon_n /2,\varepsilon_n /2]^3$ contained in $\mathbb{R}^2 \times [0, h_n]$ for which at least one corner lies in $\Lambda_n$. Additionally, let $\vec{w}(x) = \frac{1}{\varepsilon_n}(w(x+\varepsilon_n z^1), \dots, w(x+\varepsilon_n z^8)) \in \mathbb{R}^{3 \times 8}$.
Then, we assume that the atomic interaction energy for a deformation map $w \colon \Lambda_n \to \mathbb{R}^3$ can be written as \begin{equation} \label{energyinteratom} E_{\rm atom}(w) = \sum_{x \in \Lambda_n'} W(x,\vec{w}(x)), \end{equation} where $W(x,\cdot) : \mathbb{R}^{3\times 8} \to [0,\infty)$ only depends on those $\vec{w}_i$ with $x+\varepsilon_n z^i \in \Lambda_n$, which makes \eqref{energyinteratom} meaningful even though $w$ is only defined on $\Lambda_n$. As a full interaction model with long-range interaction would be significantly more complicated in terms of notation and would result in a much more complicated limit for finitely many layers, we restrict ourselves to these cell energies. In the following we will sometimes discuss the upper and lower part of a cell separately. We write $A = (A^{(1)}, A^{(2)})$ with $A^{(1)}, A^{(2)} \in \mathbb{R}^{3 \times 4}$ for a $3 \times 8$ matrix $A$. If the full cell is occupied by atoms, i.e., $x+ \varepsilon_n z^i \in \Lambda_n$ for all $i$, then we assume that $W$ is given by a homogeneous cell energy $W_{\rm cell}:\mathbb{R}^{3\times 8} \to [0,\infty)$ with the addition of a homogeneous surface energy $W_{\rm surf}:\mathbb{R}^{3\times 4} \to [0,\infty)$ at the top and bottom. That means, \[W(x,\vec{w}) = \begin{cases} W_{\rm cell}(\vec{w}) & \text{if } x_3 \in (\varepsilon_n/2, h_n - \varepsilon_n/2), \\ W_{\rm cell}(\vec{w})+ W_{\rm surf}(\vec{w}^{(2)}) & \text{if } \nu_n \geq 3 \text{ and } x_3 = h_n-\varepsilon_n/2, \\ W_{\rm cell}(\vec{w}) + W_{\rm surf}(\vec{w}^{(1)}) & \text{if } \nu_n \geq 3 \text{ and } x_3 = \varepsilon_n/2, \\ W_{\rm cell}(\vec{w}) + \sum_{i=1}^2 W_{\rm surf}( \vec{w}^{(i)}) & \text{if } \nu_n = 2 \text{ and } x_3 = h_n/2. \end{cases}\] \begin{example}\label{ex:mass-spring} A basic example is given by a mass-spring model with nearest and next to nearest neighbor interaction: \begin{align*} E_{\rm atom}(w) &= \frac{\alpha}{4} \sum_{x,x' \in \Lambda_n \atop |x - x'| = \varepsilon_n} \Big( \frac{|w(x) - w(x')|}{\varepsilon_n} - 1 \Big)^2 + \frac{\beta}{4} \sum_{x,x' \in \Lambda_n \atop |x - x'| = \sqrt{2} \varepsilon_n} \Big( \frac{|w(x) - w(x')|}{\varepsilon_n} - \sqrt{2} \Big)^2. \end{align*} $E_{\rm atom}$ is of the form \eqref{energyinteratom} if we set \begin{align*} W_{\rm cell}(\vec{w}) &= \frac{\alpha}{16} \sum_{1 \le i,j \le 8 \atop |z^i - z^j| = 1} \big( |w_i - w_j| - 1 \big)^2 + \frac{\beta}{8} \sum_{1 \le i,j \le 8 \atop |z^i - z^j| = \sqrt{2}} \big( |w_i - w_j| - \sqrt{2} \big)^2 \end{align*} and \begin{align*} W_{\rm surf}(w_1,w_2,w_3,w_4) &= \frac{\alpha}{8} \sum_{1 \le i,j \le 4 \atop |z^i - z^j| = 1} \big( |w_i - w_j| - 1 \big)^2 \\ &\qquad\qquad + \frac{\beta}{8} \sum_{1 \le i,j \le 4 \atop |z^i - z^j| = \sqrt{2}} \big( |w_i - w_j| - \sqrt{2} \big)^2. \end{align*} \end{example} We will also allow for energy contributions from body forces $f_n \colon \Lambda_n \to \mathbb{R}^3$ given by \[E_{\rm body}(w)=\sum_{x \in \Lambda_n} w(x) \cdot f_n(x).\] We will assume that the $f_n$ do not depend on $x_3$, that $f_n(x)=0$ for $x$ in an atomistic neighborhood of the lateral boundary, see \eqref{eq:force-bdry0}, and that there is no net force or first moment, \begin{align}\label{eq:force-bed} \sum_{x \in \Lambda_n} f_n(x) =0, \quad \sum_{x \in \Lambda_n} f_n(x) \otimes x'=0, \end{align} so as not to give a preference to any specific rigid motion.
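To illustrate why \eqref{eq:force-bed} rules out such a preference, we record an elementary computation (not part of the original exposition; it uses that $f_n$ is independent of $x_3$ and vanishes near the lateral boundary, so that the atoms carrying forces form complete columns of $\nu_n$ atoms): for a rigid motion $w(x) = R(x+c)$ with $R \in \SO(3)$ and $c \in \mathbb{R}^3$, \begin{align*} E_{\rm body}(w) = \sum_{x \in \Lambda_n} R(x+c) \cdot f_n(x) = Rc \cdot \sum_{x \in \Lambda_n} f_n(x) + R : \sum_{x \in \Lambda_n} f_n(x) \otimes x = 0, \end{align*} since the first sum vanishes by the first condition in \eqref{eq:force-bed}, the in-plane columns of the second moment vanish by the second condition, and the remaining moment factorizes as $\sum_{x \in \Lambda_n} x_3 f_n(x) = \big( \sum_{x_3} x_3 \big) \sum_{x'} f_n(x') = 0$, because $\sum_{x \in \Lambda_n} f_n(x) = \nu_n \sum_{x'} f_n(x') = 0$. Hence $E_{\rm body}$ vanishes on all rigid motions.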
Lastly, we assume that, after extension to functions $\bar{f}_n$ which are piecewise constant on each $x + (-\frac{\varepsilon_n}{2}, \frac{\varepsilon_n}{2})^2$, $x \in \varepsilon_n \mathbb{Z}^2$, we have $h_n^{-3} \bar{f}_n \to f$ in $L^2(S)$. Overall, the energy is given as the sum \begin{equation}\label{eq:energy} E_n(w) = \frac{\varepsilon_n^3}{h_n} \big(E_{\rm atom}(w) + E_{\rm body}(w) \big). \end{equation} Due to the factor $\frac{\varepsilon_n^3}{h_n}$ this behaves like an energy per unit (undeformed) surface area. Let us make some additional assumptions on the interaction energy. We assume that $W_{\rm cell}$, $W_{\rm surf}$, and all $W(x, \cdot)$ are invariant under translations and rotations, i.e., they satisfy $$ W(A) = W(A + (c,\dots,c)) \mbox{ and } W(RA) = W(A) $$ for any $A \in \mathbb{R}^{3 \times 8}$ or $A \in \mathbb{R}^{3 \times 4}$, respectively, and any $c \in \mathbb{R}^3$ and $R \in \SO(3)$. Furthermore, we assume that $W_{\rm cell}(Z)=W(x, Z) =0$, which in particular implies $W_{\rm surf}(Z^{(1)})=W_{\rm surf}(Z^{(2)})=0$, where $(Z^{(1)}, Z^{(2)}) = Z$. Finally, we assume that $W$ and $W_{\rm cell}$ are $C^2$ in a neighborhood of $Z$, while $W_{\rm surf}$ is $C^2$ in a neighborhood of $Z^{(1)}$. Since our model is translationally invariant, it is then equivalent to consider the discrete gradient \[ \bar{\nabla} w(x) = \frac{1}{\varepsilon_n} \big(w(x+\varepsilon_n z^1) - \langle w \rangle , \dots, w(x+\varepsilon_n z^8) - \langle w \rangle\big) \] with \[\langle w \rangle = \frac{1}{8} \sum_{i=1}^8 w(x+\varepsilon_n z^i) \] instead of $\vec{w}(x)$ for any $x$ with $x+ \varepsilon_n z^i \in \Lambda_n$ for all $i$. In particular, the discrete gradient satisfies \[ \sum_{i=1}^8 (\bar{\nabla} w(x))_{\cdot i} =0.\] The bulk term is also assumed to satisfy the following single-well growth condition. \begin{itemize} \item[{\bf (G)}] Assume that there is a $c_0>0$ such that \[W_{\rm cell}(A) \geq c_0 \dist^2(A, \SO(3)Z)\] for all $A \in \mathbb{R}^{3 \times 8}$ with $\sum_{i=1}^8 A_{\cdot i} =0$. \end{itemize} \subsection{Rescaling and Convergence of Displacements} It turns out to be convenient to rescale our reference sets to the fixed domain $\Omega = S \times (0,1)$. For $x \in \mathbb{R}^3$ let us always write $x=(x', x_3)^T$ with $x'\in \mathbb{R}^2$. We define $\tilde{\Lambda}_n = H_n^{-1} \Lambda_n$ and $\tilde{\Lambda}_n'= H_n^{-1} \Lambda_n'$ with the rescaling matrix \[H_n = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0& 0& h_n\\ \end{pmatrix}.\] A deformation $w \colon \Lambda_n \to \mathbb{R}^3$ can be identified with the rescaled deformation $y \colon \tilde{\Lambda}_n \to \mathbb{R}^3$ given by $y(x) = w(H_n x)$. We then write $E_n(y)$ for $E_n(w)$. The rescaled discrete gradient is then given by \[(\bar{\nabla}_n y(x))_{\cdot i} := \frac{1}{\varepsilon_n}(y(x' + \varepsilon_n (z^i)', x_3 + \frac{\varepsilon_n}{h_n} z^i_3)-\langle y \rangle) = \bar{\nabla}w (H_n x)\] for $x \in \tilde{\Lambda}_n'$, where now \[\langle y \rangle = \frac{1}{8} \sum_{i=1}^8 y(x'+\varepsilon_n (z^i)',x_3 + \frac{\varepsilon_n}{h_n} z^i_3 ).\] For a differentiable $v \colon \Omega \to \mathbb{R}^k$ we analogously set $\nabla_n v := \nabla v H_n^{-1} = (\nabla'v, \frac{1}{h_n} \partial_3 v)$. In Section \ref{sec:preparations} we will discuss a suitable interpolation scheme with additional modifications at $\partial S$ to arrive at a $\dtilde{y}_n \in W^{1,2}(\Omega; \mathbb{R}^3)$ corresponding to $y_n$.
Furthermore, for sequences in the von-Kármán energy scaling we expect $y_n$ and $\dtilde{y}_n$ to be close to a rigid motion $x \mapsto R^*_n(x+c_n)$ for some $R^*_n, c_n$ and are therefore interested in the normalized deformation \begin{align}\label{eq:yn-tilde-def} \tilde{y}_n := {R^*_n}^T\dtilde{y}_n - c_n, \end{align} which would then be close to the identity. The von-Kármán displacements in the limit will then be found as the limit objects of \begin{align} u_n(x') &:= \frac{1}{h_n^2} \int_0^1 (\tilde{y}_n)'-x'\,dx_3, \text{ and} \label{eq:un-def} \\ v_n(x') &:= \frac{1}{h_n} \int_0^1 (\tilde{y}_n)_3\,dx_3. \label{eq:vn-def} \end{align} \subsection{The $\Gamma$-convergence result} To describe the limit energy, let $Q_{\rm cell}(A) = D^2W_{\rm cell}(Z)[A,A]$ for $A\in\mathbb{R}^{3 \times 8}$ and $Q_{\rm surf}(A) = D^2W_{\rm surf}(Z^{(1)})[A,A]$ for $A \in \mathbb{R}^{3 \times 4}$. By frame indifference, \begin{align}\label{eq:Q-invariance} Q_{\rm cell}(A Z + c \otimes (1, \ldots, 1)) = Q_{\rm surf}(A Z^{(1)} + c \otimes (1,1,1,1)) = 0 \end{align} for all $c \in \mathbb{R}^3$ and all skew-symmetric $A \in \mathbb{R}^{3 \times 3}$. We introduce a relaxed quadratic form on $\mathbb{R}^{3 \times 8}$ by \begin{align*} Q_{\rm cell}^{\rm rel}(A) &= \min_{b \in \mathbb{R}^3} Q_{\rm cell} \big( a_1-\tfrac{b}{2}, \ldots, a_4-\tfrac{b}{2}, a_5+\tfrac{b}{2}, \ldots, a_8+\tfrac{b}{2} \big) \\ &= \min_{b \in \mathbb{R}^3} Q_{\rm cell}(A + (b \otimes e_3) Z) = \min_{b \in \mathbb{R}^3} Q_{\rm cell}(A + {\rm sym} (b \otimes e_3) Z). \end{align*} By Assumption {\bf (G)} $Q_{\rm cell}$ is positive definite on $(\mathbb{R}^3 \otimes e_3) Z$. Therefore, for each $A \in \mathbb{R}^{3 \times 8}$ there exists a (unique) $b = b(A)$ such that \begin{align}\label{eq:bmin-Q3} Q_{\rm cell}^{\rm rel}(A) &= Q_{\rm cell}(A + (b(A) \otimes e_3) Z) = Q_{\rm cell}(A + {\rm sym} (b(A) \otimes e_3) Z) \end{align} and the mapping $A \mapsto b(A)$ is linear. (If $((v_i \otimes e_3) Z)_{i=1,2,3}$ is a $Q_{\rm cell}$-orthonormal basis of $(\mathbb{R}^3 \otimes e_3) Z$, then $b(A) = - \sum_{i=1}^3 Q_{\rm cell}[(v_i \otimes e_3) Z, A] \, v_i$, where $Q_{\rm cell}[\cdot, \cdot]$ denotes the symmetric bilinear form corresponding to the quadratic form $Q_{\rm cell}(\cdot)$.) Finally, let us write \[ Q_2(A) = Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} Z \bigg), \qquad Q_{2,{\rm surf}}(A) = Q_{\rm surf} \bigg( \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} Z \bigg) \] for any $A \in \mathbb{R}^{2 \times 2}$. We are now in a position to state our main theorem in its first version. \begin{theorem} \label{thm:Gammalimit} {\rm a)} If $\nu_n \to \infty$, then $\frac{1}{h_n^4} E_n\stackrel{\Gamma}{\longrightarrow} E_{\rm vK}$ with \begin{align*} E_{\rm vK}(u,v,R^*) &:= \int_S \tfrac{1}{2} Q_2( G_1(x')) + \tfrac{1}{24} Q_2(G_2(x')) + f(x') \cdot v(x') R^* e_3 \, dx', \end{align*} where $G_1(x') = {\rm sym} \nabla' u(x') + \tfrac{1}{2} \nabla' v(x') \otimes \nabla' v(x')$ and $G_2(x') = - (\nabla')^2 v(x')$.
More precisely, for every sequence $y_n$ with bounded energy $\frac{1}{h_n^4} E_n(y_n) \leq C$, there exists a subsequence (not relabeled), a choice of $R^*_n \in \SO(3), c_n \in \mathbb{R}^3$, and maps $u\in W^{1,2}(S;\mathbb{R}^2)$, $v \in W^{2,2}(S)$ such that $(u_n, v_n)$ given by \eqref{eq:un-def}, \eqref{eq:vn-def} and \eqref{eq:yn-tilde-def} satisfy $u_n \rightharpoonup u$ in $W^{1,2}_{\rm loc}(S;\mathbb{R}^2)$, $v_n \to v$ in $W^{1,2}_{\rm loc}(S)$, $R^*_n \to R^*$, and \[\liminf_{n \to \infty} \frac{1}{h_n^4} E_n(y_n) \geq E_{\rm vK}(u,v, R^*).\] On the other hand, this lower bound is sharp, as for every $u\in W^{1,2}(S;\mathbb{R}^2)$, $v \in W^{2,2}(S)$, and $R^* \in \SO(3)$ there is a sequence $y_n$ such that $u_n \rightharpoonup u$ in $W^{1,2}_{\rm loc}(S;\mathbb{R}^2)$, $v_n \to v$ in $W^{1,2}_{\rm loc}(S)$ (where we can take $R^*_n = R^*$, $c_n=0$ without loss of generality) and \[\lim_{n \to \infty} \frac{1}{h_n^4} E_n(y_n) = E_{\rm vK}(u,v,R^*).\] \medskip \noindent {\rm b)} If $\nu_n \equiv \nu \in \mathbb{N}$, then $\frac{1}{h_n^4} E_n\stackrel{\Gamma}{\longrightarrow} E^{(\nu)}_{\rm vK}$, to be understood in exactly the same way as in a), where \begin{align*} E^{(\nu)}_{\rm vK}(u,v, R^*) &= \int_S \tfrac{1}{2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} \sym G_1(x') & 0 \\ 0 & 0 \end{pmatrix} Z + \tfrac{1}{2(\nu-1)} G_3(x') \bigg) \\ &~~\qquad + \tfrac{\nu(\nu-2)}{24(\nu-1)^2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z\bigg) \\ &~~\qquad + \tfrac{1}{\nu-1} Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \frac{\partial_{12} v(x')}{4(\nu-1)} M^{(1)} \bigg) \\ &~~\qquad + \tfrac{1}{4(\nu-1)} Q_{\rm surf} \bigg( \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)}\bigg) \\ &~~\qquad + \tfrac{\nu}{\nu-1} f(x') \cdot v(x') R^* e_3 \, dx'. \end{align*} Here, \begin{align} G_3(x') &= \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z_- + \partial_{12}v(x') M , \label{eq:G3def} \\ M &= (M^{(1)}, M^{(2)}) = \tfrac{1}{2} e_3 \otimes (+1, -1, +1, -1, +1, -1, +1, -1), \label{eq:M-def} \\ Z_- &= (-Z^{(1)},Z^{(2)}) = (-z^1,-z^2,-z^3,-z^4,+z^5,+z^6,+z^7,+z^8). \label{eq:Zminus-def} \end{align} \end{theorem} In the following we use the notation $E_{\rm vK}(u,v)$, respectively, $E^{(\nu)}_{\rm vK}(u,v)$, for the functionals without the force term. \begin{example}\label{ex:mass-spring-orient} Theorem~\ref{thm:Gammalimit} applies to the interaction energy of Example~\ref{ex:mass-spring} if $W_{\rm cell}$ is augmented by an additional penalty term $+ \chi(\vec{w})$ which vanishes in a neighborhood of $\SO(3) Z$ but is $\ge c > 0$ in a neighborhood of ${\rm O}(3)Z \setminus \SO(3) Z$, so as to guarantee orientation preservation. \end{example} \begin{remark} \begin{enumerate} \item The result in a) is precisely the functional one obtains by first applying the Cauchy-Born rule (in 3d) in order to pass from the discrete set-up to a continuum model and afterwards computing the (purely continuum) $\Gamma$-limit on the energy scale $h^5$ as $h\to0$ as in \cite{FJM:06}. Indeed, the Cauchy-Born rule associates the continuum energy density \[ W_{\rm CB}(A) = W_{\rm cell}(A Z) \] to the atomic interaction $W_{\rm cell}$, and so $Q_{\rm cell}(AZ) = D^2W_{\rm CB}(\Id)[A,A] =: Q_{\rm CB}(A)$ for $A\in\mathbb{R}^{3 \times 3}$, in particular, \[ Q_2(A) = \min_{b \in \mathbb{R}^3} Q_{\rm CB} \bigg( \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix} + b \otimes e_3 \bigg).
\] \item In contrast, for finite $\nu$ non-affine lattice cell deformations of the form $A Z_- + a M$, $A \in \mathbb{R}^{3 \times 3}$, $a \in \mathbb{R}$, need to be taken into account. While $A Z_-$ is non-affine in the out-of-plane direction, $aM$ distorts a lattice unit cell in-plane in a non-affine way. \item Suppose that in addition $W_{\rm cell}$ and $W_{\rm surf}$ satisfy the following antiplane symmetry condition: \begin{align*} W_{\rm cell}(w_1, \ldots, w_8) &= W_{\rm cell} (P w_5, \ldots, P w_8, P w_1, \ldots, P w_4), \\ W_{\rm surf}(w_1, \ldots, w_4) &= W_{\rm surf}(P w_1, \ldots, P w_4), \end{align*} where $P$ is the reflection $P(x',x_3) = (x',-x_3)$. This holds true, e.g., in mass-spring models such as in Example~\ref{ex:mass-spring}. As both terms in $G_3$ switch sign under this transformation, while the affine terms with $G_1$ and $G_2$ remain unchanged, one finds that the quadratic terms in $E^{(\nu)}_{\rm vK}$ decouple in this case and we have \begin{align*} E^{(\nu)}_{\rm vK}(u,v) &= \int_S \tfrac{1}{2} Q_2 ( G_1(x') ) + \tfrac{\nu(\nu-2)}{24(\nu-1)^2} Q_2 ( G_2(x') ) + \tfrac{1}{8(\nu-1)^2} Q_{\rm cell}^{\rm rel} ( G_3(x') ) \\ &~~\qquad + \tfrac{1}{\nu-1} Q_{2,{\rm surf}} ( G_1(x') ) + \tfrac{(\partial_{12} v(x'))^2}{16(\nu-1)^3} Q_{\rm surf} ( M^{(1)} ) \\ &~~\qquad + \tfrac{1}{4(\nu-1)} Q_{2,{\rm surf}} ( G_2(x') ) \, dx' \\ &= E_{\rm vK}(u,v) + \int_S \tfrac{1}{\nu-1} \Big[ Q_{2,{\rm surf}} ( G_1(x') ) + \tfrac{1}{4} Q_{2,{\rm surf}} ( G_2(x') ) \Big] \\ &\qquad\qquad\qquad\qquad + \tfrac{1}{8(\nu-1)^2} \Big[Q_{\rm cell}^{\rm rel} ( G_3(x') ) - \tfrac{1}{3} Q_2 ( G_2(x') ) \Big] \\ &\qquad\qquad\qquad\qquad\qquad + \tfrac{1}{16(\nu-1)^3} (\partial_{12} v(x'))^2 Q_{\rm surf} ( M^{(1)} ) \, dx'. \end{align*} \item Standard arguments in the theory of $\Gamma$-convergence show that for a sequence $(y_n)$ of almost minimizers of $E_n$ the in-plane displacement $u_n$, the out-of-plane displacement $v_n$ and the overall rotation $R^*_n$ converge (up to subsequences) to a minimizer $(u, v, R^*)$ of $E_{\rm vK}$, respectively, $E_{\rm vK}^{(\nu)}$. \item For the original sequence $y_n$, near the lateral boundary there can be lattice cells for which only a subset of their corners belongs to $\Lambda_n$. As a consequence these deformations cannot be guaranteed to be rigid on such cells and the scaled in-plane and out-of-plane displacements may blow up. We thus chose to modify $y_n$ in an atomistic neighborhood of the lateral boundary so as to pass to the globally well-behaved quantities $\tilde{y}_n$, see Section \ref{sec:preparations}. For the original sequence $y_n$, Theorem~\ref{thm:Gammalimit} implies a $\Gamma$-convergence result with respect to weak convergence in $W^{1,2}_{\rm loc}$. \end{enumerate} \end{remark} \medskip \subsection{The $\Gamma$-convergence result under weaker assumptions} One physically unsatisfying aspect of Theorem \ref{thm:Gammalimit} is the strong growth assumption {\bf (G)} which is in line with the corresponding continuum results \cite{FJM:06}. The problem is actually two-fold. First, typical physical interaction potentials, like Lennard-Jones potentials, do not grow at infinity but converge to a constant with derivatives going to $0$. Second, {\bf (G)} also implies that $W_{\rm cell}(-Z)>W_{\rm cell}(Z)$. In particular, the atomistic interaction could not even be $\Oo(3)$-invariant. Contrary to the continuum case, it is actually possible to remove these restrictions in our atomistic approach.
Indeed, if one assumes $\nu_n^5 \varepsilon_n^2 \to 0$ or equivalently $h_n^5/\varepsilon_n^3 \to 0$, then the von-Kármán energy scaling implies that the cell energy at every single cell must be small. (An energy bound $E_n(w_n) \leq C h_n^4$ gives $\sum_{x \in \Lambda_n'} W(x, \vec{w}_n(x)) \leq C h_n^5 / \varepsilon_n^3 \to 0$.) In terms of the number of atom layers $\nu$, this condition includes the case of fixed $\nu$, as well as the case $\nu_n \to \infty$ as long as this divergence is sufficiently slow, namely $\nu_n \ll \varepsilon_n^{-2/5}$. In this case, growth assumptions at infinity should no longer be relevant. In fact, we can replace {\bf (G)} by the following much weaker assumption with no growth at infinity and full $\Oo(3)$-invariance. \begin{itemize} \item[{\bf (NG)}] Assume that $W_{\rm cell}(A)= W_{\rm cell}(-A)$ and that there is some neighborhood $U$ of $\Oo(3)Z$ and a $c_0>0$ such that \[W_{\rm cell}(A) \geq c_0 \dist^2(A, \Oo(3)Z)\] for all $A \in U$ with $\sum_{i=1}^8 A_{\cdot i} =0$ and \[W_{\rm cell}(A) \geq c_0\] for all $A \notin U$ with $\sum_{i=1}^8 A_{\cdot i} =0$. \end{itemize} One natural problem arising from this is that atoms that are further apart in the reference configuration can end up at the same position after deforming. In particular, due to the full $\Oo(3)$-symmetry, neighboring cells can be flipped into each other without any cost to the cell energies, which completely destroys any rigidity that one expects in this problem. As a remedy, whenever we assume {\bf (NG)}, we will add a rather mild non-penetration term to the energy that can be thought of as a minimal term representing interactions between atoms that are further apart in the reference configuration. To make this precise, for small $\delta, \gamma > 0$ let $V \colon \mathbb{R}^3\times \mathbb{R}^3 \to [0,\infty]$ be any function with $V(v,w) \geq \gamma$ if $\lvert v -w \rvert < \delta$ and $V(v,w) = 0$ if $\lvert v -w \rvert \geq 2\delta$. Then define \[E_{\rm nonpen}(w) = \sum_{x,\bar{x} \in \Lambda_n, \, x \neq \bar{x}} V\Big(\frac{w(x)}{\varepsilon_n},\frac{w(\bar{x})}{\varepsilon_n}\Big).\] Then, $\gamma >0$ ensures that there is a positive energy contribution whenever two atoms are closer than $\delta \varepsilon_n$. The overall energy is then given by \begin{equation} \label{eq:energywithnonpen} E_n(w) = \frac{\varepsilon_n^3}{h_n} \big(E_{\rm atom}(w) + E_{\rm body}(w) + E_{\rm nonpen}(w) \big). \end{equation} \begin{theorem} \label{thm:Gammalimit2} Assume that $\nu_n^5 \varepsilon_n^2 \to 0$, that $f_n=0$, that $E_n$ is given by \eqref{eq:energywithnonpen}, and that {\bf (G)} is replaced by {\bf (NG)}. Then all the statements of Theorem \ref{thm:Gammalimit} remain true, where now $R^*_n, R^* \in \Oo(3)$. \end{theorem} Note that in this version, we assume $f_n =0$. Indeed, if one were to include forces, one can typically reduce the energy by moving an atom infinitely far away in a suitable direction. Without any growth assumption in the interaction energy this can easily lead to $\inf E_n = - \infty$ and a loss of compactness. However, this is just a problem about global energy minimization. Not only should there still be well-behaved local minima of the energy, but the energy barrier in between should become infinite in the von-Kármán energy scaling. In the spirit of local $\Gamma$-convergence, we can thus consider the set of admissible functions \[ \mathcal{S}_\delta = \{w: \Lambda_n \to \mathbb{R}^3 \text{ such that } \dist(\bar{\nabla} w(x),\SO(3)Z) <\delta \text{ for all } x \in {\Lambda'_n}^{\circ}\}, \] where ${\Lambda'_n}^{\circ}$ labels `interior cells' away from the lateral boundary, cf.\ Section~\ref{sec:preparations}.
This leads us to the total energy \begin{equation} \label{eq:energylocal} E_n(w) = \begin{cases} \frac{\varepsilon_n^3}{h_n} \big(E_{\rm atom}(w) + E_{\rm body}(w) \big) & \text{if } w \in \mathcal{S}_\delta, \\ \infty & \text{else}. \end{cases} \end{equation} We then have a version of the $\Gamma$-limit that does allow for forces. \begin{theorem}\label{thm:Gammalimit3} Assume that $\nu_n^5 \varepsilon_n^2 \to 0$, that $E_n$ is given by \eqref{eq:energylocal} with $\delta>0$ sufficiently small, and that {\bf (G)} is replaced by {\bf (NG)}. Then all the statements of Theorem \ref{thm:Gammalimit} remain true. Furthermore, there is an infinite energy barrier in the sense that \[ \lim_{n \to \infty} \inf \Big\{ \frac{1}{h_n^4} E_n(w) : w \in \mathcal{S}_\delta \backslash \mathcal{S}_{\delta/2} \Big\}= \infty.\] \end{theorem} \begin{remark} \begin{enumerate} \item For $n$ large enough, the energy barrier implies that minimizers of the restricted energy \eqref{eq:energylocal} correspond to local minimizers of the unrestricted energy \eqref{eq:energy}. The result thus implies convergence of local minimizers of \eqref{eq:energy} in $\mathcal{S}_\delta$. \item To put it differently, if a sequence $(w_n)$ is not separated by a diverging (unrestricted) energy barrier from the reference state $\id$, i.e.\ each $w_n$ can be connected to $\id$ by a continuous path of deformations $(w_n^t)_{t \in [0,1]}$ with equibounded energy $E_{\rm atom}(w_n^t) + E_{\rm body}(w_n^t)$, then $w_n \in \mathcal{S}_\delta$ for large $n$. This implies convergence of minimizers of the unrestricted energy under the assumption that a diverging energy barrier cannot be overcome. \item As the energy only has to be prescribed in $\mathcal{S}_\delta$, Theorem~\ref{thm:Gammalimit3} also describes local minimizers of energy functionals which are invariant under particle relabeling for point configurations which, after labeling by their nearest lattice site as $\{w(x) : x \in \Lambda_n\}$, belong to $\mathcal{S}_{\delta}$, where their energy can be written in the form \eqref{eq:energylocal}. \end{enumerate} \end{remark} \begin{example}\label{ex:mass-spring-LJ} In the setting of Theorems~\ref{thm:Gammalimit2} and \ref{thm:Gammalimit3}, Example~\ref{ex:mass-spring-orient} can be generalized to energies of the form \begin{align*} E_{\rm atom}(w) &= \frac{\alpha}{4} \sum_{x,x' \in \Lambda_n \atop |x - x'| = \varepsilon_n} V_1\Big( \frac{|w(x) - w(x')|}{\varepsilon_n} - 1 \Big) + \frac{\beta}{4} \sum_{x,x' \in \Lambda_n \atop |x - x'| = \sqrt{2} \varepsilon_n} V_2 \Big( \frac{|w(x) - w(x')|}{\varepsilon_n} - \sqrt{2} \Big), \end{align*} where $V_1, V_2$ are pair interaction potentials with $V_i(0) = 0$, $V_i$ $C^2$ in a neighborhood of $0$ and $V_i(r) \ge c_0 \min\{ r^2, 1 \}$ for some $c_0 > 0$. (This is satisfied, e.g., for the Lennard-Jones potential $r \mapsto (1+r)^{-12} - 2 (1+r)^{-6} + 1$.) Due to the non-penetration term in \eqref{eq:energywithnonpen} no additional penalty terms for orientation preservation are necessary. Most notably, it is not assumed that $V_i(r) \to \infty$ as $r \to \infty$. \end{example} \section{Preparations} \label{sec:preparations} We first extend a lattice deformation slightly beyond $\Lambda_n$, thereby possibly modifying it near the lateral boundary $\partial S \times [0,h_n]$ where lattice cells might not be completely contained in $\bar{\Omega}_n$. Then we interpolate so as to obtain continuum deformations to which the continuum theory set forth in \cite{fjm02fvK,FJM:06} applies.
For $x \in \Lambda_n'$, with $\Lambda_n'$ as defined at the beginning of Section~\ref{section:Models-and-Results}, we set \[ Q_n(x) = x + (-\tfrac{\varepsilon_n}{2}, \tfrac{\varepsilon_n}{2})^3 \] and also write $Q_n(\xi) = Q_n(x)$ whenever $\xi \in Q_n(x)$. \subsection{Modification and Extension} On a cell that has a corner outside of $\Lambda_n $ there is no analogue to {\bf (G)} (or {\bf (NG)}) and hence no control of $\vec{w}(x)$ in terms of $W(x,\vec{w}(x))$. For this reason we modify our discrete deformations $w : \Lambda_n \to \mathbb{R}^3$ near the lateral boundary of $\Omega_n$. Let $S_n = \{ x \in S : \dist(x, \partial S) > \sqrt{2} \varepsilon_n \}$ and note that, for $\varepsilon_n > 0$ sufficiently small, $S_n$ is connected with a Lipschitz boundary. (This follows from the fact that $\partial S$ can be parameterized with finitely many Lip\-schitz charts.) If $x \in \Lambda_n'$ is such that $\overline{Q_n(x)} \cap (S_n \times \mathbb{R}) \ne \emptyset$, we call $Q_{n}(x)$ an {\em inner cell} and write $x \in {\Lambda_n'}^{\circ}$. The corners of these cells are the interior atom positions $\Lambda_n^{\circ} = {\Lambda_n'}^{\circ} + \varepsilon_n \{z^1, \ldots, z^8\}$ and the part of the specimen made of such inner cells is denoted $$ \Omega^{\rm in}_{n} = \bigg(\bigcup_{x \in {\Lambda_n'}^{\circ}} \overline{Q_{n}(x)} \bigg)^{\circ}. $$ Recall the definition of $\Lambda_n'$ from Section~\ref{section:Models-and-Results} and set $$ \bar{\Lambda}_n = \Lambda_n' + \{ z^1, \ldots, z^8 \}, \qquad \Omega^{\rm out}_{n} = \bigg(\bigcup_{x \in \Lambda_n'} \overline{Q_{n}(x)} \bigg)^{\circ}. $$ The (lateral) {\em boundary cells} $Q_{n}(x)$ are those for which $$ x \in \partial \Lambda_n' := \Lambda_n' \setminus {\Lambda_n'}^{\circ}. $$ Later we will also use the rescaled versions of these sets which are denoted $\tilde{\Lambda}_n = H_n^{-1} \Lambda_n$, $\tilde{\bar{\Lambda}}_n = H_n^{-1} \bar{\Lambda}_n$, ${\tilde{\Lambda}_n}^{\circ} = H_n^{-1} \Lambda_n^{\circ}$, $\tilde{\Lambda}'_n = H_n^{-1} \Lambda_n'$, $(\tilde{\Lambda}_n')^{\circ} = H_n^{-1} {\Lambda_n'}^{\circ}$. The rescaled lattice cells are $\tilde{Q}_n(x) = H_n^{-1} Q_n(H_n x)$. If $w : \Lambda_n \to \mathbb{R}^3$ is a lattice deformation, following \cite{schmidtlinelast} we define a modification and extension $w' : \bar{\Lambda}_n \to \mathbb{R}^3$ as follows. First we set $w'(x) = w(x)$ if $x \in \Lambda_n^{\circ}$. Now partition $\partial \Lambda_n'$ into the $8$ sublattices $\partial \Lambda_{n,i}' = \partial \Lambda_n' \cap \varepsilon_n ( z^i + 2\mathbb{Z}^3 )$. We apply the following extension procedure consecutively for $i = 1, \ldots, 8$: For every cell $Q = Q_{n}(x)$ with $x \in \partial \Lambda_{n,i}'$ such that there exists a neighboring cell $Q' = Q_{n}(x')$, i.e., sharing a face with $Q$, on the corners of which $w'$ has been defined already, we extend $w'$ to all corners of $Q$ by choosing an extension $w'$ such that $\dist^2(\bar{\nabla} w'(x), \SO(3)Z)$ is minimal. As a result of this procedure, $w'$ will be defined on every corner of each cell neighboring an inner cell. Now we repeat this procedure until $w'$ is extended to $\bar{\Lambda}_n$, i.e., to every corner of all inner and boundary cells. Since $S$ is assumed to have a Lipschitz boundary, the number of iterations needed to define $w'$ on all boundary cells is bounded independently of $\varepsilon_n$.
This modification scheme guarantees that the displacements, respectively, the rigidity, of boundary cells can be controlled in terms of the displacements, respectively, the rigidity, of inner cells, see \cite[Lemmas~3.2 and 3.4]{schmidtlinelast}\footnote{We apply these lemmas without a Dirichlet part of the boundary, i.e., $\partial \mathcal{L}'_{\varepsilon}(\Omega)_* = \emptyset$ in the notation of \cite{schmidtlinelast}. Note also that there is a typo in the statement of these lemmas. The set $\mathcal{B}_{\varepsilon}$ should read $\{ \bar{x} \in \mathcal{L}_{\varepsilon}'(\Omega)^{\circ} \cup \partial \mathcal{L}_{\varepsilon}'(\Omega)_* : \bar{x} \notin V_{\varepsilon} \}$, which in our notation (and without Dirichlet part of the boundary) is a subset of ${\Lambda_n'}^{\circ}$.}: \begin{lemma}\label{lemma:bdry-control} There exist constants $c, C > 0$ (independent of $n$) such that for any $w : \Lambda_n \to \mathbb{R}^3$ and $R^* \in \SO(3)$ $$ \sum_{x \in \partial \Lambda_n'} | \bar{\nabla} w'(x) - R^* Z |^2 \le C \sum_{x \in {\Lambda_n'}^{\circ}} | \bar{\nabla} w'(x) - R^* Z |^2 $$ as well as $$ \sum_{x \in \partial \Lambda_n'} \dist^2(\bar{\nabla} w'(x), \SO(3)Z) \le C \sum_{x \in {\Lambda_n'}^{\circ}} \dist^2(\bar{\nabla} w'(x), \SO(3)Z). $$ \end{lemma} For the sake of notational simplicity, we will sometimes write $w$ instead of $w'$. \subsection{Interpolation} Let $w : \bar{\Lambda}_n \to \mathbb{R}^3$ be a (modified and extended) lattice deformation. We introduce two different interpolations: $\tilde{w}$ and $\bar{w}$. $\tilde{w} \in W^{1,2}(\Omega^{\rm out}_n; \mathbb{R}^3)$ is obtained by a specific piecewise affine interpolation scheme as in \cite{Sch:06,schmidtlinelast} which in particular associates the exact average of atomic positions to the center and to the faces of lattice cells. This will allow for a direct application of the results in \cite{FJM:06} on continuum plates. By way of contrast, $\bar{w}$ is a piecewise constant interpolation on the lattice Voronoi cells of $\bar{\Lambda}_n$. The advantage of this interpolation will be that a discrete gradient of $w$ translates into a continuum finite difference operator acting on $\bar{w}$. Let $x \in \Lambda_n'$. In order to define $\tilde{w}$ on the cube $\overline{Q_n(x)}$ we first set $\tilde{w}(x) = \frac{1}{8} \sum_{i = 1}^8 w(x + \varepsilon_n z^i)$. Next, for the six centers $v^1, \ldots, v^6$ of the faces $F^1, \ldots, F^6$ of $[-\frac{1}{2}, \frac{1}{2}]^3$ we set $\tilde{w}(x + \varepsilon_n v^i) = \frac{1}{4} \sum_j w(x + \varepsilon_n z^j)$, where the sum runs over those $j$ such that $z^j$ is a corner of the face with center $v^i$. Finally, we interpolate linearly on each of the 24 simplices \[ \operatorname{co} ( x, x + \varepsilon_n v^k, x + \varepsilon_n z^i, x + \varepsilon_n z^j ) \] with $|z^i - z^j| = 1$, $|z^i - v^k| = |z^j - v^k| = \tfrac{1}{\sqrt{2}}$, i.e., whose corners are given by the cube center and the center and two neighboring vertices of one face. Note that for this interpolation \begin{align} \tilde{w}(x) &= \Xint-_{Q_n(x)} \tilde{w}(\xi) \, d\xi, \\ \tilde{w}(x + \varepsilon_n v^k) &= \Xint-_{x + \varepsilon_n F^k} \tilde{w}(\zeta) \, d \zeta, \label{eq:interpolsurf} \end{align} for every face $x + \varepsilon_n F^k$ of $Q_n(x)$.
For the second interpolation we first let $V_n^{\rm out} := \big( \bigcup_{x \in \bar{\Lambda}_n} ( x + [-\tfrac{\varepsilon_n}{2}, \tfrac{\varepsilon_n}{2}]^3 ) \big)^{\circ}$ and then define $\bar{w} \in L^2(V_n^{\rm out}; \mathbb{R}^3)$ by $\bar{w}(\xi) = w(x)$ for all $\xi \in x + (-\tfrac{\varepsilon_n}{2}, \tfrac{\varepsilon_n}{2})^3$, $x \in \bar{\Lambda}_n$. Note that \[ \bar{\nabla} \bar{w}(x) = \frac{1}{\varepsilon_n} \big(\bar{w}(x+\varepsilon_n z^1) - \langle \bar{w} \rangle , \dots, \bar{w}(x+\varepsilon_n z^8) - \langle \bar{w} \rangle\big) \] with $\langle \bar{w} \rangle = \frac{1}{8} \sum_{i=1}^8 \bar{w}(x+\varepsilon_n z^i)$ defines a piecewise constant mapping on $\Omega^{\rm out}_{n}$ such that \[ \bar{\nabla} \bar{w}(\xi) = \bar{\nabla} w(x) \quad \text{whenever} \quad \xi \in Q_n(x),~ x \in \Lambda_n'. \] It is not hard to see that the original function controls the interpolation and vice versa. \begin{lemma}\label{lemma:interpol-Vergleich} There exist constants $c, C > 0$ such that for any (modified, extended and interpolated) lattice deformation $\tilde{w} : \Omega^{\rm out}_{n} \to \mathbb{R}^3$ and any cell $Q = Q_n(x)$, $x \in \Lambda_n'$, \begin{align*} c |\bar{\nabla} w(x)|^2 \le \varepsilon_n^{-3} \int_Q |\nabla \tilde{w}(\xi)|^2 \, d\xi \le C |\bar{\nabla} w(x)|^2. \end{align*} \end{lemma} \begin{proof} After translation and rescaling we may assume without loss of generality that $\varepsilon_n = 1$ and $Q = (0,1)^3$, hence $x = (\tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2})^T$. The claim then is an immediate consequence of the fact that both $$ \tilde{w} \mapsto |\bar{\nabla} \tilde{w}(x)| \quad \mbox{and} \quad \tilde{w} \mapsto \| \nabla \tilde{w} \|_{L^2(Q; \mathbb{R}^{3 \times 3})} $$ are norms on the finite dimensional space of continuous mappings $\tilde{w}$ which are affine on each $\operatorname{co} ( x, x + v^k, x + z^i, x + z^j )$ with $|z^i - z^j| = 1$, $|z^i - v^k| = |z^j - v^k| = \tfrac{1}{\sqrt{2}}$, and which have $\int_Q \tilde{w}(\xi) \, d\xi = 0$. \end{proof} \begin{lemma}\label{lemma:interpol-rigidity} There exist constants $c, C > 0$ such that for any (modified, extended and interpolated) lattice deformation $\tilde{w} : \Omega^{\rm out}_{n} \to \mathbb{R}^3$ and any cell $Q = Q_n(x)$, $x \in \Lambda_n'$, \begin{equation*} c \dist^2(\bar{\nabla} w(x), \SO(3)Z) \leq \varepsilon_n^{-3} \int_Q \dist^2 (\nabla \tilde{w}(\xi), \SO(3)) \, d\xi \leq C \dist^2(\bar{\nabla} w(x), \SO(3)Z). \end{equation*} \end{lemma} This is in fact \cite[Lemma 3.6]{schmidtlinelast}. We include a simplified proof. \begin{proof} After translation and rescaling we may assume without loss of generality that $\varepsilon_n = 1$ and $Q = (0,1)^3$. The geometric rigidity result \cite[Theorem 3.1]{FJM:02} (indeed, an elementary version thereof) yields \begin{equation*} c \min_{R \in \SO(3)} \| \nabla \tilde{w} - R \|_{L^{2}(Q)}^2 \leq \int_Q \dist^2 (\nabla \tilde{w}(\xi), \SO(3)) \, d\xi \leq C \min_{R \in \SO(3)} \| \nabla \tilde{w} - R \|_{L^{2}(Q)}^2. \end{equation*} By definition also \begin{align*} \dist^2(\bar{\nabla} w(x), \SO(3)Z) = \min_{R \in \SO(3)} | \bar{\nabla} w(x) - RZ |^2. \end{align*} The claim then follows from applying Lemma \ref{lemma:interpol-Vergleich} to $\xi \mapsto \tilde{w}(\xi) - R\xi$ for each $R \in \SO(3)$.
\end{proof} For a sequence $w_n$ of (modified and extended) lattice deformations $w_n : \bar{\Lambda}_n \to \mathbb{R}^3$ with interpolations $\tilde{w}_n : \Omega^{\rm out}_{n} \to \mathbb{R}^3$ and $\bar{w}_n : V_n^{\rm out} \to \mathbb{R}^3$ we consider the rescaled deformations $\dtilde{y}_n : \tilde{\Omega}^{\rm out}_n \to \mathbb{R}^3$ defined by $$ \dtilde{y}_n(x) := \tilde{w}_n(H_n x) \quad\mbox{with}\quad \tilde{\Omega}^{\rm out}_n := H_n^{-1} \Omega^{\rm out}_n $$ and $\dbar{y}_n : \tilde{V}^{\rm out}_n \to \mathbb{R}^3$ defined by $$ \dbar{y}_n(x) := \bar{w}_n(H_n x) \quad\mbox{with}\quad \tilde{V}^{\rm out}_n := H_n^{-1} V^{\rm out}_n. $$ (Later we will normalize by a rigid change of coordinates to obtain $\tilde{y}_n$ and $\bar{y}_n$.) Their rescaled (discrete) gradients are $$ \nabla_n \dtilde{y}_n(x) := \nabla \tilde{w}_n(H_n x) \quad\mbox{and}\quad \bar{\nabla}_n \dbar{y}_n(x) := \bar{\nabla} \bar{w}_n(H_n x) $$ for all $x \in\tilde{\Omega}^{\rm out}_{n}$. Finally, the force $f_n$ after extension to $\bar{\Lambda}_n$ is assumed to satisfy \begin{align}\label{eq:force-bdry0} f_n(x) = 0 \quad \text{for}\quad x \in \bar{\Lambda}_n \setminus \Lambda_n^{\circ} \end{align} and its the piecewise constant interpolation is $\bar{f}_n : \tilde{V}^{\rm out}_n \to \mathbb{R}^3$. \begin{remark}\label{rmk:convergence-notions} Suppose $\nu_n = \nu$ constant. We note that for a sequence of mappings $y_n : \Lambda_n \to \mathbb{R}^3$, if $\dtilde{y}_n \to y$ in $L^2(\Omega; \mathbb{R}^3)$ then $y$ is continuous in $x_3$ and affine in $x_3$ on the intervals $(\frac{i-1}{\nu-1}, \frac{i}{\nu-1})$, $i = 1, \ldots, \nu$. Similarly, if $\dbar{y}_n \to y^*$ in $L^2(S \times (\frac{-1}{2(\nu-1)}, \frac{2\nu -1}{2(\nu-1)}); \mathbb{R}^3)$, then $y^*$ is constant in $x_3$ on the intervals $(\frac{2i-1}{2(\nu-1)}, \frac{2i + 1}{2(\nu-1)})$, $i = 0, \ldots, \nu -1$. Suppose $y, y^* \in L^2(\Omega; \mathbb{R}^3)$ are piecewise affine, respectively, constant in $x_3$ as detailed above with $y^*(x',x_3) = y(x',\frac{i}{\nu-1})$ if $x_3 \in (\frac{2i-1}{2(\nu-1)}, \frac{2i + 1}{2(\nu-1)})$, $i = 0, \ldots, \nu -1$. It is not hard to see that the following are equivalent. \begin{itemize} \item $\dtilde{y} \to y$ in $L^2(\Omega; \mathbb{R}^3)$. \item $\dbar{y} \to y^*$ in $L^2(S \times (\frac{-1}{2(\nu-1)}, \frac{2\nu -1}{2(\nu-1)}); \mathbb{R}^3)$. \item $\frac{\varepsilon_n^3}{h_n} \sum_{x \in \tilde{\Lambda}_n} | y_n(x) - \Xint-_{x + (-\frac{\varepsilon_n}{2},\frac{\varepsilon_n}{2})^2 \times (-\frac{\varepsilon_n}{2h_n},\frac{\varepsilon_n}{2h_n})} y^*(\xi) \, d\xi|^2 \to 0$. \end{itemize} The same is true in case $\nu_n \to \infty$ for $y = y^*$ if in the second statement $S \times (\frac{-1}{2(\nu-1)}, \frac{2\nu -1}{2(\nu-1)})$ is replaced by $\Omega$. In particular, limiting deformations do not depend on the interpolation scheme. \end{remark} \section{Proofs}\label{section:Proofs} \subsection{Compactness} For the compactness we will heavily use the corresponding continuum rigidity theorem from \cite[Theorem 3]{fjm02fvK} and \cite[Theorem 6]{FJM:06}: \begin{theorem} \label{thm:platerigidity} Let $y \in W^{1,2}(\Omega;\mathbb{R}^3)$ and set $\mathcal{I}= \mathcal{I}(y)= \int_\Omega \dist^2(\nabla_n y, \SO(3))\,dx$. 
Then there exists maps $R:S \to \SO(3)$ and $\tilde{R} \in W^{1,2}(S; \mathbb{R}^{3 \times 3})$ with $\lvert \tilde{R} \rvert \leq C$, and a constant $R^* \in \SO(3)$ such that \begin{align} \lVert \nabla_n y - R \lVert^2_{L^2(\Omega)} &\leq C \mathcal{I},\\ \lVert R - \tilde{R} \lVert^2_{L^2(S)} &\leq C \mathcal{I},\\ \lVert \nabla\tilde{R} \lVert^2_{L^2(S)} &\leq \frac{C \mathcal{I}}{h_n^2},\\ \lVert \nabla_n y - R^* \lVert^2_{L^2(\Omega)} &\leq \frac{C \mathcal{I}}{h_n^2},\\ \lVert R - R^* \lVert^2_{L^p(S)} &\leq \frac{C_p \mathcal{I}}{h_n^2},\ \forall p<\infty. \end{align} Crucially, none of the constants depend on $n$, $y$, or $\mathcal{I}$. \end{theorem} Furthermore, we will also use the continuum compactness result \cite[Lemmas~4 and 5]{fjm02fvK} and \cite[Lemma~1, Eq.~(96), and Lemma~2]{FJM:06} based on the previous rigidity result applied to some sequence $(\hat{y}_n)$. \begin{theorem} \label{thm:contcomp} Let $\hat{y}_n \in W^{1,2}(\Omega;\mathbb{R}^3)$ with $\mathcal{I}(\hat{y}_n)\leq Ch_n^4$. Then there are $R^*_n\in \SO(3)$, $c_n \in \mathbb{R}^3$ as well as a $u \in W^{1,2}(S;\mathbb{R}^2)$ and a $v \in W^{2,2}(S)$ such that $y_n = {R^*_n}^T \hat{y}_n -c_n$ satisfies \begin{align} \lVert \nabla_n y_n - R_n \lVert^2_{L^2(\Omega)} &\leq Ch_n^4 \label{eq:ty-closeto-SO}\\ \lVert R_n - \tilde{R}_n \lVert^2_{L^2(S)} &\leq Ch_n^4\\ \lVert \nabla\tilde{R} \lVert^2_{L^2(S)} &\leq Ch_n^2\\ \lVert \nabla_n y_n - \Id \lVert^2_{L^2(\Omega)} &\leq Ch_n^2\label{eq:grad-yn-close-to-Id}\\ \int_\Omega (\nabla_n y_n)_{12} - (\nabla_n y_n)_{21}\,dx &=0. \label{eq:ty-no-skew} \end{align} And, up to extracting subsequences, \begin{align} \frac{1}{h_n^2}\int_0^1 y_n' - x' \,dx_3=:u_n &\rightharpoonup u \text{ in } W^{1,2}(S;\mathbb{R}^2),\ i=1,2, \label{eq:un-conv}\\ \frac{1}{h_n}\int_0^1 (y_n)_3 \,dx_3=:v_n &\to v \text{ in } W^{1,2}(S;\mathbb{R}),\label{eq:vn-conv} \end{align} \begin{align} \frac{\nabla_n y_n - \Id}{h_n} =:A_n &\to A = e_3 \otimes \nabla' v - \nabla' v \otimes e_3 \text{ in } L^2(\Omega; \mathbb{R}^{3 \times 3}),\label{eq:grad-yn-Id-A}\\ 2\frac{\sym( R_n - \Id)}{h_n^2} &\to A^2 \text{ in } L^p(S; \mathbb{R}^{3 \times 3}),\ \forall p<\infty \label{eq:sym-Rn-Id}, \end{align} \begin{align}\label{eq:cont-strain-convergence} \frac{R_n^T \nabla_n y_n - \Id}{h_n^2} \rightharpoonup G \text{ in } L^2(\Omega; \mathbb{R}^{3 \times 3}), \end{align} where the upper left $2 \times 2$ submatrix $G''$ of $G$ is given by \begin{align}\label{eq:cont-strain-identify-i} G''(x) &= G_1(x') + (x_3 - \tfrac{1}{2}) G_2(x'), \end{align} with \begin{align}\label{eq:cont-strain-identify-ii} \sym G_1 &= \tfrac{1}{2} ( \nabla' u + (\nabla u)^T ) + \nabla 'v \otimes \nabla ' v, \quad G_2 = - (\nabla')^2 v. \end{align} \end{theorem} The following proposition allows us to apply these continuum results. \begin{proposition}\label{prop:energy-estimates} In the setting of Theorem \ref{thm:Gammalimit}, consider a sequence $w_n$ with \begin{equation} \label{eq:seqbddenergy} E_n(w_n) \leq C h_n^4 \end{equation} Then, \begin{equation} \label{eq:bdonSOdist} 0 \leq \mathcal{I}(\dtilde{y}_n)= \int_\Omega \dist^2(\nabla_n \dtilde{y}_n, \SO(3))\,dx \leq C h_n^4. \end{equation} Here, $\dtilde{y}_n\in W^{1,2}(\Omega;\mathbb{R}^3)$ is the rescaled, modified, and interpolated version of $w_n$ according to Section~\ref{sec:preparations}. 
In the setting of Theorem \ref{thm:Gammalimit3} the statement remains is true as well, while in the setting of Theorem \ref{thm:Gammalimit2} \eqref{eq:bdonSOdist} is still true but now $\dtilde{y}_n$ is the rescaled, modified, and interpolated version of either $w_n$ or $-w_n$ where the correct sign does depend on $w_n$. \end{proposition} \begin{proof} Rescaling the $w_n$ and applying the modification and interpolation steps from Section \ref{sec:preparations}, we have sequences $\dtilde{y}_n \in W^{1,2}(\Omega;\mathbb{R}^3)$ and $\dbar{y}_n \in L^{2}(\Omega;\mathbb{R}^3)$. In particular, we can use Theorem \ref{thm:platerigidity} for this sequence. Take $R^*_n$ according to Theorem~\ref{thm:contcomp}. Then by Lemmas~\ref{lemma:bdry-control} and \ref{lemma:interpol-rigidity}, \[ \frac{\varepsilon_n^3}{h_n}\sum_{ x \in \tilde{\Lambda}_n'} \lvert \bar{\nabla}_n \dbar{y}_n (x) - R^*_n Z \rvert^2 \leq C\int_\Omega \lvert \nabla_n \dtilde{y}(x) - R^*_n \rvert^2 \,dx \leq C \frac{\mathcal{I}_n}{h_n^2}.\] A standard discrete Poincaré-inequality then shows \[ \frac{\varepsilon_n^3}{h_n}\sum_{ x \in {\tilde{\Lambda}_n}^{\circ}} \Big\lvert \dbar{y}_n(x) - R^*_n \begin{pmatrix}x' \\ hx_3 \end{pmatrix} - \bar{c}_n \Big\rvert^2 \leq\frac{\varepsilon_n^3}{h_n}\sum_{ x \in \tilde{\Lambda}_n'} \lvert \bar{\nabla}_n \dbar{y}_n (x) - R^*_n Z \rvert^2 \leq C \frac{\mathcal{I}_n}{h_n^2} \] for a suitable $\bar{c}_n \in \mathbb{R}^3$. Now $f_n$ does not depend on $x_3$, vanishes close to $\partial S$ where the modification takes place, and satisfies $\sum_{x \in \Lambda_n} f_n =0$, as well as $\sum_{x \in \Lambda_n} f_n \otimes x' =0$. Hence, we see that \begin{align*} \frac{\varepsilon_n^3}{h_n} E_{\rm body}(w_n) &= \frac{\varepsilon_n^3}{h_n} \sum_{x \in {\tilde{\Lambda}_n}^{\circ}} f_n(x') \cdot y_n(x)\\ &= \frac{\varepsilon_n^3}{h_n} \sum_{x \in {\tilde{\Lambda}_n}^{\circ}} f_n(x') \cdot \Big(\dbar{y}_n(x) - R^*_n \begin{pmatrix}x' \\ hx_3 \end{pmatrix} - \bar{c}_n \Big). \end{align*} Using $\lVert \bar{f}_n \rVert_{L^2(S)} \leq C h_n^3$ and abbreviating $\mathcal{I}(\dtilde{y}_n) = \mathcal{I}_n$, we thus find \[\Big\lvert \frac{\varepsilon_n^3}{h_n} E_{\rm body}(w_n) \Big\rvert \leq C \sqrt{\mathcal{I}_n} h_n^2.\] On the other hand, due to ${\bf (G)}$ and Lemmas~\ref{lemma:bdry-control} and \ref{lemma:interpol-rigidity} we have \begin{align*} \frac{\varepsilon_n^3}{h_n} E_{\rm atom}(w_n) &\geq c_0 \frac{\varepsilon_n^3}{h_n} \sum_{ x \in ({\tilde{\Lambda}_n}')^{\circ}} \dist^2(\bar{\nabla}_n y_n (x),\SO(3)Z) \\ &\geq c \frac{\varepsilon_n^3}{h_n} \sum_{ x \in \tilde{\Lambda}'_n} \dist^2(\bar{\nabla}_n \dbar{y}_n (x),\SO(3)Z) \geq c \mathcal{I}_n. \end{align*} Hence, \begin{align*} 0 \leq \mathcal{I}_n &\leq C \frac{\varepsilon_n^3}{h_n} E_{\rm atom}(w_n) \leq C h_n^4 + C\frac{\varepsilon_n^3}{h_n} \lvert E_{\rm body}(w_n) \rvert \leq C h_n^4 + C \sqrt{\mathcal{I}_n} h_n^2. \end{align*} We thus have \begin{equation*} 0 \leq \mathcal{I}_n \leq C h_n^4. \end{equation*} All these statements remain true in the setting of Theorem \ref{thm:Gammalimit3} as the Assumptions ${\bf (G)}$ and ${\bf (NG)}$ are equivalent on $\mathcal{S}_\delta$. Now, consider the setting of Theorem \ref{thm:Gammalimit2} with Assumption ${\bf (NG)}$ instead of ${\bf (G)}$, as well as $f_n=0$ and $\nu_n^5 \varepsilon_n^2 \to 0$ with the energy given by \eqref{eq:energywithnonpen}. 
Using \eqref{eq:seqbddenergy}, we find \[0 \leq W_{\rm cell}(\bar{\nabla} w(x)) \leq C \frac{h_n^5}{\varepsilon_n^3}\] for every $x \in {\Lambda'_n}^{\circ}$ and \[0 \leq V\Big(\frac{w_n(\bar{x})}{\varepsilon_n},\frac{w_n(\dbar{x})}{\varepsilon_n}\Big) \leq C \frac{h_n^5}{\varepsilon_n^3}\] for all $\bar{x}, \dbar{x} \in \Lambda_n$. As $\frac{h_n^5}{\varepsilon_n^3} \to 0$, for $n$ large enough, the right hand side is strictly smaller then $c_0$ or $\gamma$, respectively. Therefore, for all $n$ large enough we have \[\bar{\nabla} w_n(x) \in U \text{ for all } x \in {\Lambda'_n}^{\circ}\] and \begin{equation} \label{eq:nonpen} \lvert w_n(\bar{x}) - w_n( \dbar{x} ) \rvert > \varepsilon_n \delta \end{equation} for all $\bar{x}, \dbar{x} \in \Lambda_n$. $\bar{\nabla} w_n(x) \in U $ implies $W_{\rm cell}(\bar{\nabla} w_n(x)) \geq c_0 \dist^2(\bar{\nabla} w_n(x), \Oo(3)Z) $. In particular, we thus find \[\dist^2(\bar{\nabla} w_n(x), \Oo(3)Z) \leq C \frac{h_n^5}{\varepsilon_n^3}.\] Again, for $n$ large enough, this means that every $x\in {\Lambda'_n}^{\circ}$ the discrete gradient $\bar{\nabla} w_n(x)$ is arbitrarily close to $\Oo(3)Z$ and thus very close to $\sigma_n(x)\SO(3)Z$ with a unique $\sigma_n(x) \in \{\pm 1\}$. We now want to show that the sign $\sigma_n(x)$ is the same for all $x$ in the interior cells. As the interior of the union of all these cells is connected, it suffices to show that $\sigma_n$ is the same on any two cells that share a $(d-1)$-face. Indeed, if that were false, we would have some $x,x'$ in cells that share a $(d-1)$-face such that \[\dist^2(\bar{\nabla} w_n(x), \Oo(3)Z)= \lvert \bar{\nabla} w_n(x) - QZ \rvert^2 \leq C \frac{h_n^5}{\varepsilon_n^3},\] and \[\dist^2(\bar{\nabla} w_n(x'), \Oo(3)Z)= \lvert \bar{\nabla} w_n(x') + Q'Z \rvert^2 \leq C \frac{h_n^5}{\varepsilon_n^3},\] with $Q,Q' \in \SO(3)$. Without loss of generality assume $x=x' + \varepsilon_n e_3$. Then\[\bar{\nabla} w_n(x') (0, b)^T = \bar{\nabla} w_n(x) (b, 0)^T\] for all $b\in \mathbb{R}^4$ with $\sum_i b_i =0$. In particular choosing $b = (-1,+1,+1,-1)$ and $b = (-1,-1,+1,+1)$, we get $\lvert (Q+ Q') e_i \rvert \leq C \frac{h_n^5}{\varepsilon_n^3}$ for $i=1,2$. As $Q,Q' \in \SO(3)$, we find $\lvert (Q- Q') e_3 \rvert \leq C \frac{h_n^5}{\varepsilon_n^3}$. Overall, we see that both deformed cells are almost on top of each other. More specifically, \begin{align*} &\lvert w(x'+\varepsilon_n z^1) - w(x + \varepsilon_n z^5) \rvert\\ &~~= \lvert w(x+\varepsilon_n z^5) - w(x+\varepsilon_n z^1) + w(x'+\varepsilon_n z^5) - w(x'+\varepsilon_n z^1) \rvert \\ &~~\leq \varepsilon_n \Big( \lvert Q z^5 - Qz^1 - Q' z^1 + Q'z^5 \rvert +C \frac{h_n^5}{\varepsilon_n^3} \Big) \\ &~~= \varepsilon_n \Big( \lvert (Q - Q') e_3 \rvert +C \frac{h_n^5}{\varepsilon_n^3} \Big) \leq \varepsilon_n C \frac{h_n^5}{\varepsilon_n^3} \leq \delta \varepsilon_n \end{align*} for $n$ large enough. This is a contradiction to the non-penetration condition \eqref{eq:nonpen}. That means, we have \begin{equation} \frac{\varepsilon_n^3}{h_n}\sum_{ x \in {\Lambda'_n}^{\circ}} \dist^2(\sigma_n \bar{\nabla} w_n(x), \SO(3)Z) \leq C h_n^4 \nonumber \end{equation} for an $x$-independent $\sigma_n \in \{\pm 1\}$. Applying the modification and interpolation procedure from Section~\ref{sec:preparations} to $\sigma_n w_n$ as in the case {\bf (G)} above, we find \[ \int_\Omega \dist^2( \nabla_n \dtilde{y}(x), \SO(3)Z)\,dx \leq C h_n^4. 
\qedhere \] \end{proof} Now we can directly apply Theorems \ref{thm:platerigidity} and \ref{thm:contcomp} for the continuum objects $\dtilde{y}_n$. In particular, for $\tilde{y}_n = {R^*_n}^T \dtilde{y}_n - c_n$ as defined in \eqref{eq:yn-tilde-def} and corresponding $u_n$ and $v_n$ as in \eqref{eq:un-def}, respectively, \eqref{eq:vn-def}, after extracting a subsequence from \eqref{eq:un-conv} and \eqref{eq:vn-conv} we get that \begin{align}\label{eq:unvn-conv} u_n \rightharpoonup u \text{ in } W^{1,2}(S;\mathbb{R}^2), \qquad v_n \to v \text{ in } W^{1,2}(S;\mathbb{R}). \end{align} For later we also introduce $\bar{y}_n = {R^*_n}^T \dbar{y}_n - c_n$. We will also use the following finer statement. \begin{proposition}\label{prop:finer-statement} In the setting of Theorem \ref{thm:contcomp}, applied to $\dtilde{y}_n$ and with $\tilde{y}_n = {R^*_n}^T \dtilde{y}_n - c_n$, we have \begin{align} \frac{1}{h_n^2} \big((\tilde{y}_n)' - x'\big)=:\hat{u}_n &\rightharpoonup \hat{u} \text{ in } W^{1,2}(\Omega;\mathbb{R}^2), \label{eq:hat-u-conv} \\ \frac{1}{h_n} (\tilde{y}_n)_3=:\hat{v}_n &\rightharpoonup \hat{v} \text{ in } W^{1,2}(\Omega), \label{eq:hat-v-conv} \end{align} where \begin{align} \hat{u}(x)&=u(x') - (x_3-\tfrac{1}{2}) \nabla' v(x'), \label{eq:hat-u-form}\\ \hat{v}(x)&=v(x') + (x_3-\tfrac{1}{2}). \label{eq:hat-v-form} \end{align} \end{proposition} \begin{proof} According to Korn's inequality \begin{align*} \lVert \hat{u}_n \rVert_{W^{1,2}(\Omega; \mathbb{R}^2)} &\leq C \Big( \lVert \sym \nabla' \hat{u}_n \rVert_{L^2(\Omega; \mathbb{R}^{2\times 2})}+ \Big\lVert \frac{\partial\hat{u}_n}{\partial x_3} \Big\rVert_{L^2(\Omega; \mathbb{R}^2)}\\ &\quad + \Big\lvert \int_{\Omega} \skewo \nabla' \hat{u}_n \,dx \Big\rvert + \Big\lvert \int_{\Omega} \hat{u}_n \,dx \Big\rvert \Big). \end{align*} According to Theorem \ref{thm:contcomp}, $\sym \nabla' \hat{u}_n$ is bounded in $L^2$ by \eqref{eq:ty-closeto-SO} and \eqref{eq:sym-Rn-Id}, $\int \skewo \nabla' \hat{u}_n \,dx=0$ by \eqref{eq:ty-no-skew}, and $\int \hat{u}_n \,dx$ is bounded due to \eqref{eq:un-conv}. As \begin{equation*} \frac{\partial(\hat{u}_n)_i}{\partial x_3} = \frac{1}{h_n} ( \nabla_n \tilde{y}_n - \Id )_{i3}, \end{equation*} $i=1,2$, this term is bounded in $L^2$ as well. This shows compactness. To identify the limit and thus show convergence of the entire sequence, note that \begin{equation*} \int_0^1 \hat{u}_n \,dx_3 \rightharpoonup u \text{ in } W^{1,2}(S;\mathbb{R}^2), \end{equation*} by \eqref{eq:un-conv} and \begin{equation*} \frac{\partial(\hat{u}_n)_i}{\partial x_3} = \frac{1}{h_n} ( \nabla_n \tilde{y}_n - \Id )_{i3} \to - \frac{\partial v}{\partial x_i} \text{ in } L^2(\Omega), \end{equation*} for $i=1,2$ by \eqref{eq:grad-yn-Id-A}. \eqref{eq:grad-yn-close-to-Id} and \eqref{eq:vn-conv} in Theorem \ref{thm:contcomp} also show that $\hat{v}_n$ is bounded in $W^{1,2}(\Omega)$ with $\frac{\partial \hat{v}_n}{\partial x_3} \to 1$ and \[ \int_0^1 \hat{v}_n \,dx_3 \to v. \qedhere \] \end{proof} As a first consequence, we will now describe the limiting behavior of the force term $E_{\rm body}(w_n) = E_{\rm body}(y_n) = \sum_{x \in \tilde{\bar{\Lambda}}_n} f_n(x) \cdot y_n(x)$, where $f_n(x) = f_n(x')$ satisfies \eqref{eq:force-bed}, \eqref{eq:force-bdry0} and $h_n^{-3} \bar{f}_n \to f$ in $L^2(S)$. Note that the forces considered are a bit more general than in \cite{FJM:06}. 
\begin{proposition}\label{prop:limiting-force} Let $y_n$ be a sequence with $E_n(y_n) \le C h_n^4$ and suppose that \eqref{eq:unvn-conv} holds true for $\tilde{y}_n$, $u_n$, $v_n$ as defined in \eqref{eq:yn-tilde-def}, \eqref{eq:un-def}, \eqref{eq:vn-def}. Assume that $R^*_n \to R^*$. Then \[ \frac{\varepsilon_n^3}{h_n^5} E_{\rm body}(y_n) \to \begin{cases} \int_S f(x') \cdot v(x') R^*e_3 \, dx', & \text{if } \nu_n \to \infty, \\ \frac{\nu}{\nu-1} \int_S f(x') \cdot v(x') R^*e_3 \, dx', & \text{if } {\nu_n = \nu \text{ constant}}, \\ \end{cases} \] as $n \to \infty$. \end{proposition} \begin{proof} In terms of the extended and interpolated force density we have \begin{align*} \frac{\varepsilon_n^3}{h_n^5} E_{\rm body}(y_n) &= \frac{1}{h_n^4} \int_{\tilde{V}^{\rm out}_n} \bar{f}_n(x) \cdot \dbar{y}_n(x) \, dx \\ &= \frac{1}{h_n^4} \int_{\tilde{V}^{\rm out}_n} \bar{f}_n(x) \cdot \Big( \dbar{y}_n(x) - R^*_n \begin{pmatrix} x' \\ 0 \end{pmatrix} - R^*_n c_n \Big) \, dx \\ &= \int_{\tilde{V}^{\rm out}_n} h_n^{-3} {R^*_n}^T \bar{f}_n(x) \cdot h_n^{-1} \Big( \bar{y}_n - \begin{pmatrix} x' \\ 0 \end{pmatrix} \Big) \, dx. \end{align*} By Proposition~\ref{prop:finer-statement}, $ h_n^{-1} \Big( \tilde{y}_n - \begin{pmatrix} x' \\ 0 \end{pmatrix} \Big) \to \hat{v} e_3 $ in $L^2(\Omega; \mathbb{R}^3)$ with $\hat{v}$ as in \eqref{eq:hat-v-conv} and so Remark~\ref{rmk:convergence-notions} shows that \begin{align*} \frac{\varepsilon_n^3}{h_n^5} E_{\rm body}(y_n) \to \int_{\Omega} {R^*}^T f(x) \cdot \hat{v}(x) e_3 \, dx = \int_{\Omega} f(x') \cdot v(x') R^* e_3 \, dx' \end{align*} if $\nu_n \to \infty$, where in the last step we have used that \eqref{eq:force-bed} together with $f_n(x) = f_n(x')$ also implies that $\sum_{x \in \Lambda_n} x_3 f_n(x) = 0$. If $\nu_n = \nu$ constant, then Remark~\ref{rmk:convergence-notions} gives \begin{align*} \frac{\varepsilon_n^3}{h_n^5} E_{\rm body}(y_n) &\to \frac{1}{\nu-1} \sum_{j=0}^{\nu -1} \int_{S} {R^*}^T f(x') \cdot \hat{v}(x',\tfrac{j}{\nu-1}) e_3 \, dx' \\ &= \frac{\nu}{\nu-1} \int_{S} f(x') \cdot v(x') R^* e_3 \, dx' \end{align*} with an analogous argument for the last step. \end{proof} \subsection{Lower bounds} To show the lower bounds in our $\Gamma$-convergence results, we have to understand the limit of the discrete strain. Let $(y_n)$ satisfy $E_n(y_n) \leq C h_n^4$ and set \begin{equation*} \bar{G}_n := \frac{1}{h_n^2} (R_n^T \bar{\nabla}_n \bar{y}_n - Z). \end{equation*} By Proposition~\ref{prop:energy-estimates} $(\dtilde{y}_n)$ satisfies the assumptions of Theorem~\ref{thm:contcomp} and Proposition~\ref{prop:finer-statement} so that, after a rigid change of coordinates, $\tilde{y}_n$ satisfies \eqref{eq:ty-closeto-SO}--\eqref{eq:cont-strain-identify-ii} and \eqref{eq:hat-u-conv}--\eqref{eq:hat-v-form}. In particular, by \eqref{eq:cont-strain-convergence} we know that for a subsequence the continuum strain converges as \[ \frac{1}{h_n^2} (R_n^T \nabla_n \tilde{y}_n - \Id) \rightharpoonup G \text{ in } L^2(\Omega; \mathbb{R}^{3 \times 3}), \] where $G$ satisfies \eqref{eq:cont-strain-identify-i} and \eqref{eq:cont-strain-identify-ii}. For the discussion of discrete strains, recall that we defined \begin{align*} Z_- &= (-z^1, -z^2, -z^3 , -z^4, +z^5, +z^6, +z^7, +z^8), \\ M &= \frac{1}{2} e_3 \otimes (+1, -1, +1, -1, +1, -1, +1, -1). 
\end{align*} We define a projection $P$ acting on maps via \[ Pf(x) = \Xint-_{(k-1)/(\nu-1)}^{k/(\nu-1)} f(x',t) \, dt \qquad \text{if} \qquad \tfrac{k-1}{\nu-1} \le x_3 < \tfrac{k}{\nu-1} \] in case $\nu_n \equiv \nu < \infty$ and $P = \id$ in case $\nu_n \to \infty$. \begin{proposition}\label{prop:limiting-strain} Let $(y_n)_n$ satisfy $E_n(y_n) \leq C h_n^4$ with $\frac{1}{h_n^2} (R_n^T \nabla_n \tilde{y}_n - \Id) \rightharpoonup G$ in $L^2(\Omega; \mathbb{R}^{3 \times 3})$. Then, \[ \bar{G}_n \rightharpoonup \bar{G} := \begin{cases} GZ, & \text{if } \nu_n \to \infty, \\ PGZ + \frac{1}{2(\nu-1)} G_3, & \text{if } \nu_n \equiv \nu \in \mathbb{N}, \end{cases} \] in $L^2(\Omega; \mathbb{R}^{3 \times 8})$, where $G_3$ is as in Theorem \ref{thm:Gammalimit}. \end{proposition} \begin{proof} The compactness follows from Theorem \ref{thm:contcomp}. On a subsequence (not relabeled) we thus find $\bar{G}_n \rightharpoonup \bar{G}$. As $R_n \to \Id$ in $L^2$ while being uniformly bounded, we also find \begin{equation*} R_n\bar{G}_n = \frac{1}{h_n^2} (\bar{\nabla}_n \bar{y}_n - R_n Z) \rightharpoonup \bar{G}. \end{equation*} We have \[ \lim_{n \to \infty} \frac{1}{h_n^2} (R_n^T \nabla_n \tilde{y}_n - \Id) = \lim_{n \to \infty} \frac{1}{h_n^2} (\nabla_n \tilde{y}_n - R_n) = G, \] weakly in $L^2(\Omega; \mathbb{R}^{3 \times 3})$ where $G$ satisfies \eqref{eq:cont-strain-identify-i} and \eqref{eq:cont-strain-identify-ii}. In order to discuss the discrete strains in more detail, we separate affine and non-affine contributions. We say that a $b \in \mathbb{R}^8$ is affine if it is an element of the linear span of $b^0, b^1, b^2, b^3$, where $b^0 = (1, \ldots, 1)$ and $b^i = Z^T e_i$, $i = 1,2,3$. Any $b \in \mathbb{R}^8$ which is perpendicular to all affine vectors is called non-affine. I.e., a non-affine $b$ is characterized by $\sum_{i=1}^8 b_i =0$ and $Zb=0$. We begin by identifying the easier to handle affine part of the limiting strain. By construction we have $R_n \bar{G}_n b^0 \equiv 0$ and so $\bar{G} b^0 = 0 = G Z b^0$. For $i \in \{1, 2, 3\}$ we use that on any $\tilde{Q}_n(x)$, $x \in \tilde{\Lambda}'_n$, \[ \bar{\nabla}_n \bar{y}_n (x) b^1 = \frac{1}{2\varepsilon_n} \big( (y_2 + y_3 + y_6 + y_7) - ( y_1 + y_4 + y_5 + y_8) \big), \] where $y_i = \tilde{y}_n(x' + \varepsilon_n (z^{i})', x_3 + \tfrac{\varepsilon_n}{h_n} z^i_3)$. So, using \eqref{eq:interpolsurf} for $\tilde{y}_n$, \begin{align*} \bar{\nabla}_n \bar{y}_n (x) b^1 &= \frac{2}{\varepsilon_n} \Xint-_{x + \{-\frac{\varepsilon_n}{2}\} \times (-\frac{\varepsilon_n}{2}, \frac{\varepsilon_n}{2}) \times (-\frac{\varepsilon_n}{2h_n}, \frac{\varepsilon_n}{2h_n})} \tilde{y}_n(\xi + \varepsilon_n e_1) - \tilde{y}_n(\xi) \, d\xi \\ &= 2 \Xint-_{\tilde{Q}_n(x)} \partial_1 \tilde{y}_n(\xi) \, d\xi. \end{align*} Analogous arguments yield \begin{align*} \bar{\nabla}_n \bar{y}_n (x) b^2 &= 2 \Xint-_{\tilde{Q}_n(x)} \partial_2 \tilde{y}_n(\xi) \, d\xi \quad \mbox{and} \\ \bar{\nabla}_n \bar{y}_n (x) b^3 &= \frac{2}{h_n} \Xint-_{\tilde{Q}_n(x)} \partial_3 \tilde{y}_n(\xi) \, d\xi. \end{align*} By $P_n$ we denote the projection which maps functions to piecewise constant functions via $P_n f(x) = \Xint-_{\tilde{Q}_n(x)} f(\xi) \, d\xi$ on $\tilde{Q}_n(x)$. Then $P_n [ R_n\bar{G}_n ] \rightharpoonup \bar{G}$. 
On the other hand, observing that $Z Z^T = 2 \Id_{3\times 3}$, we find \begin{align*} P_n [ R_n\bar{G}_n ] b^i = \frac{2}{h_n^2} P_n \big[ \partial_i \tilde{y}_n - R_n e_i \big] \rightharpoonup 2 P G e_i = P G Z b^i, \quad i = 1,2 \end{align*} and \begin{align*} P_n [ R_n\bar{G}_n ] b^3 = \frac{2}{h_n^2} P_n \big[ h_n^{-1} \partial_3 \tilde{y}_n - R_n e_3 \big] \rightharpoonup 2 P G e_3 = P G Z b^3. \end{align*} In summary we get that for every affine $b \in \mathbb{R}^8$ \begin{align}\label{eq:Gbar-affine} \bar{G} b = P G Z b. \end{align} For the discussion of the non-affine part of the strain we fix a non-affine $b \in \mathbb{R}^8$, i.e., a $b$ satisfying $\sum_{i=1}^8 b_i =0$, $Zb=0$, and write $b^T= ((b^{(1)})^T, (b^{(2)})^T)$, where $b^{(1)}, b^{(2)} \in \mathbb{R}^4$. Let $Z^{\rm 2dim} := ((z^1)', (z^2)', (z^3)', (z^4)') \in \mathbb{R}^{2 \times 4}$ be the matrix of two-dimensional directions. Then $Z^{\rm 2dim} (b^{(1)} + b^{(2)})=0$ and $\sum_{i=1}^4 b^{(1)}_i = \sum_{i=1}^4 b^{(2)}_i =0$. We introduce the difference operator \[ \bar{\nabla}^{\rm 2dim} f (x) := \frac{1}{\varepsilon_n} \Big( f(x' + \varepsilon_n (z^i)', x_3) - \frac{1}{4} \sum_{j=1}^4 f(x' + \varepsilon_n (z^j)', x_3) \Big)_{i=1,2,3,4}. \] The idea is now to separate differences into in-plane and out-of-plane differences, as all in-plane differences are infinitesimal, while out-of-plane differences stay non-trivial if $\nu_n \equiv \nu$ and have to be treated more carefully. Using \begin{align*} \bar{\nabla}_n \bar{y}_n(x) &= \Big( \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big), \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big(x + \frac{\varepsilon_n}{2h_n}e_3 \big) \Big) \\ &\quad + \frac{1}{2 h_n} \Xint-_{\tilde{Q}_n(x)} \partial_3 \tilde y_n(\xi) \, d \xi \otimes (-1, -1, -1, -1, +1, +1, +1, +1) \end{align*} we find \begin{align} R_n \bar{G}_n(x) b &= \frac{1}{h_n^2} \bar{\nabla}_n \bar{y}_n(x) b\nonumber \\ &=\frac{1}{h_n^2} \Big( \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x + \frac{\varepsilon_n}{2h_n}e_3 \big) - \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big)\Big) b^{(2)} \label{eq:strainidentification1}\\ &\quad +\frac{1}{h_n^2} \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) (b^{(1)}+b^{(2)})\label{eq:strainidentification2}, \end{align} where we have used that $\sum_{i=1}^4 b^{(1)}_i = \sum_{i=1}^4 b^{(2)}_i = 0$. First consider the term \eqref{eq:strainidentification2}. Since $ \bar{\nabla}_n^{\rm 2dim} \bar{\id}(x - \frac{\varepsilon_n}{2h_n}e_3) = Z^{\rm 2dim}$ and $Z^{\rm 2dim} (b^{(1)}+b^{(2)}) = 0$, for any $\varphi \in C_c^\infty(\Omega)$ and $i=1,2$ by \eqref{eq:hat-u-conv} and Remark~\ref{rmk:convergence-notions} we have \begin{align} &\frac{1}{h_n^2} e_i^T \int_\Omega \bar{\nabla}_n^{\rm 2dim} (\bar{y}_n - \bar{\id}) \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) (b^{(1)}+b^{(2)}) \varphi(x)\,dx \notag \\ &~~= \frac{1}{h_n^2} e_i^T \int_\Omega (\bar{y}_n - \bar{\id}) \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) (\bar{\nabla}_n^{\rm 2dim})^* \varphi(x) (b^{(1)}+b^{(2)})\,dx \notag \\ &~~\to -\int_\Omega \hat{u}_i(\tilde{x}) \nabla' \varphi(x) Z^{\rm 2dim} (b^{(1)}+b^{(2)})\,dx = 0,\label{eq:strainpart212} \end{align} where, either $\tilde{x}=x$ (if $\nu_n \to \infty$), or $\tilde{x} = (x', \frac{\lfloor(\nu-1)x_3\rfloor}{\nu-1})$ (if $\nu_n =\nu$ is constant). 
For the third component, we instead have \begin{align*} \frac{1}{h_n^2} e_3^T &\int_\Omega \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) (b^{(1)}+b^{(2)}) \varphi(x)\,dx\\ &= \frac{1}{h_n \varepsilon_n (\nu_n-1)} e_3^T \int_\Omega \big( \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) \\ &\qquad\qquad\qquad\qquad\qquad - \nabla_n' \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) Z^{\rm 2dim} \big) (b^{(1)}+b^{(2)}) \varphi(x)\,dx\\ &= \frac{1}{(\nu_n-1)\varepsilon_n} \int_\Omega\frac{(\bar{y}_n)_3( x - \frac{\varepsilon_n}{2h_n}e_3)}{h_n} \big((\bar{\nabla}_n^{\rm 2dim})^\ast \varphi(x) \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad + \nabla_n' \varphi(x) Z^{\rm 2dim} \big) (b^{(1)}+b^{(2)}) \,dx. \end{align*} Now, \begin{align*} &\frac{1}{\varepsilon_n} \Big((\bar{\nabla}_n^{\rm 2dim})^\ast \varphi(x) + \nabla_n' \varphi(x) Z^{\rm 2dim} \Big) \\ &~~ \to \Big(\tfrac{1}{2}\nabla'^2 \varphi (x) [(z^i)',(z^i)'] - \tfrac{1}{8} \sum_{j=1}^4 \nabla'^2 \varphi (x) [(z^j)',(z^j)'] \Big)_{i=1,...,4} \end{align*} uniformly. Therefore, \eqref{eq:hat-v-conv} gives \begin{align}\label{eq:strainpart23infty} \frac{1}{h_n^2} e_3^T \int_\Omega \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) (b^{(1)}+b^{(2)}) \varphi(x)\,dx\to 0, \end{align} if $\nu_n \to \infty$. For $\nu_n = \nu$ constant however, using \eqref{eq:hat-v-conv} and \eqref{eq:hat-v-form} we find \begin{align} &\frac{1}{h_n^2} e_3^T \int_\Omega \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) (b^{(1)}+b^{(2)}) \varphi(x)\,dx \notag \\ &~~\to \frac{1}{(\nu-1)} \int_\Omega\hat{v} \big(x', \frac{\lfloor (\nu-1) x_3 \rfloor}{\nu-1} \big) \Big(\tfrac{1}{2}\nabla'^2 \varphi (x) [(z^i)',(z^i)']\Big)_{i=1,...,4} (b^{(1)}+b^{(2)}) \,dx \notag \\ &~~=\frac{1}{(\nu-1)} \int_\Omega \Big(\tfrac{1}{2}\nabla'^2 v(x') [(z^i)',(z^i)']\Big)_{i=1,...,4} (b^{(1)}+b^{(2)}) \varphi (x) \,dx, \label{eq:strainpart23finite} \end{align} where we have used that $\sum_{i=1}^8 b_i = 0$. We still need to find the limit of \eqref{eq:strainidentification1}. For any test function $\varphi \in C_c^\infty(\Omega; \mathbb{R}^3)$ we find \begin{align*} &\int_\Omega \frac{1}{h_n^2} \Big( \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x + \frac{\varepsilon_n}{2h_n}e_3 \big) - \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) \Big) b^{(2)} \cdot \varphi(x)\,dx\\ &~~= \frac{\varepsilon_n}{h_n} \int_\Omega \frac{1}{\varepsilon_n h_n} \Big( \bar{y}_n \big( x + \frac{\varepsilon_n}{2h_n}e_3) - \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n} e_3 \big) \Big) \cdot (\bar{\nabla}_n^{\rm 2dim})^{*} P_n \varphi(x)b^{(2)}\,dx\\ &~~= \frac{1}{h_n^2} \int_\Omega \Xint-_{ \tilde{Q}_n(x)} \Big( \bar{y}_n \big( \xi + \frac{\varepsilon_n}{2h_n}e_3) - \bar{y}_n \big( \xi - \frac{\varepsilon_n}{2h_n} e_3 \big) \Big) \, d\xi \cdot (\bar{\nabla}_n^{\rm 2dim})^{*} P_n \varphi(x)b^{(2)}\,dx\\ &~~= \frac{\varepsilon_n}{h_n^3} \int_\Omega \partial_3 \tilde{y}_n (x) \cdot (\bar{\nabla}_n^{\rm 2dim})^{*} P_n \varphi(x)b^{(2)}\,dx\\ &~~= \frac{\varepsilon_n}{h_n} \int_\Omega P_n A_n (x) e_3 \cdot (\bar{\nabla}_n^{\rm 2dim})^{*} \varphi(x)b^{(2)}\,dx. \end{align*} Here the penultimate step is true by our specific choice of interpolation to define $\tilde{y}_n$, whereas the last step follows from \eqref{eq:grad-yn-Id-A} and $\bar{\nabla}_n^{\rm 2dim} \frac{1}{h_n} e_3 = 0$. If $\nu_n \to \infty$ this converges to $0$. 
In case $\nu_n = \nu$ constant we obtain from \eqref{eq:grad-yn-Id-A} \begin{align} &\lim_{n \to \infty} \int_\Omega \frac{1}{h_n^2} \Big( \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x + \frac{\varepsilon_n}{2h_n}e_3 \big) - \bar{\nabla}_n^{\rm 2dim} \bar{y}_n \big( x - \frac{\varepsilon_n}{2h_n}e_3 \big) \Big) b^{(2)} \cdot \varphi(x)\,dx \notag \\ &~~= - \frac{1}{\nu-1} \int_\Omega P A (x) e_3 \cdot \nabla'\varphi(x) Z^{\rm 2dim} b^{(2)}\,dx \notag \\ &~~ = \frac{1}{\nu-1} \int_\Omega (\partial_1v(x'), \partial_2 v(x'), 0) \nabla'\varphi(x) Z^{\rm 2dim} b^{(2)}\,dx \notag \\ &~~ = - \frac{1}{\nu-1} \int_\Omega \begin{pmatrix} \nabla'^2 v(x') Z^{\rm 2dim} b^{(2)} \\ 0 \end{pmatrix} \cdot \varphi(x) \,dx. \label{eq:strainpart1} \end{align} Summarizing \eqref{eq:strainpart212}, \eqref{eq:strainpart23infty}, \eqref{eq:strainpart23finite}, and \eqref{eq:strainpart1}, we see that for non-affine $b$ we have $\bar{G} b = 0$ in case $\nu_n \to \infty$ and \begin{align*} \bar{G}b &= \begin{pmatrix} -\frac{1}{\nu-1}\nabla'^2 v(x') Z^{\rm 2dim} b^{(2)} \\ \frac{1}{\nu-1} \sum_{i=1}^4 \tfrac{1}{2}\nabla'^2 v(x') [(z^i)',(z^i)'] (b^{(1)} + b^{(2)})_i \end{pmatrix} \\ &= \begin{pmatrix} -\frac{1}{2(\nu-1)}\nabla'^2 v(x') Z^{\rm 2dim} (b^{(2)} - b^{(1)}) \\ \frac{1}{2(\nu-1)} \sum_{i=1}^8 \nabla'^2 v(x') [(z^i)',(z^i)'] b_i \end{pmatrix} - \frac{1}{8(\nu-1)} \Delta v(x') \sum_{j=1}^8 b_j e_3 \end{align*} as $\sum_{j=1}^8 b_j = 0$, if $\nu_n \equiv \nu$. Elementary computations show that for the affine basis vectors $b^k$, $k \in \{0,1,2,3\}$, \begin{align*} Z^{\rm 2dim} ((b^k)^2 - (b^k)^1) = 0 \end{align*} and also \begin{align*} \sum_{i=1}^8 \nabla'^2 v(x') [(z^i)',(z^i)'] b^k_i - \frac{1}{4} \Delta v(x') \sum_{j=1}^8 b^k_j = 0. \end{align*} Thus combining with \eqref{eq:Gbar-affine}, for every $b \in \mathbb{R}^8$ we get \[ \bar{G}b = G Z b \] if $\nu_n \to \infty$ and \[ \bar{G}b = P G Z b + \begin{pmatrix} -\frac{1}{2(\nu-1)}\nabla'^2 v(x') Z^{\rm 2dim} (b^{(2)} - b^{(1)}) \\ \frac{1}{2(\nu-1)} \sum_{i=1}^8 \nabla'^2 v(x') [(z^i)',(z^i)'] b_i \end{pmatrix} - \frac{1}{8(\nu-1)} \Delta v(x') \sum_{j=1}^8 b_j e_3. \] if $\nu_n = \nu$ is constant. So $\bar{G} = GZ$ if $\nu_n \to \infty$ and \begin{align*} \bar{G} = P G Z &-\frac{1}{2(\nu-1)} \begin{pmatrix} \nabla'^2 v(x') 0 \\ 0 & 0 \end{pmatrix} Z_- + \frac{1}{2(\nu-1)} e_3 \otimes ( \nabla'^2 v(x') [(z^i)',(z^i)'] )_{i=1,\ldots,8} \\ &\quad - \frac{1}{8(\nu-1)} \Delta v(x')) e_3 \otimes (1, \ldots, 1). \end{align*} with $Z_-$ as in \eqref{eq:Zminus-def} if $\nu_n = \nu$ is constant. Noting that \[ \nabla'^2 v(x') [(z^i)',(z^i)'] = \begin{cases} \frac{1}{4} (\partial_{11} v(x') + 2 \partial_{12} v(x') + \partial_{22} v(x')) & \text{if } i \in \{1,3,5,7\}, \\ \frac{1}{4} (\partial_{11} v(x') - 2 \partial_{12} v(x') + \partial_{22} v(x')) & \text{if } i \in \{2,4,6,8\}, \\ \end{cases} \] with $M$ as in \eqref{eq:M-def} this can be written as \begin{align*} \bar{G} = P G Z -\frac{1}{2(\nu-1)} \begin{pmatrix} \nabla'^2 v(x') & 0 \\ 0 & 0 \end{pmatrix} Z_- + \frac{1}{2(\nu-1)} \partial_{12} v(x') M. \end{align*} Last, we note that subsequences were indeed not necessary, as the limit is characterized uniquely. \end{proof} Having established convergence of the strain, the $\liminf$ inequality in Theorems~\ref{thm:Gammalimit}, \ref{thm:Gammalimit2} and \ref{thm:Gammalimit3} can now be shown by a careful Taylor expansion of $W(x, \cdot)$, cf.\ \cite{FJM:02,FJM:06,Sch:06}. 
\begin{proof}[Proof of the $\liminf$ inequality in Theorems~\ref{thm:Gammalimit}, \ref{thm:Gammalimit2} and \ref{thm:Gammalimit3}] The $\liminf$ inequality in Theorem~\ref{thm:Gammalimit3} is an immediate consequence of the $\liminf$ inequality in Theorem~\ref{thm:Gammalimit} applied to a cell energy $W_{\rm cell}'$ of the form \[ W_{\rm cell}'(A) = \begin{cases} W_{\rm cell}(A), & \text{if } \dist(A, \SO(3) Z) < \delta, \\ \dist^2(A, \SO(3) Z), & \text{if } \dist(A, \SO(3) Z) \ge \delta. \end{cases} \] Furthermore, in view of Proposition~\ref{prop:limiting-force} it suffices to establish the lower bound for $f_n = 0$. Assume that $(y_n)$ is a sequence of atomistic deformations such that \[ \sup_n E_n (y_n) < \infty \] so that by Proposition~\ref{prop:energy-estimates} its modification and interpolation $(\tilde{y}_n)$ verifies the assertions of Theorem \ref{thm:contcomp}. Set \begin{equation*} \bar{G}_n := \frac{1}{h_n^2} (R_n^T \bar{\nabla}_n \bar{y}_n - Z). \end{equation*} By frame indifference and nonnegativity of the cell energy we have \begin{align*} &\frac{\varepsilon_n^3}{h_n^5} E_n(y_n) \ge \frac{\varepsilon_n^3}{h_n^5} \sum_{ x \in (\tilde{\Lambda}_n')^{\circ}} W((x', h_n x_3), \bar{\nabla}_n \bar{y}_n(x)) \\ &~~= \frac{1}{h_n^4} \int_{\Omega^{\rm in}_n} W \big( \varepsilon_n ( \lfloor \tfrac{x_1}{\varepsilon_n} \rfloor + \tfrac{1}{2}, \lfloor \tfrac{x_2}{\varepsilon_n} \rfloor + \tfrac{1}{2}, \lfloor \tfrac{h_n x_3}{\varepsilon_n} \rfloor + \tfrac{1}{2}), Z + h_n^2 \bar{G}_n(x) \big) \, dx. \end{align*} First assume that $\nu_n \to \infty$ as $n \to \infty$. Due to nonnegativity of $W_{\rm surf}$ we can estimate \begin{align*} \frac{\varepsilon_n^3}{h_n^5} E_n(y_n) &\ge \frac{1}{h_n^4} \int_{\Omega} \chi_{n}(x) W_{\rm cell} (Z + h_n^2 \bar{G}_n(x)) \, dx \\ &= \int_{\Omega} \frac{1}{2} Q_{\rm cell} \big( \chi_{n}(x) \bar{G}_n(x) \big) - h_n^{-4} \chi_n(x) \omega \big( | h_n^2 \bar{G}_n(x) | \big) \, dx, \end{align*} where $\chi_n$ is the characteristic function of $\{ x \in \Omega^{\rm in}_n : \bar{G} \le h_n^{-1} \} \subset \Omega$ and \[ \omega(t) := \sup \big\{ | \tfrac{1}{2} Q_{\rm cell}(F) - W_{\rm cell}(Z + F) | : F \in \mathbb{R}^{3 \times 8} \text{ with } |F| \le t \big\} \] so that $t^{-2} \omega(t) \to 0$ as $t \to 0$. Since $\bar{G}_n^2$ is bounded in $L^1(\Omega; \mathbb{R}^{3 \times 8})$ and $\chi_n (h_n^2 \bar{G}_n)^{-2} \omega(h_n^2 \bar{G}_n)$ converges to $0$ uniformly, \[ h_n^{-4} \chi_n \omega \big( h_n^2 \bar{G}_n \big) = \bar{G}_n^2 \chi_n (h_n^2 \bar{G}_n)^{-2} \omega(h_n^2 \bar{G}_n) \to 0 \text{ in } L^1(\Omega; \mathbb{R}^{3 \times 8}). \] Moreover, $\chi_n \to 1$ boundedly in measure and so by Proposition \ref{prop:limiting-strain} $\chi_{n} \bar{G}_n \rightharpoonup \bar{G} = GZ$, where $G$ satisfies \eqref{eq:cont-strain-identify-i} and \eqref{eq:cont-strain-identify-ii}. By lower semicontinuity it follows that \begin{align*} \liminf_{n \to \infty} \frac{\varepsilon_n^3}{h_n^5} E_n(y_n) &\ge \frac{1}{2} \int_{\Omega} Q_{\rm cell} \big( \bar{G}(x) \big) \, dx \ge \frac{1}{2} \int_{\Omega} Q_{\rm cell}^{\rm rel} \big( \bar{G}(x) \big) \, dx \\ &= \frac{1}{2} \int_{\Omega} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} G_1(x') + (x_3 - \frac{1}{2}) G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z \bigg) \, dx. \end{align*} Integrating the last expression over $x_3 \in (0,1)$ and noting that the integral of the cross terms vanish we obtain \begin{align*} \liminf_{n \to \infty} \frac{\varepsilon_n^3}{h_n^5} E_n(y_n) &\ge E_{\rm vK}(u,v). 
\end{align*} Now suppose that $\nu_n \equiv \nu \in \mathbb{N}$. We let $\chi_n$ as above but now define \begin{align*} \omega(t) &:= \sup \big\{ | \tfrac{1}{2} Q_{\rm cell}(F) - W_{\rm cell}(Z + F) | : F \in \mathbb{R}^{3 \times 8} \text{ with } |F| \le t \big\} \\ &\quad + 2 \sup \big\{ | \tfrac{1}{2} Q_{\rm surf}(F) - W_{\rm surf}(Z^{(1)} + F) | : F \in \mathbb{R}^{3 \times 4} \text{ with } |F| \le t \big\} \end{align*} so that still $t^{-2} \omega(t) \to 0$ as $t \to 0$. With $\bar{G}(x) = (\bar{G}^{(1)}(x), \bar{G}^{(2)}(x))$ we have \begin{align*} &\liminf_{n \to \infty} \frac{\varepsilon_n^3}{h_n^5} E_n(y_n) \ge \frac{1}{2} \int_{\Omega} Q_{\rm cell} \big( \bar{G}(x) \big) \, dx \\ &\qquad+ \frac{1}{2(\nu-1)} \int_S Q_{\rm surf} \big( \bar{G}^{(1)}(x', \tfrac{1}{2(\nu-1)}) \big) + Q_{\rm surf} \big( \bar{G}^{(2)}(x', \tfrac{2\nu-3}{2\nu-2}) \big) \, dx, \end{align*} where we have used that $\bar{G}$ is constant on $S \times (0,\frac{1}{\nu-1})$ and on $S \times (\frac{\nu-2}{\nu-1},1)$. Here (see Eq.~\eqref{eq:G3def} for $G_3$), \begin{align*} \bar{G}^{(1)}(x', \tfrac{1}{2\nu-2}) &= \Xint-_0^{\frac{1}{\nu-1}} G(x',x_3) \, dx_3 \, Z^{(1)} + \tfrac{1}{2(\nu-1)} G_3^{(1)}(x'), \\ \bar{G}^{(2)}(x', \tfrac{2\nu-3}{2\nu-2}) &= \Xint-_{\frac{\nu-2}{\nu-1}}^1 G(x',x_3) \, dx_3 \, Z^{(2)} + \tfrac{1}{2(\nu-1)} G_3^{(2)}(x'). \end{align*} The bulk part is estimated as \begin{align*} &\frac{1}{2} \int_{-\frac{1}{2}}^{\frac{1}{2}} Q_{\rm cell} \big( \bar{G}(x) \big) \, dx_3 \\ &~~\ge \frac{1}{2(\nu-1)} \sum_{k = 1}^{\nu-1} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} \sym ( PG'' ) (x', \tfrac{2k - 1}{2\nu-2}) & 0 \\ 0 & 0 \end{pmatrix} Z + \tfrac{1}{2(\nu-1)} G_3(x') \bigg) \\ &~~= \frac{1}{2(\nu-1)} \sum_{k = 1}^{\nu-1} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} \sym G_1(x') + \tfrac{2k - \nu}{2\nu-2} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z + \tfrac{1}{2(\nu-1)} G_3(x') \bigg) \\ &~~= \frac{1}{2(\nu-1)} \sum_{k = 1}^{\nu-1} \bigg[ Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} \sym G_1(x') & 0 \\ 0 & 0 \end{pmatrix} Z + \tfrac{1}{2(\nu-1)} G_3(x') \bigg) \\ &~~\qquad\qquad\qquad + \tfrac{(2k - \nu)^2}{(2\nu-2)^2}Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z\bigg) \bigg] \\ &~~= \frac{1}{2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} \sym G_1(x') & 0 \\ 0 & 0 \end{pmatrix} Z + \tfrac{1}{2(\nu-1)} G_3(x') \bigg) + \tfrac{\nu(\nu-2)}{24(\nu-1)^2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z\bigg), \end{align*} where we have used that $\sum_{k = 1}^{\nu-1}\tfrac{(2k - \nu)^2}{(2\nu-2)^3} = \frac{\nu(\nu-2)}{24(\nu-1)^2}$. For the surface part first note that by \eqref{eq:Q-invariance}, for any $A = (a_{ij}) \in \mathbb{R}^{3 \times 3}$ and $B \in \mathbb{R}^{3 \times 4}$ we have \begin{align*} &Q_{\rm surf}(A Z^{(1)} + B) \\ &~~= Q_{\rm surf} \big( A Z^{(1)} + B + (a_{3\cdot} \otimes e_3 - e_3 \otimes a_{3\cdot}) Z^{(1)} + (a_{\cdot 3} + a_{3 \cdot}) \otimes (1,1,1,1) \big) \\ &~~= Q_{\rm surf} \bigg( \begin{pmatrix} A'' & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + B \bigg) = Q_{\rm surf} \bigg( \begin{pmatrix} \sym A'' & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + B \bigg), \end{align*} where $a_{\cdot 3}$ denotes the third column, $a_{3 \cdot}$ the third row and $A'' = (a_{ij})_{1 \le i,j \le 2}$ the upper left $2 \times 2$ part of $A$. 
Thus also \begin{align*} Q_{\rm surf}(A Z^{(2)} + B) &= Q_{\rm surf} \big( A Z^{(1)} + a_{\cdot 3} \otimes (1,1,1,1) + B \big) \\ &= Q_{\rm surf} \bigg( \begin{pmatrix} \sym A'' & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + B \bigg). \end{align*} It follows that \begin{align*} &Q_{\rm surf}(\bar{G}_1(x', \tfrac{1}{2\nu-2})) \\ &~~= Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') - \frac{\nu - 2}{2\nu - 2} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \tfrac{1}{2(\nu-1)} G_3^{(1)}(x') \bigg) \\ &~~= Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') - \frac{1}{2} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \frac{\partial_{12} v(x')}{4(\nu-1)} M^{(1)} \bigg), \\ &Q_{\rm surf}(\bar{G}_2(x', \tfrac{2\nu - 3}{2\nu-2})) \\ &~~= Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') + \frac{\nu - 2}{2\nu - 2} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \tfrac{1}{2(\nu-1)} G_3^{(2)}(x') \bigg) \\ &~~= Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') + \frac{1}{2} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \frac{\partial_{12} v(x')}{4(\nu-1)} M^{(1)} \bigg) \bigg), \end{align*} and so \begin{align*} &Q_{\rm surf}(\bar{G}_1(x', \tfrac{1}{2\nu-2})) + Q_{\rm surf}(\bar{G}_2(x', \tfrac{2\nu - 3}{2\nu-2})) \\ &~~= 2 Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \frac{\partial_{12} v(x')}{4(\nu-1)} M^{(1)} \bigg) \\ &\qquad + \tfrac{1}{2} Q_{\rm surf} \bigg( \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)}\bigg), \end{align*} Adding bulk and surface contributions and integrating over $x'$ we arrive at \begin{align*} \liminf_{n \to \infty} \frac{\varepsilon_n^3}{h_n^5} E_n(y_n) &\ge \int_S \frac{1}{2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} \sym G_1(x') & 0 \\ 0 & 0 \end{pmatrix} Z + \tfrac{1}{2(\nu-1)} G_3(x') \bigg) \\ &~~\qquad + \tfrac{\nu(\nu-2)}{24(\nu-1)^2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z\bigg) \\ &~~\qquad + \tfrac{1}{\nu-1} Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \frac{\partial_{12} v(x')}{4(\nu-1)} M^{(1)} \bigg) \\ &~~\qquad + \tfrac{1}{4(\nu-1)} Q_{\rm surf} \bigg( \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)}\bigg) \, dx' \\ &= E_{\rm vK}^{(\nu)}(u, v). \qedhere \end{align*} \end{proof} \subsection{Upper bounds} Without loss of generality we assume that $R^* = \Id$. (For general $R^*$ one just considers the sequence $R^* y_n$ with $y_n$ as in \eqref{eq:recovery-ansatz} and $R^*_n = R^*$ below. If $u : S \to \mathbb{R}^2$ and $v : S \to \mathbb{R}$ are smooth up to the boundary, we choose a smooth extension to a neighborhood of $S$ and define the lattice deformations $y_n : \tilde{\bar{\Lambda}}_n \to \mathbb{R}^3$ by restricting to $\tilde{\bar{\Lambda}}_n$ the mapping $y_n : \overline{\tilde{\Omega}_n^{\rm out}} \to \mathbb{R}^3$, defined by \begin{align}\label{eq:recovery-ansatz} y_n(x) = \begin{pmatrix} x' \\ h_n x_3 \end{pmatrix} + \begin{pmatrix} h_n^2 u(x') \\ h_n v(x') \end{pmatrix} - h_n^2 (x_3 - \tfrac{1}{2}) \begin{pmatrix} (\nabla' v(x'))^T \\ 0 \end{pmatrix} + h_n^3 d(x', x_3) \end{align} for all $x \in \overline{\tilde{\Omega}_n^{\rm out}}$. Here $d : \overline{\tilde{\Omega}_n^{\rm out}} \to \mathbb{R}^3$ will be determined later, see \eqref{eq:d-choose-thick} and \eqref{eq:d-choose-thin} for films with many, respectively, a bounded number of layers. In both cases, $d$ is smooth and bounded in $W^{1,\infty}(\overline{\tilde{\Omega}_n^{\rm out}}; \mathbb{R}^3)$ uniformly in $n$. 
We let $R^*_n = \Id$ and $c_n = 0$ for all $n$ and define $\tilde{y}_n \in W^{1,2}(\tilde{\Omega}_n^{\rm out}; \mathbb{R}^3)$ as in \eqref{eq:yn-tilde-def} by interpolating as in Section~\ref{sec:preparations} (more precisely, descaling to $w_n$ and then interpolating and rescaling) to obtain $\tilde{y}_n=\dtilde{y}_n$. Analogously we let $\bar{y} = \dbar{y}$. We define $u_n$ and $v_n$ as in \eqref{eq:un-def} and \eqref{eq:vn-def}, respectively. It is straightforward to check that indeed $u_n \to u$ in $W^{1,2}(S; \mathbb{R}^2)$ and $v_n \to v$ in $W^{1,2}(S)$. In order to estimate the energy of $y_n$ we need to compute its discrete gradient. Instead of directly calculating $\bar{\nabla} \bar{y}_n = (\bar{\partial}_1 \bar{y}_n, \ldots, \bar{\partial}_8 \bar{y}_n)$ it is more convenient to first determine $\bar{D} y_n = (\bar{D}_1 y_n, \ldots, \bar{D}_8 y_n)$ which for each $x \in \tilde{\Lambda}'_n - (\frac{\varepsilon_n}{2}, \frac{\varepsilon_n}{2}, \frac{\varepsilon_n}{2 h_n})$ is defined by \begin{align*} \bar{D}_i y_n(x) &= \frac{1}{\varepsilon_n} \big[ y_n \big( \hat{x} + \varepsilon_n ( (a^i)', h_n^{-1} a^i_3) \big) - y_n (\hat{x}) \big], \end{align*} where for $x \in \tilde{\Omega}_n^{\rm out}$ we have set \[ \hat{x} = \big( \varepsilon_n \lfloor \tfrac{x_1}{\varepsilon_n} \rfloor, \varepsilon_n \lfloor \tfrac{x_2}{\varepsilon_n} \rfloor, \tfrac{\lfloor (\nu_n-1) x_3 \rfloor}{\nu_n-1} \big), \] so that $\tilde{Q}_n(x) = \hat{x} + (0,\varepsilon_n)^2 \times (0, (\nu_n-1))$. We set $a^i = \frac{1}{2}(1,1,1)^T + z^i \in \{0,1\}^3$ and write $A := (a^1, \ldots, a^8) = Z + \frac{1}{2}(1,1,1)^T \otimes (1, 1, 1, 1, 1, 1, 1, 1)^T$. Note that \begin{align}\label{eq:D-Nabla-Umrechnung} \bar{D}_i y_n(x) = \bar{\partial}_i \bar{y}_n(x) - \bar{\partial}_1 \bar{y}_n(x) \quad\mbox{and}\quad \bar{\partial}_i \bar{y}_n(x) = \bar{D}_i y_n(x) - \frac{1}{8} \sum_{j=1}^8 \bar{D}_j y_n(x). \end{align} In particular, if $\bar{D} y_n(x)$ is affine, i.e., $\bar{D} y_n(x) = F A$ for some $F \in \mathbb{R}^{3 \times 3}$, then \begin{align}\label{eq:affine-Umrechnung} \bar{\partial}_i \bar{y}_n(x) &= F a^i - \frac{1}{8} \sum_{j=1}^8 F a^j = F \Big( a^i - \frac{1}{2}(1,1,1)^T \Big) = F z^i \end{align} and so $\bar{\nabla} \bar{y}_n(x) = F Z$. For $x$ in a fixed cell $\tilde{Q}_n(x) = \hat{x} + (0,\varepsilon_n)^2 \times (0, (\nu_n-1))$, Taylor expansion of $y_n$ (restricted to $\overline{\tilde{Q}}_n(x)$) yields \begin{align*} \bar{D}_i y_n(x) &= \nabla' y_n(\hat{x}) (a^i)' + h_n^{-1} \partial_3 y_n(\hat{x}) a^i_3 + \frac{\varepsilon_n}{2} (\nabla')^2 y_n(\hat{x}) [(a^i)', (a^i)'] \\ &\qquad + \varepsilon_n h_n^{-1} \sum_{j=1}^2 \partial_{j3} y_n(\hat{x}) a^i_j a^i_3 + \frac{\varepsilon_n h_n^{-2}}{2} \partial_{33} y_n(\hat{x}) (a^i_3)^2 \\ &\qquad + \frac{\varepsilon_n^{2}}{6} \nabla^3 \big( (y_n)_1 (\zeta^1_{\varepsilon_n}), (y_n)_2 (\zeta^2_{\varepsilon_n}), (y_n)_2 (\zeta^2_{\varepsilon_n}) \big)^T \\ &\qquad \qquad \qquad [((a^i)', h_n^{-1} a^i_3), ((a^i)', h_n^{-1} a^i_3), ((a^i)', h_n^{-1} a^i_3)] \end{align*} for some $\zeta_{\varepsilon_n} \in \hat{x} + [0,\varepsilon_n]^2\times[0, \varepsilon_n h_n^{-1}]$. 
Plugging in \eqref{eq:recovery-ansatz} we get \begin{align*} \bar{D}_i y_n(x) &= \bigg( \begin{pmatrix} \Id_{2\times2} \\ 0 \end{pmatrix} + \begin{pmatrix} h_n^2 \nabla' u(\hat{x}') \\ h_n \nabla' v(\hat{x}') \end{pmatrix} \\ &\qquad - h_n^2 (\hat{x}_3 - \tfrac{1}{2}) \begin{pmatrix} \nabla' (\nabla' v(\hat{x}'))^T \\ 0 \end{pmatrix} + h_n^3 \nabla' d(\hat{x}) \bigg) (a^i)' \\ &\qquad + h_n^{-1} \left( \begin{pmatrix} 0 \\ h_n \end{pmatrix} + 0 - h_n^2 \begin{pmatrix} (\nabla' v(\hat{x}'))^T \\ 0 \end{pmatrix} + h_n^3 \partial_3 d(\hat{x}) \right) a^i_3 \\ &\qquad + \frac{\varepsilon_n h_n}{2} \begin{pmatrix} 0 \\ (\nabla')^2 v(\hat{x}') [(a^i)', (a^i)'] \end{pmatrix} + O(\varepsilon_n h_n^2) \\ &\qquad - \varepsilon_n h_n \begin{pmatrix} \nabla' (\nabla' v(\hat{x}'))^T \\ 0 \end{pmatrix} (a^i)' a^i_3 + O(\varepsilon_n h_n^2) \\ &\qquad + \frac{\varepsilon_n h_n}{2} \partial_{33} d(\hat{x}) (a^i_3)^2 \\ &\qquad + \frac{\varepsilon_n^2}{6} \partial_{333} \big( d_1(\zeta^1_{\varepsilon_n}), d_2(\zeta^2_{\varepsilon_n}), d_3(\zeta^3_{\varepsilon_n}) \big)^T (a^i_3)^3 + O(\varepsilon_n^2 h_n). \end{align*} It follows that \begin{align*} \bar{D}_i y_n(x) &= \bigg( \Id_{3\times3} + h_n \begin{pmatrix} h_n \nabla' u(\hat{x}') & - (\nabla' v(\hat{x}'))^T \\ \nabla' v(\hat{x}') & 0 \end{pmatrix} \\ &\qquad - h_n^2 (\hat{x}_3 - \tfrac{1}{2}) \begin{pmatrix} (\nabla')^2 v(\hat{x}') & 0 \\ 0 & 0 \end{pmatrix} + h_n^2 \begin{pmatrix} 0_{3\times2} & \partial_3 d(\hat{x}) \end{pmatrix} \bigg) a^i \\ &\qquad + \varepsilon_n h_n \left( \begin{pmatrix} - (\nabla')^2 v(\hat{x}') (a^i)' a^i_3 \\ \frac{1}{2} (\nabla')^2 v(\hat{x}') [(a^i)', (a^i)'] \end{pmatrix} + \tfrac{1}{2} \partial_{33} d(\hat{x}) (a^i_3)^2 \right) \\ &\qquad + \frac{\varepsilon_n^2}{6} \partial_{333} \big( d_1(\zeta^1_{\varepsilon_n}), d_2(\zeta^2_{\varepsilon_n}), d_3(\zeta^3_{\varepsilon_n}) \big)^T (a^i_3)^3 + O(\varepsilon_n h_n^2 + \varepsilon_n^2 h_n). \end{align*} We define the skew symmetric matrix $B(\hat{x}) = B_n(\hat{x})$ by \begin{align*} B(\hat{x}) &= \begin{pmatrix} \frac{h_n^2}{2} ( \nabla' u(\hat{x}') - ( \nabla' u(\hat{x}'))^T ) & - h_n (\nabla' v(\hat{x}'))^T \\ h_n \nabla' v(\hat{x}) & 0 \end{pmatrix} \\ &\qquad + \frac{h_n^2}{2} \begin{pmatrix} 0_{2\times2} & \partial_3 d'(\hat{x}) \\ - (\partial_3 d'(\hat{x}))^T & 0 \end{pmatrix}, \end{align*} where we have written $d' = (d_1, d_2)^T$ for $d = (d_1, d_2, d_3)^T$, and consider the special orthogonal matrix \begin{align*} {\mathrm e}^{-B(\hat{x})} &= \Id_{3\times3} - B(\hat{x}) + \frac{1}{2} B^2(\hat{x}) + O(|B(\hat{x})|^3) \\ &= \Id_{3\times3} - h_n \begin{pmatrix} 0_{2\times2} & - (\nabla' v(\hat{x}'))^T \\ \nabla' v(\hat{x}') & 0 \end{pmatrix} \\ &\qquad - \frac{h_n^2}{2} \begin{pmatrix} \nabla' u(\hat{x}') - ( \nabla' u(\hat{x}'))^T + \nabla' v(\hat{x}') \otimes \nabla' v(\hat{x}') & \partial_3 d'(\hat{x}) \\ - (\partial_3 d'(\hat{x}))^T & |\nabla' v(\hat{x}')|^2 \end{pmatrix} \\ &\qquad + O(|h_n|^3). 
\end{align*} Now compute \begin{align}\label{eq:expBDy} \begin{split} &{\mathrm e}^{-B(\hat{x})} \bar{D}_i y_n(x) = \bar{D}_i y_n(x) \\ &\qquad - h_n \begin{pmatrix} 0_{2\times2} & - (\nabla' v(\hat{x}'))^T \\ \nabla' v(\hat{x}') & 0 \end{pmatrix} \bigg( \Id_{3\times3} + h_n \begin{pmatrix} 0_{2\times2} & - (\nabla' v(\hat{x}'))^T \\ \nabla' v(\hat{x}') & 0 \end{pmatrix} \bigg) a^i \\ &\qquad - \frac{h_n^2}{2} \begin{pmatrix} \nabla' u(\hat{x}') - ( \nabla' u(\hat{x}'))^T + \nabla' v(\hat{x}') \otimes \nabla' v(\hat{x}') & \partial_3 d'(\hat{x}) \\ - (\partial_3 d'(\hat{x}))^T & |\nabla' v(\hat{x}')|^2 \end{pmatrix} a^i \\ &\qquad + O(h_n^3 + \varepsilon_n h_n^2 + \varepsilon_n^2 h_n) \end{split}\notag \\ \begin{split} &= \bigg( \Id_{3\times3} + h_n^2 \begin{pmatrix} {\rm sym} \nabla' u(\hat{x}') + \frac{1}{2} \nabla' v(\hat{x}') \otimes \nabla' v(\hat{x}') & 0 \\ 0 & \frac{1}{2} |\nabla' v(\hat{x}')|^2 \end{pmatrix} \\ &\qquad - h_n^2 (\hat{x}_3 - \tfrac{1}{2}) \begin{pmatrix} (\nabla')^2 v(\hat{x}') & 0 \\ 0 & 0 \end{pmatrix} + h_n^2 \begin{pmatrix} 0_{2\times2} & \frac{1}{2} \partial_3 d'(\hat{x}) \\ \frac{1}{2} (\partial_3 d'(\hat{x}))^T & \partial_3 d_3(\hat{x}) \end{pmatrix} \bigg) a^i \\ &\qquad + \varepsilon_n h_n \left( \begin{pmatrix} - (\nabla')^2 v(\hat{x}') (a^i)' a^i_3 \\ \frac{1}{2} (\nabla')^2 v(\hat{x}') [(a^i)', (a^i)'] \end{pmatrix} + \tfrac{1}{2} \partial_{33} d(\hat{x}) (a^i_3)^2 \right) \\ &\qquad + \frac{\varepsilon_n^2}{6} \partial_{333} \big( d_1(\zeta^1_{\varepsilon_n}), d_2(\zeta^2_{\varepsilon_n}), d_3(\zeta^3_{\varepsilon_n}) \big)^T (a^i_3)^3 + O(h_n^3 + \varepsilon_n h_n^2 + \varepsilon_n^2 h_n). \end{split} \end{align} Here, the error term is uniform in $\hat{x}$. We can now conclude the proof of Theorems~\ref{thm:Gammalimit}, \ref{thm:Gammalimit2} and \ref{thm:Gammalimit3}. \begin{proof}[ Proof of the $\limsup$ inequality in Theorems~\ref{thm:Gammalimit}, \ref{thm:Gammalimit2} and \ref{thm:Gammalimit3}] As the discrete gradient $\bar{\nabla}_n \bar{y}_n$ is uniformly close to $\SO(3) Z$, the following arguments apply to show that $y_n$ defined by \eqref{eq:recovery-ansatz} serves as a recovery sequence in all three theorems. Moreover, in view of Proposition~\ref{prop:limiting-force} it suffices to construct recovery sequences for $f_n = 0$. We first specialize now to the case $\nu_n \to \infty$. For \begin{align}\label{eq:recov-G-def-thick} \begin{split} G(x) &= G_1(x') + (x_3 - \tfrac{1}{2}) G_2(x') \\ &= {\rm sym} \nabla' u(x') + \tfrac{1}{2} \nabla' v(x') \otimes \nabla' v(x') - (x_3 - \tfrac{1}{2}) (\nabla')^2 v(x'). 
\end{split} \end{align} choosing $d(x) = x_3 d_0(x') + \frac{x_3^2 - x_3}{2} d_1(x')$ with \begin{align}\label{eq:d-choose-thick} \begin{split} d_0(x') &= \argmin_{b \in \mathbb{R}^3} Q_{\rm cell} \bigg[ \begin{pmatrix} G_1( x') & 0 \\ 0 & \tfrac{1}{2} |\nabla' v(x')|^2 \end{pmatrix} Z + (b \otimes e_3) Z\bigg], \\ d_1(x') &= \argmin_{b \in \mathbb{R}^3} Q_{\rm cell} \bigg[ \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z + (b \otimes e_3) Z\bigg] \end{split} \end{align} according to \eqref{eq:bmin-Q3}, from \eqref{eq:affine-Umrechnung} and \eqref{eq:expBDy} we obtain \begin{align*} {\mathrm e}^{-B(\hat{x})} \bar{\nabla} \bar{y}_n(x) &= \bigg( \Id_{3\times3} + h_n^2 \begin{pmatrix} G(\hat{x}) & 0 \\ 0 & \frac{1}{2} |\nabla' v(\hat{x}')|^2 \end{pmatrix} \\ &\qquad + h_n^2 {\rm sym} \big( ( d_0(\hat{x}) + (\hat{x}_3 - \tfrac{1}{2}) d_1(\hat{x}) ) \otimes e_3 \big) \bigg) Z + O(h_n^3 + \varepsilon_n h_n) \end{align*} and, Taylor expanding $W_{\rm cell}$, we see that due to the smoothness of $u$ and $v$ the piecewise constant mappings $x \mapsto h_n^{-4} W_{\rm cell} (\bar{\nabla}\bar{y}_n(x)) = h_n^{-4} W_{\rm cell} (e^{-B(\hat{x})} \bar{\nabla} \bar{y}_n(x))$ converge uniformly to \begin{align*} \frac{1}{2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} G & 0 \\ 0 & 0 \end{pmatrix} Z \bigg) = \frac{1}{2} Q_2(G). \end{align*} This shows that \begin{align*} \lim_{n \to \infty} h_n^{-4} E_n(y_n) &= \frac{1}{2} \int_S Q_2(G(x)) \, dx \\ &= \int_S \frac{1}{2} Q_2(G_1(x')) + \frac{1}{24} Q_2(G_2(x')) \, dx' = E_{\rm vK}(u,v) \end{align*} and thus finishes the proof in case $\nu_n \to \infty$. \medskip Now suppose that $\frac{\varepsilon_n}{h_n} \equiv \frac{1}{\nu-1}$. Abbreviating $(\nabla')^2 v(\hat{x}') = -G_2(\hat{x}') = -G_2 = (f_{ij}) \in \mathbb{R}^{2 \times 2}$, we observe that \begin{align*} &\begin{pmatrix} 2G_2 (a^i)' a^i_3 \\ -(a^i)'^T G_2 (a^i)' \end{pmatrix}_{i = 1, \ldots, 8} \\ &~~= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & -2f_{11} & -2f_{11} - 2f_{12} & -2f_{12} \\ 0 & 0 & 0 & 0 & 0 & -2f_{21} & -2f_{21} - 2f_{22} & -2f_{22} \\ 0 & f_{11} & \sum_{\mu,\nu} f_{\mu\nu} & f_{22} & 0 & f_{11} & \sum_{\mu,\nu} f_{\mu\nu} & f_{22}\end{pmatrix}, \end{align*} and hence, with $b = b(\hat{x}') = \big( (\partial_{11}+\partial_{12}) v(\hat{x}'), (\partial_{21}+\partial_{22}) v(\hat{x}'), 0 \big)^T = (f_{11} + f_{12}, f_{21} + f_{22}, 0)^T$, \begin{align*} &\begin{pmatrix} 2G_2 (a^i)' a^i_3 \\ -(a^i)'^T G_2 (a^i)' \end{pmatrix}_{i = 1, \ldots, 8} - (e_3 \otimes b - b \otimes e_3) A \\ &\qquad = \begin{pmatrix} 0 & 0 & 0 & 0 & f_{11} + f_{12} & -f_{11} + f_{12} & -f_{11} - f_{12} & +f_{11} - f_{12} \\ 0 & 0 & 0 & 0 & f_{21} + f_{22} & -f_{21} + f_{22} & -f_{21} - f_{22} & f_{21} - f_{22} \\ 0 & -f_{12} & 0 & -f_{21} & 0 & -f_{12} & 0 & -f_{21}\end{pmatrix}, \\ &\qquad = \begin{pmatrix} G_2 & 0 \\ 0 & 0 \end{pmatrix} ( Z + Z_-) + \frac{1}{2} f_{12} \big( 2 M - e_3 \otimes (1, \ldots, 1) \big) \\ &\qquad = \begin{pmatrix} G_2 & 0 \\ 0 & 0 \end{pmatrix} A - \frac{1}{2} b \otimes (1, \ldots, 1) + \begin{pmatrix} G_2 & 0 \\ 0 & 0 \end{pmatrix} Z_- + \frac{f_{12}}{2} \big( 2 M - e_3 \otimes (1, \ldots, 1) \big). 
\end{align*} This shows that \begin{align*} &\begin{pmatrix} -(\nabla')^2 v(\hat{x}') (a^i)' a^i_3 \\ \frac{1}{2} (\nabla')^2 v(\hat{x}') [(a^i)', (a^i)'] \end{pmatrix}_{i = 1, \ldots, 8} \\ &~~= \frac{1}{2} \bigg( e_3 \otimes b - b \otimes e_3 + \begin{pmatrix} G_2 & 0 \\ 0 & 0 \end{pmatrix} \bigg) A - \frac{1}{4} ( b + e_3 ) \otimes (1, \ldots, 1) \\ &\qquad + \frac{1}{2} \begin{pmatrix} G_2 & 0 \\ 0 & 0 \end{pmatrix} Z_- + \frac{1}{2} f_{12} M. \end{align*} We define the affine part of the strain $G(x) = G_1(x') + (x_3 - \tfrac{1}{2}) G_2(x')$ as in \eqref{eq:recov-G-def-thick}. The non-affine part is abbreviated by $\frac{1}{2(\nu - 1)} G_3(x')$ as in \eqref{eq:G3def}. Then using \eqref{eq:expBDy} we can write \begin{align*} &{\mathrm e}^{-B(\hat{x})} \bar{\nabla} \bar{y}_n(x) \\ &= \bigg[ \Id_{3\times3} + h_n^2 \begin{pmatrix} G(\hat{x}',\hat{x}_3 + \tfrac{1}{2(\nu-1)}) & 0 \\ 0 & \frac{1}{2} |\nabla' v(\hat{x}')|^2 \end{pmatrix} + h_n^2 {\rm sym} \big( \partial_3 d(\hat{x}) ) \otimes e_3 \big) \\ &\qquad + \frac{h_n^2}{2(\nu - 1)} \big( e_3 \otimes b(\hat{x}') - b(\hat{x}') \otimes e_3 \big) \bigg] Z + \frac{h_n^2}{2(\nu - 1)}G_3(\hat{x}') + O(h_n^3) \\ &\qquad + \Big[ \frac{\varepsilon_n h_n}{2} \partial_{33} d(\hat{x}) + \frac{\varepsilon_n^2}{6} \partial_{333} \big( d_1(\zeta^1_{\varepsilon_n}), d_2(\zeta^2_{\varepsilon_n}), d_3(\zeta^3_{\varepsilon_n}) \big)^T \Big] \otimes (z^1_3, \ldots, z^8_3), \end{align*} where we have used \eqref{eq:affine-Umrechnung} and \eqref{eq:D-Nabla-Umrechnung}. We set \begin{align*} d_0(x') &= \argmin_{d \in \mathbb{R}^3} Q_{\rm cell} \bigg[ \begin{pmatrix} G_1(x') & 0 \\ 0 & \tfrac{1}{2} |\nabla' v(x')|^2 \end{pmatrix} Z + {\rm sym} (d \otimes e_3) Z \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \frac{1}{2(\nu - 1)} G_3(x') \bigg], \\ d_1(x') &= \argmin_{d \in \mathbb{R}^3} Q_{\rm cell} \bigg[ \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z + {\rm sym} (d \otimes e_3) Z\bigg] \end{align*} according to \eqref{eq:bmin-Q3} and define $d : S' \times [0,1] \to \mathbb{R}$, $S'$ a neighborhood of $S$, inductively by $d(x,0) = 0$ and \begin{align}\label{eq:d-choose-thin} \begin{split} d(x', \tfrac{j-1}{\nu-1} + t) = d(x', \tfrac{j-1}{\nu-1}) + t d_0(x') + t \tfrac{2j - \nu}{2(\nu-1)} d_1(x') \quad \mbox{if} \quad t \in [\tfrac{j-1}{\nu-1}, \tfrac{j}{\nu-1}], \end{split} \end{align} for $j = 1, \ldots, \nu-1$. Then $d$ is smooth in $x'$ and piecewise linear in $x_3$, more precisely, affine in $x_3$ in between two atomic layers: On $S' \times [\frac{j-1}{\nu-1}, \frac{j}{\nu-1}]$, $j \in \{ 1, \ldots, \nu-1 \}$, it satisfies \[ \partial_3 d(x) = d_0(x') + \tfrac{2j-\nu}{2(\nu-1)} d_1(x') = d_0(x') + (\hat{x}_3 - \tfrac{1}{2} + \tfrac{1}{2(\nu-1)}) d_1(x') \] since $\hat{x}_3 = \hat{x}_3(x) = \tfrac{j-1}{\nu-1}$. Taylor expanding $W_{\rm cell}$, we see that the piecewise constant mappings $x \mapsto h_n^{-4} W_{\rm cell} (\bar{\nabla} \bar{y}_n(x)) = h_n^{-4} W_{\rm cell} (e^{-B(\hat{x})} \bar{\nabla} \bar{y}_n(x))$ converge uniformly on $S' \times [\frac{j-1}{\nu-1}, \frac{j}{\nu-1}]$ to \begin{align*} \frac{1}{2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} G_1(x') + \tfrac{2j-\nu}{2(\nu-1)} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z + \frac{1}{2(\nu - 1)} G_3(x') \bigg) \end{align*} for each $j \in \{ 1, \ldots, \nu-1 \}$. 
Since $\tfrac{1}{\nu-1} \sum_{j = 1}^{\nu-1} \tfrac{2j-\nu}{2(\nu-1)} = 0$ and $\tfrac{1}{\nu-1} \sum_{j = 1}^{\nu-1} \big( \tfrac{2j-\nu}{2(\nu-1)} \big)^2 = \tfrac{\nu(\nu-2)}{12(\nu-1)^2}$, this shows \begin{align}\label{eq:bulk-contr} \begin{split} \frac{1}{h_n^{4}} \int_{\tilde{\Omega}_n^{\rm out}} W_{\rm cell} (\bar{\nabla}\bar{y}_n(x)) \, dx &\to \int_S \frac{1}{2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} G_1(x') & 0 \\ 0 & 0 \end{pmatrix} Z + \frac{1}{2(\nu - 1)} G_3(x') \bigg) \\ &\qquad \qquad + \frac{\nu(\nu-2)}{24(\nu-1)^2} Q_{\rm cell}^{\rm rel} \bigg( \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z \bigg) \, dx'. \end{split} \end{align} For the surface part we write $\bar{\nabla}\bar{y}_n = ([\bar{\nabla}\bar{y}_n]^{(1)}, [\bar{\nabla}\bar{y}_n]^{(2)})$ and use that the piecewise constant mappings $S' \times [0, \frac{1}{\nu-1}] \to \mathbb{R}$, \[x \mapsto h_n^{-4} W_{\rm surf} ([\bar{\nabla}\bar{y}_n(x)]^{(1)}) = h_n^{-4} W_{\rm surf} ([{\mathrm e}^{-B(\hat{x})} \bar{\nabla}\bar{y}_n(x)]^{(1)}),\] converge uniformly to \begin{align*} &\frac{1}{2} Q_{\rm surf} \bigg( \begin{pmatrix} G_1(x') - \tfrac{\nu - 2}{2(\nu-1)} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z + \frac{1}{2(\nu - 1)} G_3(x') \bigg) \\ &~~= \frac{1}{2} Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') - \frac{1}{2} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \frac{\partial_{12} v(x')}{4(\nu-1)} M^{(1)} \bigg). \end{align*} Similarly, the mappings $S' \times [\frac{\nu-2}{\nu-1}, 1] \to \mathbb{R}$, \[x \mapsto h_n^{-4} W_{\rm surf} ([\bar{\nabla}\bar{y}_n(x)]^{(2)}) = h_n^{-4} W_{\rm surf} ([{\mathrm e}^{-B(\hat{x})} \bar{\nabla}\bar{y}_n(x)]^{(2)}),\] converge uniformly to \begin{align*} \frac{1}{2} Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') + \frac{1}{2} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \frac{\partial_{12} v(x')}{4(\nu-1)} M^{(1)} \bigg). \end{align*} So with $S^{\rm out}_n$ such that $\tilde{\Omega}^{\rm out}_n = S^{\rm out}_n \times (0,1)$, \begin{align}\label{eq:surf-contr} \begin{split} &\frac{1}{h_n^{4}(\nu-1)} \int_{ S^{\rm out}_n} W_{\rm surf} \big( [\bar{\nabla}\bar{y}_n(x',\tfrac{1}{2(\nu-1)})]^{(1)} \big) + W_{\rm surf} \big( [\bar{\nabla}\bar{y}_n(x',\tfrac{2\nu - 3}{2(\nu-1)})]^{(2)} \big) \, dx' \\ &~~\to \int_S \tfrac{1}{\nu-1} Q_{\rm surf} \bigg( \begin{pmatrix} \sym G_1(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} + \frac{\partial_{12} v(x')}{4(\nu-1)} M^{(1)} \bigg) \\ &\qquad\qquad\qquad\qquad\qquad + \frac{1}{4(\nu-1)} Q_{\rm surf} \bigg( \begin{pmatrix} G_2(x') & 0 \\ 0 & 0 \end{pmatrix} Z^{(1)} \bigg) dx'. \end{split} \end{align} Combining \eqref{eq:surf-contr} and \eqref{eq:bulk-contr}, we have shown that \begin{align*} \lim_{n \to \infty} h_n^{-4} E_n(y_n) = \lim_{n \to \infty} \varepsilon_n^3 h_n^{-5} \sum_{x \in \tilde{\Lambda}_n'} W \big( x, \bar{\nabla}y_n(x) \big) = E_{\rm vK}^{(\nu)} (u, v), \end{align*} where we have also used that the contribution of the lateral boundary cells $\varepsilon_n^3 h_n^{-5} \sum_{x \in \partial \tilde{\Lambda}_n'} W(x, \bar{\nabla}y_n(x))$ is negligible in the limit $n \to \infty$. \end{proof} \begin{proof}[Proof of the energy barrier in Theorem~\ref{thm:Gammalimit3}] If a sequence $w_n\in \mathcal{S}_{\delta}$ satisfies $E_n(w_n) \leq C h_n^4$, then the proof of Proposition~\ref{prop:energy-estimates} shows $\frac{\varepsilon_n^3}{h_n} E_{\rm atom} (w_n) \leq C h_n^4$.
Hence, \[ \dist^2(\bar{\nabla} w_n(x),\SO(3)Z) \le C E_{\rm atom} (w_n) \le C h_n^5 \varepsilon_n^{-3} = C (\nu_n-1)^5 \varepsilon_n^2, \] where the right-hand side tends to $0$ by assumption. This implies that $w_n \in \mathcal{S}_{\delta/2}$ for $n$ large enough. \end{proof} \section*{Acknowledgments} This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project number 285722765, as well as the Engineering and Physical Sciences Research Council (EPSRC) under the grant EP/R043612/1. \bibliographystyle{alpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
1,084
How the richest game in football unlocks digital potential

One year ago almost to the day, Huddersfield Town's German defender Christopher Schindler walked from the halfway line at Wembley Stadium to take what would be the deciding penalty in a shoot-out against Championship rivals Reading. His kick was low, hard and to the goalkeeper's right, and it earned Huddersfield a fortune which is currently heading towards £300m, as they clinched promotion to the Premier League. Little wonder that the annual Championship play-off final is known as football's richest game.

Fast-forward one year. Fulham and Aston Villa meet at Wembley on Saturday for a place in the Premier League. Once again, millions of pounds of revenue are at stake, but also the opportunity to significantly grow their global fanbase thanks to exposure to hundreds of millions of fans around the world.

Based on Deloitte's calculations for last year's game, the winner of Saturday's match nets £170m across three seasons even if they are relegated at the first attempt. Survive the first year, as Huddersfield did, and Deloitte estimates the potential revenue at £290m. Welcome to the big-time.

Much of that money, which of course is driven by TV deals, will be spent on players in a high-stakes attempt to stay in the division (it has been compared to prune juice, famously and indelicately), but the revenue earned from digital tends to stay within the club, and it can be significant if clubs invest in the right strategy.

Here at Seven League, we have worked on digital strategy and implementation for more than a quarter of Premier League teams (as well as the Premier League itself) and as such have a keen awareness of the global exposure that is gained by a place in this division. Newer entrants to the Premier League may be smaller than the sleeping giants in the Championship by some measures, but membership of the world's richest league provides an opportunity to supercharge the growth of the club. Digital is a central part of how to do that.

Huddersfield ended their debut season in the Premier League with over 339,000 followers across their social media platforms. Before Christopher Schindler's penalty, they had 171,000. One year in the top-flight gave them more than a 98% increase. Leeds United, whose average attendance is over 31,000, have over 960,000 followers across social platforms but have not been exposed to Premier League audiences since 2004. Norwich City, with an average attendance of less than 26,000, have been in the Premier League much more recently and boast almost 1.5m social followers.

At this point, a reminder of what every good social media manager knows (and constantly tells their executives): follower numbers mean very little in isolation. Not all of Norwich City's 800,000 Facebook fans will be active users of Facebook, and even those who are may not see the club's content, depending on the Canaries' performance and ranking in the algorithm. Follower numbers are indicative but it's engagement that counts: that is how clubs earn money from digital.

Remember the recent mini-furore when the EFL said its clubs would vote on whether to scrap the rule that forces them to produce a programme? I saw at least one tweet claiming that while clubs make money from printed programmes, they do not from digital. Very few people still believe this, and for good reason: it's not true.
Even a club outside the top six can make anywhere between £1m and £3m per year by creating packages of digital content that can be sponsored, selling branded digital assets and utilising their fan database - but only if done properly. For the very biggest and most successful Premier League clubs that number rises exponentially. This figure does not even include the contribution digital will make to online ticket sales or e-commerce through the club store.

A cohesive digital strategy is no longer something clubs think about when they become established in the big league: it's something none can afford to ignore. That is especially true now that every club outside the top six has a reasonable shot at being the Best of the Rest (not my term but one that a Premier League club's executive said to me recently). Seventh place is a realistic ambition for 14 clubs. This season it was Burnley (average home attendance: just over 20,000; social followers: almost 900,000) and many clubs are eyeing that slot for the coming year. Whoever gets it will need to immediately re-invest in digital, in order to turn global exposure into engaged fans and revenue.

What might that look like? In the 2016-17 season, Seven League rebuilt Leicester City's primary fan-facing digital channels. In 2017, we ran the integration of 5 key vendor platforms allowing the club to offer an unrivalled digital fan experience. We have taken on content and commercialisation projects for Leicester, who can now use match attendance, digital, and physical club shop purchases to get a coherent view of an individual fan.

Newcastle United had the vision to overhaul their digital presence ahead of their return to the Premier League in 2017. Seven League helped the club to optimise their website and revenue streams, adding Accelerated Mobile Pages, Facebook Instant Articles, optimised social share buttons, social share tracking, open graph tags, structured mark-up and more.

In the top six, where the opportunity is even greater, we are about to enter our third season leading Tottenham Hotspur's digital strategy. In that time we have relaunched key digital products to coincide with the opening of the new stadium, created a social media and content strategy that delivers the most engaged audience in the top six (Facebook and Twitter, via CrowdTangle), and impacted both direct revenue (retail, ticketing) and indirect revenue (an evolved approach to digital with commercial partners).

Life in the Premier League is transformative for football clubs in so many ways. Either Fulham or Aston Villa is about to return to the big-time, and their brands will be immediately catapulted back on to smartphones and other screens the world over. The opportunity to grow, engage and monetise that audience could be one penalty kick away ...
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,224
{"url":"https:\/\/math.stackexchange.com\/questions\/396763\/solving-equations-involving-modulo-operator","text":"# Solving equations involving modulo operator\n\nIn computer programming languages we have an operator called % which expresses the remainder between two numbers. For example $123\\%100 = 23$.\n\nI have an equation evolving this operator, namely,\n\n$$\\frac{5}{3}(N\\%36) - \\frac{2}{3}(N\\%6) + 2(N\\%25) - (N\\%5) = (2N)\\%100$$\n\nIs there some easy way to solve this equation in $N$ or at least count the solutions within some range of numbers? (I would like some techniques from number theory rather than a brute force solution, which I have already calculated myself.)\n\nHere's a non-brute force proof that: $$N\\equiv 0,1,2,3,4\\textrm{ or }5\\ (\\operatorname{mod 30}),$$ i.e. $N=30a+b$ for some integers $a$ and $b$, with $b = 0,1,2,3,4,$ or $5$. Let $N\\%d$ denote the minimal nonnegative remainder after division of $N$ by $d$, such that $0\\leq N\\%d < N$, where $N$ and $d$ are nonnegative integers. Then $$N\\%36 = N\\%6 + 6k,$$ where $k = 1, 2, 3, 4,$ or $5$. Similarly: $$N\\%25 = N\\%5 + 5l$$ where $l = 1, 2, 3,$ or $4$, and $$(2N)\\%100 = (2N)\\%25 + 25m$$ where $m = 1, 2$, or $3$. Note furthermore that we have $$(2N)\\%25 = 2(N\\%25) - 25r,$$ where $r=0\\textrm{ or }1$.\n\nThe above formulae substitute into your equation as follows: $$\\frac{5}{3}(N\\%36) - \\frac{2}{3}(N\\%6) + 2(N\\%25) - (N\\%5) = (2N)\\%100$$ $$\\frac{5}{3}\\left[ N\\%6 + 6k \\right] - \\frac{2}{3}(N\\%6) + 2\\left[ N\\%5 + 5l \\right] - (N\\%5) = 2(N\\%25) - 25r + 25m$$ $$N\\%6 + 10k + N\\%5 + 10l = 2\\left[ N\\%5 + 5l \\right] - 25r + 25m$$ $$N\\%6 - N\\%5 = -10k - 25r + 25m$$ $$N\\%6 - N\\%5 = 5(5m -2k - 5r). \\tag{\\star}$$\n\nEq. $(\\star)$ implies that $N\\%6 - N\\%5$ is divisible by five. However, remember that $\\%$ was defined such that $0\\leq N\\%d < N$: thus $N\\%6 \\leq 5$ and $N\\%5 \\leq 4$. Thus, the only way for this difference to be divisible by $5$ is if $$N\\%6 - N\\%5 = 0\\textrm{ or }5.$$ Suppose that $N\\%6 - N\\%5 = 0$. Then $N\\%6 = N\\%5$, and since $0 \\leq N\\%5\\leq 4$, we have: $$N\\%6 = N\\%5 = 0,1,2,3,\\textrm{ or }4. \\tag{*}$$ You can convince yourself$^1$ that Eq. $(*)$ implies $$N\\%30 = 0,1,2,3,\\textrm{ or }4,$$ i.e. $$N\\equiv 0,1,2,3\\textrm{ or }4\\ (\\operatorname{mod 30}).$$ The other case, $N\\%6 - N\\%5 = 5$ implies $N\\%5=0$ and $N\\%6=5$, which implies $N\\%30=5$, i.e. $N \\equiv 5\\ (\\operatorname{mod 30})$. Q.E.D.\n\nThis type of logic can probably be extended to give the complete solution to your problem, but I don't want to flog it to death. One first next step would be to show that the case $N\\%6=5$, $N\\%5=0$ can't occur.\n\nThis should anyway give you an idea of how basic arithmetic arguments can be brought to bear on this kind of problem. Sometimes things come to this when all else fails in research-level elementary number theory, for example in the proofs that an odd perfect number would have to have more than $k$ distinct prime factors, for various values of k.\n\n$^1$One way to convince yourself is via the Chinese remainder theorem, but it isn't the only way.\n\n\u2022 While this works out nice enough for the $\\mod 30$ case, I'd hate to see the amount of work required to show that the only shifts by $30$ (mod $900$) are $0,30,360,390,720$ (i.e. $30$ times $0,1,12,13,24$). These numbers have little apparent cohesion. \u2013\u00a0awwalker May 25 '13 at 6:21\n\u2022 @AWalker Ah but I disagree! 
Obviously $0,360,720 \\equiv 0\\ (\\operatorname{mod }360)$ and $30,390 \\equiv 30\\ (\\operatorname{mod }360)$. \u2013\u00a0Douglas B. Staple May 25 '13 at 12:10\n\nWe note that $N$ solves your equation if and only if $900+N$ solves your equation. Then, some quick code (e.g. the following)\n\n Select[Range[900], 5 Mod[#, 36]\/3 - 2 Mod[#, 6]\/3 + 2 Mod[#, 25] - Mod[#, 5]\n== Mod[2 #, 100] &]\n\n\ngives the following solutions in $[1,900]$:\n\n$$\\{1, 2, 3, 4, 30, 31, 32, 33, 34, 360, 361, 362, 363, 364,$$ $$390, 391, 392, 393, 394, 720, 721, 722, 723, 724, 900\\}$$\n\nWe see that each solution occurs in a block of $4$, so we can think of our solutions as elements of $$\\{0,30,360,390,720\\},$$ shifted by an element of $$\\{i+900j : i \\in [0,4],\\, j \\in \\mathbb{Z}\\}.$$\n\n\u2022 This is essentially a brute force approach isn't it? Maybe there is a solution which does not require a computer. \u2013\u00a0Mark May 20 '13 at 1:32\n\u2022 @Mark I doubt so. Pure mathematics rarely deals with such \"mixed mod\" problems, because number theory typically vies the solutions to modular equations as lying in some ring of residues. Here, the best we can do is identify a minimal period ($900$) and, yes, brute force thereafter. \u2013\u00a0awwalker May 20 '13 at 2:39\n\u2022 This seems to be such a natural question to ask though. I take a a number and tell you some facts about its remainders. Then you try to deduce what my number was! \u2013\u00a0Mark May 21 '13 at 15:55\n\u2022 The unnaturality stems from the fact that (mathematically), there's little that says that $9 \\mod 5$ shouldn't just as equally be $4$ or $-1$, e.g. \u2013\u00a0awwalker May 21 '13 at 20:09\n\u2022 But you could just as well define the a%b operator to give you the unique remainder specified in the division algorithm: a = b*q + r \u2013\u00a0Mark May 22 '13 at 3:09","date":"2020-01-26 03:24:25","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9227522611618042, \"perplexity\": 233.31712645664962}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-05\/segments\/1579251684146.65\/warc\/CC-MAIN-20200126013015-20200126043015-00303.warc.gz\"}"}
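For anyone who wants to reproduce the brute-force check without Mathematica, here is an equivalent sketch in Swift (the language choice is arbitrary; multiplying both sides of the equation by 3 simply keeps the arithmetic in integers, and nothing is assumed beyond the equation above):

    // Brute-force check over one full period [1, 900].
    // Multiplying the original equation by 3 avoids the fractions 5/3 and 2/3:
    //   5*(n % 36) - 2*(n % 6) + 6*(n % 25) - 3*(n % 5) == 3*((2*n) % 100)
    let solutions = (1...900).filter { n in
        5 * (n % 36) - 2 * (n % 6) + 6 * (n % 25) - 3 * (n % 5) == 3 * ((2 * n) % 100)
    }
    print(solutions)
    // [1, 2, 3, 4, 30, 31, 32, 33, 34, 360, ..., 724, 900]  (the same 25 solutions)

By the periodicity noted above ($N$ solves the equation if and only if $N + 900$ does), this single pass over $[1, 900]$ classifies all integer solutions.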
Like with Brexit, they blame it on anyone but themselves, any group that is not them. They are tough but helpless, skilled but useless. And there are a lot of them, not necessarily all white middle-aged working-class males either.

To my mind, this sort of result has been on the cards ever since the public reaction to President Obama's attempts to create what looked to us over here like a very watered-down version of the NHS. To a large vocal section of Americans the very idea of contributing to someone else's healthcare was anathema, and the barely hidden ugly face of that nation was seen. As it is every time there's a gun-related massacre and a wringing of collective hands as to why. Why do they think? As it is each time a race-related incident flares up in the national news.

I don't know that much about Donald Trump either. He seems like the quintessential bully who's astute though not clever, arrogant because he can be, always ready to have the last word whether or not he knows what he's talking about. I have no idea if he'll be a half-decent President, though I suspect his business background will mean he appoints people who will do his job while he struts around in that stupid cap taking the credit and dishing the blame. There'll be strained lawsuits about something he might have said or done, distracting the media agenda from what he's actually supposed to be doing and whether he is doing it (i.e. being a President).

Oddly, his presidency could make the world both safer and less safe at the same time. Safer in that we're not likely to see US interference in far-off places, which incidentally has helped create the terrorism he now seeks to stem with his immigration policies. On the other hand, he will be the man who can launch nuclear weapons anywhere he chooses, and out of every public figure I'm aware of I can think of few less qualified to have that access.

The post-victory rhetoric of supporters is actually remarkably similar to that which followed Obama's first win eight years ago. "Yes we can" was the slogan as people looked forward to change. Now Trump will "make America great again", which means more great expectations. I suppose if he doesn't (and who can actually measure such a nebulous aspiration?) he will just blame someone else, even though for at least two years he can do as he pleases.

If you look at it from the point of view of those who voted for him, then they may well argue that we've tried every other sort of government, and recent political history is packed with the flushed reputations of leaders who promised change and great things, from Tony Blair to Barack Obama. Maybe the Trump supporters - please can we call them Trumptons? - believe a self-made man with no professional political experience can sort it all out. Battered by nearly a decade of austerity, depression, and a changing world leaving the working class behind, everyone else being heard above them, they see Trump as the bearer of the golden light of promise.

Hillary Clinton probably did better than anyone else would have opposing him, but it seems the writing was on the wall. I'm not saying this is right, but this appears to be why the result is as it was. Not just in the US but all around the world, people are looking for answers beyond the traditional cloaks of politics or faith. If the Fairy Godmother had been on the ballot she'd have won by a landslide.
{ "redpajama_set_name": "RedPajamaC4" }
8,595
[![Platform](http://img.shields.io/badge/platform-iOS-blue.svg?style=flat)](http://cocoapods.org/?q=YALSideMenu) [![License](http://img.shields.io/badge/license-MIT-green.svg?style=flat)](https://github.com/Yalantis/Side-Menu.iOS/blob/master/LICENSE)

This component implements a transition animation that crumbles a view controller into tiny pieces.

[![Yalantis](https://raw.githubusercontent.com/Yalantis/PullToRefresh/develop/PullToRefreshDemo/Resources/badge_dark.png)](https://yalantis.com/?utm_source=github)

![Preview](preview.gif)

Check this <a href="https://dribbble.com/shots/2109991-Star-Wars-App-concept">project on dribbble</a>.

Also, read how it was done in [our blog](https://yalantis.com/blog/uidynamics-uikit-or-opengl-3-types-of-ios-animations-for-the-star-wars/)

## Requirements

- iOS 8.0+
- Xcode 9
- Swift 3

## Installing with [CocoaPods](https://cocoapods.org)

```ruby
use_frameworks!
pod 'StarWars', '~> 2.0'
```

## Usage

First, import StarWars:

```swift
import StarWars
```

Then implement *UIViewControllerTransitioningDelegate*, return the animation from *animationController(forDismissed:)*, and assign the delegate to the *transitioningDelegate* of the view controller that you want to dismiss:

```swift
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    let destination = segue.destination
    destination.transitioningDelegate = self
}

func animationController(forDismissed dismissed: UIViewController) -> UIViewControllerAnimatedTransitioning? {
    return StarWarsGLAnimator()
}
```

There are also two things you can customize in the Star Wars animation: duration and sprite sizes. Let's see how you can do this:

```swift
let animator = StarWarsGLAnimator()
animator.duration = 2
animator.spriteWidth = 8
```

If you present the controller in code rather than via a storyboard segue, see the sketch at the end of this README.

Have fun! :)

#### Let us know!

We'd be really happy if you sent us links to your projects where you use our component. Just send an email to github@yalantis.com

And do let us know if you have any questions or suggestion regarding the animation.

P.S. We're going to publish more awesomeness wrapped in code and a tutorial on how to make UI for iOS (Android) better than better. Stay tuned!

## License

The MIT License (MIT)

Copyright © 2015 Yalantis

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
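#### Presenting without a segue (sketch)

If you present the controller in code rather than via a storyboard segue, the same delegate pattern applies. Below is a minimal sketch; `DetailViewController` is a placeholder for your own class, while `StarWarsGLAnimator`, `duration` and `spriteWidth` are the library API shown in the Usage section:

```swift
import UIKit
import StarWars

final class MenuViewController: UIViewController, UIViewControllerTransitioningDelegate {

    // Present any view controller so that it crumbles on dismissal.
    func showDetail() {
        let detail = DetailViewController() // placeholder: your own view controller
        detail.transitioningDelegate = self
        present(detail, animated: true, completion: nil)
    }

    // UIKit asks the transitioning delegate for the dismissal animator;
    // returning the Star Wars animator triggers the crumble effect.
    func animationController(forDismissed dismissed: UIViewController) -> UIViewControllerAnimatedTransitioning? {
        let animator = StarWarsGLAnimator()
        animator.duration = 2
        animator.spriteWidth = 8
        return animator
    }
}
```

Since `animationController(forPresented:presenting:source:)` is not implemented here, presentation keeps the default animation; only the dismissal is customized.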
{ "redpajama_set_name": "RedPajamaGithub" }
2,922
At-Tafsīr-ul-Kabīr: The Grand Exegesis

Translated by Murtaza Ahmad, UK

The Review of Religions continues the serialization of At-Tafsīr-ul-Kabīr: The Grand Exegesis in English, written by Hazrat Mirza Bashiruddin Mahmud Ahmad (ra), the second head of the worldwide Ahmadiyya Muslim Community. This is one of the most insightful and in-depth commentaries of the Holy Qur'an ever written, brought to English readers for the first time.

The name of this surah (chapter), namely, al-Fātiḥah, further has the distinction of being mentioned as a prophecy in past scriptures. Thus, in Revelations 10:2-3, it is written: 'He had a little book open in his hand. And he set his right foot on the sea and his left foot on the land, and cried with a loud voice, as when a lion roars. When he cried out, seven thunders uttered their voices.'

The name of this sūrah and the number of verses it contains are recorded as a prophecy. Owing to the translator's unfamiliarity with the essence of the prophecy, the Hebrew word פּתוּחַ patuaḥ[1] has been translated as a book 'having been opened',[2] whereas the Hebrew word Patuaḥ – i.e. Fātiḥah – was mentioned as the name to be designated for this sūrah. In this prophecy, the expression 'seven thunders uttered their voices' actually refers to the seven verses of Sūrah al-Fātiḥah. Christian authors admit that the prophecy referring to the second advent of the Messiah [Jesus Christ] is found in the aforementioned verses of Revelations, and indeed this is an established fact. It is clear from the words of the prophecy that [the true meanings of] this sūrah would remain sealed until the time of the advent of the Messiah. In other words, a detailed exegesis of this sūrah would be made manifest during the era of the Promised Messiah. Therefore, it is written in Revelations that the Prophet heard a voice from the heavens, saying: 'Seal up the things which the seven thunders uttered, and do not write them.'[3]

I have enumerated the names of Sūrah al-Fātiḥah in detail to make it clear that these names have been designated by the Holy Prophet (sa) himself. As proven from some of the narrations regarding the names of Sūrah al-Fātiḥah, the Holy Prophet (sa) gave these names on the basis of revelation from Allah the Almighty. Secondly, in listing these names, my objective is to show that they shed light on the vast and profound meanings of Sūrah al-Fātiḥah. In fact, these nine names allude to the ten subjects which are explained in Sūrah al-Fātiḥah.

It is Fātiḥatul-Kitāb [The Opening Chapter of the Book], meaning that it has been enjoined that this chapter should be placed at the very beginning of the Holy Qur'an. It serves as a key through which the meanings of the Qur'an are disclosed.

Sūrah al-Fātiḥah is also Sūrah al-Ḥamd [Chapter of Praise]. This sūrah sheds light on man's relationship with God and the purpose of man's creation. By doing so it brings to light that man has been created to attain the highest degree of progress, and that the relationship between God Almighty and man is one of Grace and Mercy.

This chapter is Aṣ-Ṣalāh [The Prayer], signifying that it teaches a perfect prayer, which stands unparalleled.

It is also Ummul-Kitāb [The Mother of the Book] in that it addresses mankind through all forms of knowledge and insights. Further, the status of this chapter is akin to being the mother to the Noble Qur'an.
This signifies that the heart-wrenching prayers contained in this chapter caused the revelation of Ummul-Qur'ān [The Mother of the Qur'an] to descend from the exalted throne of God. It is also known as Ummul-Qur'ān [The Mother of the Qur'an] because it provides man with all branches of knowledge that impact his moral and spiritual welfare.

It is As-Sab'ul-Mathānī [The Seven Oft-Repeated Verses], because even though the chapter comprises only seven verses, it fulfils man's every need and the answers to all questions pertaining to spirituality are shed light upon through these verses. Thus, the solution to any profound matter can be discovered by repeatedly pondering over them. Mathānī [oft-repeated] indicates that this chapter must be recited in each rak'ah [one unit] of the Prayer.

It is also Al-Qur'ānul-'Ażīm [The Great Qur'an], i.e., despite being called Ummul-Kitāb and Ummul-Qur'ān, it is also a part of the Holy Qur'an and not separate from it as some have erroneously considered it to be. Sūrah al-Fātiḥah is called the Great Qur'an in the same sense as when someone is asked to recite the Qur'an, the intention behind this is to recite a sūrah or a portion thereof and not to recite the entire Qur'an.

Sūrah al-Fātiḥah is also 'a cure' in the sense that it provides a remedy for all doubts and misgivings that pass through one's mind concerning one's faith.

It is a Ruqyah [the charm], meaning that, besides being a general invocation, its recitation also protects man from the attacks of Satan and his followers, and inspires the heart with such strength that the temptations and ploys of Satan are rendered harmless.

It is also a Kanz [treasure] in that streams of limitless knowledge flow from it. In Urdu, there is an idiom دریا کوزہ میں بند کرنا 'to squeeze a river into a bottle,' which perhaps cannot be applied to anything except Sūrah al-Fātiḥah. In fact, it can be said of this sūrah that an entire ocean has been squeezed into a bottle.

In short, my objective for listing these names has been that I should draw the attention of the readers to those vast meanings that have been elucidated by the Holy Prophet (sa) through them. Indeed, the names alone of a sūrah, be they nine or even a hundred, can serve no purpose if they are devoid of meaning. Undoubtedly, it is impossible for the Holy Prophet (sa) to have stated something futile; and therefore these names, for all those who ponder over them, possess a brilliant light and perfect guidance.

The Excellences of Sūrah al-Fātiḥah

Many of the excellences of this sūrah have been mentioned in the Hadith [narrations of the Holy Prophet (sa)], some of which I have indicated in the aforementioned names of the sūrah. I shall now discuss those excellences of this sūrah which have been elucidated in greater detail.

Imam Nisā'ī narrates on the authority of Ubayy bin Ka'b that the Holy Prophet (sa) said:

مَا أَنْزَلَ اللَّهُ عَزَّ وَجَلَّ فِي التَّوْرَاةِ وَلَا فِي الْإِنْجِيلِ مِثْلَ أُمِّ الْقُرْآنِ وَهِيَ السَّبْعُ الْمَثَانِي وَهِيَ مَقْسُومَةٌ بَيْنِي وَبَيْنَ عَبْدِي وَلِعَبْدِي مَا سَأَلَ [4]

'Neither in the Torah nor in the Gospel has Allah the Almighty revealed any chapter like that of Ummul-Qur'ān (Sūrah al-Fātiḥah), also referred to as The Seven Oft-Repeated Verses. Allah the Almighty has informed me that this chapter has been divided equally between Me and My servant. Through this chapter any prayer of My servants will certainly be granted acceptance.'
This particular quality of this chapter is of great significance because it contains a practical remedy to benefit one, both in this world and in the hereafter. Any prayer offered using this remedy will find acceptance. It is evident that this does not mean that every prayer offered through Sūrah al-Fātiḥah will certainly be accepted. Rather, it signifies that one who adopts the means of prayer found in this chapter will indeed find their prayer accepted. Now the question arises as to what are those means? It is clear from the composition of the chapter that these means are:

Bismillāh [In the name of Allah]
al-Ḥamdu Lillāh [All Praise belongs to Allah]
ar-Raḥmān [the Most Gracious]
ar-Raḥīm [the Ever Merciful]
Māliki yawmid-dīn [Master of the Day of Judgement]
Iyyāka na'budu [Thee alone do we worship]
Iyyāka nasta'īn [Thee alone do we implore for help]

Just as this sūrah consists of seven verses, there are also seven principles for the acceptance of prayer contained within this chapter.

[1] The Hebrew word פּתוּחַ patuaḥ is a passive participle referring to something that is opened or open.
[2] The Koine Greek in the book of Revelation reads ἀνοίγω anoígō, which is a perfect passive participle meaning 'having been opened'.
[3] The Bible, Revelations 10:4. [Publishers]
[4] Nisā'ī, Kitābul-Iftitāḥ, Faḍlu Fātiḥatil-Kitāb.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,955