# Write a Power Shell based monitor with overrideable variables?

Source: https://community.squaredup.com/t/write-a-power-shell-based-monitor-with-overrideable-variables/1099 (2022-06-29)

How do I write a script with variables that I would use to calculate thresholds, e.g.

Threshold > $value

and then be able to use overrides to change $value? Thank you.

To do this, I've put the threshold into the script argument (which is included in overrides) and then have the script return a general good/bad response instead of having the monitor health do a numeric calculation. I would return the argument and the return value to the property bag so they both can be included in the alert.
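The pattern described in the reply is language-agnostic: pass the threshold in as an overrideable argument, do the comparison inside the script, and return a generic good/bad state together with the inputs so both can appear in the alert. A minimal sketch of that pattern in Python (the function name, the dictionary keys, and the "bad when the observed value exceeds the threshold" convention are all illustrative, not SquaredUp or SCOM API):

```python
def check_threshold(observed_value, threshold):
    """Return a property-bag-style dict with a generic good/bad state.

    `threshold` plays the role of the overrideable script argument;
    the caller's monitor only inspects the generic "State" field, so
    no numeric calculation happens outside the script.
    """
    state = "bad" if observed_value > threshold else "good"
    return {
        "State": state,
        "ObservedValue": observed_value,
        "Threshold": threshold,
    }
```

An override then simply changes the argument passed in, e.g. `check_threshold(queue_depth, threshold=500)`, without touching the script body.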
# How to Model the Wheels of a Slot Machine

Source: https://ssodelta.wordpress.com/2014/11/20/how-to-model-the-wheels-of-a-slot-machine/ (posted 2014-11-20)

Recently, in my IT class we had to create our own simple games as an exercise in programming. What I chose to make was a casino with a lot of slot machines in Unity. It was mainly an exercise in using Unity, as I have little experience working with it. To demonstrate what I'm talking about, I made a small video showcasing the slot machines.

What I want to talk about in this post is how to make the wheels of the slot machine stop at the right entries, so that we can easily program the slot machines to do what we want. This means we can have a function:

rotateAndStopAtAngles(ANGLE_1, ANGLE_2, ANGLE_3);

Let's assume that the wheels are already in motion and are travelling with an angular velocity of $\omega$. Let's also assume that the wheels are affected by a constant deceleration when they're stopping. This means that their time-dependent angular velocity can be described as:

$\omega(t) = \omega_0 - \alpha \cdot t$

where $\omega_0$ is the initial velocity of the wheel and $\alpha$ is the deceleration. Obviously we want the wheels to stop spinning, so we want to find the time $t_a$ which satisfies:

$\omega(t_a) = 0$

which gives the following expression for $t_a$:

$t_a = \dfrac{\omega_0}{\alpha}$

Now we want to find the angle $\theta_{start}$ at which we start the deceleration of the wheel. Let $\theta_{goal}$ be the angle we want the wheel to stop on. The total angle breadth travelled, $\Delta \theta$, when decelerating is:

$\Delta \theta = \displaystyle \int_{0}^{t_a} \! \omega(t) ~ \mathrm{d}t = \dfrac{\omega_0^2}{2 \cdot \alpha}$

We want $\theta_{start}$ to satisfy:

$\theta_{start} + \Delta \theta = \theta_{goal}$

Or written out in full:

$\theta_{start} + \displaystyle \int_{0}^{\frac{\omega_0}{\alpha}} \! \left( \omega_0 - \alpha \cdot t \right) \mathrm{d}t = \theta_{goal}$

which gives the following expression for $\theta_{start}$:

$\theta_{start} = \theta_{goal} - \dfrac{\omega_0^2}{2 \cdot \alpha} = \dfrac{2 \cdot \alpha \cdot \theta_{goal} - \omega_0^2}{2 \cdot \alpha}$

And as you can see in the video, it works! This is what I love about math: it's such a powerful language for describing the world. Due to technical issues (*cough cough* floats), and because computers work in discrete time steps rather than continuously, the wheel doesn't stop at exactly the right angle, but this is easily mitigated, as you can see in the video, by sliding in at low speed to get it to stop just right.
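Putting the pieces together: the wheel stops after $t_a = \omega_0 / \alpha$, the braking sweep is $\Delta\theta = \omega_0^2/(2\alpha)$, and the constraint $\theta_{start} + \Delta\theta = \theta_{goal}$ stated above gives $\theta_{start} = \theta_{goal} - \omega_0^2/(2\alpha)$. A small Python sketch (not the post's Unity/C# code; all names and values are illustrative) that checks the closed form against a discrete, game-loop-style simulation:

```python
def start_angle(theta_goal, omega0, alpha):
    """Angle at which to begin decelerating so the wheel stops at theta_goal.

    The braking sweep is the integral of omega(t) = omega0 - alpha * t
    over [0, omega0 / alpha], i.e. omega0**2 / (2 * alpha).
    """
    delta_theta = omega0**2 / (2.0 * alpha)
    return theta_goal - delta_theta

def simulate_stop(theta_start, omega0, alpha, dt=1e-4):
    """Euler-integrate the braking wheel, as a discrete game loop would."""
    theta, omega = theta_start, omega0
    while omega > 0.0:
        theta += omega * dt   # advance the wheel by the current velocity
        omega -= alpha * dt   # apply the constant deceleration
    return theta

# Wheel spinning at 360 deg/s, braking at 90 deg/s^2, target angle 720 deg.
final = simulate_stop(start_angle(720.0, 360.0, 90.0), 360.0, 90.0)
```

As the post notes, the discrete simulation lands slightly off the exact target (here by roughly $\omega_0 \cdot dt / 2$), which is why the "slide in at low speed" correction is still useful in practice.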
\subsection{Top 25 Actors According to Each Estimator, Ranked by Multiplicative Effect $e^{\hat\beta_j}$} \paragraph{Naive, Full} \input{tables/actors_full_naive_expbeta_25.txt} \paragraph{WB Deconfounder, Full} \input{tables/actors_full_deconfwb_expbeta_25.txt} \paragraph{Deconfounder, Full} \input{tables/actors_full_deconf_expbeta_25.txt} \paragraph{Budget Adjustment, Full} \input{tables/actors_full_budget_expbeta_25.txt} \paragraph{Naive, One at a Time} \input{tables/actors_subset_naive_expbeta_25.txt} \paragraph{WB Deconfounder, One at a Time} \input{tables/actors_subset_deconfwb_expbeta_25.txt} \paragraph{Deconfounder, One at a Time} \input{tables/actors_subset_deconf_expbeta_25.txt} \subsection{Top 25 Actors According to Each Estimator, Ranked by Appearance-weighted Log-scale Coefficients $n_j \hat\beta_j$} \paragraph{Naive, Full} \input{tables/actors_full_naive_nbeta_25.txt} \paragraph{WB Deconfounder, Full} \input{tables/actors_full_deconfwb_nbeta_25.txt} \paragraph{Deconfounder, Full} \input{tables/actors_full_deconf_nbeta_25.txt} \paragraph{Budget Adjustment, Full} \input{tables/actors_full_budget_nbeta_25.txt} \paragraph{Naive, One at a Time} \input{tables/actors_subset_naive_nbeta_25.txt} \paragraph{WB Deconfounder, One at a Time} \input{tables/actors_subset_deconfwb_nbeta_25.txt} \paragraph{Deconfounder, One at a Time} \input{tables/actors_subset_deconf_nbeta_25.txt} \subsection{Minor Asymptotic Results} \label{a:minor_asymptotics} In this section, we collect a number of minor results that will be useful in bias derivations. We begin by reviewing properties of probabilistic principal component analysis, then present lemmas relating to the asymptotic behavior of the deconfounder and na\"ive estimators. \subsubsection{Properties of Probabilistic Principal Component Analysis} \label{a:ppca} For convenience, we review properties of probabilistic principal component analysis used in the remainder of this section. 
The generative model is \begin{align*} \bm{Z} &\sim \mathcal{N}(\bm{0}, \bm{I}) \\ \bm{A} &\sim \mathcal{N}(\bm{Z} \bm\theta, \sigma^2 \bm{I}). \end{align*} For compactness, we use $\bm{Z} \sim \mathcal{N}(\bm{0}, \bm{I})$ to denote sampling each row $\bm{Z}_i$ independently from a normal distribution centered on the $i$-th row of the mean matrix, with the given covariance matrix, then collecting the samples in $\bm{Z}$. \citet{TipBis99} show that this data-generating process implies \begin{align} (\bm{Z} | \bm{A}) \sim \mathcal{N}\left( \bm{A} \bm\theta^\top \left( \bm\theta \bm\theta^\top + \sigma^2 \bm{I} \right)^{-1},\ \sigma^2 \left( \bm\theta \bm\theta^\top + \sigma^2 \bm{I} \right)^{-1} \right). \label{eq:post_z_given_a} \end{align} We now examine asymptotic relationships between the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$ and the eigendecomposition $\bm\theta^\top \bm\theta = \bm{Q} \bm\Lambda \bm{Q}^\top = \bm{Q}_{1:k} \bm\Lambda_{1:k} \bm{Q}_{1:k}^\top$; note that the trailing $m-k$ eigenvalues are zero. (Subscripts of the form $\bX_{i:j}$ generally indicate column subsets of matrix $\bX$ from column $i$ to column $j$, except when indexing diagonal matrices, where they indicate the corresponding diagonal elements.) \begin{align*} \plim_{n \to \infty} \frac{1}{\sqrt{n}} \bm{D}_{1:k} &= \left(\bm\Lambda_{1:k} + \sigma^2 \bm{I} \right)^{\frac{1}{2}} \\ \plim_{n \to \infty} \frac{1}{\sqrt{n}} \bm{D}_{(k+1):m} &= \sigma \bm{I} \\ \plim_{n \to \infty} \bm{V}_{1:k} &= \bm{Q}_{1:k} \\ \plim_{n \to \infty} \bm{V}_{(k+1):m} \bm{V}_{(k+1):m}^\top &= \bm{Q}_{(k+1):m} \bm{Q}_{(k+1):m}^\top, \end{align*} where the last equality follows from $\plim_{n \to \infty} \bm{V} \bm{V}^\top = \bm{Q} \bm{Q}^\top = \bm{I}$. \subsubsection{Consistency and Inconsistency Results for the Deconfounder} \label{a:consistency_deconfounder} In this section, we present minor results relating to the consistency of the deconfounder.
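As an aside, the probabilistic PCA limits reviewed in Supplement~\ref{a:ppca} are easy to check numerically. The sketch below is illustrative only and is not part of the formal argument; the choices of $\bm\theta$, $\sigma$, and the sample size are arbitrary. It verifies that $\bm{D}_{1:k}/\sqrt{n}$ approaches $(\bm\Lambda_{1:k} + \sigma^2 \bm{I})^{1/2}$ and that the trailing singular values approach $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 20000, 0.5
theta = np.array([[2.0, 1.0, 2.0]])            # k = 1 confounder, m = 3 causes
Z = rng.standard_normal((n, 1))                 # latent confounder
A = Z @ theta + sigma * rng.standard_normal((n, 3))

D = np.linalg.svd(A, compute_uv=False)          # singular values, descending
lam1 = (theta @ theta.T).item()                 # leading eigenvalue of theta theta^T

lead = D[0] / np.sqrt(n)                        # should be near sqrt(lam1 + sigma^2)
trail = D[1:] / np.sqrt(n)                      # should be near sigma
```

With these values, `lead` is close to $\sqrt{9.25} \approx 3.04$ and both trailing entries are close to $0.5$, up to $O(1/\sqrt{n})$ sampling noise.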
Consider $n$ observations drawn from a data-generating process with $k$ unobserved confounders, $\bm{Z} \sim \mathcal{N}(\bm{0}, \bm{I})$, and $m \ge k$ observed treatments, $\bm{A} \sim \mathcal{N}(\bm{Z} \bm\theta, \sigma^2 \bm{I})$. \begin{mdframed} \begin{defn}{(Pinpointedness of the substitute confounder.)} \label{def:consistency_substitute_confounder} A substitute confounder, $\hat\bm{Z}$, is said to be pinpointed if its posterior distribution, $f(\hat{\bm{z}} | \bm{A})$, collapses to a Dirac delta, $\delta(g(\bm{Z}))$, where $g(\bm{Z})$ is a bijective transformation of $\bm{Z}$. \end{defn} \end{mdframed} Specifically, pinpointedness \citep[previously referred to as ``consistency of the substitute confounder'']{WanBle19} does not require convergence of $\hat\bm{Z}$ to $\bm{Z}$, as consistency would; for example, convergence to a rotation or rescaling of $\bm{Z}$ suffices. Below, we show that pinpointing requires an infinite number of stochastic causes. \begin{mdframed} \begin{lem} \label{l:pinpointing} In the linear-linear setting, strong infinite confounding is necessary for $\hat\bm{Z}$ to asymptotically pinpoint $\bm{Z}$ as the number of causes goes to infinity. \end{lem} \end{mdframed} \textit{Proof of Lemma~\ref{l:pinpointing}.} \citet{TipBis99} show that under the probabilistic principal component analysis model (see Supplement~\ref{a:ppca}), the posterior of the confounder, $f(\bm{z} | \bm{A})$, follows \eqref{eq:post_z_given_a}. The substitute confounder is a summary statistic, such as the mode, of this posterior. We examine the best-case scenario in which $\bm\theta$ and $\sigma^2$ are known. In this setting, the posterior variance of the $k'$-th confounder is $\mathrm{Var}\left( Z_{i,k'} | \bm{A}_i, \bm\theta, \sigma^2 \right) = \sigma^2 \left( \bm\theta_{k'} \bm\theta_{k'}^\top + \sigma^2 \right)^{-1}$.
Because we assume $\sigma^2>0$, if the variance goes to zero, which pinpointing implies, then each $\bm\theta_{k'} \bm\theta_{k'}^\top$, the $k'$-th diagonal element of $\bm\theta \bm\theta^\top$, must go to infinity. (We rule out the case $\sigma^2=0$ by the assumption that the causes are a nondeterministic function of the latent confounders.) Thus, pinpointing of $\bm{Z}_i$ implies strong infinite confounding. \qed It is easy to see that this argument generalizes to all factor models with continuous density, i.e., models with continuous $f(\bm{z}_i)$ and $f(\bm{a}_i | \bm{z}_i) = \prod_{j=1}^m f(\bm{a}_{i,j}| \bm{z}_i)$. Because $f(\bm{z}|\bm{a}) \propto f(\bm{a}|\bm{z}) f(\bm{z})$ maintains nonzero variance for all finite $m$ when $f(\bm{a}_i | \bm{z}_i)$ is nondegenerate, pinpointing requires infinitely many causes. We return to this point in Theorem~\ref{thm:deconfounder_inconsistency_nonlinear}. Next, we present results relating to the inconsistency of various components of the deconfounder in the linear-linear setting. To review, the deconfounder proceeds as follows. It takes the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$, then extracts the first $k$ components to form $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$. It then computes $\hat\mathbb{E}[\bm{A} | \hat\bm{Z}] = \hat\bm{Z} \hat\bm\theta$, where $\hat\bm\theta \equiv \frac{1}{\sqrt{n}} \bm{D}_{1:k} \bm{V}_{1:k}^\top$. \begin{mdframed} \begin{lem}{(Inconsistency of $\hat\bm\theta$.)} \label{l:theta_hat} The asymptotic behavior of $\hat\bm\theta$ is governed by \begin{align*} \plim_{n \to \infty} \hat\bm\theta &= \left( \bm\Lambda_{1:k} + \sigma^2 \bm{I} \right)^{\frac{1}{2}} \bm\Lambda_{1:k}^{-\frac{1}{2}} \bm{R}^\top \bm\theta, \end{align*} where $\bm{R}$ and $\bm\Lambda_{1:k}$ are given by the eigendecomposition $\bm\theta \bm\theta^\top = \bm{R} \bm\Lambda_{1:k} \bm{R}^\top$.
\end{lem} \end{mdframed} \begin{proof} We begin with the eigendecomposition $\bm\theta^\top \bm\theta = \bm{Q} \bm\Lambda \bm{Q}^\top = \bm{Q}_{1:k} \bm\Lambda_{1:k} \bm{Q}_{1:k}^\top$, where the last step follows from the fact that the trailing $m-k$ diagonal entries of $\bm\Lambda$ are zero. We now turn to $\hat\bm\theta = \frac{1}{\sqrt{n}} \bm{D}_{1:k} \bm{V}_{1:k}^\top$. By the properties of probabilistic PCA, $\plim_{n \to \infty} \frac{1}{\sqrt{n}} \bm{D}_{1:k} = \left( \bm\Lambda_{1:k} + \sigma^2 \bm{I} \right)^{\frac{1}{2}}$ and $\plim_{n \to \infty} \bm{V}_{1:k} = \bm{Q}_{1:k}$ \citep{TipBis99}. The lemma then follows from the singular value decomposition $\bm\theta = \bm{R} \bm\Lambda^{\frac{1}{2}} \bm{Q}^\top = \bm{R} \bm\Lambda_{1:k}^{\frac{1}{2}} \bm{Q}_{1:k}^\top$ by solving for $\bm{Q}_{1:k}$ and substituting. \end{proof} The white-noised and subset deconfounders implicitly rely on $\hat\bm\theta^\top \hat\bm\theta$ to adjust for dependence between causes. In Lemma~\ref{l:residual_dependence}, we show that a consequence of Lemma~\ref{l:theta_hat} (inconsistency of $\hat\bm\theta$) is that $\widehat\mathrm{Cov}(\bm{A}) \equiv \hat\bm\theta^\top \hat\bm\theta$ is a poor estimator of the covariance of $\bm{A}$; the dependence will be incorrectly modeled even as $n$ goes to infinity. \begin{mdframed} \begin{lem}{(Mismodeled dependence structure in $\bm{A}$.)} \label{l:residual_dependence} When $\widehat\mathrm{Cov}(\bm{A}) = \hat\bm\theta^\top \hat\bm\theta$ is used as an estimator for $\mathrm{Cov}(\bm{A})$, the unmodeled residual dependence among causes is asymptotically equal to $\sigma^2 \left[ \bm{I} - \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta \right]$. \end{lem} \end{mdframed} When the number of causes is finite, this residual covariance is nonspherical. In contrast, the true conditional dependence is $\mathrm{Cov}(\bm{A} | \bm{Z}) = \sigma^2 \bm{I}$.
\noindent \textit{Proof.} \begin{align*} \plim_{n \to \infty} \mathrm{Cov}(\bm{A}) - \widehat\mathrm{Cov}(\bm{A}) &= \plim_{n \to \infty} \bm\theta^\top \bm\theta + \sigma^2 \bm{I} - \hat\bm\theta^\top \hat\bm\theta \\ &= \bm\theta^\top \bm\theta + \sigma^2 \bm{I} - \bm\theta^\top \bm{R} \left( \bm\Lambda_{1:k} + \sigma^2 \bm{I} \right) \bm\Lambda_{1:k}^{-1} \bm{R}^\top \bm\theta \nonumber \\ &= \sigma^2 \bm{I} - \sigma^2 \bm\theta^\top \bm{R} \bm\Lambda_{1:k}^{-1} \bm{R}^\top \bm\theta \nonumber \\ &= \sigma^2 \left[ \bm{I} - \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta \right] & \qed \\ \intertext{Under the strong infinite confounding assumption,} \lim_{m \to \infty} \plim_{n \to \infty} \mathrm{Cov}(\bm{A}) - \widehat\mathrm{Cov}(\bm{A}) &= \sigma^2 \bm{I} \end{align*} \paragraph{An Example When Strong Infinite Confounding Fails.} \label{a:weakinfconf} Here we consider an example in which the number of treatments increases, but strong infinite confounding does not hold. This builds on an idea found in \citet{d2019comment}. Suppose $Z_{i} \sim \mathcal{N}(0, \sigma^2)$. We will suppose that $A_{i,j} = \frac{1}{j^2} Z_{i} + \epsilon_{i,j}$, where $\epsilon_{i,j} \sim \mathcal{N}(0, \sigma^2)$. In this example, $\lim_{m \rightarrow \infty} \bm\theta \bm\theta^\top = \sum_{j = 1}^{\infty} \frac{1}{j^4} = \frac{\pi^4}{90}$. Therefore, strong infinite confounding fails. In general, in the one-dimensional case, the infinite series must diverge for strong infinite confounding to hold. We now provide a minor result that helps characterize the behavior of the na\"ive estimator, \eqref{e:naive}, when applied to sequences of data-generating processes satisfying \textit{strong infinite confounding} (Definition~\ref{def:infinite_confounding}). In Supplement~\ref{a:bias_naive} (proof of Proposition~\ref{prop:bias_naive}), we will show the conditions for asymptotic unbiasedness of the na\"ive estimator as $n$ and $m$ go to infinity.
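As a quick numerical aside on the example above (illustrative only, not part of the formal argument), the partial sums of the series $\sum_j j^{-4}$ do converge to $\pi^4/90 \approx 1.0823$, confirming that the confounding strength stays bounded:

```python
import math

# Partial sum of sum_{j=1}^infinity 1/j^4; the tail beyond j = 1e5
# is below 1/(3 * 1e15), so this matches the closed form very closely.
partial = sum(1.0 / j**4 for j in range(1, 100_001))
limit = math.pi**4 / 90
```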
Lemma~\ref{l:infinite_confounding} states that under the assumption of strong infinite confounding, this condition is asymptotically satisfied as $m$ grows. \subsubsection{Behavior of the Na\"ive Estimator} \begin{mdframed} \begin{lem}{(Na\"ive convergence under strong infinite confounding.)} \label{l:infinite_confounding} A sequence of strongly infinitely confounded data-generating processes satisfies \begin{align*} \lim_{m \to \infty} \bm\theta \left( \bm\theta^\top \bm\theta + \sigma^2 \bm{I} \right)^{-1} \bm\theta^\top &= \bm{I} \end{align*} \end{lem} \end{mdframed} \noindent \textit{Proof.} By the Woodbury matrix identity, \begin{align*} \left(\frac{1}{\sigma^4} \bm\theta \bm\theta^\top + \frac{1}{\sigma^2} \bm{I}\right)^{-1} &= \sigma^2 \bm{I} - \sigma^4 \bm\theta \left( \sigma^4 \bm{I} + \sigma^2 \bm\theta^\top \bm\theta \right)^{-1} \bm\theta^\top \\ \left(\frac{1}{\sigma^2} \bm\theta \bm\theta^\top + \bm{I}\right)^{-1} \sigma^2 &= \sigma^2 \bm{I} - \sigma^2 \bm\theta \left( \sigma^2 \bm{I} + \bm\theta^\top \bm\theta \right)^{-1} \bm\theta^\top \\ \left(\frac{1}{\sigma^2} \bm\theta \bm\theta^\top + \bm{I}\right)^{-1} &= \bm{I} - \bm\theta \left( \sigma^2 \bm{I} + \bm\theta^\top \bm\theta \right)^{-1} \bm\theta^\top \end{align*} for any $m$. Because both the entries and number of columns of $\bm\theta$ are finite, the strong infinite confounding condition requires that the diagonal elements of $\bm\theta \bm\theta^\top$ also tend to infinity as $m$ grows large. Therefore $ \lim_{m \to \infty} \left( \frac{1}{\sigma^2} \bm\theta \bm\theta^\top + \bm{I} \right)^{-1} = \bm{0}$, and $ \lim_{m \to \infty} \bm\theta \left( \bm\theta^\top \bm\theta + \sigma^2 \bm{I} \right)^{-1} \bm\theta^\top = \bm{I}$.
\qed Supplement~\ref{a:bias_subset} (proof of Proposition~\ref{prop:bias_subset}) shows that unbiasedness of the subset deconfounder estimator requires $\lim_{m \to \infty} \left( \bm\theta \bm\theta^\top \right)^{-1} = \bm{0}$, which is trivially satisfied for strongly infinitely confounded sequences of data-generating processes. \subsection{Bias of the Na\"ive Estimator} \label{a:bias_naive} For convenience, we reiterate the data-generating process and the na\"ive estimation procedure. We will suppose, without loss of generality, that the $m$ causes, $\bm{A}$, are divided into $m_F$ focal causes of interest, the column subset $\bm{A}_F$, and $m_N$ nonfocal causes, $\bm{A}_N$. As before, we consider $n$ observations drawn i.i.d. as follows. \begin{align*} \underset{n \times k}{\bm{Z}} &\sim \mathcal{N}(\bm{0}, \bm{I}) \\ \underset{n \times m_F}{\bm\nu_F} &\sim \mathcal{N}(\bm{0}, \sigma^2 \bm{I}) \\ \underset{n \times m_N}{\bm\nu_N} &\sim \mathcal{N}(\bm{0}, \sigma^2 \bm{I}) \\ \underset{n \times m_F}{\bm{A}_F} &= \bm{Z} \bm\theta_F + \bm\nu_F \\ \underset{n \times m_N}{\bm{A}_N} &= \bm{Z} \bm\theta_N + \bm\nu_N \\ \underset{n \times 1}{\bm\epsilon} &\sim \mathcal{N}(\bm{0}, \omega^2) \\ \underset{n \times 1}{\bm{Y}} &= \bm{A}_F \bm\beta_F + \bm{A}_N \bm\beta_N + \bm{Z} \bm\gamma + \bm\epsilon \end{align*} The na\"ive estimator estimates the treatment effects by conducting a regression of the outcome on both focal and nonfocal causes, producing estimates for both focal and nonfocal effects, then discarding the latter. The full regression coefficients are \begin{align} \label{e:naive} \left[ \begin{array}{l} \hat\bm\beta_F^\mathrm{na\ddot{\i}ve} \\ \hat\bm\beta_N^\mathrm{na\ddot{\i}ve} \end{array} \right] &\equiv \left( \left[ \bm{A}_F, \bm{A}_N \right]^\top \left[ \bm{A}_F, \bm{A}_N \right] \right)^{-1} \left[ \bm{A}_F, \bm{A}_N \right]^\top \bm{Y}.
\end{align} We will prove Proposition~\ref{prop:bias_naive_focal}, a generalization of Proposition~\ref{prop:bias_naive} that distinguishes between focal and nonfocal causes. The proof of Proposition~\ref{prop:bias_naive} is by reduction to the special case in which all causes are of interest, so that $\bm{A}_F = \bm{A}$ and $\bm{A}_N$ is empty. \begin{mdframed} \begin{prop} \label{prop:bias_naive_focal} (Asymptotic Bias of the Na\"ive Regression under Strong Infinite Confounding.) Suppose that the $m$ causes, $\bm{A}$, are divided into focal causes of interest, the column subset $\bm{A}_F$, and nonfocal causes, $\bm{A}_N$, without loss of generality. The bias of the na\"ive estimator, \eqref{e:naive}, for the corresponding focal effects, $\bm\beta_F$, is given by \begin{align*} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{na\ddot{\i}ve} - \bm\beta_F &= \left[ \bm\theta_F^\top \bm\theta_F + \sigma^2 \bm{I} - \bm\theta_F^\top \bm\Omega \bm\theta_F \right]^{-1} \left[ \bm\theta_F^\top - \bm\theta_F^\top \bm\Omega \right] \bm\gamma, \end{align*} where $\bm\Omega = \bm\theta_N \left( \bm\theta_N^\top \bm\theta_N + \sigma^2 \bm{I} \right)^{-1} \bm\theta_N^\top$ and $\bm\theta_F$ and $\bm\theta_N$ are the corresponding column subsets of $\bm\theta$. Under the assumptions of the linear-linear model, \begin{align*} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{na\ddot{\i}ve} - \bm\beta_F &= \bm{0}. \end{align*} \end{prop} \end{mdframed} \noindent\textit{Proof of Proposition~\ref{prop:bias_naive_focal}.} By the Frisch-Waugh-Lovell theorem, $\hat\bm\beta_F$ can be re-expressed in terms of the portion of $\bm{A}_F$ not explained by $\bm{A}_N$. Without loss of generality, we suppose that the set of focal treatments, $\bm{A}_F$, is fixed at the outset and is finite.
We denote the residualized focal treatments as $\tilde\bm{A}_F^\mathrm{na\ddot{\i}ve} = \bm{A}_F - \hat\bm{A}_F^\mathrm{na\ddot{\i}ve}$, where \begin{align*} \hat\bm{A}_F^\mathrm{na\ddot{\i}ve} &= \hat\mathbb{E}[ \bm{A}_F | \bm{A}_N ] = \bm{A}_N \hat\bm\zeta \text{, } \\ \hat\bm\zeta &= (\bm{A}_N^\top \bm{A}_N)^{-1} \bm{A}_N^\top \bm{A}_F \text{, and} \\ \plim_{n \to \infty} \hat\bm\zeta &\equiv \bm\zeta = \left( \bm\theta_N^\top \bm\theta_N + \sigma^2 \bm{I} \right)^{-1} \bm\theta_N^\top \bm\theta_F . \end{align*} The na\"ive estimator is then rewritten as follows: \begin{align*} \hat{\bm\beta}_F^\mathrm{na\ddot{\i}ve} &= \left( \frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \tilde\bm{A}_F^\mathrm{na\ddot{\i}ve} \right)^{-1} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \bm{Y} \end{align*} We now characterize the asymptotic bias of this estimator by examining the behavior of $\frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \tilde\bm{A}_F^\mathrm{na\ddot{\i}ve}$ and $\frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \bm{Y}$ in turn.
Beginning with the residual variance of the focal causes, \begin{align} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \tilde\bm{A}_F^\mathrm{na\ddot{\i}ve} &= \frac{1}{n} \left( \bm{A}_F - \hat\bm{A}_F^\mathrm{na\ddot{\i}ve} \right)^\top \left( \bm{A}_F - \hat\bm{A}_F^\mathrm{na\ddot{\i}ve} \right) \nonumber \\ &= \frac{1}{n} \left( \bm{A}_F^\top \bm{A}_F + \hat\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \hat\bm{A}_F^\mathrm{na\ddot{\i}ve} - \bm{A}_F^\top \hat\bm{A}_F^\mathrm{na\ddot{\i}ve} - \hat\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \bm{A}_F \right) \nonumber \\ &= \frac{1}{n} (\bm{Z} \bm\theta_F + \bm\nu_F)^\top (\bm{Z} \bm\theta_F + \bm\nu_F) \nonumber\\ &\qquad + \frac{1}{n} \hat\bm\zeta^\top (\bm{Z} \bm\theta_N + \bm\nu_N)^\top (\bm{Z} \bm\theta_N + \bm\nu_N) \hat\bm\zeta \nonumber \\ &\qquad - \frac{1}{n} (\bm{Z} \bm\theta_F + \bm\nu_F)^\top (\bm{Z} \bm\theta_N + \bm\nu_N) \hat\bm\zeta \nonumber \\ &\qquad - \frac{1}{n} \hat\bm\zeta^\top (\bm{Z} \bm\theta_N + \bm\nu_N)^\top (\bm{Z} \bm\theta_F + \bm\nu_F) \nonumber \\ \plim_{n \to \infty} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \tilde\bm{A}_F^\mathrm{na\ddot{\i}ve} &= \plim_{n \to \infty} \bm\theta_F^\top \bm\theta_F + \sigma^2 \bm{I} + \hat\bm\zeta^\top (\bm\theta_N^\top \bm\theta_N + \sigma^2 \bm{I}) \hat\bm\zeta - \bm\theta_F^\top \bm\theta_N \hat\bm\zeta - \hat\bm\zeta^\top \bm\theta_N^\top \bm\theta_F \nonumber \\ &= \bm\theta_F^\top \bm\theta_F + \sigma^2 \bm{I} - \bm\theta_F^\top \bm\theta_N \left( \bm\theta_N^\top \bm\theta_N + \sigma^2 \bm{I} \right)^{-1} \bm\theta_N^\top \bm\theta_F, \quad \text{and} \label{e:naive_denominator_large_n_small_m} \\ \lim_{m \to \infty} \plim_{n \to \infty} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \tilde\bm{A}_F^\mathrm{na\ddot{\i}ve} &= \sigma^2 \bm{I} \quad \text{under the strong infinite confounding assumption.} \label{e:naive_denominator_large_n_large_m} \end{align} Turning to the residual covariance between the focal causes and the outcome, \begin{align}
\frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \bm{Y} &= \frac{1}{n} \left( \bm{A}_F - \bm{A}_N \hat\bm\zeta \right)^\top \left( \bm{A}_F \bm\beta_F + \bm{A}_N \bm\beta_N + \bm{Z} \bm\gamma + \bm\epsilon \right) \nonumber \\ &= \frac{1}{n} \bm{A}_F^\top \bm{A}_F \bm\beta_F + \frac{1}{n} \bm{A}_F^\top \bm{A}_N \bm\beta_N + \frac{1}{n} \bm{A}_F^\top \bm{Z} \bm\gamma + \frac{1}{n} \bm{A}_F^\top \bm\epsilon \nonumber \\ &\qquad - \frac{1}{n} \hat\bm\zeta^\top \bm{A}_N^\top \bm{A}_F \bm\beta_F - \frac{1}{n} \hat\bm\zeta^\top \bm{A}_N^\top \bm{A}_N \bm\beta_N - \frac{1}{n} \hat\bm\zeta^\top \bm{A}_N^\top \bm{Z} \bm\gamma - \frac{1}{n} \hat\bm\zeta^\top \bm{A}_N^\top \bm\epsilon \nonumber \\ &= \frac{1}{n} (\bm\theta_F^\top \bm{Z}^\top + \bm\nu_F^\top) (\bm{Z} \bm\theta_F + \bm\nu_F) \bm\beta_F + \frac{1}{n} (\bm\theta_F^\top \bm{Z}^\top + \bm\nu_F^\top) (\bm{Z} \bm\theta_N + \bm\nu_N) \bm\beta_N \nonumber \\ &\qquad + \frac{1}{n} (\bm\theta_F^\top \bm{Z}^\top + \bm\nu_F^\top) \bm{Z} \bm\gamma + \frac{1}{n} \bm{A}_F^\top \bm\epsilon - \frac{1}{n} \hat\bm\zeta^\top (\bm\theta_N^\top \bm{Z}^\top + \bm\nu_N^\top) (\bm{Z} \bm\theta_F + \bm\nu_F) \bm\beta_F \nonumber \\ &\qquad - \frac{1}{n} \hat\bm\zeta^\top (\bm\theta_N^\top \bm{Z}^\top + \bm\nu_N^\top) (\bm{Z} \bm\theta_N + \bm\nu_N) \bm\beta_N - \frac{1}{n} \hat\bm\zeta^\top (\bm\theta_N^\top \bm{Z}^\top + \bm\nu_N^\top) \bm{Z} \bm\gamma - \frac{1}{n} \hat\bm\zeta^\top \bm{A}_N^\top \bm\epsilon \nonumber \end{align} Taking limits, \begin{align} \plim_{n \to \infty} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \bm{Y} &= \left[ \bm\theta_F^\top \bm\theta_F + \sigma^2 \bm{I} - \bm\theta_F^\top \bm\theta_N \left( \bm\theta_N^\top \bm\theta_N + \sigma^2 \bm{I} \right)^{-1} \bm\theta_N^\top \bm\theta_F \right] \bm\beta_F \nonumber \\ &\qquad + \left[ \bm\theta_F^\top - \bm\theta_F^\top \bm\theta_N \left( \bm\theta_N^\top \bm\theta_N + \sigma^2 \bm{I} \right)^{-1} \bm\theta_N^\top \right] \bm\gamma, \quad
\text{and} \label{e:naive_numerator_large_n_small_m} \\ \lim_{m \to \infty} \plim_{n \to \infty} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \bm{Y} &= \sigma^2 \bm\beta_F \quad \text{under the strong infinite confounding assumption.} \label{e:naive_numerator_large_n_large_m} \end{align} Combining \eqref{e:naive_denominator_large_n_small_m}~and~\eqref{e:naive_numerator_large_n_small_m} and applying Lemma~\ref{l:infinite_confounding} to $\bm\theta_{N}$, \begin{align*} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{na\ddot{\i}ve} &= \bm\beta_F + \left[ \bm\theta_F^\top \bm\theta_F + \sigma^2 \bm{I} - \bm\theta_F^\top \bm\theta_N \left( \bm\theta_N^\top \bm\theta_N + \sigma^2 \bm{I} \right)^{-1} \bm\theta_N^\top \bm\theta_F \right]^{-1} \\ &\qquad\qquad \cdot \left[ \bm\theta_F^\top - \bm\theta_F^\top \bm\theta_N \left( \bm\theta_N^\top \bm\theta_N + \sigma^2 \bm{I} \right)^{-1} \bm\theta_N^\top \right] \bm\gamma , \intertext{and under the strong infinite confounding assumption, \eqref{e:naive_denominator_large_n_large_m}~and~\eqref{e:naive_numerator_large_n_large_m} yield} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{na\ddot{\i}ve} &= \bm\beta_F .
\end{align*} When all effects are of interest, the above reduces to \begin{align*} \hat\bm\beta^\mathrm{na\ddot{\i}ve} &\equiv \left( \bm{A}^\top \bm{A} \right)^{-1} \bm{A}^\top \bm{Y} \\ \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{A} &= \bm\theta^\top \bm\theta + \sigma^2 \bm{I} \\ \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{Y} &= \plim_{n \to \infty} \frac{1}{n} (\bm\theta^\top \bm{Z}^\top + \bm\nu^\top ) (\bm{Z} \bm\theta \bm\beta + \bm\nu \bm\beta + \bm{Z} \bm\gamma + \bm\epsilon) \\ &= \bm\theta^\top \bm\theta \bm\beta + \bm\theta^\top \bm\gamma + \sigma^2 \bm{I} \bm\beta \\ \plim_{n \to \infty} \hat\bm\beta^\mathrm{na\ddot{\i}ve} &= \bm\beta + (\bm\theta^\top \bm\theta + \sigma^2 \bm{I})^{-1} \bm\theta^\top \bm\gamma \\ \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{na\ddot{\i}ve} &= \bm\beta \qed \end{align*} \subsection{Bias of the Penalized Deconfounder Estimator} \label{a:bias_ridge} For convenience, we reiterate the data-generating process and penalized deconfounder estimation procedure here, along with identities that will be useful in the proof of Proposition~\ref{prop:bias_ridge}. As before, we consider $n$ observations drawn i.i.d. as follows. 
\begin{align*} \underset{n \times k}{\bm{Z}} &\sim \mathcal{N}(\bm{0}, \bm{I}) \\ \underset{n \times m}{\bm\nu} &\sim \mathcal{N}(\bm{0}, \sigma^2 \bm{I}) \\ \underset{n \times m}{\bm{A}} &= \bm{Z} \bm\theta + \bm\nu \\ \underset{n \times 1}{\bm\epsilon} &\sim \mathcal{N}(\bm{0}, \omega^2) \\ \underset{n \times 1}{\bm{Y}} &= \bm{A} \bm\beta + \bm{Z} \bm\gamma + \bm\epsilon \end{align*} The penalized deconfounder estimator (1) takes the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$; (2) extracts the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$; and (3) estimates the focal effects by computing \begin{align*} \left[\begin{array}{l} \hat\bm\beta^\mathrm{penalty} \\ \hat\bm\gamma^\mathrm{penalty} \end{array} \right] &\equiv \left( \left[ \bm{A}, \hat\bm{Z} \right]^\top \left[ \bm{A}, \hat\bm{Z} \right] + \lambda(n) \bm{I} \right)^{-1} \left[ \bm{A}, \hat\bm{Z} \right]^\top \bm{Y} \end{align*} and discarding $\hat\bm\gamma^\mathrm{penalty}$. The $\lambda(n)$ term indicates the strength of the ridge penalty; we allow this term to scale with $n$ for full generality. Note that identification comes purely from this ridge penalty term---because $\hat\bm{Z}$ is merely a linear transformation of $\bm{A}$, the above is non-estimable when $\lambda(n) = 0$. We now restate Proposition~\ref{prop:bias_ridge} for convenience. \begin{mdframed} \textbf{Proposition~\ref{prop:bias_ridge}.} \it (Asymptotic Bias of the Penalized Full Deconfounder.) Consider the linear-linear data-generating process, in which $n$ observations are sampled i.i.d. by drawing $k$ unobserved confounders, $\bm{Z} \sim \mathcal{N}(\bm{0}, \bm{I})$; these generate $m \ge k$ observed treatments, $\bm{A} \sim \mathcal{N}(\bm{Z} \bm\theta, \sigma^2 \bm{I})$; and a scalar outcome is drawn from $\bm{Y} \sim \mathcal{N}(\bm{A} \bm\beta + \bm{Z} \bm\gamma, \omega^2)$.
The penalized deconfounder estimator, as implemented in WB, is \begin{align*} \hat\bm\beta^\mathrm{penalty} &\equiv \left( \left[ \bm{A}, \hat\bm{Z} \right]^\top \left[ \bm{A}, \hat\bm{Z} \right] + \lambda(n) \bm{I} \right)^{-1} \left[ \bm{A}, \hat\bm{Z} \right]^\top \bm{Y}, \end{align*} where $\hat\bm{Z}$ is obtained by taking the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$ and extracting the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$, and $\lambda(n)$ is a ridge penalty that is assumed to be sublinear in $n$. The asymptotic bias of this estimator is given by \begin{align*} \plim_{n \to \infty} \hat\bm\beta^\mathrm{penalty} - \bm\beta &= \overbrace{ - \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ 1 }{ \sigma^2 + \Lambda_j + 1 } \right) \bm{Q}_{1:k}^\top \bm\beta }^{\text{Regularization}} \\ &\qquad \underbrace{+ \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ \Lambda_j }{ \sigma^2 + \Lambda_j + 1 } \right) \bm{Q}_{1:k}^\top \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma}_{\text{Omitted Variable Bias}}, \intertext{where $\bm{Q}$ and $\bm\Lambda = [\Lambda_1, \ldots, \Lambda_k, 0, \ldots]$ are respectively eigenvectors and eigenvalues obtained from decomposition of $\bm\theta^\top \bm\theta$. Under strong infinite confounding,} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{penalty} - \bm\beta &= \bm{0}. \end{align*} \end{mdframed} In what follows, we prove Proposition~\ref{prop:bias_ridge} by relating the asymptotic behavior of the penalized deconfounder to the eigendecomposition $\bm\theta^\top \bm\theta = \bm{Q} \bm\Lambda \bm{Q}^\top = \bm{Q}_{1:k} \bm\Lambda_{1:k} \bm{Q}_{1:k}^\top$. We will rely on the singular value decompositions of $\bm{A}$ and $[ \bm{A}, \hat\bm{Z}]$. To distinguish these, for this section only, we denote the former as $\bm{A} = \bm{U}_A \bm{D}_A \bm{V}_A^\top$ and the latter as $[ \bm{A}, \hat\bm{Z}] = \bm{U}_{AZ} \bm{D}_{AZ} \bm{V}_{AZ}^\top $. 
Lemma~\ref{l:svd_az} characterizes the relationship between these. \begin{mdframed} \begin{lem} \label{l:svd_az} For any $n$, the singular value decomposition $[ \bm{A}, \hat\bm{Z}] = \bm{U}_{AZ} \bm{D}_{AZ} \bm{V}_{AZ}^\top $ obeys \begin{align*} \bm{U}_{AZ} &= \left[ \bm{U}_A,\ \ast \right] \\ \bm{D}_{AZ} &= \left[ \begin{array}{lllll} \left( \bm{D}_{A,1:k}^2 + n \bm{I} \right)^{\frac{1}{2}}, && \bm{0}, && \bm{0} \\ \enspace\bm{0}, && \bm{D}_{A,(k+1):m}, && \bm{0} \\ \enspace\bm{0}, && \bm{0}, && \bm{0} \end{array} \right] \\ \bm{V}_{AZ}^\top &= \left[ \begin{array}{lll} \left( \bm{D}_{A,1:k}^2 + n \bm{I} \right)^{-\frac{1}{2}} \bm{D}_{A,1:k} \bm{V}_{A,1:k}^\top, && \sqrt{n} \left( \bm{D}_{A,1:k}^2 + n \bm{I} \right)^{-\frac{1}{2}} \\[1ex] \enspace\bm{V}_{A,(k+1):m}^\top, && \bm{0} \\[1ex] \enspace\ast && \ast \end{array} \right], \end{align*} where $\ast$ indicates irrelevant normalizing columns in $\bm{U}_{AZ}$ and $\bm{V}_{AZ}$. \end{lem} \end{mdframed} \textit{Proof of Lemma~\ref{l:svd_az}.} The first equality follows from the fact that the newly appended $\hat\bm{Z}$ columns are merely linear transformations of $\bm{A}$, so that the leading $m$ left singular vectors remain unchanged. Of the unchanged left singular vectors, each of the first $k$ is directly proportional to the corresponding column of $\hat\bm{Z}$. Because $\hat\bm{Z}$ is standardized by construction, the variance explained by each of the first $k$ left singular vectors increases by one; the $(k+1)$-th through $m$-th left singular vectors are orthogonal to the newly appended $\hat\bm{Z}$ and so their singular values remain unchanged. This yields the second equality. The third equality can be verified by checking that $\bm{U}_{AZ} \bm{D}_{AZ} \bm{V}_{AZ}^\top = [\bm{U}_A \bm{D}_A \bm{V}_A^\top, \sqrt{n} \bm{U}_{A,1:k}] = [ \bm{A}, \hat\bm{Z}]$ and $\bm{V}_{AZ,1:m}^\top \bm{V}_{AZ,1:m} = \bm{I}$. \qed We now examine the asymptotic behavior of the penalized deconfounder estimator.
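The singular-value structure asserted in Lemma~\ref{l:svd_az} can also be confirmed numerically. The following sketch (the dimensions $n$, $m$, $k$ and the noise scale are illustrative choices, not values taken from the text) checks that appending $\hat\bm{Z} = \sqrt{n} \bm{U}_{1:k}$ to $\bm{A}$ adds exactly $n$ to each of the top $k$ squared singular values while leaving the remaining $m - k$ singular values unchanged.

```python
# Numerical check of Lemma l:svd_az; n, m, k, and sigma are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, k, sigma = 500, 6, 2, 0.5

Z = rng.standard_normal((n, k))
theta = rng.standard_normal((k, m))
A = Z @ theta + sigma * rng.standard_normal((n, m))   # A = Z theta + nu

# SVD of A and the substitute confounder Z_hat = sqrt(n) U_{1:k}
U, d, Vt = np.linalg.svd(A, full_matrices=False)
Z_hat = np.sqrt(n) * U[:, :k]

# Singular values of the augmented matrix [A, Z_hat]
d_az = np.linalg.svd(np.hstack([A, Z_hat]), compute_uv=False)

# Top-k squared singular values shift by exactly n...
assert np.allclose(d_az[:k] ** 2, d[:k] ** 2 + n)
# ...the next m - k are unchanged, and the trailing k are numerically zero.
assert np.allclose(d_az[k:m], d[k:m])
assert np.allclose(d_az[m:], 0, atol=1e-8)
```

The trailing $k$ singular values of $[\bm{A}, \hat\bm{Z}]$ being zero is precisely the rank deficiency that renders the regression non-estimable without the ridge penalty.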
\noindent\textit{Proof of Proposition~\ref{prop:bias_ridge}.} \begin{align} \left[\begin{array}{l} \hat\bm\beta^\mathrm{penalty} \\ \hat\bm\gamma^\mathrm{penalty} \end{array}\right] &= \left( [ \bm{A} ,\ \hat\bm{Z} ]^\top [ \bm{A} ,\ \hat\bm{Z} ] + \lambda(n) \bm{I} \right)^{-1} [ \bm{A} ,\ \hat\bm{Z} ]^\top \ \bm{Y} \\ &= \bm{V}_{AZ} \left( \bm{D}_{AZ}^{\ 2} + \lambda(n) \bm{I} \right)^{-1} \bm{D}_{AZ} \bm{U}_{AZ}^\top \bm{Y} \intertext{By Lemma~\ref{l:svd_az},} &= \left[\begin{array}{ll} \bm{V}_A \bm{D}_A, & \ast \\[1ex] \sqrt{n} \bm{I}, & \ast \end{array}\right] \left[ \begin{array}{ll} \left( \bm{D}_A^2 + \lambda(n) \bm{I} + n \cdot \mathrm{diag}_j 1\{ j \le k \} \right)^{-1}, & \bm{0} \\[1ex] \enspace\bm{0}, & \bm{0} \end{array} \right] [ \bm{U}_A , \ \ast ]^\top \bm{Y} \nonumber \\ \intertext{where asterisks denote irrelevant blocks, eliminated below.} &= \left[\begin{array}{l} \bm{V}_A \bm{D}_A \\ \sqrt{n} \bm{I} \end{array}\right] \left( \bm{D}_A^2 + \lambda(n) \bm{I} + n \cdot \mathrm{diag}_j 1\{ j \le k \} \right)^{-1} \bm{U}_A^\top \left( \ \bm{A} \bm\beta + \bm{Z} \bm\gamma + \bm\epsilon \right) \nonumber \\ \intertext{We now subset to $\hat\bm\beta^\mathrm{penalty}$, then substitute $\bm{A} = \bm{U}_A \bm{D}_A \bm{V}_A^\top$, $\bm{U}_A^\top = \bm{D}_A^{-1} \bm{V}_A^\top \bm{A}^\top$ and $\bm{Z} = (\bm{A} - \bm\nu) \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1}$,} \plim_{n \to \infty} \hat\bm\beta^\mathrm{penalty} &= \plim_{n \to \infty} \bm{V}_A \bm{D}_A \left( \bm{D}_A^2 + \lambda(n) \bm{I} + n \cdot \mathrm{diag}_j 1\{ j \le k \} \right)^{-1} \bm{D}_A \bm{V}_A^\top \bm\beta \label{e:ridge_goto} \\ &\qquad\qquad + \bm{V}_A \bm{D}_A \left( \bm{D}_A^2 + \lambda(n) \bm{I} + n \cdot \mathrm{diag}_j 1\{ j \le k \} \right)^{-1} \bm{D}_A \bm{V}_A^\top \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma \nonumber \\ &\qquad\qquad - \bm{V}_A \left( \bm{D}_A^2 + \lambda(n) \bm{I} + n \cdot \mathrm{diag}_j 1\{ j \le k \} \right)^{-1} \bm{V}_A^\top 
\bm{A}^\top \bm\nu \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma \nonumber \\ &= \plim_{n \to \infty} \bm{V}_{A} \ \mathrm{diag}_j \left( \frac{ \sigma^2 + 1\{ j \le k \} \Lambda_j }{ \sigma^2 + \lambda(n) / n + 1\{ j \le k \} (\Lambda_j + 1) } \right) \bm{V}_{A}^\top \bm\beta \nonumber \\ &\qquad\qquad + \bm{V}_{A} \ \mathrm{diag}_j \left( \frac{ \sigma^2 + 1\{ j \le k \} \Lambda_j }{ \sigma^2 + \lambda(n) / n + 1\{ j \le k \} (\Lambda_j + 1) } \right) \bm{V}_{A}^\top \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma \nonumber \\ &\qquad\qquad - \bm{V}_A \ \mathrm{diag}_j \left( \frac{ \sigma^2 }{ \sigma^2 + \lambda(n) / n + 1\{ j \le k \} (\Lambda_j + 1) } \right) \bm{V}_A^\top \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma \nonumber \\ \intertext{By properties of PPCA \citep{TipBis99},} &= \plim_{n \to \infty} \bm\beta - \bm{V}_A \ \mathrm{diag}_j \left( \frac{ \lambda(n) / n + 1\{ j \le k \} }{ \sigma^2 + \lambda(n) / n + 1\{ j \le k \} (\Lambda_j + 1) } \right) \bm{V}_A^\top \bm\beta \nonumber \\ &\qquad\qquad + \bm{V}_{A} \ \mathrm{diag}_j \left( \frac{ 1\{ j \le k \} \Lambda_j }{ \sigma^2 + \lambda(n) / n + 1\{ j \le k \} (\Lambda_j + 1) } \right) \bm{V}_{A}^\top \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma \nonumber \\ &= \bm\beta - \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ \lambda(n) / n + 1 }{ \sigma^2 + \lambda(n) / n + \Lambda_j + 1 } \right) \bm{Q}_{1:k}^\top \bm\beta \nonumber \\ & \qquad - \left( \frac{ \lambda(n) / n }{ \sigma^2 + \lambda(n) / n } \right) \bm{Q}_{(k+1):m} \bm{Q}_{(k+1):m}^\top \bm\beta \nonumber \\ &\qquad + \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ \Lambda_j }{ \sigma^2 + \lambda(n) / n + \Lambda_j + 1 } \right) \bm{Q}_{1:k}^\top \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma \nonumber \\ \intertext{When $\lambda(n)$ is sublinear,} \plim_{n \to \infty} \hat\bm\beta^\mathrm{penalty} &= \bm\beta - \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ 1 }{ \sigma^2 + \Lambda_j + 1 } 
\right) \bm{Q}_{1:k}^\top \bm\beta \nonumber \\ & \qquad + \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ \Lambda_j }{ \sigma^2 + \Lambda_j + 1 } \right) \bm{Q}_{1:k}^\top \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma . \nonumber \end{align} Under strong infinite confounding, it can be seen that $\lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{penalty} = \bm\beta$. This follows from $\lim_{m \to \infty} \bm\Lambda_{1:k}^{-1} = \bm{0}$ and $\lim_{m \to \infty} \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma = \bm{0}$. \qed \subsection{Bias of the Penalized Deconfounder Estimator under Nonlinear Confounding} \label{a:bias_ridge_nonlinear} In this section, we evaluate the behavior of the deconfounder in a more general setting. We consider $n$ observations drawn i.i.d. from the data-generating process below. \begin{align} \underset{n \times k}{\bm{Z}} &\sim \mathcal{N}(\bm{0}, \bm{I}) \nonumber \\ \underset{n \times m}{\bm\nu} &\sim \mathcal{N}(\bm{0}, \sigma^2 \bm{I}) \nonumber \\ \underset{n \times m}{\bm{A}} &= \bm{Z} \bm\theta + \bm\nu \nonumber \\ \underset{n \times 1}{\bm\epsilon} &\sim \mathcal{N}(\bm{0}, \omega^2) \nonumber \\ \intertext{However, we relax the outcome model to allow for arbitrary additively separable confounding,} \underset{n \times 1}{\bm{Y}} &= \left[ \bm{A}_i^\top \bm\beta + g_Y(\bm{Z}_i) + \bm\epsilon_i \right] \label{e:outcome_nonlinear} \end{align} The linear-linear model defined in Section~\ref{s:asymptotic} is a special case of the linear-separable model. Note that in empirical settings, the specific functional form of $g_Y(\cdot)$ is rarely known except when analyzing simple physical systems. However, any $g_Y(\cdot)$ can be approximated arbitrarily well by a polynomial expansion of degree $d$, as follows. First, denote the polynomial basis expansion of $\bm{Z}_i$ as $h(\bm{Z}_i) \equiv \left[ \prod_{k'=1}^k Z_{i,k'}^{d'_{k'}} \right]_{\sum_{k'=1}^k d'_{k'} \le d}$ and collect these in rows of $h(\bm{Z}) = [ h(\bm{Z}_i) ]$.
Then, \eqref{e:outcome_nonlinear} can be rewritten by Taylor expansion as \begin{align} \bm{Y} &= \bm{A} \bm\beta + h(\bm{Z}) \bm\xi + \bm\epsilon \label{e:outcome_nonlinear_expansion} \end{align} with approximation error that grows arbitrarily small as $d$ grows large. We will choose $d$ sufficiently large to fully capture $g_Y(\cdot)$. Let $\bm{W}$ be the orthogonal higher-order polynomials of $\bm{Z}$. Then, \eqref{e:outcome_nonlinear} can be rewritten yet again as \begin{align} \bm{Y} &= \bm{A} \bm\beta + \bm{Z} \bm\gamma + \bm{W} \bm\delta + \bm\epsilon. \end{align} As before, the effects $\bm\beta$ are the causal quantities of interest. Note that the confounding, $g_Y(\bm{Z})$, is a nuisance term, so there is no need to reconstruct it from its expansion. We will assume that $g_Y(\bm{Z})$ is zero-mean for convenience; this assumption is trivial to relax using an added intercept. We will derive the asymptotic behavior of the flexible penalized deconfounder, which generalizes the penalized full deconfounder of Supplement~\ref{a:bias_ridge} to all additively separable forms of confounding. The flexible penalized deconfounder estimator consists of the following procedure: (1) take the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$; (2) extract the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$.
To allow for nonlinear confounding, (3) compute $h(\hat\bm{Z})$ and take its QR decomposition, $h(\hat\bm{Z}) = \bm{Q}_Z \bm{R}_Z = \left[ \frac{1}{\sqrt{n}} \hat\bm{Z} ,\ \frac{1}{\sqrt{n}} \hat\bm{W} \right] \bm{R}_Z$.\footnote{The invariance of $\hat\bm{Z}$ follows from its orthonormality.} Finally, (4) estimate $\bm\beta$ by a ridge regression of the form $$[ \hat\bm\beta^{\mathrm{nonlinear} \top} ,\ \hat\bm\gamma^{\mathrm{nonlinear} \top} ,\ \hat\bm\delta^{\mathrm{nonlinear} \top}]^\top \ = \ \left( [ \bm{A} ,\ \hat\bm{Z} ,\ \hat\bm{W} ]^\top [ \bm{A} ,\ \hat\bm{Z} ,\ \hat\bm{W} ] + \lambda(n) \bm{I} \right)^{-1} [ \bm{A} ,\ \hat\bm{Z} ,\ \hat\bm{W} ]^\top \bm{Y}.$$ As in Supplement~\ref{a:bias_ridge}, the $\lambda(n)$ term indicates the strength of the ridge penalty and is allowed to scale sublinearly in $n$. Again, identification is purely from this ridge penalty, because $\hat\bm{Z}$ is merely a linear transformation of $\bm{A}$ and thus the matrix is non-invertible without regularization. \begin{mdframed} \begin{prop} \label{prop:bias_ridge_nonlinear} (Asymptotic Bias of the Flexible Penalized Deconfounder under Additively Separable Confounding.) For all data-generating processes containing a linear factor model and additively separable confounding, the asymptotic bias of the flexible ridge deconfounder is given by \begin{align*} \plim_{n \to \infty} \hat\bm\beta^\mathrm{nonlinear} - \bm\beta &= - \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ 1 }{ \sigma^2 + \Lambda_j + 1 } \right) \bm{Q}_{1:k}^\top \bm\beta \\ & \qquad + \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ \Lambda_j }{ \sigma^2 + \Lambda_j + 1 } \right) \bm{Q}_{1:k}^\top \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma, \intertext{where $\bm{Q}$ and $\bm\Lambda = [\Lambda_1, \ldots, \Lambda_k, 0, \ldots]$ are respectively eigenvectors and eigenvalues obtained from decomposition of $\bm\theta^\top \bm\theta$. 
Under strong infinite confounding,} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{nonlinear} &= \bm\beta. \end{align*} \end{prop} \end{mdframed} We briefly offer intuition for the form of this bias before proceeding to the proof. The bias expressions in Proposition~\ref{prop:bias_ridge_nonlinear} are identical to those of Proposition~\ref{prop:bias_ridge}, though the interpretation diverges slightly due to the flexible nature of the confounding function, $g_Y(\bm{Z})$. The term $\bm\gamma$ represents the portion of the confounding due to the linear trend in $g_Y(\bm{Z})$, which induces bias as described above. In contrast, $\bm\delta$ represents the nonlinear portion of the confounding that remains after eliminating the main linear trend. Because this part of $g_Y(\bm{Z})$ is by construction orthogonal to $\bm{Z}$ (and therefore to $\bm{A}$, due to the linear nature of the factor model) it cannot induce bias in $\hat\bm\beta^\mathrm{nonlinear}$. \noindent\textit{Proof of Proposition~\ref{prop:bias_ridge_nonlinear}.} In what follows, we will relate the asymptotic behavior of the flexible penalized deconfounder to the eigendecomposition $\bm\theta^\top \bm\theta = \bm{Q} \bm\Lambda \bm{Q}^\top = \bm{Q}_{1:k} \bm\Lambda_{1:k} \bm{Q}_{1:k}^\top$. To do so, we will rely on the singular value decompositions of $\bm{A}$, $[ \bm{A}, \hat\bm{Z}]$, and $[ \bm{A}, \hat\bm{Z}, \hat\bm{W}]$. For this section only, we respectively denote these as $\bm{A} = \bm{U}_A \bm{D}_A \bm{V}_A^\top$, $[ \bm{A}, \hat\bm{Z}] = \bm{U}_{AZ} \bm{D}_{AZ} \bm{V}_{AZ}^\top $, and $[ \bm{A}, \hat\bm{Z}, \hat\bm{W}] = \bm{U}_{AZW} \bm{D}_{AZW} \bm{V}_{AZW}^\top$. Lemma \ref{l:svd_az} characterizes the relationship between the first two; we now describe the latter. 
For any $n$, the singular value decomposition $[ \bm{A}, \hat\bm{Z}, \hat\bm{W}] = \bm{U}_{AZW} \bm{D}_{AZW} \bm{V}_{AZW}^\top $ can be seen to obey \begin{align*} \bm{U}_{AZW} &= \left[ \frac{1}{\sqrt{n}} \hat\bm{Z},\ \bm{U}_{A, (k+1):m},\ \ast,\ \frac{1}{\sqrt{n}} \hat\bm{W} \right] \\ \bm{D}_{AZW} &= \left[ \begin{array}{lllllll} \left( \bm{D}_{A,1:k}^2 + n \bm{I} \right)^{\frac{1}{2}}, && \bm{0}, && \bm{0}, && \bm{0}\\ \enspace\bm{0}, && \bm{D}_{A,(k+1):m}, && \bm{0}, && \bm{0} \\ \enspace\bm{0}, && \bm{0}, && \bm{0}, && \bm{0} \\ \bm{0}, && \bm{0}, && \bm{0}, && \sqrt{n} \bm{I} \end{array} \right] \\ \bm{V}_{AZW}^\top &= \left[ \begin{array}{lllll} \left( \bm{D}_{A,1:k}^2 + n \bm{I} \right)^{-\frac{1}{2}} \bm{D}_{A,1:k} \bm{V}_{A,1:k}^\top, && \sqrt{n} \left( \bm{D}_{A,1:k}^2 + n \bm{I} \right)^{-\frac{1}{2}}, && \bm{0} \\[1ex] \enspace\bm{V}_{A,(k+1):m}^\top, && \bm{0}, && \bm{0} \\[1ex] \enspace\ast, && \ast, && \ast \\[1ex] \enspace\bm{0}, && \bm{0}, && \bm{I} \end{array} \right], \end{align*} where $\ast$ indicates irrelevant normalizing columns in $\bm{U}_{AZW}$ and $\bm{V}_{AZW}$. The above is due to Lemma~\ref{l:svd_az} for the first $k+m$ columns. The behavior of the trailing columns follows from the fact that $\hat\bm{W}$ is normalized and orthogonal to $\hat\bm{Z}$ (and therefore to $\bm{A}$) by construction, and therefore remains invariant in the decomposition. We now substitute the singular value decomposition of $[ \bm{A}, \hat\bm{Z}, \hat\bm{W}]$ into the ridge estimator. 
\begin{align*} \left[\begin{array}{l} \hat\bm\beta^\mathrm{nonlinear} \\ \hat\bm\gamma^\mathrm{nonlinear} \\ \hat\bm\delta^\mathrm{nonlinear} \end{array}\right] &= \left( [ \bm{A} ,\ \hat\bm{Z} ,\ \hat\bm{W} ]^\top [ \bm{A} ,\ \hat\bm{Z} ,\ \hat\bm{W} ] + \lambda(n) \bm{I} \right)^{-1} [ \bm{A} ,\ \hat\bm{Z} ,\ \hat\bm{W} ]^\top \bm{Y} \\ &= \bm{V}_{AZW} \left( \bm{D}_{AZW}^{\ 2} + \lambda(n) \bm{I} \right)^{-1} \bm{D}_{AZW} \bm{U}_{AZW}^\top \bm{Y} \intertext{Eliminating dimensions with zero singular values and subsetting to $\hat\bm\beta^\mathrm{nonlinear}$, we obtain} &= \bm{V}_A \bm{D}_A \left( \bm{D}_A^2 + \lambda(n) \bm{I} + n \cdot \mathrm{diag}_j 1\{ j \le k \} \right)^{-1} \bm{U}_A^\top \bm{Y} \end{align*} from which point the proof proceeds as from \eqref{e:ridge_goto} in the proof of Proposition~\ref{prop:bias_ridge}. \qed \subsection{Bias of the White-noised Deconfounder Estimator} \label{a:bias_noise_white} In one of the tutorial simulations in \citet{wang2019github}, Gaussian noise is added to the substitute confounder to render the outcome regression estimable. This simulation and our reanalysis are discussed in Supplement~\ref{a:logisticsim}. Here we prove properties of this general strategy. For convenience, we reiterate the data-generating process and white-noised deconfounder estimation procedure here. As before, we consider $n$ observations drawn i.i.d. as follows.
\begin{align*} \underset{n \times k}{\bm{Z}} &\sim \mathcal{N}(\bm{0}, \bm{I}) \\ \underset{n \times m}{\bm\nu} &\sim \mathcal{N}(\bm{0}, \sigma^2 \bm{I}) \\ \underset{n \times m}{\bm{A}} &= \bm{Z} \bm\theta + \bm\nu \\ \underset{n \times 1}{\bm\epsilon} &\sim \mathcal{N}(\bm{0}, \omega^2) \\ \underset{n \times 1}{\bm{Y}} &= \bm{A} \bm\beta + \bm{Z} \bm\gamma + \bm\epsilon \end{align*} The white-noised deconfounder estimator (1) takes the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$; (2) extracts the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$, and the accompanying loadings, $\hat\bm\theta \equiv \frac{1}{\sqrt{n}} \bm{D}_{1:k} \bm{V}_{1:k}^\top$; (3) adds noise $\bm{S} \sim \mathcal{N}(\bm{0}, \psi^2 \bm{I})$ to $\hat\bm{Z}$ to break perfect collinearity with $\bm{A}$; and (4) estimates effects by computing \begin{align} \left[\begin{array}{l} \hat\bm\beta^\mathrm{wn} \\ \hat\bm\gamma^\mathrm{wn} \end{array} \right] &\equiv \left( \left[ \bm{A}, \hat\bm{Z} + \bm{S} \right]^\top \left[ \bm{A}, \hat\bm{Z} + \bm{S} \right] \right)^{-1} \left[ \bm{A}, \hat\bm{Z} + \bm{S} \right]^\top \bm{Y}. \label{e:noise_posterior_white_estimator} \end{align} We now restate Proposition~\ref{prop:bias_noise_white} before proceeding to the proof. \begin{mdframed} \textbf{Proposition~\ref{prop:bias_noise_white}.} \it (Asymptotic Bias of the White-noised Deconfounder.) Consider $n$ observations drawn from a data-generating process with $k$ unobserved confounders, $\bm{Z} \sim \mathcal{N}(\bm{0}, \bm{I})$; $m \ge k$ observed treatments, $\bm{A} \sim \mathcal{N}(\bm{Z} \bm\theta, \sigma^2 \bm{I})$; and outcome $\bm{Y} \sim \mathcal{N}(\bm{A} \bm\beta + \bm{Z} \bm\gamma, \omega^2)$.
The white-noised deconfounder estimator is \begin{align*} \left[\begin{array}{l} \hat\bm\beta^\mathrm{wn} \\ \hat\bm\gamma^\mathrm{wn} \end{array}\right] &\equiv \left( \left[ \bm{A}, \hat\bm{Z} + \bm{S} \right]^\top \left[ \bm{A}, \hat\bm{Z} + \bm{S} \right] \right)^{-1} \left[ \bm{A}, \hat\bm{Z} + \bm{S} \right]^\top \bm{Y}, \end{align*} where $\hat\bm{Z}$ is obtained by taking the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$ and extracting the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$; the addition of white noise, $\bm{S} \sim \mathcal{N}(\bm{0}, \psi^2 \bm{I})$, makes this regression estimable. The asymptotic bias of this estimator is given by \begin{align*} \plim_{n \to \infty} \hat\bm\beta^\mathrm{wn} - \bm\beta &= \left\{ \bm\theta^\top \left[\bm{I} - \frac{\sigma^2}{\psi^2} ( \bm\theta \bm\theta^\top )^{-1} \right] \bm\theta + \frac{\sigma^2}{\psi^2} (1 + \psi^2) \bm{I} \right\}^{-1} \bm\theta^\top \bm\gamma , \\ \intertext{and under strong infinite confounding,} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{wn} - \bm\beta &= \left[ \bm\theta^\top \bm\theta + \frac{\sigma^2}{\psi^2} (1 + \psi^2) \bm{I} \right]^{-1} \bm\theta^\top \bm\gamma \end{align*} \end{mdframed} \noindent\textit{Proof of Proposition~\ref{prop:bias_noise_white}.} After subsetting \eqref{e:noise_posterior_white_estimator} to the treatment effects, the estimator can be rewritten as \begin{align*} \hat\bm\beta^\mathrm{wn} &= (\bm{A}^\top \bm{M}^\mathrm{wn} \bm{A})^{-1} \bm{A}^\top \bm{M}^\mathrm{wn} \bm{Y} ,\quad\text{where} \\ \bm{M}^\mathrm{wn} &\equiv \bm{I} - (\hat\bm{Z} + \bm{S}) \left[ (\hat\bm{Z} + \bm{S})^\top (\hat\bm{Z} + \bm{S}) \right]^{-1} (\hat\bm{Z} + \bm{S})^\top.
\end{align*} Note that \begin{align*} \plim_{n \to \infty} \hat\bm\beta^\mathrm{wn} &= \plim_{n \to \infty} (\bm{A}^\top \bm{M}^\mathrm{wn} \bm{A})^{-1} \bm{A}^\top \bm{M}^\mathrm{wn} \bm{Y} \\ &= \plim_{n \to \infty} (\bm{A}^\top \bm{M}^\mathrm{wn} \bm{A})^{-1} \bm{A}^\top \bm{M}^\mathrm{wn} (\bm{A} \bm\beta + \bm{Z} \bm\gamma + \bm\epsilon) \\ &= \bm\beta + \plim_{n \to \infty} (\bm{A}^\top \bm{M}^\mathrm{wn} \bm{A})^{-1} \bm{A}^\top \bm{M}^\mathrm{wn} \bm{Z} \bm\gamma. \end{align*} We will proceed by first examining $\plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{M}^\mathrm{wn} \bm{A}$ and then $\plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{M}^\mathrm{wn} \bm{Z}$. \begin{align*} \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{M}^\mathrm{wn} \bm{A} &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \left\{ \bm{I} - (\hat\bm{Z} + \bm{S}) \left[ (\hat\bm{Z} + \bm{S})^\top (\hat\bm{Z} + \bm{S}) \right]^{-1} (\hat\bm{Z} + \bm{S})^\top \right\} \bm{A} \\ &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \left[ \bm{I} - \frac{1}{n (1 + \psi^2 )} (\hat\bm{Z} + \bm{S}) (\hat\bm{Z} + \bm{S})^\top \right] \bm{A} \\ &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{A} - \frac{1}{1 + \psi^2} \left( \frac{1}{n} \bm{A}^\top \hat\bm{Z} \right) \left( \frac{1}{n} \hat\bm{Z}^\top \bm{A} \right) \\ &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{A} - \frac{1}{1 + \psi^2} \hat\bm\theta^\top \hat\bm\theta \\ &= \frac{\psi^2}{1 + \psi^2} \bm\theta^\top \bm\theta + \sigma^2 \bm{I} - \frac{\sigma^2}{1 + \psi^2} \bm\theta^\top \left(\bm\theta \bm\theta^\top\right)^{-1} \bm\theta \quad\text{by Lemma~\ref{l:residual_dependence}} \\ &= \frac{\psi^2 }{1 + \psi^2} \bm\theta^\top \left[\bm{I} - \frac{\sigma^2}{\psi^2} ( \bm\theta \bm\theta^\top )^{-1} \right] \bm\theta + \sigma^2 \bm{I} \\ \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{M}^\mathrm{wn} \bm{Z} &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \left\{ \bm{I} - (\hat\bm{Z} + \bm{S}) \left[ (\hat\bm{Z} + \bm{S})^\top (\hat\bm{Z} +
\bm{S}) \right]^{-1} (\hat\bm{Z} + \bm{S})^\top \right\} \bm{Z} \\ &= \plim_{n \to \infty} \bm\theta^\top - \frac{1}{1 + \psi^2} \left( \frac{1}{n} \bm{A}^\top \hat\bm{Z} \right) \left( \frac{1}{n} \hat\bm{Z}^\top \bm{Z} \right) \\ &= \plim_{n \to \infty} \bm\theta^\top - \frac{1}{1 + \psi^2} \bm\theta^\top (\hat\bm\theta \hat\bm\theta^\top)^{-1} \hat\bm\theta \left[ \frac{1}{n} \bm{A}^\top (\bm{A} - \bm\nu) \right] \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \\ &= \plim_{n \to \infty} \bm\theta^\top - \frac{1}{1 + \psi^2} \bm\theta^\top (\hat\bm\theta \hat\bm\theta^\top)^{-1} \hat\bm\theta \bm\theta^\top \bm\theta \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \\ \intertext{By Lemma~\ref{l:theta_hat},} &= \plim_{n \to \infty} \bm\theta^\top - \frac{1}{1 + \psi^2} \bm\theta^\top \bm{R} \bm\Lambda_{1:k}^{-\frac{1}{2}} \left( \bm\Lambda_{1:k} + \sigma^2 \bm{I} \right)^{\frac{1}{2}} (\hat\bm\theta \hat\bm\theta^\top)^{-1} \nonumber \\ & \qquad \qquad \qquad \qquad \qquad \cdot \hat\bm\theta \hat\bm\theta^\top \left( \bm\Lambda_{1:k} + \sigma^2 \bm{I} \right)^{-\frac{1}{2}} \bm\Lambda_{1:k}^{\frac{1}{2}} \bm{R}^\top \\ &= \frac{\psi^2}{1 + \psi^2} \bm\theta^\top \\ \plim_{n \to \infty} \hat\bm\beta^\mathrm{wn} - \bm\beta &= \plim_{n \to \infty} (\bm{A}^\top \bm{M}^\mathrm{wn} \bm{A})^{-1} \bm{A}^\top \bm{M}^\mathrm{wn} \bm{Z} \bm\gamma \\ &= \left\{ \bm\theta^\top \left[\bm{I} - \frac{\sigma^2}{\psi^2} ( \bm\theta \bm\theta^\top )^{-1} \right] \bm\theta + \frac{\sigma^2}{\psi^2} (1 + \psi^2) \bm{I} \right\}^{-1} \bm\theta^\top \bm\gamma \quad\text{and} \\ \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{wn} - \bm\beta &= \left[ \bm\theta^\top \bm\theta + \frac{\sigma^2}{\psi^2} (1 + \psi^2) \bm{I} \right]^{-1} \bm\theta^\top \bm\gamma \qed \end{align*} \subsection{Bias of the Posterior-mean Deconfounder Estimator} \label{a:bias_noise_posterior} For convenience, we reiterate the data-generating process and posterior-mean deconfounder estimation procedure 
here. As before, we consider $n$ observations drawn i.i.d. as follows. \begin{align*} \underset{n \times k}{\bm{Z}} &\sim \mathcal{N}(\bm{0}, \bm{I}) \\ \underset{n \times m}{\bm\nu} &\sim \mathcal{N}(\bm{0}, \sigma^2 \bm{I}) \\ \underset{n \times m}{\bm{A}} &= \bm{Z} \bm\theta + \bm\nu \\ \underset{n \times 1}{\bm\epsilon} &\sim \mathcal{N}(\bm{0}, \omega^2) \\ \underset{n \times 1}{\bm{Y}} &= \bm{A} \bm\beta + \bm{Z} \bm\gamma + \bm\epsilon \end{align*} The posterior-mean deconfounder estimator is an approximate Bayesian procedure, in the sense that it obtains an estimate for the effects $\bm\beta$ by integrating over an approximation to the full joint posterior, $f(\bm\beta, \bm\gamma, \bm{z} | \bm{Y}, \bm{A})$, as follows. First, the full posterior is factorized as $f(\bm\beta, \bm\gamma | \bm{Y}, \bm{A}, \bm{z}) f(\bm{z} | \bm{Y}, \bm{A})$. Then, $f(\bm{z} | \bm{A})$ is obtained by a Bayesian principal components analysis of $\bm{A}$ alone---i.e., ignoring information from $\bm{Y}$---and used as an approximation to $f(\bm{z} | \bm{Y}, \bm{A})$. A Bayesian linear regression of $\bm{Y}$ on $\bm{A}$ and $\bm{z}$ is used to obtain the conditional posterior $f(\bm\beta, \bm\gamma | \bm{Y}, \bm{A}, \bm{z})$, and finally $\bm{z}$ is integrated out. The posterior-mean deconfounder estimator is thus \begin{align*} \left[ \hat\bm\beta^{\mathrm{pm} \top}, \hat\bm\gamma^{\mathrm{pm} \top} \right]^\top &\equiv \int f(\bm{z} | \bm{A}) \ \mathbb{E}\left\{ \left[ \bm\beta^\top, \bm\gamma^\top \right]^\top \big| \ \bm{Y}, \bm{A}, \bm{z} \right\} \dd{\bm{z}}. \end{align*} We leave priors for all parameters unspecified; by the Bernstein-von Mises theorem, our results are invariant to the choice of any prior with positive density on the true parameters. We now restate Proposition~\ref{prop:bias_noise_posterior} before proceeding to the proof. \begin{mdframed} \textbf{Proposition \ref{prop:bias_noise_posterior}.} \it (Asymptotic Bias of the Posterior-Mean Deconfounder.)
Consider $n$ observations drawn from a data-generating process with $k$ unobserved confounders, $\bm{Z} \sim \mathcal{N}(\bm{0}, \bm{I})$; $m \ge k$ observed treatments, $\bm{A} \sim \mathcal{N}(\bm{Z} \bm\theta, \sigma^2 \bm{I})$; and outcome $\bm{Y} \sim \mathcal{N}(\bm{A} \bm\beta + \bm{Z} \bm\gamma, \omega^2)$, following WB. The posterior-mean deconfounder estimator is \begin{align*} \left[\begin{array}{l} \hat\bm\beta^\mathrm{pm} \\ \hat\bm\gamma^\mathrm{pm} \end{array}\right] &\equiv \int \left( \left[ \bm{A}, \bm{z} \right]^\top \left[ \bm{A}, \bm{z} \right] \right)^{-1} \left[ \bm{A}, \bm{z} \right]^\top \bm{Y} \ f(\bm{z} | \bm{A}) \dd{\bm{z}} , \end{align*} where $f(\bm{z} | \bm{A})$ is a posterior obtained from Bayesian principal component analysis.\footnote{While the regression cannot be estimated when $\bm{z} = \mathbb{E}[\bm{Z} | \bm{A}]$, it is almost surely estimable for samples $\bm{z}^\ast \sim f(\bm{z} | \bm{A})$ due to posterior uncertainty, which eliminates perfect collinearity with $\bm{A}$.
The posterior-mean implementation of WB evaluates the integral by Monte Carlo methods and thus is able to compute the regression coefficients for each sample.} The asymptotic bias of this estimator is given by \begin{align*} \plim_{n \to \infty} \hat\bm\beta^\mathrm{pm} - \bm\beta &= (\bm\theta^\top \bm\theta + \sigma^2 \bm{I})^{-1} \bm\theta^\top \bm\gamma, \intertext{and under strong infinite confounding,} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{pm} - \bm\beta &= \bm{0} \end{align*} \end{mdframed} \noindent\textit{Proof of Proposition~\ref{prop:bias_noise_posterior}.} Under the Bayesian principal components generative model, \begin{align*} \left[ \begin{array}{l} \bm{Z}_i \\ \bm{A}_i \end{array} \right] &\sim \mathcal{N}\left( \bm{0}, \left[ \begin{array}{ll} \bm{I}, & \bm\theta \\ \bm\theta^\top, & \bm\theta^\top \bm\theta + \sigma^2 \bm{I} \end{array} \right] \right)\\ \intertext{and by properties of the multivariate normal,} f(\bm{z}_i | \bm{A}_i, \bm\theta, \sigma^2) &= \phi\left( \bm{z}_i ; ~ \bm\theta (\bm\theta^\top \bm\theta + \sigma^2 \bm{I})^{-1} \bm{A}_i, ~ \bm{I} - \bm\theta (\bm\theta^\top \bm\theta + \sigma^2 \bm{I})^{-1} \bm\theta^\top \right). \end{align*} We will decompose the conditional posterior over confounders as $f(\bm{z} | \bm{A}) = \\ f(\bm{z} | \bm\theta, \sigma^2, \bm{A}) f(\bm\theta, \sigma^2 | \bm{A})$. 
A sample $\bm{z}_i^\ast$ can be drawn from the Bayesian principal component posterior by first sampling $\bm\theta^\ast$ and $\sigma^{\ast 2}$ from $f(\bm\theta, \sigma^2 | \bm{A})$, deterministically constructing $\hat\bm{Z}_i^\ast \equiv \mathbb{E}[\bm{Z}_i | \bm\theta^\ast, \sigma^{\ast 2}, \bm{A}_i] = \bm\theta^\ast (\bm\theta^{\ast \top} \bm\theta^\ast + \sigma^{\ast 2} \bm{I})^{-1} \bm{A}_i$, sampling $\bm{s}_i^\ast$ from $f(\bm{s}_i | \bm\theta^\ast, \sigma^{\ast 2}) = \phi\left( \bm{s}_i ; ~ \bm{0}, ~ \bm{I} - \bm\theta^\ast (\bm\theta^{\ast \top} \bm\theta^\ast + \sigma^{\ast 2} \bm{I})^{-1} \bm\theta^{\ast \top} \right) $, and deterministically taking $\bm{z}_i^\ast = \hat\bm{Z}_i^\ast + \bm{s}_i^\ast$. We can then rewrite \begin{align*} \left[ \hat\bm\beta^{\mathrm{pm} \top}, \hat\bm\gamma^{\mathrm{pm} \top} \right]^\top &= \int f(\bm\theta^\ast, \sigma^{\ast 2}, \bm{s}^\ast | \bm{A}) \ \mathbb{E}\left\{ \left[ \bm\beta^\top, \bm\gamma^\top \right]^\top \big| \ \bm{Y}, \bm{A}, \bm\theta^\ast, \sigma^{\ast 2}, \bm{s}^\ast \right\} \dd{\bm\theta^\ast} \dd{\sigma^{\ast 2}} \dd{\bm{s}^\ast} \end{align*} where \begin{align} &\mathbb{E}\left\{ \left[ \bm\beta^{\ast \top}, \bm\gamma^{\ast \top} \right]^\top \big| \ \bm{Y}, \bm{A}, \bm\theta^\ast, \sigma^{\ast 2}, \bm{s}^\ast \right\} \nonumber \\ &\qquad= \left( \left[ \bm{A},\ (\hat\bm{Z}^\ast + \bm{s}^\ast) \right]^\top \left[ \bm{A},\ (\hat\bm{Z}^\ast + \bm{s}^\ast) \right] \right)^{-1} \left[ \bm{A},\ (\hat\bm{Z}^\ast + \bm{s}^\ast) \right]^\top \bm{Y}. \label{e:noise_posterior_deconfounder_conditional_estimate} \end{align} Note that the posterior $f(\bm\theta, \sigma^2 | \bm{A})$ concentrates on true $\sigma^2$ and $\bm\theta$ (up to a rotation). Thus, candidate $\bm\theta^\ast$ and $\sigma^{\ast 2}$ values that fail to satisfy $\bm\theta^{\ast \top} \bm\theta^\ast = \bm\theta^\top \bm\theta$ and $\sigma^{\ast 2} = \sigma^2$ grow vanishingly unlikely as $n$ grows large. 
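This sampling scheme is straightforward to mimic in simulation. The sketch below (all dimensions, parameter values, and the seed are illustrative assumptions; posterior concentration is imposed directly by setting $\bm\theta^\ast = \bm\theta$ and $\sigma^{\ast 2} = \sigma^2$) draws a single $\bm{z}^\ast = \hat\bm{Z}^\ast + \bm{s}^\ast$, runs the conditional regression of $\bm{Y}$ on $[\bm{A}, \bm{z}^\ast]$, and confirms that the resulting estimate of $\bm\beta$ is shifted by approximately $(\bm\theta^\top \bm\theta + \sigma^2 \bm{I})^{-1} \bm\theta^\top \bm\gamma$, the asymptotic bias stated in Proposition~\ref{prop:bias_noise_posterior}.

```python
# Sketch: one conditional draw of the posterior-mean deconfounder, assuming
# the PPCA posterior has concentrated on the true (theta, sigma^2).
# All dimensions and parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 20_000, 4, 2
sigma, omega = 1.0, 0.5

theta = rng.standard_normal((k, m))      # factor loadings
beta = rng.standard_normal(m)            # treatment effects
gamma = np.full(k, 0.5)                  # confounding effects

Z = rng.standard_normal((n, k))
A = Z @ theta + sigma * rng.standard_normal((n, m))
Y = A @ beta + Z @ gamma + omega * rng.standard_normal(n)

# z* = E[Z | A] + s*, with s* drawn from the PPCA posterior covariance.
# By the push-through identity, theta (theta' theta + sigma^2 I)^{-1}
# equals (theta theta' + sigma^2 I)^{-1} theta.
P = np.linalg.inv(theta @ theta.T + sigma**2 * np.eye(k)) @ theta  # k x m
Z_hat = A @ P.T                          # posterior mean of Z given A
S_cov = np.eye(k) - P @ theta.T          # posterior covariance of Z_i
s = rng.multivariate_normal(np.zeros(k), S_cov, size=n)
z_star = Z_hat + s

# Conditional regression of Y on [A, z*]; the posterior noise s* breaks
# the perfect collinearity between z* and A.
coef, *_ = np.linalg.lstsq(np.hstack([A, z_star]), Y, rcond=None)
beta_hat = coef[:m]

# Compare against the asymptotic bias from the proposition.
bias = np.linalg.inv(theta.T @ theta + sigma**2 * np.eye(m)) @ theta.T @ gamma
assert np.max(np.abs(beta_hat - (beta + bias))) < 0.1
```

Because the realized bias depends only on $\bm\theta^{\ast\top}\bm\theta^\ast$, repeating the check with a rotated $\bm\theta^\ast$ leaves the result unchanged, consistent with the constancy argument developed below.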
We examine the asymptotic behavior of the conditional estimator, \eqref{e:noise_posterior_deconfounder_conditional_estimate}, in this region and show that the bias is constant. Thus, the estimator remains asymptotically biased after integrating over all possible rotations of $\bm\theta^\ast$. After subsetting \eqref{e:noise_posterior_deconfounder_conditional_estimate} to the treatment effects, the conditional estimator can be rewritten as $ \hat\bm\beta^\ast \equiv \mathbb{E}\left[ \bm\beta^\ast | \ \bm{Y}, \bm{A}, \bm\theta^\ast, \sigma^{\ast 2}, \bm{s}^\ast \right] = (\bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{A})^{-1} \bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{Y} $, where $\bm{M}^{\mathrm{pm}\ast}$ denotes the conditional annihilator, $\bm{I} - (\hat\bm{Z}^\ast + \bm{s}^\ast) \left[ (\hat\bm{Z}^\ast + \bm{s}^\ast)^\top (\hat\bm{Z}^\ast + \bm{s}^\ast) \right]^{-1} (\hat\bm{Z}^\ast + \bm{s}^\ast)^\top$. Note that \begin{align*} \plim_{n \to \infty} \hat\bm\beta^\ast &= \plim_{n \to \infty} (\bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{A})^{-1} \bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{Y} \\ &= \plim_{n \to \infty} (\bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{A})^{-1} \bm{A}^\top \bm{M}^{\mathrm{pm}\ast} (\bm{A} \bm\beta + \bm{Z} \bm\gamma + \bm\epsilon) \\ &= \bm\beta + \plim_{n \to \infty} (\bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{A})^{-1} \bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{Z} \bm\gamma . \end{align*} for any $\bm\theta^\ast$ and $\sigma^{\ast 2}$. We will proceed by first examining $\plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{A}$ and then $\plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{Z}$. 
\begin{align*} \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{A} &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \left\{ \bm{I} - (\hat\bm{Z}^\ast + \bm{s}^\ast) \left[ (\hat\bm{Z}^\ast + \bm{s}^\ast)^\top (\hat\bm{Z}^\ast + \bm{s}^\ast) \right]^{-1} (\hat\bm{Z}^\ast + \bm{s}^\ast)^\top \right\} \bm{A} \\ &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \left[ \bm{I} - \frac{1}{n} (\hat\bm{Z}^\ast + \bm{s}^\ast) (\hat\bm{Z}^\ast + \bm{s}^\ast)^\top \right] \bm{A} \\ &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{A} - \left( \frac{1}{n} \bm{A}^\top \hat\bm{Z}^\ast \right) \left( \frac{1}{n} \hat\bm{Z}^{\ast \top} \bm{A} \right) \\ &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{A} - (\bm\theta^\top \bm\theta + \sigma^2 \bm{I}) (\bm\theta^{\ast \top} \bm\theta^\ast + \sigma^{\ast 2} \bm{I})^{-1} \bm\theta^{\ast \top} \bm\theta^\ast \\ & \qquad \qquad \qquad \qquad \cdot (\bm\theta^{\ast \top} \bm\theta^\ast + \sigma^{\ast 2} \bm{I})^{-1} (\bm\theta^\top \bm\theta + \sigma^2 \bm{I}) \\ &= \bm\theta^\top \bm\theta + \sigma^2 \bm{I} - \bm\theta^{\ast\top} \bm\theta^\ast \\ &= \sigma^2 \bm{I} \\ \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \bm{M}^{\mathrm{pm}\ast} \bm{Z} &= \plim_{n \to \infty} \frac{1}{n} \bm{A}^\top \left\{ \bm{I} - (\hat\bm{Z}^\ast + \bm{s}^\ast) \left[ (\hat\bm{Z}^\ast + \bm{s}^\ast)^\top (\hat\bm{Z}^\ast + \bm{s}^\ast) \right]^{-1} (\hat\bm{Z}^\ast + \bm{s}^\ast)^\top \right\} \bm{Z} \\ &= \plim_{n \to \infty} \frac{1}{n} ( \bm\theta^\top \bm{Z}^\top + \bm\nu^\top ) \bm{Z} - (\bm\theta^\top \bm\theta + \sigma^2 \bm{I}) (\bm\theta^{\ast \top} \bm\theta^\ast + \sigma^{\ast 2} \bm{I})^{-1} \bm\theta^{\ast \top} \left( \frac{1}{n} \hat\bm{Z}^{\ast\top} \bm{Z} \right) \\ &= \plim_{n \to \infty} \bm\theta^\top - \bm\theta^{\ast \top} \left( \frac{1}{n} \bm\theta^\ast (\bm\theta^{\ast \top} \bm\theta^\ast + \sigma^{\ast 2} \bm{I})^{-1} \bm{A}^\top \bm{Z} \right) \\ &= \sigma^2 (\bm\theta^\top \bm\theta + \sigma^2
\bm{I})^{-1} \bm\theta^\top \end{align*} Note that both expressions depend only on $\bm\theta^{\ast \top} \bm\theta^\ast$, not $\bm\theta^\ast$ alone. Thus, the bias is constant over the entire asymptotic posterior (i.e., all rotations) of $\bm\theta^\ast$. \begin{align*} \plim_{n \to \infty} \hat\bm\beta^\mathrm{pm} - \bm\beta &= \plim_{n \to \infty} \int f(\bm\theta^\ast, \sigma^{\ast 2}, \bm{s}^\ast | \bm{A}) \ \mathbb{E}\left[ \bm\beta^\ast - \bm\beta | \ \bm{Y}, \bm{A}, \bm\theta^\ast, \sigma^{\ast 2}, \bm{s}^\ast \right] \dd{\bm\theta^\ast} \dd{\sigma^{\ast 2}} \dd{\bm{s}^\ast} \\ &= (\bm\theta^\top \bm\theta + \sigma^2 \bm{I})^{-1} \bm\theta^\top \bm\gamma \intertext{and under strong infinite confounding,} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{pm} - \bm\beta &= \bm{0} \qed \end{align*} \subsection{Bias of the Subset Deconfounder} \label{a:bias_subset} For convenience, we reiterate the data-generating process and subset deconfounder estimation procedure here. We will suppose, without loss of generality, that the $m$ causes, $\bm{A}$, are divided into $m_F$ focal causes of interest, the column subset $\bm{A}_F$, and $m_N$ nonfocal causes, $\bm{A}_N$. As before, we consider $n$ observations drawn i.i.d. as follows.
\begin{align*} \underset{n \times k}{\bm{Z}} &\sim \mathcal{N}(\bm{0}, \bm{I}) \\ \underset{n \times m_F}{\bm\nu_F} &\sim \mathcal{N}(\bm{0}, \sigma^2 \bm{I}) \\ \underset{n \times m_N}{\bm\nu_N} &\sim \mathcal{N}(\bm{0}, \sigma^2 \bm{I}) \\ \underset{n \times m_F}{\bm{A}_F} &= \bm{Z} \bm\theta_F + \bm\nu_F \\ \underset{n \times m_N}{\bm{A}_N} &= \bm{Z} \bm\theta_N + \bm\nu_N \\ \underset{n \times 1}{\bm\epsilon} &\sim \mathcal{N}(\bm{0}, \omega^2) \\ \underset{n \times 1}{\bm{Y}} &= \bm{A}_F \bm\beta_F + \bm{A}_N \bm\beta_N + \bm{Z} \bm\gamma + \bm\epsilon \end{align*} The subset deconfounder estimator (1) takes the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$; (2) extracts the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$ and accompanying $\hat\bm\theta \equiv \frac{1}{\sqrt{n}} \bm{D}_{1:k} \bm{V}_{1:k}^\top$; and (3) estimates the focal effects by computing \begin{align*} \left[\begin{array}{l} \hat\bm\beta_F^\mathrm{subset} \\ \hat\bm\gamma^\mathrm{subset} \end{array} \right] &\equiv \left( \left[ \bm{A}_F, \hat\bm{Z} \right]^\top \left[ \bm{A}_F, \hat\bm{Z} \right] \right)^{-1} \left[ \bm{A}_F, \hat\bm{Z} \right]^\top \bm{Y} \end{align*} and discarding $\hat\bm\gamma^\mathrm{subset}$. We now restate Proposition~\ref{prop:bias_subset} before proceeding to the proof. \begin{mdframed} \textbf{Proposition~\ref{prop:bias_subset}.} (Asymptotic Bias of the Subset Deconfounder.) The subset deconfounder estimator, based on Theorem 7 from WB, is \begin{align} \left[\begin{array}{l} \hat\bm\beta_F^\mathrm{subset} \\ \hat\bm\gamma^\mathrm{subset} \end{array}\right] &\equiv \left( \left[ \bm{A}_F, \hat\bm{Z} \right]^\top \left[ \bm{A}_F, \hat\bm{Z} \right] \right)^{-1} \left[ \bm{A}_F, \hat\bm{Z} \right]^\top \bm{Y}. \end{align} where the column subsets $\bm{A}_F$ and $\bm{A}_N$ respectively partition $\bm{A}$ into a finite number of focal causes of interest and non-focal causes. 
The substitute confounder, $\hat\bm{Z}$, is obtained by taking the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$ and extracting the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$. Under the linear-linear model, the asymptotic bias of this estimator is given by \begin{align*} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{subset} - \bm\beta_F &= -\left( \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right)^{-1} \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N, \\ \intertext{ with $\bm\theta_F$ and $\bm\theta_N$ indicating the column subsets of $\bm\theta$ corresponding to $\bm{A}_F$ and $\bm{A}_N$, respectively. The subset deconfounder is unbiased for $\bm\beta_F$ (i) if $\bm\theta_F = \bm{0}$, (ii) if $\lim_{m \rightarrow \infty} \bm\theta_N \bm\beta_N = \bm{0}$ and $\lim_{m \rightarrow \infty} \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right]^{-1}$ is convergent, or (iii) if both strong infinite confounding holds and $(\bm\theta \bm\theta^\top)^{-1} \bm\theta_{N} \bm\beta_{N}$ goes to $\bm{0}$ as $m \rightarrow \infty$. If one of these additional conditions holds, } \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{subset} - \bm\beta_F &= \bm{0} \end{align*} \end{mdframed} \textit{Proof of Proposition~\ref{prop:bias_subset}.} By the Frisch-Waugh-Lovell theorem, $\hat\bm\beta_F^\mathrm{subset}$ can be re-expressed in terms of the portion of $\bm{A}_F$ not explained by $\hat\bm{Z}$. We denote the residualized focal treatments as $\tilde\bm{A}_F^\mathrm{subset} = \bm{A}_F - \hat\bm{A}_F^\mathrm{subset} = \bm{A}_F - \hat\bm{Z} \hat\bm\theta_F = \bm{U}_{(k+1):m} \bm{D}_{(k+1):m} \bm{V}_{(k+1):m,F}^\top$.
The subset deconfounder estimator is then rewritten as follows: \begin{align*} \hat{\bm\beta}_F^\mathrm{subset} &= \left( \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \tilde\bm{A}_F^\mathrm{subset} \right)^{-1} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \bm{Y} \end{align*} We now characterize the asymptotic bias of this estimator by examining the behavior of $\frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \tilde\bm{A}_F^\mathrm{subset}$ and $\frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \bm{Y}$ in turn. Beginning with the residual variance of the focal causes, \begin{align} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \tilde\bm{A}_F^\mathrm{subset} &= \frac{1}{n} \left( \bm{A}_F^\top - \hat\bm{A}_F^{\mathrm{subset} \top} \right) \left( \bm{A}_F - \hat\bm{A}_F^\mathrm{subset} \right) \nonumber \\ &= \frac{1}{n} \left( \bm{A}_F^\top \bm{A}_F + \hat\bm{A}_F^{\mathrm{subset} \top} \hat\bm{A}_F^\mathrm{subset} - \bm{A}_F^\top \hat\bm{A}_F^\mathrm{subset} - \hat\bm{A}_F^{\mathrm{subset} \top} \bm{A}_F \right) \nonumber \\ &= \frac{1}{n} \left( \bm\theta_F^\top \bm{Z}^\top + \bm\nu_F^\top \right) \left( \bm{Z} \bm\theta_F + \bm\nu_F \right) + \frac{1}{n} \hat\bm\theta_F^\top \hat\bm{Z}^\top \hat\bm{Z} \hat\bm\theta_F \nonumber \\ &\qquad - \frac{1}{n} \left( \bm{V}_{1:k,F} \bm{D}_{1:k} \bm{U}_{1:k}^\top + \bm{V}_{(k+1):m,F} \bm{D}_{(k+1):m} \bm{U}_{(k+1):m}^\top \right) \bm{U}_{1:k} \bm{D}_{1:k} \bm{V}_{1:k,F}^\top \nonumber \\ &\qquad - \frac{1}{n} \bm{V}_{1:k,F} \bm{D}_{1:k} \bm{U}_{1:k}^\top \left( \bm{U}_{1:k} \bm{D}_{1:k} \bm{V}_{1:k,F}^\top + \bm{U}_{(k+1):m} \bm{D}_{(k+1):m} \bm{V}_{(k+1):m,F}^\top \right) \nonumber \\ &= \frac{1}{n} \left( \bm\theta_F^\top \bm{Z}^\top + \bm\nu_F^\top \right) \left( \bm{Z} \bm\theta_F + \bm\nu_F \right) + \hat\bm\theta_F^\top \hat\bm\theta_F - \frac{2}{n} \bm{V}_{1:k,F} \bm{D}_{1:k}^2 \bm{V}_{1:k,F}^\top \end{align} Taking limits, \begin{align} \plim_{n \to \infty} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top}
\tilde\bm{A}_F^\mathrm{subset} &= \bm\theta_F^\top \bm\theta_F + \sigma^2 \bm{I} - \hat\bm\theta_F^\top \hat\bm\theta_F \nonumber \\ &= \sigma^2 \bm{I} - \sigma^2 \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \quad \text{by Lemma~\ref{l:residual_dependence}, and} \label{e:subset_deconfounder_denominator_large_n_small_m} \\ \lim_{m \to \infty} \plim_{n \to \infty} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \tilde\bm{A}_F^\mathrm{subset} &= \sigma^2 \bm{I} \quad \text{under strong infinite confounding.} \label{e:subset_deconfounder_denominator_large_n_large_m} \end{align} Turning to the residual covariance between the focal causes and the outcome, \begin{align} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \bm{Y} &= \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \left( \bm{A}_F \bm\beta_F + \bm{A}_N \bm\beta_N + \bm{Z} \bm\gamma + \bm\epsilon \right) \nonumber \\ &= \frac{1}{n} \left[ \left(\bm{A}_F^\top - \hat\bm{A}_F^{\mathrm{subset} \top}\right) \bm{A}_F \bm\beta_F + \tilde\bm{A}_F^{\mathrm{subset} \top} \bm{A}_N \bm\beta_N + \tilde\bm{A}_F^{\mathrm{subset} \top} \bm{Z} \bm\gamma + \tilde\bm{A}_F^{\mathrm{subset} \top} \bm\epsilon \right] \nonumber \\ &= \frac{1}{n} (\bm{A}_F^\top - \hat\bm{A}_F^{\mathrm{subset} \top}) \bm{A}_F \bm\beta_F + \frac{1}{n} \bm{V}_{(k+1):m,F} \bm{D}_{(k+1):m} \bm{U}_{(k+1):m}^\top \bm{U} \bm{D} \bm{V}_{N}^\top \bm\beta_N \nonumber \\ &\qquad + \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \bm{Z} \bm\gamma + \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \bm\epsilon \nonumber \\ &= \frac{1}{n} (\bm{A}_F^\top - \hat\bm{A}_F^{\mathrm{subset} \top}) \bm{A}_F \bm\beta_F + \frac{1}{n} \bm{V}_{(k+1):m,F} \bm{D}_{(k+1):m}^2 \bm{V}_{(k+1):m, N}^\top \bm\beta_N \nonumber \\ &\qquad + \frac{1}{n} ( \bm\theta_F^\top \bm{Z}^\top + \bm\nu_F^\top - \hat\bm\theta_F^\top \hat\bm{Z}^\top) \bm{Z} \bm\gamma +
\frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \bm\epsilon \nonumber \end{align} We will proceed by reducing $(\bm{A}_F^\top - \hat\bm{A}_F^{\mathrm{subset} \top}) \bm{A}_F$ as above, substituting \begin{align*} \hat\bm{Z}^\top \bm{Z} = \big[ (\hat\bm\theta \hat\bm\theta^\top)^{-1} \hat\bm\theta \bm{A}^\top \big] \big[ (\bm{A} - \bm\nu) \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \big] \end{align*} and invoking Lemmas~\ref{l:theta_hat}~and~\ref{l:residual_dependence}. \begin{align} \plim_{n \to \infty} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \bm{Y} &= \sigma^2 \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right] \bm\beta_F - \sigma^2 \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N \nonumber \\ &\qquad + \bm\theta_F^\top \bm\gamma - \plim_{n \to \infty} \frac{1}{n} \hat\bm\theta_F^\top (\hat\bm\theta \hat\bm\theta^\top)^{-1} \hat\bm\theta \bm{A}^\top (\bm{A} - \bm\nu) \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma \nonumber \\ &= \sigma^2 \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right] \bm\beta_F - \sigma^2 \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N \nonumber \\ &\qquad + \bm\theta_F^\top \bm\gamma - \hat\bm\theta_F^\top (\hat\bm\theta \hat\bm\theta^\top)^{-1} \hat\bm\theta \left( \bm\theta^\top \bm\theta + \sigma^2 \bm{I} - \sigma^2 \bm{I} \right) \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma \nonumber \\ &= \sigma^2 \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right] \bm\beta_F - \sigma^2 \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N \nonumber \\ &\qquad + \bm\theta_F^\top \bm\gamma - \hat\bm\theta_F^\top (\hat\bm\theta \hat\bm\theta^\top)^{-1} \hat\bm\theta \bm\theta^\top \bm\gamma \nonumber \\ &= \sigma^2 \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right] \bm\beta_F - \sigma^2 \bm\theta_F^\top (\bm\theta 
\bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N \nonumber \\ &\qquad + \bm\theta_F^\top \bm\gamma - \hat\bm\theta_F^\top (\hat\bm\theta \hat\bm\theta^\top)^{-1} \hat\bm\theta \hat\bm\theta^\top \left( \bm\Lambda_{1:k} + \sigma^2 \bm{I} \right)^{-\frac{1}{2}} \bm\Lambda_{1:k}^{\frac{1}{2}} \bm{R}^\top \bm\gamma \nonumber \\ &= \sigma^2 \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right] \bm\beta_F - \sigma^2 \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N \nonumber \\ &\qquad + \bm\theta_F^\top \bm\gamma - \bm\theta_F^\top \bm{R} \left( \bm\Lambda_{1:k} + \sigma^2 \bm{I} \right)^{\frac{1}{2}} \bm\Lambda_{1:k}^{-\frac{1}{2}} \left( \bm\Lambda_{1:k} + \sigma^2 \bm{I} \right)^{-\frac{1}{2}} \bm\Lambda_{1:k}^{\frac{1}{2}} \bm{R}^\top \bm\gamma \nonumber \\ &= \sigma^2 \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right] \bm\beta_F - \sigma^2 \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N \label{e:subset_deconfounder_numerator_large_n_small_m} \\ \intertext{and under strong infinite confounding,} \lim_{m \to \infty} \plim_{n \to \infty} \frac{1}{n} \tilde\bm{A}_F^{\mathrm{subset} \top} \bm{Y} &= \sigma^2 \bm\beta_F . \label{e:subset_deconfounder_numerator_large_n_large_m} \end{align} Combining \eqref{e:subset_deconfounder_denominator_large_n_small_m}~and~\eqref{e:subset_deconfounder_numerator_large_n_small_m}, \begin{align*} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{subset} &= \bm\beta_F - \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right]^{-1} \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N . \intertext{Consider the additional conditions. (i) If $\bm\theta_{F} = \bm{0}$ then convergence is immediate.
(ii) If $\lim_{m \rightarrow \infty} \bm\theta_{N} \bm\beta_{N} = \bm{0}$ and $\lim_{m \rightarrow \infty} \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right]^{-1}$ is convergent, then $\lim_{m \rightarrow \infty} (\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N = \bm{0}$, so the product of the limits is $\bm{0}$. (iii) If $\lim_{m \rightarrow \infty} \left(\bm\theta \bm\theta^{\top} \right)^{-1} \bm\theta_{N} \bm\beta_{N} = \bm{0} $ and strong infinite confounding holds, then $\lim_{m \rightarrow \infty} \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right]^{-1} = \bm{I}$, so the bias term also goes to zero. Therefore, if one of these conditions holds, combining Equations \eqref{e:subset_deconfounder_denominator_large_n_large_m}~and~\eqref{e:subset_deconfounder_numerator_large_n_large_m} yields} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{subset} &= \bm\beta_F . \qed \end{align*} Proposition \ref{prop:bias_naive_focal} demonstrates that the na\"ive regression estimator is an unbiased estimator of the focal treatment effects; it is a generalization of Proposition~\ref{prop:bias_naive}. The proof is given in Section~\ref{a:bias_naive}. \subsection{Subset Deconfounder Requires Assumptions about Treatment Effects} \label{a:subset_add} To see why strong infinite confounding is insufficient, consider a simple example. Using the linear-linear data-generating process, consider the case of $k = 1$. For the sake of this example, we will suppose that, for each treatment $j$, $\theta_{j} = \bar{\theta}$ and $\beta_{j} = \bar{\beta}$. This clearly satisfies strong infinite confounding: PCA will be a consistent estimator of the substitute confounder, and na\"ive regression will be a consistent estimator of the treatment effects. But this is not the case for the subset deconfounder.
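The failure can be seen numerically. The sketch below (with hypothetical values $\bar\theta = \bar\beta = 0.5$, $\gamma = 1$, and $m = 40$) implements the subset deconfounder exactly as defined above, PCA on all causes followed by a regression of the outcome on the focal cause and the substitute confounder, and contrasts it with na\"ive regression on all causes.

```python
# Simulation of the symmetric example: theta_j = beta_j = 0.5 for every
# treatment (hypothetical values). The subset deconfounder's estimate for
# the focal treatment stays far from the truth; naive regression on all
# treatments is approximately correct.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100_000, 40, 1
theta_bar, beta_bar, gamma = 0.5, 0.5, 1.0

Z = rng.standard_normal((n, k))
A = Z @ np.full((k, m), theta_bar) + rng.standard_normal((n, m))
Y = A @ np.full(m, beta_bar) + Z @ np.full(k, gamma) + rng.standard_normal(n)

# Substitute confounder: top-k left singular vectors of A, scaled by sqrt(n).
U = np.linalg.svd(A, full_matrices=False)[0]
Z_hat = np.sqrt(n) * U[:, :k]

# Subset deconfounder for the focal treatment: regress Y on [A_F, Z_hat].
beta_subset = np.linalg.lstsq(np.column_stack([A[:, 0], Z_hat]),
                              Y, rcond=None)[0][0]

# Naive regression on all treatments.
beta_naive = np.linalg.lstsq(A, Y, rcond=None)[0][0]

print(beta_subset, beta_naive)  # subset estimate near 0; truth is 0.5
```

Increasing $m$ does not rescue the subset deconfounder in this design: its estimate remains near zero while the true effect is $\bar\beta$.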
Using Proposition~\ref{prop:bias_subset}, the bias for an arbitrary focal treatment $j$ as $m \rightarrow \infty$ is given by: \begin{align} \lim_{m \rightarrow \infty} \plim_{n \rightarrow \infty} \hat\beta_j - \beta_j &= -\lim_{m \rightarrow \infty} \left(1 - \frac{\bar\theta^2}{m \bar\theta^2}\right)^{-1} \frac{(m-1) \bar{\theta}^2 \bar{\beta}}{m \bar\theta^2} \nonumber \\ &= -\bar{\beta} \nonumber \end{align} So long as $\bar{\beta} \neq 0$, the bias is non-zero, regardless of how many treatments are present. The intuition is that adding more treatments has two countervailing effects on the estimator. Additional treatments yield a better estimate of the substitute confounder, which reduces the correlation between the focal and non-focal treatments and, on its own, would reduce bias. But at the same time, each added treatment introduces additional omitted variable bias. That additional omitted variable bias prevents the subset deconfounder's bias from decreasing as treatments are added. \paragraph{Subset Deconfounder as Regularization} Connections between the subset deconfounder and na\"ive regression can be seen through the following straightforward argument. Consider first na\"ive regression. By the Frisch-Waugh-Lovell theorem, for any treatment $\bm{A}_j$, its estimated effect is related to the residualized treatment, $\tilde{\bm{A}}^\mathrm{na\ddot{\i}ve}_j = \bm{A}_j - \hat{\bm{A}}^\mathrm{na\ddot{\i}ve}_j $, where $\hat{\bm{A}}^\mathrm{na\ddot{\i}ve}_j = \bm{A}_{\setminus j} \left( \bm{A}_{\setminus j}^{\top} \bm{A}_{\setminus j} \right)^{-1} \bm{A}_{\setminus j}^{\top} \bm{A}_j$ is the part of $\bm{A}_j$ that can be predicted from the other treatments, $\bm{A}_{\setminus j}$. Specifically, $\hat\beta_j^{\text{na\"ive}} = \frac{\mathrm{Cov}(\bm{Y}, \tilde{\bm{A}}^\mathrm{na\ddot{\i}ve}_j) }{ \mathrm{Var}(\tilde{\bm{A}}^\mathrm{na\ddot{\i}ve}_j )}$.
Denoting the SVD of $\bm{A}_{\setminus j}$ as $\bm{U}_{\setminus j}\bm{D}_{\setminus j} \bm{V}_{\setminus j}^{\top}$, then $\hat{\bm{A}}^\mathrm{na\ddot{\i}ve}_j = \bm{U}_{\setminus j} \bm{U}_{\setminus j}^{\top} \bm{A}_j $. As $m, n \rightarrow \infty$, under linear-linear confounding this approaches $\bm{U} \bm{U}^{\top} \bm{A}_j$. Now consider the subset deconfounder. Also by Frisch-Waugh-Lovell, for any single treatment $\bm{A}_j$, its estimated effect is $\hat\beta_j^{\text{subset}} = \frac{\mathrm{Cov}(\bm{Y}, \tilde{\bm{A}}_j^{\text{subset}}) }{ \mathrm{Var}(\tilde{\bm{A}}_j^{\text{subset}} )}$ and $\tilde{\bm{A}}_j^{\text{subset}} = \bm{A}_j - \hat{\bm{A}}_j^{\text{subset}} $. Denoting the SVD of $\bm{A}$ as $\bm{A} = \bm{U}\bm{D} \bm{V}^{\top}$, it can be seen that $\hat{\bm{A}}_j^{\text{subset}} = \bm{U}_{1:k} \bm{U}_{1:k}^{\top} \bm{A}_j$. This makes clear that the na\"ive regression is adjusting along the same eigenvectors as the subset deconfounder. Further, it shows that the subset deconfounder is performing the same regression as the na\"ive regression but with a particular form of regularization. Specifically, the deconfounder imposes no penalty on the first $k$ eigenvectors, but then regularizes by suppressing the influence of dimensions $k+1$ to $m$. \subsection{Convergence of Deconfounder and Na\"ive Estimators} \label{a:deconfounder_naive_equality} In this section, we extend our previous results on asymptotic equivalence between the deconfounder and na\"ive analyses to a broad class of nonlinear-nonlinear data-generating processes. We consider factor models in which the distributions of $\bm{Z}_i$ and $\bm{A}_i | \bm{Z}_i$ have continuous density. Following the deconfounder papers, we analyze the case in which pinpointedness holds; this requires strong infinite confounding and infinite $m$. We also follow the deconfounder papers in restricting attention to outcome models with constant treatment effects and finite variance.
That is, we study outcome models satisfying $\mathbb{E}[ Y_i | \bm{A}, \bm{Z} ] = \bm{A}_i^\top \bm\beta + g_Y(\bm{Z}_i)$, allowing for arbitrarily complex nonlinear confounding. This family is more restrictive than the outcome models considered in Section~\ref{a:deconfounder_inconsistency_nonlinear} but nevertheless nests all data-generating processes studied in prior deconfounder papers and our paper. For convenience, we restate Theorem~\ref{thm:deconfounder_naive_equality} here. \begin{mdframed} \textbf{Theorem~\ref{thm:deconfounder_naive_equality}.} \it (Deconfounder-Na\"ive Convergence under Strong Infinite Confounding.) Consider all data-generating processes in which (i) treatments are drawn from a factor model with continuous density that is a function of $\bm{Z}$, (ii) $\bm{Z}$ is pinpointed, and (iii) the outcome model contains constant treatment effects and treatment assignment is ignorable nonparametrically given $\bm{Z}$. Any consistent deconfounder converges to a na\"ive estimator for any finite subset of treatment effects. \end{mdframed} \noindent \textit{Proof.} We begin with some preliminaries. As in Section~\ref{a:bias_naive}, we partition the $m$ causes, $\bm{A}_i$, into a finite number, $m_F$, of focal causes of interest, $\bm{A}_{i,F}$, and $m_N$ nonfocal causes, $\bm{A}_{i,N}$. Then the conditional expectation function of the outcome can be rewritten $\mathbb{E}[ Y_i | \bm{A}, \bm{Z} ] = \bm{A}_{i,F}^\top \bm\beta_F + \bm{A}_{i,N}^\top \bm\beta_N + g_Y(\bm{Z}_i)$. In what follows, we will also use the conditional expectation function $g_{\bm{A}}(\bm{z}) \equiv \mathbb{E}[ \bm{A}_i | \bm{Z}_i=\bm{z} ]$, as well as its partitioned counterparts, $g_{\bm{A}_F}(\bm{z}) \equiv \mathbb{E}[ \bm{A}_{i,F} | \bm{Z}_i=\bm{z} ]$ and $g_{\bm{A}_N}(\bm{z}) \equiv \mathbb{E}[ \bm{A}_{i,N} | \bm{Z}_i=\bm{z} ]$. Pinpointedness implies that $g_{\bm{A}}(\bm{z})$ must be invertible and consistently estimable; it also requires that both $n$ and $m$ go to infinity.
When these conditions hold, $\plim_{n \to \infty} \hat{g}_{\bm{A}}^{-1}(\bm{A}_i) = g_{\bm{A}}^{-1}(\bm{A}_i) = \bm{Z}_i$, up to symmetries such as rotation invariance. The deconfounder estimator then reduces to the partially linear regression \begin{align*} \left( \hat\bm\beta_F^\mathrm{deconf}, \hat\bm\beta_N^\mathrm{deconf}, \hat{g}_Y^\mathrm{deconf} \right) &= \argmin_{\bm\beta_F^\ast, \bm\beta_N^\ast, g_Y^\ast} \ \sum_{i=1}^n \left( Y_i - \bm{A}_{i,F}^\top \bm\beta_F^\ast - \bm{A}_{i,N}^\top \bm\beta_N^\ast - g_Y^\ast(\hat{g}_{\bm{A}}^{-1}(\bm{A}_i)) \right)^2 \end{align*} which is consistent for $\bm\beta_F$ \citep{robinson1988}. Now consider the following alternative estimator, the partially linear na\"ive regression \begin{align} \hat{h}_{\bm{A}_F}^\mathrm{na\ddot{\i}ve} &= \argmin_{h_{\bm{A}_F}^\ast} \sum_{i=1}^n \left\| \bm{A}_{i,F} - h_{\bm{A}_F}^\ast(\bm{A}_{i,N}) \right\|_F^2 \label{e:naive_nonlinear_af} \\ \hat{h}_Y^\mathrm{na\ddot{\i}ve} &= \argmin_{h_Y^\ast} \sum_{i=1}^n \left( Y_i - h_Y^\ast(\bm{A}_{i,N}) \right)^2 \label{e:naive_nonlinear_y} \\ \hat\bm\beta_F^\mathrm{na\ddot{\i}ve} &= \left( \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \tilde\bm{A}_F^\mathrm{na\ddot{\i}ve} \right)^{-1} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \tilde\bm{Y}^\mathrm{na\ddot{\i}ve} \label{e:naive_nonlinear} \end{align} where $\tilde\bm{A}_F^\mathrm{na\ddot{\i}ve}$ collects $\bm{A}_{i,F} - \hat{h}_{\bm{A}_F}^\mathrm{na\ddot{\i}ve}(\bm{A}_{i,N})$ and $\tilde\bm{Y}^\mathrm{na\ddot{\i}ve}$ collects $Y_i - \hat{h}_Y^\mathrm{na\ddot{\i}ve}(\bm{A}_{i,N})$. Like its linear-linear analogue, \eqref{e:naive}, this generalized na\"ive estimator models the outcome in terms of the treatments only, ignoring the existence of confounding. It can be seen that \eqref{e:naive_nonlinear_af} and \eqref{e:naive_nonlinear_y} flexibly estimate the conditional means of $\bm{A}_{i,F}$ and $Y_i$, respectively, given $\bm{A}_{i,N}$. We now note that because pinpointedness only holds with infinite $m$, it must also hold with $m - m_F$ treatments.
That is, if $g_{\bm{A}}^{-1}(\bm{A}_i) = \bm{Z}_i$, then also $g_{\bm{A}_N}^{-1}(\bm{A}_{i,N}) = \bm{Z}_i$. Next, because the conditional expectation $g_{\bm{A}_F}(\cdot)$ by definition minimizes mean squared prediction error for the focal treatments, $g_{\bm{A}_F}( g_{\bm{A}_N}^{-1}(\bm{A}_{i,N}) )$ is asymptotically the minimizer of \eqref{e:naive_nonlinear_af}. Similarly, it is easy to see that $\bm{A}_{i,N}^\top \bm\beta_N + g_Y(g_{\bm{A}_N}^{-1}(\bm{A}_{i,N}))$ asymptotically solves \eqref{e:naive_nonlinear_y}. As a result, \eqref{e:naive_nonlinear} identifies $\bm\beta_F$ using $\bm{A}_{i,F} - \mathbb{E}[\bm{A}_{i,F} | \bm{Z}_i]$, the component of the focal treatments that is uncontaminated by confounding. Thus, $\hat\bm\beta_F^\mathrm{deconf}$ and $\hat\bm\beta_F^\mathrm{na\ddot{\i}ve}$ both converge in probability to $\bm\beta_F$, and hence to each other. \qed \subsection{Inconsistency of the Deconfounder in Nonlinear Settings} \label{a:deconfounder_inconsistency_nonlinear} We now generalize our previous results to a broad class of nonlinear-nonlinear data-generating processes. We consider all factor models in which the distributions of $\bm{Z}_i$ and $\bm{A}_i | \bm{Z}_i$ have continuous density. We restrict our attention to the class of outcome models with additively separable confounding, a family that nests all data-generating processes studied in our paper and in applications of the deconfounder. That is, we study outcome models satisfying $\mathbb{E}[ Y_i | \bm{A}, \bm{Z} ] = f(\bm{A}_i) + g_Y(\bm{Z}_i)$, allowing for arbitrarily complex confounding and arbitrarily complex, interactive treatment effects. For convenience, we restate Theorem~\ref{thm:deconfounder_inconsistency_nonlinear} here. \begin{mdframed} \textbf{Theorem~\ref{thm:deconfounder_inconsistency_nonlinear}.} \it (Inconsistency of the Deconfounder in Nonlinear Settings.) Consider all data-generating processes in which a finite number of treatments are drawn from a factor model with continuous density.
The deconfounder is inconsistent for any outcome model with additively separable confounding. \end{mdframed} This result follows from a relatively simple premise, building on the intuition behind Lemma~\ref{l:pinpointing}. Because there is always noise in $\bm{A}_i$, the analyst can never recover the exact value of $\bm{Z}_i$ when there are a finite number of treatments. The error in $\hat\bm{Z}_i$ depends on $\bm{A}_i$, and the outcome $Y_i$ is in part dependent on this error (that is, $Y_i$ is dependent on $\bm{Z}_i$, which is only partially accounted for by $\hat\bm{Z}_i$). Therefore, an outcome analysis that neglects the unobserved mismeasurement, $\hat\bm{Z}_i - \bm{Z}_i$, will necessarily be affected by omitted variable bias. We proceed by cases, focusing primarily on a highly implausible best-case scenario. As we discuss below, alternate cases either (i) asymptotically almost never occur or (ii) lead trivially to inconsistency of the deconfounder. Thus, because the deconfounder is inconsistent even in this ideal setting, it is inconsistent in all cases. \noindent \textit{Proof.} First, consider the case in which $g_{\bm{A}}(\bm{z}) \equiv \mathbb{E}[ \bm{A}_i | \bm{Z}_i=\bm{z}]$ is invertible. (Invertibility of the conditional expectation function is a necessary condition for pinpointedness; in the case where it does not hold, $\bm{Z}_i$ cannot be learned, and inconsistency follows immediately.) In the invertible case, it is easy to see that the unobserved confounder could be recovered with $ g_{\bm{A}}^{-1}( \mathbb{E}[ \bm{A}_i | \bm{Z}_i ] ) = \bm{Z}_i$, if both $g_{\bm{A}}(\cdot)$ and the conditional expectation of the treatments (i.e., without noise) were known. However, the fundamental problem is that the analyst only observes the treatments for unit $i$, $\bm{A}_i$, including noise---its conditional expectation given the latent confounder, $\mathbb{E}[\bm{A}_i | \bm{Z}_i]$, is unknown because the true $\bm{Z}_i$ is unknown.
Let $h_{\bm{A}}(\bm{A}_i, \bm{A}_i - \mathbb{E}[ \bm{A}_i | \bm{Z}_i ])$ represent an unobservable clean-up function that corrects the resulting error when $\bm{A}_i$ is used instead of $\mathbb{E}[ \bm{A}_i | \bm{Z}_i ]$, so that $ h_{\bm{A}}(\bm{A}_i, \bm{A}_i - \mathbb{E}[ \bm{A}_i | \bm{Z}_i ]) + g_{\bm{A}}^{-1}( \bm{A}_i ) \ = \ \bm{Z}_i $. For concreteness, consider the linear-linear setting as an example: here, $ g_{\bm{A}}(\bm{Z}_i) = \bm{Z}_i^\top \bm\theta $, $ g_{\bm{A}}^{-1}( \bm{A}_i ) = \bm{A}_i^\top \bm\theta^\top (\bm\theta \bm\theta^\top )^{-1}$, and $h_{\bm{A}}(\bm{A}_i, \bm\nu_i) = -\bm\nu_i^\top \bm\theta^\top (\bm\theta \bm\theta^\top )^{-1}$. We next assume that factor-model parameters can be consistently estimated, setting aside likelihood invariance and other issues, so that $\plim_{n \to \infty} \hat{g}_{\bm{A}}(\cdot) = g_{\bm{A}}(\cdot)$. (In cases where these conditions do not hold, $\bm{Z}_i$ is again unidentified, and the deconfounder is again inconsistent.) The deconfounder procedure asks the analyst to compute $\hat\bm{Z}_i \equiv \hat{g}_{\bm{A}}^{-1}( \bm{A}_i )$. As $n$ grows large, the analyst is left with the unobserved error $\plim_{n \to \infty} \hat\bm{Z}_i - \bm{Z}_i = -h_{\bm{A}}(\bm{A}_i, \bm{A}_i - \mathbb{E}[ \bm{A}_i | \bm{Z}_i ])$. In other words, consistency of $\hat{g}_{\bm{A}}$ does not imply consistency of $\hat\bm{Z}_i$---even when factor model parameters are known perfectly, noise in $\bm{A}_i$ will almost surely deceive the factor model about the true value of $\bm{Z}_i$. We now turn to the outcome model, $\mathbb{E}[Y_i | \bm{A}_i, \bm{Z}_i] = f(\bm{A}_i) + g_Y(\bm{Z}_i)$. The deconfounder requires analysts to correct for confounding by estimating $g_Y(\cdot)$. Here, we consider three cases. The first is the knife-edge case in which errors in $\hat{g}_Y(\cdot)$ and $\hat\bm{Z}_i$ exactly offset one another, so that $\hat{g}_Y(\hat\bm{Z}_i) = g_Y(\bm{Z}_i)$. 
This occurs on a set of densities with measure zero, because as $n$ grows large the probability that all noise-induced errors in $\hat\bm{Z}$ can be rectified by a deterministic correction goes to zero. The second is where the confounding function is not perfectly recovered and errors do not cancel. Note that in practice, this scenario dominates; the confounding function generally cannot be estimated reliably because its inputs are only observed with measurement error. Here, deconfounder inconsistency again follows trivially. The third is yet another unrealistic, best-case scenario: the confounding function, $g_Y(\cdot)$, is perfectly recovered. Even in this third, ideal scenario, the deconfounder procedure introduces further unobservable error when substituting $\hat{g}_Y(\hat\bm{Z}_i) = g_Y(\hat\bm{Z}_i)$ instead of $g_Y(\bm{Z}_i)$. We now define a second clean-up function, $h_Y(\bm{Z}_i, \hat\bm{Z}_i - \bm{Z}_i)$, that ensures $\mathbb{E}[Y_i | \bm{A}_i, \bm{Z}_i] = f(\bm{A}_i) + g_Y(\hat\bm{Z}_i) + h_Y(\bm{Z}_i, \hat\bm{Z}_i - \bm{Z}_i)$. As a concrete example, in the linear-linear case, $g_Y(\hat\bm{Z}_i) = \hat\bm{Z}_i^\top \bm\gamma$, and $h_Y(\bm{Z}_i, \hat\bm{Z}_i - \bm{Z}_i) = (\bm{Z}_i - \hat\bm{Z}_i)^\top \bm\gamma$ corrects for mismeasurement of $\bm{Z}$. Finally, the deconfounder asks analysts to compute a regression after adjustment for the confounding function. Specifically, the deconfounder learns the relationship between $Y_i$ and $f(\bm{A}_i)$, seeking to correct for the estimated confounding, $\hat{g}_Y(\hat\bm{Z}_i)$. Even in this best-case scenario, which grants $\hat{g}_Y(\cdot) = g_Y(\cdot)$, this regression fails to account for the unobserved term $h_Y(\bm{Z}_i, h_{\bm{A}}(\bm{A}_i, \bm{A}_i - \mathbb{E}[\bm{A}_i | \bm{Z}_i]))$, which is associated with both $\bm{A}_i$ and $Y_i$. Thus, omitted-variable bias arises everywhere in the model space except when knife-edge conditions are satisfied.
As a concrete example of such a knife-edge condition, consider the partially linear constant-effects model $\mathbb{E}[ Y_i | \bm{A}, \bm{Z} ] = \bm{A}_i^\top \bm\beta + g_Y(\bm{Z}_i)$. Here, the deconfounder is inconsistent except in the special cases where either $\mathrm{Cov}\left(Y_i, h_Y(\bm{Z}_i, h_{\bm{A}}(\bm{A}_i, \bm{A}_i - \mathbb{E}[\bm{A}_i | \bm{Z}_i])) \right)$ or $\mathrm{Cov}\left(\bm{A}_i, h_Y(\bm{Z}_i, h_{\bm{A}}(\bm{A}_i, \bm{A}_i - \mathbb{E}[\bm{A}_i | \bm{Z}_i])) \right)$ equals zero. \qed \subsection{Actor Case Study Reveals Limitations of the Deconfounder} \label{s:actor} WB's case study investigates how the cast of a movie causally affects movie revenue. The deconfounder is applied to the TMDB 5,000 data set, estimating how much each of the $m=901$ actors affected the revenue of $n=2,828$ movies. WB present results from a full deconfounder in which substitute confounders are estimated using the leading $k=50$ dimensions of a Poisson matrix factorization (PMF) of the binary matrix of movie-actor appearance indicators. A linear regression of log revenue on actor appearance and substitute confounders is used to estimate what is described as the causal effect of the cast. \begin{table}[!ph] \caption{\textbf{The Deconfounder Estimates Implausible Effects for Actors.} Estimated causal effect of each actor's casting on movie revenue. Following WB, estimates are computed by linear regression of log revenue on actor indicators and additional covariates. Each row reports a different specification (for example, ``deconfounder'' rows each adjust for a 50-dimensional substitute confounder, and the ``controls'' row adjusts for budget, a multi-cause confounder). 
The top panel contains estimators that analyze all actors simultaneously, including the full deconfounder; the bottom panel contains estimators that analyze each actor $j$ in isolation, including the subset deconfounder and the univariate na\"ive estimator $\hat\beta_j = \mathrm{Cov}(\bm{A}_j, \bm{Y}) / \mathrm{Var}(\bm{A}_j)$. Two versions of each deconfounder estimator are used, one relying on a cached Poisson matrix factorization (PMF) provided by WB and another using a re-estimated PMF. For each estimator, the top five actors and associated estimates are presented in the form ``Actor ($\times e^{\hat\beta_j}$),'' indicating an estimate that Actor causally modifies revenue by a multiplicative factor of $e^{\hat\beta_j}$.} \label{t:actors} \begin{tabular}{p{1in} | p{.8in} | p{4in}} \hline \multicolumn{3}{c}{\bf Estimating all actor effects simultaneously (full deconfounder)} \\ \hline Na\"ive & Standard & \input{tables/actors_full_naive_expbeta_5.txt} \\ \hline Deconfounder & Cached PMF & \input{tables/actors_full_deconfwb_expbeta_5.txt} \\ \hline Deconfounder & Rerun PMF & \input{tables/actors_full_deconf_expbeta_5.txt} \\ \hline Controls & Adjusting for budget & \input{tables/actors_full_budget_expbeta_5.txt} \\ \hline \multicolumn{3}{c}{\bf Estimating effects one actor at a time (subset deconfounder)} \\ \hline Na\"ive & Univariate & \input{tables/actors_subset_naive_expbeta_5.txt} \\ \hline Deconfounder & Cached PMF & \input{tables/actors_subset_deconfwb_expbeta_5.txt} \\ \hline Deconfounder & Rerun PMF & \input{tables/actors_subset_deconf_expbeta_5.txt} \\ \hline \end{tabular} \end{table} We replicate this analysis in Table~\ref{t:actors}, using cached PMF output from WB to ensure that our conclusions are unaffected by random seed. 
The five largest estimates from this model as well as several alternatives are reported in Table~\ref{t:actors} (expanded results, including appearance-weighted log-scale coefficients,\footnote{WB rank actors by multiplying each actor's log-scale coefficients by their number of movie appearances. This transformation is difficult to interpret substantively but produces similar results.} are in Supplement~\ref{a:actors}). According to WB's results, the single most valuable actor is Stan Lee---whose appearances are cameos in movies based primarily on his Marvel comic books\footnote{And a 40-second appearance in ``Mallrats.''} totaling 200 seconds of screen time. These unreported estimates suggest that with his casting, Marvel Cinematic Universe (MCU) producers causally increased their movies' revenue by 831\%---more than nine times the box-office haul of their counterfactual Stan-less versions, a total of \$15.5 billion in additional earnings. The subset deconfounder's estimates are similarly implausible, suggesting that Jess Harnell causally increases a movie's revenue by 1,128\%. This is driven by his appearance as a voice actor in the high-budget ``Transformers'' series, as his credits are otherwise in peripheral roles not included in this data set, such as a supporting role in the animated series \textit{Doc McStuffins}. % WB's subset deconfounder suggests that his appearances collectively increased revenue by \$2.5 billion. The deconfounder produces implausible estimates because it fails to capture important multi-cause confounders. This is clearest when we explicitly adjust for a movie's budget---the quintessential multi-cause confounder, enabling the casting of big-name stars and also reflecting the studio's underlying belief in the viability of the film.\footnote{In \citet{WanBle19} and replication code generously shared with us, actor analyses did not condition on any observed covariates. 
After we shared our draft with Wang and Blei in July 2020, a reference implementation conditioning on budget and runtime was posted.} This simple adjustment produces dramatically different assessments of actor value that are far more reasonable in scale, though likely still overstated. The deconfounder claims to capture all multi-cause confounding---not only from budget but also genre, series, directors, writers, language, and release season. WB explicitly argue that including observed covariates with the deconfounder is not necessary---yet this example shows that it is.% \subsection{Strong Assumptions Rule Out Other Applications} \label{s:assumptions} The deconfounder's exceedingly strong assumptions make it unsuitable for many uses. Although there are other embedded assumptions---see \citet{ogburn2019comment} for more---we focus on four that rule out important applications, including the case study above. First, the deconfounder requires that treatments arise from a factor model with a low-dimensional confounder. Practically speaking, analysts must know enough about the functional form of this factor model to feasibly estimate it. Second, the deconfounder requires treatments to be independently drawn given $\bm{Z}$, ruling out settings where treatments cause other treatments. This implies that casting one actor cannot influence whether another actor is cast later. This alone excludes many realistic settings---except perhaps genetics, where many of these ideas originated. Third, pinpointing confounders with a factor model requires strong infinite confounding. In practice, this means analysts must record a very large number of treatments, which are contaminated by comparatively few confounders. In the actor setting, this would be violated by producers who regularly work with the same sets of actors. 
Furthermore, the mere fact that all movies have finite casts implies limits to the information learned about confounding and means that, by Theorem~\ref{thm:deconfounder_inconsistency_nonlinear}, the deconfounder will always be inconsistent. Fourth, even when a pinpointable factor model of the proper class exists, parametric assumptions such as separable confounding or constant treatment effects are used in many proofs, both here and in WB. Often these conditions help to address failures of positivity by leveraging functional form assumptions. Yet, particularly in social and medical problems, causal heterogeneity is the rule, not the exception. It is unknown how sensitive the na\"ive and deconfounder families of methods are to slight violations of these assumptions. Until there is a way to relax these assumptions or otherwise evaluate the severity of the consequences of violating them, we cannot recommend either the na\"ive regression or the deconfounder for real-world applications. \subsection{Takeaways} Assumptions used to prove deconfounder properties, like pinpointing, are extremely strong and unlikely to hold in real applications. While analysts cannot know whether the deconfounder estimates are accurate, results from the actor case study are highly implausible. Given the high-stakes nature of many proposed applications, we think a great deal more evidence is warranted before these methods are put into practice. \subsection{The Linear-Linear Model and Strong Infinite Confounding} Consider $n$ observations drawn i.i.d. 
from the following data-generating process: \begin{align} k \text{ unobserved confounders:}& &\bm{Z}_i &\sim \mathcal{N}(\bm{0}, \bm{I}); \label{e:confounder}\\ m \ge k \text{ observed treatments:}& &\bm{A}_i &\sim \mathcal{N}(\bm{Z}_i^\top \bm\theta, \sigma^2 \bm{I}); \label{e:treatment} \\ \text{scalar outcome:}& & Y_i &\sim \mathcal{N}(\bm{A}_i^\top \bm\beta + \bm{Z}_i^\top \bm\gamma, \omega^2); \label{e:outcome} \end{align} We assume elements of $\bm\theta$ and $\bm\gamma$ are finite and $\sigma^2$ is nonzero. Our goal is to estimate $\bm\beta = [\beta_1, \ldots, \beta_m]$, the causal effects of increasing the corresponding $[A_{i,1}, \ldots, A_{i,m}]$ by one unit; following WB, effects are assumed constant. Observations are collected in $\bm{Z}$, $\bm{A}$, and $\bm{Y}$.\footnote{We occasionally denote simultaneous sampling of all $n$ observations with $\bm{Z} \sim \mathcal{N}(\bm{0}, \bm{I})$ or similar.} The variable $\bm{Z}$ is unobserved and therefore confounds our inferences about the causal effect of $\bm{A}$ when both $\bm\gamma$ and $\bm\theta$ are nonzero. However, if the analyst could observe $\bm{Z}$ and adjust for it, they would have the \textit{oracle} estimator, \begin{align} \left[ \ \hat\bm\beta^{\mathrm{oracle} \top} ,\ \hat\bm\gamma^{\mathrm{oracle} \top} \ \right]^\top &\equiv \left( \left[ \bm{A}, \bm{Z} \right]^\top \left[ \bm{A}, \bm{Z} \right] \right)^{-1} \left[ \bm{A}, \bm{Z} \right]^\top \bm{Y}. \label{e:oracle} \end{align} It follows directly from the properties of ordinary least squares that the oracle is an unbiased and consistent estimator of treatment effects for any $m$. No other estimator that we will consider is consistent for finite $m$. 
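For readers who wish to experiment with this setup, the data-generating process and the oracle estimator can be sketched in a few lines of numpy. All parameter values below ($n$, $m$, $k$, $\bm\theta$, $\bm\beta$, $\bm\gamma$, $\sigma$, $\omega$) are illustrative assumptions chosen for demonstration, not settings from WB or our replication materials.

```python
import numpy as np

# Illustrative simulation of the linear-linear data-generating process and the
# oracle estimator that observes Z. All parameter values are arbitrary choices.
rng = np.random.default_rng(0)
n, m, k = 50_000, 8, 2
theta = rng.normal(size=(k, m))          # confounder-to-treatment loadings
beta = rng.normal(size=m)                # true treatment effects
gamma = rng.normal(size=k)               # confounder-to-outcome effects
sigma, omega = 1.0, 1.0

Z = rng.normal(size=(n, k))                        # unobserved confounders
A = Z @ theta + sigma * rng.normal(size=(n, m))    # observed treatments
Y = A @ beta + Z @ gamma + omega * rng.normal(size=n)

# Oracle: regress Y on [A, Z] jointly; unbiased and consistent for any m.
X = np.column_stack([A, Z])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
beta_oracle = coef[:m]
print(np.max(np.abs(beta_oracle - beta)))   # close to zero at large n
```

At this sample size the oracle recovers every element of $\bm\beta$ to within ordinary sampling error; the estimators considered next do not.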
However, we define an asymptotic regime in $m$, called strong infinite confounding, under which the na\"ive regression will approach consistency.\footnote{A recent preprint, \citet{guo2020doubly}, defines a related ``dense confounding'' condition.} \begin{mdframed} \begin{defn}{(Strong infinite confounding under the linear-linear model.)} \label{def:infinite_confounding} A sequence of linear-linear data-generating processes with a fixed number of confounders, $k$, and growing number of causes, $m$, is said to be \textbf{strongly infinitely confounded} if as $m \to \infty$, all diagonal elements of $\bm\theta \bm\theta^\top$ tend to infinity. \end{defn} \end{mdframed} The $j$-th diagonal element of $\bm\theta \bm\theta^\top$ contains the sum of the squared coefficients relating confounder $j$ to each of the $m$ treatments.\footnote{Lemma~\ref{l:pinpointing} of Supplement~\ref{a:derivation} proves strong infinite confounding is necessary for pinpointing, and Lemma~\ref{l:infinite_confounding} connects this to the conditions for unbiased estimation of a na\"ive regression.} Intuitively, strong infinite confounding says that as $m$ grows large, the finite $k$ confounders continue to strongly affect a growing number of treatments. We discuss the practical implications of this condition for finite samples in Section~\ref{s:finite}. In Supplement~\ref{a:weakinfconf}, we build on \cite{d2019comment} to give an example of ``weak'' infinite confounding, where the number of treatments grows but the diagonal elements of $\bm\theta \bm\theta^\top$ do not tend to infinity. % \subsection{The Na\"ive Regression in the Linear-Linear Setting} As a baseline for the deconfounder, WB present the \textit{na\"ive} estimator, which simply ignores $\bm{Z}$. 
In Proposition~\ref{prop:bias_naive}, we characterize the asymptotic properties of na\"ive regression with finite $m$ and, perhaps surprisingly, establish that it is asymptotically unbiased for the linear-linear model as both $n$ and $m$ go to infinity under strong infinite confounding. \begin{mdframed} \begin{prop} \label{prop:bias_naive} (Asymptotic Bias of the Na\"ive Regression in the Linear-Linear Model.) Under the linear-linear model, the asymptotic bias of the na\"ive estimator,\\ $\hat{\bm\beta}^\mathrm{na\ddot{\i}ve} \ \equiv \ \left( \bm{A}^\top \bm{A} \right)^{-1} \bm{A}^\top \bm{Y}$, follows $\plim_{n \to \infty} \hat\bm\beta^\mathrm{na\ddot{\i}ve} - \bm\beta \ = \ (\bm\theta^\top \bm\theta + \sigma^2 \bm{I})^{-1} \bm\theta^\top \bm\gamma$, indicating that the na\"ive estimator is inconsistent. However, when applied to a sequence of data-generating processes with growing $m$ that satisfy strong infinite confounding,\\ $\lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{na\ddot{\i}ve} - \bm\beta \ = \ \bm{0}$. \end{prop} \end{mdframed} Intuitively, the na\"ive regression becomes unbiased as the number of treatments grows, because linear regression adjusts along the eigenvectors of the covariance matrix of the treatments, and in this setting the most consequential eigenvectors align with the confounders. This connects to the core intuition of the deconfounder and a prior literature in genetics---under deconfounder assumptions, shared confounding leaves an imprint on the observed data distribution \citep{Price06,WanBle19,WanBle20}---and, as it turns out, this imprint is useful for the na\"ive regression. 
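Both claims in the proposition can be checked numerically. The sketch below uses dense standard-normal loadings, an illustrative assumption under which the diagonal elements of $\bm\theta \bm\theta^\top$ grow like $m$, so strong infinite confounding holds along the sequence; no values are taken from the paper's replication code.

```python
import numpy as np

# Sketch of the naive-regression bias: the regression of Y on A alone has
# asymptotic bias (theta' theta + sigma^2 I)^{-1} theta' gamma, and the bias
# shrinks as m grows when confounders keep loading on new treatments.
rng = np.random.default_rng(1)
n, k, sigma = 100_000, 2, 1.0
gamma = np.array([1.0, -1.0])

def naive_bias_norm(m):
    theta = rng.normal(size=(k, m))   # dense loadings: diag(theta theta') ~ m
    beta = np.zeros(m)                # true effects zero, so beta_hat is pure bias
    Z = rng.normal(size=(n, k))
    A = Z @ theta + sigma * rng.normal(size=(n, m))
    Y = A @ beta + Z @ gamma + rng.normal(size=n)
    beta_naive, *_ = np.linalg.lstsq(A, Y, rcond=None)
    theory = np.linalg.solve(theta.T @ theta + sigma**2 * np.eye(m),
                             theta.T @ gamma)
    return np.linalg.norm(beta_naive), np.linalg.norm(theory)

results = {m: naive_bias_norm(m) for m in (5, 20, 80)}
for m, (sim, theory) in results.items():
    print(m, round(sim, 3), round(theory, 3))   # simulated vs theoretical bias norm
```

The simulated bias norm tracks the proposition's formula at each $m$ and falls as $m$ grows, matching the intuition that the confounders' imprint on a growing treatment set is what the na\"ive regression exploits.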
\subsection{Deconfounder Under the Linear-Linear Model} \label{s:declinlin} In place of the na\"ive estimator, WB recommend the \emph{full deconfounder}, which under the linear-linear model proceeds in three steps: (1) take the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$, (2) extract the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$, and (3) adjust using \begin{align} \left[ \ \hat\bm\beta^{\mathrm{full} \top} ,\ \hat\bm\gamma^{\mathrm{full} \top} \ \right]^\top &\equiv \left( \left[ \bm{A}, \hat\bm{Z} \right]^\top \left[ \bm{A}, \hat\bm{Z} \right] \right)^{-1} \left[ \bm{A}, \hat\bm{Z} \right]^\top \bm{Y}. \label{e:deconfounder} \end{align} But there is a problem: adding these new terms to the na\"ive regression renders Equation \eqref{e:deconfounder} inestimable. The substitute confounder is merely a linear transformation of the original treatments, $\hat\bm{Z} = \sqrt{n} \bm{A} \bm{V}_{1:k} \bm{D}_{1:k}^{-1}$, meaning that $\left[ \bm{A}, \hat\bm{Z} \right]^\top \left[ \bm{A}, \hat\bm{Z} \right]$ is always rank-deficient. This perfect collinearity is a consequence of the fact that the inclusion of $\hat\bm{Z}$ brings no new information beyond that contained in the original treatments, $\bm{A}$. The deconfounder papers and tutorials deploy variants of the linear-linear model throughout their simulations and empirical examples. To render the model estimable, these examples use two modifications to the full deconfounder to break the perfect collinearity between the treatments and the substitute confounder: (1) the \textit{penalized full deconfounder}, which uses penalized outcome models to estimate treatment effects and adjust for the substitute confounder, and (2) the \textit{posterior full deconfounder}, which adds random noise to the substitute confounder $\hat\bm{Z}$ by sampling it from an approximate posterior. 
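The perfect collinearity is easy to verify numerically. The sketch below (illustrative dimensions) confirms that $\hat\bm{Z}$ is an exact linear transformation of $\bm{A}$ and that the augmented design matrix loses rank:

```python
import numpy as np

# Sketch: the substitute confounder from a truncated SVD is an exact linear
# transformation of A, so [A, Z_hat] is rank-deficient and the full
# deconfounder regression is inestimable. Dimensions are illustrative.
rng = np.random.default_rng(2)
n, m, k = 1_000, 6, 2
Z = rng.normal(size=(n, k))
theta = rng.normal(size=(k, m))
A = Z @ theta + rng.normal(size=(n, m))

U, D, Vt = np.linalg.svd(A, full_matrices=False)
Z_hat = np.sqrt(n) * U[:, :k]                     # first k components
# Z_hat is recoverable from A alone: Z_hat = sqrt(n) A V_{1:k} D_{1:k}^{-1}
Z_hat_from_A = np.sqrt(n) * A @ Vt.T[:, :k] / D[:k]
print(np.allclose(Z_hat, Z_hat_from_A))           # True

X = np.column_stack([A, Z_hat])
print(np.linalg.matrix_rank(X), X.shape[1])       # rank m (= 6) < m + k (= 8) columns
```

Because the augmented matrix has only $m$ independent columns, its Gram matrix cannot be inverted, which is exactly why the modifications discussed next are needed.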
We analyze both strategies and demonstrate that while both render the full deconfounder technically estimable, adding the substitute confounder does not help recover the quantity of interest.% \subsubsection{Asymptotic Theory for Full Deconfounder Variants} We analyze the \emph{penalized full deconfounder}, an estimator used in \citet{WanBle19} and \citet{Zha19} through their use of normal priors on regression coefficients. Specifically, we analyze the frequentist version of this estimator, which uses a ridge penalty. In Supplement~\ref{a:bias_ridge}, we prove Proposition~\ref{prop:bias_ridge}, which gives the asymptotic bias of the penalized full deconfounder. \begin{mdframed} \begin{prop} \label{prop:bias_ridge} (Asymptotic Bias of the Penalized Full Deconfounder.) The penalized deconfounder estimator, as implemented in WB, is \begin{align*} \left[ \ \hat\bm\beta^{\mathrm{penalty} \top} ,\ \hat\bm\gamma^{\mathrm{penalty} \top} \ \right]^\top &\equiv \left( \left[ \bm{A}, \hat\bm{Z} \right]^\top \left[ \bm{A}, \hat\bm{Z} \right] + \lambda(n) \bm{I} \right)^{-1} \left[ \bm{A}, \hat\bm{Z} \right]^\top \bm{Y}, \end{align*} where $\hat\bm{Z}$ is obtained by taking the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$ and extracting the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$, and $\lambda(n)$ is a ridge penalty assumed to be sublinear in $n$. 
Under the linear-linear model, the asymptotic bias of this estimator is given by \begin{align*} \plim_{n \to \infty} \hat\bm\beta^\mathrm{penalty} - \bm\beta &= \overbrace{ - \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ 1 }{ \sigma^2 + \Lambda_j + 1 } \right) \bm{Q}_{1:k}^\top \bm\beta }^{\text{Regularization}} \\ &\qquad \underbrace{+ \bm{Q}_{1:k} \ \mathrm{diag}_j \left( \frac{ \Lambda_j }{ \sigma^2 + \Lambda_j + 1 } \right) \bm{Q}_{1:k}^\top \bm\theta^\top (\bm\theta \bm\theta^\top)^{-1} \bm\gamma}_{\text{Omitted Variable Bias}}, \intertext{where $\bm{Q}$ and $\bm\Lambda = [\Lambda_1, \ldots, \Lambda_k, 0, \ldots]$ are respectively eigenvectors and eigenvalues obtained from decomposition of $\bm\theta^\top \bm\theta$. Under strong infinite confounding,} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{penalty} - \bm\beta &= \bm{0}. \end{align*} \end{prop} \end{mdframed} Proposition~\ref{prop:bias_ridge} shows that the finite-$m$ bias in $\hat\bm\beta^\mathrm{penalty}$ comes from two sources: regularization of coefficients and omitted-variable bias from excluding the true confounders, $\bm{Z}$. Under strong infinite confounding, as $m$ and $n$ grow large, both regularization bias and omitted-variable bias go to zero; the latter is true because $(\bm\theta\bm\theta^\top)^{-1}$ goes to $\bm{0}$. Therefore, like a na\"ive regression, the $\hat\bm\beta^\mathrm{penalty}$ estimator is asymptotically consistent in $m$, but only because it effectively nests the na\"ive regression. 
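A sketch of the penalized variant follows, with an assumed sublinear penalty $\lambda(n) = \sqrt{n}$ and illustrative parameter values (chosen so that confounding is strong and true effects are small). The ridge term restores invertibility, but the estimate remains visibly biased at finite $m$:

```python
import numpy as np

# Sketch of the penalized full deconfounder: a ridge penalty makes the
# otherwise singular normal equations for [A, Z_hat] solvable, but the
# resulting estimate stays biased at finite m. All values, including
# lambda = sqrt(n), are illustrative assumptions.
rng = np.random.default_rng(3)
n, m, k = 20_000, 6, 2
theta = rng.normal(size=(k, m))
beta = 0.1 * rng.normal(size=m)        # small true effects
gamma = np.array([3.0, -3.0])          # strong confounding
Z = rng.normal(size=(n, k))
A = Z @ theta + rng.normal(size=(n, m))
Y = A @ beta + Z @ gamma + rng.normal(size=n)

U, D, Vt = np.linalg.svd(A, full_matrices=False)
Z_hat = np.sqrt(n) * U[:, :k]
X = np.column_stack([A, Z_hat])

lam = np.sqrt(n)                                    # sublinear in n
G = X.T @ X + lam * np.eye(m + k)                   # invertible despite collinearity
beta_pen = np.linalg.solve(G, X.T @ Y)[:m]
print(np.linalg.norm(beta_pen - beta))              # clearly nonzero: finite-m bias
```

With only six treatments the omitted-variable term dominates, so the penalized estimate sits far from $\bm\beta$ even at this sample size.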
Briefly, the proof proceeds by examining the singular value decomposition of the augmented data matrix, $\left[\bm{A}, \hat\bm{Z} \right] = \bm{U}^\ast \bm{D}^\ast \bm{V}^{\ast\top}$, and using the facts that (1) the first $m$ components of $\bm{U}^\ast$ remain unchanged from $\bm{U}$, since $\hat\bm{Z}$ are merely rescaled versions of the original left-singular vectors; (2) the first $k$ diagonal elements of $\bm{D}^{\ast 2}$ are equal to $\bm{D}^2 + n\bm{I}$ due to the additional variance of $\hat\bm{Z}$; and (3) the last $k$ diagonal elements of $\bm{D}^\ast$ are zero. The second strategy employed by the deconfounder papers to address perfect collinearity in the linear-linear case is to integrate over an approximate posterior. This renders the deconfounder estimable, since the resulting samples are no longer perfectly collinear with $\bm{A}$. Proposition \ref{prop:bias_noise_posterior} gives the asymptotic bias (proof in Supplement~\ref{a:bias_noise_posterior}) and shows that sampling the substitute confounder from a posterior is inconsistent with a finite $m$ but converges to a na\"ive regression as $m$ grows large.\footnote{In Supplement~\ref{a:bias_noise_white} we also offer a proof of Proposition \ref{prop:bias_noise_white}, that estimators adding a fixed amount of white noise remain asymptotically inconsistent as $m$ grows, even under strong infinite confounding.} \begin{mdframed} \begin{prop} \label{prop:bias_noise_posterior} (Asymptotic Bias of the Posterior-Mean Deconfounder.) 
The posterior-mean deconfounder estimator is \begin{align*} \left[ \ \hat\bm\beta^{\mathrm{pm} \top} ,\ \hat\bm\gamma^{\mathrm{pm} \top} \ \right]^\top &\equiv \int \left( \left[ \bm{A}, \bm{z} \right]^\top \left[ \bm{A}, \bm{z} \right] \right)^{-1} \left[ \bm{A}, \bm{z} \right]^\top \bm{Y} \ f(\bm{z} | \bm{A}) \dd{\bm{z}} , \end{align*} where $f(\bm{z} | \bm{A})$ is a posterior obtained from Bayesian principal component analysis.\footnote{While the regression cannot be estimated when $\bm{z} = \mathbb{E}[\bm{Z} | \bm{A}]$, it is almost surely estimable for samples $\bm{z}^\ast \sim f(\bm{z} | \bm{A})$ due to posterior uncertainty, which eliminates perfect collinearity with $\bm{A}$. The posterior-mean implementation of WB evaluates the integral by Monte Carlo methods and thus is able to compute the regression coefficients for each sample.} Under the linear-linear model, the asymptotic bias of this estimator is given by \begin{align*} \plim_{n \to \infty} \hat\bm\beta^\mathrm{pm} - \bm\beta &= (\bm\theta^\top \bm\theta + \sigma^2 \bm{I})^{-1} \bm\theta^\top \bm\gamma, \intertext{and under strong infinite confounding,} \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta^\mathrm{pm} - \bm\beta &= \bm{0}. \end{align*} \end{prop} \end{mdframed} \subsubsection{Asymptotic Theory for the Subset Deconfounder} Theorem 7 of WB suggests an alternate version of the deconfounder---the subset deconfounder---which estimates the effect of some treatments, ignores others, and adjusts for the substitute confounder. After extracting a substitute confounder, $\hat\bm{Z}$, this estimator designates a finite number, $m_F$, of the $m$ treatments as ``focal'' (we denote this column subset as $\bm{A}_F$) and sets aside the remaining $m_N$ ``non-focal'' treatments ($\bm{A}_N$). It then regresses the outcome, $\bm{Y}$, on only $\bm{A}_F$ and $\hat\bm{Z}$. The subset deconfounder avoids the collinearity issue if $m_{F} + k < m$. 
This generalizes a popular estimator in genetics \citep{Price06}, which estimates the effect of one treatment at a time. In Proposition~\ref{prop:bias_subset}, we show that the asymptotic bias of the subset deconfounder remains non-zero \textit{even under strong infinite confounding}. To approach consistency, the subset deconfounder requires additional strong conditions on the effects of the non-focal treatments. \begin{mdframed} \begin{prop} \label{prop:bias_subset} (Asymptotic Bias of the Subset Deconfounder.) The subset deconfounder estimator, based on Theorem 7 from WB, is \begin{align} \left[ \ \hat\bm\beta_F^{\mathrm{subset} \top} ,\ \hat\bm\gamma^{\mathrm{subset} \top} \ \right]^\top &\equiv \left( \left[ \bm{A}_F, \hat\bm{Z} \right]^\top \left[ \bm{A}_F, \hat\bm{Z} \right] \right)^{-1} \left[ \bm{A}_F, \hat\bm{Z} \right]^\top \bm{Y}, \label{e:subset} \end{align} where the column subsets $\bm{A}_F$ and $\bm{A}_N$ respectively partition $\bm{A}$ into a finite number of focal causes of interest and non-focal causes. The substitute confounder, $\hat\bm{Z}$, is obtained by taking the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$ and extracting the first $k$ components, $\hat\bm{Z} \equiv \sqrt{n} \bm{U}_{1:k}$. Under the linear-linear model, the asymptotic bias of this estimator is given by \begin{align*} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{subset} - \bm\beta_F &= \left( \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right)^{-1} \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N, \\ \intertext{ with $\bm\theta_F$ and $\bm\theta_N$ indicating the column subsets of $\bm\theta$ corresponding to $\bm{A}_F$ and $\bm{A}_N$, respectively. 
The subset deconfounder is unbiased for $\bm\beta_F$ (i) if $\bm\theta_F = \bm{0}$, (ii) if $\lim_{m \rightarrow \infty} \bm\theta_N \bm\beta_N = \bm{0}$ and $\lim_{m \rightarrow \infty} \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right]^{-1}$ is convergent, or (iii) if both strong infinite confounding holds and $(\bm\theta \bm\theta^\top)^{-1} \bm\theta_{N} \bm\beta_{N}$ goes to $\bm{0}$ as $m \rightarrow \infty$. If one of these additional conditions holds, } \lim_{m \to \infty} \plim_{n \to \infty} \hat\bm\beta_F^\mathrm{subset} - \bm\beta_F &= \bm{0}. \end{align*} \end{prop} \end{mdframed} A proof is given in Supplement~\ref{a:bias_subset}; we interpret conditions (i--iii) below. Intuitively, the subset deconfounder is a biased estimator of the effect of $\bm{A}_F$ because of mismodeling of the dependence structure among the causes. Though $\bm{A}_F$ and $\bm{A}_N$ would be conditionally independent if the true $\bm{Z}$ could be observed and adjusted for, Lemma~\ref{l:residual_dependence} shows that they are not conditionally independent given $\hat\bm{Z}$. This mismodeled dependence leads to omitted variable bias when excluding $\bm{A}_N$. But, unlike for the na\"ive regression, strong infinite confounding does not resolve this omitted variable bias for the subset deconfounder. In Supplement~\ref{a:subset_add} we provide further intuition for this result, leveraging properties of PCA regression to show the subset deconfounder is effectively a regularized regression that only adjusts along the first $k$ eigenvectors \citep{hastie2013elements}. 
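The persistence of this bias is easy to demonstrate numerically. In the sketch below, the loadings and non-focal effects are constructed (an illustrative assumption, not a replication setting) so that $(\bm\theta \bm\theta^\top)^{-1} \bm\theta_N \bm\beta_N$ converges to a nonzero constant, so none of conditions (i--iii) holds:

```python
import numpy as np

# Sketch of the subset deconfounder: regress Y on one focal cause plus the
# substitute confounder Z_hat. The construction makes
# (theta theta')^{-1} theta_N beta_N converge to a nonzero constant, so the
# bias persists even with many treatments. All values are illustrative.
rng = np.random.default_rng(4)
n, m, k = 100_000, 40, 2
theta = np.vstack([np.ones(m), rng.normal(size=m)])  # confounder 1 loads on all causes
beta = np.full(m, 0.5)                               # non-focal effects do not die out
gamma = np.array([1.0, -1.0])
Z = rng.normal(size=(n, k))
A = Z @ theta + rng.normal(size=(n, m))
Y = A @ beta + Z @ gamma + rng.normal(size=n)

U, D, Vt = np.linalg.svd(A, full_matrices=False)
Z_hat = np.sqrt(n) * U[:, :k]
X_sub = np.column_stack([A[:, :1], Z_hat])           # focal cause A_1 + Z_hat only
coef, *_ = np.linalg.lstsq(X_sub, Y, rcond=None)
print(abs(coef[0] - beta[0]))                        # bias persists despite m = 40
```

Here strong infinite confounding holds along the implied sequence (the diagonal of $\bm\theta \bm\theta^\top$ grows like $m$), yet the focal-cause estimate remains far from the truth, illustrating why the subset deconfounder needs the extra conditions of the proposition.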
The subset deconfounder and similar estimators in genetics \citep[e.g.][]{Price06} rely on more than the intuition that there is shared confounding.\footnote{In Supplement~\ref{a:subset_add} we provide an example where strong infinite confounding holds, but the subset deconfounder has bias that cannot be eliminated by adding more treatments.} Proposition~\ref{prop:bias_subset} provides three stronger conditions, of which at least one is required for the subset deconfounder to approach unbiasedness. Condition (i), $\bm\theta_F = \bm{0}$, states that the focal treatment is unconfounded, and therefore no adjustment is needed. Condition (ii) is $\lim_{m \rightarrow \infty} \bm\theta_{N} \bm\beta_{N} = \bm{0}$ and $\lim_{m \rightarrow \infty} \left[ \bm{I} - \bm\theta_F^\top (\bm\theta \bm\theta^\top)^{-1} \bm\theta_F \right]^{-1}$ is convergent. This will hold if, for example, each element of $\bm\beta_{N}$, the treatment-outcome effects, and each column of $\bm\theta_N$, the confounder-treatment relationships, are drawn from zero-expectation distributions with finite variance and zero covariance between $\bm\beta_N$ and $\bm\theta_N$. Condition (iii) requires that strong infinite confounding holds and $\lim_{m \rightarrow \infty} \left(\bm\theta \bm\theta^\top\right)^{-1} \bm\theta_{N} \bm\beta_{N} = \bm{0}$. This could be satisfied if, for example, the infinite sum $\sum_{j=1}^{\infty} \theta_{N,k',j} \beta_{N,j}$ is convergent for all latent confounders $k' \in \{1, \ldots, k\}$. A necessary (but not sufficient) condition for this to hold is that the terms $\theta_{N, k', j} \ \beta_{N, j}$ converge to zero for each $k'$ as $j \rightarrow \infty$. For example, analysts might assume that treatment effects in the non-focal set, $\left(\beta_{N,j} \right)_{j = 1}^{m}$, go to zero fast enough to ensure the infinite series converge. 
All of these conditions are extremely strong---even more than strong infinite confounding---because they either require assuming characteristics about the treatment effects (when the very reason for estimation is that they are unknown) or assuming there is no confounding at all. % \subsubsection{Takeaways} In the linear-linear setting, the default full deconfounder regression is rank-deficient because the substitute confounder is a linear projection of the treatments. There are three ways of addressing this issue: penalizing the outcome regression, sampling the substitute confounder from its posterior distribution, and analyzing a subset of the causes. The penalized and posterior full deconfounders converge to the correct solution asymptotically in $n$ and $m$ under strong infinite confounding, but only by virtue of the fact that they contain the na\"ive regression. By contrast, the subset deconfounder requires strong and unverifiable additional assumptions about the values of the treatment effects or the nonexistence of confounding. \subsection{Extensions to Separable Nonlinear Settings} \label{s:nonlinear_analysis} In this section, we extend our linear-linear results to settings with nonlinear factor and outcome regression models under separable confounding and strong infinite confounding. Under constant treatment effects, we connect the deconfounder with a partially linear regression \citep{robinson1988} to demonstrate that a semi-parametric na\"ive regression approaches asymptotic unbiasedness for the same reason that the deconfounder does. We then relax the assumption of constant treatment effects to show in our most general setting that infinite $m$ is required for consistency of the deconfounder. \subsubsection{Convergence of Deconfounder and Na\"ive Regressions} Following \citet{WanBle19,WanBle20,Zha19}, we first study constant-effects outcome models of the form $\mathbb{E}[Y_i | \bm{A}_i, \bm{Z}_i] = \bm{A}_i^\top \bm\beta + g_Y(\bm{Z}_i)$. 
Theorem~\ref{thm:deconfounder_naive_equality} shows that any consistent deconfounder converges to a flexible na\"ive regression, which is also consistent. \begin{mdframed} \begin{thm} \label{thm:deconfounder_naive_equality} (Deconfounder-Na\"ive Convergence under Strong Infinite Confounding.) Consider all data-generating processes in which (i) treatments are drawn from a factor model with continuous density that is a function of $\bm{Z}$, (ii) $\bm{Z}$ is pinpointed, and (iii) the outcome model contains constant treatment effects and treatment assignment is ignorable nonparametrically given $\bm{Z}$. Any consistent deconfounder converges to a na\"ive estimator for any finite subset of treatment effects. \end{thm} \end{mdframed} A proof is given in Supplement~\ref{a:deconfounder_naive_equality}. The proof proceeds by noting that pinpointedness implies that the function $g_{\bm{A}}(\bm{z}) \equiv \mathbb{E}[ \bm{A}_i | \bm{Z}_i=\bm{z} ]$ is asymptotically recoverable and invertible. (Here, pinpointedness hinges on a generalization of strong infinite confounding to nonlinear settings: the conditional entropy of $\bm{Z}$ given $\bm{A}$ must approach zero as $m$ grows large.) This makes the deconfounder equivalent to the partially linear regression: \begin{align} \left(\hat\bm\beta^\mathrm{deconf}, \hat{g}_Y^\mathrm{deconf} \right) &= \argmin_{\bm\beta^\ast, g_Y^\ast} \ \sum_{i=1}^n \left( Y_i - \bm{A}_{i}^\top \bm\beta^\ast - g_Y^\ast(\hat{g}_{\bm{A}}^{-1}(\bm{A}_i)) \right)^2. \end{align} The above is consistent for $\bm\beta$ \citep{robinson1988}. Intuitively, this shows that what the deconfounder buys is part of the function mapping the treatments to the outcome, $g_Y^\ast(\hat{g}_{\bm{A}}^{-1}(\cdot))$. We can instead directly fit the following partially linear regression. As in Supplement~\ref{a:bias_naive}, we partition the $m$ causes, $\bm{A}_i$, into finite $m_F$ focal causes of interest, $\bm{A}_{i,F}$, and $m_N$ nonfocal causes, $\bm{A}_{i,N}$. 
Then the conditional expectation function of the outcome can be rewritten $\mathbb{E}[ Y_i | \bm{A}, \bm{Z} ] = \bm{A}_{i,F}^\top \bm\beta_F + \bm{A}_{i,N}^\top \bm\beta_N + g_Y(\bm{Z}_i)$. The associated conditional expectations of the treatments are $g_{\bm{A}_F}(\bm{z}) \equiv \mathbb{E}[ \bm{A}_{i,F} | \bm{Z}_i=\bm{z} ]$ and $g_{\bm{A}_N}(\bm{z}) \equiv \mathbb{E}[ \bm{A}_{i,N} | \bm{Z}_i=\bm{z} ]$. The semiparametric na\"ive regression is \begin{align} \hat{h}_{\bm{A}_F}^\mathrm{na\ddot{\i}ve} &= \argmin_{h_{\bm{A}_F}^\ast} \sum_{i=1}^n \left\| \bm{A}_{i,F} - h_{\bm{A}_F}^\ast(\bm{A}_{i,N}) \right\|_F^2 \label{e:naive_nonlinear_af} \\ \hat{h}_Y^\mathrm{na\ddot{\i}ve} &= \argmin_{h_Y^\ast} \sum_{i=1}^n \left( Y_i - h_Y^\ast(\bm{A}_{i,N}) \right)^2 \label{e:naive_nonlinear_y} \\ \hat\bm\beta_F^\mathrm{na\ddot{\i}ve} &= \left( \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \tilde\bm{A}_F^\mathrm{na\ddot{\i}ve} \right)^{-1} \tilde\bm{A}_F^{\mathrm{na\ddot{\i}ve} \top} \tilde\bm{Y}^\mathrm{na\ddot{\i}ve} \label{e:naive_nonlinear} \end{align} where $\tilde\bm{A}_F^\mathrm{na\ddot{\i}ve}$ collects $\bm{A}_{i,F} - \hat{h}_{\bm{A}_F}^\mathrm{na\ddot{\i}ve}(\bm{A}_{i,N})$ and $\tilde\bm{Y}^\mathrm{na\ddot{\i}ve}$ collects $Y_i - \hat{h}_Y^\mathrm{na\ddot{\i}ve}(\bm{A}_{i,N})$. These two approaches asymptotically converge to one another. In short, the key problem is estimating the composite function $g_Y(\hat{g}_{\bm{A}}^{-1}(\cdot))$. This can be done directly by a flexible na\"ive regression without the added complication or factor-model functional form assumptions of the deconfounder. While a flexible na\"ive regression guarantees that we recover the correct control function without parametric information about $f(\bm{A} | \bm{Z})$, we note that in applied settings, considerable data is required to make use of this result. 
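In the linear-linear special case, both partialling steps are themselves linear regressions, so the three-step recipe above can be sketched directly (illustrative parameters; a nonlinear data-generating process would call for flexible learners in the two partialling steps):

```python
import numpy as np

# Sketch of the semiparametric naive regression (Robinson-style): partial the
# non-focal causes out of both the focal cause and the outcome, then regress
# residual on residual. In this linear DGP, linear partialling suffices.
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(5)
n, m, k = 100_000, 60, 2
theta = rng.normal(size=(k, m))
beta = rng.normal(size=m)
gamma = np.array([1.0, -1.0])
Z = rng.normal(size=(n, k))
A = Z @ theta + rng.normal(size=(n, m))
Y = A @ beta + Z @ gamma + rng.normal(size=n)

A_F, A_N = A[:, 0], A[:, 1:]          # one focal cause, m - 1 non-focal causes

def partial_out(t):
    # residualize t on the non-focal causes (linear step in this DGP)
    coef, *_ = np.linalg.lstsq(A_N, t, rcond=None)
    return t - A_N @ coef

A_tilde = partial_out(A_F)
Y_tilde = partial_out(Y)
beta_F_hat = (A_tilde @ Y_tilde) / (A_tilde @ A_tilde)
print(abs(beta_F_hat - beta[0]))      # small here: m = 60 carries Z's imprint
```

Because the many non-focal causes carry the confounders' imprint, partialling them out removes most of the confounding in the focal-cause estimate, without ever fitting a factor model.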
\subsubsection{Inconsistency of the Nonlinear Deconfounder in Finite $m$} Next, we generalize to the class of continuous factor and outcome models with additively separable confounding, $\mathbb{E}[Y_i | \bm{A}_i, \bm{Z}_i] = f(\bm{A}_i) + g_Y(\bm{Z}_i)$. This class nests all models considered in this paper, covering nonlinear factor models, nonlinear and interactive treatment effects, and arbitrary nonlinear confounding. Theorem~\ref{thm:deconfounder_inconsistency_nonlinear} states that the deconfounder is inconsistent for all such data-generating processes with finite $m$. \begin{mdframed} \begin{thm} \label{thm:deconfounder_inconsistency_nonlinear} (Inconsistency of the Deconfounder in Nonlinear Settings.) Consider all data-generating processes in which a finite number of conditionally ignorable treatments are drawn from a factor model with continuous density that is a function of the confounding variable $\bm{Z}$. Then the deconfounder is inconsistent for any outcome model with additively separable confounding. \end{thm} \end{mdframed} A proof is given in Supplement~\ref{a:deconfounder_inconsistency_nonlinear}. This result follows from a simple premise: because $\bm{A}_i$ is a stochastic function of the confounder, $\bm{Z}_i$, analysts can never recover the exact value of $\bm{Z}_i$ using a \textit{finite} number of treatments \citep{imai2019discussion}, even if given the function $g_{\bm{A}}^{-1}$ mapping $\mathbb{E}[ \bm{A}_i | \bm{Z}_i ]$ to $\bm{Z}_i$---because $\mathbb{E}[ \bm{A}_i | \bm{Z}_i ]$ is unknown. Next, the error in $\hat\bm{Z}_i$ depends on $\bm{A}_i$, and the outcome $Y_i$ is in part dependent on this component (i.e., $Y_i$ depends on $\bm{Z}_i$, which is only partially accounted for by $\hat\bm{Z}_i$). Therefore, outcome analyses that neglect the unobserved mismeasurement, $\hat\bm{Z}_i - \bm{Z}_i$, will necessarily suffer from omitted variable bias. 
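The mismeasurement argument can be illustrated directly (a toy sketch of ours, not a simulation from the paper): build a substitute confounder from $m$ auxiliary treatments and observe that the bias on a focal effect is bounded away from zero for any fixed $m$, shrinking only as $m$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def deconfounder_bias(n, m):
    """Bias of a focal coefficient after controlling for a substitute
    confounder built from m auxiliary treatments (toy linear model)."""
    Z = rng.standard_normal(n)
    A1 = Z + rng.standard_normal(n)                     # focal treatment, effect 0.5
    A_rest = Z[:, None] + rng.standard_normal((n, m))   # auxiliary treatments
    Y = 0.5 * A1 + Z + rng.standard_normal(n)
    Z_hat = A_rest.mean(axis=1)                         # substitute confounder
    X = np.column_stack([np.ones(n), A1, Z_hat])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef[1] - 0.5

# The residual confounding Z - Z_hat leaves bias for any finite m.
for m in (2, 10, 100):
    print(m, deconfounder_bias(50_000, m))
```

The mismeasurement $\hat{Z}_i - Z_i$ here is classical measurement error with variance $1/m$, so the omitted variable bias is nonzero for every finite $m$ and vanishes only in the strong-infinite-confounding limit.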
\subsection{Takeaways} Every estimator considered in this paper, save the oracle, is inconsistent for finite $m$. The deconfounder's pinpointing assumption requires \textit{strong infinite confounding}---an asymptotic regime in $m$ and a very stringent assumption. For the subset deconfounder, except in knife-edge cases, even \textit{strong infinite confounding} is insufficient; further strong assumptions are required. When the deconfounder does work (is estimable and satisfies conditions for asymptotic unbiasedness), we prove that a suitably flexible na\"ive regression converges to the deconfounder asymptotically in $m$. Thus the deconfounder works only in limited settings, and even then its factor-model machinery is unnecessary, because a na\"ive regression asymptotically produces the same result. \subsection{The Deconfounder} \label{s:deconfounder_definition} WB seek to ``develop the deconfounder algorithm, prove that it is unbiased, and show that it requires weaker assumptions than traditional causal inference'' (p. 1574). The recommended algorithm is to fit a factor model to the treatments, check its fit, and then run a regression of the outcome on all treatments and low-dimensional representations of each unit extracted from the factor model. WB offer three settings (their Theorems 6--8) which leverage parametric assumptions (Theorem 6) and limitations on what can be estimated (Theorems 7--8) to achieve identification.\footnote{Theorem 6 provides identification of average treatment effects by making parametric assumptions, including that the substitute confounder is piecewise constant and there can be no confounder/cause interactions. Theorem 7 identifies the average treatment effects of a subset of the treatments. Finally, Theorem 8 restricts estimation to only those treatments which map to the same value of the substitute confounder.
In short, Theorem 6 leverages functional form assumptions for identification, while Theorems 7 and 8 narrow the set of causal questions the method can answer.} All rely on the assumption of ``no single cause confounders'' and what is called ``consistency of the substitute confounder'' in WB, but called ``pinpointing'' in subsequent papers \citep{WanBle20}. This latter assumption requires that the substitute confounder, $\hat{\bm{Z}}$---which is a deterministic function of the observed treatments---is a bijective transformation of $\bm{Z}$. While the pinpointing assumption is stated as an exact equality, any method to consistently estimate $\bm{Z}$ requires asymptotics in $n$ and $m$. In Definition~\ref{def:consistency_substitute_confounder} of Supplement~\ref{a:consistency_deconfounder}, we offer a redefinition of pinpointing as an asymptotic property. We analyze the deconfounder in the strongest possible asymptotic regime, where $m \rightarrow \infty$ and for each treatment, $n \rightarrow \infty$. \subsection{Related Work} Even before publication, WB generated considerable conversation focused almost exclusively on its theoretical claims. In response to a working paper version of WB, \citet{d2019multi} showed that general non-parametric identification is impossible. These issues arise because the factor model and the no unobserved single cause confounders assumptions only partially constrain the observed data distribution. Commentaries, published alongside WB, clarified implications of several key theoretical assumptions. \citet{imai2019discussion} notes that $\hat{\bm{Z}}$ converges to a function of the observed treatments rather than the true $\bm{Z}$, a random variable \citep[1607]{imai2019discussion}.
This is problematic because the adjustment criterion implicitly assumes the support of $p(\hat{\bm{Z}}_i | \bm{A}_i =\mathbf{a})$ is the same as that of $p(\hat{\bm{Z}}_i)$---which cannot be true because pinpointing implies that $p(\hat{\bm{Z}}_i | \bm{A}_i =\mathbf{a})$ is degenerate. Both \citet{ogburn2019comment} and \citet{d2019multi} emphasize that pinpointing generally requires $m$ going to infinity. We build on this point in Theorem~\ref{thm:deconfounder_inconsistency_nonlinear}, which proves the deconfounder is inconsistent for broad classes of data-generating processes with finite numbers of treatments. \subsection{Takeaways and Contributions} Because the substitute confounder is a function of the observed treatments, $\hat\bm{Z} = f(\bm{A})$, the deconfounder estimates $\mathbb{E}[\bm{Y} | \bm{A}, \hat\bm{Z}] = \mathbb{E}[\bm{Y} | \bm{A}, f(\bm{A})]$, which reduces to $\mathbb{E}[\bm{Y} | \bm{A}]$. In other words, the deconfounder is only a method to learn a transformation of the treatments. There are several important restrictions implicit in the deconfounder assumptions, including, notably, that the treatments, $\bm{A}$, cannot causally depend on each other. We maintain these assumptions and return to them in Section~\ref{s:assumptions}. Our contribution is twofold. First, we present Propositions~\ref{prop:bias_naive}--\ref{prop:bias_ridge_nonlinear} and Theorem~\ref{thm:deconfounder_naive_equality}, showing that under the deconfounder assumptions, na\"ive regression is asymptotically unbiased and that every variant deconfounder estimator achieves asymptotic unbiasedness only if it uses the same information as a na\"ive regression. When the deconfounder uses only a subset of information, additional untestable and strong assumptions must be made about the treatment effects. Second, we show the theoretical concerns raised here and in prior papers make the deconfounder and na\"ive regression unsuitable for current real-world applications.
\subsection{Linear-Linear Deconfounders Only Help When Biases Cancel} In the linear-linear setting, the substitute confounder is a linear function of the treatments---information already captured by including the treatments in the linear outcome model. We show that in the linear-linear setting, the deconfounder can sometimes outperform the na\"ive regression in subsets of the parameter space where differing na\"ive and deconfounder biases align in the right way. However, these situations always rely on parameters that would be unknown to the analyst. We also show that given estimation instability in near-collinear estimators, the full deconfounder with a linear factor model is never appropriate. \subsubsection{Medical Deconfounder} \citet{Zha19} present an application of the deconfounder to the analysis of electronic health records. The first simulation study presented in the paper considers a situation where there are two treatments, of which only one has a true non-zero coefficient. The true data generating process draws $n=1,000$ patients from a linear-linear model.\footnote{This process is $Z_i \sim \mathcal{N}(0, 1)$, $A_{i,1} \sim \mathcal{N}(0.3 Z_i, 1)$, $A_{i,2} \sim \mathcal{N}(0.4 Z_i, 1)$, $Y_i \sim \mathcal{N}(0.5 Z_i + 0.3 A_{i,2}, 1)$.} They estimate a one-dimensional substitute confounder using probabilistic principal component analysis.\footnote{\citet{Zha19} use black box variational inference \citep{ranganath2014black}, then estimate the outcome model with automatic differentiation variational inference \citep{kucukelbir2017automatic}. We use \texttt{Stan} code for probabilistic PCA and the outcome model. Further details are in Supplement~\ref{a:med1}.} We introduce a faster and more accurate variant deconfounder which performs PCA, extracts the top component, and then runs ridge regression with a penalty chosen by cross-validation. \citet{Zha19} report a single sample.
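Our actual implementation uses \texttt{glmnet} in R (see Supplement~\ref{a:med1}); the following numpy-only sketch conveys the structure of the PCA+CV-Ridge variant, with the penalty grid, fold scheme, and standardization as illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)

def pca_cv_ridge(A, Y, lambdas=(0.01, 0.1, 1.0, 10.0), folds=5):
    """PCA + CV-Ridge variant (sketch): top principal component of the
    standardized treatments as substitute confounder, then ridge of Y on
    [treatments, Z_hat] with the penalty chosen by cross-validation."""
    A_std = (A - A.mean(0)) / A.std(0)
    _, _, Vt = np.linalg.svd(A_std, full_matrices=False)
    Z_hat = A_std @ Vt[0]                        # top principal component
    X = np.column_stack([A_std, Z_hat])
    X = (X - X.mean(0)) / X.std(0)
    y = Y - Y.mean()

    def ridge(Xt, yt, lam):
        return np.linalg.solve(Xt.T @ Xt + lam * np.eye(Xt.shape[1]), Xt.T @ yt)

    fold = np.arange(len(y)) % folds             # simple deterministic folds
    cv_err = []
    for lam in lambdas:
        err = 0.0
        for k in range(folds):
            tr, te = fold != k, fold == k
            b = ridge(X[tr], y[tr], lam)
            err += np.sum((y[te] - X[te] @ b) ** 2)
        cv_err.append(err)
    return ridge(X, y, lambdas[int(np.argmin(cv_err))])

# Example on the medical-deconfounder DGP (two treatments, one confounder):
n = 1000
Z = rng.standard_normal(n)
A = np.column_stack([0.3 * Z + rng.standard_normal(n),
                     0.4 * Z + rng.standard_normal(n)])
Y = 0.5 * Z + 0.3 * A[:, 1] + rng.standard_normal(n)
print(pca_cv_ridge(A, Y))   # coefficients on [A1, A2, Z_hat], standardized scale
```

The ridge penalty is what makes the otherwise exactly collinear design (the substitute confounder is a linear combination of the treatments) estimable at all.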
We repeat this process 1,000 times, assessing bias, variance and root mean squared error (RMSE) in Table~\ref{tab:med1}. \begin{table}[ht] \centering \begin{tabular}{|cr|cc|cc|cc|} \hline\hline && \multicolumn{2}{c|}{Bias} & \multicolumn{2}{c|}{Std. Dev.} & \multicolumn{2}{c|}{RMSE} \\ & Model & $\beta_1$ & $\beta_2$ & $\beta_1$ & $\beta_2$ & $\beta_1$ & $\beta_2$ \\ \hline & Na\"ive & 0.120 & 0.160 & 0.033 & 0.033 & 0.125 & 0.164 \\ \textbf{Orig. simulation} & Oracle & 0.000 & 0.000 & 0.032 & 0.033 & 0.032 & 0.033 \\ ($\beta_1=0$, $\beta_2 = 0.3$) & Deconfounder & 0.145 & 0.189 & 0.675 & 0.877 & 0.690 & 0.897 \\ & PCA+CV-Ridge & 0.028 & -0.146 & 0.014 & 0.029 & 0.031 & 0.149 \\ \hline \textbf{Our simulation} & Deconfounder & 0.150 & 0.197 & 0.685 & 0.893 & 0.701 & 0.914 \\ ($\beta_1=-0.3$, $\beta_2 = 0.3$) & PCA+CV-Ridge & 0.188 & -0.099 & 0.028 & 0.037 & 0.191 & 0.106 \\ \hline \hline \end{tabular} \caption{\textbf{Simulation Study 1 of the Medical Deconfounder.} The main ``Deconfounder'' estimation procedure is from \citet{Zha19} and uses probabilistic PCA and Bayesian linear regression. ``PCA + CV-Ridge'' is an improved deconfounder estimator we developed. ``Na\"ive'' and ``Oracle'' estimators are as described in the main text.} \label{tab:med1} \end{table} The original deconfounder performs poorly, with higher bias and variance than the na\"ive estimator due to near collinearity driving estimation instability. Our PCA+CV-Ridge deconfounder appears to perform better than na\"ive, but this does not hold across the parameter space. Under the original data generating process, the effect of $A_1$ is zero and, therefore, the ridge penalty drives the coefficient of $A_1$ towards the truth. This results in apparently good performance for the simplified deconfounder variant.
In the last two rows of Table~\ref{tab:med1}, we repeat the same simulation switching the true effect of $A_1$ to $-0.3$: the simplified deconfounder now performs slightly worse than the na\"ive regression. \subsubsection{Subset Deconfounder} Proposition 4 shows that strong infinite confounding is insufficient for the subset deconfounder to provide unbiased estimates of treatment effects, even when na\"ive regression can achieve oracle-like performance. Only under strong assumptions about treatment effects will the subset deconfounder be unbiased. We design a simulation to demonstrate this fact for different sequences of treatment effects, $\beta$, even when strong infinite confounding is satisfied ($\theta_{j} = 10 \ \forall \ j$). The full simulation is included in Supplement~\ref{a:subsetsim}. Table \ref{t:sim_best} provides the average RMSE for treatment effect estimates. When treatment effects are constant ($\beta_{j} = 10$ or $\beta_{j} = 100$), the subset deconfounder's performance fails to improve as more treatments are added. This is true even though na\"ive regression's average RMSE converges on the oracle's performance. Similarly, we see that when $\beta_{j} \sim \mathcal{N}(1,2)$, the subset deconfounder converges on an average bias of $1$---the mean of $\beta_j$ (see Case ii from Proposition 4). In Table \ref{t:sim_best}, the subset deconfounder is unbiased only when the sequence of treatment effects converges to 0 as more treatments are added (e.g., if $\beta_{j} = \frac{1}{j}$). \begin{table}[h!!] \centering \small \input{tables/FinalSubsetTable.tex} \normalsize \caption{\textbf{The Subset Deconfounder is Unbiased Only under Strong Assumptions about Treatment Effects.} As the number of treatments grows (columns moving from left to right), the na\"ive regression converges on the oracle's average RMSE (entries in the table average over the RMSE of each treatment effect), while the subset deconfounder's performance depends on the treatment effects.
The subset deconfounder is unbiased only when the sequence of treatment effects converges to zero. Even when the treatment effects are random draws from a normal distribution, the bias of the subset deconfounder converges on the average treatment effect. }\label{t:sim_best} \end{table} \subsubsection{Takeaways} In finite-sample linear-linear settings, the deconfounder cannot systematically improve performance over the na\"ive regression. Due to estimation instability, variants of the full deconfounder with a linear factor model are never preferable to na\"ive regressions. Subset deconfounders only outperform na\"ive regressions if strong assumptions about treatment effects are satisfied. \subsection{Nonlinear Deconfounder and Na\"ive Approaches Can Sometimes Exploit Parametric Information} The best-case scenario for the deconfounder is that it may efficiently exploit known parametric information about a nonlinear data generating process. We examine one such case, drawing on a simulation posted on the \texttt{blei-lab} GitHub page \citep{wang2019github}. We find that even in this ideal setting, the deconfounder is weakly dominated by a correctly specified nonlinear na\"ive regression. We simulate $n=10,000$ draws from the following data-generating process, \begin{align} \left[\begin{array}{l} A_{i,1} \\ A_{i,2} \\ Z_i \end{array} \right] &\sim \mathcal{N}\left( \left[\begin{array}{l} 0 \\ 0 \\ 0 \end{array} \right] , \left[\begin{array}{lll} 1 & \rho & \rho \\ \rho & 1 & \rho \\ \rho & \rho & 1 \end{array} \right] \right) \nonumber \\[1ex] Y_i &\sim \mathcal{N}(0.4 + 0.2 A_{i,1}^2 + 1 A_{i,2}^2 + 0.9 Z_i^2,\ 1) \label{e:quadratic_outcome} \end{align} for $\rho=0.4$. As before, these are collected in $\bm{Z}$, $\bm{A}$, and $\bm{Y}$. A substitute confounder is obtained by taking the singular value decomposition $\bm{A} = \bm{U} \bm{D} \bm{V}^\top$ and extracting the first component, $\hat\bm{Z} \equiv \bm{A} \bm{V}_{1}$, so that, after rescaling to unit variance, $\hat Z_{i} = \frac{\sqrt{n}}{D_{1}}(V_{11}A_{i1} + V_{21} A_{i2})$.
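The rescaling can be checked mechanically: since $\bm{A}\bm{V}_1 = D_1 \bm{U}_1$, the rescaled substitute confounder is $\sqrt{n}\,\bm{U}_1$, which has unit sample variance. A sketch of ours (the centering step is our choice):

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho = 2000, 0.4

# Draw (A1, A2) with the off-diagonal covariance used above, then center.
cov = np.array([[1.0, rho], [rho, 1.0]])
A = rng.multivariate_normal(np.zeros(2), cov, size=n)
A = A - A.mean(0)

U, D, Vt = np.linalg.svd(A, full_matrices=False)
Z_hat = (np.sqrt(n) / D[0]) * (A @ Vt[0])   # rescaled first component

# Since A @ V1 = D1 * U1, the rescaling gives sqrt(n) * U1: unit variance.
print(np.allclose(Z_hat, np.sqrt(n) * U[:, 0]), Z_hat.var())
```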
Following \citet{wang2019github}, we then estimate a linear regression of $\bm{Y}$ on three predictors: $\bm{A}_1^2$, $\bm{A}_2^2$, and $\hat\bm{Z}^2$. \begin{figure}[h!!!] \centering \includegraphics[width=.48\textwidth]{figs/quadratic_rho.pdf} \hfill \includegraphics[width=.48\textwidth]{figs/quadratic_a.pdf} \caption{\textbf{Under Strong Infinite Confounding, a Parametric Na\"ive Model Weakly Dominates the Deconfounder (left); Na\"ive Regression Converges to the Same Performance as the Number of Treatments Increases (right).} The left plot shows average RMSE for varying $\rho$, the off-diagonal covariances in Equation~\eqref{e:quadratic_outcome}. The right plot shows that as $m$ increases, the deconfounder and na\"ive RMSEs converge.} \label{f:quadratic} \end{figure} The deconfounder captures the interaction of the treatments, as can be seen by expanding the polynomial $\hat{Z}_{i}^2 = \left(\frac{\sqrt{n}}{D_{1}}(V_{11}A_{i1} + V_{21} A_{i2})\right)^2 \ = \ \frac{n}{D_1^2} \left( V_{11}^2 A_{i1}^2 + V_{21}^2 A_{i2}^2 + 2 V_{11} V_{21} A_{i1} A_{i2} \right)$. The expansion is not a linear combination of $A_{i,1}^2$ and $A_{i,2}^2$ due to the inclusion of the interaction $A_{i,1} A_{i,2}$. However, the deconfounder only incorporates partial information about the true functional form of the outcome model, Equation~\eqref{e:quadratic_outcome}. By using the same information more carefully, a better parametric na\"ive estimator can be derived. By properties of the multivariate normal distribution, $\mathbb{E}[Z_i|\bm{A}_i] = \frac{\rho}{1+\rho}(A_{i,1} + A_{i,2})$, so $\mathbb{E}[Z_i|\bm{A}_i]^{2} \propto A_{i,1}^2 + A_{i,2}^2 + 2 A_{i,1} A_{i,2}$. Therefore, the causal effects can also be estimated by a regression of $\bm{Y}$ on $\bm{A}_1^2$, $\bm{A}_2^2$, and $(\bm{A}_1^2 + \bm{A}_2^2 + 2\bm{A}_1 \circ \bm{A}_2)$, where $\circ$ denotes the elementwise product. We refer to this approach as the ``parametric'' alternative; it is also a na\"ive regression, as it does not seek to estimate $\bm{Z}$.
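The two outcome regressions can be compared directly in a sketch (ours, not the \texttt{blei-lab} tutorial code). At $\rho = 0.4$ the estimated first principal direction is close to equal weights, so $\hat{\bm{Z}}^2$ spans the same quadratic terms as the parametric na\"ive regressor and both recover the treatment effects; for negative $\rho$ the principal direction flips sign and the deconfounder's cross term gets the wrong sign:

```python
import numpy as np

rng = np.random.default_rng(5)
n, rho = 20_000, 0.4

# Draw (A1, A2, Z) with equicorrelated covariance, then the quadratic outcome.
cov = np.full((3, 3), rho)
np.fill_diagonal(cov, 1.0)
A1, A2, Z = rng.multivariate_normal(np.zeros(3), cov, size=n).T
Y = 0.4 + 0.2 * A1**2 + 1.0 * A2**2 + 0.9 * Z**2 + rng.standard_normal(n)

def ols(X, y):
    """Slope coefficients from OLS with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef[1:]

# Deconfounder-style regression: Y on A1^2, A2^2 and the squared substitute
# confounder from the SVD of the (centered) treatment matrix.
A = np.column_stack([A1 - A1.mean(), A2 - A2.mean()])
U, D, Vt = np.linalg.svd(A, full_matrices=False)
Z_hat = np.sqrt(n) * U[:, 0]
b_deconf = ols(np.column_stack([A1**2, A2**2, Z_hat**2]), Y)

# Parametric naive regression: E[Z | A]^2 is proportional to (A1 + A2)^2,
# so control for that quadratic directly, without estimating Z.
b_naive = ols(np.column_stack([A1**2, A2**2, (A1 + A2)**2]), Y)

print("deconfounder:", b_deconf[:2], "parametric naive:", b_naive[:2])
```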
As Figure~\ref{f:quadratic} (left panel) shows, this substantially improves over the deconfounder for negative $\rho$, in terms of average root mean squared error, while capturing the same information for positive $\rho$. Moreover, when the latent confounder satisfies strong infinite confounding, the na\"ive regression will approach the deconfounder's performance as the number of treatments grows. The right panel demonstrates this point for the original setting of $\rho=0.4$. While the quadratic example shows it may be theoretically possible to design a simulation where nonlinear information helps the deconfounder improve over an incorrectly specified na\"ive regression, it is difficult. In this tutorial simulation, the factor model performs poorly for negative correlations while the parametric model does well. In more complex simulations involving factor model nonlinearity, reported gains are often modest. \citet{WanBle19} presents a simulation based on Genome Wide Association Study (GWAS) data which shows a maximum RMSE reduction of merely 3\% for the deconfounder over the na\"ive. (In our replication in Supplement~\ref{a:gwas}, we show that in fact, the deconfounder actually performs worse than na\"ive regression for the non-zero coefficients---an unfortunate pattern for applied genetics, where the primary interest is detecting nonzero coefficients and assessing their size.) Similarly, a second simulation study in \citet{Zha19} reports deconfounder RMSE that is only slightly better than na\"ive RMSE (our simulations point to the opposite conclusion, that the deconfounder does slightly \textit{worse}; see Supplement~\ref{a:med2}). \subsubsection{Takeaways} It is sometimes possible for the deconfounder to improve over incorrectly specified na\"ive regression in finite samples---if the deconfounder learns the correct nonlinearity. However, in practice, making use of this fact is impossible without extensive knowledge of the data generating process.
Moreover, even the deconfounder papers show, at best, marginal improvements. When such extensive knowledge is available, a well-specified na\"ive regression making use of that knowledge will also perform well. We conjecture that if the high-dimensional treatments lie on a low-dimensional manifold \textit{and} the correct factor model specification is known, it might be more efficient to model the relationship between $\hat{\bm{Z}}$ and $\bm{Y}$ semiparametrically (as in the deconfounder) rather than directly modeling high-dimensional $\bm{A}$ and $\bm{Y}$ semiparametrically (as in na\"ive regressions). However, this has yet to be demonstrated in any simulation. \subsection{Takeaways and Additional Empirical Results} We have replicated every simulation across the deconfounder papers for which data is available, and we find no evidence that the deconfounder consistently outperforms the na\"ive regression. In several simulations---like the medical deconfounder and quadratic examples highlighted above---the deconfounder performs better at some parameter values but worse at others. Full details of all six replicated simulations are in the supplement. Separately, \citet{WanBle19} use posterior predictive checks (PPCs) of the factor model, arguing these will assess when the deconfounder can improve estimates. If this claim were true, it would allow highly flexible density estimation to be used, even when the true parametric form of the factor model were unknown---as is always the case in practice. However, the claim is not true. Theoretically, Proposition~\ref{prop:bias_subset} proves that for subset deconfounders, this is impossible because the performance depends on untestable assumptions about the treatment effects, not the factor model. And for the full deconfounder, PPCs are ill-suited to evaluating conditional independence of $\bm{A}$ given $\hat{\bm{Z}}$, perhaps the most relevant observable property of the factor model \citep{imai2019discussion}.
Empirically, we further present a new simulation in Supplement~\ref{a:ppc} demonstrating that the PPC does not reliably indicate whether a deconfounder will perform well, either in absolute terms or relative to na\"ive regression. Our asymptotic results suggested that the deconfounder would not outperform the na\"ive regression, and simulations have shown this to hold in finite samples. In nonlinear settings it is possible to exploit parametric information in the factor model, but doing so is difficult in practice, and the same information can be used to construct a comparably performing na\"ive regression. It remains possible that a simulation could establish a particular data generating process where the deconfounder performs better than na\"ive regression, but this has yet to be demonstrated. \section{Introduction} \input{introduction.tex} \section{The Deconfounder and Multiple Causal Inference} \label{s:deconfounder} \input{deconfounder.tex} \section{Asymptotic Theory Justifies the Na\"ive Regression Whenever The Deconfounder Can Be Used} \label{s:asymptotic} \input{asymptotic_theory.tex} \section{Deconfounder Does Not Consistently Outperform Na\"ive Regression in Finite Samples} \label{s:finite} \input{finite_sample} \section{Neither Na\"ive Regression nor Deconfounder is Currently Suitable For Real World Applications} \label{s:applications} \input{applications.tex} \section{Discussion} \input{conclusion} \section{Simulation Details} \label{app:sim} Our simulation code is based on replication materials for \citet{WanBle19} that were provided to us by Wang and Blei in December of 2019 and two simulations that were publicly posted as tutorials on the \texttt{blei-lab} GitHub page from September 20, 2018 \citep{wang2019github} until their removal on March 22nd, 2020, after we downloaded and analyzed them.
After reading our working paper and discussing our analyses with them, Wang, Zhang and Blei posted reference implementations on July 2nd, 2020 at \url{https://github.com/blei-lab/deconfounder_public} and \url{https://github.com/zhangly811/Medical_deconfounder_simulation}. Because the reference implementation was produced in response to our analyses, all references are with respect to the December 2019 code and the details provided in the published papers. This appendix provides additional details on the simulations and attempts to explain why our results deviate from the previously published results. In this introduction to the appendix, we define a few terms that we will use repeatedly. We then detail common deviations between our simulations and the original designs and provide discussion of why our results differ. We then run through each simulation in turn. \noindent \textbf{Common Terms:} \begin{itemize} \item Na\"ive Regression\\ When not otherwise specified the na\"ive regression is an OLS regression of the outcome on all the treatments. When creating confidence intervals they are always $95\%$ intervals calculated using the classical covariance matrix estimated under homoskedasticity. \item Oracle Regression\\ This follows the same setup as the na\"ive regression but also controls for the unmeasured confounder. \item PCA\\ When we compute a principal components analysis we always center and scale the treatments. We take the top $k$ principal components where $k$ is set to the true number of unobserved components. This is obviously not feasible in real world settings, but is as generous as possible to the deconfounder. \item pPCA \\ To compute probabilistic Principal Components Analysis we follow the \texttt{rstan} code provided to us by Wang and Blei for replicating the smoking example in \citet{WanBle19}. We remove the computation of heldout samples for reasons that will be explained below. 
We estimate the model using automatic differentiation variational inference with their convergence settings. When unspecified we use the posterior mean as our estimate of $\hat{Z}_i$. \item Deconfounder \\ When not otherwise specified we are using the substitute confounder version of the deconfounder, not the reconstructed causes. \end{itemize} \subsection{Common Deviations in Simulation Setups} \label{a:common_deviations} We have tried to remain as faithful as possible to the original simulation setups, but we have made changes where necessary to assess the deconfounder's performance. In each simulation and application, we detail all deviations in procedures---here we summarize the most common such deviations. \paragraph{Stabilizing Estimation} There are two areas where estimation issues become a concern in the deconfounder: the estimation of the factor model and the estimation of the outcome model. Instability in the factor model arises for a number of reasons. In at least two of the simulations (Medical Deconfounder 2 and Smoking), the original design calls for simulating factor models which have more latent dimensions than observed data dimensions ($m$). In these settings the extra factors are just noise, as $m$ factors would be sufficient to exactly reconstruct the data under a frequentist model. In these settings, we report results for the models with $k > m$ but caution against overinterpreting the results. See the smoking simulation for an examination of factor model instability in this setting. In other settings (GWAS, Actors and Breast Cancer), the code uses a procedure that analyzes a subset of the data when fitting factor models and updates in batches. In Breast Cancer and Actors we replace their procedure with code that analyzes the entire dataset at once. In many of the applications we replace their PPCA with standard PCA where possible to avoid the noise from the posterior approximation.
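In pseudocode terms, the na\"ive and oracle regressions with classical intervals defined above amount to the following (a numpy-only sketch of ours; the toy confounded data-generating process is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(6)

def ols_ci(X, y, z=1.96):
    """OLS with classical 95% confidence intervals under homoskedasticity."""
    X1 = np.column_stack([np.ones(len(y)), X])
    XtX_inv = np.linalg.inv(X1.T @ X1)
    b = XtX_inv @ (X1.T @ y)
    resid = y - X1 @ b
    s2 = resid @ resid / (len(y) - X1.shape[1])
    se = np.sqrt(s2 * np.diag(XtX_inv))
    return b, b - z * se, b + z * se

# Toy confounded DGP: two treatments share an unmeasured confounder Z.
n = 5000
Z = rng.standard_normal(n)
A = 0.5 * Z[:, None] + rng.standard_normal((n, 2))
Y = A[:, 0] - A[:, 1] + Z + rng.standard_normal(n)   # true effects (1, -1)

b_naive, lo_n, hi_n = ols_ci(A, Y)                         # naive: treatments only
b_oracle, lo_o, hi_o = ols_ci(np.column_stack([A, Z]), Y)  # oracle: also controls Z
print("naive:", b_naive[1:], "oracle:", b_oracle[1:3])
```

In this confounded toy model the oracle recovers the true effects, while the na\"ive intervals sit well away from them.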
Many of the original simulations estimate the na\"ive and oracle regressions with Bayesian linear regression with normal priors fit with black-box variational estimation in \texttt{Stan}. We always use the computationally cheaper and more stable OLS, which explains why in several simulations (e.g. Medical Deconfounder) the original papers report oracle coverage rates that are below the nominal levels. Instability in the deconfounder outcome model is particularly high because of near-perfect collinearity between the treatments and the substitute confounder. This problem is in turn exacerbated by the black-box variational estimation in \texttt{Stan}. This drives our development of the PCA+CV-Ridge deconfounder variant we deploy in the Medical Deconfounder 1 simulation. \paragraph{Probing the Simulations} We extend many of the simulations to more thoroughly probe the properties of the estimator. A number of the original simulations (Quadratic, Logistic, Medical Deconfounder 1) only report results from a single realization of the data generating process, while we repeat the process hundreds or thousands of times. We also extend our search over reasonable parameter spaces of the original data generating process by examining different numbers of treatments and levels of confounding (Quadratic), different levels of noise added to the substitute confounder (Logistic) and different coefficients in the outcome model (Medical Deconfounder 1). In other cases, we use the same simulation designs, but explore the results in different ways (e.g. breaking out GWAS results by causal and non-causal coefficients). \paragraph{Removing the Heldout Procedure} Many of the original simulations describe holding out $20\%$ of the data for predictive checks. We generally assume the correct latent specification and are interested in comparing all models that are presented in the original paper, so we skip this step.
See also our discussion in Supplement~\ref{app:smokingsim} for an explanation of why the heldout procedure implemented in the smoking example yields different results. \subsection{Medical Deconfounder Simulations} The simulations are both taken from Section 3 of \citet{Zha19} on the medical deconfounder. They are used in conjunction with two case studies to establish the performance of the medical deconfounder. We recreate these simulations from details provided in the paper (reusing \texttt{Stan} code provided to us by Wang and Blei for the smoking example). \subsubsection{Simulation Study 1: One Real Cause} \label{a:med1} \textbf{Summary:} We replicate and extend a simulation design from \citet{Zha19} intended to support the medical deconfounder, using penalized regression to make the deconfounder estimable. The true data generating process for each of 1000 patients indexed by $i$ with true scalar confounder $Z_i$ is as follows. \begin{align*} Z_i &\sim \text{Normal}(0,1) \\ A_{1,i} &\sim \text{Normal}(0.3Z_i,1) \\ A_{2,i} &\sim \text{Normal}(0.4Z_i,1) \\ Y_i &\sim \text{Normal}(0.5Z_i + 0.3A_{2,i},1) \end{align*} In the actual paper, the notation uses a single error term $\epsilon_i$ shared across $A_1$, $A_2$, and $Y$, but we take this to be a typo. The original simulation fits a probabilistic PCA model with $k=1$ using black box variational inference \citep{ranganath2014black} in Edward and fits the outcome model using ADVI \citep{kucukelbir2017automatic}. To keep the entire workflow in R, we fit the probabilistic PCA model with $k=1$ in \texttt{rstan} using ADVI based on the probabilistic PCA model provided to us for replicating the smoking example. We modified the code to remove the heldout procedure designed for posterior predictive checks but otherwise kept the model the same, with the same priors. We use the default settings in \texttt{rstanarm} for the outcome regression.
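This data-generating process, and the na\"ive and oracle benchmarks, can be checked directly. The following Monte Carlo sketch (ours; the number of replications is an illustrative choice) reproduces the na\"ive biases of roughly $0.12$ and $0.16$ reported in Table~\ref{tab:med1}:

```python
import numpy as np

rng = np.random.default_rng(7)

def one_draw(n=1000):
    """One draw from the Simulation Study 1 DGP (independent errors)."""
    Z = rng.standard_normal(n)
    A1 = 0.3 * Z + rng.standard_normal(n)
    A2 = 0.4 * Z + rng.standard_normal(n)
    Y = 0.5 * Z + 0.3 * A2 + rng.standard_normal(n)
    return A1, A2, Z, Y

def ols(X, y):
    """Slope coefficients from OLS with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef[1:]

naive, oracle = [], []
for _ in range(300):   # 300 replications for speed; the main text uses 1,000
    A1, A2, Z, Y = one_draw()
    naive.append(ols(np.column_stack([A1, A2]), Y))
    oracle.append(ols(np.column_stack([A1, A2, Z]), Y)[:2])

truth = np.array([0.0, 0.3])
for name, est in (("naive", np.array(naive)), ("oracle", np.array(oracle))):
    print(name, "bias:", (est.mean(0) - truth).round(3),
          "RMSE:", np.sqrt(((est - truth) ** 2).mean(0)).round(3))
```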
We increased the maximum number of iterations on both the factor model and the outcome model by a factor of 10 to try to stabilize the estimates. We introduce the PCA + CV-Ridge estimator. We estimate PCA with $k=1$ and then estimate a ridge regression using \texttt{glmnet} \citep{glmnet}, choosing the penalty parameter by cross-validation. In the paper, \citet{Zha19} present one realization of this data-generating process; we repeat this process 1000 times to create a simulation. Table 2 of \citet{Zha19} contains their results. The results are not directly comparable, in that they show a single realization of a set of estimators while we show more systematic results. Finally, we show an additional simulation where the true treatment effects are $(-0.3, 0.3)$ instead of $(0, 0.3)$ to demonstrate the results are sensitive to the settings of the true coefficients, consistent with our results about the subset deconfounder. \begin{framed} \noindent \textbf{Deviations} \begin{itemize} \item we take more than one draw from the simulation and report aggregate quantities \item we correct a typo in the manuscript and do not share errors across treatments \item we use \texttt{rstan} instead of \texttt{Edward} for pPCA \item we increase the maximum iterations of the Bayesian procedures \item we add the PCA + CV-Ridge estimator \item we assess in terms of bias, standard deviation and RMSE rather than with coverage or $p$-values.
\item we don't hold out $20\%$ of the data or do predictive checks \item we use OLS rather than bayesian linear regression for our na\"ive and oracle estimators \end{itemize} \noindent \textbf{Their Results:} They argue, based on one draw from their simulated process, that the deconfounder leads to the right conclusions and the na\"ive leads to the wrong conclusions.\\ \noindent \textbf{Our Results:} We demonstrate that even with the PCA + CV-Ridge Estimator, which provides large improvements over the medical deconfounder, the performance of the deconfounder is in practice no better than the na\"ive regression. \end{framed} \subsubsection{Simulation Study 2: A multi-medication simulation} \label{a:med2} \textbf{Summary: }We replicate and extend a simulation design created to support the medical deconfounder from \citet{Zha19} which uses a nonlinear functional form to be estimable. \citet{Zha19} simulate 50 total treatments of which only 10 have a non-zero effect on the outcome. They use the data generating process, \begin{align*} Z_{i,k} &\sim \text{Normal}(0,1), &k=1\dots10\\ A_{i,j} &\sim \text{Bernoulli}\left(\sigma\left(\sum_{k=1}^{10}\lambda_{k,j}Z_{i,k} \right) \right), &j=1\dots50\\ Y_i &\sim \text{Normal}\left(\sum_{j=1}^{10} \beta_jA_{ij} + \sum_{k=1}^{10} \gamma_kZ_{i,k},1\right)\\ \lambda_{k,j} &\sim \text{Normal}(0,.5^2) \\ \gamma_k,\beta_j &\sim \text{Normal}(0,.25^2) \end{align*} where $\sigma$ is the sigmoid function. Only the first 10 treatments have non-zero coefficients---these are the medications that work. \citet{Zha19} report results for the oracle, the na\"ive estimator, a poisson matrix factorization (PMF) with $K=450$ and a deep exponential family---the latter two implemented in \texttt{Edward}. We estimate the Poisson matrix factorization using the \texttt{R} package \texttt{poismf} with $L_2$ regularization (\texttt{l2\_reg=.01}) and $K=450$ run for 100 iterations.
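The multi-medication data generating process can likewise be sketched in \texttt{numpy}. This is an illustrative translation with our own variable names, not the \texttt{Edward} or \texttt{poismf} implementations:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_treat, k = 1000, 50, 10

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Latent confounders and factor loadings
Z = rng.normal(0.0, 1.0, (n, k))
lam = rng.normal(0.0, 0.5, (k, n_treat))

# Binary treatments driven by the latent confounders
A = rng.binomial(1, sigmoid(Z @ lam))

# Only the first 10 treatments have non-zero effects on the outcome
beta = np.zeros(n_treat)
beta[:k] = rng.normal(0.0, 0.25, k)
gamma = rng.normal(0.0, 0.25, k)
Y = rng.normal(A @ beta + Z @ gamma, 1.0)
```

All 50 treatments depend on the same 10-dimensional $\bm{Z}$, so the confounding affects zero- and non-zero-effect treatments alike.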
All outcome regressions are computed using standard linear regression with homoskedastic standard errors. Coverage is computed with respect to $95\%$ confidence intervals. \begin{table}[ht] \centering \begin{tabular}{|r|c|ccc|} \hline \hline & & \multicolumn{3}{c|}{Coverage Proportion}\\ \hline & RMSE & All & Non-Zero & Zero \\ \hline Oracle & 0.03 & 0.95 & 0.95 & 0.95 \\ Na\"ive & 0.13 & 0.42 & 0.41 & 0.42 \\ Deconfounder & 0.13 & 0.43 & 0.43 & 0.44 \\ \hline \hline \end{tabular} \caption{\textbf{Replication of Simulation Study 2 of \citet{Zha19}.} The deconfounder is estimated using a 450-dimensional poisson matrix factorization. Coverage rates for nominal 95\% intervals are reported separately for zero coefficients (no causal effect) and non-zero coefficients.} \label{tab:med2} \end{table} We calculate RMSE by calculating all the squared errors $(\hat{\beta}_j - \beta_j)^2$ and taking the square root of the mean over all coefficients in all simulations. We simulate the entire data generating process each time and conduct 1000 simulations. We note that the original results in \citet{Zha19} do not show the oracle achieving $95\%$ coverage---in fact, they show only $50\%$ coverage for the non-zero coefficients whereas we achieve the nominal rate. The results of the deconfounder and the na\"ive estimator are essentially indistinguishable in our setting, but we note that they aren't particularly different as reported in the original paper, either. Because the factor models are fit with $k=450$ to 50-dimensional data, the exact form of the prior specification will have a huge impact on the values of the substitute confounder. In an unpenalized setting, $k=50$ would be sufficient to exactly reconstruct the observed data. \begin{framed} \noindent \textbf{Deviations} \begin{itemize} \item we report only the poisson matrix factorization (not the deep exponential family) and use \texttt{poismf} instead of \texttt{Edward}.
\item we don't hold out $20\%$ of the data or do predictive checks \item we use OLS rather than bayesian linear regression for all outcome regressions \end{itemize} \noindent \textbf{Their Results:} They argue that the deconfounder produces better results than the na\"ive.\\ \noindent \textbf{Our Results:} We cannot compare to the deep exponential family, but the poisson matrix factorization does not show the improvement over the na\"ive that they claim. We show better RMSE for all estimators as well as better coverage. \end{framed} \subsection{Tutorial Simulations} The simulations are IPython notebooks in a folder marked \texttt{toy simulations}. Each shows a simulated data generating process and then walks through a single draw, comparing the na\"ive regression (labeled ``noncausal estimation'') with deconfounder estimates based on reconstructed causes and the substitute confounder. We understand that public tutorials need to be fast to run and thus may often be less nuanced than authors would prefer. That said, we think these tutorials are important to replicate because they are the way many potential users would be exposed to the deconfounder and would come to understand its properties. \subsubsection{Logistic Simulation} \label{a:logisticsim} \textbf{Summary:} This simulation adds noise to the substitute confounder to make the model estimable---we explore how variations in the amount of noise affect simulation results. The clearest application of the noised deconfounder is in the logistic tutorial simulation \citep{wang2019github}.
The simulation uses the following data generating process to create 10,000 observations: \begin{align*} (X_{1}, X_{2}, Z) &\sim \text{Multivariate Normal}\left((0,0,0), \begin{pmatrix} 1 & 0.4 & 0.4 \\ 0.4 & 1 & 0.4 \\ 0.4 & 0.4 & 1 \\ \end{pmatrix}\right) \\ y &\sim \text{Bernoulli}\left( \frac{1}{1 + \exp\left(-(.4 + .2X_1 + 1X_2 + .9Z)\right)} \right) \end{align*} The substitute confounder is found by using PCA of ($X_1, X_2$) and then adding random noise, such that \begin{align*} \hat{Z} &\sim \text{Normal}(\underbrace{\eta_{1} X_{1} + \eta_{2} X_{2}}_{\text{PCA}}, .1^2) \end{align*} The simulation then estimates a logistic regression with a linear predictor ($X_1,X_2,\hat{Z}$). The random noise is introduced to break the perfect collinearity between the treatments and the substitute confounder. For a single draw of the data generating process, the tutorial simulation claims that with the na\"ive regression ``none of the confidence intervals include the truth,'' but with the deconfounder ``both of the confidence intervals (for $X_1, X_2$) include the truth.'' The implication is that the deconfounder improves upon the na\"ive regression estimates. We repeat the simulation 100,000 times to summarize properties more generally and vary the standard deviation of the noise added to the substitute confounder, which the tutorial sets to $.1$, over $(1,.1,.01,.001)$. We extend the tutorial to assess bias, standard deviation, coverage and RMSE, each at different values of the noise parameter, with results shown in Table~\ref{tab:logistic}. At no point does the overall performance of the deconfounder exceed that of the na\"ive estimator. For large noise, the deconfounder approaches the performance of the na\"ive estimator; as the noise grows small and collinearity increases, estimator variance and RMSE get large very quickly.
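A minimal sketch of this tutorial's design, assuming \texttt{numpy} and variable names of our choosing (the original notebook uses its own implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
cov = np.array([[1.0, 0.4, 0.4],
                [0.4, 1.0, 0.4],
                [0.4, 0.4, 1.0]])

# (X1, X2, Z) are jointly normal; Z plays the unobserved confounder
X1, X2, Z = rng.multivariate_normal(np.zeros(3), cov, size=n).T
p = 1.0 / (1.0 + np.exp(-(0.4 + 0.2 * X1 + 1.0 * X2 + 0.9 * Z)))
y = rng.binomial(1, p)

# Substitute confounder: first principal component of (X1, X2) plus noise;
# the added noise is what breaks perfect collinearity with the treatments
X = np.column_stack([X1, X2])
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
z_hat = Xc @ Vt[0] + rng.normal(0.0, 0.1, n)
```

A logistic regression of \texttt{y} on \texttt{(X1, X2)} gives the na\"ive estimates and on \texttt{(X1, X2, z\_hat)} the deconfounder estimates; shrinking the noise standard deviation drives the design matrix toward singularity, which is the collinearity mechanism discussed above.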
\begin{table}[ht] \centering \begin{tabular}{|lc|cccc|} \hline \hline {} & {} & \multicolumn{2}{c}{Treatment 1}& \multicolumn{2}{c|}{Treatment 2} \\ {} & Noise S.D. & Deconfounder & Na\"ive & Deconfounder & Na\"ive \\ \hline \multirow{4}{*}{Bias} & $10^{-3}$ & 0.200 & \multirow{4}{*}{0.210} & 0.116 & \multirow{4}{*}{0.126} \\ & $10^{-2}$ & 0.202 & & 0.118 & \\ & $\bm{10^{-1}}$ & 0.210 & & 0.127 & \\ & $10^{0}$ & 0.210 & & 0.126 & \\ \hline \multirow{4}{*}{Std. Dev.} & $10^{-3}$& 16.566 & \multirow{4}{*}{0.026} & 16.567 & \multirow{4}{*}{0.030}\\ & $10^{-2}$ & 1.659 & & 1.659 & \\ & $\bm{10^{-1}}$ & 0.167 & & 0.168 & \\ & $10^{0}$ & 0.031 & & 0.035 & \\ \hline \multirow{4}{*}{Coverage} & $10^{-3}$ & 0.949 & \multirow{4}{*}{0.000} & 0.949 & \multirow{4}{*}{0.012} \\ & $10^{-2}$ & 0.948 & & 0.949 & \\ & $\bm{10^{-1}}$& 0.759 & & 0.884 & \\ & $10^{0}$ & 0.000 & & 0.042 & \\ \hline \multirow{4}{*}{RMSE} & $10^{-3}$ & 16.568 & \multirow{4}{*}{0.211} & 16.567 & \multirow{4}{*}{0.130} \\ & $10^{-2}$ & 1.671 & & 1.663 & \\ & $\bm{10^{-1}}$ & 0.269 & & 0.211 & \\ & $10^{0}$ & 0.212 & & 0.131 & \\ \hline\hline \end{tabular} \caption{\textbf{Logistic Tutorial Simulation.} 100,000 simulations are summarized in terms of bias, standard deviation of the sampling distribution, coverage of $95\%$ confidence intervals, and root mean squared-error for various levels of simulated noise (the bolded row, $10^{-1}$, is the tutorial's setting). As the noise gets small, the standard deviation and the RMSE of the deconfounder explode (the estimator approaches perfect collinearity). As the noise increases, the deconfounder converges to the performance of the na\"ive estimator. \label{tab:logistic}} \end{table} To help explain the discrepancy in our results, we note that the single draw shown in the workbook is unusual in terms of its error. The argument made in the writeup is that the confidence intervals of the deconfounder contain the truth while the na\"ive estimator's do not.
This is a fairly common occurrence---it holds in approximately $75\%$ of the simulations because the na\"ive estimator has coverage close to zero. However, in only $42\%$ of the simulations did the deconfounder produce answers closer to the truth than the na\"ive (along both dimensions). The deconfounder is unusually close to (and the na\"ive estimator unusually far from) the truth in terms of mean absolute error across the two coefficients. The deconfounder only performs as well as reported $8\%$ of the time and the na\"ive only performs as poorly as reported $16\%$ of the time. Thus the reported draw is not a representative indicator of performance. \begin{framed} \noindent \textbf{Deviations} \begin{itemize} \item we repeat the process to create a simulation \item we examine only the substitute confounder and not the reconstructed causes \item we explore different noise levels \end{itemize} \noindent \textbf{Their Results:} The simulation shows an example where the confidence interval for the deconfounder covers the truth and the na\"ive estimator does not. \\ \noindent \textbf{Our Results:} We show that the coverage result is relatively typical but that the one draw shown is abnormally accurate for the deconfounder. By evaluating across levels of noise added to the substitute confounder we demonstrate that the results are highly sensitive to the noise level and at no level better than the na\"ive on bias, variance or RMSE. \end{framed} \subsubsection{Quadratic Simulation} \label{app:quadsim} \textbf{Summary:} This simulation uses a transformation of PCA to make the model estimable---we explore how variations in simulation parameters affect results.
10,000 observations are simulated from the following data-generating process, \begin{align*} \left[\begin{array}{l} A_{i,1} \\ A_{i,2} \\ Z_i \end{array} \right] &\sim \mathcal{N}\left( \left[\begin{array}{l} 0 \\ 0 \\ 0 \end{array} \right] , \left[\begin{array}{lll} 1 & \rho & \rho \\ \rho & 1 & \rho \\ \rho & \rho & 1 \end{array} \right] \right) \nonumber \\[1ex] Y_i &\sim \mathcal{N}(0.4 + 0.2 A_{i,1}^2 + 1 A_{i,2}^2 + 0.9 Z_i^2,\ 1) \label{e:quadratic_outcome} \end{align*} for $\rho=.4$. The substitute confounder, $\hat{Z}$, is based on a PCA of $(A_{1}, A_{2})$: $\hat{Z} = \eta_{1} A_{1} + \eta_{2} A_{2}$. The outcome regression is $Y = \tau_{0} + \tau_{1} A_{1}^{2} + \tau_{2}A_{2}^{2} + \gamma \hat{Z}^2 + \epsilon_{i}$. We modify this simulation by adding Normal(0,1) noise to the outcome. We also evaluate the performance of the parametric specification $\hat{Z}_{(\text{alt})} = A_{1}^2 + A_{2}^2 + 2 A_{1} A_{2}$ and show that it has superior performance. We also demonstrate that the deconfounder breaks down for negative values of the correlation coefficient. In a footnote of the main text, we noted that the proposed inference procedure uses PCA on $A_1,A_2$ but estimates treatment effects on their squares. This either presumes that we are interested in the treatment effect of the squared treatment or that we have access to the square root of our real treatments of interest. In that case, however, we lose the true sign of $A_1, A_2$. To approximate this scenario, we ran a version of the analysis where we estimate PCA on the absolute values of the variables ($|A_1|$, $|A_2|$), with results shown in Table~\ref{tab:sqrt}. This leads to an approximately five-fold increase in the RMSE for the deconfounder and roughly a doubling in RMSE for the parametric specification.
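The absolute-value variant just described can be sketched as follows: an illustrative \texttt{numpy} version of PCA on $(|A_1|, |A_2|)$ followed by the squared-term outcome regression, with OLS standing in for the tutorial's estimation code and variable names of our choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 10_000, 0.4
cov = np.full((3, 3), rho) + (1.0 - rho) * np.eye(3)

# Quadratic data generating process with outcome noise
A1, A2, Z = rng.multivariate_normal(np.zeros(3), cov, size=n).T
Y = rng.normal(0.4 + 0.2 * A1**2 + 1.0 * A2**2 + 0.9 * Z**2, 1.0)

# Substitute confounder from PCA of (|A1|, |A2|): the sign of A is lost
Aabs = np.column_stack([np.abs(A1), np.abs(A2)])
Ac = Aabs - Aabs.mean(axis=0)
_, _, Vt = np.linalg.svd(Ac, full_matrices=False)
z_hat = Ac @ Vt[0]

# Outcome regression on the squared treatments and squared substitute
X = np.column_stack([np.ones(n), A1**2, A2**2, z_hat**2])
tau = np.linalg.lstsq(X, Y, rcond=None)[0]
```

Because $\hat{Z}$ enters the outcome regression squared, the sign ambiguity of the principal component is harmless, but the centering of $(|A_1|, |A_2|)$ before PCA is not.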
The relatively worse performance of the deconfounder is due to centering before performing PCA, but the choice not to center can only be exploited if we are sure that the underlying $\bm{A}$ was centered, which requires knowledge we cannot have. \begin{table}[ht] \centering \begin{tabular}{r|cc} & \multicolumn{2}{c}{Treatment Number} \\ Model & 1 & 2 \\ \hline Na\"ive & 0.1248 & 0.1252 \\ Oracle & 0.0072 & 0.0074 \\ Parametric & 0.0425 & 0.0431 \\ Deconfounder & 0.1028 & 0.1033 \\ \end{tabular} \caption{RMSE of Quadratic Simulation with Original Settings ($\rho=.4$ and $m=2$) with PCA of Absolute Value of $\bm{A}_1,\bm{A}_2$.} \label{tab:sqrt} \end{table} \begin{framed} \noindent \textbf{Deviations} \begin{itemize} \item we simulate the outcome with error \item we repeat the process to create a simulation \item we examine only the substitute confounder and not the reconstructed causes \end{itemize} \noindent \textbf{Their Results:} The original simulation shows for a single draw that the deconfounder is closer to the truth than the na\"ive, and they claim that the confidence intervals contain the truth. \\ \noindent \textbf{Our Results:} We extend the simulation and, while performance is in fact strong at the given settings, changing the correlation between the $\bm{A}$'s to be moderately negative causes the deconfounder to perform much worse than the na\"ive. As our theory predicts, when the number of treatments is large, the difference between the deconfounder and the na\"ive regression disappears. \end{framed} \subsection{GWAS}\label{a:gwas} \textbf{Summary:} \citet{WanBle19} apply the deconfounder to study the effects of genes on traits. We use replication code provided to us by Wang and Blei to perform a simulation under similar conditions and we find that the deconfounder and the na\"ive regression perform almost identically. On closer inspection this is expected.
Figure 4 of \citet{WanBle19} shows nearly identical performance between the deconfounder and the na\"ive regression. \subsubsection{Overview} As a motivating example of the deconfounder, \citet{WanBle19} invoke the use of methods similar to the deconfounder in the genetics literature to explain the effect of genes on the expression of traits. The genetics database that was used to produce the simulations in the original paper could not be shared with us because of restrictions on data access. Instead, Wang and Blei shared a purely synthetic simulation procedure intended to replicate the characteristics of the simulations in the original paper, as well as code for several factor models applied to this data set. In this section we describe our results using this synthetic version of the GWAS simulation. We find that the deconfounder offers essentially identical performance to the na\"ive regression. This is not surprising, because Figure 4 of \citet{WanBle19} shows that the RMSEs of the deconfounder and the na\"ive regression are nearly identical. \subsubsection{Simulation Procedure} We follow the description of the data generating process in \citet{WanBle19} for the high SNR setting, using a synthetic genetic simulation provided to us to generate data under the Balding-Nichols procedure. We generate data with 5000 individuals and 5000 genetic markers, with genetic and environmental variation set to 0.4, and with 10\% of the genes assumed to have a causal effect on the outcome. Note that, because of this assumption, any method that shrinks coefficient estimates towards zero will obtain better performance on the non-causal genes, so we divide our results into causal and non-causal genes.
We use the provided code to estimate substitute confounders for deep exponential families (with a 100-dimensional substitute confounder), PCA (10-dimensional), poisson matrix factorization (10-dimensional), linear factor analysis (10-dimensional), and probabilistic principal components (5-dimensional). We avoid the use of the holdout procedure described in the code because it incorrectly sets all held out values to be zero. The posterior predictive checks, as implemented in the code, suggest that the factor models have unrealistic model fits. We then generate a single set of effects on the causal genes $\bm\beta \sim \mathcal{N}(0, 1)$ and confounding variables $\boldsymbol{\lambda}$ using a slightly modified function from Wang and Blei, where the modification enabled us to draw the coefficients only once. Following the original simulation design, we set all non-causal coefficients to zero. Using the draws of $\bm\beta$ and $\boldsymbol{\lambda}$ we simulated the outcome vector, $\bm{Y}$, 100 times. As in the original simulation, we use a ridge regression and nonlinear functional form to render the deconfounder estimable. In each simulation we estimate a ridge regression, cross-validating to obtain the penalty parameter. For the na\"ive regression we condition on all 5,000 genes. For each of the factor models we also include the corresponding estimated substitute confounders. We write our own code to estimate the average root mean squared error, which we display in Table \ref{t:genes}. Table \ref{t:genes} shows that the na\"ive regression outperforms the deconfounder on this simulation on the genes that have a causal effect on the outcome. On the non-causal genes the other models perform slightly better, but all models offer a nearly identical improvement over the bivariate na\"ive regression of the outcome on one gene at a time. \begin{table}[hbt!]
\caption{Using the Synthetic Genetic Data Set, The Deconfounder Offers No Improvement Over Na\"ive Regression} \label{t:genes} \centering \begin{tabular}{l|ccc|} \hline \hline & \multicolumn{3}{c|}{RMSE} \\ \hline & Causal & Non-Causal & Overall \\ \hline Na\"ive & 0.737 & 0.127 & 0.263 \\ Oracle & 0.742 & 0.125 & 0.263 \\ Deconfounder (DEF) & 0.746 & 0.123 & 0.263 \\ Deconfounder (PCA) & 0.745 & 0.123 & 0.263 \\ Deconfounder (PMF) & 0.745 & 0.123 & 0.263 \\ Deconfounder (LFA) & 0.746 & 0.123 & 0.263 \\ Bivariate Na\"ive & 1.576 & 1.607 & 1.604 \\ \hline\hline \end{tabular} \end{table} \begin{framed} \noindent \textbf{Deviations} \begin{itemize} \item We use a synthetic simulation created to approximate the simulation in the original paper \item We draw the coefficients $\bm\beta$ and $\boldsymbol{\lambda}$ once \item We evaluate bias and RMSE (see discussion in Supplement~\ref{a:definitions}) \item We use a ridge regression with a cross-validation-selected penalty, using mean squared error as the cross-validation statistic \item Following the genetics literature, we examine performance differences on causal and non-causal genes \end{itemize} \noindent \textbf{Takeaways} \begin{itemize} \item The deconfounder offers marginal improvements on the non-causal genes and performs worse than the na\"ive estimator on the causal genes. \item \citet{WanBle19} use this simulation as evidence that DEFs are useful. While DEFs do provide a better estimate of the non-causal genes, they are worse on the causal genes and offer---at best---marginal improvements in effect estimation. \end{itemize} \end{framed} \subsection{Subset Deconfounder} \label{a:subsetsim} We use a new simulation design to examine the finite sample properties of the subset deconfounder under the linear-linear data generating process. We create a single-dimensional confounder, $\bm{Z}$, and allow this confounder to satisfy strong infinite confounding.
Our simulation is designed to demonstrate how the average RMSE of the subset deconfounder depends on the underlying treatment effect sizes. In all of our settings we set each $\theta_{m} = 10$, ensuring strong infinite confounding is satisfied. We suppose $A_{i,m} = \theta_{m} Z_{i} + \nu_{i,m}$ where $\nu_{i,m} \sim \mathcal{N}(0, 0.01)$. We then generate outcome data using the linear outcome model with the following coefficient values: \begin{enumerate} \item $\beta_{m} = 10$ \item $\beta_{m} = 100$ \item $\beta_{m} \sim \mathcal{N}(1, 4)$ \item $\beta_{m} = \frac{1}{m} $ \end{enumerate} We suppose $\gamma= 10$ for all simulations and that $Y_{i} = \bm{A}_{i} \bm\beta + Z_{i} \gamma + \epsilon_{i}$ where $\epsilon_{i} \sim \mathcal{N}(0, 0.01)$. The results from this simulation align exactly with the predictions from Proposition 4. Specifically, from Proposition 4 we predict a bias of magnitude 10 for $\beta_{m} = 10$, of magnitude 100 for $\beta_{m} = 100$, and of magnitude 1 for $\beta_{m} \sim \mathcal{N}(1, 4)$; for $\beta_{m} = \frac{1}{m}$ the bias at $M$ treatments will be $\text{bias}_M = \frac{ \sum_{m=1}^{M} \frac{1}{m} }{ M}$, the average value of $\beta_{m}$. \section{Smoking Simulation} \label{app:smokingsim} \textbf{Summary:} We replicate the first empirical case study in \citet{WanBle19}, a semi-synthetic dataset about the causes of smoking. We argue that the simulation design is not informative about the performance of the deconfounder because (1) the factor models often have $k \geq m$, and (2) the controls used to compare with a strategy of measuring confounders are themselves uninformative about the confounding. We first quickly review the details of the original design based on the paper and replication code provided by Wang and Blei in December 2019. We then briefly detail our argument along with some additional results.
\subsection{Original Design} The smoking simulation is a semi-synthetic study that uses data from the 1987 National Medical Expenditures Survey (NMES) to generate a real joint distribution of three variables, which are combined with a linear model to create a synthetic outcome. \subsubsection{Data Generating Process in \citet{WanBle19}} The original simulation in \citet{WanBle19} selects two observed treatments from the NMES: the individual's marital status, $A_{\text{mar}}$, and the exposure to smoking, $A_{\text{exp}}$, measured in pack-years (the number of cigarettes per day divided by 20, times the number of years smoked). The last age at which the person smoked, $A_{\text{age}}$, is designated as the unobserved confounder. All variables are centered and scaled. In equation 23 of \citet{WanBle19}, the data generating process for the synthetic outcome is laid out as, \begin{align*} Y_i &\sim \text{Normal}\left(\beta_0 + \beta_{\text{mar}}A_{\text{mar},i} + \beta_{\text{exp}}A_{\text{exp},i} + \beta_{\text{age}}A_{\text{age},i},1\right) \end{align*} where the intercept $\beta_0$ is included in the replication code provided to us but not in the paper. In \citet{WanBle19} equation 24, the data generating process of the coefficients is described as, \begin{align*} \beta_{\text{mar}} &\sim \text{Normal}(0,1) \\ \beta_{\text{exp}} &\sim \text{Normal}(0,1) \\ \beta_{\text{age}} &\sim \text{Normal}(0,1) \end{align*} although in the provided replication code the coefficient for the last variable (which will be used as the unobserved confounder) is multiplied by 2.5, leading to, \begin{align*} \beta_{\text{age}} &\sim \text{Normal}(0,2.5^2) \end{align*} One of the two treatments, $A_{\text{mar}}$, is a factor variable with 5 levels.
While the factor variable is unlabeled, by examining earlier sources\footnote{The data comes from \citet{imai2004causal}, which in turn gets it from \citet{johnson2003disease}, which obtains the data from the original source.}, we are confident that level 1 corresponds to married and level 5 corresponds to never married. We think levels 2-4 correspond to widowed, divorced and separated, respectively. This factor is treated as a numeric variable in \texttt{R} although levels 2-4 ($22\%$ of the data) aren't meaningfully ordered. The original simulation treats the first two variables as observed and the final ($A_{\text{age},i}$) as the unobserved confounder. The paper reports results for 12 configurations of models: na\"ive regression, oracle, linear factor model with one dimension (substitute confounder and reconstructed cause), quadratic factor model with one, two and three dimensions (substitute confounder and reconstructed cause), and the one-dimensional quadratic factor model with additional covariates. \subsubsection{Factor Model Inference in \citet{WanBle19}} For each simulation in \citet{WanBle19}, factor models are fit using automatic differentiation variational inference (ADVI) as implemented in \texttt{rstan}. The model for the quadratic factor analysis (the linear model is analogous) as implemented in the provided replication code is, \begin{align*} \alpha &\sim \text{Gamma}(1,1) \\ \theta^{(0)} &\sim \text{Normal}(0, 1/\alpha) \\ \theta^{(1)}_k &\sim \text{Normal}(0, 1/\alpha) \\ Z_{i,k} &\sim \text{Normal}(0, 2^2) \\ \bm{A}_{i} &\sim \text{Normal}\left(\theta^{(0)} + \sum_{k=1}^K\theta_k^{(1)} Z_{i,k} + \sum_{k=1}^K \theta^{(2)}_k Z_{i,k}^2, .1^2\right) \end{align*} The Normal variances are held fixed for $\bm{Z}$ and $\bm{A}$, and in the equations above we have set them to the values given in the code. The model is fit with the default settings for ADVI (fully factorized Gaussian approximation) except with a fixed step size of .25.
Before beginning the variational approximation, the initial values are set by optimizing the joint posterior with LBFGS for a maximum of 1000 iterations. For the substitute confounder the $\bm{Z}$ variables are used directly. For the reconstructed causes, the model outputs: \begin{align*} \hat{\bm{A}}_{\text{WB}} &\sim \text{Normal}\left(\theta^{(0)} + \sum_{k=1}^K\theta_k^{(1)} Z_{i,k} + \sum_{k=1}^K \theta^{(2)}_k Z_{i,k}^2, .1^2\right) \end{align*} This differs from \citet{WanBle19}, page 1582, which defines the reconstructed causes as the posterior predictive mean. \subsubsection{A Note on the Holdout Procedure.} In order to calculate the posterior predictive checks, the code holds back approximately $5\%$ of the individual cells of the matrix $\bm{A}$, sampled at random. The holdout percentage is approximate because the sampling procedure allows duplicates, which are then removed. The heldout values are replaced by zero (which, due to centering, is also the mean of the data). These values are not resampled in the inference program and so they are effectively treated as single mean imputations of the missing values. This presumably affects both the fit of the factor model and the posterior predictive checks themselves (which are now conducted exclusively on data that the model is trained believing are exact zeroes). This procedure was corrected in the reference implementation released in July 2020. \subsubsection{Evaluation Procedure in \citet{WanBle19}} After estimating the factor model, the code from \citet{WanBle19} fits one of the following adjustment strategies: \begin{enumerate} \itemsep0em \item Substitute Confounder \\ control for $\hat{\bm{Z}}$ \item Reconstructed Causes \\ replacing the treatment with $\bm{A} - \hat{\bm{A}}$.
This is the version described in the paper \item Reconstructed Causes 2 \\ the two-parameter version where they control for $\hat{A}$ \item Substitute Confounder with Controls \\ controlling for $\hat{\bm{Z}}$ and five controls (see below) \item Reconstructed Causes with Controls \\ replacing treatment with $\bm{A} - \hat{\bm{A}}$ and five controls \item Oracle \\ controlling for the true confounder \item Na\"ive \\ regression of $Y$ on all treatments only \end{enumerate} The controls include the following variables: age started smoking, a binary sex indicator, a 3-level factor variable for race, a 3-level factor variable for seatbelt use (rarely / sometimes / always or almost always), and a 4-level factor variable for education (college graduate / some college / high school / other).\footnote{We pieced these together from \citet{johnson2003disease} and \citet{imai2004causal} but we can't be sure without definitions. These definitions do line up approximately with the summary statistics reported in \citet{johnson2003disease}.} Unlike the treatments and confounders, these control variables are not standardized or centered. The factor variables are entered as scalars (rather than contrast-coded factors). For variables like education whose levels are ordered, this produces a linear approximation to the effect of the factor, but for the race variable there is no guarantee it produces anything in particular. Each of these models is estimated with one of two different outcome regressions: Bayesian linear regression estimated using ADVI in \texttt{rstanarm}, and OLS. \subsubsection{Outcome Regression: Bayesian Linear Regression:} The ultimate goal of the simulations from \citet{WanBle19} is to study properties of the joint posterior distribution $f(\bm\beta,\bm{z} | \bm{Y}, \bm{A})$. Samples from this joint distribution are obtained by factorizing it as $f(\bm\beta | \bm{Y},\bm{A}, \bm{z}) f(\bm{z} | \bm{Y}, \bm{A})$.
Samples from $f(\bm{z} | \bm{A})$ are taken from the factor model's posterior---ignoring information from $\bm{Y}$---and used as an approximation to $f(\bm{z} | \bm{Y},\bm{A})$. Then a Bayesian linear regression of $\bm{Y}$ on $\bm{A}$ and $\bm{z}$ is used to sample from the conditional posterior $f(\bm\beta| \bm{Y}, \bm{A},\bm{z})$. Let $\tilde{\beta}_{j, s, f, d}$ be a draw from this approximate posterior distribution for treatment $j$ in simulation $s$ where: \begin{itemize} \item $j \in 1 \dots 2$ indexes the two observed treatments \item $s \in 1\dots S$ indexes the simulation (i.e. one dataset drawn from the semi-synthetic data generating process) \item $f \in 1\dots F$ indexes samples from the factor model's posterior distribution $f(\bm{z} | \bm{A})$ \item $d \in 1\dots D$ indexes the sample from the outcome regression's posterior distribution $f(\bm\beta| \bm{Y}, \bm{A},\bm{z})$. \end{itemize} and we use $\tilde{\cdot}$ to emphasize that it is a sample from a posterior distribution. In the replication code $f(\bm{z} | \bm{A})$ (the factor model posterior) is approximated with five samples and so we will set $F=5$. The code then approximates $f(\bm\beta| \bm{Y}, \bm{A},\bm{z})$ (outcome regression conditional posterior) with a single sample and so we will set $D=1$. Let $\beta_{j,s}$ indicate the true treatment effect for treatment $j$ in simulation $s$. The \citet{WanBle19} code computes three quantities which effectively treat the sample from the posterior as the estimator and compute properties of that estimator within a given simulation (treating the posterior draws as independent realizations of that estimator) and then average over simulations and treatments. We explore each below. 
\label{a:definitions} \paragraph{Bias Calculation.} The following quantity is computed: \begin{align*} \text{Bias}^2_{\text{WB}} = \frac{1}{2}\sum_{j=1}^2 \left( \frac{1}{S} \sum_{s=1}^S \left( \left(\frac{1}{5}\sum_{f=1}^5 \underbrace{\tilde{\beta}_{j,s,f,1}}_{\text{estimator}}\right) - \underbrace{\beta_{j,s}}_{\text{truth}} \right)^2 \right) \end{align*} The quantity marked ``estimator'' is a draw from the posterior distribution and the expectation for the bias is taken with respect to that posterior distribution. The metric can also be interpreted as the mean-squared error of the posterior mean estimator, approximated with five samples from the posterior distribution. \paragraph{Variance Calculation.} Let the function $\widehat{\text{Var}}(\cdot)$ be the sample variance of its arguments. The code defines, \begin{align*} \text{Var}_{\text{WB}} = \frac{1}{2} \sum_{j=1}^2 \left( \frac{1}{S} \sum_{s=1}^S \underbrace{\widehat{\text{Var}}\left(\tilde{\beta}_{j,s,1,1}\dots\tilde{\beta}_{j,s,5,1}\right)}_{\text{posterior var}} \right) \end{align*} Thus, this reports the per-simulation average posterior variance, averaged over treatments. \paragraph{Mean Squared Error.} Finally the code computes, \begin{align*} \text{MSE}_{\text{WB}} = \frac{1}{2}\sum_{j=1}^2 \left( \frac{1}{S} \sum_{s=1}^S \left( \frac{1}{5}\sum_{f=1}^5 \left(\tilde{\beta}_{j,s,f,1} - \beta_{j,s}\right)^2 \right) \right) \end{align*} This is the per-simulation average squared posterior deviation, averaged over the treatments. \paragraph{Na\"ive and Oracle Regressions.} In both the oracle and na\"ive regression there are, of course, no samples from the factor model. The oracle simply averages over all 1000 samples from the outcome regression's posterior and the na\"ive regression averages over 5 samples from the outcome regression's posterior. This will make estimates for the na\"ive regression much noisier.
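For a single treatment, the three reported quantities can be computed from an $S \times F$ array of posterior draws as follows (a sketch with hypothetical array names and a simulated stand-in for the draws, not the replication code; the reported numbers additionally average over the two treatments):

```python
import numpy as np

rng = np.random.default_rng(4)
S, F = 200, 5                 # simulations and factor-model draws (D = 1)

# Hypothetical posterior draws for one treatment: truth plus posterior spread
beta_true = rng.normal(0.0, 1.0, S)
draws = beta_true[:, None] + rng.normal(0.0, 0.3, (S, F))

# Bias^2_WB: squared error of the five-draw posterior mean, averaged over s
post_mean = draws.mean(axis=1)
bias2_wb = np.mean((post_mean - beta_true) ** 2)

# Var_WB: per-simulation sample variance of the draws, averaged over s
var_wb = np.mean(draws.var(axis=1, ddof=1))

# MSE_WB: average squared deviation of every draw from the truth
mse_wb = np.mean((draws - beta_true[:, None]) ** 2)
```

Note that $\text{MSE}_{\text{WB}}$ decomposes into the squared error of the posterior mean plus the within-simulation spread of the draws, which is why it can never fall below $\text{Bias}^2_{\text{WB}}$.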
\paragraph{Priors.} In the original replication code, the Bayesian outcome regression uses different priors depending on the specification. The substitute confounder uses \texttt{rstanarm}'s default Normal(0,1) prior. The reconstructed causes, oracle and na\"ive regressions use the default \texttt{hs\_plus()} prior, a hierarchical shrinkage prior: a Normal centered at 0 whose standard deviation is the product of two independent half-Cauchy parameters. The latter has much more mass at 0 and correspondingly fatter tails. \subsubsection{Outcome Regression: OLS} The original replication code employs a corresponding set of definitions when using ordinary least squares. Let $\hat{\beta}_{j,s,f}$ denote the coefficient for treatment $j$ fit on simulation $s$ conditional on draw $f$ from the factor model. \paragraph{Reported Bias Calculation.} The following quantity is computed: \begin{align*} \text{Bias}^2_{\text{WB}} = \frac{1}{2}\sum_{j=1}^2 \left( \frac{1}{S} \sum_{s=1}^S \left( \underbrace{\left(\frac{1}{5}\sum_{f=1}^5 \hat{\beta}_{j,s,f}\right)}_{\text{estimator}} - \underbrace{\beta_{j,s}}_{\text{truth}} \right)^2 \right) \end{align*} This corresponds to the calculation in the Bayesian case, but with the coefficient estimates plugged in for the samples from the posterior. \paragraph{Reported Variance Calculation.} Let the function $\widehat{\text{Var}}(\cdot)$ be the sample variance of its arguments and $\widehat{SE}(\cdot)$ be the estimated standard error of its argument.
The code defines: \begin{align*} \text{Var}_{\text{WB}} = \frac{1}{2}\sum_{j=1}^2 \left( \frac{1}{S} \sum_{s=1}^S \underbrace{\widehat{\text{Var}}\left(\hat{\beta}_{j,s,1}\dots\hat{\beta}_{j,s,5}\right)}_{\text{var of coefs}} + \underbrace{\frac{1}{5}\sum_{f=1}^5\widehat{SE}(\hat{\beta}_{j,s,f})^2}_{\text{avg of vars}} \right) \end{align*} This uses the law of total variance to provide a more efficient estimator of the marginal variance of $\bm\beta$ under $f(\bm\beta,\bm{z} | \bm{Y}, \bm{A})$. \paragraph{Reported Mean Squared Error.} The code computes, \begin{align*} \text{MSE}_{\text{WB}} = \text{Bias}^2_{\text{WB}} + \text{Var}_{\text{WB}} \end{align*} \subsubsection{Reported Results} Table 3 of \citet{WanBle19} presents the findings. The discussion highlights the improved performance of the one-dimensional and two-dimensional quadratic models over the na\"ive regression, although no estimator is particularly close to the oracle. In the corresponding discussion (pp. 1585-1586), the results are used to emphasize three points: (1) the value of the posterior predictive check for signaling whether results are biased, (2) controlling for observed confounders increases variance but does not decrease bias, and (3) the deconfounder outperforms na\"ive regression. \subsection{New Results} In this section, we briefly present a conceptual argument about the design before providing some broader results to contextualize the findings. \subsubsection{Factor models where $k \geq m$} In four of the eight original factor model specifications, including two of the three highlighted in \citet{WanBle19} for performance, the dimensionality of the latent factors is at least the dimensionality of the data. In a frequentist setting, these models would exactly reconstruct the observed treatments (they all nest 2-dimensional PCA as a special case).
The models are fit with Bayesian methods but with broad priors, so it would appear that the only reason they do not perfectly reconstruct the treatments is noise in the posterior approximation. This renders the models estimable, but uninformative about the performance of the deconfounder. \subsubsection{Deviations in Our Procedure} In re-implementing the simulation we tried to strike a balance between remaining comparable to the original design and making changes that we felt were essential to being able to interpret the simulation. In total, we estimate three baseline specifications: na\"ive regression, the oracle model and a regression controlling for WB's controls, as well as five sets of deconfounder models based on specifications by WB (Linear Factor Model with 1 dimension, Quadratic Factor Model with 1-3 dimensions and Quadratic Factor Model with 1 dimension and controls). For each set of models we report three variants: the substitute confounder (controlling for $\hat{\bm{Z}}$), reconstructed causes as stated in the paper (replacing the treatment with $\bm{A} - \hat{\bm{A}}$) and reconstructed causes as implemented in their code (controlling for $\hat{\bm{A}}$). This adds the specification of the controls alone and includes both versions of the reconstructed causes. We outline the other changes we make here along with our rationale. \paragraph{Deviation 1: OLS Outcome Regression.} We use an OLS outcome regression and average results over 1000 draws of the factor model's posterior distribution (rather than 5). This ensures the computation of the approximate joint posterior mean is not too noisy. The 1000 regressions can be computed efficiently by noting that the design matrix stays fixed. Denoting the design matrix $\bX$, we precompute $(\bX'\bX)^{-1}\bX'$, which means the full set of 1000 regressions can be computed with a single additional matrix multiply.
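The linear-algebra trick is generic: with a fixed design, the projector $(\bX'\bX)^{-1}\bX'$ can be formed once and applied to every right-hand side in one multiply. A small numpy illustration, with arbitrary dimensions and simulated outcomes standing in for the actual regressions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_draws = 1000, 4, 1000

X = rng.normal(size=(n, p))                    # fixed design matrix
# stack one outcome vector per draw as the columns of Y
Y = X @ rng.normal(size=(p, n_draws)) + rng.normal(size=(n, n_draws))

proj = np.linalg.solve(X.T @ X, X.T)           # (X'X)^{-1} X', computed once
betas = proj @ Y                               # all regressions in one multiply

# sanity check against a direct least-squares fit of the first column
beta0, *_ = np.linalg.lstsq(X, Y[:, 0], rcond=None)
assert np.allclose(betas[:, 0], beta0)
```

Column $d$ of `betas` holds the OLS coefficients for the $d$-th outcome vector, so the cost of 1000 regressions is essentially one matrix product.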
\paragraph{Deviation 2: Fixed Simulation Coefficients.} It is difficult to evaluate properties like bias when the coefficients are varying across runs. To see why this would be complicated, consider an estimator that was biased towards zero, and imagine that the calculation applied was $E[\hat{\beta}_i - \beta_i]$ where $\beta_i$ was sampled from a normal distribution. The upward and downward biases would cancel each other out over draws, leading to an estimate of approximately 0. We avoid this problem by fixing the coefficients to arbitrarily chosen values (1,1,1) and then assessing the bias, variance and mean squared error of the posterior mean. The results are driven primarily by bias, so we report only the root mean squared error. \paragraph{Deviation 3: Removed Holdout Procedure.} We were concerned about the effects of the mean imputation, so we removed the holdout procedure entirely. Once this is done, the treatments do not change across simulations. \paragraph{Deviation 4: Change in the Reconstructed Causes.} We replace the reconstructed causes with \begin{align*} \hat{\bm{A}}_{i} = \theta^{(0)} + \sum_{k=1}^K\theta_k^{(1)} Z_{i,k} + \sum_{k=1}^K \theta^{(2)}_k Z_{i,k}^2 \end{align*} in each sample from the posterior. Due to the sampling of $\bm{Z}$ and $\theta$, the reconstructed cause is almost surely not collinear with $\bm{A}$. \paragraph{Deviation 5: Remove the Intercept.} We remove the intercept from the true model to be consistent with the equations in \citet{WanBle19}. \paragraph{Deviation 6: Make Control Variables Factors.} We treat the categorical control variables as factors rather than as numeric variables, which is how the original simulation design treated them. \paragraph{Concerns We Do Not Address.} There were several issues in the simulation design that we did not address because we felt that to do so would too fundamentally alter the simulation.
\begin{itemize} \item for the treatments, we continue treating the factor variables as continuous variables because otherwise the dimension of $\bm{A}$ changes \item $A_{\text{exp}}$ is skewed (it is logged in \citet{imai2004causal}), which affects the normality assumptions \item many of the models considered here involve many more dimensions than the two in the observed data. It is unclear why the model is not fitting the data perfectly in these settings or why we would use latent variable models with many more latent variables than observed dimensions. \end{itemize} \subsubsection{New Results: Instability in Factor Models} \label{a:smoking_extra} We first demonstrate that a substantial amount of variation in the original results is due to instability in factor model estimation. Table~\ref{tab:smokingmultimodal} shows results across four different factor model fits, labeled F1--4. Because we removed the model checking using held-out data, the inputs across all four models are identical, so only the seed changes across the four iterations. The differences in the learned factor models induce substantial differences in the RMSE of the resulting estimates. For example, the two-dimensional quadratic model with substitute confounder (one of the preferred specifications in \citet{WanBle19} and row 11 of our Table~\ref{tab:smokingmultimodal}) ranges from $12\%$ better to $30\%$ worse than the na\"ive estimate for the effect of Treatment 1. \begin{table}[ht!!] \centering \begin{tabular}{rl|cccc|cccc} \hline \hline & & \multicolumn{4}{c|}{Treatment 1} & \multicolumn{4}{c}{Treatment 2} \\ \hline & & F1&F2&F3&F4&F1&F2&F3&F4\\ \hline \multirow{3}{*}{Baseline} & Na\"ive & \multicolumn{4}{c}{1.00} & \multicolumn{4}{c}{1.00} \\ & Oracle & \multicolumn{4}{c}{0.043} & \multicolumn{4}{c}{0.026} \\ & Controls & \multicolumn{4}{c}{0.900} & \multicolumn{4}{c}{1.094} \\ \hline \multirow{3}{*}{Lin. (1 dim.)} &Sub. & 1.020 & 1.176 & 1.065 & 0.980 & 1.011 & 1.093 & 1.033 & 0.987 \\ & Rec.
& 0.253 & 0.181 & 0.843 & 0.665 & 0.617 & 0.529 & 0.911 & 0.816 \\ & Rec. 2 & 1.020 & 1.176 & 1.065 & 0.980 & 1.011 & 1.093 & 1.033 & 0.987 \\ \hline \multirow{3}{*}{Quad. (1 dim.)} &Sub. & 0.657 & 1.054 & 0.759 & 1.097 & 0.861 & 0.989 & 1.022 & 0.908 \\ &Rec. & 0.533 & 1.298 & 1.135 & 0.375 & 1.733 & 1.278 & 2.243 & 0.481 \\ &Rec. 2 & 1.299 & 1.503 & 1.026 & 0.136 & 2.186 & 1.503 & 1.702 & 0.545 \\ \hline \multirow{3}{*}{Quad. (2 dim.)} &Sub. & 0.882 & 1.028 & 1.317 & 1.012 & 0.834 & 0.941 & 0.569 & 0.921 \\ &Rec. & 3.399 & 1.814 & 1.094 & 6.526 & 1.315 & 0.872 & 2.431 & 0.701 \\ &Rec. 2 & 0.640 & 0.711 & 0.498 & 0.538 & 1.848 & 1.117 & 0.915 & 1.329 \\ \hline \multirow{3}{*}{Quad. (3 dim.)} &Sub. & 1.638 & 1.856 & 1.003 & 1.398 & 1.060 & 0.964 & 1.003 & 0.440 \\ &Rec. & 6.870 & 5.692 & 1.441 & 4.737 & 0.113 & 3.327 & 3.864 & 2.089 \\ &Rec. 2 & 0.502 & 1.777 & 0.850 & 0.445 & 1.859 & 1.420 & 1.124 & 1.116 \\ \hline \multirow{3}{*}{\begin{tabular}{r} Quad.(1 dim.) \\w/ Controls \end{tabular}} &Sub. & 0.586 & 0.945 & 0.665 & 0.990 & 0.966 & 1.085 & 1.116 & 1.009 \\ &Rec. & 0.636 & 1.112 & 1.033 & 0.446 & 1.744 & 1.179 & 2.146 & 0.515 \\ &Rec. 2 & 1.221 & 1.334 & 0.876 & 0.173 & 2.179 & 1.528 & 1.653 & 0.659 \\ \hline\hline \end{tabular} \caption{\textbf{Smoking Simulation Results Vary Substantially By Factor Model Run}: This table shows the ratio of root mean squared-error to the na\"ive regression for 18 different specifications and four different runs of each factor model. Values above 1 indicate that the model is performing worse than the na\"ive regression and models below 1 indicate it is performing better. The left column provides the factor model and the second column provides the adjustment strategy. ``Sub.'' is the substitute confounder; ``Rec.'' is the reconstructed causes approach stated in the paper; and ``Rec. 2'' is the two-parameter reconstructed causes approach implemented in code. 
Models do not consistently perform better than na\"ive.} \label{tab:smokingmultimodal} \end{table} We note that different adjustment strategies (Sub., Rec., Rec. 2) used with the same factor model can yield substantially different results. Because the PPC is specific to the factor model and not the adjustment strategy, it cannot provide information about which would provide better performance. As in \citet{WanBle19}, we do not observe substantial benefits from including covariates with the deconfounder. However, line three of the table makes clear that this is because the covariates alone are not sufficient to improve over the na\"ive regression. In practice this is because they are essentially uncorrelated with the variable chosen to be the unobserved confounder. Thus, we should not draw conclusions from this study about the role of measured confounders. We have only shown one set of simulated coefficients here. However, because the RMSE is driven almost entirely by the bias term, the results here are extremely well predicted by the standard omitted variable bias formula. Define $\bX = (A_{\text{mar}}, A_{\text{exp}},\hat{\bm{Z}})$; then, \begin{align*} \text{bias}(\beta_{\text{mar}},\beta_{\text{exp}}) = (\bX^T\bX)^{-1}\bX^T\bm{Z}\beta_{\text{age}} \end{align*} For a fixed factor model, and thus a fixed $\hat{\bm{Z}}$, we can calculate the bias for any setting of the true coefficient $\beta_{\text{age}}$. \subsubsection{New Results: Max ELBO of the Factor Models} The variational inference procedure comes with a natural mechanism for choosing among the factor models. We run each model twenty times and choose the one that maximizes the evidence lower bound. In \texttt{rstan} this has to be parsed from a log that collects the material printed to the screen, as it is not included in the returned output. The results are in Table~\ref{tab:smokingelbo}. \begin{table}[ht!!]
\centering \begin{tabular}{rl|c|c} \hline \hline & & Treatment 1 & Treatment 2 \\ \hline \multirow{3}{*}{Baseline} & Na\"ive & 1.000 & 1.000 \\ & Oracle & 0.044 & 0.025 \\ & Controls & 0.900 & 1.094 \\ \hline \multirow{3}{*}{Lin. (1 dim.)} &Sub. & 1.062 & 1.033 \\ & Rec. & 1.325 & 1.151 \\ & Rec. 2 & 1.061 & 1.033 \\ \hline \multirow{3}{*}{Quad. (1 dim.)} &Sub. & 0.648 & 0.844 \\ &Rec. & 1.027 & 1.865 \\ &Rec. 2 & 1.748 & 2.437 \\ \hline \multirow{3}{*}{Quad. (2 dim.)} &Sub. & 2.648 & 0.308 \\ &Rec. & 4.073 & 1.214 \\ &Rec. 2 & 0.718 & 1.353 \\ \hline \multirow{3}{*}{Quad. (3 dim.)} &Sub. & 2.192 & 0.840 \\ &Rec. & 3.290 & 1.362 \\ &Rec. 2 & 0.605 & 0.924 \\ \hline \multirow{3}{*}{\begin{tabular}{r} Quad.(1 dim.) \\w/ Controls \end{tabular}} &Sub. & 0.566 & 0.947 \\ &Rec. & 1.166 & 1.889 \\ &Rec. 2 & 1.670 & 2.421 \\ \hline\hline \end{tabular} \caption{\textbf{Deconfounder Does Not Outperform the Na\"ive Regression In the Smoking Simulation}: This table shows the ratio of root mean squared-error to the na\"ive regression for 18 different specifications using the factor model which maximized the ELBO over twenty runs. Values above 1 indicate that the model is performing worse than the na\"ive regression and models below 1 indicate it is performing better. The left column provides the factor model and the second column provides the adjustment strategy. ``Sub.'' is the substitute confounder; ``Rec.'' is the reconstructed causes approach stated in the paper; and ``Rec. 2'' is the two-parameter reconstructed causes approach implemented in code. Models do not consistently perform better than na\"ive. See cautionary note in main text, results are very unstable. \label{tab:smokingelbo}} \end{table} In practice we observe that all the linear factor model fits are very similar, but the quadratic factor models vary substantially. Those where $k\geq m$ vary the most. 
Thus, while we maximized the ELBO over twenty different fits, we would expect that results would be unstable under replication. We present these results simply to demonstrate that the ELBO does not provide a way to resolve the problem demonstrated in the previous subsection. \subsection{Conclusions on Smoking} The smoking simulation in \citet{WanBle19} seeks to use a semi-synthetic design to justify a number of conclusions about the deconfounder's performance. Unfortunately, as we have shown, these conclusions do not hold under reasonable adjustments and extensions to the simulation design. \section{Breast Cancer Tutorial}\label{a:breast} \textbf{Summary:} The GitHub tutorial examines the effect of various tumor features on the diagnosis of breast cancer tumors. The tutorial uses approximate inference to fit a probabilistic principal components model to estimate the substitute confounder and then asserts that this provides valid causal estimates. This assertion is based on a non-standard assessment of whether a model is causal or not. We show that the full deconfounder is only estimable because the approximate inference leads to considerable noise in the estimated substitute confounder. When a more standard estimation procedure for the substitute confounder is deployed, the full deconfounder is only estimable with a penalized regression, and the coefficient estimates that we obtain are entirely dependent on the amount of penalization. This demonstrates that the deconfounder is not particularly helpful for causal inference in this setting. Using a breast cancer data set that is distributed with scikit-learn, the tutorial estimates a substitute confounder using black box variational inference. The tutorial argues that approximate inference is completely acceptable and that its error can be ignored. We show this is not the case---approximate inference adds considerable noise to the estimated substitute confounder.
Consider the left-hand facet of Figure~\ref{f:pca_ppca}, which compares the first dimension of the estimated substitute confounder from approximate inference (vertical axis) to the substitute confounder estimated using PCA. This shows that the estimated substitute confounder is a noisy version of PCA (we have not rotated the loadings, which explains the negative relationship). The right-hand facet compares the substitute confounder estimated from PPCA via maximum likelihood against PCA. The estimated loadings using maximum likelihood to fit a PPCA model are effectively identical to the loadings from PCA, just scaled differently. In short, the approximate inference procedure leads to a poorly estimated model. \begin{figure} \scalebox{0.5}{\includegraphics{figs/WB_pca.pdf}} \scalebox{0.5}{\includegraphics{figs/ppca_pca.pdf}} \caption{\textbf{The tutorial's procedure for estimating the substitute confounder adds unnecessary noise.} The left-hand plot shows the relationship between the substitute confounder estimated using approximate inference and the substitute confounder estimated using traditional PCA. The approximate inference adds considerable noise. The right-hand plot shows that if we use well-known MLE routines for estimating PPCA, there is no disagreement between the loadings. Approximate inference, then, leads to considerable and unnecessary error.} \label{f:pca_ppca} \end{figure} The result of this poorly estimated factor model is that including the substitute confounder has little effect on the actual coefficient estimates, yielding estimates that are nearly identical to a na\"ive regression. Column 1 of Table \ref{t:breast} provides the coefficient estimates of features on breast cancer diagnosis using the original estimates of the substitute confounder in a logistic model. Rather than take the non-standard step of subsetting to only 80\% of the data, we fit the model to the entire data set.
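The claim that maximum likelihood PPCA recovers PCA up to scale can be checked directly from the Tipping--Bishop closed-form ML solution. The sketch below uses synthetic data rather than the breast cancer features, so the dimensions and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data with one dominant latent dimension
n, d = 400, 8
z = rng.normal(size=(n, 1))
W_true = rng.normal(size=(1, d))
X = z @ W_true + rng.normal(scale=0.3, size=(n, d))
Xc = X - X.mean(axis=0)

# classical PCA: leading eigenvector of the sample covariance
evals, evecs = np.linalg.eigh(Xc.T @ Xc / n)
evals, evecs = evals[::-1], evecs[:, ::-1]          # sort descending
pca_scores = Xc @ evecs[:, 0]

# PPCA maximum likelihood (Tipping & Bishop closed form, q = 1)
q = 1
sigma2 = evals[q:].mean()                           # ML noise variance
W = evecs[:, :q] * np.sqrt(evals[:q] - sigma2)      # ML loadings
M = W.T @ W + sigma2 * np.eye(q)
ppca_scores = (Xc @ W @ np.linalg.inv(M)).ravel()   # posterior mean of z

# the latent scores agree with PCA up to a positive rescaling
r = np.corrcoef(pca_scores, ppca_scores)[0, 1]
```

The correlation `r` is 1 to machine precision: the ML loadings are the PCA eigenvectors rescaled, so any disagreement in the tutorial's fit must come from the approximate inference, not from the PPCA model itself.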
We want to emphasize that this model is estimable \textit{solely} because the approximate inference procedure yields a poor estimate of the parameters of the PPCA model. As we might expect given this fact, the second column shows that a na\"ive logistic regression that simply ignores those confounders yields nearly identical results. In the third column we obtain coefficient estimates, but now we estimate the substitute confounder using standard PCA. Without the errors in the estimated substitute confounder from approximate inference, the model is no longer estimable using a standard logistic regression. Instead we rendered the model estimable with a ridge regression, with the penalty selected using cross validation. This yields dramatically different results. Many of the coefficients have effect sizes that are orders of magnitude smaller. The final column shows the estimates from the one-at-a-time deconfounder, fit using a logistic regression. This reveals strikingly different results from those reported in the online tutorial: the sign flips on half of the coefficients. \begin{table}[ht] \centering \begin{tabular}{l|cccc} \hline & Full Deconfounder & Na\"ive & Full Deconfounder & One-at-a-Time \\ & WB PCA & & True PCA & True PCA \\ \hline mean radius & -3.10 & -3.48 & -0.81 & -0.56 \\ mean texture & -1.38 & -1.64 & -0.51 & -0.07 \\ mean smoothness & -0.87 & -1.07 & -0.38 & -1.31 \\ mean compactness & 1.28 & 0.88 & -0.29 & 2.59 \\ mean concavity & -0.77 & -1.19 & -0.50 & 1.37 \\ mean concave points & -1.78 & -2.27 & -0.75 & -0.97 \\ mean symmetry & -0.24 & -0.50 & -0.24 & -0.34 \\ mean fractal dimension & 0.27 & 0.19 & 0.29 & 0.98 \\ \hline \end{tabular} \caption{\textbf{Correctly estimating PCA leads to dramatically different results.} The first column replicates the exact procedure from the GitHub tutorial, but estimates the coefficients on the entire sample. The second column merely drops the substitute confounder and yields very similar results.
The third column estimates the full deconfounder using the true PCA estimates, using a penalized ridge regression to fit the model. This yields dramatically different results, consistent with our theoretical results. The fourth column provides the results from a one-at-a-time deconfounder. This shows even more deviations, with half of the coefficients changing signs and many coefficients changing by orders of magnitude. } \label{t:breast} \end{table} \begin{framed} \textbf{Deviations} \begin{itemize} \item We fit the outcome regression on the entire data set, rather than using an 80\% held out data set. \item Given the severe errors in the approximate inference procedure, we use standard PCA estimation routines. \end{itemize} \textbf{Their Results}: The tutorial claims this model provides robust causal effect estimates. \textbf{Our Results}: We show that the model in WB's tutorial is estimable solely because of errors in the estimation of the PCA model. Once corrected, we obtain different coefficient results that vary substantially depending on how we apply the deconfounder. \end{framed} \section{Posterior Predictive Model Checking Does Not Reliably Identify When the Deconfounder Works} \label{a:ppc} We have shown that it is impossible to know when the deconfounder improves over the na\"ive regression in practice. Throughout their papers, \citet{WanBle19} and \citet{Zha19} use a framework of posterior predictive checks (PPCs) to ``greenlight'' their use of the deconfounder in practice and adjudicate between alternative estimators. \citet{WanBle19} explain, \begin{quote} We consider assignment models with predictive scores larger than 0.1 to be satisfactory; we do not have enough evidence to conclude significant mismatch of the assignment model. Note that the threshold of 0.1 is a subjective design choice. We find such assignment models that pass this threshold often yield satisfactory causal estimates in practice \citep[][p.
1581]{WanBle19} \end{quote} If PPCs could be used in this way, it would allow highly flexible density estimation models to be used, even when the true parametric form was unknown---as is always the case in practice. The proof of the subset deconfounder establishes that this is impossible in that setting because the performance of the subset deconfounder depends on untestable assumptions about the treatment effects. For the full deconfounder, the checks in WB are not well-suited to evaluating the conditional independence of $\bm{A}$ given $\hat{\bm{Z}}$, which is perhaps the most relevant observable property \citep{imai2019discussion}. We evaluate the performance of PPCs on a quadratic-Poisson factor model with $n=10000$ observations and $m=100$ treatments, where \begin{align*} Z_{i} &\sim \mathcal{N}(0,0.2) \\ A_{ij} &\sim \text{Poisson}\left(\exp\left(\theta_{j1} Z_{i} + \theta_{j2} Z_{i}^2\right) \right) \\ Y_i &\sim \mathcal{N}\left(\bm{A}_i\bm\beta + Z_i\gamma, 0.1\right) \end{align*} where $\bm\theta_{j1}, \bm\theta_{j2} \sim \mathcal{N}(0, 1)$ and $\bm\beta$ is set equal to $(0.8, -0.6, 0.4, -1.2)$ repeated 25 times. We compare the na\"ive regression and the oracle to the two versions of the subset deconfounder: (i) using a Deep Exponential Family (DEF) \citep{ranganath2015deep} with (5,3,1) layers to estimate the substitute confounder, and (ii) using a two-dimensional PCA to estimate the substitute confounder. \begin{figure}[hbt!] \centering \includegraphics[width=.75\textwidth]{figs/def_ppc_plot.pdf} \caption{\textbf{Posterior Predictive Checks Do Not Reliably Identify When the Deconfounder Outperforms the Na\"ive Estimator.} The horizontal axis is the predictive score from the posterior predictive check of the DEF and the vertical axis is the average RMSE for the treatment effect estimates.
The red points are the average RMSE from applications of the DEF that failed the predictive score check, while the green points are the average RMSE from applications of the DEF that passed the predictive score check.} \label{fig:def} \end{figure} The results in Figure~\ref{fig:def} show the average RMSE for each adjustment strategy plotted at the corresponding PPC for the DEF. The average RMSE for the DEF deconfounder is approximately equal whether the model passes the PPC or not. Further, there is considerable variation: the RMSE can be extremely large when the deconfounder passes the PPC and quite small when it fails. We also find that more complex models do not outperform simple alternatives. The average RMSE of the deconfounder using the DEF is over three times larger than the average RMSE when using PCA---even though the true underlying model that generated the treatments is nonlinear in the substitute confounder. But both implementations of the deconfounder perform considerably worse than the na\"ive regression. Applications of the DEF deconfounder that pass the PPC have an average RMSE 5.8 times that of the na\"ive regression, while the PCA deconfounder has an average RMSE 1.8 times that of the na\"ive estimator. In every simulation, the average RMSE from the na\"ive regression is lower than that of either implementation of the deconfounder. This suggests that PPCs cannot help us distinguish when the deconfounder will improve over alternatives. There is considerable noise in the RMSE of models that pass or do not pass PPCs, so the safest conclusion is that there is no difference between the RMSE of applications of the deconfounder that do and do not pass the PPC.
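For reference, the data generating process for this simulation can be sketched as follows. We read the second argument of $\mathcal{N}(\cdot,\cdot)$ as a variance, and the confounding coefficient $\gamma$ (unspecified above) is set to an arbitrary value; both are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 10_000, 100
beta = np.tile([0.8, -0.6, 0.4, -1.2], 25)   # treatment effects, repeated 25x
gamma = 1.0                                   # confounding strength (our choice)

z = rng.normal(0.0, np.sqrt(0.2), size=n)     # latent confounder
theta1 = rng.normal(size=m)                   # theta_{j1} ~ N(0, 1)
theta2 = rng.normal(size=m)                   # theta_{j2} ~ N(0, 1)

# quadratic-Poisson treatment model: A_ij ~ Poisson(exp(th1*z + th2*z^2))
rate = np.exp(np.outer(z, theta1) + np.outer(z ** 2, theta2))
A = rng.poisson(rate)

# outcome model: Y_i ~ N(A_i beta + z_i gamma, 0.1)
y = A @ beta + gamma * z + rng.normal(0.0, np.sqrt(0.1), size=n)
```

Each simulated dataset `(A, y)` is then handed to the na\"ive regression, the oracle, and the two deconfounder implementations.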
The simulation also demonstrates that if the functional form of the factor model is unknown, a flexible factor model can perform considerably worse than simpler models, even when the true data generating process is nonlinear and the flexible model passes model-fit checks. Most importantly, the deconfounder's estimates can be considerably worse than a na\"ive estimator which simply ignores confounding. In short, model checking and flexible nonlinear factor models cannot solve the deconfounder's problems.
Buffalo Bull's Back Fat ou Stu-mick-o-súcks était un chef de la nation nord-amérindienne des Gens du Sang (ou Kainah). Annexe Articles connexes Gens du Sang Confédération des Pieds-Noirs Chef des Premières nations au Canada Personnalité politique albertaine
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,100
Q: Override std::equal In providing a specialization for the equal_to operator for std::unordered_map, I was wondering if it's possible to determine which of the lhs or rhs is the data currently stored in the hashmap? I'd like to do something like this: template<> struct equal_to<METADATA> { bool operator() (METADATA const& data1, METADATA const& data2) { if (data1.Size == data2.Size) { // Need to look up the stored pointer in a global data structure SIZE_T Pointer = g_Pointer + data1.Offset; return memcmp(reinterpret_cast<void*>(Pointer), reinterpret_cast<void*>(data2.Pointer), data1.Size) == 0; } return false; } }; Thanks. A: Have the temporary METADATA contain a flag that determines whether you will use the global pointer or not. Probably use a sentinel value for Offset or Pointer. template<> struct equal_to<METADATA> { bool operator() (METADATA const& data1, METADATA const& data2) { if (data1.Size == data2.Size) { // Need to look up the stored pointer in a global data structure void * pointer1 = data1.Pointer; if (pointer1 == NULL) pointer1 = g_Pointer + data1.Offset; void * pointer2 = data2.Pointer; if (pointer2 == NULL) pointer2 = g_Pointer + data2.Offset; return memcmp(pointer1, pointer2, data1.Size) == 0; } return false; } };
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,060
Didrepanephorus pilosus är en skalbaggsart som beskrevs av Patrice Bouchard 2007. Didrepanephorus pilosus ingår i släktet Didrepanephorus och familjen Rutelidae. Inga underarter finns listade i Catalogue of Life. Källor Skalbaggar pilosus
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,733
\section{Introduction} \vspace{-0.2cm} \label{intro} In an over deployed Wireless Sensor Network (WSN), a large number of sensor nodes are randomly deployed to monitor a large geographical area. Each sensor node is integrated with processing elements, memory, battery power and wireless communication capabilities. Once deployed, they are, in general left unattended. Hence due to power drainage, hardware degradation, environmental hazards etc. sensor nodes are much prone to failures. For better utilization of the over-deployed nodes to save energy and to extend the lifetime of the network, this paper addresses the problem of finding maximum number of partitions of the sensor nodes such that each partition is connected and covers the whole query region. Instead of keeping all the sensors active always, these partitions will remain active one after another in a round robin fashion. Therefore, if there are $K$ such partitions, the network lifetime will be enhanced by at most $K$ times. Here, given a random uniform distribution of sensor nodes over a 2-D plane, a distributed algorithm is developed for finding the maximum number of partitions of connected nodes such that each partition ensures coverage. In case of node failures, a distributed algorithm is developed for fault recovery that rearranges the affected partition locally to tolerate single node faults within a partition. Simulation studies show that compared to the earlier techniques, the proposed algorithm is faster and results better partition topology with reduced diameter and requires less message overhead. Also, in case of unpredictable node faults the neighboring nodes execute the localized fault recovery algorithm that rearranges the partition locally to make the system fault-tolerant. Simulation results show that it extends the network lifetime significantly. The rest of the paper is organized as follows. Section \ref{sec:rel_work} presents a brief outline of related works. 
Section \ref{sec:proposed_model} describes the proposed model. Section \ref{sec:algo} includes the proposed algorithms. Simulation results are described in Section \ref{sec:results} and Section \ref{sec:conclusion} concludes the paper. \vspace{-0.35cm} \section{Related Works} \vspace{-0.3cm} \label{sec:rel_work} Extensive research results have been reported so far addressing the problems of sensing coverage and network connectivity in wireless sensor networks. In many works, the authors considered only the coverage issues in wireless sensor networks. In\cite{ Li, Meguerdichian}, authors propose efficient distributed algorithms to optimally solve the coverage problem in WSN. In \cite{ Demin}, authors provide an analytical framework for the coverage problem and lifetime maximization of a WSN. In \cite{ jie} a decentralized and localized node density control algorithm is proposed for network coverage. The work in\cite{Huang} proposed three approximation algorithms for the {\it set-k Cover} problem, where each point of the query region will be covered by at least $k$ nodes. The work in \cite{ SSlij} considers the problem of maximizing the number of disjoint sets of sensor nodes to cover the query region. \\ But unless the coverage and connectedness problems are considered jointly, the data sensed by the nodes covering the region can not be gathered at the sink node in multi hop WSN's. Authors of \cite{ Himangsu}, \cite{ Zhou} focused on both connectivity and coverage problems with the objective of finding a single connected set cover only. The problem of finding a connected set cover of minimum size is itself an {\it NP-hard} problem \cite{ Himangsu}. Some of the papers considered the fault tolerant connected set cover problems. An approximation algorithm is proposed in \cite{Zhang} for fault tolerant connected set cover problem. In \cite{ Peng}, a coverage optimization algorithm based on particle swarm optimization technique is developed. 
In \cite{Lin,Chong,Tian,Wang,Gallais}, the authors proposed several dynamic localized algorithms to achieve coverage and connectivity. But dynamic algorithms, in general, require large message overhead to collect current neighborhood information at intervals. Also, finding just a single connected cover leaves a large number of sensors unutilized. Hence, the authors in \cite{Pervin} propose a localized algorithm for finding the maximum number of connected set covers, executed once during network initialization only. In some papers \cite{Gallais, Pervin, Tian}, it has been assumed that the query area is a dense grid \cite{Wei}, \cite{RSS} composed of unit cells. The knowledge of the exact location of each node is needed here. A sensor node computes the covered area by counting the cells covered by each neighbor, which makes the procedure computation-intensive. To avoid this, in \cite{dibakar} the {\it DCSP} algorithm is proposed, where the authors assume that the monitoring area is divided into a limited number of square blocks such that a sensor node within a block completely covers it irrespective of its position within the block. Therefore, the coverage problem can be solved easily with much less computation. However, the proposed distributed algorithm was slow, requiring $p$ rounds to achieve a partition with $p$ nodes. Also, the fault model considers only faults due to energy exhaustion, assuming that a node can predict its failure and can inform its neighbors in advance.\\ In this paper, a faster distributed algorithm requiring less message overhead is proposed that is executed during network initialization only. It attempts to create the maximum number of connected partitions of sensor nodes with reduced diameter such that each partition covers the area under investigation; being active in round-robin fashion, the partitions enhance the network lifetime manifold. The reduced diameter of each partition keeps the communication latency low.
Moreover, a distributed fault recovery algorithm is developed for a stronger fault model that, in the presence of any unpredictable node fault, can rearrange the affected partition locally so that it remains operational. Simulation studies show that this fault recovery scheme enhances the network lifetime by more than $50\%$. \vspace{-0.4cm} \section{Proposed Model and Problem Formulation} \vspace{-0.3cm} \label{sec:proposed_model} Let $n$ homogeneous sensor nodes be deployed over a 2-D region $P$, each with the same sensing range $\cal S$ and transmission range $\cal T$. It is assumed that $P$ is divided into a finite number of square blocks \cite{dibakar}. Each side of a block is $\frac {\cal R}{\surd 2}$, as shown in Fig.\ref{fig1}, where ${\cal R}=\min({\cal S},{\cal T})$. \vspace{-0.5cm} \begin{figure}[ht] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[scale=0.4]{fig1.pdf} \caption{Nodes in P divided into a grid of square blocks} \label{fig1} \vspace{-0.1cm} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[scale=0.4]{coverage.pdf} \vspace{0.8cm} \caption{Connectivity without coverage} \label{fig:coverage} \end{minipage} \end{figure} \vspace{-0.5cm} Therefore, it is evident that each sensor node completely covers the block it belongs to, and all nodes within the same block are connected to each other. Hence, activating just a single sensor node from each block is sufficient to cover the region $P$. However, it is not guaranteed that such a set is connected.
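The block side $\frac {\cal R}{\surd 2}$ is exactly what makes the diagonal of a block equal to ${\cal R}=\min({\cal S},{\cal T})$, so any node covers its own block and can reach every other node inside it. A quick numerical sanity check of this geometric fact (a Python sketch, not part of the paper's algorithms; the value of $\cal R$ is arbitrary):

```python
import math
import random

R = 10.0                  # R = min(sensing range S, transmission range T); arbitrary value
side = R / math.sqrt(2)   # block side length used in the grid model

# Sample random point pairs inside one block; their distance never exceeds
# the block diagonal, which equals R. Hence a node anywhere in a block both
# covers the whole block and can reach every other node in the same block.
max_d = 0.0
for _ in range(100_000):
    x1, y1 = random.uniform(0, side), random.uniform(0, side)
    x2, y2 = random.uniform(0, side), random.uniform(0, side)
    max_d = max(max_d, math.hypot(x2 - x1, y2 - y1))

assert max_d <= R
print(f"max observed in-block distance: {max_d:.3f} (block diagonal = {R})")
```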
\begin{figure}[ht] \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[scale=0.4]{connectivity.pdf} \caption{Coverage without connectivity} \label{fig:connectivity} \vspace{-0.1cm} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[scale=0.4]{fig2.pdf} \caption{Coverage with connectivity} \label{fig:coverage_connectivity} \vspace{-0.1cm} \end{minipage} \end{figure} For example, Fig.\ref{fig:coverage} shows a partition where the selected nodes are connected but a block B is not covered. In contrast, Fig. \ref{fig:connectivity} shows a partition where all blocks are covered but the nodes are not connected. Finally, Fig. \ref{fig:coverage_connectivity} shows the desired topology, where the partition covers all blocks and is also connected. Assuming this grid structure of the query region $P$, this paper addresses the {\it connected set cover partitioning} problem introduced in \cite{Pervin}. For completeness, the problem is defined below. \begin{definition} Consider a sensor network consisting of a set $S$ of $n$ sensors and a query region $P$. A set of sensors $N\subseteq S$ is said to be a {\emph connected $1$-cover} for $P$ if each point $p\in P$ is covered by at least one sensor from $N$ and the communication graph induced by $N$ is connected. \vspace{-0.2cm} \end{definition} \paragraph{\bf Connected Set Cover Problem} Given $n$ sensor nodes distributed over a query region, the {\emph Connected Set Cover Problem} is to find a {\it connected $1$-cover} of minimum size. This problem is known to be NP-hard \cite{Himangsu}. \paragraph{\bf Connected Set Cover Partitioning Problem} The Connected Set Cover Partitioning Problem is to partition the sensors into {\it connected $1$-covers} such that the number of covers is maximized \cite{Pervin}. The following section describes the algorithms developed for solving the Connected Set Cover Partitioning Problem.
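Under the block model, whether a given set of nodes is a connected $1$-cover can be checked directly: every block of the grid must contain at least one selected node, and the communication graph induced by the selected nodes must be connected. A minimal checker in Python (an illustrative sketch; the coordinates and ranges below are made up, not taken from the paper):

```python
import math
from collections import deque

def is_connected_1_cover(nodes, block_side, grid_w, grid_h, comm_range):
    """nodes: list of (x, y) positions of the selected sensors.

    Assumes all nodes lie inside the grid_w x grid_h grid of blocks.
    """
    # Coverage: every block must hold at least one selected node.
    covered = {(int(x // block_side), int(y // block_side)) for x, y in nodes}
    if len(covered) < grid_w * grid_h:
        return False
    # Connectivity: BFS over the induced communication graph, where two
    # nodes share an edge iff they are within communication range.
    if not nodes:
        return False
    seen = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(len(nodes)):
            if j not in seen and math.dist(nodes[i], nodes[j]) <= comm_range:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(nodes)

# A 2 x 2 grid of blocks with side 5: four nodes, one per block, pairwise reachable.
side = 5.0
nodes = [(2, 2), (7, 2), (2, 7), (7, 7)]
print(is_connected_1_cover(nodes, side, 2, 2, comm_range=7.5))  # True
```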
\vspace{-0.3cm} \section{Algorithm for Connected Cover Partitioning} \vspace{-0.2cm} \label{sec:algo} In pervasive computing environments, the system in most cases captures data at distributed nodes communicating through a poorly connected network. Since, in a WSN, a large number of sensor nodes is deployed over a geographical area, collecting information about the whole network at a central node is not feasible in terms of message overhead and energy requirement. Instead, it is better to compute in a distributed fashion, based on local neighborhood information, using less communication. Hence the focus of our work is on distributed computation of the connected partitions.\\ In this section, a distributed algorithm is developed to find the maximum number of {\it connected $1$-covers} of a WSN. Also, in the presence of a node fault, a localized algorithm is presented to rearrange the affected partition to make the system fault-tolerant. \vspace{-0.3cm} \subsection{Distributed Algorithm for Partitioning} It is assumed that a set of $n$ sensor nodes $S=\{s_1, s_2, s_3, \ldots, s_n\}$ is deployed on a 2-D plane $P$ divided into, say, $m$ square blocks $P$= $\{ p_1, p_2, p_3, \ldots, p_m\}$, as described in Section \ref{sec:proposed_model}. Each square block has a unique id. Each sensor node knows its location in terms of the block within which it is located.\\ We propose the following types of messages to be exchanged among nodes. \begin{itemize} \item { \bf Selectlist($C_i, i, \{j\}$) :} This message is sent by a node-$i$ that selects a list of neighbors $\{j\}$ for inclusion in its partition with leader $C_i$. \item {\bf Selected($C_i, \{j\}$) :} This message is initiated by the leader node $C_i$ and is sent to the nodes $\{j\}$ for inclusion in its partition. \item {\bf Confirm($C_i,j$) :} Node $j$ sends this message to the leader after joining the partition $C_i$.
\item {\bf Include($C_i,j$) :} The leader broadcasts this message within $C_i$ to include node-$j$ in $C_i$. \end{itemize} Depending on the node density, a probability value $0 < l_{prob} < 1$ is determined to select $p$ leader nodes randomly. Each node $i\in S$ generates a random number $r$ and checks whether $r \leq l_{prob}$, the {\it leader probability}. If yes, it becomes a leader node and sets its parent as null. If $p$ leader nodes emerge, ${\cal L}=\{ l_1, l_2, l_3,\ldots, l_ p\}$, each leader initiates the creation of one partition concurrently to generate disjoint connected set covers. \vspace{-0.5cm} \begin{figure}[ht] \begin{minipage}[b]{0.3\linewidth} \centering \includegraphics[scale=0.35]{dis_fig1.pdf} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \centering \includegraphics[scale=0.35]{dis_fig2.pdf} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \centering \includegraphics[scale=0.35]{dis_fig3.pdf} \end{minipage} \hspace{0.5cm} \begin{minipage}[b]{0.3\linewidth} \centering \includegraphics[scale=0.35]{dis_fig4.pdf} \end{minipage} \hspace{0.4cm} \begin{minipage}[b]{0.3\linewidth} \centering \includegraphics[scale=0.35]{dis_fig5.pdf} \end{minipage} \hspace{0.4cm} \begin{minipage}[b]{0.3\linewidth} \centering \includegraphics[scale=0.35]{dis_fig6.pdf} \end{minipage} \caption{Steps for making a connected set cover} \label{dis_fig} \end{figure} \vspace{-0.5cm} In round $1$, each leader $l_i\in {\cal L}$ initiates a partition $C_i=\{l_i\}$. In each round, each node $i\in C_i$ prepares a 'Selectlist' consisting of the maximum number of neighbors, one from each uncovered block. If more than one neighbor lies in the same block, it selects the neighbor with minimum degree $\cal D$. Node-$i$ sends a 'Selectlist' message to its parent if it is a leaf node in $C_i$. Otherwise, node $j\in C_i$ selects a list of nodes, each belonging to an uncovered block, from its own list and from the 'Selectlist' messages received from its children in the same partition.
Then it sends the combined 'Selectlist' message to the parent node if it is not a leader node. From these 'Selectlist' messages, the leader node finally selects the nodes to be included and sends the 'Selected' message to them. If a node receives 'Selected' messages, it selects the parent with minimum $\cal D$ and confirms the request by sending a 'Confirm' message to the corresponding leader. The leader includes the node in $C_i$ and broadcasts the 'Include' message to all nodes in $C_i$. On receiving an 'Include($C_i,j$)' message, all nodes $k\in C_i$ include node-$j$ in their partition and make the necessary updates. In each round, this procedure is repeated until either all blocks are covered by the partition $C_i$, or no neighbors are left for inclusion. The formal description of the algorithm is given below. \vspace{-0.5cm} \begin{algorithm}[ht] \scriptsize \SetLine \label{algo:partitioning} \caption{Distributed algorithm for connected set cover partitioning} \KwIn{1-hop neighbor list of each node $NL(i)$ with degree ${\cal D}$, {\it Block-Id}, {\it Block~status}, $Status$, $Leader Probability: l_{prob}$} \KwOut{Partition $C_i$ from leader $l_i$ } \For {each node-$i$} { \If{node-$i$ is a leader} { $C_i\leftarrow\{i\}, parent=\phi, status=1$\; } \If{$Status=1$ and not all blocks covered } { Select neighbors from uncovered blocks and received 'Selectlist' messages, selecting nodes with minimum $\cal D$\; \eIf{ $Parent \neq \phi$} { Send 'Selectlist' to the parent node\; } { \eIf{$'Selectlist'=\phi$} { Broadcast success=0 and terminate\; } { Send 'Selected' message to the selected neighbors\; } } } \If{$Status=0$ and receives 'Selected' message } { Select the partition where the sender has minimum $\cal D$ and send 'Confirm' message to the leader\; Update $NL(i)$, {\it Block~status}, $Status$ and include in $C_i$\; } \eIf{leader node and receives 'Confirm' message } { Broadcast 'Include' message to all nodes in $C_i$ and update $NL(i)$, {\it Block~status}, $Status$\; \If {
all blocks are covered} { Broadcast success=1 and terminate\; } } { update $NL(i)$\; } } \normalsize \end{algorithm} \vspace{-0.9cm} \example In Fig.\ref{dis_fig}, it is shown that in round $1$, the leader node (red) selects the neighbors (the green ones) from uncovered blocks. In the next round, all black nodes are selected by the red and green nodes. This procedure is repeated to include the brown and blue nodes until all blocks are covered. In the last round, all purple nodes are selected and the process terminates as no uncovered block exists. It is clear that in each round of the procedure, the nodes already in the partition include several neighbors in the partition, so that the partition remains connected while the new nodes cover additional blocks. Hence, the procedure terminates faster; each leader either produces a successful partition satisfying the conditions of connectedness and coverage, or reports a failure, in which case the nodes in the incomplete partition declare themselves free nodes. \vspace{-0.6cm} \subsection{Distributed Fault Recovery Algorithm} \vspace{-0.2cm} As mentioned in Section \ref{intro}, once deployed, the sensor nodes may fail due to low energy, hardware degradation, inaccurate readings, environmental changes, etc. This paper focuses on the fault recovery problem in case of a single unpredictable node fault in a partition. It is assumed that when an active node $f$ of a partition $C_i$ fails abruptly, its parent (if it exists) and its children in $C_i$ can detect it.
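When a node $f$ fails, the recovery procedure given by the algorithm below starts by electing a temporary leader among the parent and children of $f$ and computing the set of blocks whose coverage is lost. This initial bookkeeping can be sketched in Python as follows (the dictionary-based tree representation is an illustrative assumption, not the paper's data structure):

```python
def init_recovery(f, parent, children, block_of):
    """Initial bookkeeping for recovering from the failure of node f.

    parent:   dict node -> parent node (None for the partition leader)
    children: dict node -> list of child nodes
    block_of: dict node -> id of the block containing the node
    """
    # S_i: the parent and children of f, i.e. the nodes that detect the fault.
    s = set(children[f])
    if parent[f] is not None:
        s.add(parent[f])
    # Temporary leader: the parent of f, or, if f itself was the leader,
    # the child of f with the minimum node id.
    leader_temp = parent[f] if parent[f] is not None else min(children[f])
    # Blocks covered by (f union S_i) \ leader_temp must be re-covered
    # during the recovery rounds.
    b_s = {block_of[u] for u in (s - {leader_temp}) | {f}}
    return leader_temp, b_s

# Toy partition tree: leader 1 -> 2 -> {3, 4}, one node per block 0..3.
parent = {1: None, 2: 1, 3: 2, 4: 2}
children = {1: [2], 2: [3, 4], 3: [], 4: []}
block_of = {1: 0, 2: 1, 3: 2, 4: 3}
print(init_recovery(2, parent, children, block_of))  # (1, {1, 2, 3})
```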
\vspace{-0.5cm} \begin{figure} \begin{minipage}[b]{0.3\linewidth} \centering \includegraphics[scale=0.4]{dis_fault.pdf} \end{minipage} \hspace{0.4cm} \begin{minipage}[b]{0.3\linewidth} \centering \includegraphics[scale=0.4]{dis_fault2.pdf} \end{minipage} \hspace{0.4cm} \begin{minipage}[b]{0.3\linewidth} \centering \includegraphics[scale=0.4]{dis_fault1.pdf} \end{minipage} \caption{Fault Detection and Recovery Example} \label{fig_fault} \end{figure} \vspace{-0.2cm} A fast fault recovery algorithm is proposed by which, after detecting the faulty node, all children and the parent of the faulty node in the partition quickly rearrange the partition to restore connectivity and full coverage. If the faulty node is the leader node, the child with the minimum node-id becomes the new leader; otherwise, the parent node becomes the leader, and the fault recovery procedure is initiated by the new leader.\\ The formal description of the algorithm is given below.\\ \begin{algorithm}[H] \label{fault_algo} \scriptsize \caption{Distributed fault recovery algorithm} \KwIn{faulty node-id : $f$, partition: $C_i$, parent \& children of $f$: $S_i$} \KwOut{Recovered partition $C_i$} If $f$ is a leader node then $leader_{temp}\leftarrow$ Minimum ID child of $f$, otherwise $leader_{temp}\leftarrow$ parent of $f$\; Make a block list $B_S[~]$ covered by $(f\cup S_i)\setminus leader_{temp}$ with $status\leftarrow 0$ \; include $leader_{temp}$ in $temp[~]$\; \For { each node-$i\in temp[~]$} { find neighbors in $S_i$ or from an uncovered block.
If none, select the free neighbor with maximum $\cal D$\; Include neighbors in 'Selectlist'\; \eIf { node-$i$ not $leader_{temp}$} { Send 'Selectlist' message to $leader_{temp}$\; } { if node-$i$ receives 'Selectlist' messages from all nodes $j\in temp[~]$, include the nodes in $temp[~]$\; Update $B_S[~]$\; } \If{ $B_S[~] == \phi$} { Send 'FaultRecovered' message with $temp[~]$ to $\forall j\in C_i$ and terminate\; } \eIf{ $temp[~]=\phi$} { Broadcast 'RecoveryFailed' and terminate\; } { send $temp[~]$ and $B_S[~]$ to all nodes in $temp[~]$\; } } \normalsize \label{algo:fault} \end{algorithm} \vspace{-0.3cm} \example In Fig. \ref{fig_fault}, say the red node is faulty. It will be detected by all its children and its parent (colored blue). Now the blue nodes execute the distributed fault recovery algorithm (Algorithm \ref{fault_algo}). After the fault, the partition is broken into three disjoint components, as shown in Fig. \ref{fig_fault}(b). The purple node from the faulty node's block is chosen next to maintain coverage, and it is also connected to at least one node in each of the disjoint components. Therefore, connectivity is preserved. Now the new partition, including the purple node, is ready for monitoring the area. \vspace{-0.4cm} \section{Simulation Results and Discussion} \vspace{-0.3cm} \label{sec:results} For the simulation studies, we have used the network simulator NS-2.34 to evaluate the performance of the proposed algorithm. We have compared our results with \cite{dibakar}, which shows significant improvement in the number of rounds for generating the partitions, the network diameter, and the number of transmitted messages per node during the procedure.
\vspace{-0.8cm} \begin{figure} \begin{minipage}[]{0.45\textwidth} \centering \includegraphics[scale=0.30]{Graph_round.pdf} \caption{Comparison between DCSP and the Proposed distributed algorithm in terms of average number of rounds for partitioning} \label{result1} \end{minipage} \hspace{0.6cm} \begin{minipage}[]{0.47\textwidth} \centering \includegraphics[scale= 0.30]{Graph_diameter.pdf} \hspace{0.8cm} \caption{Comparison between DCSP and the Proposed distributed algorithm in terms of partition diameter} \label{result2} \end{minipage} \end{figure} \vspace{-0.6cm} The sensor nodes are deployed over a grid $P$ divided into $2 \times 2$, $3 \times 3$, up to $7 \times 7$ blocks. Fig.\ref{result1} shows the variation of the average number of rounds needed to complete the partitioning with the grid size. Obviously, the number of rounds increases with the number of blocks. However, compared to the {\it DCSP algorithm} proposed in \cite{dibakar}, the present method completes in significantly fewer rounds. Therefore, during initialization, the proposed method converges faster to achieve the connected covers of the nodes. \vspace{-0.1cm} \begin{figure}[!ht] \begin{minipage}{0.435\textwidth} \includegraphics[scale=0.25]{Graph_msg.pdf} \caption{DCSP vs Proposed distributed algorithm in terms of average number of transmitted messages } \label{result3} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.48\textwidth} \includegraphics[keepaspectratio=true,scale=0.205]{lifetime.pdf} \caption{Extension of network lifetime using fault recovery technique vs without fault recovery } \label{lifetime} \end{minipage} \end{figure} In Fig.\ref{result2}, it is shown that the proposed algorithm results in a significant improvement in network diameter over the DCSP algorithm \cite{dibakar}.
In a network with large diameter, routing a message from a source node to a destination node incurs more delay and more communication between intermediate nodes. Therefore, low-diameter topologies are preferred for a partition, since they can aggregate the data using fewer hops, i.e., with less delay and fewer broadcasts. Also, Fig. \ref{result3} shows the significant improvement in the average number of transmitted messages per node in computing the connected set covers. Since the procedure terminates faster, using fewer rounds of computation, the total number of messages exchanged per node is also smaller here. Finally, Fig. \ref{lifetime} shows how the fault recovery algorithm enhances the network lifetime in the presence of faults. Though the proposed fault model includes any unpredictable node fault, in the simulation only node faults due to energy exhaustion have been taken into account. Simulation results show almost $50\%$ enhancement in network lifetime. \vspace{-0.4cm} \section{Conclusion} \vspace{-0.3cm} \label{sec:conclusion} In this paper, we have focused on the connected set cover partitioning problem. A self-organized, fast distributed algorithm is proposed for finding the maximum number of connected cover partitions. Also, a distributed fault recovery technique is developed to rearrange connected set covers in the presence of unpredictable node faults so as to satisfy both the connectivity and coverage criteria. Minimization of the network diameter of the partitions and significant improvements in terms of computation rounds and message overhead are also achieved by the proposed method. In summary, the proposed connected set cover partitioning technique, along with the localized fault recovery scheme, opens up new avenues for setting up self-organized wireless sensor networks with enhanced lifetime. \bibliographystyle{splncs} \vspace{-0.52cm}
WASHINGTON, DC, January 9, 2018– The EPDM Roofing Association (ERA), the leading trade association representing the manufacturers of EPDM single-ply roofing products and their suppliers, has joined The Business Council for Sustainable Energy (BCSE), a coalition of companies and trade associations that promotes the use of commercially-available clean energy technologies, products and services. Since its founding in 1992, BCSE has been an outspoken advocate of public and private initiatives to prepare the country for significant climate events, and will intensify its efforts in 2018 to support policies that promote the creation of resilient structures. Responding to the heightened interest in and concern over the resilience of the built environment, ERA last year launched its microsite, EpdmTheResilientRoof.org. The site details the need for resilience in roofing systems, and the specific attributes of EPDM that make it uniquely valuable in attaining resilience in a structure. In addition, to add context to the information about EPDM products, the website provides a clearinghouse of sources about resilience, as well as an up-to-date roster of recent articles, blog posts, statements of professional organizations and other pertinent information about resilience. To access the complete ERA website, go to www.epdmroofs.org. To access the resilience microsite, go to www.EpdmTheResilientRoof.org.
# 5.2 Right triangle trigonometry (Page 5/12)

## Measuring a distance indirectly

To find the height of a tree, a person walks to a point 30 feet from the base of the tree. She measures an angle of 57° between a line of sight to the top of the tree and the ground. Find the height of the tree.

We know that the angle of elevation is 57° and the adjacent side is 30 ft long. The opposite side is the unknown height. The trigonometric function relating the side opposite an angle to the side adjacent to it is the tangent, so we state our information in terms of the tangent of 57°, letting h be the unknown height: h = 30 tan 57° ≈ 46. The tree is approximately 46 feet tall.

How long a ladder is needed to reach a windowsill 50 feet above the ground if the ladder rests against the building making an angle of 5π/12 with the ground? Round to the nearest foot.

## Key equations

Cofunction identities:

cos t = sin(π/2 − t)    sin t = cos(π/2 − t)
tan t = cot(π/2 − t)    cot t = tan(π/2 − t)
sec t = csc(π/2 − t)    csc t = sec(π/2 − t)

## Key concepts

- We can define trigonometric functions as ratios of the side lengths of a right triangle.
- The same side lengths can be used to evaluate the trigonometric functions of either acute angle in a right triangle.
- We can evaluate the trigonometric functions of special angles, knowing the side lengths of the triangles in which they occur.
- Any two complementary angles could be the two acute angles of a right triangle.
- If two angles are complementary, the cofunction identities state that the sine of one equals the cosine of the other, and vice versa.
- We can use trigonometric functions of an angle to find unknown side lengths: select the trigonometric function representing the ratio of the unknown side to the known side.
- Right-triangle trigonometry permits the measurement of inaccessible heights and distances: create a right triangle in which the unknown height or distance is one of the sides, and another side and angle are known.

## Verbal

For the given right triangle, label the adjacent side, opposite side, and hypotenuse for the indicated angle.

When a right triangle with a hypotenuse of 1 is placed in the unit circle, which sides of the triangle correspond to the x- and y-coordinates?

The tangent of an angle compares which sides of the right triangle? (It is the ratio of the opposite side to the adjacent side.)

What is the relationship between the two acute angles in a right triangle?

Explain the cofunction identity. (For example, the sine of an angle is equal to the cosine of its complement, and the cosine of an angle is equal to the sine of its complement.)
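The worked tree-height example and the ladder exercise above can both be checked numerically; this is a quick sketch, not part of the original page:

```python
import math

# Tree height: adjacent side 30 ft, angle of elevation 57 degrees.
distance = 30.0
angle = math.radians(57)
height = distance * math.tan(angle)  # opposite = adjacent * tan(angle)
print(round(height))                 # 46, matching the worked answer

# Ladder exercise: reach 50 ft at an angle of 5*pi/12 with the ground.
# The ladder is the hypotenuse, so its length = opposite / sin(angle).
ladder = 50 / math.sin(5 * math.pi / 12)
print(round(ladder))                 # 52
```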
# How do you simplify (2-2i)*(-3+3i)?

Jan 4, 2017

Perform the multiplication using the F.O.I.L. method, substitute −1 for i², and then combine like terms.

#### Explanation:

Multiply the First terms: 2 · (−3) = −6

Multiply the Outside terms: 2 · 3i = 6i

Multiply the Inside terms: (−2i) · (−3) = 6i

Multiply the Last terms: (−2i) · 3i = −6i²

So (2 − 2i) · (−3 + 3i) = −6 + 6i + 6i − 6i².

Substitute −1 for i²: (2 − 2i) · (−3 + 3i) = −6 + 6i + 6i + 6.

Combine like terms: (2 − 2i) · (−3 + 3i) = 12i.

I hope that this helps.
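The FOIL expansion can be verified with Python's built-in complex type, where `1j` denotes the imaginary unit i:

```python
# Verify (2 - 2i) * (-3 + 3i) term by term and as a whole.
product = (2 - 2j) * (-3 + 3j)
print(product)  # 12j

first   = 2 * (-3)      # -6
outside = 2 * 3j        # +6i
inside  = (-2j) * (-3)  # +6i
last    = (-2j) * 3j    # -6i^2 = +6
assert first + outside + inside + last == product == 12j
```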
#include "config.h" #if ENABLE(MATHML) #include "RenderMathMLOperator.h" #include "FontSelector.h" #include "MathMLNames.h" #include "RenderText.h" namespace WebCore { using namespace MathMLNames; RenderMathMLOperator::RenderMathMLOperator(Node* container) : RenderMathMLBlock(container) , m_stretchHeight(0) , m_operator(0) { } RenderMathMLOperator::RenderMathMLOperator(Node* container, UChar operatorChar) : RenderMathMLBlock(container) , m_stretchHeight(0) , m_operator(convertHyphenMinusToMinusSign(operatorChar)) { } bool RenderMathMLOperator::isChildAllowed(RenderObject*, RenderStyle*) const { return false; } static const float gOperatorSpacer = 0.1f; static const float gOperatorExpansion = 1.2f; void RenderMathMLOperator::stretchToHeight(int height) { if (height == m_stretchHeight) return; m_stretchHeight = static_cast<int>(height * gOperatorExpansion); updateBoxModelInfoFromStyle(); setNeedsLayout(true); } void RenderMathMLOperator::layout() { // FIXME: This probably shouldn't be called here but when the operator // isn't stretched (e.g. outside of a mrow), it needs to be called somehow updateFromElement(); RenderBlock::layout(); } // This is a table of stretchy characters. // FIXME: Should this be read from the unicode characteristics somehow? 
// table: stretchy operator, top char, extension char, bottom char, middle char static struct StretchyCharacter { UChar character; UChar topGlyph; UChar extensionGlyph; UChar bottomGlyph; UChar middleGlyph; } stretchyCharacters[13] = { { 0x28 , 0x239b, 0x239c, 0x239d, 0x0 }, // left parenthesis { 0x29 , 0x239e, 0x239f, 0x23a0, 0x0 }, // right parenthesis { 0x5b , 0x23a1, 0x23a2, 0x23a3, 0x0 }, // left square bracket { 0x2308, 0x23a1, 0x23a2, 0x23a2, 0x0 }, // left ceiling { 0x230a, 0x23a2, 0x23a2, 0x23a3, 0x0 }, // left floor { 0x5d , 0x23a4, 0x23a5, 0x23a6, 0x0 }, // right square bracket { 0x2309, 0x23a4, 0x23a5, 0x23a5, 0x0 }, // right ceiling { 0x230b, 0x23a5, 0x23a5, 0x23a6, 0x0 }, // right floor { 0x7b , 0x23a7, 0x23aa, 0x23a9, 0x23a8 }, // left curly bracket { 0x7c , 0x23d0, 0x23d0, 0x23d0, 0x0 }, // vertical bar { 0x2016, 0x2016, 0x2016, 0x2016, 0x0 }, // double vertical line { 0x7d , 0x23ab, 0x23aa, 0x23ad, 0x23ac }, // right curly bracket { 0x222b, 0x2320, 0x23ae, 0x2321, 0x0 } // integral sign }; // We stack glyphs using a 14px height with a displayed glyph height // of 10px. The line height is set to less than the 14px so that there // are no blank spaces between the stacked glyphs. // // Certain glyphs (e.g. middle and bottom) need to be adjusted upwards // in the stack so that there isn't a gap. // // All of these settings are represented in the constants below. // FIXME: use fractions of style()->fontSize() for proper zooming/resizing. 
static const int gGlyphFontSize = 14; static const int gGlyphLineHeight = 11; static const int gMinimumStretchHeight = 24; static const int gGlyphHeight = 10; static const int gTopGlyphTopAdjust = 1; static const int gMiddleGlyphTopAdjust = -1; static const int gBottomGlyphTopAdjust = -3; static const float gMinimumRatioForStretch = 0.10f; void RenderMathMLOperator::updateFromElement() { // Destroy our current children children()->destroyLeftoverChildren(); // Since we share a node with our children, destroying our children will set our node's // renderer to 0, so we need to re-set it back to this. node()->setRenderer(this); // If the operator is fixed, it will be contained in m_operator UChar firstChar = m_operator; // This boolean indicates whether stretching is disabled via the markup. bool stretchDisabled = false; // We made need the element later if we can't stretch. if (node()->nodeType() == Node::ELEMENT_NODE) { if (Element* mo = static_cast<Element*>(node())) { AtomicString stretchyAttr = mo->getAttribute(MathMLNames::stretchyAttr); stretchDisabled = equalIgnoringCase(stretchyAttr, "false"); // If stretching isn't disabled, get the character from the text content. if (!stretchDisabled && !firstChar) { String opText = mo->textContent(); for (unsigned int i = 0; !firstChar && i < opText.length(); i++) { if (!isSpaceOrNewline(opText[i])) firstChar = opText[i]; } } } } // The 'index' holds the stretchable character's glyph information int index = -1; // isStretchy indicates whether the character is streatchable via a number of factors. bool isStretchy = false; // Check for a stretchable character. if (!stretchDisabled && firstChar) { const int maxIndex = WTF_ARRAY_LENGTH(stretchyCharacters); for (index++; index < maxIndex; index++) { if (stretchyCharacters[index].character == firstChar) { isStretchy = true; break; } } } // We only stretch character if the stretch height is larger than a minimum size (e.g. 24px). 
bool shouldStretch = isStretchy && m_stretchHeight>gMinimumStretchHeight; // Either stretch is disabled or we don't have a stretchable character over the minimum height if (stretchDisabled || !shouldStretch) { m_isStacked = false; RenderBlock* container = new (renderArena()) RenderMathMLBlock(node()); RefPtr<RenderStyle> newStyle = RenderStyle::create(); newStyle->inheritFrom(style()); newStyle->setDisplay(INLINE_BLOCK); newStyle->setVerticalAlign(BASELINE); // Check for a stretchable character that is under the minimum height and use the // font size to adjust the glyph size. int currentFontSize = style()->fontSize(); if (!stretchDisabled && isStretchy && m_stretchHeight > 0 && m_stretchHeight <= gMinimumStretchHeight && m_stretchHeight > currentFontSize) { FontDescription desc; desc.setIsAbsoluteSize(true); desc.setSpecifiedSize(m_stretchHeight); desc.setComputedSize(m_stretchHeight); newStyle->setFontDescription(desc); newStyle->font().update(newStyle->font().fontSelector()); } container->setStyle(newStyle.release()); addChild(container); // Build the text of the operator. RenderText* text = 0; if (m_operator) text = new (renderArena()) RenderText(node(), StringImpl::create(&m_operator, 1)); else if (node()->nodeType() == Node::ELEMENT_NODE) if (Element* mo = static_cast<Element*>(node())) text = new (renderArena()) RenderText(node(), mo->textContent().replace(hyphenMinus, minusSign).impl()); // If we can't figure out the text, leave it blank. if (text) { RefPtr<RenderStyle> textStyle = RenderStyle::create(); textStyle->inheritFrom(container->style()); text->setStyle(textStyle.release()); container->addChild(text); } } else { // Build stretchable characters as a stack of glyphs. m_isStacked = true; if (stretchyCharacters[index].middleGlyph) { // We have a middle glyph (e.g. a curly bracket) that requires special processing. int half = (m_stretchHeight - gGlyphHeight) / 2; if (half <= gGlyphHeight) { // We only have enough space for a single middle glyph. 
createGlyph(stretchyCharacters[index].topGlyph, half, gTopGlyphTopAdjust); createGlyph(stretchyCharacters[index].middleGlyph, gGlyphHeight, gMiddleGlyphTopAdjust); createGlyph(stretchyCharacters[index].bottomGlyph, 0, gBottomGlyphTopAdjust); } else { // We have to extend both the top and bottom to the middle. createGlyph(stretchyCharacters[index].topGlyph, gGlyphHeight, gTopGlyphTopAdjust); int remaining = half - gGlyphHeight; while (remaining > 0) { if (remaining < gGlyphHeight) { createGlyph(stretchyCharacters[index].extensionGlyph, remaining); remaining = 0; } else { createGlyph(stretchyCharacters[index].extensionGlyph, gGlyphHeight); remaining -= gGlyphHeight; } } // The middle glyph in the stack. createGlyph(stretchyCharacters[index].middleGlyph, gGlyphHeight, gMiddleGlyphTopAdjust); // The remaining is the top half minus the middle glyph height. remaining = half - gGlyphHeight; // We need to make sure we have the full height in case the height is odd. if (m_stretchHeight % 2 == 1) remaining++; // Extend to the bottom glyph. while (remaining > 0) { if (remaining < gGlyphHeight) { createGlyph(stretchyCharacters[index].extensionGlyph, remaining); remaining = 0; } else { createGlyph(stretchyCharacters[index].extensionGlyph, gGlyphHeight); remaining -= gGlyphHeight; } } // The bottom glyph in the stack. createGlyph(stretchyCharacters[index].bottomGlyph, 0, gBottomGlyphTopAdjust); } } else { // We do not have a middle glyph and so we just extend from the top to the bottom glyph. 
int remaining = m_stretchHeight - 2 * gGlyphHeight; createGlyph(stretchyCharacters[index].topGlyph, gGlyphHeight, gTopGlyphTopAdjust); while (remaining > 0) { if (remaining < gGlyphHeight) { createGlyph(stretchyCharacters[index].extensionGlyph, remaining); remaining = 0; } else { createGlyph(stretchyCharacters[index].extensionGlyph, gGlyphHeight); remaining -= gGlyphHeight; } } createGlyph(stretchyCharacters[index].bottomGlyph, 0, gBottomGlyphTopAdjust); } } } RefPtr<RenderStyle> RenderMathMLOperator::createStackableStyle(int size, int topRelative) { RefPtr<RenderStyle> newStyle = RenderStyle::create(); newStyle->inheritFrom(style()); newStyle->setDisplay(BLOCK); FontDescription desc; desc.setIsAbsoluteSize(true); desc.setSpecifiedSize(gGlyphFontSize); desc.setComputedSize(gGlyphFontSize); newStyle->setFontDescription(desc); newStyle->font().update(newStyle->font().fontSelector()); newStyle->setLineHeight(Length(gGlyphLineHeight, Fixed)); newStyle->setVerticalAlign(TOP); if (size > 0) newStyle->setMaxHeight(Length(size, Fixed)); newStyle->setOverflowY(OHIDDEN); newStyle->setOverflowX(OHIDDEN); if (topRelative) { newStyle->setTop(Length(topRelative, Fixed)); newStyle->setPosition(RelativePosition); } return newStyle; } RenderBlock* RenderMathMLOperator::createGlyph(UChar glyph, int size, int charRelative, int topRelative) { RenderBlock* container = new (renderArena()) RenderMathMLBlock(node()); container->setStyle(createStackableStyle(size, topRelative).release()); addChild(container); RenderBlock* parent = container; if (charRelative) { RenderBlock* charBlock = new (renderArena()) RenderBlock(node()); RefPtr<RenderStyle> charStyle = RenderStyle::create(); charStyle->inheritFrom(container->style()); charStyle->setDisplay(INLINE_BLOCK); charStyle->setTop(Length(charRelative, Fixed)); charStyle->setPosition(RelativePosition); charBlock->setStyle(charStyle); parent->addChild(charBlock); parent = charBlock; } RenderText* text = new (renderArena()) RenderText(node(), 
StringImpl::create(&glyph, 1)); text->setStyle(container->style()); parent->addChild(text); return container; } int RenderMathMLOperator::baselinePosition(FontBaseline, bool firstLine, LineDirectionMode lineDirection, LinePositionMode linePositionMode) const { if (m_isStacked) return m_stretchHeight * 2 / 3 - (m_stretchHeight - static_cast<int>(m_stretchHeight / gOperatorExpansion)) / 2; return RenderBlock::baselinePosition(AlphabeticBaseline, firstLine, lineDirection, linePositionMode); } } #endif
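The extension-glyph loops that appear several times in `updateFromElement` above all implement the same fill rule: cover the remaining height with fixed-height glyph slices, then one shorter slice for whatever is left. A small sketch of that arithmetic in Python (`extension_slices` is an illustrative name; `glyph_height` defaults to the 10px `gGlyphHeight` constant from the file):

```python
def extension_slices(remaining, glyph_height=10):
    """Heights of the extension-glyph blocks used to fill `remaining` pixels:
    full-height slices, followed by one partial slice for the remainder."""
    slices = []
    while remaining > 0:
        step = min(remaining, glyph_height)
        slices.append(step)
        remaining -= step
    return slices
```

For a 24px gap this yields slices of 10, 10 and 4 pixels, mirroring the repeated `while (remaining > 0)` loops in the C++.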
{ "redpajama_set_name": "RedPajamaGithub" }
7,226
A cubic mile (abbreviation: mi cu or mi3) is an imperial unit of volume and is not part of the metric system. It is used in the United States, Canada and the United Kingdom. It is defined as the volume of a cube with sides 1 mile (~ ) in length. Symbols There is no universally recognized symbol. The following symbols are used: cubic mile mille/-3 mi/-3 mille^3 mi^3 mile3 mi3 In the English-speaking world: cubic mile cu mile cu mi Conversions See also Anglo-Saxon units of measurement Square mile Notes and references Unit of volume Anglo-Saxon unit of measurement
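The conversion to metric units follows directly from the definition above, since the international mile is exactly 1609.344 m; a quick sketch:

```python
MILE_IN_METRES = 1609.344            # international mile, exact by definition

cubic_mile_m3 = MILE_IN_METRES ** 3  # volume of a cube with 1-mile sides
cubic_mile_km3 = cubic_mile_m3 / 1e9

print(round(cubic_mile_km3, 6))      # 4.168182 (cubic kilometres)
```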
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,704
Q: Storm message timeout works improperly I found the following when I used debug mode. TOPOLOGY_MESSAGE_TIMEOUT_SECS is set to 90s. The spout should not take more than 90s from emitting a tuple to receiving a fail message, so why is there a 135s gap in the log? 2019-05-14 16:53:12.037 o.a.s.s.CheckpointSpout Thread-13-$checkpointspout-executor[1 1] [DEBUG] Current state CheckPointState{txid=7, state=COMMITTING}, emitting txid 7, action COMMIT 2019-05-14 16:55:27.097 o.a.s.s.CheckpointSpout Thread-13-$checkpointspout-executor[1 1] [DEBUG] Got fail with msgid 7 2019-05-14 16:55:27.097 o.a.s.s.CheckpointSpout Thread-13-$checkpointspout-executor[1 1] [DEBUG] Checkpoint failed, will trigger recovery A: The message timeout is not a hard limit. Messages may take up to 2x the timeout to actually time out. This is due to a performance optimization: instead of timing out tuples every second, we keep two buckets. When a tuple is created, it gets put into bucket 1. Once the timeout has passed, we rotate all bucket 1 tuples into bucket 2, and fail all bucket 2 tuples. This lets us guarantee that a tuple gets at least the full message timeout to complete, while also being cheap to compute.
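The two-bucket rotation described in the answer can be sketched in a few lines of Python (illustrative names, not a transcription of Storm's actual rotating-map internals):

```python
class RotatingTimeoutTracker:
    """Pending tuples live in two buckets; a tuple is failed only after
    surviving two rotations, so it waits between 1x and 2x the configured
    timeout before being expired."""

    def __init__(self):
        self.current = {}   # tuples created since the last rotation
        self.old = {}       # tuples that have survived one rotation

    def add(self, msg_id, tup):
        self.current[msg_id] = tup

    def ack(self, msg_id):
        # an acked tuple is removed from whichever bucket holds it
        self.current.pop(msg_id, None)
        self.old.pop(msg_id, None)

    def rotate(self):
        """Called once per timeout interval. Everything still in the old
        bucket has now waited at least one full timeout: fail it."""
        expired = self.old
        self.old = self.current
        self.current = {}
        return expired  # the caller fails these tuples
```

Rotating once per configured timeout gives every tuple at least one full timeout, and at most two, before it is failed — which is why the fail in the log arrived after roughly 135s rather than exactly 90s.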
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,304
package org.smartdata.hdfs.action; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hdfs.DistributedFileSystem; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.smartdata.action.annotation.ActionSignature; import org.smartdata.hdfs.CompatibilityHelperLoader; import java.io.IOException; import java.net.URI; import java.util.Map; /** * action to truncate file */ @ActionSignature( actionId = "truncate", displayName = "truncate", usage = HdfsAction.FILE_PATH + " $src " + TruncateAction.LENGTH + " $length" ) public class TruncateAction extends HdfsAction { private static final Logger LOG = LoggerFactory.getLogger(TruncateAction.class); public static final String LENGTH = "-length"; private String srcPath; private long length; @Override public void init(Map<String, String> args) { super.init(args); srcPath = args.get(FILE_PATH); this.length = -1; if (args.containsKey(LENGTH)) { this.length = Long.parseLong(args.get(LENGTH)); } } @Override protected void execute() throws Exception { if (srcPath == null) { throw new IllegalArgumentException("File src is missing."); } if (length == -1) { throw new IllegalArgumentException("Length is missing"); } System.out.println(truncateClusterFile(srcPath, length)); } private boolean truncateClusterFile(String srcFile, long length) throws IOException { if (srcFile.startsWith("hdfs")) { // TODO read conf from files Configuration conf = new Configuration(); DistributedFileSystem fs = new DistributedFileSystem(); fs.initialize(URI.create(srcFile), conf); //check the length long oldLength = fs.getFileStatus(new Path(srcFile)).getLen(); if (length > oldLength) { throw new IllegalArgumentException("Length is illegal"); } else { return CompatibilityHelperLoader.getHelper().truncate(fs, srcPath, length); } } else { long oldLength = dfsClient.getFileInfo(srcFile).getLen(); if (length > oldLength) { throw new 
IllegalArgumentException("Length is illegal"); } else { return CompatibilityHelperLoader.getHelper().truncate(dfsClient, srcPath, length); } } } }
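The length guard in `truncateClusterFile` above (truncate may only shrink a file) can be sketched independently of HDFS; `validate_truncate_length` is an illustrative name, not part of this codebase:

```python
def validate_truncate_length(current_length, new_length):
    """Mirror the checks in TruncateAction: the target length must be
    non-negative and no larger than the current file size."""
    if new_length < 0:
        raise ValueError("Length is missing")
    if new_length > current_length:
        raise ValueError("Length is illegal")
    return new_length
```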
{ "redpajama_set_name": "RedPajamaGithub" }
8,455
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="description" content=""> <meta name="author" content=""> <title>People Search</title> <!-- Bootstrap Core CSS --> <link rel="stylesheet" href="{{ url_for('static', filename='css/bootstrap.min.css') }}" type="text/css"> <!-- Custom Fonts --> <link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Merriweather:400,300,300italic,400italic,700,700italic,900,900italic' rel='stylesheet' type='text/css'> <link rel="stylesheet" href="{{ url_for('static', filename='font-awesome/css/font-awesome.min.css') }}" type="text/css"> <!-- Plugin CSS --> <link rel="stylesheet" href="{{ url_for('static', filename='css/animate.min.css') }}" type="text/css"> <!-- Custom CSS --> <link rel="stylesheet" href="{{ url_for('static', filename='css/creative.css') }}" type="text/css"> <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries --> <!-- WARNING: Respond.js doesn't work if you view the page via file:// --> <!--[if lt IE 9]> <script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script> <script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script> <![endif]--> </head> <body id="page-top"> <nav id="mainNav" class="navbar navbar-default navbar-fixed-top"> <div class="container-fluid"> <!-- Brand and toggle get grouped for better mobile display --> <div class="navbar-header"> <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand 
page-scroll" href="#page-top">REVERSE PEOPLE SEARCH</a> </div> <!-- Collect the nav links, forms, and other content for toggling --> <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1"> <ul class="nav navbar-nav navbar-right"> <li> <a class="page-scroll" href="#about">About</a> </li> <li> <a class="page-scroll" href="#services">What We Do</a> </li> <li> <a class="page-scroll" href="#portfolio">Demo</a> </li> <li> <a class="page-scroll" href="#contact">Contact</a> </li> </ul> </div> <!-- /.navbar-collapse --> </div> <!-- /.container-fluid --> </nav> <header> <div class="header-content"> <div class="header-content-inner"> <h1>REVERSE PEOPLE SEARCH</h1> <hr> <p>Reverse people search tool is software that enables you to find people by phone number or e-mail address. Yes, it's that simple.</p> <a href="#about" class="btn btn-primary btn-xl page-scroll">Find Out More</a> </div> </div> </header> <section class="bg-primary" id="about"> <div class="container"> <div class="row"> <div class="col-lg-8 col-lg-offset-2 text-center"> <h2 class="section-heading">Find People In A Second</h2> <hr class="light"> <p class="text-faded">Reverse people search tool has everything you need to get you the right information, whether it is a picture, phone number or home address.
Don't worry, we get what you need!</p> <form method="post"> <input type="submit" value="Get Started!"/> </form> <!-- <a href="../reverseSearch.html" class="btn btn-default btn-xl">Get Started!</a> --> </div> </div> </div> </section> <section id="services"> <div class="container"> <div class="row"> <div class="col-lg-12 text-center"> <h2 class="section-heading">Our Service</h2> <hr class="primary"> </div> </div> </div> <div class="container"> <div class="row"> <div class="col-lg-3 col-md-6 text-center"> <div class="service-box"> <i class="fa fa-4x fa-diamond wow bounceIn text-primary"></i> <h3>Find People</h3> <p class="text-muted">Our tool contains one of the largest online directory with contact information you need.</p> </div> </div> <div class="col-lg-3 col-md-6 text-center"> <div class="service-box"> <i class="fa fa-4x fa-paper-plane wow bounceIn text-primary" data-wow-delay=".1s"></i> <h3>Ready to Ship</h3> <p class="text-muted">You can use this information for your business and ready to make an impact!</p> </div> </div> <div class="col-lg-3 col-md-6 text-center"> <div class="service-box"> <i class="fa fa-4x fa-newspaper-o wow bounceIn text-primary" data-wow-delay=".2s"></i> <h3>Up to Date</h3> <p class="text-muted">Our database updates regularly.</p> </div> </div> <div class="col-lg-3 col-md-6 text-center"> <div class="service-box"> <i class="fa fa-4x fa-heart wow bounceIn text-primary" data-wow-delay=".3s"></i> <h3>Made with Love</h3> <p class="text-muted">We already made this tool come in handy these days!</p> </div> </div> </div> </div> </section> <section class="no-padding" id="portfolio"> <div class="container-fluid"> <div class="row no-gutter"> <div class="col-lg-4 col-sm-6"> <a href="#" class="portfolio-box"> <img src="{{ url_for('static', filename='img/portfolio/1.jpg') }}" class="img-responsive" alt=""> <div class="portfolio-box-caption"> <div class="portfolio-box-caption-content"> <div class="project-category text-faded"> Category </div> <div 
class="project-name"> Full Name </div> </div> </div> </a> </div> <div class="col-lg-4 col-sm-6"> <a href="#" class="portfolio-box"> <img src="{{ url_for('static', filename='img/portfolio/2.jpg') }}" class="img-responsive" alt=""> <div class="portfolio-box-caption"> <div class="portfolio-box-caption-content"> <div class="project-category text-faded"> Category </div> <div class="project-name"> Home Address </div> </div> </div> </a> </div> <div class="col-lg-4 col-sm-6"> <a href="#" class="portfolio-box"> <img src="{{ url_for('static', filename='img/portfolio/3.jpg') }}" class="img-responsive" alt=""> <div class="portfolio-box-caption"> <div class="portfolio-box-caption-content"> <div class="project-category text-faded"> Category </div> <div class="project-name"> Phone Carrier Provider </div> </div> </div> </a> </div> <div class="col-lg-4 col-sm-6"> <a href="#" class="portfolio-box"> <img src="{{ url_for('static', filename='img/portfolio/4.jpg') }}" class="img-responsive" alt=""> <div class="portfolio-box-caption"> <div class="portfolio-box-caption-content"> <div class="project-category text-faded"> Category </div> <div class="project-name"> Google Image Search </div> </div> </div> </a> </div> <div class="col-lg-4 col-sm-6"> <a href="#" class="portfolio-box"> <img src="{{ url_for('static', filename='img/portfolio/5.jpg') }}" class="img-responsive" alt=""> <div class="portfolio-box-caption"> <div class="portfolio-box-caption-content"> <div class="project-category text-faded"> Category </div> <div class="project-name"> Facebook Picture </div> </div> </div> </a> </div> <div class="col-lg-4 col-sm-6"> <a href="#" class="portfolio-box"> <img src="{{ url_for('static', filename='img/portfolio/6.jpg') }}" class="img-responsive" alt=""> <div class="portfolio-box-caption"> <div class="portfolio-box-caption-content"> <div class="project-category text-faded"> Category </div> <div class="project-name"> You Name It, We Will Make It! 
</div> </div> </div> </a> </div> </div> </div> </section> <aside class="bg-dark"> <div class="container text-center"> <div class="call-to-action"> <h2>Start To Use Reverse People Search Tool Today!</h2> <a href="#" class="btn btn-default btn-xl wow tada">Try Now!</a> </div> </div> </aside> <section id="contact"> <div class="container"> <div class="row"> <div class="col-lg-8 col-lg-offset-2 text-center"> <h2 class="section-heading">Let's Get In Touch!</h2> <hr class="primary"> <p>Ready to try reverse people search tool with us? That's great! Give us a call or send us an email if you have any questions! We will get back to you as soon as possible!</p> </div> <div class="col-lg-4 col-lg-offset-2 text-center"> <i class="fa fa-phone fa-3x wow bounceIn"></i> <p>123-456-6789</p> </div> <div class="col-lg-4 text-center"> <i class="fa fa-envelope-o fa-3x wow bounceIn" data-wow-delay=".1s"></i> <p><a href="mailto:your-email@your-domain.com">reversepeoplesearch@gmail.com</a></p> </div> </div> </div> </section> <!-- jQuery --> <script src="{{ url_for('static', filename='js/jquery.js') }}"></script> <!-- Bootstrap Core JavaScript --> <script src="{{ url_for('static', filename='js/bootstrap.min.js') }}"></script> <!-- Plugin JavaScript --> <script src="{{ url_for('static', filename='js/jquery.easing.min.js') }}"></script> <script src="{{ url_for('static', filename='js/jquery.fittext.js') }}"></script> <script src="{{ url_for('static', filename='js/wow.min.js') }}"></script> <!-- Custom Theme JavaScript --> <script src="{{ url_for('static', filename='js/creative.js') }}"></script> </body> </html>
{ "redpajama_set_name": "RedPajamaGithub" }
7,600
Really great achievement Mattias! I wonder how you are able to keep track of Rosetta's position relative to 67P; does ESA release this data? The Rosetta mission makes its SPICE kernels available to the public. They contain data that describe the position, orientation and speed of the spacecraft, its instruments and its target bodies. I used the SPICE library for C and wrote a small program that feeds me the data I need. Mattias, I enjoy watching the realtime view once or twice a day. Thank you for this. A comment: Your current view (24 April 9:40 UTC) is similar to the angle seen in "Cometwatch 14 March 6 hours later". But there seems to be a lot more shadow in your view than in the 14 March picture. From 125.51 km the "dark side" is displayed, and the dark side looks "well" illuminated. I don't know if this display corresponds with Rosetta's ability right now to see the "dark side"? The seasons are changing on the comet and we are finally getting some sun reaching over the edge. As far as I can tell my simulation is correct. It matches well with the navcams released just days ago. I can't wait to see some of the navcam images taken these last few days. The glimpses of the terrain we have seen so far are different from the stuff on the "lit" side. The model really looks hideous from this angle, with the lack of data showing as ugly flat patches. Hopefully there will be images released soon that give me the coverage needed so that I can build something a little less horrible. If you could mark the approximate location of Philae on your pictures when it is visible from Rosetta, that would interest many people. I used your past images to look at 13 June 20:30 UTC and saw that Rosetta was almost at the zenith above Philae then. It is difficult for me to figure out, but perhaps it isn't difficult for you to answer this question with the model: when was Rosetta around the zenith above Philae before that date and time? Then one can perhaps confirm that Philae was not awake then.
{ "redpajama_set_name": "RedPajamaC4" }
9,554
Q: importing modules with submodules from deep in a library here at office we have a library named after the company name and inside of it sublibraries, per project more or less, and in each sublibrary there might be more modules or libraries. we are using Django and this makes our hierarchy a couple of steps deeper... I am a bit perplex about the differences among the following import instructions: 1: import company.productline.specific.models, company.productline.base.models specific, base = company.productline.specific, company.productline.base 2: import company.productline.specific.models, company.productline.base.models from company.productline import specific, base 3: from company.productline import specific, base import company.productline.specific.models, company.productline.base.models the first style imports only the models? what are then the names specific and base made available in the current namespace? what happens in the initialization of modules if one imports first submodules and only afterwards the containing libraries? maybe the neatest style is the last one, where it is clear (at least to me) that I first import the two modules and putting their names directly in the current namespace and that the second import adds the model submodule to both modules just imported. on the other hand, (1) allows me to import only the inner modules and to refer to them in a compact though clear way (specific.models and base.models) not so sure whether this is question, but I'm curious to read comments. A: The three examples above are all equivalent in practice. All of them are weird, though. There is no reason to do from company.productline import specific and import company.productline.specific.models You can (most of the time) just access models by specific.models after the first import. 
It seems reasonable in this case to do from company.productline import base from company.productline import specific And then access these like base.models, specific.whatever, etc. If "specific" is a package, you also need to do an "import models" in its __init__.py to be able to access specific.models. A: so I have looked into it a bit deeper, using this further useless package: A:(__init__.py: print 'importing A', B:(__init__.py: print 'importing B', C1:(__init__.py: print 'importing C1', D:(__init__.py: print 'importing D')) C2:(__init__.py: print 'importing C2', D:(__init__.py: print 'importing D')))) notice that C1 and C2 contain two different modules, both named D in their different namespaces. I will need them both, I don't want to use the whole path A.B.C1.D and A.B.C2.D because that looks too clumsy and I can't put them both in the current namespace because one would overwrite the other and -no- I don't like the idea of changing their names. what I want is to have C1 and C2 in the current namespace and to load the two included D modules. oh, yes: and I want the code to be readable. I tried the two forms from A.B import C1 and the much uglier one import A.B.C1 C1 = A.B.C1 and I would conclude that they are equivalent: Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2 >>> from A.B import C1 importing A importing B importing C1 >>> C1 <module 'A.B.C1' from 'A/B/C1/__init__.pyc'> >>> Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2 >>> import A.B.C1 importing A importing B importing C1 >>> C1=A.B.C1 >>> C1 <module 'A.B.C1' from 'A/B/C1/__init__.pyc'> >>> Importing package C1 or C2 does not import included modules just because they are there. and too bad that the form from A.B import C1.D is not syntactically accepted: that would be a nice compact thing to have. on the other hand, I am offered the opportunity to do so in A.B.C1.__init__.py if I please.
so if I append the line import D to the __init__.py in A.B.C1, this is what happens: Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2 >>> from A.B import C1 importing A importing B importing C1 importing D >>> importing included modules is probably best done at the end of the package initialization. considering all this, and given some specific Django behaviour (that makes it difficult/impossible to automatically import models while importing the package), I think I prefer style 3.
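The binding behaviour demonstrated with the toy A.B package can be reproduced with any standard-library package; a sketch using `xml.etree` in place of the toy package:

```python
# "import X.Y.Z" binds only the top-level name X in the importing namespace;
# the submodules are reachable as attributes of X.
import xml.etree.ElementTree

# "from X.Y import Z" binds Z directly, and both forms yield the very same
# module object.
from xml.etree import ElementTree

assert ElementTree is xml.etree.ElementTree
print(ElementTree.__name__)  # xml.etree.ElementTree
```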
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,482
\section{Introduction} \label{sec:intro} With 4.66 billion users worldwide,\footnote{\url{https://bit.ly/3d2VzRB} Accessed 11 March 2021.} the Internet has become a mainstream medium for social interaction and information dissemination. Social media is an integral part of the Internet, as it enables users and organizations worldwide to connect, socialize, and express themselves with ease to a large audience. With 560 million Internet users, India ranks second in the world. However, English is the first language for only 0.02\% of the Indian population,\footnote{\url{https://bit.ly/3vtwWno} Accessed 11 March 2021.} thereby creating a language barrier for the majority of the population~\cite{e_gov_eng}. The diverse set of languages spoken in India raises the need for multilingual social media platforms in Indian languages. \par Recognising this need, Koo is a recent attempt at building a multilingual Indian social network. Formerly known as \textit{Ku Koo Ku}, it was launched in March 2020 as an Indian alternative to the popular social networking service, Twitter. Originally available in Kannada, Koo allows for and encourages discourse in Indian languages and currently supports 9 other Indian languages apart from English.\footnote{At the time of data collection, support for three of these was still under development.} It plans to include more Indian languages soon. 
\par Koo gained popularity during August 2020, when it won the Government of India's \textit{Aatmanirbhar} Bharat App Innovation Challenge Award.\footnote{\url{https://bit.ly/2RWGI3w} Accessed 12 March 2021.} During the Indian Farmers' protest in January 2021, Twitter entered a week-long standoff with the Indian Government, over their refusal to block accounts that the Indian Government claimed were spreading misinformation.\footnote{\url{https://bit.ly/3zwWaUS} Accessed 21 March 2021.} Consequently, several Indian Government ministers,\footnote{\url{https://bit.ly/3gyWWZ7} Accessed 21 March 2021.} officials and agencies\footnote{\url{https://bit.ly/3xuECY1} Accessed 21 March 2021.} created accounts on Koo. There has been a general promotion of Koo amongst Government organizations since this event, sparking a rise in the platform's popularity. It again saw an increase in the influx of users when Koo became the first platform to agree to abide by the Government of India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) 2021 \cite{meity}, while Twitter and Facebook resisted them. Koo has even announced its entry into Nigeria, with the Nigerian Government creating an account on the platform, following the ban of Twitter in the country.\footnote{\url{https://bit.ly/2RXEaCi} Accessed 14 June 2021.} \par Koo uniquely positions itself as an alternative to mainstream online social networks like Twitter. A cardinal reason for its popularity in the Indian context is its multilingual support that allows for more inclusivity. The language barrier is broken, and more people can now join social media and express and share their opinions in the language that they choose. Additionally, Koo has received support from the Indian and Nigerian government agencies, contributing to its acclaim. With the ever-increasing political discourse on social media, Koo becomes an important platform from a political standpoint. 
\par The rich diversity of Indian languages on the platform, coupled with its sudden rise and steady expansion in popularity and political backing, motivates the need to understand what goes on inside such a unique platform. To understand this, we provide, to the best of our knowledge, the first characterization of the Koo social network. We do this by investigating the following research questions: \begin{enumerate} \item \textbf{RQ1:} What are the characteristics and demographics of Koo users? When did they join the platform? \item \textbf{RQ2:} What kind of content is posted on the platform? Which languages are most popular? \item \textbf{RQ3:} What are the network properties of Koo and how do they differ from Twitter's? What communities are formed on the platform? \end{enumerate} We were able to collect data for 4.07 million users out of the total 4.7 million\footnote{As of 16 March 2021, \url{https://bit.ly/3d2VcGH}} users and 163 million follower-following edges in our data (Section~\ref{sec:data_collection}). In Section~\ref{sec:rq1}, we analyse the user demographics: we found that even though the users on the platform are predominantly male, users who report as female are more active and have more followers and likes on average. We also found that the influx of users surged on the platform in August 2020, when it won the \textit{Aatmanirbhar} Bharat App Innovation Challenge, and in the early months of 2021, when the government promoted the app. On analysing the content in Section~\ref{sec:rq2}, we saw that the most popular language on Koo is Hindi, followed by English, Kannada and Telugu. We also observed a Koo vs Twitter rhetoric on the platform and support for the Indian political party BJP. We saw that major political figures were among the most mentioned users on Koo. On comparing the network properties of Koo and Twitter (Section~\ref{sec:rq3}), we found Koo to have a dense, well-connected network with a higher clustering coefficient.
Furthermore, we observed distinct communities of users on Koo, which are based on language, with English-speaking users more centrally placed and having connections to Hindi and Kannada users. \par Through our work, we make the following contributions: \begin{enumerate} \item \textit{Perform an extensive characterization of the new Indian social network Koo, in terms of its user demographics and content.} \item \textit{Present the first dataset of users, their connections, and content on Koo.} \item \textit{Study the network and communities formed on this multilingual platform.} \end{enumerate} \section{Related Work} In this section, we review previous work on social media analysis, in particular the multilingual nature of platforms. \par There has been a vast amount of research on the popular social media platform Twitter. This includes its characterization in its initial years \cite{twitter2010} and the documentation of its steady user growth \cite{twt_pop}. With the growth of the platform, the amount of research increased as well, with new topics of interest emerging, such as fake news and misinformation \cite{fake_news, misinformation}, automated bots \cite{bots}, and hate speech \cite{hatespeech, hate_2}. Some works have explored Indian languages on Twitter, with many using it as a corpus for Indic NLP research \cite{twt_ind_nlp, twt_ind_nlp_2} or studying the topics of discourse in the Indian subcontinent \cite{indian_twt, indian_twt_2}. Our work differs substantially from these as we focus on a platform dedicated to promoting Indian languages, with a predominantly Indian user base. \par There have been studies exploring the emergence of new social platforms with unique properties, such as Whisper for its anonymity \cite{whisper}, TikTok for its short videos \cite{tiktok}, and Twitch for mixed media \cite{twitch}.
Some alt-right Twitter alternatives like Gab \cite{zannettou2018gab, lima2018inside} and Parler \cite{prabhu2021capitol} have also been analysed in the past. Other social media platforms in local languages have also been studied, like VKontakte in Russia \cite{kozitsin2020modeling} and Weibo in China \cite{gao2012comparative}. \par Multiple studies look at the multilingual aspect of other social platforms as well, such as blogs \cite{hale2012net} and reviews \cite{hale2016user}; but most of these find English to be dominant over the other languages. Our study focuses on a platform where a native language is dominant. Agarwal et al. \cite{sharechat} do study the qualities of an Indian multilingual social network, Sharechat. However, their work focuses mainly on image-based content posted by users. Koo is primarily a text-based platform, making an inquiry into its language aspects even more pertinent. Apart from the content posted, our work also lays emphasis on the user characteristics and network of Koo and draws a comparison of the platform's properties to those of Twitter. The choice of a novel platform and the context behind its rapid rise in popularity differentiate our work from the rest. \section{Methodology} \label{sec:data_collection} We describe next our data collection methodology. We start by presenting an overview of the platform. \subsection{Overview of Koo platform} Similar to Twitter, Koo allows logged-in users to share microblog posts known as ``\textit{koo}s''. While signing up, users can choose their display name, handle, language and other personal details such as gender, marital status, and birth date. Once logged in, users can post koos, which can be at most 400 characters long, whereas Twitter allows tweets of at most 280 characters. Users can also comment on others' koos or re-share (also known as ``\textit{rekoo}'') them with or without adding a comment.
Koo provides transliteration support while typing in native Indian scripts. The platform's user interface, the users' feed, and the list of recommended accounts to follow change according to the user's chosen language, enabling the formation of linguistic communities on the platform. \subsection{Data Collection} We collected data pertaining to the users' profiles, the follower-following network, and content (koos, rekoos, comments, likes, and mentions) posted on the platform, from Koo's public API that serves the Koo web and mobile applications. Data collection started on 26 February 2021 and lasted until 11 March 2021. Table~\ref{tab:data_stats} summarises statistics of our proposed dataset. \par Koo's stark resemblance to Twitter also invites a comparison between user behaviour on the two platforms. Hence, we created a dataset of Koo and Twitter user IDs that correspond to the same entity. This dataset can be used for automated identity resolution tasks and cross-platform analysis of Koo and Twitter. \par We make our dataset public in adherence to FAIR principles as described in Section~\ref{sec:fair_principles}. \begin{table}[h!] \centering \begin{tabular}{lr} \toprule \textbf{Entity} & \textbf{Count} \\ \midrule User profiles & 4,061,735 \\ Follower-following relationships & 163,117,465 \\ Koos & 7,339,684 \\ Rekoos & 2,828,158 \\ Rekoo with Comments & 413,955 \\ Comments & 4,793,492 \\ \bottomrule \end{tabular} \caption{Statistics of the dataset we collected from Koo.} \label{tab:data_stats} \end{table} \subsubsection{Koo Data Collection} We collected and analyzed data about the general discourse on the platform and the participating users, without restricting ourselves to certain trends, hashtags, or topics. Analyzing close-to-complete networks enables a significant understanding of the entire platform~\cite{twitter_fallacy}.
We used the follower-following network to discover and collect users recursively using the snowball methodology, essentially traversing the network graph in a breadth-first manner, similar to Kwak et al.~\cite{twitter2010} and Zannettou et al.~\cite{zannettou2018gab}. We seeded the search with Koo's official language accounts (Table~\ref{tab:lang_accounts}), which are the first accounts shown to a new user, and the list of popular accounts that Koo recommends users to follow.\footnote{\url{https://www.kooapp.com/people}} These were chosen because of their large number of followers. We then collected the (previously undiscovered) followers and followees of these accounts and repeated the process for their follower-followee network. This approach is bound to leave out singletons and isolated communities; however, we hypothesize that they would not constitute a large share of the active users on the platform. We preferred this over enumerating and querying the API for all possible user IDs because of the significantly larger number of API requests the latter entails, which may have overloaded Koo's servers. We then used the user IDs to collect the users' profiles and the content (koos, rekoos, comments, likes, and mentions) generated by them, by querying the appropriate endpoints. \begin{table}[h!] \centering \begin{tabular}{c} \includegraphics[width=0.4\textwidth]{images/lang_table.jpg} \end{tabular} \caption{Koo's official language accounts. Koo's Hindi account has the highest number of followers. Koo handles are written in the specific languages.} \label{tab:lang_accounts} \end{table} \subsubsection{Twitter Data Collection} To create the Koo-Twitter user ID dataset, we considered two sets of users: those with verified Koo accounts and those who had listed their Twitter handles on their Koo profile. For the verified users, we manually curated corresponding Twitter handles.
We queried Twitter's Users Search API\footnote{\url{https://api.twitter.com/1.1/users/search.json}} for the users' names and found probable matches, manually annotating each one. The 1,030 verified Koo handles were distributed equally among 10 annotators, who were asked to verify the corresponding Twitter handles of the users. According to a set of guidelines prepared by the authors, the annotators looked for similarity in profile pictures, profile information, and the content the users posted on the two platforms. All the annotations were verified by two annotators, and ambiguous cases were dropped from the dataset. Out of the 1,030 verified Koo users in our dataset, we found matching Twitter profiles for 872 of them, 499 of which are verified on Twitter as well. \par For the second case, we used Twitter handles provided by the users themselves. We removed duplicate handles and queried Twitter's Users Show API\footnote{\url{https://api.twitter.com/1.1/users/show.json}} to eliminate invalid usernames. In all, we make public 38,711 Koo and Twitter user IDs that correspond to the same entity. \subsection{Adherence to FAIR Dataset Principles} \label{sec:fair_principles} The gathered data consists of publicly available information about a social network; gathering and examining it provides significant insights into the platform's characteristics. Our dataset also conforms to the FAIR principles. In particular, the dataset is ``findable'', as it is shared publicly.\footnote{\url{https://precog.iiit.ac.in/resources.html}} This dataset is also ``accessible'', given that the format used (CSV) is popular for data transfer and storage. This file format also makes the data ``interoperable'', given that most programming languages and software tools have libraries to process CSV files. Finally, the dataset is ``reusable'', as the included README file explains the data files in detail.
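For readers who wish to reproduce the collection step, the breadth-first snowball discovery described in our methodology can be sketched as follows. This is an illustrative sketch only: the `get_followers`/`get_followees` accessors are hypothetical stand-ins for the relevant Koo API endpoints, and the toy network below replaces real API responses.

```python
from collections import deque

def snowball_crawl(seed_ids, get_followers, get_followees):
    """Breadth-first discovery of user IDs over the follower-following
    network, starting from a set of seed accounts (the snowball
    methodology described above)."""
    seen = set(seed_ids)
    queue = deque(seed_ids)
    while queue:
        user = queue.popleft()
        # Both edge directions are explored, as in our crawl.
        for neighbour in get_followers(user) + get_followees(user):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Toy network standing in for API responses; any singleton with no
# path from the seeds would never be reached, matching the stated
# limitation of this approach.
followers = {"koo_hi": ["u1", "u2"], "u1": [], "u2": ["u3"], "u3": []}
followees = {"koo_hi": [], "u1": ["koo_hi"], "u2": [], "u3": ["u2"]}
users = snowball_crawl(["koo_hi"],
                       lambda u: followers.get(u, []),
                       lambda u: followees.get(u, []))
```

In the actual crawl, the accessors page through the API, and the discovered IDs are then fed to the profile and content endpoints.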
The data was collected through public API endpoints of Koo, adhering to its privacy policy.\footnote{\url{https://www.kooapp.com/privacy}. Accessed 12 March 2021.} The data we collected was stored in a central server with restricted access and firewall protection. All experiments shown in this paper were performed on this dataset. \section{RQ1 : User Characteristics and Demographics} \label{sec:rq1} We analyze the demographics of users on Koo using their profile information. It should be noted that such user-entered information might not always be accurate and should be dealt with cautiously. \begin{figure*}[ht] \centering \includegraphics[width = 0.95\linewidth]{images/user_creation_timeline.jpg} \caption{Weekly user creation timeline with language distribution, for the 10 most popular languages. We observe peaks in users joining around August 2020, when Koo won the \textit{Aatmanirbhar} Bharat App Innovation Challenge, and 10 February 2021, when MeitY tweeted about Koo. Kannada was popular during the initial stages of Koo, after which Hindi took over from August 2020. Many English users started joining from February 2021. The inset graph represents the lower frequency language users (languages except for Hindi and English). } \label{fig:useron} \end{figure*} \subsection{User Onboarding} The platform is reported to host 4.7 million users at the time of writing. Our dataset has approximately 4 million users, of which 1.9 million joined Koo in the first two months of 2021 alone. Figure~\ref{fig:useron} shows the user creation timeline across various languages. We find that Koo was predominantly used by Kannada users in its initial stages, presumably because it was launched in Bangalore, where Kannada is the vernacular language. A surge in the influx of users can be seen in August 2020, around the time of the \textit{Aatmanirbhar} Bharat App Innovation Challenge Award.
The user base expanded to other Indian languages, with Hindi users joining the platform in large numbers. February 2021 saw a huge spike in users, with almost 200,000 users signing up on the days around 10 February 2021, just after the Ministry of Electronics and Information Technology, Government of India tweeted about the Koo app. As the debate of \#KooVsTwitter trended on Twitter, many English users joined the Koo platform. The inset graph of Figure~\ref{fig:useron} shows the distribution of languages other than Hindi and English, highlighting the prominence of Kannada, Telugu, and Tamil amongst Indian languages. \subsection{Gender Distribution} Of the 18.1\% of the users who specified their gender on their profile, 92.1\% identify as male (699,083), with only 7.5\% of users identifying as female (58,996) and 0.36\% as others (3,236). However, female users are more active in terms of the average likes (103.6) and average rekoos (21.7) they produce, as compared to the other genders. Figure~\ref{fig:gender_ff} also shows that female users have more followers (632.9) on average, as compared to male users with an average of 117.0 followers and users identifying with the other category with an average of 283.45 followers. Male users, on average, produce fewer koos (5.9) and follow fewer people (84.8) than the other two categories, as visible in Figures~\ref{fig:gender_act} and \ref{fig:gender_ff}. Figure~\ref{fig:gender_age} shows that the median age is similar for all gender categories at approximately 28 years. More male users identify themselves as single (12.0\%), more female users as married (7.6\%), and more users of the other category as divorced (3.0\%), as shown in Figure~\ref{fig:gender_marital}. Table~\ref{tab:intergender} shows the follower-following patterns between the genders. We observe that, for all genders, male users contribute the major proportion of followers and followings. \begin{table}[h!] 
\centering \begin{tabularx}{0.5\textwidth} {X P{0.045\textwidth}P{0.045\textwidth}P{0.045\textwidth}P{0.001mm}P{0.045\textwidth}P{0.045\textwidth}P{0.045\textwidth}} \toprule \multicolumn{1}{c}{\multirow{2}{*}{Gender}} & \multicolumn{3}{c}{Followers} & & \multicolumn{3}{c}{Following} \\ \cline{2-4}\cline{6-8} \multicolumn{1}{c}{} & Male & Female & Others & & Male & Female & Others \\ \midrule Male & 87.90\% & 11.14\% & 0.95\% && 75.61\% & 23.56\% & 0.81\% \\ Female & 92.12\% & 7.32\% & 0.54\% && 83.19\% & 16.20\% & 0.59\% \\ Others & 91.59\% & 7.84\% & 0.56\% && 84.85\% & 14.61\% & 0.52\% \\ \bottomrule \end{tabularx} \caption{Gender-wise follower-following distribution. Male users contribute the majority of followers and followings across all the genders.} \label{tab:intergender} \end{table} \subsection{Language and Location} Koo allows users to choose their location from a list of Indian cities. As shown in Figure~\ref{fig:locations}, in the 75,091 user profiles with location information, Bengaluru appears most frequently with 141,469 users, presumably because of the platform being headquartered in Bengaluru and being initially available in Kannada. Bengaluru also appears as an example help text over the location field on both the Android and iOS mobile applications. \par Despite Kannada being the first language on the platform, it is not the most popular, as it is outnumbered by Hindi, which was the language for 44.2\% and 51.2\% of the user base and total posts, respectively. Hindi was followed by English, which constituted 23.8\% of the users and 25.9\% of the content (see Table~\ref{table:language}). The popularity of Hindi over English demonstrates a degree of success of the platform in promoting discourse in Indian languages. The distribution of users across languages closely mirrors the number of followers of Koo's official language accounts, apart from English (see Table~\ref{tab:lang_accounts}). 
This may be because the language account corresponding to the user's language is the first account whose post appears in a user's feed. \begin{figure*}[ht] \centering \begin{tabular}{cccc} \subfloat[Gender vs Content \label{fig:gender_act}]{\includegraphics[width = 0.22\linewidth]{images/Gender_koos_rekoos_likes.png}} & \subfloat[Gender vs Follower \label{fig:gender_ff}]{\includegraphics[width = 0.22\linewidth]{images/Gender_Foll.png}} & \subfloat[Gender vs Age \label{fig:gender_age}]{\includegraphics[width = 0.22\linewidth]{images/Gender_age.png}} & \subfloat[Gender vs Marital status \label{fig:gender_marital}]{\includegraphics[width = 0.22\linewidth]{images/Gender_married.png}} \end{tabular} \caption{Distribution of the user meta-data across genders. 18.1\% of total users specified their gender, $N = 761,315$, of which 92.1\% are male, 7.55\% are female and 0.362\% belong to the others category. (a) Female users are more active in terms of likes and rekoos as compared to the other two categories. (b) Female users have much higher average followers as compared to the other two categories. (c) The box plot corresponds to the 25th, 50th and 75th percentiles. The iOS App Store policy mentions a minimum age of 12; hence, we discard all ages below 12 (1.2\%). A median age of 28 is observed across all users. (d) 12\% of male users are single, which is much higher than the other two categories, while a higher proportion of other-category users are divorced.} \end{figure*} \begin{figure}[ht!] \centering \includegraphics[width=0.95\linewidth]{images/location_user.png} \caption{Top 10 current locations mentioned by users in the metadata, N=75,091. The y-axis is in logarithmic scale. Bengaluru is the most frequent location. Koo is headquartered in Bengaluru and was originally available in Kannada, which is Bengaluru's local language.} \label{fig:locations} \end{figure} \subsection{Bio and Professional details} Koo allows the users to add and edit their profile bio. 
Figure~\ref{fig:userbio} shows the wordcloud of the user profile bios on Koo. We see the occurrence of words pertaining to state and national identities like ``Marathi'', ``Tamil'', ``Bengali'', ``Indian'', ``Bharat'', etc. Users are also allowed to add and edit professional details. Figure~\ref{fig:professional} shows the top occurring work titles and educational qualifications of the users, with ``Student'' being the most common work title (61,778 users) and ``MBA'' being the most common educational qualification (4,988 users). \begin{table*}[h!] \centering \begin{tabular}{lrrrr} \toprule \textbf{Language} & \textbf{Number of Users} & \textbf{Percentage of Users} & \textbf{Number of Posts} & \textbf{Percentage of Posts} \\ \midrule Hindi & 1,795,411 & 44.2030 & 3,755,829 & 51.1715 \\ English & 968,271 & 23.8388 & 1,907,993 & 25.9955 \\ Kannada & 711,049 & 17.5060 & 818,679 & 11.1541 \\ Telugu & 259,171 & 6.3807 & 359,874 & 4.9031 \\ Marathi & 183,073 & 4.5072 & 242,803 & 3.3080 \\ Gujarati & 64,829 & 1.5960 & 126,853 & 1.7283 \\ Tamil & 48,285 & 1.1887 & 77,981 & 1.0624 \\ Bangla & 31,211 & 0.7684 & 49,172 & 0.6699 \\ Malayalam & 318 & 0.0078 & 257 & 0.0035 \\ Assamese & 46 & 0.0011 & 113 & 0.0015 \\ Punjabi & 43 & 0.0010 & 43 & 0.0005 \\ Oriya & 27 & 0.0006 & 87 & 0.0011 \\ \bottomrule \end{tabular} \caption{Language distribution on the platform. Hindi is by far the most popular language for both user profiles and posts, accounting for 44.20\% of user profile languages and 51.17\% of posts.} \label{table:language} \end{table*} \begin{figure}[ht!] \centering \setlength{\fboxsep}{0pt} \setlength{\fboxrule}{1pt} \fbox{\includegraphics[width=.7\linewidth]{images/User-Bio-Cloud.png}} \caption{Wordcloud for user bios. Presence of words pertaining to state and national identities such as ``Tamil'', ``Marathi'', ``Gujarati'' and ``Indian'' can be seen.} \label{fig:userbio} \end{figure} \begin{figure*}[ht!] 
\centering \begin{tabular}{cc} \subfloat[Work Title.]{\includegraphics[width = 0.4\linewidth]{images/title_user.png}} & \subfloat[Educational Qualifications.]{\includegraphics[width = 0.4\linewidth]{images/user_education.png}} \end{tabular} \caption{Top 10 occurring work titles and educational qualifications. (a) A total of N = 584,352 users mentioned their title, with ``student'' being the highest mentioned title with 61,778 occurrences. (b) A total of N = 118,474 users mentioned educational qualifications, with ``MBA'' being the most common qualification with 4,988 mentions. } \label{fig:professional} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width = 0.9\linewidth]{images/post_creation_timeline.jpg} \caption{Weekly post creation timelines with language distribution, for the top 10 used languages. A peak around 10 February 2021 is observed, when MeitY promoted Koo on Twitter. Posts were mostly in Kannada during the initial days of Koo, after which Hindi posts took over from August 2020. Posts in English spiked around February 2021. The inset graph represents the lower frequency language posts (languages except for English and Hindi). } \label{fig:posts} \end{figure*} \section{RQ2 : Content Analysis} \label{sec:rq2} \subsection{User-Generated Content} We have a total of 7,339,684 koos in our dataset. Figure~\ref{fig:posts} shows the post generation timeline of Koo with the language distribution. This displays behaviour very similar to the user creation timeline. A major spike in posting is observed around 10 February 2021, especially in English koos, when the Government of India's Ministry of Electronics and Information Technology posted a tweet promoting Koo, and many prominent government figures started joining the platform. The Kannada and Hindi language posts follow the same pattern as the user joining distribution. 
Similar to other social networks \cite{twitter_rhythm}, the posting activity on Koo gradually increases throughout the day, attaining its peak in the evening at around 2100 hours IST. \subsection{Media Distribution} Koo allows for multiple types of media, such as images, videos, links, gifs and text, to be shared. Figure~\ref{fig:media} shows the distribution of these media types across various languages on the platform. As Koo is a microblogging platform, more than 50\% of the content appears as text, followed by images, gifs, links, video and audio. Most languages follow the same general trend, with the stark exception of Kannada, where 40\% of the content is shared through gifs. \begin{figure}[!t] \centering \includegraphics[width=.85\linewidth]{images/media.png} \caption{Media share in posts per language.} \label{fig:media} \end{figure} \subsection{Hashtag Analysis} Hashtags in social media have become a way for users to build communities around topics and promote opinions. We extract the top occurring hashtags from the user posts and plot them as a wordcloud in Figure~\ref{fig:hashtagwordcloud}. We see hashtags like ``kooforindia'', ``bantwitter'', and ``koovstwitter'', which project a sentiment of competition between Twitter and Koo, and promote the Koo platform. Hashtags like ``indiawithmodi'', ``atmanirbharbharat'', ``modi'', ``modistrikesback'', and ``bjp'', which are associated with the Bharatiya Janata Party (BJP), are also present. Figure~\ref{fig:hashtagclustering} shows a network of the 100 most frequently occurring hashtags, with edges indicating hashtags that co-occur in the same post. Although the graph has a low modularity score (0.329), hashtags related to one topic fall in the same category when the graph is clustered based on modularity~\cite{Blondel_2008}. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{images/hashtags_all_wc.png} \caption{Wordcloud for top 25 hashtags in posts. 
Hashtags like \#koo, \#Koovstwitter, and \#bantwitter show the competitive sentiment between the two platforms.} \label{fig:hashtagwordcloud} \end{figure} \subsection{N-gram Analysis} To gain insight into the popular conversations taking place on the platform, we plot the top occurring uni-grams and bi-grams in the content of the posts (Figure~\ref{ngrams}). We observe an overwhelming number of Hindi n-grams. Through both uni-grams and bi-grams, we observe the existence of Hindu-centric words like {\dn jy\399wFrAm} (``jaishreeram'') and {\dn rAm{\rs -\re}rAm} (``ram-ram'') that allude to ``Lord Ram'', a major deity in Hinduism. There are also mentions of many Indian religious leaders like {\dn rAmpAl{\rs -\re}jF} (``rampal-ji''), {\dn s\2t{\rs -\re}\399wF} (``sant-shri''), and {\dn jF{\rs -\re}mhArAj} (``ji-maharaj'')\footnote{Words in brackets are the transliterated forms of the original Hindi text.}. This indicates that some of the discussions on Koo are around religion and that people may be using slogans like {\dn jy\399wFrAm} (``jaishreeram'') as a symbol of their religious faith. \begin{figure}[ht!] \centering \includegraphics[width=\linewidth]{images/hashtags-graph.png} \caption{A network of the 100 most frequently occurring hashtags, where an edge represents co-occurrence of hashtags in a post. Colors indicate clustering based on modularity. The entire graph has a modularity score of 0.329.} \label{fig:hashtagclustering} \end{figure} \subsection{Mentions and Likes} The number of mentions and likes of a particular user are useful indicators to study a user's engagement and popularity on a platform~\cite{twitter_fallacy}. Table~\ref{tab:mentions} shows the top 10 most mentioned users on the platform, of which ``republic'', an Indian news channel, comes at the top with 16,041 mentions. 
Notably, Republic also has an editorial partnership with Koo.\footnote{\url{https://bit.ly/35nZFPJ} Accessed 12 March 2021.} Ravi Shankar Prasad\footnote{Minister of Law and Justice, Electronics and Information Technology and Communications, Government of India.} is the next most mentioned user with 12,991 mentions. Many prominent ministers and political figures like Piyush Goyal\footnote{Minister of Railways, Commerce \& Industry, Consumer Affairs and Food \& Public Distribution, Government of India.} and Sambit Patra\footnote{Official spokesperson of the Bharatiya Janata Party in India.} are also present in the list of top mentioned users. Table~\ref{tab:likes} shows the top 10 users with the most liked posts on the platform. Ravi Shankar Prasad is the most liked user with 435,752 likes. Both the English and Hindi pages of the Republic news channel are present in the top 10 list. \section{RQ3 : Koo's user network and communities} \label{sec:rq3} We analyzed Koo's following network, looking for characteristics of the community forming as the young platform grows. The network has a noticeably high average local clustering coefficient of 0.561, which represents how well connected the neighbourhood of a vertex is. This indicates a strong modular structure in the network, presumably due to Koo only catering to audiences from a single country. In contrast, Twitter, which caters to worldwide audiences, only had an average local clustering coefficient of 0.072 during its early years in 2009~\cite{twitter_cc}, indicating much weaker communities. \par We see that around 90\% of the users have fewer than a hundred followers. However, a few users with an extremely high number of followers (of the order of $10^6$) skew the distribution. The distribution of the number of followees follows a similar pattern. 
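The average local clustering coefficient referenced above can be computed directly from an adjacency structure. The following is a minimal illustrative sketch on a toy undirected graph, not on our dataset:

```python
def avg_local_clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as {node: set of neighbours}. For each node, take the
    fraction of neighbour pairs that are themselves connected
    (0 for nodes of degree < 2), then average over all nodes."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # contributes 0 to the sum
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

# Toy graph: a triangle (a, b, c) with a pendant node d attached to b.
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}
coefficient = avg_local_clustering(adj)  # 7/12, roughly 0.583
```

A well-connected neighbourhood pushes a node's fraction towards 1; Koo's 0.561 against early Twitter's 0.072 reflects exactly this quantity.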
Further, users with a high number of followers tend to have only a small number of followees and vice-versa, indicating the presence of influential and popular accounts that are followed by a majority of other users. This may, in part, be due to new users being shown a list of popular accounts to follow at the top of their feed. \par Figure~\ref{fig:ff-verified} shows the following network between verified users on Koo. We see distinct communities of users based on language, possibly because users interact more with others whose language they can understand. English-speaking users are more centrally placed in the network, with connections to both Hindi and Kannada speakers. Verified accounts in the two Indian languages, on the contrary, do not have many follower-followee relationships. \begin{table}[ht!] \parbox{.48\linewidth}{ \centering \begin{tabular}{lr} \toprule \textbf{Handle} & \textbf{Mentions} \\ \midrule republic & 16,041 \\ ravishankarprasad & 12,991 \\ kisanektamorcha & 11,010 \\ {\dn ErpENlk}\_{\dn BArt} & 9,366 \\ piyushgoyal & 9,127 \\ mayank & 7,588 \\ leledirect.com & 7,045 \\ aprameya & 6,390 \\ sambitpatra & 5,742 \\ khushbookapoor & 5,693 \\ \bottomrule \end{tabular} \caption{Users with the highest number of mentions. News channel Republic has the highest number of mentions (16,041), followed by Ravi Shankar Prasad (12,991).} \label{tab:mentions} } \hfill \parbox{.48\linewidth}{ \centering \begin{tabular}{lr} \toprule \textbf{Handle} & \textbf{Likes} \\ \midrule ravishankarprasad & 435,752 \\ piyushgoyal & 395,674 \\ republic & 357,745 \\ {\dn ErpENlk}\_{\dn BArt} & 296,039 \\ meghupdates & 277,495 \\ rinki & 257,266 \\ sawatimehera & 239,962 \\ chouhanshivraj & 185,354 \\ narendramodiforyou & 181,271 \\ anandranganathan & 168,518 \\ \bottomrule \end{tabular} \caption{Users with the highest number of likes. 
Ravi Shankar Prasad is the most liked user on Koo with 435,752 likes, while Republic takes third place.} \label{tab:likes} } \end{table} \section{Discussion} Koo's tagline, ``The Voices of India'', captures the essence of the platform, i.e., support for Indian languages, a large Indian user base, and homegrown development. The recent surge in popularity of the platform, the presence of multilingual content, and linguistic communities make the study of Koo interesting. We release the first-ever dataset of Koo and characterize the platform based on the users, content posted, and the network. We show the formation of tight communities based on language, as well as the massive popularity of Indian politicians, news media agencies and government organizations on the network. We note that female users are more active, despite being present in smaller numbers. The higher presence of Hindi than English on Koo indicates a degree of success in promoting discourse in Indian languages. Kannada, Tamil, Telugu and Marathi also constitute a considerable portion of the content and activity on the platform. We observe a Koo vs Twitter rhetoric, with hashtags such as ``\#koovstwitter'' and ``\#bantwitter'' trending. Koo is still in its nascent stages and is being developed to include more languages: Gujarati, Malayalam, Oriya, Punjabi, and Assamese. As Koo grows in size, it should build convenient mechanisms with which researchers can collect and work on its data, which can prove useful for research in social computing and Indian languages. \begin{figure}[ht!] \centering \includegraphics[width=0.9\linewidth]{images/koo-verified-ff-final-without-cropped.png} \caption{ Following graph of verified users on Koo. Edges are directed from the follower to the followee. Node size is proportional to in-degree, while colors indicate language. Names are only shown for a few prominent accounts. Singletons are not shown.} \label{fig:ff-verified} \end{figure} \begin{figure}[ht!] 
\centering \begin{tabular}{cc} \subfloat[Unigram Wordcloud.] { \setlength{\fboxsep}{0pt} \setlength{\fboxrule}{1pt} \fbox{\includegraphics[width = 0.4\linewidth]{images/alltextuni.png}} } & \subfloat[Bigram Wordcloud.] { \setlength{\fboxsep}{0pt} \setlength{\fboxrule}{1pt} \fbox{\includegraphics[width = 0.4\linewidth]{images/alltext-bw.png}} } \\ \end{tabular} \caption{Word Clouds for user posts. Both unigrams and bigrams show substantial Hindu religion-centric content.} \label{ngrams} \end{figure} \section{Limitations and Future Work} This paper performs an exploratory analysis of the characteristics of Koo, presenting a novel dataset and uncovering valuable insights. Although our dataset contains a substantial portion of Koo's user base, it has the limitation that it leaves out singletons and isolated communities. For future work, this multilingual data can act as a corpus for research in Indian languages. A study of posting trends by the same users on Koo and Twitter could reveal whether users use the two platforms for different purposes, or whether their activity on one mirrors that on the other. Our dataset of corresponding Koo and Twitter user IDs makes this data collection convenient. An analysis of who gains popularity on Koo, as well as of the presence of bot accounts, could reveal more about the inter-user interactions on the platform. \section*{Acknowledgements} The authors would like to thank the annotators for their help and contributions in creating the dataset. The authors would also like to thank Nidhi Goyal, Samiya Caur and Shivangi Singhal for their helpful reviews and comments on the initial draft versions. \bibliographystyle{IEEEbib}
St. Albans, Vermont (near the Canadian border) holds a three-day Maple Festival at the end of April, partly to celebrate the end of the maple syrup season in Vermont and partly to bring some extra business to the downtown merchants. The winter is long in St. Albans and this event marks the end of it. The community is close to Quebec and the influence of La Belle Province is everywhere, even at Walmart, where French is the second language on every sign. It's not quite Bill 101, which would have English as a subtitle and French as the title.
Mandibulata (the mandibulates) is a group (clade) of arthropods that comprises the hexapods, crustaceans, and myriapods. The mandibulates are the largest and most diverse group of arthropods. As a natural arthropod clade, Mandibulata is well supported by molecular analyses. Competing hypotheses, such as a natural Schizoramia (crustaceans + chelicerates) or Myriochelata (syn. Paradoxopoda; myriapods + chelicerates), were based only on the fossil record and were generally shown to be artefacts produced by treating convergent characters as informative. Mandibulata is the sister group of Arachnomorpha (chelicerates, pycnogonids, and several groups of extinct arthropods, notably the trilobites and Megacheira), although according to other studies Arachnomorpha as a whole may not be natural: some groups (Megacheira) may branch off more basally than the clade of chelicerates and trilobites (together called Artiopoda), while others (Marellomorpha) may instead branch off later, from the lineage leading to the mandibulates.
\section{Introduction\label{sec1}} The influence of disorder is of great interest in physics, since pure systems are rare in nature. It has been known for more than thirty years that the universality class associated with a continuous phase transition can be changed by the presence of quenched impurities~\cite{Khmelnitskii74}. According to the Harris criterion~\cite{Harris74}, uncorrelated randomness coupled to the energy density can only affect the critical behaviour of a system if the critical exponent $\alpha$ describing the divergence of the specific heat in the pure system is positive. This has been established in the case of the $q$-state Potts model in dimension $D=2$ for example. For $2<q\le 4$, the pure system undergoes a continuous transition with a positive critical exponent $\alpha$. As predicted by the Harris criterion, new universality classes have been observed both perturbatively and numerically~\cite{Potts2D} (for a review, see Ref.~\cite{BercheChatelain03}). The special case $q=2$, the Ising model, is particularly interesting since in the pure system, the specific heat displays a logarithmic divergence ($\alpha=0$) making the Harris criterion inconclusive~\cite{DotsDots83}. Based on perturbative and numerical studies, it is now generally believed that the critical behaviour remains unchanged apart from logarithmic corrections when introducing randomness in the system~\cite{Ising2D}. In three dimensions (3D), the disordered Ising model has been the subject of extensive studies (see, e.g., Ref.~\cite{FolkEtAl03} for an exhaustive list of references). Less attention has been paid to first-order phase transitions. It is known that randomness coupled to the energy density softens any temperature-driven first-order phase transition~\cite{ImryWortis79}. 
Moreover, it has been rigorously proved~\cite{AizenmanWehr89HuiBerker89} that in dimension $D\le 2$ an infinitesimal amount of disorder is sufficient to {\em turn any first-order transition into a continuous one}. The first observation of such a change of the order of the transition was made in the 2D 8-state Potts model~\cite{ChenFerrenbergLandau92} where a new universality class was identified~\cite{cardy_jacobsen,Pottsq8}. For higher dimensions, the first-order nature of the transition may persist up to a finite amount of disorder. A tricritical point at finite disorder between two regimes of respectively first-order and continuous transitions is expected \cite{cardy_jacobsen,cardy}. The existence of such a tricritical point for the site-diluted 3D 3-state Potts model could only be suspected by simulations because the pure model already undergoes a very weak first-order phase transition~\cite{BallesterosEtAl00}. On the other hand, the first-order phase transition of the pure 5-state Potts model is very strong and would hence make it rather difficult to study the role of disorder. As a consequence, we have turned our attention to the 3D 4-state Potts model and have shown that there exists a second-order transition regime for this model~\cite{ChatelainEtAl01}. Our choice of bond dilution is motivated by the fact that for this model only high-temperature expansion results are available up to now; to our knowledge, such expansions cannot be carried out for site dilution, or are at least more difficult~\cite{HellmundJanke02}. In Sect.~\ref{sec2} we define the model and the observables, and remind the reader of how these quantities behave at first- and second-order phase transitions. Section~\ref{sec3} is devoted to the numerical procedure, first the description of and then the comparison between the algorithms which are used at low and high impurity concentrations, followed by a first discussion of the qualitative properties of the disorder average. 
A short characterisation of the nature of the phase transition -- at a qualitative level -- is reported in Sect.~\ref{sec4}. The motivation of this section is to first convince ourselves that the transition does indeed undergo a qualitative change when the strength of disorder is varied. Then, we describe how the phase diagram is obtained and concentrate on the first-order regime in Sect.~\ref{sec5}. In Sect.~\ref{sec6}, we discuss the critical behaviour in the second-order induced regime. Finally, the main features of the paper are summarised in Sect.~\ref{sec7}. \section{Model and observables\label{sec2}} We study the disordered 4-state Potts model on a cubic lattice $\Lambda$. The model is defined by the Hamiltonian \begin{equation} {\cal H}[\sigma,J]=-\sum_{(i,j)} J_{ij}\delta_{\sigma_i,\sigma_j}, \label{eq1} \end{equation} where the spins $\sigma_i$, located on the vertices $i$ of the lattice $\Lambda$, are allowed to take one of the $q=4$ values $\sigma_i=1,\dots,q$. The boundaries are chosen periodic in the three space directions. The notation ${\cal H}[\sigma,J]$ specifies that the Hamiltonian is defined for any configuration of spins and of couplings. The sum runs over the couples of nearest-neighbouring sites and the exchange couplings $J_{ij}$ are independent quenched, random variables, distributed according to the normalised binary distribution ($J > 0$) \begin{equation} P[J_{ij}]=\prod_{(i,j)}[p\delta(J_{ij}-J)+(1-p)\delta(J_{ij})]. \label{eq2} \end{equation} The pure system (at $p=1$) undergoes a strong first-order phase transition with a correlation length $\xi\sim 3$ lattice units at the inverse transition temperature $\beta_cJ=0.628\,63(2)$~\cite{JankeKappler96} (we keep the conventional notation $\beta=(k_BT)^{-1}$, since in the context there is no risk of confusion with the critical exponent of the magnetisation). As far as we know, no more information has been made available on this model. 
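As an illustration of the binary distribution $P[J_{ij}]$ above, a quenched disorder realization can be drawn by activating each of the $3L^3$ nearest-neighbour bonds of the periodic cubic lattice independently with probability $p$. The following Python sketch is purely illustrative (the function name and array layout are our own, not part of the original study):

```python
import numpy as np

def draw_couplings(L, p, J=1.0, rng=None):
    """Draw one quenched realization [J] of the binary bond distribution:
    each of the 3*L**3 nearest-neighbour couplings on the periodic cubic
    lattice is J with probability p and 0 with probability 1 - p."""
    rng = np.random.default_rng(rng)
    # bonds[d, x, y, z] couples site (x, y, z) to its neighbour in direction d
    return J * (rng.random((3, L, L, L)) < p)
```

Each call produces one sample $[J]$, over which the thermal average is taken before averaging over many such realizations.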
We do not expect any phase transition for bond concentration $p$ smaller than the percolation threshold $p_c = 0.248\ \!812\ \!6(5)$~\cite{LorenzZiff97} since the absence of a percolating cluster makes the appearance of long-range order impossible. In the following, we are thus dealing with quenched dilution. The averaging prescription is such that the physical quantities of interest in the diluted system (say an observable $Q$) are obtained after {\em averaging first a given sample $[J]$ over the Boltzmann distribution, $\langle Q_{[J]}\rangle_\beta$, and then over the random distribution of the couplings} denoted by $\overline{\langle Q_{[J]}\rangle_\beta}$, since there is no thermal relaxation of the degrees of freedom associated to quenched disorder: \begin{itemize} \item[$\triangleright$] The thermodynamic average of an observable $Q$ at inverse temperature $\beta$ and for a given disorder realization $[J]$ is denoted \begin{eqnarray} \langle Q_{[J]}\rangle_\beta&=&(Z_{[J]}(\beta))^{-1}\int {\cal D} [\sigma]Q_{[\sigma,J]}{\rm e}^{-\beta{\cal H}[\sigma,J]}\nonumber\\ &\approx&\frac{1}{N_{\rm MCS}}\sum_{{\rm MCS}}Q_{[J]}(\beta), \label{eq2bis} \end{eqnarray} where $N_{\rm MCS}$ is the number of Monte Carlo iterations (Monte Carlo steps) during the ``production'' part after the system has been thermalised. Here we use the following notation: $Q_{[\sigma,J]}$ is the value of $Q$ for a given spin configuration $[\sigma]$ and a given disorder realization $[J]$, $Q_{[J]}(\beta)$ is a value obtained by Monte Carlo simulation at inverse temperature $\beta$. From time to time, we will have to specify a particular disorder realization, say $\# n$, and the value of the observable $Q$ for this very sample will be denoted as $Q_{\#n}(\beta)$. 
\item[$\triangleright$] The average over randomness is then performed, \begin{eqnarray} \overline{\langle Q_{[J]}\rangle_\beta}&=&\int {\cal D}[J] \langle Q_{[J]}\rangle_\beta P[J] \cr &\approx&\frac{1}{N\{J\}}\sum_{[J]}\langle Q_{[J]}\rangle_\beta \cr &=&\int {d}\ \!Q\ \!Q P_\beta(Q), \label{eq2ter} \end{eqnarray} where $N\{J\}$ is the number of independent samples. The probability $P_\beta(Q)$ is determined empirically from the discrete set of values of $\langle Q_{[J]}\rangle_\beta$. This disorder average is simply denoted as $\overline Q(\beta)$ for short, i.e. $\overline Q(\beta)\equiv\overline{\langle Q_{[J]}\rangle_\beta}$. \end{itemize} For a specific disorder realization $[J]$, the magnetisation per spin $m_{[\sigma,J]} = L^{-D} M_{[\sigma,J]}$ of the spin configuration $[\sigma]$ is defined from the fraction of spins, $\rho_{[\sigma,J]}$, that are in the majority orientation, \begin{eqnarray} \rho_{[\sigma,J]}&=&\max_{\sigma_0}\left[L^{-D}\sum_{i\in\Lambda} \delta_{\sigma_i,\sigma_0}\right],\cr m_{[\sigma,J]}&=&\frac{q\rho_{[\sigma,J]}-1}{q-1}. \label{eq3} \end{eqnarray} The order parameter of the diluted system is thus denoted $\overline m(\beta)=\overline{\langle m_{[J]}\rangle_\beta}$. Thermal and disorder moments $\overline{\langle m_{[J]}^n\rangle_\beta}$ and $\overline{\langle m_{[J]}\rangle_\beta^n}$, respectively, are also quantities of interest. The magnetic susceptibility $\chi_{[J]}(\beta)$ and the specific heat $C_{[J]}(\beta)$ of a sample are defined using the fluctuation-dissipation theorem, i.e. \begin{eqnarray} \chi_{[J]}(\beta)&=&\beta L^D\left[{\langle m_{[J]}^2\rangle_\beta} -{\langle m_{[J]}\rangle_\beta^2}\right], \label{eq4Chi}\\[1mm] C_{[J]}(\beta)/k_B&=& \beta^2 L^D\left[{\langle e_{[J]}^2\rangle_\beta} -{\langle e_{[J]}\rangle^2_\beta}\right], \label{eq4C} \end{eqnarray} where \begin{equation} e_{[\sigma,J]} = L^{-D} E_{[\sigma,J]} =L^{-D}\sum_{(i,j)} J_{ij} \delta_{\sigma_{i},\sigma_{j}}. 
\label{eqE} \end{equation} is the negative energy density since $E_{[\sigma,J]}=-{\cal H}[\sigma,J]$. Binder cumulants~\cite{Binder81} take their usual definition, for example \begin{equation} U_{m_{[J]}}(\beta)= 1-{{{\langle m_{[J]}^4\rangle_\beta}\over 3{\langle m_{[J]}^2\rangle_\beta}^2}}. \label{eq5} \end{equation} Derivatives with respect to the inverse temperature are computed through \begin{equation} L^{-D}{d\over d\beta}\ln \langle m_{[J]}^n\rangle_\beta = {{\langle m_{[J]}^ne_{[J]}\rangle_\beta \over \langle m_{[J]}^n\rangle_\beta} -\langle e_{[J]}\rangle_\beta}. \label{eq6} \end{equation} All these quantities are then averaged over disorder, yielding $\overline\chi(\beta)$, $\overline C(\beta)$, $\overline U_m(\beta)$, and $\overline{\partial_\beta\ln\langle m_{[J]}^n\rangle_\beta}$. At a second-order transition, these quantities are expected to exhibit singularities described in terms of power laws from the deviation to the critical point. These power laws define the critical exponents. In the following, the properties will be investigated using finite-size scaling analyses, i.e., according to the following size dependence at the critical temperature, \begin{eqnarray} \overline m({\beta_c},L^{-1})&\sim&B_c L^{-\beta/\nu},\label{eq7bisa}\\[1mm] \overline\chi({\beta_c},L^{-1})&\sim&\Gamma_c L^{\gamma/\nu}, \label{eq7bisb}\\[1mm] \overline C({\beta_c},L^{-1})&\sim&A_c L^{\alpha/\nu},\label{eq7bisc}\\[1mm] L^{-D}\left.\overline {d\ln \langle m_{[J]}^n\rangle_\beta\over d\beta} \right|_{\beta_c}&\sim&N_{n,c} L^{1/\nu}.\label{eq7bisd} \end{eqnarray} At a first-order transition, the order parameter has a discontinuity at the transition temperature, suggesting that $\beta/\nu$ formally becomes zero. 
Heuristic (and for pure $q$-state Potts models with sufficiently large $q$ even rigorous) arguments also suggest that $\gamma/\nu$, $\alpha/\nu$, and $1/\nu$ should then coincide with the space dimension $D$~\cite{BorgsImbrie89BorgsKotecky90Cabrera90}, restoring the ordinary extensivity of the system in Eqs.~(\ref{eq7bisb}) -- (\ref{eq7bisd}). When the transition temperature is not known exactly, the problem of the value of the inverse critical temperature $\beta_c$ in the expressions above can be a source of further difficulties. Usually, one follows a flow of finite-size estimates given by the location of the maximum of a diverging quantity (for example the susceptibility, $\overline\chi_{\rm max}(L^{-1})\equiv\ \!{\rm max}_\beta[\overline \chi(\beta,L^{-1})]$). From the scaling assumption, supposed to apply at the random fixed point, \begin{equation} \overline \chi(\beta,L^{-1})=L^{\gamma/\nu}f_\chi( L^{1/\nu} t), \end{equation} with $t=|\beta_c-\beta|$, the inverse temperature $\beta_{\rm max}$ where the maximum of $\bar\chi$ occurs, \begin{equation} \overline\chi(\beta_{\rm max},L^{-1})=\overline\chi_{\rm max}(L^{-1}), \label{EqDfnBetaMax} \end{equation} scales according to \begin{equation}\beta_{\rm max}\sim \beta_c + aL^{-1/\nu}.\end{equation} Notice that the scaling function $f_\chi(x)$ takes its maximum value $f_\chi(a)=\overline\chi_{\rm max}L^{-\gamma/\nu}$ at $x=a$. 
At that very temperature where the finite-size susceptibility has its maximum, we then have similar power law expressions, \begin{eqnarray} \overline m({\beta_{\rm max}},L^{-1}) &\sim&f_m(a) L^{-\beta/\nu},\label{eq7tera}\\[1mm] \overline\chi({\beta_{\rm max}},L^{-1})&\sim&f_\chi(a) L^{\gamma/\nu}, \label{eq7terb}\\[1mm] \overline C({\beta_{\rm max}},L^{-1})&\sim&f_C(a) L^{\alpha/\nu}, \label{eq7terc}\\[1mm] L^{-D}\left.\overline {d\ln \langle m_{[J]}^n\rangle_\beta\over d\beta} \right|_{\beta_{\rm max}}&\sim&f_{m,n}(a) L^{1/\nu}.\label{eq7terd} \end{eqnarray} These equations are similar to Eqs.~(\ref{eq7bisa}) -- (\ref{eq7bisd}) where the amplitudes take the values $B_c=f_m(0)$, $\Gamma_c=f_\chi(0)$, $A_c=f_C(0)$, and $N_c=f_{m,n}(0)$. From this discussion, we are led to give a more precise definition of $\overline\chi_{\rm max}(L^{-1})$. One reasonable alternative definition of this quantity could be the disorder average of the {\em individual maxima\/} corresponding to the different samples. Each of them has its own susceptibility curve, $\chi_{[J]}(\beta,L^{-1})$ which displays a maximum $\chi_{[J],{\rm max}}(L^{-1})$ at a given value of the inverse temperature $\beta_{[J]}^{\rm max}(L^{-1})$. These values $\chi_{[J],{\rm max}}(L^{-1})$ may then be averaged, but this is in general different from the definition that we gave for the average over randomness in Eq.~(\ref{eq2ter}). Here we keep as a physical quantity the expectation value $\overline\chi(\beta,L^{-1})$ which is then plotted against $\beta$, and $\beta_{\rm max}(L^{-1})$ in Eq.~(\ref{EqDfnBetaMax}) is the temperature where the disorder averaged susceptibility displays its maximum which is thus identified with $\overline\chi_{\rm max}(L^{-1})$. In the following, this is the physical content that we understand when discussing $\overline\chi_{\rm max}(L^{-1})$. 
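In a finite-size scaling analysis of this kind, $\gamma/\nu$ follows from the slope of $\log\overline\chi_{\rm max}$ versus $\log L$, and $\beta_c$ from a linear fit of $\beta_{\rm max}(L)$ against $L^{-1/\nu}$ at a fixed trial value of $1/\nu$. A schematic Python implementation (our own illustrative helpers, not the authors' analysis code):

```python
import numpy as np

def gamma_over_nu(Ls, chi_max):
    """gamma/nu from the slope of log(chi_max) versus log(L)."""
    slope, _ = np.polyfit(np.log(Ls), np.log(chi_max), 1)
    return slope

def beta_c_estimate(Ls, beta_max, inv_nu):
    """Linear fit beta_max = beta_c + a * L**(-1/nu) at a fixed trial 1/nu."""
    a, beta_c = np.polyfit(np.asarray(Ls, float) ** (-inv_nu), beta_max, 1)
    return beta_c, a
```

In practice $1/\nu$ would itself be estimated, e.g. from the scaling of the logarithmic derivatives, and the two fits iterated to consistency.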
\section{Numerical procedures\label{sec3}} We conducted a long-term and extensive study of the bond-diluted 3D 4-state Potts model, and it is the purpose of this paper to report results for moderately large system sizes in the first-order regime, and an extended analysis based on very large-scale computations in the second-order regime. Cross-over effects between different regimes are also discussed. The simulations extended over several years. A strict organisation was thus required, and we proceeded as follows: as an output of the runs, all the data were stored in a binary format. For each sample (with a given disorder realization and lattice size) and each simulated temperature, the time series of the energy and magnetisation were stored. A code was written in order to extract from all the available files the histogram reweightings of thermodynamic quantities of interest, entering as an input the chosen dilution, lattice size, temperature, \dots. It is also possible to adjust the number of thermalisation iterations, the length of the production runs where the thermodynamic averages are performed, the number of samples for the disorder average, or to pick a specific disorder realization, and so on. In some sense, {\em the time series correspond to the simulation of the system, and we can then measure physical quantities on it}, and virtually produce as many results as we want. Of course, this is {\it not} what we intend to do in the following; we shall rather concentrate only on the most important results. \subsection{Choice of update algorithms} We studied this system by very large-scale Monte Carlo simulations. 
A preliminary study, needed in order to schedule such large-scale Monte Carlo simulations, showed that the transitions at small and high concentrations of non-vanishing bonds $p$ were (as expected) qualitatively different: \begin{itemize} \item[$\triangleright\ $] Close to the pure system, $p\simeq 1$, the susceptibility peaks develop as the size increases to become quite sharp (see Fig.~\ref{Fig2chi}), in agreement with what is expected at a first-order phase transition~\cite{MeyerOrtmannsReisz98WJ03}. \item[$\triangleright\ $] At larger dilutions (small values of $p$) on the other hand, the peaks are softened and are compatible, at least at first sight, with a second-order phase transition (see Fig.~\ref{Fig2chi}). \end{itemize} \begin{figure}[tb] \begin{center} \epsfxsize=9.5cm \mbox{\epsfbox{Chi-vs-KuptoL10p1.00et0.48.eps}} \end{center}\vskip 0cm \caption{\small Evolution of the susceptibility as the size of the system increases (up to $L=10$) in the two different regimes: pure system $p=1.0$ on the left plot and high dilution $p=0.48$ on the right plot.} \label{Fig2chi} \end{figure} As will be demonstrated below, the tricritical dilution dividing these two regimes is roughly located at $p_{\rm TCP} \approx 0.68 - 0.84$. In the regime of randomness-induced continuous transitions (or weak first-order transitions, that is at low non-zero bond concentration $p$), the Swendsen-Wang cluster algorithm~\cite{SwendsenWang87} was preferred in order to reduce the critical slowing-down. As already pointed out by Ballesteros {\sl et al.}~\cite{BallesterosEtAl98, BallesterosEtAl00}, a typical spin configuration at low bond concentration is composed of disconnected clusters for most of the disorder realisations. It is thus safer to use the Swendsen-Wang algorithm, for which the whole lattice is swept at each Monte Carlo iteration, instead of a single-cluster Wolff update procedure. 
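For reference, a compact (and unoptimised) Python sketch of one Swendsen-Wang sweep for the bond-diluted Potts model is given below. It follows the standard algorithm (freeze each satisfied, non-diluted bond with probability $1-{\rm e}^{-\beta J}$, then redraw every cluster, singletons included) and is not the production code used in this study:

```python
import numpy as np

def swendsen_wang_sweep(spins, bonds, beta, q=4, rng=None):
    """One Swendsen-Wang sweep for the q-state Potts model on an L^3
    periodic lattice. spins has shape (L, L, L); bonds[d] holds the quenched
    couplings to the neighbour in direction d (0 on diluted bonds)."""
    rng = np.random.default_rng(rng)
    n = spins.size
    idx = np.arange(n).reshape(spins.shape)
    flat = spins.ravel()
    parent = np.arange(n)                      # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    for d in range(3):
        nbr = np.roll(idx, -1, axis=d).ravel()
        # freeze a satisfied bond with probability 1 - exp(-beta * J_ij);
        # diluted bonds (J_ij = 0) are never frozen
        freeze = (flat == flat[nbr]) & \
                 (rng.random(n) < 1.0 - np.exp(-beta * bonds[d].ravel()))
        for i, j in zip(np.flatnonzero(freeze), nbr[freeze]):
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
    # every cluster (singletons included) is redrawn uniformly in {0,...,q-1}
    roots = np.array([find(i) for i in range(n)])
    return rng.integers(0, q, size=n)[roots].reshape(spins.shape)
```

Note that at low bond concentration the disconnected clusters mentioned above are each updated in a single sweep, which is precisely why this whole-lattice update is safer here than a single-cluster Wolff step.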
In the strong first-order regime (high bond concentration $p$), the multi-bondic algorithm~\cite{JankeKappler95}, a multi-canonical version of the Swendsen-Wang algorithm, was chosen in order to enhance tunnellings between the phases in coexistence at the transition temperature. The Swendsen-Wang algorithm, being less time-consuming, was nevertheless preferred even in this regime of long thermal relaxation as long as at least ten tunnelling events between the ordered and disordered phases could be observed. As the first-order regime is approached, more and more sweeps are needed to fulfil this condition. We had to use up to \begin{description} \item{--} $200\ \!000$ Monte Carlo steps (MCS) at $p=0.76$ for $L=16$ for example, \end{description} while in the second-order regime for much larger systems, we needed \begin{description} \item{--} $100\ \!000$~MCS at $p=0.68$ for $L=50$, \item{--} $30\ \!000$ MCS at $p=0.56$ for $L=96$, and \item{--} $15\ \!000$ MCS at $p=0.44$ for $L=128$. \end{description} \begin{figure*} \begin{center} \epsfxsize=9.5cm \mbox{\epsfbox{Multicanonique.eps}} \vskip 0cm \epsfxsize=9.5cm \mbox{\epsfbox{HistoChi.eps}} \end{center}\vskip 0cm \caption{\small a) Comparison between canonical Swendsen-Wang and multi-bondic algorithms for a pure system ($p=1.0$) of size $L=6$ (histogram reweightings produced from simulations at inverse temperatures $\beta J=0.605$ to $0.655$ are superimposed). The insert shows a zoom of the peak location. \break b) and c) Histogram reweighting of the average susceptibility $\overline\chi(\beta)$ in a disordered system with $p=0.56$ at different sizes $L=25$ and 64. 
The maximum is progressively obtained after a few iterations (the next simulation is performed at the temperature of the maximum of the histogram reweighting of the current simulation).} \label{FigMuca-FigHisto} \end{figure*} This is the essential reason for the size limitation in the first-order regime\footnote{A rough estimate of the time needed by a single simulation is given by $L^3\times (\# {\rm MCS})\times 1\mu{\rm s}$ for one sample and one temperature.}. A comparison between the two algorithms is illustrated in the case of the pure system for a moderate size ($L=6$) in Fig.~\ref{FigMuca-FigHisto}a. The insert shows a zoom of the peak in the susceptibility and reveals, as expected, that in this first-order regime the multi-bondic algorithm provides a better description of the maximum which, being higher, is probably closer to the truth. In both regimes, the procedure of histogram reweighting enables us to extrapolate thermodynamic quantities to neighbouring temperatures. It leads to a better estimate of the transition temperature and of the maximum of the susceptibility, refining the finite-size estimate at each new size considered, since the maximum is progressively reached (Fig.~\ref{FigMuca-FigHisto}b and \ref{FigMuca-FigHisto}c). The reweighting has to be done for each sample, then the average is obtained as in Eq.~(\ref{eq2ter}). For a particular sample, the probability to measure at a given inverse temperature $\beta$ a microstate $[\sigma]$ with total magnetisation $M_{[\sigma,J]}=M$ and energy $E_{[\sigma,J]}=E$, is $P_\beta(M,E)=(Z_{[J]}(\beta))^{-1}\Omega(M,E) \ \!{\rm e}^{\beta E}$ where $\Omega(M,E)$ is the degeneracy of the macrostate. Note that we defined $E$ as minus the energy in order to deal with a positive quantity. 
We thus get at a different inverse temperature $\beta'$ \begin{equation} P_{\beta'}(M,E)=(Z_{[J]}(\beta)/Z_{[J]}(\beta'))P_\beta (M,E) \ \!{\rm e}^{(\beta'-\beta)E}, \end{equation} where the prefactor $(Z_{[J]}(\beta)/Z_{[J]}(\beta'))$ only depends on the two temperatures. For any quantity $Q$ depending only on $M_{[\sigma,J]}$ and $E_{[\sigma,J]}$ the thermal average at the new point $\beta'$ hence follows from \begin{equation} \langle Q_{[J]}\rangle_{\beta'} = \frac{\sum_{M,E} Q(M,E)\ \!P_{\beta}(M,E) \ \!{\rm e}^{(\beta'-\beta)E} }{\sum_{M,E} P_{\beta}(M,E) \ \!{\rm e}^{(\beta'-\beta)E}}. \end{equation} It is well known that the quality of the reweighting strongly depends on the number of Monte Carlo iterations: the larger this number, the better the sampling of the configuration space and thus of the tails of $P_{\beta}$. Here we must also cope with the disorder average, and a compromise between good disorder statistics and a large temperature range for the reweighting of individual samples has to be found; fortunately, we are mainly interested in the close neighbourhood of the susceptibility maximum, i.e., in a small temperature window. \subsection{Equilibration of the samples and thermal averages} Before any measurement, each sample has to be in thermal equilibrium at the simulation temperature. Starting from an arbitrary initial configuration of spins, during the initial steps of the simulation process the system explores configurations which are still strongly correlated to the starting configuration. The typical time scale over which this ``memory effect'' takes place is measured by the autocorrelation time. 
The integrated energy autocorrelation time $\tau^e$ (one can define more generally an autocorrelation time for any quantity) is given by \begin{equation} \tau_{[J]}^e(\beta)=\frac{1}{2\sigma_e^2}\sum_{i=0}^I\frac{1}{N_{\rm MCS}-I} \sum_{j=1}^{N_{\rm MCS}-I}\left(e^j_{[J]} e^{j+i}_{[J]}-\langle e_{[J]}\rangle_\beta^2 \right), \label{eq-tau_e} \end{equation} where $\sigma_e^2=\langle e_{[J]}^2\rangle_\beta-\langle e_{[J]}\rangle_\beta^2$ is the variance, $e^j_{[J]}$ is the value of the energy density at iteration $j$ for the realization $[J]$, and $I$ is a cutoff (as defined, e.g., by Sokal~\cite{Sokal89}) introduced in order to avoid to run a double sum up to $N_{\rm MCS}$, which would render the estimates very noisy. \begin{figure*} \begin{center} \epsfxsize=8.5cm \mbox{\epsfbox{Tunneling.eps}} \end{center}\vskip 0cm \caption{\small Monte Carlo data of the magnetisation for the disorder realization that gave the largest value of $ \chi_{[J]}(\beta_{\rm max}) $ for $p=0.44$ at lattice size $L=128$, $p=0.56$ ($L=96$) and $p=0.68$ ($L=50$).} \label{Fig11} \end{figure*} \begin{figure*} \begin{center} \epsfxsize=9.5cm \mbox{\epsfbox{HistoChiL128Config1.eps}} \end{center}\vskip 0cm \caption{\small Susceptibility for sample $\# 1$ for $p=0.44$ and $L=128$. The different curves show the result of histogram reweighting of simulations close to the maximum location after $5\ \!000$, $10\ \!000$, $12\ \!000$, and $15\ \!000$ MCS. We can safely consider that the value at the temperature of the maximum of the average susceptibility (vertical dashed line) is reliable after $15\ \!000$ MCS.} \label{FigHistoChiConfig1} \end{figure*} It is worth giving a definition of the errors as computed in this work. There are two different contributions. 
Assuming the different realizations of disorder as completely independent, one has an error due to randomness on any physical quantity $Q$, defined according to \begin{equation} \Delta_{\rm rdm}\overline Q=\left(\frac{\overline{Q^2}-\overline Q^2} {N\{J\}}\right)^{1/2}. \end{equation} To the thermal average for each sample is also attached an error which depends on the autocorrelation time $\tau^Q$, such that the total error on a physical quantity is here defined as: \begin{equation} \Delta_{\rm tot}\overline Q=\left(\Delta^2_{\rm rdm}\overline Q+ \frac 1{ N\{J\}}\frac{ 2\overline{\tau^Q}\ \overline{\sigma_Q^2}}{N_{\rm MCS}}\right)^{1/2}. \end{equation} For each disorder realization, the preliminary configurations have to be discarded and one usually considers that after 20 times the autocorrelation time $\tau^e$, thermal equilibrium is reached. The measurement process can then start and the thermal average of the physical quantities is considered, in the case of a single sample, as satisfactory when measurements were done during typically $10^2\times\tau^e$. For a quantity $Q$, a satisfactory relative error of the order of \begin{equation}\frac{\Delta_{\rm therm.} \overline Q}{\sqrt{\overline{\sigma_Q^2}}} =\sqrt{\frac{2\overline{\tau^Q}}{N\{J\}\times N_{\rm MCS}}} \simeq 10^{-2} \end{equation} indeed requires typically $N\{J\}\times N_{\rm MCS}\simeq 10^4\overline{\tau^Q}$. Since we also need a large number of disorder realizations in order to minimise $\Delta^2_{\rm rdm}\overline Q$, typically $N\{J\}\simeq 10^2-10^4$, each sample requires a ``production'' process during $N_{\rm MCS}\simeq (10^0-10^2)\overline{\tau^Q}$. In this paper, we choose to work at the upper limit with $N_{\rm MCS}> 10^2\tau^e$ (since there is a single dynamics in the algorithm, the time scale is usually measured through the energy autocorrelation time) and $N\{J\}> 10^3$. 
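The error prescription above can be sketched as follows in Python; `tau_int` implements a close variant of the integrated autocorrelation time of Eq.~(\ref{eq-tau_e}) (normalising each lag $i$ by $N_{\rm MCS}-i$ rather than $N_{\rm MCS}-I$), and `total_error` combines the disorder and thermal contributions. The function names are our own:

```python
import numpy as np

def tau_int(series, cutoff):
    """Integrated autocorrelation time: half the sum of the normalised
    autocovariance up to a cutoff I (so ~0.5 for uncorrelated data)."""
    x = np.asarray(series, float)
    n, mean, var = len(x), np.mean(series), np.var(series)
    acov = [np.mean((x[: n - i] - mean) * (x[i:] - mean))
            for i in range(cutoff + 1)]
    return 0.5 * np.sum(acov) / var

def total_error(Q_samples, tau, var_thermal, n_mcs):
    """Delta_tot: disorder scatter plus thermal error for the average of
    per-sample values Q_samples (hypothetical helper)."""
    Q = np.asarray(Q_samples, float)
    n_J = len(Q)
    d_rdm_sq = (np.mean(Q ** 2) - np.mean(Q) ** 2) / n_J
    return np.sqrt(d_rdm_sq + 2.0 * tau * var_thermal / (n_J * n_mcs))
```

The cutoff plays the role of $I$ above and keeps the estimate from being dominated by noise at large lags.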
Examples of time series of the magnetisation are shown in Fig.~\ref{Fig11} for particular samples (those which contribute the most to the average susceptibility) at the three largest sizes studied\footnote{The main illustrations are shown in the worst cases, i.e., for the largest systems at each dilution.} at dilutions $p=0.44$, $0.56$, and $0.68$ in the second-order regime. The simulation temperature is extremely close to the transition temperature and tunnelling between ordered and disordered phases guarantees a reliable thermal average. Another test of thermal equilibration is given by the influence of the number of MCS which are taken into account in the evaluation of thermal averages. An example is shown in Fig.~\ref{FigHistoChiConfig1} where the histogram reweightings of the susceptibility, as obtained with different numbers of MCS, are shown for a typical sample (the first sample, $\#\ \! 1$, is in fact supposed to be typical). Although quite different far from the simulation temperature (which is close to the maximum), the different curves are in satisfactory agreement at the temperature $\beta_{\rm max}$ of the maximum of the average susceptibility, shown by a vertical dashed line. The criterion $\#\ \!$MCS~$\ge 250\times\tau^e$ is safely satisfied for the larger number of iterations\footnote{We also note that something happened between 5000 and 10000 MCS, since the shape of $\chi_{\# 1}$ at high temperatures becomes unphysical. This is an illustration of the finite window of confidence of the histogram reweighting procedure.}. \subsection{Properties of disorder averages} For different samples, corresponding to distinct disorder realizations, the susceptibility $ \chi_{[J]}(\beta) $ at thermal equilibrium may have very different values (see Fig.~\ref{FigConvChi96} where the running average over the samples is also shown and remains stable after a few hundreds of realizations). 
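Such a stability check amounts to monitoring the running disorder average of the per-sample susceptibilities as realizations accumulate. A small sketch with mock heavy-tailed data (the log-normal input is purely illustrative, not a model of the measured distribution):

```python
import numpy as np

def running_average(chi_samples):
    """Running disorder average of per-sample susceptibilities, used to
    check stability against the number of realizations."""
    chi = np.asarray(chi_samples, float)
    return np.cumsum(chi) / np.arange(1, len(chi) + 1)

# mock heavy-tailed data: a few rare samples dominate the disorder average,
# pushing the mean well above the median (cf. the long tails discussed below)
rng = np.random.default_rng(0)
chi = rng.lognormal(mean=0.0, sigma=1.5, size=2000)
avg = running_average(chi)
```

Watching `avg` flatten out as samples are added is the numerical counterpart of the convergence plot described in the text.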
\begin{figure}[h] \epsfxsize=10cm \begin{center} \mbox{\epsfbox{Convergence-Chi-L96.eps}} \end{center}\vskip 0cm \caption{\small Values of the susceptibility $ \chi_{[J]}(\beta_{\rm max})$ for the different samples at $p=0.56$, $L=96$ (the simulation is performed at the temperature of the maximum of the average susceptibility). The running average over the samples is shown by the solid line.} \label{FigConvChi96} \end{figure} We took care to average the data over a sufficiently large number of disorder realizations (typically 2000 to 5000) to ensure reliable estimates of non-self-averaging quantities~\cite{Derrida84}. Averaging over a too small number of random configurations leads to typical (i.e. most probable) values instead of average ones. Indeed, as can be seen in Fig.~\ref{Fig12}, the probability distribution of $ \chi_{[J]} $ (plotted at the temperature $\beta_{\rm max}$ where the average susceptibility is maximum) presents a long tail of rare events with large values of the susceptibility. These samples have a large contribution to the average, shifted far from the most probable value. The larger the value of $p$, the longer the tail. Scanning the regime close to the first-order transition thus requires large numbers of samples to explore the configuration space efficiently, so the simulations were limited to $L=50$ at $p=0.68$ while we made the calculation up to $L=128$ at $p=0.44$. In the example of Fig.~\ref{Fig11}, the thermodynamic quantities have been averaged over $3500$ disorder realizations for $p=0.44$ at lattice size $L=128$, $2048$ for $p=0.56$, $L=96$, and $5000$ disorder realizations for $p=0.68$, $L=50$. \begin{figure*} \epsfxsize=12.0cm \begin{center} \mbox{\epsfbox{ProbaChi3p.eps}} \end{center}\vskip 0cm \caption{\small Probability distribution of the susceptibility $ \chi_{[J]}(\beta_{\rm max}) $ for the bond concentrations $p=0.44$, $0.56$, and $0.68$ for the largest lattice size in each case. 
The full curve represents the integrated distribution. At each dilution, a full vertical line shows the location of the average susceptibility, a dashed line shows the median and a dotted line shows the average over the events which are smaller than the median.} \label{Fig12} \end{figure*} Self-averaging properties are quantified through the normalised squared width, for example in the case of the susceptibility, $R_\chi=\sigma_\chi^2 (L)/\overline\chi^2$, where $\sigma_\chi^2= \overline{\chi^2}-\overline\chi^2$. For a self-averaging quantity, say $Q$, the probability distribution, albeit not truly Gaussian, may to first approximation be considered so close to the peak, $P_\beta(Q)\simeq (2\pi\sigma_Q^2)^{-1/2}{\rm e}^{-(Q -\overline{\langle Q\rangle})^2/2\sigma_Q^2}$, and it evolves towards a sharp peak in the thermodynamic limit, $P_\beta(Q)\to_{L\to\infty}\delta(Q -\overline{\langle Q\rangle})$. The probability of the average event $\overline{\langle Q\rangle}$ goes to 1 and the normalised squared width evolves towards zero in the thermodynamic limit, while it keeps a finite value for a non-self-averaging quantity, as shown for the susceptibility in Fig.~\ref{FigR_chi}. The longer tail observed in the probability distribution of the $\chi$ values when $p$ increases is reflected in Fig.~\ref{FigR_chi} by the fact that $\chi$ becomes less and less self-averaging as $p$ increases. \begin{figure}[ht] \epsfxsize=10.0cm \begin{center} \mbox{\epsfbox{R_chi.eps}} \end{center}\vskip 0cm \caption{\small Normalised squared width of the susceptibility, $R_\chi$, plotted against the inverse lattice size for the three dilutions $p=0.44$, 0.56, and 0.68. The solid lines are polynomial fits used as guides to the eye.
Note that $\chi$ is apparently less and less self-averaging as $p$ increases.} \label{FigR_chi} \end{figure} \begin{figure*} \epsfxsize=10.0cm \begin{center} \mbox{\epsfbox{VarianceE.eps}} \end{center}\vskip 0cm \caption{\small Normalised squared width of the energy, $R_{E}$, plotted on a log-log scale against the inverse lattice size for the three dilutions $p=0.44$, 0.56, and 0.68. Power-law fits have been performed and the corresponding exponents are printed next to each curve. The inset presents the same data plotted on a linear scale.} \label{FigR_E} \end{figure*} In contradistinction to the magnetic susceptibility, the energy seems to be weakly self-averaging in the range of lattice sizes that we studied, as seen in Fig.~\ref{FigR_E}. The associated exponent depends on the concentration of bonds $p$. This concentration dependence may be effective and due to corrections generated by other fixed points (see below). In Table~\ref{TabChi1}, the influence of the number of MCS is shown for typical samples, but also for the average susceptibility. Although the variations for a given sample and from sample to sample are large, the average is stable with our choice of the number of iterations (the largest), and so is the autocorrelation time (for the average). {\small \begin{table*} \caption{\small Evolution of the susceptibility with the number of Monte Carlo sweeps per spin for different samples, $ \chi_{[J]} $, and the average value (with 2048 samples) at $p=0.56$, $L=96$. The data are given at the location $\beta_{\rm max}$ of the maximum of the average susceptibility. The last column gives the number of independent measurements per sample. \label{TabChi1} } \begin{center}\begin{tabular}{rrrrrrlrc} \noalign{\vspace{3mm}} \hline\noalign{\vspace{0.1pt}} \hline\noalign{\vspace{0.1pt}} \hline $N_{\rm MCS}$& $ \chi_{\#1} $ & $ \chi_{\#2} $ & $ \chi_{\#3} $ & $ \chi_{\#4} $ & $ \chi_{\#5} $ & $\overline\chi_{\rm max}$ & $\tau^e(\beta_{\rm max})$ & meas.
$/$ sample\\ \hline 5000 & 994 & 404 & 611 & 682 & 1803 & 617(8) & 95.1 & $\simeq \,~50$ \\ 10000 & 952 & 390 & 698 & 614 & 1574 & 634(8) & 107.4 & $\simeq \,~90$ \\ 15000 & 1010& 356 & 680 & 819 & 1398 & 638(8) & 111.7 & $\simeq 130$ \\ 20000 & 939 & 351 & 689 & 851 & 1320 & 641(7) & 114.0 & $\simeq 175$ \\ 25000 & 911 & 327 & 675 & 848 & 1308 & 643(8) & 115.3 & $\simeq 200$ \\ 30000 & 934 & 327 & 733 & 837 & 1297 & 643(8) & 116.9 & $\simeq 250$ \\ \hline\noalign{\vspace{0.1pt}} \hline\noalign{\vspace{0.1pt}} \hline \end{tabular}\\[0.5cm] \end{center}\end{table*} } In Fig.~\ref{Fig12}, a full vertical line marks the location of the average susceptibility $\overline\chi_{\rm max}$. For comparison, the median value $\chi_{\rm med}$, defined as the value of $ \chi_{[J]} $ where the integrated probability takes the value $50\%$, is shown as the dashed line. The more it differs from the average, the more asymmetric is the probability distribution; the asymmetry is more pronounced as $p$ increases. We also notice that the maximum of the probability distribution (the typical samples) corresponds to smaller susceptibilities. For a given number of disorder realizations, this peak is better described than the tail at larger susceptibilities, so we also define (shown as dotted lines) an average over the samples with susceptibility smaller than the median, which we denote $\chi_{50\%}$, \begin{equation} \chi_{50\%}= 2\int_0^{\chi_{\rm med}} \chi_{[J]} P_\beta( \chi_{[J]} )\ \!{d} \chi_{[J]},\quad\quad \int_0^{\chi_{\rm med}} P_\beta( \chi_{[J]} )\ \!{d} \chi_{[J]}={1\over 2}, \label{chi50} \end{equation} where the factor $2$ normalises the truncated distribution.
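For a finite set of samples, Eq.~(\ref{chi50}) reduces to twice the restricted sum below the median, i.e.\ the plain mean of the lower half of the samples. A minimal sketch (synthetic lognormal values stand in for the measured $\chi_{[J]}$; parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
chi = rng.lognormal(mean=6.0, sigma=0.8, size=5000)  # stand-in samples

chi_med = np.median(chi)            # integrated probability equals 1/2 here
# Average over the lower half; the factor 2 of Eq. (chi50) is absorbed
# because the mean is taken over only half of the samples.
chi_50 = chi[chi <= chi_med].mean()

print(chi_50, chi_med, chi.mean())  # chi_50 < median < average (long tail)
```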
In the particular case of the probability distributions observed here, i.e.\ with a sharp initial increase, a peak located at small events and a long tail at large values of the variable\footnote{This shape of probability distribution is very different from that found for the 3D dilute Ising model~\cite{BCBJ04}. }, this definition empirically gives a sensitive measure of the typical or most probable value. We shall refer to this quantity whenever typical behaviour is concerned. \section{Qualitative description of the transition\label{sec4}} Before performing a quantitative analysis of the transition, it is interesting to study in some detail why the probability distributions have significantly different shapes when $p$ varies, which type of sample may be considered typical, and which one corresponds to a rare event with a very large or very small susceptibility. Here we shall focus on the second-order regime and in particular on $p=0.44$ for the largest simulated size, $L=128$, for which the probability distribution of $\chi_{[J]}$ at $\beta_{\rm max}$ can be inspected in Fig.~\ref{Fig12}. \begin{table} \caption{\small Relative variations of the peak height $\Delta\chi_{[J]}/\bar\chi_{\rm max}$ and peak location $\Delta \beta_{[J],{\rm max}}/\beta_{\rm max}$ for a few samples, chosen among the rare and the typical samples at $p=0.44$, $L=128$. For reference, the average values are $\beta_{\rm max}J=1.4820$ and $\overline\chi_{\rm max}=1450$.
The asterisks $(*)$ mark those samples that are discussed in detail in Figs.~\ref{FigRareHigh-0.44-128_1} -- \ref{FigRareLow-0.44-128_2}.} \label{TabChiRareAndTypical} \begin{center}\begin{tabular}{clrlrl} \noalign{\vspace{3mm}} \hline\noalign{\vspace{0.4pt}} \hline\noalign{\vspace{0.4pt}} \hline type & sample $\#$ & $\chi_{[J],{\rm max}}$ & $\beta_{[J],{\rm max}}J$ & $\Delta\chi_{[J]}/\bar\chi_{\rm max}$ & $\Delta \beta_{[J],{\rm max}}/\beta_{\rm max} $ \\ \hline & 0035 (*) & 5253 & 1.4823 & $262.3~\%$ & $\,~~0.02~\%$ \\ rare & 0438 & 3862 & 1.4822 & $166.3~\%$ & $\,~~0.013~\%$ \\ (large $\chi$) & 1135 & 3825 & 1.4821 & $163.8~\%$ & $\,~~0.007~\%$\\ & 3302 & 4314 & 1.4823 & $197.5~\%$ & $\,~~0.02~\%$ \\ \hline & 0006 & 1550 & 1.4831 & $6.9~\%$ & $\,~~0.07~\%$ \\ typical & 0008 (*) & 2792 & 1.4810 & $92.5~\%$ & $-0.07~\%$ \\ (around peak) & 0021 & 1473 & 1.4819 & $1.6~\%$ & $-0.007~\%$ \\ & 0039 & 2345 & 1.4817 & $61.7~\%$ & $-0.02~\%$ \\ \hline & 0373 & 946 & 1.4852 & $-34.7~\%$ & $\,~~0.22~\%$ \\ rare & 1492 (*) & 286 & 1.4830 & $-80.3~\%$ & $\,~~0.07~\%$ \\ (small $\chi$) & 1967 & 1063 & 1.4847 & $-26.7~\%$ & $\,~~0.2~\%$ \\ & 2294 & 769 & 1.4853 & $-46.9~\%$ & $\,~~0.2~\%$ \\ \hline\noalign{\vspace{0.4pt}} \hline\noalign{\vspace{0.4pt}} \hline \end{tabular}\\[0.5cm] \end{center}\end{table} Each sample displays its own maximum and, due to the fluctuations over disorder, the temperature $\beta_{[J],{\rm max}}$ where it occurs varies from sample to sample. In Table~\ref{TabChiRareAndTypical}, we quote for a few rare and typical samples the values of $\chi_{[J],{\rm max}}$ and $\beta_{[J],{\rm max}}$, the maximum of the sample susceptibility and the corresponding inverse temperature (see also Figs.~\ref{FigHistoChiRare-0.44-128} to \ref{FigHistoChiFoot-0.44-128}).
The relative variations of these numbers with respect to their average values at $\beta_{\rm max}$, $\Delta\chi_{[J]}/\bar\chi_{\rm max}=[\chi_{[J],{\rm max}} -\bar\chi_{\rm max}]/\bar\chi_{\rm max}$ and $\Delta\beta_{[J]}/\beta_{\rm max} =[\beta_{[J],{\rm max}}-\beta_{\rm max}]/\beta_{\rm max}$, are also collected in Table~\ref{TabChiRareAndTypical}. It turns out that rare events with large susceptibility also display a very small shift of the temperature $\beta_{[J],{\rm max}}$ with respect to the average. Other events have a smaller susceptibility at $\beta_{\rm max}$ both because their maximum $\chi_{[J],{\rm max}}$ is smaller and because of a larger shift of the temperature $\beta_{[J],{\rm max}}$ where this maximum occurs. A few examples of {\em rare\/} events corresponding to {\em large\/} values of $ \chi_{[J]} $ are shown in Fig.~\ref{FigHistoChiRare-0.44-128}. {\em Rare\/} events corresponding to {\em small\/} values of $ \chi_{[J]} $ are presented in Fig.~\ref{FigHistoChiFoot-0.44-128}. They have a very small contribution to the phase transition, so in the following we will refer only to events with large values of $\chi_{[J]}$ when mentioning rare events. In Fig.~\ref{FigHistoChiTypical-0.44-128}, the same is done for {\em typical\/} events, i.e., those for which the values of $\chi_{[J]}$ lie in the peak of the distribution. The scales of both axes are the same in the three figures in order to facilitate the comparison. \begin{figure}[h] \epsfxsize=11.5cm \begin{center} \mbox{\epsfbox{HistoChi-L128-p0.44Rare.eps}} \end{center}\vskip 0cm \caption{\small Examples of {\em rare\/} events for $p=0.44$ and $L=128$ with {\em large\/} values of $ \chi_{[J]}$. The thick lines show the averages over all realizations.
} \label{FigHistoChiRare-0.44-128} \end{figure} \begin{figure}[t] \epsfxsize=11.5cm \begin{center} \mbox{\epsfbox{HistoChi-L128-p0.44Typical-2.eps}} \end{center}\vskip 0cm \caption{\small Examples of {\em typical\/} events for the same parameters as in Fig.~\ref{FigHistoChiRare-0.44-128}. The thick lines show the averages over all realizations. } \label{FigHistoChiTypical-0.44-128} \end{figure} \begin{figure} \epsfxsize=11.5cm \begin{center} \mbox{\epsfbox{HistoChi-L128-p0.44RareSmall-2.eps}} \end{center}\vskip 0cm \caption{\small Examples of {\em rare\/} events for the same parameters as in Fig.~\ref{FigHistoChiRare-0.44-128} with $\chi_{[J]} $ at the foot of the probability distribution. The thick lines show the averages over all realizations. } \label{FigHistoChiFoot-0.44-128} \end{figure} \begin{figure} \epsfxsize=10.0cm \begin{center} \mbox{\epsfbox{Config0035Chi4545.eps}} \end{center}\vskip 0cm \caption{\small Time series of the magnetisation and the energy density and corresponding probability distributions for a {\em rare\/} event ($\# 35$) with {\em large\/} $\chi_{[J]}$ ($p=0.44$, $L=128$, simulation at inverse temperature $\beta J=1.48218$).} \label{FigRareHigh-0.44-128_1} \end{figure} \begin{figure} \epsfxsize=10.0cm \begin{center} \mbox{\epsfbox{Config0008Chi1470.eps}} \end{center}\vskip 0cm \caption{\small Time series of the magnetisation and the energy density and corresponding probability distributions for a {\em typical} event ($\# 8$) with $\chi_{[J]}$ in the vicinity of the most probable value ($p=0.44$, $L=128$, simulation at inverse temperature $\beta J=1.48218$).} \label{FigTypical-0.44-128_2} \end{figure} \begin{figure} \epsfxsize=10.0cm \begin{center} \mbox{\epsfbox{Config1492Chi0251.eps}} \end{center}\vskip 0cm \caption{\small Time series of the magnetisation and the energy density and corresponding probability distributions for a {\em rare\/} event ($\# 1492$) with very {\em small\/} $\chi_{[J]}$, which looks similar to typical events 
($p=0.44$, $L=128$, simulation at inverse temperature $\beta J=1.48218$).} \label{FigRareLow-0.44-128_2} \end{figure} In Figs.~\ref{FigRareHigh-0.44-128_1}-\ref{FigRareLow-0.44-128_2}, we can follow the fluctuations of the magnetisation along the Monte Carlo time series (after equilibration) for three different samples. Configuration $\# 35$ (Fig.~\ref{FigRareHigh-0.44-128_1}) corresponds to a rare event, with the definition given above, while sample $\# 8$ (Fig.~\ref{FigTypical-0.44-128_2}) is a typical one. The last sample, $\# 1492$ (Fig.~\ref{FigRareLow-0.44-128_2}), is an example of a realization of disorder which leads to a very small susceptibility peak. These figures also present the magnetisation and energy probability distributions. The rare event (Fig.~\ref{FigRareHigh-0.44-128_1}) displays a double-peak structure in the probability distributions (only a shoulder is visible in $P_{\beta_{\rm max}}(e_{[J]})$), presumably a remnant of the first-order transition of the pure system. In the average behaviour, it seems that at small values of $p$, these types of samples are ``lost'' among the large majority of typical samples, which have a ``second-order type'' of probability distribution. This observation is corroborated by similar ``signals'' in Figs.~\ref{FigHistoChiRare-0.44-128} to \ref{FigHistoChiFoot-0.44-128} concerning the shape of the susceptibility (narrow peak for rare events with large susceptibilities and broader for the others), of the order parameter (sharp increase with $\beta$ at the transition for rare events, and smoother variation for the typical samples), or of the Binder cumulant (deep well at the transition in the case of rare events and less pronounced wells for typical ones).
\begin{figure*} \epsfxsize=9.5cm \begin{center} \mbox{\epsfbox{HistoChiL20p084.eps}} \end{center}\vskip 0cm \caption{\small Probability distribution of the susceptibility for a system of size $L=20$ at $p=0.84$ and $\beta J=0.74704$, in the seemingly first-order regime. The simulation was performed with the multibondic algorithm.} \label{FigHistoChiL20p084} \end{figure*} \begin{figure*} \epsfxsize=9.5cm \begin{center} \mbox{\epsfbox{EvRarePdeE-p0.84Smooth.eps}} \end{center}\vskip 0cm \caption{\small Probability distributions $\overline P$ and $P_{[J]}$ of the energy $e$ for the average behaviour and for a rare event (large susceptibility), respectively, at $p=0.84$ ($L=13$). The double-peak structure suggests a behaviour for this specific sample which is similar to the one observed at a first-order transition. The simulation is performed at inverse temperature $\beta J=0.746\,356$.} \label{FigEvRarePdeE} \end{figure*} We may thus argue that a possible mechanism which preserves the first-order character of the pure model's transition at larger values of $p$ is connected to the occurrence of a larger proportion of samples of the ``first-order type'', i.e., with a very large susceptibility signal at $\beta_{\rm max}$. In Fig.~\ref{FigHistoChiL20p084}, the rather long tail at large susceptibilities in the susceptibility distribution confirms this assumption for $p=0.84$ ($L=20$), i.e., closer to, or probably inside, the first-order regime. Also the double-peak structure of the energy distribution at this dilution (see Fig.~\ref{FigEvRarePdeE}) is compatible with a first-order-like transition for the average behaviour (of course one would have to study the evolution of the energy barrier as the size increases, but this makes no sense for a specific disorder realization, for which the notion of a thermodynamic limit is meaningless).
A possible interpretation is that, as $p$ becomes larger, the rare events of higher susceptibility resemble more and more a system displaying a first-order transition. This would explain why the susceptibility peak is narrower (and thus coincides with the temperature of the maximum of the average only in very rare cases). \section{Phase diagram and strength of the transition\label{sec5}} \subsection{Transition line} We can now come back to the preliminary phases of this work. The transition temperature was determined for 19 values of the bond concentration ranging from $p=0.28$ to $p=1.00$ (pure system). We defined an effective inverse transition temperature $\beta_c(L,p)$ at a given lattice size $L$ as the location of the maximum of the average magnetic susceptibility $\overline\chi$ (see Fig.~\ref{FigTousLesChi}). Any diverging quantity could equally have been chosen, but it turned out that the specific heat displayed larger statistical errors than the magnetic susceptibility. Moreover, the stability of the random fixed point implies a slowly varying specific heat with a critical exponent $\alpha\le 0$.\footnote{We expect a stable randomness fixed point at large enough dilutions, where the exponent $\alpha$ should be negative, hence the singular contribution to the specific heat would not diverge.} For each $p$ and $L$, several Monte Carlo simulations were necessary to get a reasonable estimate of $\beta_c(L,p)$. As mentioned before, histogram reweighting was used to refine the determination. The procedure was applied up to lattice sizes $L=16$. The resulting phase diagram for two different lattice sizes is plotted in Fig.~\ref{Fig1}. The two data sets are in remarkable agreement. The numerical data presented in Fig.~\ref{Fig1} are furthermore in agreement with the mean-field prediction $T_c(p)=p T_c(p=1)$ for large bond concentration, close to the pure system ($p\simeq 1$).
At smaller concentration $p$, the topological properties of the bond configuration become important and the mean-field prediction fails to reproduce the observed behaviour. The effective-medium approximation introduced in this context in the eighties \begin{figure}[h] \epsfxsize=8.5cm \begin{center} \mbox{\epsfbox{Chi-vs-T_L10et16.eps}} \end{center}\vskip 0cm \caption{\small Average susceptibility and its histogram reweighting for systems of sizes $10^3$ and $16^3$ for dilutions (from left to right) $p=0.32$, 0.40, 0.48, 0.56, 0.64, 0.72, 0.80, 0.88, and 0.96.} \label{FigTousLesChi} \end{figure} by Turban~\cite{Turban80} reproduces the numerical data quite accurately. Limiting the approximation to a single bond, the following estimate for the transition temperature is obtained: \begin{equation} \beta_c(p)=J^{-1}\ln\left[{(1-p_c)e^{\beta_c^{\rm pure}J}-(1-p)\over (p-p_c)}\right], \label{eq7} \end{equation} where $\beta_c^{\rm pure}J=0.62863(2)$ for the pure system. This expression is exact (to the accuracy of the numerical input values) in the limits of the pure system ($p=1$) and the percolation threshold ($p_c = 0.248\,812\,6(5)$). \begin{figure} \epsfxsize=9.5cm \begin{center} \mbox{\epsfbox{Kc.eps}} \end{center}\vskip 0cm \caption{\small Transition temperatures $k_BT_c(p)/J$ with respect to the bond concentration $p$ for two lattice sizes $L=10$ and $L=16$. Mean-field and effective-medium approximations are also indicated by the dashed and solid lines, respectively.} \label{Fig1} \end{figure} \subsection{Order of the transition} Distinguishing a weak first-order phase transition from a continuous one is a very difficult task. The autocorrelation time of the energy $\tau^e$ at the transition temperature may be useful, since it displays a behaviour which depends on the order of the transition.
When using a canonical Monte Carlo simulation for the study of a first-order transition, the time scale of the dynamics is dominated by the tunnelling events between the ordered and disordered phases in coexistence at the transition temperature. Such a tunnelling event implies the creation and the growth of an interface whose energy cost behaves as $\beta\Delta F = 2\sigma_{\rm o.d.} L^{D-1}$, where $\sigma_{\rm o.d.}$ is the reduced interface tension. As a consequence, the autocorrelation time grows exponentially as \begin{equation} \tau^e(L) \sim e^{{2\sigma_{\rm o.d.}}L^{D-1}}. \label{eq8} \end{equation} For a continuous phase transition, this interface tension vanishes and the autocorrelation time scales as a power law of the lattice size, \begin{equation} \tau^e(L)\sim L^z, \label{eq9} \end{equation} where $z$ is the dynamical critical exponent. \begin{figure} \epsfxsize=9.5cm \begin{center} \mbox{\epsfbox{Tau.eps}} \end{center}\vskip 0cm \caption{\small Autocorrelation time of the energy $\tau^e$ with respect to the lattice size at the (pseudo-) transition temperature. The curves correspond to different bond concentrations $p$ (from bottom $p=0.28$ to the top $p=1.00$ in steps of $0.04$). The results shown here all follow from MC simulations using the Swendsen-Wang algorithm.} \label{Fig3} \end{figure} The numerical estimates of the autocorrelation time $\tau^e$ are plotted in Fig.~\ref{Fig3} for several dilutions. They show a growth of the autocorrelation time with the lattice size which becomes dramatic as the bond concentration increases, and a behaviour compatible with a power law of the system size when $p$ decreases, as expected since the dilution softens the transition and thus reduces the dynamical exponent $z$. Nevertheless, it is not possible to distinguish precisely the two regimes on a plot of the autocorrelation time versus the lattice size. Here, we may locate the boundary between the two regimes approximately at, or slightly above, $p=0.68$.
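The two growth laws can be told apart by monitoring effective log-log slopes between successive sizes: a power law gives a constant slope, while exponential growth gives a slope that keeps increasing. A minimal sketch with synthetic $\tau^e(L)$ data (amplitudes and exponents are purely illustrative, with $D=3$):

```python
import numpy as np

L = np.array([10, 16, 25, 40, 64, 96], dtype=float)

tau_power = 5.0 * L**0.5                # continuous: tau ~ L^z, here z = 0.5
tau_first = 5.0 * np.exp(0.01 * L**2)   # first order: tau ~ exp(2*sigma*L^(D-1))

def effective_z(L, tau):
    """Effective dynamical exponent from pairs of successive sizes."""
    return np.diff(np.log(tau)) / np.diff(np.log(L))

print(effective_z(L, tau_power))  # flat: ~0.5 for every pair of sizes
print(effective_z(L, tau_first))  # grows steadily with L: no power law fits
```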
Indeed, the autocorrelation time at $p=0.68$ is very well fitted with a power law for all lattice sizes smaller than $L=30$. Above this size, the data display a downward bending that can be explained by a correction to the power-law behaviour but not by an exponential prefactor (the bending would be upward). On the other hand, for $p=0.84$ it is not possible to find any set of three consecutive points that could be fitted by a power law: the autocorrelation time clearly grows faster than a power law. Using two successive lattice sizes $L_1$ and $L_2>L_1$, we defined an effective dynamical exponent \begin{equation} z_{\rm eff}(L_1,L_2) ={\ln\tau^e(L_2)-\ln\tau^e(L_1)\over \ln L_2-\ln L_1} \label{eq10} \end{equation} which is expected to reach a finite value for continuous transitions and to diverge for first-order ones. The data, plotted in Fig.~\ref{Fig4}, again do not lead to any sound estimate of the location of the tricritical point. Nevertheless, the transition again definitely remains continuous up to the bond concentration $p=0.68$. For higher concentrations, the data show an increase of the dynamical exponent with lattice size, but it is not possible to state unambiguously whether they develop a divergence or not. We also notice that the necessarily finite number of iterations leads to an underestimate of $\tau^e$, and thus of $z$, for bond concentrations close to $p=1$ at large lattice sizes (this is particularly clear in Fig.~\ref{Fig4} for the sizes $L=13$ to $16$). Multi-bondic simulations were thus needed in this case to improve the measurement of thermodynamic quantities when $p$ is close to 1. \begin{figure} \epsfxsize=8.5cm \begin{center} \mbox{\epsfbox{z.eps}} \end{center}\vskip 0cm \caption{\small Effective dynamical exponent (SW algorithm) with respect to the smaller lattice size at the transition temperature.
The curves correspond to different bond concentrations $p$ (from bottom $p=0.28$ to the top $p=1.00$ in steps of $0.04$).} \label{Fig4} \end{figure} \begin{figure} \epsfxsize=12.5cm \begin{center} \mbox{\epsfbox{P_de_E_reweightSmooth.eps}} \end{center}\vskip 0cm \caption{\small Probability distribution of the energy at the temperature for which the two peaks have equal heights. The two plots correspond to two different bond concentrations: $p=0.56$ on the left (SW algorithm, increasing sizes $L=25$, 30, 35, 40, 50, 64, and 96) and $p=0.84$ on the right (multi-bondic simulations, sizes $L=16$, 20, and 25). The order-disorder interface tension $\sigma_{\rm o.d.}=\ln (P_{\rm max}/P_{\rm min})/(2L^{D-1})$ is plotted against $L^{-1}$ in the upper part of the figure.} \label{fig5} \end{figure} Another approach is provided by the behaviour of the order-disorder interface tension. Numerically, the interface tension can be estimated from the probability distribution $P(e)$ of the energy. One has \begin{equation} {P_{\rm min} \over P_{\rm max}} \propto e^{-{\beta\Delta F}}=e^{-2\sigma_{\rm o.d.} L^{D-1}}. \label{eq11} \end{equation} Indeed, the free-energy barrier can be related to the ratio of the (equally high) peak probabilities of the ordered and disordered phases to the probability of the mixed-phase regime, which involves two interfaces\footnote{Due to the employed periodic boundary conditions, only an even number of interfaces can occur for topological reasons.} and corresponds to the bottom of the gap between the two peaks. We started from the effective transition temperatures estimated from the maxima of the magnetic susceptibility. At this temperature, the statistical weights of the ordered and disordered phases are comparable but not equal, so the heights of the two peaks are very different. In order to define the interface tension, we therefore reweighted the time series of the simulations to the (close) temperature for which the two peaks have equal heights.
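The extraction of $\sigma_{\rm o.d.}$ from an equal-height histogram boils down to reading off the peak and gap values of $P(e)$. A minimal sketch (a synthetic double-Gaussian stands in for the reweighted energy distribution; $L$, $D$, and the peak positions are illustrative, and the reweighting step itself is omitted):

```python
import numpy as np

L, D = 40, 3
e = np.linspace(-2.0, 0.0, 2001)

# Synthetic double-peaked energy distribution with equal peak heights,
# standing in for the reweighted P(e) at a first-order-like transition.
P = np.exp(-((e + 1.5) / 0.05) ** 2) + np.exp(-((e + 0.5) / 0.05) ** 2)

P_max = P.max()
# Minimum of the gap between the two peaks (mixed-phase configurations).
P_min = P[(e > -1.4) & (e < -0.6)].min()

# Reduced interface tension from P_min/P_max = exp(-2*sigma*L^(D-1)).
sigma_od = np.log(P_max / P_min) / (2 * L ** (D - 1))
print(sigma_od)
```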
The order-disorder interface tension is plotted against the inverse of the lattice size at the transition temperature in the upper part of Fig.~\ref{fig5}. It clearly shows a vanishing of the interface tension for $p=0.56$, and presumably also for $p=0.76$ (not shown here), which is a clear indication of a disorder-induced second-order transition. On the other hand, for $p=0.84$ the interface tension seems to converge towards a finite (albeit very small) value in the thermodynamic limit, which can be taken as a signal for the persistence, down to this dilution, of the first-order nature of the transition of the pure case at $p=1$. As a consequence, we are led to the conclusion that the tricritical point is presumably located between $p=0.68$ and $p=0.84$, the upper bound corresponding to the observation of an exponential growth of the autocorrelation time and the lower one to a constant dynamical exponent and the vanishing of the latent heat (both values of $p$ are indicated in the previous Figs.~\ref{Fig3} and \ref{Fig4}). However, one cannot unambiguously prove by numerical simulations on finite systems that what we identified as a second-order phase transition is not a weak first-order phase transition with a correlation length larger than $L=128$, or that the fast growth of the autocorrelation time for $p\ge 0.84$ is not a cross-over to a power-law regime at larger system sizes. \section{Critical behaviour\label{sec6}} \subsection{Leading behaviour and critical exponents} We now concentrate on the second-order regime only, i.e., on $p\le 0.68$, where we performed an investigation of the universality class at the disorder fixed point. The critical exponents are computed using the finite-size scaling behaviour of the physical quantities (Eqs.~(\ref{eq7bisa})-(\ref{eq7bisd})) at the effective transition temperature $\beta_c(L,p)$.
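On a log-log scale, the leading finite-size-scaling laws are straight lines, so exponent ratios follow from linear least-squares fits. A minimal sketch for $\overline\chi_{\rm max}\sim L^{\gamma/\nu}$ (noiseless synthetic data generated with an assumed ratio $\gamma/\nu=1.535$; amplitude illustrative):

```python
import numpy as np

# Synthetic chi_max(L) obeying chi ~ L^(gamma/nu) with gamma/nu = 1.535
L = np.array([30, 40, 50, 64, 96, 128], dtype=float)
chi = 0.8 * L ** 1.535

# Least-squares fit of log(chi) = (gamma/nu)*log(L) + const
gamma_over_nu, const = np.polyfit(np.log(L), np.log(chi), 1)
print(gamma_over_nu)  # recovers 1.535
```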
In the usual renormalisation-group scheme for disordered systems, the renormalisation flow is subject to the influence of three fixed points describing, respectively, the pure system, the random system, and the percolation transition. The scaling behaviour is thus expected to display large corrections, resulting in a cross-over to a unique universal behaviour at large lattice sizes. According to this scheme, the measured effective exponents are expected to be (apparently) concentration dependent. In the previous sections (see, e.g., Fig.~\ref{FigTousLesChi}), the corrections to scaling for the transition temperature have been observed to be weaker for the bond concentration $p=0.56$. This behaviour is illustrated, e.g., in Fig.~\ref{FigBeta_c_vs_L}, where the cross-over effect is reflected in the bending of the curves $\beta_{\rm max}(L,p)J$ vs. $L^{-1}$ for three dilutions $p=0.32$, $p=0.56$, and $p=0.80$. The curve at $p=0.56$, on the other hand, is almost {\it flat}. The corresponding data for the three main dilutions in the second-order regime, $p=0.44$, $p=0.56$, and $p=0.68$, are then plotted against $L^{-1/\nu}$ in the right panel. Although the value of $\nu$ is not yet known at this stage, we anticipate the result derived later and already use this exponent here. Again, the curve at $p=0.56$ has an almost vanishing slope. As a consequence, we decided to make further large-scale Monte Carlo simulations at this concentration $p=0.56$ up to the lattice size $L=96$. To monitor the effects of the competing fixed points, we also made additional large-scale simulations for the concentrations $p=0.44$ (towards the percolation transition) and $p=0.68$ (towards the regime of first-order transitions) up to the lattice sizes $L=128$ and $L=50$, respectively (size limitations at these concentrations are linked to the discussion of Sect.~\ref{sec3}).
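The linear relation used in Fig.~\ref{FigBeta_c_vs_L}, $\beta_{\rm max}(L,p)=\beta_c(p)+aL^{-1/\nu}+\dots$, amounts to a straight-line fit in the variable $L^{-1/\nu}$ whose intercept gives the infinite-size extrapolation. A minimal sketch (synthetic pseudo-critical couplings; $\nu=0.75$ anticipated from the later analysis, $\beta_c$ and $a$ purely illustrative):

```python
import numpy as np

nu = 0.75                                  # anticipated estimate (see text)
L = np.array([20, 30, 40, 50, 64, 96], dtype=float)
x = L ** (-1.0 / nu)

# Synthetic pseudo-critical couplings beta_max(L) = beta_c + a*x
beta_c_true, a_true = 1.4820, 0.015
beta_max = beta_c_true + a_true * x

# Linear fit in x; the intercept is the L -> infinity extrapolation.
a_fit, beta_c_fit = np.polyfit(x, beta_max, 1)
print(beta_c_fit)  # recovers 1.4820
```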
\begin{figure} \epsfxsize=10.5cm \begin{center} \mbox{\epsfbox{Beta_c_vs_L.eps}} \end{center}\vskip 0cm \caption{\small Evolution of the size-dependent (pseudo-)critical coupling with the inverse system size, for relatively small sizes in the left panels. The same in the right panel for the three main dilutions, where the data are fitted, by anticipation, to a linear relation $\beta_{\rm max}(L,p)=\beta_c(p)+aL^{-1/\nu}+\dots$, where our estimate for $\nu$ ($\approx 0.75$) will be discussed later. The slope coefficient is slightly positive for $p=0.44$, slightly negative for $p=0.68$, and virtually zero at $p=0.56$, where the corrections to scaling (at least for this quantity) appear to be the smallest.} \label{FigBeta_c_vs_L} \end{figure} \begin{figure}[ht] \epsfxsize=9.cm \begin{center} \mbox{\epsfbox{ChiMax-vs-L.eps}} \end{center}\vskip 0cm \caption{\small Finite-size scaling behaviour of the susceptibility, the magnetisation, and of $\beta L^{-D} d\ln\overline m/d\beta$ at $\beta_{\rm max}$ (the quantities have been shifted in the vertical direction for the sake of clarity). The behaviour at small lattice sizes is presumably governed by the percolation fixed point (shown as dashed lines and characterised by the exponent ratios $\gamma/\nu\simeq 2.05$ and $\beta/ \nu\simeq 0.475$).
Above a crossover length scale a new (random) fixed point is reached (shown by the solid lines, with exponent ratios $\gamma/\nu\simeq 1.535$, $1/\nu\simeq 1.34$, and $\beta/ \nu\simeq 0.73$, discussed in detail below).} \label{FigChiMax-vs-L} \end{figure} \begin{figure} \epsfxsize=9.cm \begin{center} \mbox{\epsfbox{Chi_3dilutions.eps}} \end{center}\vskip 0cm \caption{\small Log-log plot of the average susceptibility (open symbols) and the typical susceptibility (filled symbols), as defined in Eq.~(\ref{chi50}) by $\chi_{50\%}$, for the three principal dilutions studied, indicating that the asymptotic scaling regime sets in earlier for the latter quantity.} \label{FigChi3p} \end{figure} In Fig.~\ref{FigChiMax-vs-L}, the finite-size scaling behaviour of the maximum susceptibility, $\overline \chi_{\rm max}$, the magnetisation at $\beta_{\rm max}$, and the derivative of $\ln\overline m$ with respect to the inverse temperature evaluated at ${\beta_{\rm max}}$ are plotted versus the system size on a log-log scale. These curves should give access to the exponents $\gamma/\nu$, $\beta/\nu$, and $1/\nu$, respectively. The three main dilutions are represented. One clearly observes a crossover between two regimes. For small lattice sizes, the system is strongly influenced by the proximity of a perturbing fixed point, while a different, unique fixed point is apparently reached at large sizes, as revealed by the slopes, which are at first sight independent of the dilution once the linear extent of the lattice reaches values of about $L\ge 30$. The most probable susceptibility $\chi_{50\%}$ is shown in Fig.~\ref{FigChi3p} and can also lead to estimates for $\gamma/\nu$.
According to the discussion given in Sect.~\ref{sec3}, we expect the most probable susceptibility to be better behaved than the average susceptibility, to which rare events contribute significantly; these rare disorder realizations might be poorly sampled if too few samples are considered. This difficulty might be circumvented through the study of what we defined as $\chi_{50\%}$ in Eq.~(\ref{chi50}). In the presence of multifractality, the universal behaviour of $\chi_{50\%}$ should differ from that of $\overline \chi$. Since such a peculiar behaviour does not occur for a global quantity~\cite{Derrida84}, like $\chi$, we expect compatible values of $\gamma/\nu$ as deduced from $\chi_{50\%}$ or $\overline\chi$. The data plotted in Fig.~\ref{FigChi3p} indeed confirm our previous analysis: $\chi_{50\%}$ appears to be less influenced by the crossover effects than the average $\overline\chi_{\rm max}$. To support this statement, we present the results of fits of the susceptibility in two different tables for the two regimes and for the three main dilutions: \begin{itemize} \item[$\triangleright$] At small lattice sizes, the behaviour of $\overline \chi_{\rm max}$ and $\overline m_{\beta_{\rm max}}$ is in all three cases compatible with the percolation exponents $(\gamma/\nu)_{\rm perco}\simeq 2.05$ and $(\beta/\nu)_{\rm perco}\simeq 0.475$ shown in Fig.~\ref{FigChiMax-vs-L} by the dashed lines. This seems to be true (particularly in the case of the susceptibility) over a wider range of sizes for $p=0.44$ than for $p=0.56$ or $p=0.68$. This observation is compatible with a stronger influence of the percolation fixed point when $p=0.44$, which is closer to the percolation threshold than the two other dilutions. 
Surprisingly, the assumption of a percolation influence is not at all confirmed\footnote{The expected percolation exponent would be $1/\nu\simeq 1.124$ while the slope at small sizes is larger than in the random regime where it takes a value close to $1/\nu\simeq 1.35$.} by the behaviour at small sizes of the third quantity of interest, $L^{-D}(d\ln\overline m/d\beta)_{\beta_{\rm max}}$. Owing to the differentiation with respect to the inverse temperature, the identification with percolation quantities becomes less obvious, but we do not have any explanation for this strange result. In Table~\ref{TabApp4}, we try to point out the {\em influence of the percolation fixed point}. This is achieved by power-law fits between a fixed minimum size $L_{\rm min}=4$ up to an increasing maximum size $L_{\rm max}$ below the value $L=30$, which apparently marks the modification in the behaviour of the physical quantities of interest. We first observe that $\beta/\nu$ starts from a value very close to the percolation value, and second, that $\chi_{50\%}$ always has a lower exponent (i.e. more distinct from the percolation value). \begin{center} \begin{table}[h] \caption{\small Exponents deduced from the finite-size scaling behaviour of $\overline\chi_{\rm max}$ and $\chi_{50\%}$ in the vicinity of the percolation fixed point (small sizes). Recall the percolation value $(\gamma/\nu)_{\rm perco}\simeq 2.05$ for comparison. 
\label{TabApp4}}\small \begin{center}\begin{tabular}{llllllll} \noalign{\vspace{3mm}} \hline\noalign{\vspace{0.4pt}} \hline\noalign{\vspace{0.4pt}} \hline & & \multicolumn{2}{c}{$p=0.44$} & \multicolumn{2}{c}{$p=0.56$} & \multicolumn{2}{c}{$p=0.68$} \\ & & \multicolumn{2}{c}{$\gamma/\nu$ deduced from} & \multicolumn{2}{c}{$\gamma/\nu$ deduced from} & \multicolumn{2}{c}{$\gamma/\nu$ deduced from} \\ & & \crule{2} & \crule{2} & \crule{2} \\ $L_{\rm min}$ & $L_{\rm max}$ &\tvi 06 $\overline\chi_{\rm max}$ & $\chi_{50\%}$ & $\overline\chi_{\rm max}$ & $\chi_{50\%}$ & $\overline\chi_{\rm max}$ & $\chi_{50\%}$ \\ \hline 4 & 8 & 2.015 & 1.902 & 2.098 & 1.916 & 2.211 & 1.915 \\ 4 & 13& 1.984 & 1.866 & 2.034 & 1.818 & 2.132 & 1.720 \\ 4 & 20& 1.954 & 1.833 & 1.973 & 1.748 & 2.051 & 1.579 \\ 4 & 30& 1.924 & 1.808 & 1.913 & 1.691 & 1.974 & 1.500 \\ \hline\noalign{\vspace{0.4pt}} \hline\noalign{\vspace{0.4pt}} \hline \end{tabular}\\[0.5cm] \end{center}\end{table} \end{center} \begin{center} \begin{table}[h] \caption{\small Exponents deduced from the finite-size scaling behaviour of $\overline\chi_{\rm max}$ and $\chi_{50\%}$ in the vicinity of the random fixed point (large sizes). The largest size taken into account in the fits is $L_{\rm max}=128$ for $p=0.44$, 96 for $p=0.56$, and 50 for $p=0.68$. 
\label{TabApp5}}\small \begin{center}\begin{tabular}{lllllll} \noalign{\vspace{3mm}} \hline\noalign{\vspace{0.4pt}} \hline\noalign{\vspace{0.4pt}} \hline & \multicolumn{2}{c}{$p=0.44$} & \multicolumn{2}{c}{$p=0.56$} & \multicolumn{2}{c}{$p=0.68$} \\ & \multicolumn{2}{c}{$\gamma/\nu$ deduced from} & \multicolumn{2}{c}{$\gamma/\nu$ deduced from} & \multicolumn{2}{c}{$\gamma/\nu$ deduced from} \\ & \crule 2 & \crule 2 & \crule 2 \\ $L_{\rm min}$ & \tvi 06 $\overline\chi_{\rm max}$ & $\chi_{50\%}$ & $\overline\chi_{\rm max}$ & $\chi_{50\%}$ & $\overline\chi_{\rm max}$ & $\chi_{50\%}$ \\ \hline 20 & 1.724 & 1.672 & 1.571 & 1.579 & 1.541 & 1.412 \\ 25 & 1.711 & 1.664 & 1.543 & 1.587 & 1.479 & 1.471 \\ 30 & 1.706 & 1.669 & 1.518 & 1.596 & 1.438 & 1.539 \\ 35 & - & - & 1.500 & 1.581 & 1.447 & 1.645 \\ 40 & 1.703 & 1.679 & 1.502 & 1.587 & 1.464 & 1.675 \\ 50 & 1.695 & 1.657 & 1.506 & 1.593 & \\ 64 & 1.680 & 1.659 & & & \\ \hline\noalign{\vspace{0.4pt}} \hline\noalign{\vspace{0.4pt}} \hline \end{tabular}\\[0.5cm] \end{center}\end{table} \end{center} \item[$\triangleright$] At large sizes, for each quantity considered here, the curves corresponding to the three dilutions in Figs.~\ref{FigChiMax-vs-L} and \ref{FigChi3p} evolve, after a crossover regime whose exact location depends on the value of $p$, towards a presumably unique power-law behaviour which seems to remain stable then (solid lines in Fig.~\ref{FigChiMax-vs-L}). We thus believe that we have reached large enough sizes in order to get reliable estimates of the {\em random fixed point exponents}. This is only a visual impression, since in fact the effective exponents are still subject to significant variations, especially for the extreme dilutions $p=0.44$ and $p=0.68$. 
Effective exponents $\gamma/\nu$, $\beta/\nu$, and $1/\nu$ may be defined from power-law fits of $\overline \chi_{\rm max}$, $\overline m_{\beta_{\rm max}}$, and $L^{-D}d\ln\overline m/d\beta$ between an increasing minimum size, $L_{\rm min}$, and a maximum one, $L_{\rm max}$. The value $L_{\max}$ is kept at the maximum available value $L=128$, $96$, and $50$ for $p=0.44$, 0.56, and 0.68, respectively, and the results for the susceptibility are presented in Table~\ref{TabApp5}. We see there that $\chi_{50\%}$ is again better behaved (more stable) than the average susceptibility. \end{itemize} Since we are mainly interested in the randomness fixed point, we now concentrate on fits at large system sizes. An exhaustive summary (i.e. for all three dilutions of interest) of the results of the fits performed at dilutions $p=0.44$, $p=0.56$, and $p=0.68$ is presented in Table~\ref{TabApp3}. The corresponding effective exponents are also plotted against $L^{-1}_{\rm min}$ in Fig.~\ref{FigExpstEff2}. These results show that the data analysis is considerably more involved than suggested by our preliminary determination of exponents in Table~\ref{TabApp5}. Again, the crossover between percolation and random fixed point behaviours is visible through the variation of the effective exponents, and the data exhibit large corrections to scaling. \begin{center} \begin{table}[t] \caption{\small Linear fits for $\overline\chi_{\rm max}$, $\overline m_{\beta_{\rm max}}$, and $L^{-D}d\ln\overline m/d\beta$ at $\beta_{\rm max}$, leading to finite-size estimates of the combinations of critical exponents $\gamma/\nu$, $\beta/\nu$ and $1/\nu$. These results correspond to the three main dilutions, and they are extracted from the finite-size scaling behaviour of the quantities at the temperature where the maximum of the average susceptibility is found by histogram reweighting. The results for dilutions $p=0.44$ and $p=0.68$ are less stable than for $p=0.56$, reflecting the role of the crossover. 
\label{TabApp3}} \small \begin{center}\begin{tabular}{llllllllll} \noalign{\vspace{3mm}} \hline\noalign{\vspace{0.4pt}} \hline\noalign{\vspace{0.4pt}} \hline $p$ & $L_{\rm min}$ & $L_{\rm max}$ & $\gamma/\nu$ & error & $\beta/\nu$ & error & $1/\nu$ & error & $\gamma/\nu+2\beta/\nu$\\ \hline 0.44 & 30 & 128 & 1.706 & 0.006 & 0.544 & 0.005 & 1.395 & 0.006 & 2.794(16) \\ --- & 40 & --- & 1.703 & 0.008 & 0.552 & 0.007 & 1.381 & 0.008 & 2.807(22) \\ --- & 50 & --- & 1.695 & 0.010 & 0.540 & 0.009 & 1.358 & 0.010 & 2.775(28) \\ --- & 64 & --- & 1.680 & 0.016 & 0.534 & 0.014 & 1.357 & 0.016 & 2.748(44) \\ \hline 0.56 & 30 & 96 & 1.518 & 0.011 & 0.588 & 0.010 & 1.389 & 0.011 & 2.694(31) \\ --- & 35 & --- & 1.500 & 0.014 & 0.592 & 0.012 & 1.362 & 0.013 & 2.684(38) \\ --- & 40 & --- & 1.502 & 0.016 & 0.608 & 0.015 & 1.353 & 0.016 & 2.718(46) \\ --- & 50 & --- & 1.506 & 0.026 & 0.645 & 0.024 & 1.330 & 0.025 & 2.796(74) \\ \hline 0.68 & 25 & 64 & 1.479 & 0.021 & 0.343 & 0.015 & 1.505 & 0.021 & 2.165(51) \\ --- & 30 & --- & 1.438 & 0.031 & 0.344 & 0.022 & 1.453 & 0.030 & 2.126(75) \\ --- & 35 & --- & 1.447 & 0.047 & 0.342 & 0.033 & 1.437 & 0.046 & 2.13(11) \\ --- & 40 & --- & 1.464 & 0.075 & 0.547 & 0.051 & 1.379 & 0.075 & 2.56(18) \\ \hline\noalign{\vspace{0.4pt}} \hline\noalign{\vspace{0.4pt}} \hline \end{tabular}\\[0.5cm] \end{center}\end{table} \end{center} \begin{figure}[h] \epsfxsize=9.cm \begin{center} \mbox{\epsfbox{ExpstEff2.eps}} \end{center}\vskip 0cm \caption{\small Effective critical exponents $\gamma/\nu$ and $\beta/\nu$, as computed from a power-law fit between $L_{\rm min}$ and $L_{\rm max}$, with $L_{\max}$ fixed to the maximum available value $L=128$, $96$ and $50$ for $p=0.44$, 0.56, and 0.68, respectively. They are plotted against $L^{-1}_{\rm min}$. The thin solid line shows the percolation values and the shadow stripe corresponds to our estimate for the random fixed point values. 
In the case of the dilution $p=0.56$, the value of $2\beta/\nu +\gamma/\nu$ is also shown.} \label{FigExpstEff2} \end{figure} A precise determination of the magnetic exponents is quite difficult. Indeed, as can be seen in Fig.~\ref{FigExpstEff2}, the effective critical exponents $(\gamma/\nu)_{\rm eff}$ and $(\beta/\nu)_{\rm eff}$ do not converge towards $p$-independent limits when $L_{\rm min} \rightarrow L_{\rm max}$. The cross-over effects on the thermal quantities are much smaller. Indeed, the effective critical exponent $\nu_{\rm eff}$ converges to a roughly $p$-independent limit when $L_{\rm min} \rightarrow L_{\rm max}$. We can give the following estimates for $\gamma/\nu$ and $1/\nu$: \begin{eqnarray} p=0.44: & (\gamma/\nu)_{\rm eff} \simeq 1.68(2), & (1/\nu)_{\rm eff} \simeq 1.36(2),\\ p=0.56: & (\gamma/\nu)_{\rm eff} \simeq 1.51(3), & (1/\nu)_{\rm eff} \simeq 1.33(3),\\ p=0.68: & (\gamma/\nu)_{\rm eff} \simeq 1.46(8), & (1/\nu)_{\rm eff} \simeq 1.38(8), \end{eqnarray} simply corresponding to the last line of Table~\ref{TabApp3}, i.e., to the largest studied value of $L_{\rm min}$, for each dilution. The value of $\beta/\nu$ on the other hand is definitely not stable and is more subject to the competing influence of the fixed points. For $p=0.44$, for example, the estimate of $(\beta/\nu)_{\rm eff}$ is relatively stable against variations of $L_{\rm min}$, with fitted values below $0.5$, close to the expected value for the percolation transition ($0.475$). This is a quantitative indication that the system is probably still subject to cross-over caused by the percolation fixed point. In the case of $p=0.68$, the estimate of $(\beta/\nu)_{\rm eff}$ is very small, then suddenly increases at $L_{\rm min}=64$. These remarks are consistent with the renormalisation scheme described above. 
To help us decide among the different effective values measured at the three dilutions, we use the scaling relation $\gamma/\nu + 2\beta/\nu = D = 3$, which is almost satisfied only for the bond concentration $p=0.56$ (shown in Fig.~\ref{FigExpstEff2}) when taking into account the lattice sizes $L\ge 50$. For the bond concentrations $p=0.44$ and $p=0.68$, this scaling relation is not satisfied for any of the accessible values. One is thus led to conclude that the critical regime has not yet been reached for these concentrations, in spite of our efforts to go up to very large sizes. Remember also that the corrections-to-scaling were found to be the smallest at $p=0.56$, so the asymptotic regime in neighbouring dilutions should be more difficult to reach. Figure \ref{FigExpstEff2} thus suggests relying only on the values measured at dilution $p=0.56$, {\em extrapolated to $L_{\rm min}\to\infty$}, as shown in Fig.~\ref{FigExpstEffp0.56}, where a dashed stripe emphasises such an extrapolation of the effective exponents measured at the largest sizes. The values of $(\gamma/\nu)_{\rm eff}$ and $(1/\nu)_{\rm eff}$ are indeed stable in the regime $L\ge 35$. We may thus have {\em reliable estimates} of the asymptotic values for these exponents, and a {\em reasonable estimate} for $\beta/\nu$, consistent with the scaling relation. Using this extrapolation procedure, our final estimates of the critical exponents of the disorder-induced random fixed point of the three-dimensional bond-diluted 4-state Potts model are the following values: \begin{eqnarray} & \gamma/\nu & =1.535(30), \\ & \beta/\nu & =0.732(24), \\ & 1/\nu & =1.339(25), \end{eqnarray} resulting from a linear extrapolation of the data points for $L_{\rm min} = 25$, 30, 35, 40, 50, and 64 at $p=0.56$. Note that since the data are correlated, we have kept the error of the last point. 
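The two steps of this analysis, effective exponents from log-log fits with increasing $L_{\rm min}$, followed by a linear extrapolation in $1/L_{\rm min}$, can be mimicked in a few lines of Python. This is our own illustration on synthetic data with a known asymptotic exponent, not the analysis code used for the paper:

```python
import numpy as np

def eff_exponent(L, Q, L_min):
    """Effective exponent: log-log slope of a power-law fit Q ~ A * L**x
    restricted to sizes L >= L_min (up to the largest size available)."""
    m = L >= L_min
    slope, _ = np.polyfit(np.log(L[m]), np.log(Q[m]), 1)
    return slope

# Synthetic chi_max with asymptotic exponent 1.535 and a crossover-like
# correction term (illustrative only, not the measured data).
L = np.array([25., 30., 35., 40., 50., 64., 96.])
chi = 0.2 * L**1.535 * (1.0 + 3.0 / L)

L_mins = np.array([25., 30., 35., 40., 50.])
g_eff = np.array([eff_exponent(L, chi, lm) for lm in L_mins])

# Extrapolate L_min -> infinity linearly in 1/L_min; the intercept moves
# towards the input exponent from below, though only approximately, since
# the corrections are not exactly linear in 1/L_min.
_, g_inf = np.polyfit(1.0 / L_mins, g_eff, 1)
print(g_eff.round(3), round(g_inf, 3))
```

The residual offset between the extrapolated intercept and the input exponent illustrates why the quoted uncertainties are kept conservative.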
\begin{figure}[t] \epsfxsize=9.cm \begin{center} \mbox{\epsfbox{ExpstEffp0.56.eps}} \end{center}\vskip 0cm \caption{\small Effective critical exponents $\gamma/\nu$, $\beta/\nu$, and $1/\nu$ for the dilution $p=0.56$ obtained from fits between $L_{\rm min}$ and $L_{\rm max} = 96$ and extrapolated to $L_{\rm min}\to\infty$. In this limit, the scaling relation $\gamma/\nu+2\beta/\nu=D$ is nicely satisfied.} \label{FigExpstEffp0.56} \end{figure} \subsection{Corrections to scaling} For the 3D disordered Ising model it is well known that the corrections to scaling close to the random fixed point are strong (with a correction-to-scaling exponent around $\omega=0.4$). Let us assume here also the existence of an irrelevant scaling field $g$ with scaling dimension $y_g=-\omega<0$. The scaling expression for the susceptibility \begin{equation} \overline\chi(L^{-1},\beta-\beta_c,g)=L^{\gamma/\nu}f_\chi (L|\beta-\beta_c|^\nu,L^{-\omega}g), \label{eq-IrrelevantChi} \end{equation} expanded at $\beta_c$ (on a finite system the susceptibility is always finite) around the fixed point value $g=0$, leads to the standard expression $\Gamma_c L^{\gamma/\nu}[1+b_\chi L^{-\omega}+O(L^{-2\omega})]$. In order to investigate this question for the 3D 4-state Potts model, we tried to fit the physical quantities for $p=0.56$ as \begin{equation} \overline\chi_{\rm max}(L)=\Gamma_c L^{\gamma/\nu}(1+b_\chi L^{-\omega}), \label{eq-CorrScalChi} \end{equation} and similar expressions for $\overline m_{\beta_{\rm max}}$, in the range $L\ge 25$, where the leading term was already fitted in the previous section and the subleading correction is due to the first irrelevant scaling field. Since four-parameter non-linear fits are not stable, we preferred linear fits in which the exponents are taken as fixed parameters and only the amplitudes are free. In Fig.~\ref{FigChi2}, we show a 3D plot of the cumulative squared deviation of the least-squares linear fit, $\chi^2$, as a function of $\gamma/\nu$ and $\omega$. 
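The key point of this procedure is that, once the exponents are held fixed, Eq.~(\ref{eq-CorrScalChi}) is linear in the two amplitudes $\Gamma_c$ and $\Gamma_c b_\chi$, so the $\chi^2$ surface can be scanned with ordinary least squares. A toy Python version, on synthetic data with known exponents (our illustration only), shows the structure of the scan:

```python
import numpy as np

def chi2_fixed_exponents(L, chi_data, g_over_nu, omega):
    """Residual chi^2 of the fit chi(L) = Gamma * L**(g/nu) * (1 + b * L**-omega).
    With the exponents held fixed, the model is linear in the two amplitudes
    (Gamma, Gamma*b), so an ordinary least-squares solve suffices."""
    A = np.column_stack([L**g_over_nu, L**(g_over_nu - omega)])
    _, res, _, _ = np.linalg.lstsq(A, chi_data, rcond=None)
    return float(res[0]) if res.size else 0.0

# Synthetic data generated with known exponents (gamma/nu = 1.49, omega = 1).
L = np.array([25., 30., 35., 40., 50., 64., 96.])
chi = 0.2 * L**1.49 * (1.0 + 2.0 * L**-1.0)

# Scan chi^2 over a grid of fixed exponents, as in the figure.
grid = [(g, w, chi2_fixed_exponents(L, chi, g, w))
        for g in np.linspace(1.25, 1.75, 26)   # step 0.02, includes 1.49
        for w in np.linspace(0.5, 5.0, 10)]    # step 0.5, includes 1.0
g_best, w_best, _ = min(grid, key=lambda t: t[2])
print(g_best, w_best)
```

On clean synthetic data the grid minimum recovers the input exponents; the flat valley discussed below arises because, on real data, very different $\omega$ values yield nearly equal residuals.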
There is a clear valley which confirms that $\gamma/\nu$ is close to 1.5, but the valley is so \begin{figure}[t] \epsfysize=6.3cm \begin{center} \mbox{\epsfbox{Chi2ChiBis.ps}} \end{center} \caption{\small Plot of the $\chi^2$ deduced from linear fits of $\overline\chi_{\rm max}(L)= \Gamma_c L^{\gamma/\nu}(1+b_\chi L^{-\omega})$ in the range $25\leq L\leq 96$ for $p=0.56$. The exponents are treated as fixed parameters and the amplitudes are free. The base plane gives the ranges of variation of the exponents: $1.25\leq\gamma/\nu\leq 1.75$ and $0\leq\omega\leq 5$. The absolute minimum is at $\gamma/\nu=1.49$, $\omega=3.88$, but the valley is extremely flat in the $\omega$-direction. A cutoff at $\chi^2=50$ has been introduced in order to improve the clarity of the figure. } \label{FigChi2} \end{figure} flat in the $\omega$-direction that there is no clear minimum to give a reliable estimate of the corrections-to-scaling exponent. The same procedure for $\beta/\nu$ is illustrated in the next figure (Fig.~\ref{FigChi2M}). Again, there is no way to get a mutually compatible correction-to-scaling exponent from the three fits, but the leading exponents are indeed close to $\beta/\nu=0.71$ (and $1/\nu=1.35$). Of course the minima of $\chi^2$ do not exactly coincide with the data presented in the table, which should correspond to $\omega\to\infty$. \begin{figure} \epsfysize=6.3cm \begin{center} \mbox{\epsfbox{Chi2MBis.ps}} \end{center} \caption{\small Plot of the $\chi^2$ deduced from linear fits of $\overline m$ (the exponent is thus negative) in the range $25\leq L\leq 96$ for $p=0.56$. In the base plane, the range of variation of the exponents is $-1\leq -\beta/\nu\leq -0.5$ and $0\leq\omega\leq 5$, and the minimum is at $\beta/\nu=0.85$, $\omega=0.135$.} \label{FigChi2M} \end{figure} \section{Conclusion\label{sec7}} We studied the three-dimensional bond-diluted 4-state Potts model by large-scale Monte Carlo simulations. The pure system undergoes a strong first-order phase transition. 
The numerical estimates of the dynamical exponent $z$ and of the interface tension give evidence for the existence of a disorder-induced tricritical point for bond dilutions between $p=0.68$ and $p=0.84$, below which the transition is softened to second order. Very strong crossover corrections are observed up to lattice sizes $L\simeq 30$--$40$. The regime of the random fixed point is best observed for the bond concentration $p=0.56$. From the values of the ratios of exponents measured at that concentration, \begin{eqnarray} & \gamma/\nu & =1.535(30),\\ & \beta/\nu & =0.732(24),\\ & 1/\nu & =1.339(25), \end{eqnarray} the following estimates of the critical exponents are derived: \begin{eqnarray} & \gamma & =1.146(44),\\ & \beta & =0.547(28),\\ & \nu & =0.747(14). \end{eqnarray} Let us mention that these exponents are in reasonably good agreement with recent star-graph high-temperature expansions~\cite{HellmundJanke02} of this model which give $\gamma=1.00(3)$. The value of $\nu$ safely satisfies the bound $\nu\ge 2/D = 0.6666\dots$ required for the stability of the random fixed point. In the random fixed point regime, we are unable to extract from the numerical data any reliable correction-to-scaling exponent (linked to the possible appearance of irrelevant scaling fields), even though it is clear that such corrections cannot be ignored. In some sense, the outcome of this time-consuming work is disappointing: we were not able to reach the asymptotic regime where the exponents in the second-order regime of the phase diagram become dilution-independent, the corrections to scaling are too strong, and the tricritical point was not located with precision. We believe that this is due to the extreme difficulty of the problem and not to an ill-suited approach. Perhaps we were too ambitious, but we have the feeling that the final values given for the critical exponents are reliable enough and should not be contradicted in the future by similar studies. 
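The conversion from the measured exponent ratios to the individual exponents quoted above can be checked in a few lines of Python. This is our own check; the one assumption is that relative uncertainties are added linearly (a conservative choice for correlated data), which reproduces the quoted error bars:

```python
# Exponent ratios measured at p = 0.56 (values quoted in the text).
g_nu,  dg = 1.535, 0.030   # gamma/nu
b_nu,  db = 0.732, 0.024   # beta/nu
inv_n, di = 1.339, 0.025   # 1/nu

# nu = 1/(1/nu); gamma = (gamma/nu)*nu; beta = (beta/nu)*nu.
# Relative uncertainties are added linearly (assumption, see lead-in).
nu    = 1.0 / inv_n
d_nu  = nu * di / inv_n
gamma = g_nu * nu
d_gam = gamma * (dg / g_nu + di / inv_n)
beta  = b_nu * nu
d_bet = beta * (db / b_nu + di / inv_n)

print(f"nu    = {nu:.3f}({d_nu*1e3:.0f})")     # 0.747(14)
print(f"gamma = {gamma:.3f}({d_gam*1e3:.0f})") # 1.146(44)
print(f"beta  = {beta:.3f}({d_bet*1e3:.0f})")  # 0.547(28)
```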
\section*{Acknowledgements} This work was partially supported by the PROCOPE exchange programme of DAAD and EGIDE, the EU-Network HPRN-CT-1999-000161 ``EUROGRID: {\sl Discrete Random Geometries: From solid state physics to quantum gravity}", the DFG, and the German-Israel-Foundation (GIF) under grant No.\ I-653-181.14/1999. We gratefully acknowledge the computer-time grants hlz061 of NIC, J\"ulich, h0611 of LRZ, M\"unchen, 062 0011 of CINES, Montpellier and 2000007 of CRIHAN, Rouen, which were essential for this project. The authors gratefully thank Ian Campbell for a critical reading of the preprint which helped to improve it.
\section{Introduction} The importance of the recent direct detection by the LIGO-VIRGO collaboration~\cite{Abbott:2016blz,TheLIGOScientific:2016qqj,Abbott:2016nmj} of gravitational waves emitted from the merger of two black holes can hardly be over-emphasized. It confirms one of the most beautiful predictions of General Relativity (GR), the existence of gravitational waves; it detects the presence and the coalescence of two black holes, another remarkable prediction of the same theory; and, finally, it provides mankind with a new set of eyes to look at the cosmos in a way never possible before. These detection events were a first taste of the incredible vista before us. It is clear that the availability of gravitational wave observatories will enable us to learn a great deal about the physics of compact objects and, more generally, the whole of astrophysics. However, it is unclear to what extent they will allow us to increase our knowledge of the fundamental laws of physics~\footnote{Here, by fundamental laws, we mean those laws that describe the most basic phenomena, and from which, at least in principle and possibly at the cost of great complexity, all phenomena can be derived. In this sense, usual astrophysical laws are not fundamental laws of physics, even though astrophysics is a fundamentally important discipline.}. One possible scenario in which they could provide some insight is the case of additional light particles that interact only weakly with the standard model sector, such as the axion. In this case, the large gravitational field in the proximity of black holes and their rapid rotation can source the clustering of large numbers of these light particles, which in turn can have observational consequences for the dynamics of the black holes and their associated gravitational wave emission~\cite{Arvanitaki:2010sy,Arvanitaki:2014wva,Arvanitaki:2016qwi}. 
It is a fascinating possibility for a variety of reasons and has led to a flourishing body of work in the literature. Here however, we would like to answer the following question: can the new observational window offered by gravitational wave astronomy teach us something about the nature of gravity? We will not be able to answer this question in full generality. However, we will be able to do so if we restrict ourselves to the case in which the modification of GR is associated with states that are heavier than the curvature scale of the compact objects responsible for the emission of the gravitational waves. Even with our assumptions, this is quite a general scenario, and consequently our statement might appear over-ambitious. Our confidence stems from the fact that we will use a technique known as Effective Field Theory (EFT) that allows us to construct a Lagrangian and the associated equations of motion that encode the {\it most general} extension to GR of the kind we just described, {\it i.e.} where the new states are heavier than the curvature scale of the compact objects. We will construct this EFT in Sec.~\ref{sec:action}, and we will find that it takes the remarkably simple (given its generality) form \begin{equation} S_{\rm eff}=2M_{\rm pl}^2\int { d}^4 x \sqrt{-g}\ \left(-R + \frac{\mathcal{C}^2}{\Lambda^6} + \frac{\tilde{\mathcal{C }}^2}{\tilde{\Lambda}^6}+ \frac{\tilde{\mathcal{C }}\mathcal{C}}{\Lambda_-^6}+\ldots\right)\label{eq:actionintro} \end{equation} where \begin{equation} \mathcal{C} \equiv R_{\alpha \beta \gamma \delta} R^{\alpha \beta \gamma \delta},\quad \tilde{\mathcal{C }} \equiv R_{\alpha \beta \gamma \delta}\, \epsilon^{\alpha\beta}{}_{\mu\nu}\,{R}^{\mu \nu \gamma \delta}\ , \end{equation} and $\ldots$ stands for terms that give subleading contributions~\footnote{We also consider the case in which the leading extension to GR is represented by three powers of the Riemann tensor, rather than by four. 
The most general action is given later in~(\ref{eq:sixderivative}). For reasons on which we elaborate later, the case of the action~(\ref{eq:sixderivative}) appears to be disfavored from the UV point of view, and therefore does not represent the main focus of our discussion. All the observational effects that we discuss as resulting from the action~(\ref{eq:actionintro}) apply to the case of~(\ref{eq:sixderivative}) as well, with the replacement $(\Lambda r)^6\to (\Lambda r)^4$, where~$\Lambda$ is the typical scale suppressing the leading operators in the two effective actions.}. This EFT is very general. For example, it describes the extension to GR by string theory at energies below the string scale. Of course, in order for the effect to be measurable by experiments such as LIGO, we need to ensure that at least one of the scales $\Lambda,\;\tilde\Lambda$ and $\Lambda_-$ is not too much larger than the curvature scale of the compact objects themselves, which means that, crudely, we need to take the $\Lambda$'s $\sim {{\cal O}}\left({\rm km}^{-1}\right)$. This is a challenging scale for two reasons. First, on the theory side, our theoretical prejudice is that GR should not be modified at such low energies. But, if the scales $\Lambda$, $\tilde\Lambda$ and $\Lambda_-$ were to be much greater than~${{\cal O}}\left({\rm km}^{-1}\right)$, there simply would be no signatures of UV modifications of GR accessible to gravitational wave astronomy and similar astrophysical probes. Therefore, with some apologies, we happily put aside our prejudices in favor of empirical verification, especially now that such measurements are actually possible. Second, a more serious concern is that we have already probed gravity at scales much shorter than ${\rm km}$. How can we be sure such low values of the $\Lambda$'s are not already ruled out? 
The crucial difference between laboratory experiments and compact object observations is the size of the curvature tensor, which is much larger in the astrophysical setting. This allows us to argue in the main text that we can {\it assume} the following: there is a UV completion such that, {\it whenever} the Riemann tensor is sufficiently small, the modifications to GR are small even at length scales shorter than $\Lambda^{-1}$, $\tilde\Lambda^{-1}$ and $\Lambda_-^{-1}$, as happens in laboratory experiments. Given our set of assumptions, we use the EFT in (\ref{eq:actionintro}) to compute the observable consequences of UV modifications of GR in the gravitational wave emission from compact objects. We will perform our analysis in Secs.~\ref{sec:techsummary}, \ref{sec:potential} and \ref{sec:radiation}, where we will focus on describing the modifications of the signal in the post-Newtonian regime. We will find that the main effects of the operators $\mathcal{C}^2$, $ \tilde{\mathcal{C }}^2$ and $\mathcal{C}\tilde{\mathcal{C }}$ are to rescale the amplitude and frequency of the emitted gravitational waves. We describe these findings from a phenomenological point of view in Sec.~\ref{sec:observeligo}. Explicitly, for the operator $\mathcal{C}^2$, for a binary of objects of equal mass $m$, in a quasi-circular orbit of radius $r$ and relative velocity $v$, we find \begin{eqnarray}\label{result1} &&\frac{\left[\Delta h^{TT}(t,\vec x)\right]_\Lambda}{h^{TT}}\sim\frac{\Delta\omega_{\Lambda}}{\omega_{PN}}\sim \frac{v^4}{(\Lambda r)^6} \; , \end{eqnarray} where $h^{TT}$ is the strain (or amplitude) of the gravitational wave with frequency $\omega$ produced by the source incident upon the detector and $\left[\Delta h^{TT}\right]_\Lambda$ is the contribution to the strain generated by the $\mathcal{C}^2$ term. Notice that the effect in~(\ref{result1}) depends on two independent parameters, $v$ and $1/(\Lambda r)$, both of which need to be smaller than one in the observable region. 
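To get a rough feel for the numbers in~(\ref{result1}), one can tabulate the fractional correction for a few parameter choices. This is our own illustration; the values of $v$ and $\Lambda r$ are arbitrary and the order-one prefactor is ignored:

```python
# Fractional correction dh/h ~ v^4 / (Lambda * r)^6, dropping the O(1)
# prefactor.  The values of v (in units of c) and Lambda*r are arbitrary,
# chosen only to show how steeply the effect falls off with Lambda*r.
for v in (0.1, 0.3):
    for lam_r in (1.0, 1.5, 3.0):
        print(f"v = {v}, Lambda*r = {lam_r}: dh/h ~ {v**4 / lam_r**6:.1e}")
```

The steep $(\Lambda r)^{-6}$ falloff is why only scales not much above the inverse size of the compact objects are observable.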
However, for $(\Lambda r)\sim 1$, the effect can be as large as $v^4$, i.e. second Post-Newtonian order (2PN), signaling that the effect is potentially observable. The effect of the operators $ \tilde{\mathcal{C }}^2$ and ${\mathcal{C }}\tilde{\mathcal{C }}$ is similar, differing only in the suppression in powers of the velocity or in the polarization of the emitted signal. We notice that, even after the inclusion of all the numerical factors, the signal can be rather large, and in fact the detection of the recent merger events can probably already put some interesting constraints on the $\Lambda$'s. In Sec.~\ref{sec:otherexp} we describe bounds from other experiments, which we find to be mild, apart from those from light X-ray binaries, which can potentially give interesting constraints. In Sec.~\ref{sec:conclusions}, we conclude by summarizing our findings and discussing future directions. Before we begin a deeper study of our EFT, it is worth spending some additional words elucidating how our approach differs from others already present in the literature. Since the first observation of gravitational waves, there has been a vast number of publications related to tests of GR with compact-object mergers, and it is impossible to review them fairly in this short introduction. We would like, however, to compare our approach to that used by the LIGO-VIRGO collaboration in~\cite{TheLIGOScientific:2016src}. In this analysis the first few post-Newtonian~(PN) coefficients were allowed to deviate from the calculated GR values, thus changing the waveform of emitted gravitational waves. In this approach it is unfortunately very hard to see which variation of the parameters corresponds to theories respecting physical principles like locality, Lorentz invariance or the equivalence principle, and which do not. More concretely, it is difficult to track which physical principles we have to give up in order to produce one variation or the other of the PN coefficients. 
Our approach, on the other hand, is automatically in agreement with the principles of locality, diffeomorphism invariance, etc. Even more strikingly, even though our proposed modification of GR has far fewer free parameters, it cannot be captured by the analysis of \cite{TheLIGOScientific:2016src}. The reason is that the prediction from our EFT corresponds to giving some very specific radial and time dependence of the PN parameters, in the form of factors of $1/(\Lambda r)^6$ in~(\ref{result1}). In order to be able to cover this signal with the phenomenological analysis of~\cite{TheLIGOScientific:2016src} one would need to give the PN parameters some time and radial dependence whose form, without guidance from an EFT, would be arbitrary, which would probably severely weaken the constraining power of the analysis. An approach closer to ours was used in \cite{Yunes:2016jcc}. In that paper the authors studied how theoretically motivated modifications to GR can be constrained by observed BH mergers. However, the focus was on theories containing extra light degrees of freedom, while none of the UV modifications captured by our EFT were discussed. In this sense our results are complementary to those of \cite{Yunes:2016jcc}. Theories with extra light particles predict significant changes in the waveforms due to the existence of new emission channels. However, it turns out to be especially difficult to compute predictions for theories that contain additional light particles in the regime of strong gravity~\cite{Yunes:2016jcc}. Furthermore, in the presence of additional light degrees of freedom, one would need to work out a different prediction for each corresponding theory. 
One advantage of our EFT approach is that with just a couple of parameters all corrections to the merger process are well defined (to a given precision), and even though in the present paper we restrict to the perturbative phase of the merger, in principle the full numerical study of the corrections in the non-linear regime can be performed, as we outline in Section~\ref{sec:numerical}. \section{General Construction of the Effective Field Theory\label{sec:action}} It is common lore that, given a fixed set of light degrees of freedom, at low energies any possible effect of ultraviolet physics can be parametrized by a set of local interactions involving these light degrees of freedom only~\footnote{Recently, some investigations in the context of string theory have given indications that this theory, in the presence of backgrounds with horizons, might induce non-local effects at scales much longer than the string scale~\cite{Dodelson:2015toa}. Such phenomena would require a different description than the one we develop here, which crucially assumes locality. It would be interesting to extend our analysis to include these non-local effects. It is tempting to say that the leading effects will come from modifying the effective black-hole finite size operators, which we discuss at the end of Sec.~\ref{sec:observeligo}. However, we leave a study of this to future work.}. If one is interested in computing a physical observable to a given precision, this set is always finite. This approach to parametrizing physical observables is called Effective Field Theory (see for example~\cite{Weinberg:1995mt}).
We are interested here in the most general theory that can be tested by precision experiments measuring gravitational waves produced by mergers of compact astrophysical objects, that involves a single light degree of freedom - the graviton~\footnote{Additional light degrees of freedom can be included in a straightforward way; however, we will refrain from doing so here and leave it to future work.} - and that is not already excluded by any other experiments. The most convenient way of classifying such theories is the EFT approach. A crucial ingredient of all EFTs is an energy scale $\Lambda_c$, usually referred to as a ``cutoff''. The cutoff defines what was meant by ``low energies'' above. At energies higher than the cutoff, the EFT loses its validity and the knowledge of ultraviolet physics, which often involves some new degrees of freedom, is necessary to do the computation. On the other hand, at energies below the cutoff, the EFT is not only universal but also always under perturbative control, with expansion parameter $E/\Lambda_c$. For this reason it is convenient to organize the terms in the effective action in order of increasing number of derivatives and fields. First of all, let us note that in spite of several intriguing mysteries associated with gravity, at low energies it is nothing more than another EFT, and importantly its cutoff scale does not have to be parametrically close to $M_{\rm pl}$. Some new physical effects can in principle appear at a much lower scale. In order to produce observable and calculable consequences for mergers of compact objects, this scale has to be close to or below the characteristic curvatures of the space-time near these objects, which, for stellar mass black holes and neutron stars, is of order a few inverse kilometers.
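As a quick numerical illustration of this last estimate (assuming a representative $10\,M_\odot$ black hole; the input values are rounded SI numbers, not precise data):

```python
# Quick check of the "few inverse kilometers" curvature scale quoted above,
# for an illustrative 10 solar-mass black hole (rounded SI values)
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m / s
M_sun = 1.989e30     # kg

r_s = 2 * G * (10 * M_sun) / c**2   # Schwarzschild radius in meters
print(r_s / 1e3)     # ~30 km, so the curvature scale is ~(30 km)^(-1)
```

Heavier black holes have proportionally larger Schwarzschild radii and hence smaller horizon curvatures, so stellar-mass objects are the relevant ones for a scale of a few inverse kilometers.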
One may immediately object that we are already running into a contradiction: gravity has been tested to high precision at distances much smaller than a few kilometers and hence any modifications that we are discussing are by far excluded. We will show, however, that this objection is too quick. Under some broad assumptions about the UV completion at the scale $\Lambda_c$, modifications of gravity can be unobservably small unless the scale of the space-time curvature happens to be close to $\Lambda_c$. Since compact astrophysical objects like black holes and neutron stars are the only known sources of large curvatures, it is consistent (though not necessary) for new physics to significantly affect a merger process while keeping all ``weak field'' processes practically intact. Effective field theories are subject to a set of consistency conditions. A very important one is radiative stability. It implies that if one can construct a Feynman diagram containing a UV divergence proportional to some local operator, this operator generically has to be included in the action with a coefficient at least as large as that given by the diagram with loop momentum integrals cut off at $\Lambda_c$. There are other, more subtle though very reasonable, constraints on effective field theories that usually involve additional, even more essential, assumptions, such as locality and causality~\cite{Adams:2006sv, Gruzinov:2006ie, Camanho:2014apa}. We will review the relevant ones below and will be specific about which extra assumptions are involved. In our case there will be further constraints imposed by the ``testability'' requirement, which we will discuss later. In order to optimize the set of terms present at each order in the effective action, it is convenient to include only those operators that do not vanish on the equations of motion produced by the lower order action. For a reader not familiar with the EFT approach it could be instructive to consider a simple example.
Consider the following action for a single scalar field: \begin{equation}\label{eq:example1} \int d^4x\;\left( \frac{1}{2}\phi \square \phi+\frac{\square\phi \phi^2}{\Lambda_c}\right). \end{equation} Naively there is an interaction at linear order in $1/\Lambda_c$; however, for the type of observable we are interested in, instead of using the field $\phi$, one can equivalently use the field $\phi'$ given by $\phi=\phi'-\phi'^2/\Lambda_c$, in terms of which the action only contains operators suppressed by $\Lambda_c^2$. This means that all physical effects in this theory will be suppressed at least by the second power of our cutoff scale\footnote{In fact in this simple example these interactions can be further redefined away and the leading physical interaction appears at even higher orders.} and we could have started classifying operators beginning from dimension 6. In the case of (pure) gravity the leading equations of motion are the vacuum Einstein equations \begin{equation} R_{\mu\nu}-\frac{1}{2}R g_{\mu\nu}=0\ , \end{equation} from which it follows that both $R$ and $R_{\mu\nu}$ vanish on the leading equations of motion, and hence one can consider only operators constructed from the Riemann tensor $R_{\mu\nu\rho\sigma}$. We are now ready to start constructing the effective action. This amounts to writing down, in a power law expansion, all the terms that are allowed by the symmetries of the problem, which in our case are operators built out of the Riemann tensor. Let us start classifying them. We will begin with operators that only involve the gravitational field and discuss possible mixed gravity-matter operators later on. At the level of two derivatives there is a single term allowed by diffeomorphism invariance, the usual Einstein-Hilbert term.
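The statement that the field redefinition removes the $1/\Lambda_c$ interaction in~(\ref{eq:example1}) can be verified explicitly. The following is a minimal symbolic sketch in a one-dimensional analogue (with $\square\to\partial_x^2$); it checks that the $O(1/\Lambda_c)$ piece of the transformed Lagrangian is a total derivative and hence does not contribute to the action:

```python
import sympy as sp

x, Lam = sp.symbols('x Lambda_c')
g = sp.Function('g')(x)  # the redefined field phi'

def box(f):              # 1d stand-in for the d'Alembertian
    return sp.diff(f, x, 2)

phi = g - g**2 / Lam     # field redefinition phi = phi' - phi'^2/Lambda_c
L = sp.Rational(1, 2) * phi * box(phi) + box(phi) * phi**2 / Lam

# Coefficient of 1/Lambda_c in the transformed Lagrangian
first_order = L.expand().coeff(Lam, -1)

# It equals d/dx(-g^2 g'/2), i.e. a total derivative
total_derivative = sp.diff(-g**2 * sp.diff(g, x) / 2, x)
print(sp.simplify(first_order - total_derivative))  # -> 0
```

The same manipulation in four dimensions goes through verbatim, with the integrations by parts replaced by their covariant counterparts.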
At the level of four derivatives there is also a single term one can write: \begin{equation} \int d^4 x\,\sqrt{-g}\; R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\ , \end{equation} on top of the usual $\int d^4x\; \sqrt{-g} \epsilon_{\mu\nu\rho\sigma} R^{\mu\nu\alpha\beta} R_{\alpha\beta}{}^{\rho\sigma}$, which is a total derivative. However, in four dimensions, after integration by parts, the former operator can be reduced to terms involving $R$ and $R_{\mu\nu}$, and hence can be ignored according to the discussion above. Indeed, the Euler density $E_4$: \begin{equation} E_4=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2\ , \end{equation} is a total derivative. As we show in section~\ref{sec:classification} there are two independent terms involving six derivatives, one parity even and one parity odd, that can be chosen in the following form:~\footnote{To define the second invariant we used $\tilde{R}^{\alpha \beta \gamma \delta}=\epsilon^{\alpha\beta}{}_{\mu\nu}R^{\mu\nu\gamma\delta}$. We define $\epsilon$ in such a way that $\epsilon^{0123}=1/\sqrt{-g}$.} \begin{equation}\label{eq:sixderivative} S_{\rm eff}=2 M_{\rm pl}^2\int d^4 x\; \sqrt{-g}\;\left[- R+c_3 \frac{ R_{\mu\nu\rho\sigma} R^{\mu\nu}\,_{\alpha\beta}R^{\alpha\beta\rho\sigma}}{\Lambda^4}+\tilde{c}_3 \frac{ \tilde{R}_{\mu\nu\rho\sigma} R^{\mu\nu}\,_{\alpha\beta}R^{\alpha\beta\rho\sigma}}{\Lambda^4}\right]\ . \end{equation} As we discussed, we are interested only in theories that can be tested with gravitational wave astronomy. We call this requirement the ``Testability'' requirement. In order to satisfy this requirement, $\Lambda$ has to be picked of order a few inverse kilometers, if $c_3$ and $\tilde c_3$ are taken to be order one. If radiative stability were the only constraint, the two six-derivative operators would be perfect candidates for the leading corrections to General Relativity in four dimensions.
However, a recent argument~\cite{Camanho:2014apa} that we briefly review in Sec.~\ref{sec:causality} shows that if $c_3$ or $\tilde{c}_3$ are non-vanishing, and under certain assumptions about the UV completion, causality would require an infinite tower of higher spin particles coupled to standard model fields with gravitational strength. The mass of the lightest of those particles has to be of order $\Lambda$ and the couplings such as to allow mediation of long range forces of gravitational strength between any matter fields. Obviously on sub-kilometer distances we have not observed any additional long range forces, and hence the $c_3$ and $\tilde c_3$ terms must be suppressed by a much higher scale. We warn the reader, however, that the argument of~\cite{Camanho:2014apa} appears to assume that the UV completion of (\ref{eq:sixderivative}) enters at tree level. It could be that the argument of~\cite{Camanho:2014apa} can be extended to include all possible UV completions. However, at the moment we do not have such a proof, and we cautiously conclude that the theory in~(\ref{eq:sixderivative}) can still be considered as a viable theory, and we briefly discuss its phenomenological consequences in Sec.~\ref{sec:numerical} and~\ref{sec:techsummary}. In the rest of the paper we therefore concentrate mostly on the eight derivative terms.
In four dimensions, as shown in Sec.~\ref{sec:classification}, there are three possible terms that we can add to the action: \begin{equation} S _{\rm eff}=\int { d}^4 x \sqrt{-g} 2 M_{\rm pl}^2 \left(-R + \frac{\mathcal{C}^2}{\Lambda^6} + \frac{\tilde{\mathcal{C }}^2}{\tilde{\Lambda}^6}+ \frac{\tilde{\mathcal{C }}\mathcal{C}}{\Lambda_-^6}+\ldots\right)\label{eq:action} \end{equation} where \begin{equation} \mathcal{C} \equiv R_{\alpha \beta \gamma \delta} R^{\alpha \beta \gamma \delta},\quad \tilde{\mathcal{C }} \equiv R_{\alpha \beta \gamma \delta} \tilde{R}^{\alpha \beta \gamma \delta}\ , \end{equation} and $\ldots$ refers to higher order terms. The coefficients of the parity even terms have to be positive due to causality~\cite{Gruzinov:2006ie} and analyticity~\cite{Bellazzini:2015cra} constraints. We keep in mind that these arguments can be subtle for gravity, but taking negative coefficients would not change significantly any part of our results. Furthermore, as we show in Sec.~\ref{sec:causality}, the argument of~\cite{Gruzinov:2006ie} can be easily extended to incorporate the parity odd term, which results in the following constraint: \begin{equation} \Lambda_-^2\gtrsim\Lambda \tilde{\Lambda}\ . \end{equation} Together with power-counting and symmetry arguments presented in Sec.~\ref{sec:observeligo}, this implies that the parity odd term can give the leading contribution to the physical observables only in some rather small region of parameter space~\footnote{Saturating the subluminality bound, the parity odd term can give the leading signal in the parametric window $v^2\lesssim \frac{\tilde\Lambda^6}{\Lambda^6}\lesssim v$.}. However, we still keep this term for generality. We also keep in mind that extensions of GR are very strongly constrained, and it could well be that our extension in (\ref{eq:action}) is incompatible with some general physical principle even within the parameter range that we identified.
Still, to the best of our knowledge, such an argument has not yet been presented. We therefore proceed. In the rest of the paper we are going to analyze the implications of the theory given by the action~(\ref{eq:action}), with all higher terms neglected (as they give a subleading effect), for the phenomenology of astrophysical compact objects. Before we go on, however, let us look a little more in detail at the form of the higher order terms in~(\ref{eq:action}). Schematically we can write \begin{equation} S _{\rm eff}=\int { d}^4 x \sqrt{-g} \, 2 M_{\rm pl}^2 \left(-R + \frac{\mathcal{C}^2}{\Lambda^6} + \frac{\tilde{\mathcal{C }}^2}{\tilde{\Lambda}^6}+ \frac{\tilde{\mathcal{C }}\mathcal{C}}{\Lambda_-^6}+c_{mn} \Lambda_R^2\sum_m\sum_n \left(\frac{\nabla_\gamma}{\Lambda_c}\right)^m \left(\frac{R_{\mu\nu\rho\sigma}}{\Lambda_R^2}\right)^n\right), \label{eq:actionfull} \end{equation} where the $c_{mn}$'s are dimensionless constants. Notice that $\Lambda_c$ is the cutoff of the theory, {\it i.e.} the scale at which new states are expected to become relevant. Therefore, when computing loop corrections, we can Taylor expand in external momenta much less than $\Lambda_c$, which implies that, in the effective theory, the scale suppressing the derivatives is indeed the cutoff. Instead, we have suppressed powers of the Riemann tensor by a so-far arbitrary scale $\Lambda_R$. In the schematic writing of (\ref{eq:actionfull}), it is implied that, at each order, there are a few terms for a given $m$ and $n$, suppressed by the same combination of scales. The scales $\Lambda$, $\tilde\Lambda$ and $\Lambda_-$ are in principle independent, but for now it is convenient to keep them parametrically the same and of order a few inverse kilometers, so that the corresponding operators give sizable yet perturbative corrections to the Einstein equations in black hole backgrounds.
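To see explicitly how large these operators are on a black hole background, one can evaluate $\mathcal{C}$ on the Schwarzschild solution. A small symbolic sketch (in units $G=c=1$; the computation itself is standard) reproduces $\mathcal{C}=R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}=48M^2/r^6$, which makes the $1/(\Lambda r)^6$ scaling of the corrections manifest:

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)  # Schwarzschild, G=c=1
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.cancel(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                     + sp.diff(g[d, c], x[b])
                                     - sp.diff(g[b, c], x[d]))
                       for d in range(n)) / 2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def riem(a, b, c, d):
    expr = sp.diff(Gam[a][b][d], x[c]) - sp.diff(Gam[a][b][c], x[d])
    expr += sum(Gam[a][c][e] * Gam[e][b][d] - Gam[a][d][e] * Gam[e][b][c]
                for e in range(n))
    return sp.cancel(expr)

R_up = [[[[riem(a, b, c, d) for d in range(n)] for c in range(n)]
         for b in range(n)] for a in range(n)]

# Vacuum check: Ricci tensor R_{bd} = R^a_{bad} vanishes
assert all(sp.simplify(sum(R_up[a][b][a][d] for a in range(n))) == 0
           for b in range(n) for d in range(n))

# C = R_{abcd} R^{abcd}; for a diagonal metric the index raising collapses
K = sum(g[a, a] * ginv[b, b] * ginv[c, c] * ginv[d, d] * R_up[a][b][c][d]**2
        for a in range(n) for b in range(n) for c in range(n) for d in range(n))
print(sp.simplify(K))  # 48*M**2/r**6
```

Since $r_s=2M$ in these units, $\mathcal{C}\sim r_s^2/r^6$, so the quartic operators contribute $\mathcal{C}^2/\Lambda^6\sim r_s^4/(\Lambda^6 r^{12})$, largest near the horizon of the lightest black holes.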
Obviously, in order to be able to use our effective theory for describing black hole mergers, when distances and hence gradients of fields are of the order of the curvatures, it is necessary to keep \begin{equation} \Lambda_c\gtrsim\Lambda\ . \label{ineq1} \end{equation} If we focus on terms with $m=0$ and $n$ large, it also becomes clear that in order for the theory to remain perturbative for $R\sim \Lambda^2$ it is necessary to keep \begin{equation} \Lambda_R\gtrsim\Lambda\ , \label{ineq2} \end{equation} because otherwise terms with infinitely many powers of the Riemann tensor will start to dominate before our quartic terms can produce a detectable correction. Let us now see which constraints are imposed by the criterion of radiative stability. To do so we calculate the one-loop diagram drawn in Fig.~\ref{fig:operator1} for a large number of vertices $n$ and cut off the internal loop momentum at the cutoff $\Lambda_c$. For large $n$ we generate the following correction to the effective action: \begin{equation} \frac{(R_{\mu\nu\rho\sigma})^{2n}\Lambda_c^{2n}}{\Lambda^{4n} }\ . \end{equation} Radiative stability of the action then requires \begin{equation} \frac{\Lambda_c}{\Lambda^{2}} \lesssim \frac{1}{\Lambda_R}\ . \end{equation} Combining this requirement with (\ref{ineq1}) and (\ref{ineq2}), we are forced to set all three scales approximately equal to each other: \begin{equation} \Lambda_c\approx\Lambda_R\approx\Lambda\ . \label{scales} \end{equation} Let us note that this result, albeit natural, was not immediately obvious. For example, the unitarity bound associated with the growth of the tree-level scattering around flat space produced by our quartic vertices would require $\Lambda_c^4<\Lambda^3M_{\rm pl}$, while the scale suppressing powers of fields in EFTs stemming from weakly coupled UV completions is usually $\Lambda_c/g$, where $g$ is some combination of coupling constants.
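These scale relations can be checked numerically. The sketch below assumes $\Lambda\sim(1\,{\rm km})^{-1}$ and uses the conversion $\hbar c\simeq 1.97\times10^{-16}\,{\rm GeV\cdot m}$; it confirms that $\sqrt{\Lambda M_{\rm pl}}$ is of order a GeV (a scale that will appear below) and that the unitarity bound $\Lambda_c^4<\Lambda^3 M_{\rm pl}$ is satisfied with enormous room to spare for $\Lambda_c\approx\Lambda$:

```python
# Rough numerical check of the scales (rounded input values, not precise data)
hbar_c = 1.9733e-16           # GeV * m  (conversion factor)
Lam = hbar_c / 1.0e3          # Lambda ~ (1 km)^(-1) expressed in GeV
M_pl = 1.22e19                # Planck mass in GeV

print(Lam)                    # ~2e-19 GeV
m_hs = (Lam * M_pl)**0.5      # the scale sqrt(Lambda * M_pl)
print(m_hs)                   # ~1.6 GeV

# Unitarity bound Lambda_c^4 < Lambda^3 M_pl, trivially true for Lambda_c ~ Lambda
print(Lam**4 < Lam**3 * M_pl)  # True
```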
\begin{figure}[!ht] \centering \includegraphics[trim = 6cm 14cm 3cm 3cm, clip, width=0.3\textwidth]{operator.pdf} \caption{Radiative generation of the $R_{\mu\nu\rho\sigma}^{2n}$ operator, in this case $R_{\mu\nu\rho\sigma}{}^{8}$, through ${\mathcal{C}}^2$ and $\tilde{\mathcal{C}}^2$. }\label{fig:operator1} \end{figure} Eq.~(\ref{scales}) corresponds to suppressing canonically normalized perturbations of the metric, $h^c_{\mu\nu}$, by $M_{\rm pl}$: \begin{eqnarray}\label{eq:radiative2} &&S _{\rm eff}\supset \int { d}^4 x\; \sqrt{-g} M_{\rm pl}^2\; \left(\frac{\nabla_\gamma}{\Lambda}\right)^m \left(\left(1+\tfrac{h_c}{M_{\rm pl}}+\left(\tfrac{h_c}{M_{\rm pl}}\right)^2+\ldots\right) \frac{\nabla_\alpha\nabla_\beta } {\Lambda^2}\frac{h_c}{ M_{\rm pl}}\right)^n \\ \nonumber &&\qquad \sim\int { d}^4 x\; \sqrt{-g}\; M_{\rm pl}^2\,\Lambda^2 \left(\frac{\nabla_\gamma}{\Lambda}\right)^{2n+m} \left(\frac{h^c}{ M_{\rm pl}}\right)^n \ . \end{eqnarray} Above we noticed that the six derivative operator in $c_3$ in (\ref{eq:sixderivative}) has to be suppressed by a scale much higher than $\Lambda$. We can check with what coefficient this operator could possibly get generated by one-loop diagrams. Since $\Lambda=\Lambda_c$ we get $c_3=\Lambda^2/M_{\rm pl}^2$; consequently, even if we have to introduce new higher spin particles coupled gravitationally to the Standard Model following the argument of~\cite{Camanho:2014apa}, their mass should be of order $\sqrt{\Lambda M_{\rm pl}}\gtrsim{\rm GeV} \sim \frac{10^{15}}{\rm meter}$, which is experimentally perfectly safe. In principle, one could have suppressed the quartic terms in the action as well and started from even higher terms. Such a theory would be radiatively stable; however, we do not pursue it for the following reasons: first, there are no reasons that we are aware of that would forbid the quartic terms; second, in all known UV completions of gravity (i.e.
string theory) quartic terms do get generated; finally, any modifications of General Relativity coming from such theories would be even harder to observe. At this point, let us come back to our claim that the theory (\ref{eq:action}) with $\Lambda$ of order a few inverse ${\rm km}$ is not excluded by any flat-space or approximately flat-space experiments. Of course experiments of interest are performed at energies much larger than $\Lambda$, hence this claim necessarily depends on the UV completion. Our assumption will be that the UV completion is ``soft'' in the following sense: at energies above $\Lambda$, the vertices suppressed by $\Lambda$ get resolved and become renormalizable, that is, they stop growing with energy. In ordinary quantum field theories such behavior is not at all exotic. In Appendix~\ref{app:example}, we present an example of a UV-complete quantum field theory where the non-renormalizable operators present in the low-energy effective theory get resolved in the ``soft'' way we just specified. We check explicitly that while the leading higher derivative operators give order one corrections to solutions with large values of the background fields, which is what makes them testable with compact objects, corrections to any experiments performed in a near-vacuum state are parametrically small. Of course it is notoriously hard to provide any example of such a UV completion for gravitational theories. In fact there is only one known (this happens to have the name of string theory). Indeed quartic vertices like those in (\ref{eq:action}) are present in low energy string actions, in which the string scale plays the role of $\Lambda$. In that case, the growth of the four-graviton amplitude saturates at this scale and even goes to zero at very high energies~\footnote{As we will mention later, the UV completion of our EFT is not ordinary string theory, because of the way the couplings to matter are affected.}.
This is within the class of behaviors we require for the UV completion of our theory. Not surprisingly, similar behavior is also present in large-$N$ QCD. More explicitly, if we expand the metric around a flat background and introduce a canonically normalized field for the metric perturbations, $g_{\mu\nu}=\eta_{\mu\nu}+h^c_{\mu\nu}/M_{\rm pl}$, then the leading interaction vertex will schematically read \begin{equation} M_{\rm pl}^{-2}\frac{(\partial^2 h^c)^4}{\Lambda^6}. \end{equation} If we now consider some process that includes energy-momentum transfer of order $E\gg\Lambda$, all the $\Lambda$'s that naively stay in the denominator will get cancelled by powers of the cutoff, which is also $\Lambda$, as a consequence of our ``softness'' assumption, while powers of $M_{\rm pl}$ in the denominator will not get compensated by anything bigger than $E$. As a result, all processes involving our vertex, or more precisely, whatever replaces it at energies above $\Lambda$, will have extra powers of $E/M_{\rm pl}$, $h^c/M_{\rm pl}\sim E/M_{\rm pl}$ or $\Lambda/M_{\rm pl}$, as compared to the leading contribution coming from GR, and consequently will be practically unobservable. It is only when the metric $g_{\mu\nu}$ deviates from flat space by order one that we have a chance of obtaining sizable (order-one) corrections, as we can replace the fluctuating $h$ with its vacuum expectation value, which, for black holes, is of order $h^c\sim M_{\rm pl}$. We are now ready to comment on operators that contain not only the metric, but also matter fields. These couplings are strongly constrained by various lab experiments, and so they had better be small in order for our theory not to be ruled out. If one works within the regime of validity of the EFT, one finds that naively matter-matter interactions suppressed by a scale as low as $\sqrt{M_{\rm pl}\Lambda}$ are generated. Taken at face value, these operators are ruled out by collider experiments.
However, the use of these operators at scales above $\Lambda$ is ill-defined, as is made manifest by the presence of a series of higher derivative operators suppressed by the same scale and additional powers of $\partial/\Lambda$, similarly to what we wrote in~(\ref{eq:radiative2}). Our `UV-softness' assumption is indeed crucial in preventing these operators from growing at scales above $\Lambda$ beyond their value at energies of order $\Lambda$, which is negligibly small, of order $(\Lambda/M_{\rm pl})^2$. In a sense, this discussion is almost a repetition of the one we had a couple of paragraphs above. Finally, we need to discuss the special operators that are quadratic in the graviton fields, such as $R^2$. If we do not include them in the action, they are not radiatively generated, because they vanish on shell. However, one might wonder what happens if we were to include them directly in the action, suppressed by a scale $\Lambda$. The fact that they vanish on-shell implies that we can perform several field redefinitions in such a way that these operators get replaced by operators of the form $T^2/(M_{\rm pl}\Lambda)^2$ (similarly to the example in~(\ref{eq:example1})), where $T^2$ stands for scalar functions of the energy momentum tensor $T_{\mu\nu}$, i.e., $(T^{\mu}_{\mu})^2$ and $T^{\mu \nu} T_{\mu \nu}$. At energies of order $\Lambda$, these operators give an order one correction to gravity, which is ruled out. In the case of the higher order operators, such as $\mathcal{C}^2$, we obtained smaller corrections at scales of order $\Lambda$, because those operators were interactions, and so we paid powers of the coupling constant $M_{\rm pl}$. Instead, in this case, $R^2$ is just a kinetic term. This discussion therefore makes it clear that adding these $R^2$-like operators corresponds to simply adding a new degree of freedom with mass of order $\Lambda$, directly coupled to matter with gravitational strength.
It is clear that theories of this kind will be better tested by lab experiments than by gravitational wave observations, as they do not require strong fields. Our purpose, instead, is to write theories that can indeed be tested best by strong gravity experiments, which is what we called the `testability requirement' earlier on. This justifies our not including these operators in the action. \subsection{Classification of operators made of $R_{\mu\nu\rho\sigma}$\label{sec:classification}} In this section, we classify all operators containing up to eight derivatives that do not vanish for $R_{\mu\nu}=0$. The uninterested reader can skip this subsection. First, it is useful to recall the second Bianchi identity: \begin{equation}\label{eq:bianchiII} \nabla_\mu R^{\nu\rho\sigma\gamma}+\nabla_\gamma R^{\nu\rho\mu\sigma}+\nabla_\sigma R^{\nu\rho\gamma\mu}=0\ , \end{equation} which, upon contraction of the $\mu$ and $\nu$ indices and the use of $R^{\nu\mu}=0$, gives \begin{equation} \nabla_\mu R^{\mu\rho\sigma\gamma}=0\ . \label{divR} \end{equation} Since the commutator of two derivatives gives another Riemann tensor, it is a straightforward exercise in integration by parts to rewrite all terms containing two Riemann tensors with extra covariant derivatives acting on them through terms with three Riemanns or more, plus terms vanishing due to (\ref{divR})~\footnote{Let us sketch the proof. The only non-trivial terms are those where the covariant derivatives are contracted among themselves, as otherwise, after integration by parts and commutators, we can use~(\ref{divR}). In this remaining case, one can use (\ref{eq:bianchiII}) to shuffle the derivative so that it is contracted with an index of the Riemann tensor, reducing it therefore to the simple case.}. It is a significantly more complicated task to classify all terms containing three Riemann tensors.
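As a sanity check on the identities used in this classification, both the second Bianchi identity~(\ref{eq:bianchiII}) and the algebraic (first) Bianchi identity can be verified on the linearized Riemann tensor, $R_{\mu\nu\rho\sigma}=\tfrac12(\partial_\nu\partial_\rho h_{\mu\sigma}+\partial_\mu\partial_\sigma h_{\nu\rho}-\partial_\mu\partial_\rho h_{\nu\sigma}-\partial_\nu\partial_\sigma h_{\mu\rho})$, with $\partial_\mu\to p_\mu$ in momentum space (a sketch, not part of the classification proper):

```python
import itertools
import sympy as sp

n = 4
p = sp.symbols('p0:4')                  # partial derivatives -> momenta
H = {}                                  # symmetric perturbation h_{mu nu}
for a in range(n):
    for b in range(a, n):
        H[(a, b)] = H[(b, a)] = sp.Symbol(f'h{a}{b}')

def R(a, b, c, d):
    # Linearized Riemann tensor in momentum space
    return (p[b]*p[c]*H[(a, d)] + p[a]*p[d]*H[(b, c)]
            - p[a]*p[c]*H[(b, d)] - p[b]*p[d]*H[(a, c)]) / 2

for m, nu, rho, s, gam in itertools.product(range(n), repeat=5):
    # First (algebraic) Bianchi identity: cyclic sum over first three indices
    assert sp.expand(R(s, m, nu, rho) + R(nu, s, m, rho)
                     + R(m, nu, s, rho)) == 0
    # Second Bianchi identity, with nabla -> p at linear order
    assert sp.expand(p[m]*R(nu, rho, s, gam) + p[gam]*R(nu, rho, m, s)
                     + p[s]*R(nu, rho, gam, m)) == 0
print('both Bianchi identities hold at linearized level')
```

At the full non-linear level the checks go through as well, with the partial derivatives promoted to covariant ones, but they require the heavier tensor machinery mentioned below.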
Intuitively we expect two independent terms with three Riemanns and no extra derivatives, and we expect to be able to rewrite all terms with extra derivatives through terms containing four Riemann tensors or more. The reason is that there are only two independent on-shell cubic vertices for gravitons in four dimensions, and these vertices correspond to the $c_3$ and $\tilde c_3$ terms in (\ref{eq:sixderivative}). Indeed, the results of \cite{doi:10.1063/1.529470} and \cite{Fulling:1992vm} give exactly these terms in the case of six derivatives. Moving to eight derivatives (and of course three Riemanns), parity even terms were also classified in \cite{Fulling:1992vm}. In four dimensions there is only one independent term, which can be chosen in the following form: \begin{equation} R^{\mu\nu\rho\sigma}\nabla_\rho R^{\gamma\delta\beta}\,_\mu \nabla_\sigma R_{\gamma\delta\beta\nu}. \end{equation} To simplify this term we can integrate $\nabla_\sigma$ by parts: because of (\ref{divR}), the derivative has to act on the second Riemann tensor, but then the two derivatives are anti-symmetrized, so the term is reduced to four Riemann tensors. For three Riemanns, in order to classify the parity odd eight-derivative terms we used the {\it{Invar}} tensor package~\cite{MartinGarcia:2008qz}. In four dimensions there is again only one independent term with three Riemanns, which can be chosen in the following form: \begin{equation} \epsilon_{\mu\nu\rho\sigma} R^{\alpha\beta\mu\nu}\nabla^\rho R_\alpha\,^{\delta\gamma\kappa}\nabla^\sigma R_{\beta\gamma\delta\kappa}.
\end{equation} Now we integrate $\nabla^\sigma$ by parts: if it acts on the second Riemann the derivatives are again anti-symmetrized, while if it acts on the first Riemann we can use the second Bianchi identity (\ref{eq:bianchiII}) on the indices $\sigma$, $\alpha$ and $\beta$ to get a structure proportional to $\epsilon_{\mu\nu\rho\sigma} R^{\sigma\alpha\mu\nu}$, which is zero due to the first Bianchi identity~\footnote{One can contract three indices in (\ref{eq:bianchiI}) with the epsilon tensor, and show that each one of the three terms of the Bianchi identity is proportional to $\epsilon_{\mu\nu\rho\sigma} R^{\sigma\alpha\mu\nu}$.}: \begin{equation}\label{eq:bianchiI} R^{\sigma\alpha\mu\nu}+ R^{\mu\sigma\alpha\nu}+ R^{\alpha\mu\sigma\nu}=0. \end{equation} Parity even four-Riemann terms were classified in \cite{Fulling:1992vm}, while for parity odd terms we can use reference \cite{doi:10.1063/1.529470}. The result is that the terms present in the action (\ref{eq:actionfull}) are the only independent ones, with no extra derivatives~\footnote{The latter reference, which applies to parity odd operators, classified terms up to algebraic equivalence (which means that they consider as dependent those operators that are built out of products of operators that appeared at lower orders); consequently, our $\Lambda_-$ term is not presented as an independent one. In fact, there is no algebraically-independent parity-odd four-Riemann operator, which implies that, if there is a linearly independent one, it must be a product of lower order scalar operators, which have been classified. Therefore, the results of \cite{doi:10.1063/1.529470} are enough to argue that the term we include is the only possible one.}. \subsection{Review of causality constraints on coefficients in the effective action\label{sec:causality}} \subsubsection*{Quartic operators} We begin with a brief summary of the argument of~\cite{Gruzinov:2006ie}.
The authors considered the effect of the quartic Riemann operators (\ref{eq:action}) on the dispersion relation of a graviton propagating in a background with $R_{\mu\nu}=0$. This takes the following form:~\footnote{\cite{Gruzinov:2006ie} did not consider the parity odd term; here we present the easily-generalized results.} \begin{equation}\label{eq:superluminarR4} k^2=\frac{64}{\Lambda^6}\left( S^{\alpha\beta}e_{\alpha\beta}\right)^2+\frac{64}{\tilde\Lambda^6}\left( \tilde S^{\alpha\beta}e_{\alpha\beta}\right)^2+\frac{64}{\Lambda_-^6}S^{\alpha\beta}e_{\alpha\beta} \tilde S^{\mu\nu}e_{\mu\nu}, \end{equation} where $k^\mu$ is the graviton's four-momentum, $e_{\alpha\beta}$ the polarization tensor, and \begin{equation} S_{\mu\nu}=k^\alpha k^\beta R_{\mu\alpha\beta\nu}, \quad \tilde S_{\mu\nu}=k^\alpha k^\beta \tilde R_{\mu\alpha\beta\nu}. \end{equation} By picking different graviton polarizations and backgrounds we first conclude that indeed, in order to avoid superluminal graviton propagation, the first two coefficients must be positive and the coefficient of the parity odd term cannot be very large, namely, \begin{equation}\label{eq:superluminality1} \Lambda^6>0\,,\qquad \tilde\Lambda^6>0\,, \qquad\frac{1}{\Lambda_{-}^{12}}\leq\frac{2}{\Lambda^6\tilde\Lambda^6}. \end{equation} Constraints on the positivity of the parity even terms were obtained from independent arguments involving analyticity of graviton amplitudes in~\cite{Bellazzini:2015cra}. For superluminality to lead to an obvious inconsistency, one should make sure that the time advance resulting from the change in the dispersion relation is larger than the time delay that is present in standard GR. Clearly, in order to trust the equations of motion, the curvature scale $\rho$ cannot be larger than $\Lambda$. In GR, the time delay will generically be proportional to $1/\rho$. For example, for a black hole the time delay is of the order of the Schwarzschild radius.
For $k\lesssim \Lambda$, the time advance does not appear to be generically larger than the GR time delay. Naively, one can try to go to high $k$'s for the graviton, as the right hand side of~(\ref{eq:superluminarR4}) grows as $k^4$. However, for $k\gtrsim \Lambda$, other operators present in our EFT and containing more powers of the Riemann tensor can possibly give a contribution that grows faster than $k^4$. Consequently, we do not seem to be able to guarantee that the overall time advance can beat the GR time delay within the validity of the computation. One can in principle be worried even by graviton propagation faster than in GR; however, forbidding this does not seem to be a necessary requirement, as such a phenomenon already happens in QED~\cite{Drummond:1979pp}. We therefore consider the constraint given in~(\ref{eq:superluminality1}) simply as indicating a somewhat preferred region, but we cautiously suggest that the whole parameter space should be explored. Of course, while the set of inequalities in (\ref{eq:superluminality1}) means that, when they are violated, there is faster-than-GR propagation, we cannot state for sure that, by performing some additional analysis, one could not find that, even in the parametric regime allowed by (\ref{eq:superluminality1}), superluminality is present~\footnote{For example, the analysis of~\cite{Gruzinov:2006ie} is insensitive to the superluminal propagation induced by the cubic operators, as the different analysis of~\cite{Camanho:2014apa} reveals.}. In such a case, the sensible parameter space should be further reduced. \subsubsection*{Cubic operators} Let us now turn to reviewing the constraints on the cubic couplings $c_3$ and $\tilde c_3$ derived in~\cite{Camanho:2014apa}~\footnote{We thank Sasha Zhiboedov for discussions about technical aspects of~\cite{Camanho:2014apa}.}.
The authors assume that the new UV physical states that, as we explained, in our case must be present at the scale $\Lambda$, couple at tree level (that is, the scattering amplitude remains a meromorphic function of the kinematic invariants). The idea is to consider small-angle scattering of a gravitational wave off some arbitrary (Standard Model) particle. This process is dominated by the eikonal approximation, or equivalently by ladder diagrams. The latter exponentiate in the impact-parameter representation, and the leading order answer depends exclusively on the on-shell cubic vertices present in the theory. The result in the GR limit is the phase shift associated with the Shapiro time delay for gravitational waves. Of course this is always positive. At the impact parameter $b\sim c_3^{1/4}/\Lambda$ corrections from the $R^3$ vertices will become significant. Crucially, within the stated assumptions, these corrections can become observable while the calculation is under control. The result of \cite{Camanho:2014apa} is that, independently of the signs of $c_3$ and $\tilde c_3$, there will be a polarization that acquires a time advance instead of a time delay. This violates causality because the time advance becomes larger than the time delay in GR. Ref.~\cite{Camanho:2014apa} also showed that the only way to cure superluminality is by introducing (an infinite tower of) higher spin particles with mass of order $\Lambda/c_3^{1/4}$ coupled both to the graviton and to the matter particle on which the graviton is scattering, Standard Model particles in our case. However, these new particles would mediate a new force between all Standard Model particles, basically through the same set of diagrams but with the graviton replaced by the second matter particle. The range of such a force would be of order $r\sim c_3^{1/4}/\Lambda$ and its strength would be parametrically equal to the gravitational one. 
Recently, an independent argument using the AdS/CFT correspondence was given in~\cite{Afkhami-Jeddi:2016ntf} that leads to conclusions equivalent to those in~\cite{Camanho:2014apa}. This argument puts strong constraints on $c_3$ and $\tilde c_3$ for the values of $\Lambda$ of interest to us. We therefore conclude that, within the set of assumptions about the UV completion made in~\cite{Camanho:2014apa}, the six-derivative operators made of three Riemann tensors must be suppressed by a scale smaller than the one probed in laboratory experiments, and are therefore completely negligible in the context of compact objects. Similarly to the case of quartic operators with negative coefficients, which we discussed just above, the cubic operators always lead to faster-than-GR propagation, independently of any assumption about the UV completion. However, this is not obviously enough to violate causality, and therefore we conclude that we should cautiously consider also the cubic operators. \section{Classical equations of motion and Numerical Simulations\label{sec:numerical}} The equations of motion resulting from action (\ref{eq:action}) in the $R_{\mu\nu}=0$ background are \begin{align} R^{\mu\alpha} - \frac{1}{2}g^{\mu\alpha} R = \frac{1}{\Lambda^6}\left(8 R^{\mu \nu \alpha \beta} \nabla_{\nu} \nabla_{\beta}\mathcal{C} + \frac{1}{2} g^{\mu \alpha} \mathcal{C}^2\right) + \frac{1}{\tilde\Lambda^6} \left( 8\tilde{R}^{\mu\rho\alpha\nu}\nabla_{\rho}\nabla_{\nu}\tilde{\mathcal{C }} + \frac{1}{2} g^{\mu\alpha} \tilde{\mathcal{C }}^2\right) \nonumber \\ +\frac{1}{\Lambda_-^6} \left( 4\tilde{R}^{\mu\rho\alpha\nu}\nabla_{\rho}\nabla_{\nu}\mathcal{C} + 4 R^{\mu\rho\alpha\nu}\nabla_{\rho}\nabla_{\nu}\tilde{\mathcal{C }} + \frac{1}{2} g^{\mu\alpha} \tilde{\mathcal{C }}\mathcal{C}\right), \label{eq} \end{align} where we used $R_{\mu\nu}=0$ to simplify the right hand side. Higher derivative terms are sometimes feared because of potential instabilities of the equations of motion. 
EFTs always contain higher derivative operators; however, as long as one works at energies below the cutoff, instabilities never occur. Indeed, by definition, the solutions are small, perturbative deformations of the leading term in the action, in our case the Einstein-Hilbert term, which only has healthy, well-behaved solutions. As we mentioned, one can also consider the case of an effective theory where the leading operators are cubic, which is given in~(\ref{eq:sixderivative}). In this case, the equations of motion are given by \begin{eqnarray}\label{eq:sixderivativeequations} &&R_{\mu \nu} -\frac{1}{2}g_{\mu\nu}R = \frac{6 c_3}{\Lambda^4} \left(\nabla_{\alpha} R_{\mu \beta \delta \gamma}\right)\left(\nabla^{\beta} R_{\nu} {}^{\alpha \delta \gamma}\right)\\ \nonumber &&\qquad\qquad\qquad+ \frac{2 \tilde c_3}{{\Lambda}^4} \left\{ \epsilon_{\mu \delta \rho \sigma} R_{\nu}{}^{\alpha\beta\gamma} \left( \nabla^{\sigma}\nabla_{\alpha} R_{\beta\gamma}{}^{\delta\rho}\right) + \epsilon_{\nu \delta \rho \sigma} R_{\mu}{}^{\alpha\beta\gamma} \left( \nabla^{\sigma}\nabla_{\alpha} R_{\beta\gamma}{}^{\delta\rho}\right) \right. \nonumber\\ &&\qquad\qquad\qquad+ \left. \epsilon_{\mu \delta \rho \sigma} \left(\nabla_{\alpha} R_{\beta\gamma}{}^{\rho\sigma}\right) \left(\nabla^{\delta} R_{\nu}{}^{\alpha\beta\gamma}\right) + \epsilon_{\beta \gamma \rho \sigma} \left(\nabla_{\alpha} R_{\mu\delta}{}^{\rho\sigma}\right)\left(\nabla^{\delta} R_{\nu}{}^{\alpha\beta\gamma}\right) \right. \nonumber\\ &&\qquad\qquad\qquad+ \left. \epsilon_{\nu \delta \rho \sigma} \left(\nabla_{\alpha} R_{\beta\gamma}{}^{\rho\sigma}\right)\left(\nabla^{\delta} R_{\mu}{}^{\alpha\beta\gamma}\right) \right\}\nonumber\ . \end{eqnarray} In this paper, we will study the phenomenology of our effective field theory confining ourselves to the inspiralling phase, where the velocity is non-relativistic. This is the so-called post-Newtonian regime. 
Another regime that is amenable to a perturbative treatment is the study of the quasi-normal modes. We leave this study to a subsequent publication. Of course, it would be interesting to study the merging phase, where the velocity is relativistic. In standard GR, this is done numerically using the renowned codes that can handle horizons and singularities~\cite{Pretorius:2005gq}. In this section, we wish to briefly outline how, it appears to us, a suitable adaptation of the same GR codes could be used to simulate the coalescence of black holes within our effective field theory. For simplicity, we will refer only to eq.~(\ref{eq}), but everything we say in this section applies equally to (\ref{eq:sixderivativeequations}). The equations of motion we wish to solve are given in~(\ref{eq}). However, they cannot be solved numerically as is. In fact, the terms on the right-hand side contain more than two time derivatives. If solved as is, these terms would induce exponentially growing unstable modes that would destroy the ordinary GR solution. Why does this statement not completely rule out, from the get-go, the modifications of the Einstein-Hilbert action we are proposing? The answer is that we should not overinterpret the meaning of effective field theories. The new terms that we are inserting represent a consistent theory only in the limit in which these terms provide small perturbations. When the correction becomes of order one, the whole series of terms that we neglected to write down under the assumption that the correction is small becomes important, and so an effect of order one originating from the right-hand side of (\ref{eq}) simply cannot be trusted. How, therefore, can we numerically solve equation~(\ref{eq}) assuming that the effect of the terms on the right-hand side is small? 
Here we outline how a standard perturbative method can be adapted to the study of our specific effective operators; this should require only minimal modifications of the existing codes, which we do not implement for lack of technical knowledge of the relevant codes (and not because the problem is mathematically ill-posed or because there are physical instabilities in this theory). Our proposed approach is valid for the entire merger event for $\Lambda$'s $\gtrsim 1/r_s$, where $r_s$ is the Schwarzschild radius of the compact object, while for $\Lambda$'s $\lesssim 1/r_s$ it should be trusted, with a slight modification, until the distance $r$ between the compact objects is $r\gtrsim 1/\Lambda$, as we discuss next. Suppose we have solved the ordinary, $\Lambda,\tilde\Lambda,\Lambda_{-}\to \infty$, equations to obtain the solution of the inspiralling, merging, and ringdown phases of GR, which is what is normally done to obtain the templates for experiments such as LIGO. Let us denote the obtained metric, and the resulting Riemann tensor, with the subscript $_{(0)}$: $g_{(0),\mu\nu},\, R_{(0),\mu\nu\rho\sigma}$. Let us suppose this solution is stored in our computer. Let us denote the leading correction from the right-hand side of~(\ref{eq}) with the subscript $_{(1)}$: $g_{(1),\mu\nu},\, R_{(1),\mu\nu\rho\sigma}$, so that the full solution is $g_{\mu\nu}=g_{(0),\mu\nu}+g_{(1),\mu\nu}$. To obtain the leading correction $g_{(1),\mu\nu}$, we then just have to solve~\footnote{We point out the following. The recipe we are going to describe will give the correct prediction of the EFT at order $1/\Lambda^6$. Iteration of this procedure using the same eq.~(\ref{eq}) would naively give the corrections to order $1/\Lambda^{12}$. However, at order $1/\Lambda^{12}$ one expects many new operators to appear in the EFT, so that the equation of motion, at this order, would need to be modified. Still, a procedure similar to the one we describe here could be implemented. 
} \begin{eqnarray}\label{eq_temp_source} && R^{\mu\alpha} - \frac{1}{2}g^{\mu\alpha} R =\\ \nonumber && \qquad\qquad \left[\frac{1}{\Lambda^6}\left(8 R^{\mu \nu \alpha \beta} \nabla_{\nu} \nabla_{\beta}\mathcal{C} + \frac{1}{2} g^{\mu \alpha} \mathcal{C}^2\right) + \frac{1}{\tilde\Lambda^6} \left( 8\tilde{R}^{\mu\rho\alpha\nu}\nabla_{\rho}\nabla_{\nu}\tilde{\mathcal{C }} + \frac{1}{2} g^{\mu\alpha} \tilde{\mathcal{C }}^2\right)\right.\\ \nonumber &&\qquad\qquad \left.+\frac{1}{\Lambda_-^6} \left( 4\tilde{R}^{\mu\rho\alpha\nu}\nabla_{\rho}\nabla_{\nu}\mathcal{C} + 4 R^{\mu\rho\alpha\nu}\nabla_{\rho}\nabla_{\nu}\tilde{\mathcal{C }} + \frac{1}{2} g^{\mu\alpha} \tilde{\mathcal{C }}\mathcal{C}\right)\right]_{g_{\mu\nu}=g_{(0),\mu\nu}} \ , \end{eqnarray} with appropriate initial and boundary conditions. The difference between (\ref{eq_temp_source}) and (\ref{eq}) is that in (\ref{eq_temp_source}) the right hand side is a known source, {\it i.e.} it does not contain any term in the unknown $g_{(1),\mu\nu}$. The differential operator acting on $g_{(1),\mu\nu}$ is the same as in the standard Einstein equations, so it is second order in derivatives, and it {\it does not} lead to any unstable solutions or ill-posed mathematical problems. Notice, furthermore, that with large enough $\Lambda$'s, {\it i.e.} $\Lambda\gtrsim 1/r_s$, where $r_s$ is the Schwarzschild radius of the black hole, the right-hand side of (\ref{eq_temp_source}) is small over the whole spacetime of interest for the simulation, i.e. even at the horizon, so the perturbative expansion will apply. Notice that solving this problem is very similar to solving the usual Einstein equations: the only difference is that there are two iterations: in the first iteration, one saves $g_{(0),\mu\nu}$ to compute the sources, and in the second iteration one adds those to the right hand side and solves again using the standard Einstein solver. 
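To make the logic of this two-iteration scheme concrete, here is a minimal toy sketch of ours (entirely unrelated to the actual GR codes): a harmonic oscillator with a higher-derivative correction, $\ddot x=-x+\epsilon\, \mathrm{d}^4x/\mathrm{d}t^4$, plays the role of (\ref{eq}). Evaluating the higher-derivative source on the stored zeroth-order solution, in analogy with the right-hand side of (\ref{eq_temp_source}), keeps the equation for the correction second order and free of runaway modes.

```python
import numpy as np

# Toy analogue of the two-step scheme: x'' = -x + eps * d^4x/dt^4.
# Solving this naively is unstable (four time derivatives); instead we
# (1) store the zeroth-order solution x0 and (2) solve the SECOND-order
# equation x1'' = -x1 + source, with the source evaluated on the stored x0.
eps = 1e-3
t = np.linspace(0.0, 20.0, 20001)
dt = t[1] - t[0]

x0 = np.cos(t)            # zeroth-order solution: x0'' = -x0, x0(0)=1, x0'(0)=0
source = eps * np.cos(t)  # eps * d^4 x0/dt^4, computed from the stored x0

x1 = np.zeros_like(t)     # first-order correction, x1(0) = x1'(0) = 0
v = 0.0
for i in range(len(t) - 1):   # symplectic Euler: only two time derivatives appear
    v += (-x1[i] + source[i]) * dt
    x1[i + 1] = x1[i] + v * dt
```

The computed correction matches the analytic first-order result $(\epsilon t/2)\sin t$: a slowly accumulating secular phase shift of $x_0+x_1$, not an instability, mirroring the behavior expected of $g_{(1),\mu\nu}$.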
Alternatively, if this turned out to be simpler, one could also linearize the left-hand side of (\ref{eq_temp_source}) in $g_{(1),\mu\nu}$ and solve the linearized Einstein equations with the known source provided by the right hand side of~(\ref{eq_temp_source}). Let us comment on the extent to which these same simulations can be trusted in the regime $\Lambda$'s $\lesssim 1/r_s$. The effective field theory breaks down for distances shorter than $1/\Lambda$'s. However, our assumptions about the UV completion tell us that the effects of new physics are suppressed away from the horizon. Therefore, even though one cannot trust the numerical solution inside the region $r<1/\Lambda$, one can still perform the simulation with exactly the same algorithm we just described by adding the following modification: one should smoothly damp the right-hand-side source for distances $r\lesssim 1/\Lambda$'s, so that it never becomes non-perturbative, mimicking in this way the softening of the assumed UV completion. While the black holes themselves are at distances longer than $1/\Lambda$'s, the emitted radiation, whose frequency is smaller than $\Lambda$, can be trusted as being universal (and in particular independent of the softening procedure)~\footnote{There is a slightly technical issue associated with this setup that we would like to highlight. Strictly speaking, for $\Lambda$'s $\lesssim 1/r_s$ this algorithm simulates a particular UV completion, given by the specifically-chosen softening of the source term. The only effect of this short-distance procedure at long distances is that it rescales (renormalizes) the coefficients, $\Lambda$'s, of the cubic or quartic operators. Therefore, in interpreting the size of the effect in terms of $\Lambda$'s, one should adjust (i.e. renormalize) the values of the simulated $\Lambda$'s with the result of the PN calculations that we perform later on.}. 
In the parametric regime $\Lambda$'s $\lesssim 1/r_s$, simulations can be trusted only in the regime $v\lesssim 1$, where PN calculations are also reliable. However, the two approaches have clearly distinct advantages. Let us make some comments on possible technical issues. In order to evaluate the right-hand side of (\ref{eq_temp_source}), one needs to evaluate four derivatives of the metric $g_{(0),\mu\nu}$. This will probably require storing the metric for a few time steps; we are unable to judge whether this constitutes a technical challenge. Let us also comment more on the initial conditions for~(\ref{eq_temp_source}). Once the black holes are far enough apart in the past, we are tempted to argue that the initial conditions are plausibly well approximated by the perturbed metric of two isolated black holes, where the perturbed metric is obtained by solving perturbatively~(\ref{eq_temp_source}) again, but this time in a stationary and isolated configuration (where the boundary conditions are vanishing), and the right hand side of~(\ref{eq_temp_source}) is this time known analytically: it is the one given by a Kerr metric. Finally, let us add a discussion about another possible technical issue~\footnote{We thank Frans Pretorius for pointing out to us the existence of such a potential difficulty and of the two possible solutions mentioned here. We also thank him for mentioning the possibility of using the standard Einstein solver in~(\ref{eq_temp_source}) instead of linearizing the left hand side, which makes the required modifications to existing code smaller.}. Notwithstanding the smallness of the corrections from our EFT, the accumulation of the phase difference with respect to the GR solution could, after enough orbits, become so large that, at a given instance of time, the actual solution is out of phase with respect to the GR one, and therefore naively one should not be able to solve perturbatively in $g_{(1),\mu\nu}\ll g_{(0),\mu\nu}$. 
If present, this issue can potentially be addressed with one of the two following approaches. In the first approach, one could solve (\ref{eq_temp_source}) in multiple iterations with smaller effective coupling parameters that slowly build up to the desired values. In the second approach, one could break the full evolution down into tiny segments, in each of which we evolve the system for a small time such that the phase accumulation is small and the solution given by (\ref{eq_temp_source}) is reliable, and then we start the next segment with the corrected solution as the initial conditions. In reality, we might need to use both methods in some extreme cases, and run the simulation multiple times. For large enough values of $\Lambda$, the numerical procedure outlined above should converge and give precise results; however, the signal for those values of the parameters will also be small, and potentially high signal-to-noise events will be required to detect it. On the other hand, for smaller values of $\Lambda$ close to $1/r_s$, there is a possibility that significant systematic errors will be present due to the sensitivity to the UV completion of the theory. Indeed, it is plausible that locally, in some near-horizon regions, excitations of the UV modes become important, rendering our approximate equations insufficient. A further, most likely numerical, study is required to determine for which values of the parameters this can happen and what is the largest value of the signal that can be obtained while staying in the regime of validity of the EFT.\footnote{We thank the referee for bringing to our attention two papers~\cite{Okounkova:2017yby,Cayuso:2017iqc} that study numerical techniques which can be used for simulating merger events in extended gravitational theories containing higher-derivative operators.} None of the authors of this paper have the expertise to tackle the numerical solution of (\ref{eq_temp_source}). 
However, we do hope that the strategy described in this section might encourage experts in the field to attempt to solve this numerical problem, so that we will be able to study the effect of the UV-extension of GR in the relativistic regime as well. Instead, we will now move on to study the post-Newtonian regime. \section{Outline of the post-Newtonian calculation}\label{sec:techsummary} We are interested in studying the effects of adding the higher derivative terms \begin{eqnarray} &&\frac{M_{\rm pl}^2}{\Lambda^6}(R_{\alpha \beta \gamma \delta} R^{\alpha \beta \gamma \delta})^2 \ , \quad \quad \frac{M_{\rm pl}^2}{\tilde\Lambda^6}(\epsilon^{\alpha \beta}\,_{\mu \nu}R_{\alpha \beta \gamma \delta} R^{\mu \nu \gamma \delta})^2 \\ \nonumber &&\quad \text{and} \quad \frac{M_{\rm pl}^2}{\Lambda_-^6}(R_{\alpha \beta \gamma \delta} R^{\alpha \beta \gamma \delta})(\epsilon^{\alpha \beta}\,_{\mu \nu}R_{\alpha \beta \gamma \delta} R^{\mu \nu \gamma \delta})\ , \end{eqnarray} to the canonical Einstein-Hilbert action in the post-Newtonian regime. We discuss the EFT with cubic operators at the end of this section. The energy scales $\Lambda$, $\tilde \Lambda$ and $\Lambda_-$ control at what scales these terms become relevant. Treating these terms perturbatively, we can compute their effects. This leads to predictions that can in principle be measured, leading to the discovery of a modification of GR, or, in the absence of a detection, can be used to put lower bounds on the size of $\Lambda$, $\tilde \Lambda$ and $\Lambda_-$. There are many different physical effects these terms can modify, but here we focus on the inspiral problem. We therefore compute the corrections that these terms generate both to the instantaneous potential and to the form of the radiation coupling (i.e. corrections to the quadrupole formula). This is sufficient to compute the modification to the gravitational wave signal from the inspiralling regime. 
To perform these calculations we utilize the EFT framework developed by Goldberger and Rothstein~\cite{Goldberger:2004jt}, which is designed for calculating systematically to any order in the PN expansion. Such an EFT is the result of integrating out gravitational and matter perturbations with wavelength shorter than the size of the extended object involved. The properties of the compact sources are encapsulated by a particular series of operators with coefficients respecting the symmetries of the extended object~\footnote{This is similar in spirit to the operators that we add to the GR action in Sec.~\ref{sec:action}: there we include all the operators compatible with diffeomorphism invariance and built out of the graviton; now we also include operators built out of the world-line of the extended objects.}. Given a model of stellar structure, the precise value of the coefficients in front of these operators can be found from UV matching conditions. In the case where the extended object is a black hole, the properties of the source are captured by the mass and spin of the black hole. An EFT also has the advantage of having manifest power counting in the expansion parameters of the theory. In the case of this EFT for compact objects, such an expansion parameter is the relative velocity of the extended objects, $v$. In the non-relativistic limit ($v\ll 1$), the size of various post-Newtonian corrections compared to the leading Newtonian potential can often be estimated by some simple power counting rules. The gravitons in the problem can be generally separated into two categories according to the energy and momentum they carry. 
For gravitons that are responsible for mediating long range interactions between the two extended objects, the typical energy and momentum carried by these gravitons is $(p^0 \sim v/r,{\bf p}\sim 1/r)$, while for gravitons that are emitted by the system, the typical energy and momentum is $(p^0 \sim v/r,{\bf p}\sim v/r)$, since they should be on shell (i.e. gravitational waves satisfy the relativistic dispersion relation~$p^0=p$).~\footnote{\label{footnote:omega}It is important to point out that in the non-relativistic regime of the inspiral, the velocity $v$ and the distance between the extended objects are related by the virial theorem as $v^2\sim G M/r$. The rotational frequency ($\omega \sim v/r$), as a result, is at order $v^3$. It should be noted that this power counting rule can be extended to a much more complicated potential $V$ as long as the potential respects a rotational symmetry. The rotational frequency can be found from $\omega \sim \sqrt{\frac{1}{r}\frac{{\rm d} V}{{\rm d} r}}$ and the order of PN-corrections can be read off accordingly.} With this in mind, we can work out the Feynman rules from the Einstein-Hilbert action, and in particular the lowest order vertices that capture the interaction between the source (mass and spin of a black hole) and the graviton. These are summarized in Appendix~\ref{app:A}. Such a framework can be extended to include new interactions between the gravitons in the form of our leading higher dimensional operators. The new quartic interaction vertices from the new EFT operators are summarized in Appendix~\ref{App:new_quartics}. In addition to~$v$, the post-Newtonian expansion parameter, there is one more expansion parameter of the new EFT, which is ${p}/\Lambda$. 
When this expansion parameter becomes $O(1)$, additional higher-dimensional operators or new resonances lead to significant deviations from our leading order predictions~\footnote{As we anticipated in the introduction and in Sec.~\ref{sec:action}, some observations/experiments are sensitive to the regime where $p\gg \Lambda$. We discuss this regime further in Sec.~\ref{sec:otherexp}.}. The broad strategy of the calculation is the following. From far away, the inspiralling binary can be thought of as a single compact object endowed with a small extension in space. It emits gravitational waves by the oscillations of its multipoles. The effective action (which is equivalent to its effective equations of motion) therefore takes the form of that of a particle characterized by its multipole moments. In the center of mass frame, this takes the form \begin{equation}\label{eq:PNeffectiveaction} S_{\rm ext.\, obj.}=\int { d} t \,\left\{ \left[m_1+m_2+\frac{1}{2} \mu(t) {\bf v}_{\rm rel}^2 - V(r(t)) \right] + \frac{1}{2}Q_{ij}(t) R^{i0j0} - \frac{1}{3}J_{ij}(t) \epsilon_{jkl}R^{kli0}+\ldots \right\}, \end{equation} where $\mu$ is the reduced mass of the system, and ${\bf v}_{\rm rel}$ is the relative velocity of the two objects in the binary. We also included in the effective one-body action the kinetic energy and the potential energy $V(r(t))$, because, even though they describe internal degrees of freedom from the point of view of the single-body system, they allow us to compute the time dependence of the multipoles. Thinking of the binary as an extended object allows us to compute the gravitational wave emission directly using the standard multipole formulas. 
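As a quick numerical illustration of the power counting of footnote~\ref{footnote:omega}, the virial relation $v^2\sim GM/r$ fixes the velocity and the orbital frequency $\omega\sim v/r$ for a given separation (a sketch of ours, with illustrative numbers not tied to any specific event):

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 299_792_458.0      # m/s
Msun = 1.989e30        # kg

def virial_scales(m_total_msun, r_m):
    """Power-counting estimates: v/c from v^2 ~ G M / r, and omega ~ v/r,
    so the orbital frequency is of order v^3 in units of the inverse
    gravitational radius."""
    M = m_total_msun * Msun
    v_over_c = math.sqrt(G * M / r_m) / c
    omega = v_over_c * c / r_m          # rad/s
    return v_over_c, omega

# e.g. a 60 Msun binary at separation 1000 km: mildly relativistic
v, omega = virial_scales(60.0, 1.0e6)
```

For these inputs $v\approx 0.3$ and $\omega\approx 90\,$rad/s, confirming that the late inspiral sits at the boundary where the PN expansion is still useful.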
Therefore, the problem of GW emission is reduced to computing the effective action (\ref{eq:PNeffectiveaction}) of the binary system starting from the action of two point-like particles \begin{eqnarray}\label{eq:point}\nonumber && S_{\rm EH+p.p.}=S _{\rm eff}+\\ \nonumber &&\qquad+ \int d^4x \,\left\{ \delta^{(3)}(\vec x-\vec x_1) \left( m_1(1+ {\bf v}_1^2/2)+ d_{2}^{(1)} \;\sqrt{g_{\alpha\beta}\dot x_1^\alpha\dot x_1^\beta}R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}+\ldots\right)\right.\\ &&\qquad\qquad\qquad\left.+\delta^{(3)}(\vec x-\vec x_2)\left( m_2(1+{\bf v}_2^2/2)+\ldots \right)\right\}, \end{eqnarray} where $S _{\rm eff}$ is our extension of GR of Sec.~\ref{sec:radiation}, and $\ldots$ represent the coupling between gravity and the particles, given explicitly in (\ref{eq:pointverteces}), and other higher order terms associated with the finite size of the point particle, of which we just wrote a representative one, $\sqrt{g_{\alpha\beta}\dot x^\alpha\dot x^\beta} R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}$, which is proportional to an unknown coupling constant $d_2^{(1)}$. While the effective quadrupole is trivially given by $Q^0_{ij}(t)=\sum_a m_a \left(x_a(t)^i x_a(t)^j-\frac{1}{3} x_a(t)^2\delta^{ij}\right)$ at leading order, the expressions for the multipoles and their time-dependence become more subtle at the post-Newtonian level. In particular, the derivation of the time dependence requires knowledge of the potential between the two compact objects. To compute all of this, the EFT of gravity for extended objects~\cite{Goldberger:2004jt} provides the aforementioned Feynman rules (see Appendix~\ref{app:A}), which we extend here to include our new vertices. The computation of the effective multipoles and of the potential is presented in Sec.~\ref{sec:potential} and~\ref{sec:radiation}, and it has the following schematic structure: \begin{enumerate} \item We identify the leading contributing diagrams using the scaling arguments of \cite{Goldberger:2004jt}. 
\item In order to facilitate the computations, we identify recurring computable subsections of the graphs. \item We use these to evaluate the graphs in question. \end{enumerate} Notice that the evaluation of these Feynman diagrams involves integration over momenta, represented as loop diagrams. This should not mislead us into thinking that we are computing quantum corrections. All the effects we are computing here are classical ones; higher loops are suppressed by powers of $v$. In the following, we summarize the main results of Sec.~\ref{sec:potential} and~\ref{sec:radiation}, which contain only the technical details (and therefore can be skipped by an uninterested reader). The corrections to the potential due to $\mathcal{C}^2$ and $\tilde{\mathcal{C}}^2$ are \begin{equation}\label{Lambda_pot} \Delta V_{\Lambda}= \frac{2}{\pi^6}\frac{G m_1 m_2}{r} \left(\frac{2\pi}{\Lambda r}\right)^6\frac{4 G^2 (m_1^2+m_2^2)}{r^2} \end{equation} and \begin{equation}\label{Lambda_tilde_pot} \Delta V_{\tilde \Lambda} = \frac{216}{11 \pi^6}\frac{G m_1 m_2}{r} \left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \frac{4 G^2 \left(m_1S_1^{i}+m_2 S_2^{i} \right) \epsilon_{inm}v_{12}^n r_{12}^m}{r^{4}}, \end{equation} where $S^i = \epsilon^i{}_{jk} S^{j k}$, and $S_{ij}$, carefully defined in App.~\ref{app:A}, parametrizes the spin of a compact object. For a maximally rotating black hole with spin along the $z$-direction, $S_z = G m^2$. From these we extract the correction to the orbital frequency $\omega$. For the~$\tilde{\mathcal{C}}^2$ operator, the modification to gravitational wave emission from the potential is subleading with respect to the modification of the multipole moments that we discuss next. The same is true also for the operator~$\mathcal{C}\tilde{\mathcal{C}}$, for which we therefore do not compute the correction to the potential. 
We have organized our results in (\ref{Lambda_pot}) and (\ref{Lambda_tilde_pot}) by factoring out the combination $\left(2\pi/(\tilde \Lambda r)\right)$, with the idea that our EFT is reliable only for $r\gtrsim (2\pi)/\Lambda$. Estimating, to order-one accuracy, the value at which an EFT breaks down is not universal, {\it i.e.} it depends on the UV completion. The factor of $(2\pi)$ we have chosen is expected to be an upper bound to the smallest value of $r$ at which our EFT is under control. This means that the estimate of the maximum size of the effect that one can extract from (\ref{Lambda_pot}) and (\ref{Lambda_tilde_pot}), and from the similar equations for the multipoles (\ref{eq:quad_one}) and (\ref{eq:quadtwo}) that follow, by setting $r=(2\pi)/\Lambda$ and $v=1$, should be understood only as a conservative one. In particular, the fact that the dependence on $(\Lambda r)$ is raised to the sixth power makes the ambiguity in the estimate (not in the calculation) rather large. The $\mathcal{C}^2$ term can also lead to corrections to the quadrupole moment of a binary system, which will change the amplitude of the radiation. The modification to the quadrupole can be expressed as a renormalization of the initial quadrupole moment as \begin{eqnarray}\label{eq:quad_one} Q_{ij} = \left(1+ \frac{21}{2\pi^6} \left(\frac{2\pi}{\Lambda r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2\right)Q^{(N)}_{ij}, \end{eqnarray} where $Q^{(N)}_{ij}$ is the quadrupole of a binary system in GR at leading order. Similarly, the $\tilde{\mathcal{C}}^2$ and $\mathcal{C}{\tilde{\mathcal{C}}}$ terms can lead to corrections to the current quadrupole of a binary system. 
We express this modification as \begin{eqnarray}\label{eq:quadtwo} J_{ij} &\rightarrow&\left(1-\frac{36}{\pi^6}\left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2\right) J^{(N)}_{ij}\\ \nonumber && +\frac{63}{8\pi^6} \left(\frac{2\pi}{\Lambda_- r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2 Q^{(N)}_{ij} \; , \end{eqnarray} where $J^{(N)}_{ij}$ is the current quadrupole of a binary system in GR at leading order (see Sec.~\ref{sec:radiationsummary} for the definition of $J^{(N)}_{ij}$). The corrected frequency and the corrected multipoles allow us to compute the GW emission using the standard formulas. Notice that the parameters associated with the finite-size effects in the point-particle action in (\ref{eq:point}), such as $d_2^{(1)}$, did not appear in the former expressions. In reality, they do contribute both to the potential and to the multipoles of the effective one-body system, but, as we argue in Sec.~\ref{sec:observeligo}, their effect is negligible for the regime of interest of post-Newtonian calculations. In table~\ref{tab:vcount} in Sec.~\ref{sec:observeligo}, we summarize the post-Newtonian order of each contribution. Readers who are mainly interested in the effect of these terms on LIGO observables can skip to Sec.~\ref{sec:observeligo}. Let us comment briefly also on the EFT with cubic operators~(\ref{eq:sixderivative}). In this case, one could expect the leading corrections to the potential and multipoles to appear at order $v^2$. However, we find that these contributions, as well as those at order $v^3$, cancel, and the leading corrections are expected to arise not earlier than order $v^4$. The computation of these effects becomes quite challenging (at least to us) given the proliferation of diagrams at this order. We leave this problem to future work, possibly attacking it with the help of on-shell techniques as discussed in~\cite{Neill:2013wsa}. 
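To get a feel for the size of the quartic-operator corrections, one can evaluate the fractional shifts in (\ref{Lambda_pot}) and (\ref{eq:quad_one}) at the conservative point $r=2\pi/\Lambda$, using $4G^2(m_1^2+m_2^2)=(2Gm_1)^2+(2Gm_2)^2=r_{s,1}^2+r_{s,2}^2$ in $c=1$ units. A minimal numerical sketch (function names and sample inputs are ours):

```python
import math

def dV_frac(lambda_r, rs1_over_r, rs2_over_r):
    """Delta V_Lambda / (G m1 m2 / r) from eq. (Lambda_pot), using
    4 G^2 (m1^2 + m2^2) = rs1^2 + rs2^2 with rs_a = 2 G m_a (c = 1)."""
    return (2.0 / math.pi ** 6) * (2.0 * math.pi / lambda_r) ** 6 \
        * (rs1_over_r ** 2 + rs2_over_r ** 2)

def dQ_frac(lambda_r, rs_tot_over_r):
    """Fractional shift of Q_ij from eq. (eq:quad_one), with
    rs_tot = 2 G (m1 + m2)."""
    return (21.0 / (2.0 * math.pi ** 6)) * (2.0 * math.pi / lambda_r) ** 6 \
        * rs_tot_over_r ** 2

# conservative edge of EFT validity, r = 2*pi/Lambda, equal masses, rs_a = r/10
dv = dV_frac(2.0 * math.pi, 0.1, 0.1)   # ~ 4e-5
dq = dQ_frac(2.0 * math.pi, 0.2)        # ~ 4e-4
```

Because of the sixth power of $(\Lambda r)$, these numbers shift by orders of magnitude if the breakdown-scale estimate changes by a factor of a few, which is exactly the ambiguity stressed above.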
In summary, at the parametric level, the physical consequences of the theory with cubic operators are expected to be quite similar to the ones we discuss in greater detail for the theory with quartic operators, with just the replacement $(\Lambda r)^6\to (\Lambda r)^4$ in every formula in the rest of the paper, since, as we said, we expect the leading effect to be 2PN in this case as well. The factor of $(\Lambda r)^4$ makes the observational signatures of this EFT somewhat more promising. \section{Corrections to the potential}\label{sec:potential} In this section, we will show the various diagrams that lead to corrections to the gravitational potential of a binary system. As outlined in Sec.~\ref{sec:techsummary}, this is important to compute the time-dependence of the multipoles. We will not compute this correction to the potential for the $\mathcal{C}{\tilde{\mathcal{C}}}$ operator, as it is subleading. \subsection{Corrections to potential: $\mathcal{C}^2$ term} The diagrams that correct the gravitational potential are those which do not involve external gravitational waves. Given our quartic vertices, there are only two different topologies: one is obtained by inserting one of our quartic vertices and contracting a pair of legs with each particle trajectory, and the other by contracting three legs with a single particle trajectory. \subsubsection{Cross Diagram} Let us begin by analyzing the diagram of the form in Fig.~\ref{R4_cross}. \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{R4_cross_diagram.pdf} \caption{The two loop diagram with the ``cross" topology. Arrows indicate the direction of the momentum flow. 
The $m$'s in the vertex specify that we used the leading source-gravity vertex.} \label{R4_cross} \end{figure} Such a diagram has the structure \begin{equation} \text{Figure \ref{R4_cross} } \sim \int_{\vec q} \int_{\vec k} \int_{\vec w} \frac{kk(q+k)(q+k)ww (q+w)(q+w)}{k^2(\vec q +\vec k)^2 w^2(\vec q +\vec w)^2}e^{i\vec q \cdot \vec {x}_{12}(t)} \end{equation} where the momenta in the numerator are contracted with each other in some manner (which we will see is not important). In this expression there are no factors of $v$, as they give a subleading contribution with respect to the main non-vanishing one. The crucial property to notice is that the resulting integrals factorize into two separate tensor loop integrals in $k$ and $w$ of the form \begin{equation} \label{integral_to_solve_tem} \int_{\vec k} \frac{k^{i_1} \ldots k^{i_N}}{k^2(k+q)^2}\ . \end{equation} As we can see explicitly in Appendix~\ref{app:loops}, in three dimensions there are no divergences for a single loop integral. The intuitive reason is that all divergences should correspond to local counter-terms that are polynomial in momenta; however, by dimensional analysis, a one-loop counter-term would be proportional to $\sqrt{q^2}$, which corresponds to a non-local term. Consequently, the resulting integral over $q$ can only have the following form with a finite prefactor: \begin{equation} \text{Figure \ref{R4_cross} } \sim \int_{\vec q} ( q^2)^3 e^{i\vec q \cdot \vec {x}_{12}(t)} \; . \end{equation} Note, however, that this integral is zero in our dimensional regularization scheme as seen from~(\ref{qInt}), and so such a diagram vanishes for the $\mathcal{C}^2$ interaction vertex at lowest order in the PN expansion. Physically, i.e. independently of the regularization, this means that this diagram does not induce a long range force, but rather only a contact, $\delta$-function supported, force, which is inconsequential for the prediction of the time dependence of the system.
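The last statement can be made explicit with a standard manipulation (a reminder, not specific to this diagram): a pure polynomial in $q^2$ under the Fourier integral only generates derivatives of a delta function,
\begin{equation*}
\int_{\vec q}\, (q^2)^3\, e^{i\vec q \cdot \vec x_{12}}
=\left(-\nabla^2_{x_{12}}\right)^{3}\int_{\vec q} e^{i\vec q \cdot \vec x_{12}}
=-\nabla^6_{x_{12}}\,\delta^{(3)}(\vec x_{12})\ ,
\end{equation*}
which is supported only at $\vec x_{12}=0$ and therefore cannot contribute to the long-range force between the two objects.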
\subsubsection{Peace/Log Diagram} There is another topology we can investigate, one where three of the legs are associated with one source and one leg with the other. This leads to diagrams of the form in Fig.~\ref{R4_log}~(\footnote{The name Peace/Log clearly shows that all the authors of the paper are currently living in the San Francisco Area. Some things never die.}). \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{R4_log_diagram.pdf} \caption{The two loop diagram with the ``peace/log'' topology.} \label{R4_log} \end{figure} As we can redefine the loop momentum as we choose, the diagram in Fig. \ref{R4_log} is given by just a single momentum structure contraction. Using the result of (\ref{RR_H00_H00}), and after accounting for combinatorics, we can write \begin{equation} \text{Figure \ref{R4_log} } = \frac{i}{8} \frac{m_1 m_2^3}{\Lambda^6 M_{\rm pl}^6} \int dt \, \int_{\vec q}\int_{\vec w} \int_{\vec k} \frac{(\vec q \cdot \vec w)^2(\vec k \cdot (\vec k+\vec q+\vec w))^2}{q^2 w^2 k^2 (\vec k+\vec q+\vec w)^2 } e^{i\vec q \cdot \vec {x}_{12}(t)} \; . \end{equation} The first simplification we can make is to drop all terms that are proportional to $k^2$ in the numerator (as they will vanish in dim. reg., as demonstrated in App.~\ref{app:loops}). We begin by computing the integral over $k$ (the first loop). We do not need to keep the terms subleading in $\epsilon$ since, as we will show momentarily, the $q$ integral gives a term proportional to $\epsilon$, so we only need the $1/\epsilon$ terms from the $k$ and $w$ integrals (there are no $1/\epsilon^2$ terms).
Following the general results in App.~\ref{app:loops} given by (\ref{final_loop_int_form}), we have that \begin{equation} \text{Figure \ref{R4_log} } = \frac{i}{8}\frac{m_1 m_2^3}{\Lambda^6 M_{\rm pl}^6} \int dt \, \int_{\vec q}\int_{\vec w} \frac{e^{i\vec q \cdot \vec {x}_{12}(t)}(\vec q \cdot \vec w)^2\bar{w}^i \bar{w}^j}{q^2 w^2} \times \frac{\bar w}{32}T^{ij}(\hat{\bar{w}}) \end{equation} where $\bar w \equiv w+q$ and the traceless symmetric tensor $T^{ij}$ is defined in App.~\ref{app:loops}. As $\bar{w}^i \bar{w}^jT^{ij}(\hat{\bar{w}})=\bar{w}^2$, the integral over $w$ (the second loop) can be calculated using our master formulas (\ref{eq:maseq1}), giving \begin{eqnarray} \text{Figure \ref{R4_log} } = -\frac{1}{\epsilon}\frac{1}{2}\frac{i}{128}\frac{m_1 m_2^3}{\Lambda^6 M_{\rm pl}^6} \left(-\frac{1}{315 \pi^2} \right) \int dt \, \int_{\vec q} q^6 e^{i\vec q \cdot \vec {x}_{12}(t)} \; , \end{eqnarray} where we kept only the divergent piece of the two-loop integral to get a non-vanishing result. Notice in particular that there are no $1/\epsilon^2$ divergences. Using (\ref{qInt}), the integral over~$q$ is readily evaluated: \begin{equation} \int_{\vec q} q^6 e^{i\vec q \cdot \vec {x}_{12}(t)}=-\epsilon\frac{1260}{\pi r^9} \; . \end{equation} Collecting everything, we finally obtain \begin{eqnarray} \text{Figure \ref{R4_log} } = i \int dt \, \left(\frac{-m_1 m_2^3}{64 \pi^3 \Lambda^6 M_{\rm pl}^6} \right) \frac{1}{r^9(t)} \; . \end{eqnarray} There is also the diagram with $1 \leftrightarrow 2$. Combining these together, we obtain a correction to the potential of the form \begin{equation} \Delta V_{\text{$\Lambda$}}=\Delta V_{\text{$\Lambda$, Peace/Log}}= \frac{2}{\pi^6}\frac{G m_1 m_2}{r} \left(\frac{2\pi}{\Lambda r}\right)^6\frac{4 G^2 (m_1^2+m_2^2)}{r^2}\ .
\end{equation} \subsection{Correction to the potential: $\tilde{\mathcal{C}}^2$ term} The correction to the potential from the ${\tilde{\mathcal{C}}}^2$ operator is subleading to the effect arising from the modification of the multipoles. However, we present the result as an illustration of the Feynman rules involving the $\tilde{\mathcal{C}}$ operator, and also because, if one were to compute the waveform, even a subleading effect can accumulate with time over many orbits and become sizable. \subsubsection{Cross Diagram} As mentioned in Appendix~\ref{App:new_quartics}, the contraction $\delta R(h\rightarrow H_{00}) \delta \tilde R (h\rightarrow H_{00})$ vanishes. Therefore we need higher order vertices, and hence the leading diagram from $\tilde{\mathcal{C}}^2$ will be proportional to at least a total of two powers of $v^i$ or $\partial_i S^{ij}$, where $S_{ij}$ parametrizes the spin of the black hole, carefully defined in~App.~\ref{app:A}. However, analogously to the cross diagram with the $\mathcal{C}^2$ vertex, the integrals over loop momenta factorize, and after they are carried out the $q$ integral will again be proportional to \begin{equation} \int_{\vec q} q^m e^{-i\vec q \cdot \vec {x}_{12}(t)} \end{equation} with $m$ even and positive. Consequently, at this order, the contribution vanishes for non-zero $r$, as follows from~(\ref{qInt}). \subsubsection{Peace/Log Diagram} \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{RtildeR_loop_1.pdf} \caption{One of the leading contributions to the effective potential from $\tilde{\mathcal{C}}^2$ with the ``peace/log'' topology. The dashed arrow indicates the contraction of tensor indices within one of the two $\tilde{\mathcal{C}}$'s, the other contraction then being automatic.} \label{RtildeR_loop_1} \end{figure} For the same reasons as above, higher order source vertices are needed to produce a non-vanishing contribution.
It turns out that we need one velocity vertex and one spin vertex (the diagram with two velocity vertices vanishes). Just as in the $\mathcal{C}^2$ case, the result will be proportional to a $1/\epsilon$ pole produced by the overlapping two-loop integrals, which then multiplies the linear-in-$\epsilon$ $q$ integral. There are two choices of how to place the spin and velocity vertices that give non-vanishing contributions, which we show in Figs. \ref{RtildeR_loop_1} and~\ref{RtildeR_loop_2}. First, consider the diagram in Fig. \ref{RtildeR_loop_1}: Performing the calculation, and taking into account combinatorial factors, we have that \begin{eqnarray} \text{Figure \ref{RtildeR_loop_1} } &=& 2 \frac{m_2 m_1^2 }{\tilde \Lambda^6 M_{\rm pl}^6} \int dt \, \int_{\vec q}\int_{\vec w} \int_{\vec k} e^{-i\vec q \cdot \vec {x}_{12}(t)} \nonumber \\ &&\frac{\epsilon_{nmi}(k+\bar w)_i (k+\bar w)_j k_j k_n k_p \epsilon_{rsl}w_l w_k q_k q_r S_1^{pm}(t) v_2^s(t)}{ q^2 w^2 k^2 ( k+ \bar w)^2 } \; , \end{eqnarray} where $\bar w=w+q$. Doing the loop integral over $k$ first, we have that \begin{eqnarray}\nonumber \int_{\vec k}\frac{\epsilon_{nmi}(k+\bar w)_i (k+\bar w)_j k_j k_n k_p S_1^{pm}(t)}{k^2 (k+\bar w)^2} \rightarrow -\frac{\epsilon_{nmi}}{2^6}\bar w^i T_2^{np}{(\hat {\bar w})}S_1^{pm} \bar w ^3 \rightarrow \frac{1}{2^7}\epsilon_{nmi}\bar w^i S_1^{nm} \bar w ^3 \; .\\ \end{eqnarray} Now we need to do the loop integral over $w$, which takes the form \begin{eqnarray} \int_{\vec w}\frac{\epsilon_{nmi}\epsilon_{rsl}q_k q_r\left(q_i w_l w_k+w_i w_l w_k\right)(w+q)^2 (w+q)^2 S_1^{nm}v_2^s}{w^2 \left((w+q)^{2}\right)^{1/2}} \; . \end{eqnarray} Using the formula~(\ref{eq:maseq2}), we can integrate over $w$. The remaining $q$ integral will be proportional to \begin{equation} \int_{\vec q} q^6 (i q_n) e^{-i\vec q \cdot \vec {x}_{12}(t)}=-\epsilon \partial_n \frac{1260}{\pi r^9} \; , \end{equation} where again (\ref{qInt}) was used.
So, in total, we obtain: \begin{eqnarray} \text{Figure \ref{RtildeR_loop_1} } &=& -i\left(\frac{27 }{176 \pi^3}\right) \frac{m_2 m_1^2 }{\tilde \Lambda^6 M_{\rm pl}^6} \int dt \, \frac{ x_{12}^n S_1^{nm} v_2^m}{r^{11}} \; . \end{eqnarray} The other diagram configuration that we can construct has both the spin vertex and the velocity vertex associated with the same source. That diagram is given by Fig. \ref{RtildeR_loop_2}. \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{RtildeR_loop_2.pdf} \caption{One of the other leading contributions to the effective potential from $\tilde{\mathcal{C}}^2$ with the ``peace/log'' topology. The dashed arrow indicates the contraction of indices within $\tilde{\mathcal{C}}$.} \label{RtildeR_loop_2} \end{figure} When we compute this diagram, we find that it is structurally the same as that of Fig. \ref{RtildeR_loop_1} but with $\epsilon_{rsl}w_l w_k q_k q_r v_2^s(t) \rightarrow\epsilon_{rsl}q_l q_k w_k w_r v_1^s(t)$, which tells us that \begin{equation} \text{Figure \ref{RtildeR_loop_2}}=-\text{Figure \ref{RtildeR_loop_1} (with $v_2 \rightarrow v_1$)} \end{equation} and consequently \begin{eqnarray} \text{Figure \ref{RtildeR_loop_1}}+\text{Figure \ref{RtildeR_loop_2}}=i\left(\frac{27 }{176 \pi^3}\right) \frac{m_2 m_1^2 }{\tilde \Lambda^6 M_{\rm pl}^6} \int dt \, \frac{ x_{12}^n S_1^{nm} \Delta v_{12}^m}{r^{11}} \; . \end{eqnarray} Including the diagrams with $(m_1\leftrightarrow m_2)$, we have the correction to the potential in the form \begin{align} \Delta V_{\text{ $\tilde\Lambda$}}=\Delta V_{\text{ $\tilde\Lambda$ Peace/Log}} = \frac{216}{11 \pi^6}\frac{G m_1 m_2}{r} \left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \frac{4 G^2 \left(m_1S_1^{i}+m_2 S_2^{i} \right) \epsilon_{inm}v_{12}^n r_{12}^m}{r^{4}}\ .
\end{align} \section{Correction to radiation}\label{sec:radiation} In this section, we will show the various diagrams that lead to corrections to the quadrupole $Q_{ij}$ and the current quadrupole $J_{ij}$ of a binary system, as outlined in section~\ref{sec:techsummary}. \subsection{Corrections to quadrupole: $\mathcal{C}^2$ \label{sec:radiativeonesec}} Here the leading order diagram has the structure of Fig. \ref{R4_rad}. \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{R4_radiation.pdf} \caption{The leading order radiative correction diagram from the $\mathcal{C}^2$ operator. The wiggly line represents the outgoing gravitational wave. } \label{R4_rad} \end{figure} Using (\ref{RR_H00_hbar}) and (\ref{RR_H00_H00}), there are two possible tensor structures of this diagram, coming from the two distinct places to contract the $\delta R (h\rightarrow H_{00}) \delta R (\bar h) $. After accounting for combinatorial factors, we have \begin{eqnarray} \text{Figure \ref{R4_rad} } =i \frac{m_1 m_2^2}{\Lambda^6 M_{\rm pl}^4} \int dt \, \int_{\vec q}\int_{\vec k} \frac{2(\vec q \cdot ( \vec q+\vec k))^2k^i k^j+(\vec k \cdot ( \vec q+\vec k))^2q^i q^j}{q^2 k^2 (\vec k+\vec q)^2 } e^{-i\vec q \cdot \vec {x}_{12}(t)} \times R^{0i0j}(\bar h) \nonumber .\\ \end{eqnarray} The factor of $2$ inside the integral comes from the fact that there are two contraction configurations that are identical upon a shift in the loop momentum. Dropping the $k^2$ terms in the numerator, which, as usual, give a vanishing contribution, we have four structures when we expand the numerator. These are \begin{equation} \text{Numerator}=2q^4k^i k^j+4q^2 q^n k^n k^i k^j+2q^n q^m k^n k^m k^i k^j +q^i q^j q^n q^m k^n k^m \ . \end{equation} Each of these can then be computed using formulas~(\ref{general_structure}) and~(\ref{master_prefactor}) for the loop integral over momentum $k$.
Performing these loop integrals, we will obtain a result whose tensorial structure will be of the form \begin{equation} \int_{\vec k}\frac{\text{Numerator}}{k^2(\vec k+\vec q)^2}=\frac{q^3}{32}\left(\frac{q^2}{2}T^{ij}_2+q^i q^j \right) \; . \end{equation} As this is contracted with $R^{0i0j}(\bar h)$, we can simplify further by explicitly removing the trace and writing $q^iq^j\rightarrow \frac{2 }{3}q^2T_2^{ij}$. And so, finally, we may write: \begin{eqnarray} \text{Figure \ref{R4_rad} } = i \frac{m_1 m_2^2}{\Lambda^6 M_{\rm pl}^4}\left(\frac{7}{192}\right) \int dt \, \int_{\vec q} \frac{q^5T_2^{ij}(\hat q)}{q^2 } e^{-i\vec q \cdot \vec {x}_{12}(t)} \times R^{0i0j}(\bar h) \; . \end{eqnarray} We can compute the integral over $q$ using (\ref{eq:trensotial}) and (\ref{qInt}), obtaining \begin{eqnarray} \text{Figure \ref{R4_rad} } = i \frac{m_1 m_2^2}{\Lambda^6 M_{\rm pl}^4}\left(\frac{7}{16 \pi^2}\right) \int dt \, \left(\frac{3r^ir^j}{r^8}-\frac{\delta^{ij}}{r^6} \right) \times R^{0i0j}(\bar h) \; . \end{eqnarray} When we include the same diagram but with $1 \leftrightarrow 2$ exchanged, we find that this diagram corresponds to adding to the effective action of a single object (which, as we explained, corresponds to the action of the two compact objects seen from far away) a term which has the functional form of a quadrupole (see eq.~(\ref{eq:PNeffectiveaction})): \begin{eqnarray}\label{eq:radiativeone} S_{\Lambda,rad}=\int dt \, \frac{21}{4\pi^6} \left(\frac{2\pi}{\Lambda r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2 \frac{m_1 m_2}{m_1+m_2} \left(r^ir^j-\frac{r^2 \delta^{ij}}{3} \right) \times R^{0i0j}(\bar h). \end{eqnarray} We can therefore think of this term as a correction to the quadrupole of the binary. \subsection{Corrections to current quadrupole: $\tilde{\mathcal{C}}^2$} Similarly to the calculation of the potential presented in the previous section, the lowest order diagrams are not as simple as in the $\mathcal{C}^2$ case.
Due to the epsilon structure, $\delta R(h\rightarrow H_{00}) \delta \tilde R (h\rightarrow H_{00})$ vanishes. This means that we must compute diagrams with higher order couplings to the world lines (graviton-source couplings), as explained in more detail in~App.~\ref{App:new_quartics}. Consequently, we will need the leading order contribution to structures like $\delta R(h\rightarrow H_{00}) \delta \tilde R (h\rightarrow v^i H_{0i})$. We will also need structures like $\delta R(h\rightarrow H_{00}) \delta \tilde R (\bar h )$, which are given in App.~\ref{App:new_quartics}. Putting these all together with the right pre-factors given by the Feynman rules, we have two types of diagrams: one where the velocity vertex is paired with a mass vertex acting on the same source, and the other where it is the only vertex acting on a source. More explicitly, the first diagram is given in Fig. \ref{RtildeR_radiative_1} while the other is given by Fig. \ref{RtildeR_radiative_2}. \subsubsection{First Diagram: paired $v$} \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{RtildeR_radiative_1.pdf} \caption{A diagram with the first ``topology'' for the leading order radiative correction diagram for $\tilde{\mathcal{C}}^2$.} \label{RtildeR_radiative_1} \end{figure} This diagram has contributions from two kinds of structures: one where the $\delta R(h\rightarrow H_{00}) \delta \tilde R (\bar h )$ is on source $1$ (which is contracted only with one leg), and one where it is on source $2$ (which is contracted with two legs). Let us examine the first case. The numerator will be proportional to \begin{equation} \epsilon_{nmi}(k+q)^i (k+q)^j k^j k^n \rightarrow \epsilon_{nmi}q^i q^j k^j k^n \end{equation} as anything proportional to $k^2$ vanishes in the loop (by the usual argument), and where we have taken advantage of the structure of the epsilon tensor.
When we compute the loop integral over $\vec k$ we get \begin{equation} \epsilon_{nmi}q^i q^j T_2^{jn}\sim 3 \epsilon_{nmi}q^i q^j q^j q^n-\epsilon_{nmi}q^i q^n\ , \end{equation} each of which vanishes by anti-symmetry. Consequently, the only (possible) non-zero piece is when the contraction is on source $2$. Therefore, after a shift in the loop integral and after including combinatorics, we can write \begin{eqnarray} \text{Figure \ref{RtildeR_radiative_1} } &=& 4i\frac{m_1 m_2^2}{\tilde \Lambda^6 M_{\rm pl}^4} \int dt \, \int_{\vec q}\int_{\vec k} \frac{\epsilon_{nmr}q^r q^s (k+q)^s k^n \epsilon_{ijk}k^k k^l}{q^2 k^2 (\vec k+\vec q)^2 } e^{-i\vec q \cdot \vec {x}_{12}(t)} \nonumber \\ &&\times v_2(t)^m \left(R^{ijl0}(\bar h)+2R^{ilj0}(\bar h)\right) \; . \end{eqnarray} Performing the loop integral over $\vec k$, the momentum dependent tensor structure of the diagram becomes \begin{equation} \frac{1}{8}\frac{\epsilon_{nmr}\epsilon_{ijk}q^r \left(q^s T_4^{snkl}q^3-2q^2 T_3^{nkl}q^2\right)}{q^2}\quad\rightarrow\quad -\frac{1}{8}\frac{\epsilon_{nmr}\epsilon_{ijk}q^r T_3^{nkl}q^4}{q^2} \; . \end{equation} Examining the structure of $T_3$, we see that the term $\propto \hat q^n \hat q^k \hat q^l$ will vanish, as will the term with $\hat q^n \delta^{kl}$, leaving us with just \begin{eqnarray} \text{Figure \ref{RtildeR_radiative_1} } &=& i\frac{m_1 m_2^2}{\tilde \Lambda^6 M_{\rm pl}^4}\frac{1}{64} \int dt \, \int_{\vec q} \frac{\epsilon_{nmr}\epsilon_{ijk}q^r \left(q^k \delta^{nl}+q^l \delta^{nk} \right)q^3}{q^2 } e^{-i\vec q \cdot \vec {x}_{12}(t)} \nonumber \\ &&\times v_2(t)^m \left(R^{ijl0}(\bar h)+2R^{ilj0}(\bar h)\right) \nonumber \\ &\rightarrow&\quad i\frac{m_1 m_2^2}{\tilde \Lambda^6 M_{\rm pl}^4}\frac{1}{64} \int dt \, \int_{\vec q} \frac{8 \,q^j q^l q^3}{q^2 } e^{-i\vec q \cdot \vec {x}_{12}(t)} \times v_2(t)^i R^{ijl0}(\bar h)\; , \end{eqnarray} where we have utilized the trace-free condition of the on-shell radiation graviton in our final manipulations.
Performing the final integrals over $\vec q$, as illustrated in App.~\ref{app:loops}, we arrive at \begin{eqnarray} \text{Figure \ref{RtildeR_radiative_1} } &=& i\int dt\, \frac{m_1 m_2^2}{\tilde \Lambda^6 M_{\rm pl}^4}\frac{1}{2\pi^2} \frac{1}{r^8}\left(6v_2(t)^i r^j r^l-v_2(t)^i r^2 \delta^{jl} \right) \times R^{ijl0}(\bar h)\; . \end{eqnarray} \subsubsection{Second Diagram: alone $v$} \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{RtildeR_radiative_2.pdf} \caption{A diagram with the second ``topology'' for the leading order radiative correction from the $\tilde{\mathcal{C}}^2$ operator.} \label{RtildeR_radiative_2} \end{figure} Let us now compute the contribution from Fig. \ref{RtildeR_radiative_2}, where the velocity vertex is isolated. Using our Feynman rules, we have \begin{eqnarray} \text{Figure \ref{RtildeR_radiative_2} } &=& 4i\frac{m_1 m_2^2}{\tilde \Lambda^6 M_{\rm pl}^4} \int dt \, \int_{\vec q}\int_{\vec k} \frac{\epsilon_{nmr}(k+q)^r (k+q)^s q^s q^n \epsilon_{ijk}k^k k^l}{q^2 k^2 (\vec k+\vec q)^2 } e^{-i\vec q \cdot \vec {x}_{12}(t)} \nonumber \\ &&\times v_1(t)^m \left(R^{ijl0}(\bar h)+2R^{ilj0}(\bar h)\right) \; . \end{eqnarray} Following almost identical manipulations to those for the previous diagram, we can first compute the loop integral over $\vec k$, which gives us \begin{eqnarray} \text{Figure \ref{RtildeR_radiative_2} } &=& 2i\frac{m_1 m_2^2}{\tilde \Lambda^6 M_{\rm pl}^4} \int dt \, \int_{\vec q} \left(\frac{-1}{64}\right)\frac{\epsilon_{nmr}\epsilon_{ijk}q^n T_3^{rkl}q^4}{q^2} e^{-i\vec q \cdot \vec {x}_{12}(t)} \nonumber \\ &&\times v_1(t)^m \left(R^{ijl0}(\bar h)+2R^{ilj0}(\bar h)\right) \; . \end{eqnarray} As $\epsilon_{nmr}=-\epsilon_{rmn}$, we see that this diagram is exactly the same as the previous one with $v_1\leftrightarrow v_2$ exchanged and an overall minus sign.
And so we obtain \begin{eqnarray} \text{Figure \ref{RtildeR_radiative_2} } &=& i\int dt\, \frac{m_1 m_2^2}{\tilde \Lambda^6 M_{\rm pl}^4}\frac{1}{2\pi^2} \frac{1}{r^8}(-1)\left(6v_1(t)^i r^j r^l-v_1(t)^i r^2 \delta^{jl} \right) \times R^{ijl0}(\bar h)\; . \end{eqnarray} \subsubsection{Total radiative corrections} Now, notice that when we take these two radiation diagrams together we get the nice structure (throwing out the trace term, as it vanishes for the on-shell graviton): \begin{eqnarray} \text{Figure \ref{RtildeR_radiative_1} } +\text{Figure \ref{RtildeR_radiative_2} }= i\int dt\, \frac{m_1 m_2^2}{\tilde \Lambda^6 M_{\rm pl}^4}\left(-\frac{3}{\pi^2}\right) \frac{1}{r^8}\Delta v_{12}^i r^j r^l \times R^{ijl0}(\bar h) \end{eqnarray} where $\vec{\Delta v}_{12}\equiv \vec v_1-\vec v_2$. Now we also need to compute the same diagrams but with the sources exchanged, i.e. $1\leftrightarrow2$. When we do so, we get a contribution that would exactly cancel the one above (as $\vec{\Delta v}_{12}+\vec{\Delta v}_{21}=0$) were it not for the altered masses in the pre-factor. In summary, all of these diagrams combine to give: \begin{eqnarray} \text{Figure \ref{RtildeR_radiative_1} } +\text{Figure \ref{RtildeR_radiative_2} }+(1\leftrightarrow2)&=&i\int dt\, \frac{3}{\pi^2}\frac{m_1 m_2(m_1-m_2)}{\tilde \Lambda^6 M_{\rm pl}^4} \frac{1}{r^8}\Delta v_{12}^i r^j r^l \times R^{ijl0}(\bar h)\nonumber\\ &=& i\int dt\, \frac{12}{\pi^6} \left(\frac{4 G^2 m_1 m_2 (m_1-m_2)}{r^2}\right)\left(\frac{2\pi}{\tilde \Lambda r}\right)^6\Delta v_{12}^i r^j r^l \times R^{ijl0}(\bar h) \; . \end{eqnarray} Notice that we write this as an effective ``magnetic'' quadrupole term, $\sim \Delta v_{12}^i r^j r^l$.
Taking into account combinatorial coefficients, we have \begin{eqnarray} S_{\tilde\Lambda,rad}=\int dt\, \frac{12}{\pi^6}\left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \left(\frac{4G^2 m_1 m_2 (m_1-m_2)}{r^2}\right) \Delta v_{12}^i r^j r^l \times R^{ijl0}(\bar h) \; . \end{eqnarray} The vanishing of the result for $m_1=m_2$ can be understood by noticing that the binary (and in particular the angular momentum distribution) in this limit becomes symmetric under a parity transformation around the origin. Therefore, the effective single-object action must be invariant under this same parity. So far, we have neglected another diagram for radiation where we generate an effective quadrupole. This diagram corresponds to contracting the Riemann of the radiation graviton with the velocity graviton-source vertex. Upon performing the integrations, we obtain an effective quadrupole term whose tensorial structure is of the form of the product of two angular momenta: $\epsilon_{ilm}v^l r^m\; \epsilon_{jpq}v^p r^q\sim L_i L_j$, minus its trace. This term is however subleading by one power of $v$ with respect to the current-quadrupole radiation for the emitted field, and therefore we neglect it here. However, one should keep in mind that this term would contribute comparably to the leading corrections if one were interested in the power emitted at the source, because this correction to the quadrupole would interfere with the Newtonian quadrupole (unless the orbit is circular, in which case the interference vanishes). \subsection{Corrections to current quadrupole: ${\mathcal{C}}\tilde{\mathcal{C}}$} The structure of the leading diagram in this case is the same as in Fig.~\ref{R4_rad}, where we contract one of the Riemann tensors in $\tilde{\mathcal{C}}$ with the external graviton.
Apart from combinatoric factors, the resulting diagram is identical to the one computed in Sec.~\ref{sec:radiativeonesec}, eq.~(\ref{eq:radiativeone}), with the replacement \begin{equation} R^{0i0j}(\bar h) \quad\to\quad -\epsilon^{iab} \left(\frac{R^{abj0}}{2}+ R^{ajb0}\right)\ \ \times \ \ \frac{2}{4} \ , \end{equation} where the factor of $2/4$ comes from the different combinatorics. We therefore obtain \begin{eqnarray}\nonumber &&S_{\Lambda_-,rad}=-\int dt \, \frac{21}{8\pi^6} \left(\frac{2\pi}{\Lambda_- r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2 \frac{m_1 m_2}{m_1+m_2} \left(r^ir^j-\frac{r^2 \delta^{ij}}{3} \right)\\ &&\qquad\qquad \times\ \epsilon^{iab} \left(\frac{R^{abj0}}{2}+ R^{ajb0}\right) \ . \end{eqnarray} This can be interpreted as an effective current quadrupole with the tensor structure of $r_i r_j - r^2 \delta_{ij}/3$: \begin{equation} J_{ij} = \frac{63}{8\pi^6} \left(\frac{2\pi}{\Lambda_- r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2 \frac{m_1 m_2}{m_1+m_2}\left(r^ir^j-\frac{r^2 \delta^{ij}}{3} \right) \ . \end{equation} Interestingly, due to the CP-odd nature of the ${\mathcal{C}}\tilde{\mathcal{C}}$ term, we find that there is a term with a tensor structure similar to the GR quadrupole which couples to the emitted graviton through an $\epsilon$-tensor, and therefore contributes as a $J_{ij}$ current quadrupole (which normally, unlike here, contains an $\epsilon$-tensor in its definition). In particular, this means that the effective one-body system violates parity.
\subsection{Summary}\label{sec:radiationsummary} Combining the results of the calculation of the corrections to the quadrupole and current quadrupole of a binary system, the leading correction to the radiative coupling of the effective single object is given by terms in the single-object effective action of the form \begin{eqnarray}\nonumber \label{Lambda_rad_coupling} S_{\Lambda,rad}=\int dt \, \frac{21}{4\pi^6} \left(\frac{2\pi}{\Lambda r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2 \frac{m_1 m_2}{m_1+m_2} \left(r^ir^j-\frac{r^2 \delta^{ij}}{3} \right) \times R^{0i0j}(\bar h)\ ,\\ \end{eqnarray} and \begin{eqnarray} \label{Lambda_tilde_rad_coupling} &&S_{\tilde\Lambda,rad}=\int dt\, \frac{12}{\pi^6}\left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \left(\frac{4G^2 m_1 m_2 (m_1-m_2)}{r^2}\right) \Delta v_{12}^i r^j r^l \times R^{ijl0}(\bar h) \; ,\\ \label{Lambda_minus_rad_coupling}\nonumber &&S_{\Lambda_-,rad}=-\int dt\, \frac{21}{8\pi^6} \left(\frac{2\pi}{\Lambda_- r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2 \frac{m_1 m_2}{m_1+m_2} \left(r^ir^j-\frac{r^2 \delta^{ij}}{3} \right) \times \epsilon_{jkl} R^{kli0}(\bar h),\\ \; \end{eqnarray} generated by the $\mathcal{C}^2/\Lambda^6$, $\tilde{\mathcal{C}}^2/\tilde\Lambda^6$ and ${\mathcal{C}}\tilde{\mathcal{C}}/\Lambda_-^6$ terms, respectively.
These expressions can be cast in a more familiar form by comparing to the structure of the leading gravitational multipole coupling (to on-shell gravitons) in the center of mass frame anticipated in~(\ref{eq:PNeffectiveaction}) (see for example~\cite{Goldberger:2009qd}): \begin{eqnarray} S_{{\rm ext.\, obj.}}\supset \frac{1}{2}\int dt \, Q_{ij} R_{0i0j}-\frac{1}{3}\int dt \, J_{ij} \epsilon_{jkl}R^{kli0} + \dots \ , \end{eqnarray} where $Q_{ij}$ and $J_{ij}$ are the mass and ``current'' quadrupole moments~\footnote{See also \cite{Maggiore:1900zz} for a discussion based on a direct multipole expansion of the linearized Einstein equations.} given---to leading order---by the integrals \begin{eqnarray} Q_{ij}&=&\int d^3x \; \left(x^i x^j-\frac{1}{3}\delta^{ij}x^2\right) T^{00}\quad \overset{\text{pp limit}}{\longrightarrow} \quad \sum_a m_a \left(x_a^i x_a^j-\frac{1}{3}\delta^{ij}x_a^2\right)\ , \\ J_{ij}&=&\int d^3x \; \left[x^{\{i}\epsilon^{j\}nm}- \frac{1}{3} \delta^{ij}x^{\{l}\epsilon^{l\}nm}\right]x^n T^{0m} \quad \\ &&\qquad\overset{\text{pp limit}}{\longrightarrow} \quad \sum_a m_a \left[x_a^{\{i}\epsilon^{j\}nm}-\frac{1}{3} \delta^{ij}x_a^{\{l}\epsilon^{l\}nm}\right]x_a^n v_a^m\ , \end{eqnarray} where $a^{\{i}b^{j\}}=\frac{1}{2} \left(a^i b^j+a^jb^i\right)$. When we consider two point particles in their center-of-mass frame, we can write the quadrupole moments in the simplified form \begin{eqnarray} Q_{ij}&=&\mu \left(r^i r^j -\frac{1}{3}\delta^{ij}r^2\right) \\ J_{ij}&=&\frac{\mu}{m_1+m_2} \left[r^{\{i}\epsilon^{j\}nm}-\frac{1}{3}\delta^{ij}r^{\{l}\epsilon^{l\}nm}\right] r^n (m_1v_2^m+m_2 v_1^m) \nonumber \\ &=& - \frac{\mu(m_1 -m_2)}{m_1+m_2} \left[r^{\{i}\epsilon^{j\}nm}-\frac{1}{3}\delta^{ij}r^{\{l}\epsilon^{l\}nm}\right]r^n v_{12}^m \end{eqnarray} where $\mu=m_1 m_2/(m_1+m_2)$ is the reduced mass.
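The center-of-mass reduction above is easy to verify symbolically. The following sketch (ours, for illustration; it only uses the definitions just quoted together with $\vec x_1 = (m_2/M)\,\vec r$, $\vec x_2 = -(m_1/M)\,\vec r$) checks the mass-quadrupole prefactor $\mu$ and the current-quadrupole prefactor $-\mu(m_1-m_2)/M$:

```python
import sympy as sp

m1, m2 = sp.symbols('m1 m2', positive=True)
M = m1 + m2
mu = m1*m2/M

# CM-frame positions scale as x_a = c_a * r, so velocities are v_a = c_a * rdot
c1, c2 = m2/M, -m1/M

# Q_ij ~ sum_a m_a x_a x_a  ->  coefficient of r^i r^j
coefQ = sp.simplify(m1*c1**2 + m2*c2**2)
assert sp.simplify(coefQ - mu) == 0

# J_ij ~ sum_a m_a x_a x_a v_a  ->  coefficient of r^{i} r^n v_12^m
# (three factors of c_a, since v_12 = v_1 - v_2 = rdot)
coefJ = sp.simplify(m1*c1**3 + m2*c2**3)
assert sp.simplify(coefJ + mu*(m1 - m2)/M) == 0
```

The odd number of factors of $c_a$ in `coefJ` is what makes $J^{(N)}_{ij}$ vanish for equal masses.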
When we compare our results to these expressions, we find that, for the terms in $\Lambda$ and $\tilde\Lambda$, the coupling not only has a similar tensor structure (this is unsurprising, as it really is just a general consequence of gauge invariance---see \cite{Goldberger:2009qd}), but in fact has the same structure as the leading PN case, with a modified coefficient. The term in $\Lambda_-$ has instead a different structure beyond the tensorial one. In other words, we can write our total radiative coupling simply as renormalized quadrupole and current quadrupole moments as follows \begin{eqnarray} Q_{ij} &\rightarrow& \left(1+ \frac{21}{2\pi^6} \left(\frac{2\pi}{\Lambda r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2\right)Q^{(N)}_{ij} \\ J_{ij} &\rightarrow&\left(1-\frac{36}{\pi^6}\left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2\right) J^{(N)}_{ij}\\ \nonumber && +\frac{63}{8\pi^6} \left(\frac{2\pi}{\Lambda_- r}\right)^6 \left(\frac{2G (m_1+m_2)}{r}\right)^2 Q^{(N)}_{ij} \; , \end{eqnarray} where $Q^{(N)}_{ij}$ and $J^{(N)}_{ij}$ are the Newtonian mass and current quadrupoles. \section{Observable consequences for LIGO-VIRGO}\label{sec:observeligo} In principle, we can fold in these corrections to the effective action and the radiative coupling to modify the dynamics of a compact binary during inspiral and deduce the observable consequences. In broad strokes, the effective potential changes the acceleration on each object, which shifts the frequency of the emitted gravitational wave. Meanwhile, the corrected radiative coupling---as well as the shifted frequency itself---changes the amplitude of the emitted radiation and consequently the rate at which power is emitted. Additionally, the effective potential also changes the energy as a function of the orbital parameters, and so its modification affects the orbital decay.
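Schematically, the three effects just listed enter through the standard adiabatic inspiral relations (a textbook reminder in $c=1$ units, with $P$ the emitted power from the usual quadrupole formula):
\begin{equation*}
\omega^2 = \frac{1}{\mu r}\frac{\partial V}{\partial r}\ ,\qquad
P = \frac{G}{5}\,\langle \dddot{Q}_{ij}\dddot{Q}_{ij}\rangle\ ,\qquad
\frac{dE(r)}{dt} = -P\ ,
\end{equation*}
so a correction to $V$ shifts $\omega$ and $E(r)$, while a correction to the multipoles shifts $P$ and hence the rate of orbital decay.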
To deduce the observable consequences in a completely accurate way for general orbits, one would use the results derived in the former sections and just numerically integrate until the PN expansion breaks down when $v/c \sim 1$. Exploring all of the parameter space (various mass ratios, ellipticity, spin orientations, etc.) goes beyond the scope of this first paper on the EFT. In the future, however, it would be very worthwhile to perform such an exploration and produce templates for the LIGO-VIRGO (and future gravitational wave observatories) pipeline. For the purpose of this paper, we will restrict ourselves to a much simpler analysis, which is sufficient for us to illustrate the main observable effects. We consider (quasi) circular motion of two compact objects and treat the radiation reaction in an adiabatic manner, that is, in the regime where $\omega_{\rm orb} \gg \dot r/r$, with $r$ being the orbital separation. For a given orbital separation, the orbital frequency of the particles is given by the full post-Newtonian equations of motion to some order, which we indicate as $\omega_{PN}(r, m_i, S_i)$. If we were to turn on the $\mathcal{C}^2/\Lambda^6$ term, how would this frequency change? Using (\ref{Lambda_pot}) to derive the acceleration on a single particle, we can compute the (leading order) change in the frequency as a function of $\omega_{PN}$ using the simple mechanics of circular motion, as we already explained in footnote (\ref{footnote:omega}). We find that \begin{equation} \frac{\Delta\omega_{\Lambda}}{\omega_{PN}}= -\frac{2304G^3 (m_1+m_2)(m_1^2+m_2^2)}{ \Lambda^6} \cdot \frac{1}{r^{11}(t)} \cdot \frac{1}{\omega_{ PN}^2}\ . \end{equation} As we have computed the correction to the effective potential (and thus the equation of motion) only to leading order in the PN expansion, it would be inconsistent to keep the full $\omega_{PN}$ in the expression above.
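As a cross-check of the $-2304$ coefficient (ours, for illustration), one can rederive it from the Peace/Log potential correction $\Delta V_\Lambda$ computed earlier, using only the elementary circular-orbit relation $\mu\omega^2 r = \partial_r V$; a sympy sketch:

```python
import sympy as sp

G, Lam, r, m1, m2 = sp.symbols('G Lambda r m1 m2', positive=True)
M = m1 + m2
mu = m1*m2/M

# Peace/Log correction to the potential (C^2 term):
dV = (sp.Integer(2)/sp.pi**6) * (G*m1*m2/r) * (2*sp.pi/(Lam*r))**6 \
     * 4*G**2*(m1**2 + m2**2)/r**2

# Circular orbit: mu w^2 r = dV_tot/dr, so the shift in w^2 is dV'/(mu r)
w2_shift = sp.diff(dV, r)/(mu*r)

# dw/w = (1/2) * (shift of w^2)/w_N^2, with the Newtonian w_N^2 = G M / r^3
wN2 = G*M/r**3
ratio = sp.simplify(w2_shift/(2*wN2))

# compare to -2304 G^3 M (m1^2+m2^2) / (Lambda^6 r^11 w^2) at w^2 = w_N^2
expected = -2304*G**3*M*(m1**2 + m2**2)/(Lam**6 * r**11) / wN2
assert sp.simplify(ratio - expected) == 0
```

The same manipulation with the Newtonian $\omega_N$ substituted gives the form of the frequency shift quoted just below.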
Consequently, we input the leading Newtonian contribution to the frequency $\omega_N=\sqrt{G (m_1+m_2)/r^3}$, yielding \begin{equation} \frac{\Delta\omega_{\Lambda}}{\omega_{N}}= -\frac{9}{\pi^6}\frac{4G^2 (m_1^2+m_2^2)}{ r^2}\left(\frac{2 \pi}{ \Lambda r}\right)^6\ . \end{equation} Of course, in order to be able to measure this effect, one should have computed $\omega_{PN}$ to a sufficiently high order that the neglected PN terms do not overshadow this effect. Let us move on to the effect on the radiation. In the full post-Newtonian treatment, the asymptotic strain tensor incident upon the detector is given by some function $h^{ij}_{PN}(t-R, \hat n)$, where $R$ is the distance from the detector to the source and $\hat n$ is the direction of propagation of the wave. We want to compute how, in the quasi-static approximation, this is altered by the presence of our new interactions. From the leading PN radiative coupling (i.e.~the usual quadrupole formula \cite{Maggiore:1900zz}) we have, in the usual TT gauge, \begin{equation} \left[h_{ij}^{TT}(t,\vec x)\right]_{quad}=\frac{2 G}{R} \Lambda_{ij,kl}(\hat n) \ddot{Q}_{kl}(t-R)\ , \end{equation} where the $\Lambda$ tensor is given by \begin{equation} \Lambda_{ij,kl}(\hat n)=P_{ik}P_{jl}-\frac{1}{2}P_{ij}P_{kl}\ , \end{equation} with $P_{ij}=\delta_{ij}-\hat n_i \hat n_j$. Restricting ourselves to a quasi-circular orbit, we take $\vec r=(r \cos (\omega t) , r\sin( \omega t), 0)$. The second time derivative of the quadrupole moment is then given by \begin{eqnarray} \ddot Q_{ij}&=&-4 \omega^2 \left(Q_{ij}-\langle Q_{ij}\rangle\right) \\ &=&-2 \mu r^2 \omega^2\left( \begin{array}{ccc} \cos (2 \omega t) & \sin (2 \omega t) & 0 \\ \sin (2 \omega t) & -\cos (2 \omega t) & 0 \\ 0 & 0 & 0 \\ \end{array} \right) \; , \end{eqnarray} where $\langle\,\cdot\,\rangle$ denotes the time average: the static part of $Q_{ij}$ does not contribute to the radiation. Note that the dominant frequency of the gravitational radiation is $2\omega$.
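As a cross-check, substituting $\omega_N^2=G(m_1+m_2)/r^3$ into the earlier expression for $\Delta\omega_\Lambda/\omega_{PN}$ must reproduce the simplified formula above. A minimal numerical sketch, with $G=c=1$ and hypothetical parameter values:

```python
import math

G = 1.0
m1, m2, r, lam = 1.0, 0.8, 30.0, 0.2  # hypothetical values (G = c = 1)

omega_N = math.sqrt(G * (m1 + m2) / r**3)

# Original form, with omega_PN replaced by its Newtonian value
form1 = -2304 * G**3 * (m1 + m2) * (m1**2 + m2**2) / (lam**6 * r**11 * omega_N**2)

# Simplified form quoted in the text
form2 = -(9 / math.pi**6) * (4 * G**2 * (m1**2 + m2**2) / r**2) \
        * (2 * math.pi / (lam * r))**6
```

Both expressions reduce to $-2304\,G^2(m_1^2+m_2^2)/(\Lambda^6 r^8)$, so they agree to machine precision.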
As mentioned briefly above, one source of corrections from the EFT terms we are now considering is in the shift in frequency, another is in the ``rescaling'' of the quadrupole moment. When we put these effects together, we have to leading order that \begin{eqnarray} \left[\Delta h_{ij}^{TT}(t,\vec x)\right]_\Lambda&=& \left(\frac{2\pi}{\Lambda r}\right)^6 \frac{8 G^2}{\pi^6 r^2} \left( \frac{21(m_1+m_2)^2}{4}-9(m_1^2+m_2^2)\right) \nonumber\\ && \times \frac{2 G}{R} \Lambda_{ij,kl}(\hat n) \ddot{Q}_{kl}(t-R)\ . \end{eqnarray} In terms of scaling, we can see directly that \begin{equation} \Delta h_\Lambda \sim h \times \left(\frac{\Delta \omega}{\omega} \right) \sim h \times \left(\frac{1}{\Lambda r} \right)^6 \left(\frac{G m}{r} \right)^2 \sim h \times \left(\frac{1}{\Lambda r} \right)^6 v^4 \; . \end{equation} We can see that the effect is suppressed not only by $1/(\Lambda r)^6$, but also by four powers of~$v$. Therefore, for a given $\Lambda$ and $r$, in order to trust this correction, one needs to compute the ordinary PN waveform up to an order larger than $v^4$ by an amount that can compensate for the $1/(\Lambda r)^6$ suppression. Continuing on, we can compute the effects generated from the $\tilde{\mathcal{C}}^2/\tilde \Lambda^6$ term. Here the potential is not just a function of the radial distance, and so recovering the change in frequency is slightly more complicated than in the previous case. As a first step, let us compute the change in the equations of motion due to the effective potential. On particle~$1$, for instance, we have \begin{eqnarray} [m_1 \Delta a_1]_j &=& \frac{\delta}{\delta x_1^j} \int dt \left(-V_{\tilde \Lambda}\right) \\ &=& \frac{216 G}{11\pi^6} \frac{4 G^2 m_2 m_1 }{r^2}\left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \cdot \frac{1}{r^{3}}\times \left[ (m_1 S_1^i +m_2 S_2^i)\epsilon_{inm} \left(\frac{11 v_{12}^n r_{12}^m r_{12}^j}{r^2}-2v_{12}^n \delta_m^j \right. \right. \nonumber \\ &&\left. \left. 
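The bracket in this expression simply combines the quadrupole rescaling with twice the frequency shift (twice, because $h\propto\ddot Q\propto\omega^2$ at fixed $r$). A quick numerical consistency check of that decomposition, with hypothetical values and $G=c=1$:

```python
import math

G = 1.0
m1, m2, r, lam = 1.2, 0.7, 25.0, 0.3   # hypothetical values (G = c = 1)
M = m1 + m2
eps = (2 * math.pi / (lam * r))**6     # common (2 pi / Lambda r)^6 factor

# Fractional quadrupole rescaling and fractional frequency shift from the text
dQ_over_Q = (21 / (2 * math.pi**6)) * eps * (2 * G * M / r)**2
dw_over_w = -(9 / math.pi**6) * eps * 4 * G**2 * (m1**2 + m2**2) / r**2

combined = dQ_over_Q + 2 * dw_over_w   # Delta h / h = dQ/Q + 2 domega/omega
bracket = eps * (8 * G**2 / (math.pi**6 * r**2)) \
          * (21 * M**2 / 4 - 9 * (m1**2 + m2**2))
```

The agreement confirms that the coefficient $21(m_1+m_2)^2/4-9(m_1^2+m_2^2)$ is just the sum of the two separate effects.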
-\frac{11 r_{12}^n\delta_m^j (r_{12}\cdot v_{12})}{r^2}\right)- (m_1 \dot S_1^i +m_2 \dot S_2^i)\epsilon_{inj} r_{12}^n \right] \;, \end{eqnarray} and similarly for particle $2$. $\vec v_{12} = \vec v_{1} -\vec v_{2}$ is the relative velocity between the two particles. This expression looks a bit daunting, but there are a few simplifications that occur at the order we are working at. First of all, to leading order in the PN expansion $\dot S=0$ as the spin angular momentum is conserved. This means that at this order we may drop terms proportional to $\dot S$. When we work in the circular motion limit, $\vec v_{12}$ and $ \vec r_{12}$ are perpendicular, and if we again take $\vec r_{12}=(r \cos (\omega t) , r\sin( \omega t), 0)$ we can simplify the above force as \begin{eqnarray} [m_1 \Delta a_1]_j &=& \frac{216 G}{11\pi^6} \frac{4 G^2 m_2 m_1 }{r^2}\left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \cdot \frac{1}{r^{3}} \times \Big[-11 v_{12} (m_1 S_1^z +m_2 S_2^z) \hat r_{12}^j \nonumber \\ &&-2 \left( (m_1 \vec S_1 +m_2 \vec S_2)\times \vec v_{12}\right)^j\Big]\ , \end{eqnarray} where $v_{12}$ is just the vector's magnitude. If the spin vectors are in arbitrary directions their components in the orbital plane serve to torque the orbit. To simplify the situation, let us just consider the objects' spin to be perpendicular to the orbital plane (this is also astrophysically the most likely scenario). In particular, let us take them to be in the same direction as the orbital angular momentum, that is, we take $S$ to have a positive value if it is pointed in the $+ \hat z$ direction. In this restricted scenario, we have that \begin{eqnarray} [m_1 \Delta a_1]_j &=& -\frac{2808 G}{11\pi^6} \frac{4 G^2 m_2 m_1 }{r^2}\left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \cdot \frac{ v_{12} (m_1 S_1 +m_2 S_2)}{r^{3}} \hat r_{12}^j \; . \end{eqnarray} We can see that the force is an attractive one for positive $(m_1 S_1 +m_2 S_2)$. 
We are now back to the form of a radially symmetric force, for which $\omega\propto\sqrt{ \frac{F_r}{m\, r}}$. Therefore, computing the change in the orbital frequency, we get \begin{equation} \frac{\Delta \omega_{\tilde \Lambda}}{\omega_{PN}}=\frac{1404 }{11\pi^6} \left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \cdot \frac{ 4 G^2 v_{12} (m_1 S_1 +m_2 S_2)}{r^3} \; . \end{equation} This change in the orbital frequency changes the gravitational wave emission by shifting the frequency of the quadrupole motion. This contribution takes precisely the form computed above for the $\mathcal{C}^2/\Lambda^6$ term. It is given by \begin{eqnarray}\label{eq:tildeeffect} \left[\Delta h_{ij}^{TT}(t,\vec x)\right]_{quad,\, \tilde\Lambda}&=& \frac{2808}{11\pi^6}\left(\frac{ 2\pi}{{\tilde \Lambda}r}\right)^6 \frac{4 G^2 v_{12} (m_1 S_1 +m_2 S_2)}{r^{3}} \nonumber \\ && \times \frac{2G}{R} \Lambda_{ij,kl}(\hat n) \ddot{Q}_{kl}(t-R) \; . \end{eqnarray} As before, we can estimate the size of this contribution. In the case of (nearly) equal-mass binary compact objects of similar spin we have that \begin{equation}\label{eq:spinpotentialcoup} \Delta h_{quad, \, \tilde \Lambda} \sim h \times \frac{\Delta \omega_{\tilde \Lambda}}{\omega_{PN}} \sim h \times \left(\frac{1}{\tilde \Lambda r} \right)^6 \left(\frac{G m}{r} \right)^2 \times v^{2+s}\sim h \times \left(\frac{1}{\tilde \Lambda r} \right)^6 v^{6+s}\ , \end{equation} where we have used that the spin angular momentum scales as $S\sim L v^s$, where $ L \sim m r v$ is the orbital angular momentum and $s=1$ for maximally rotating compact objects, $s=4$ for co-rotating objects, and $s=\infty$ for non-rotating objects. As we can see, \begin{equation} \Delta h_{quad, \, \tilde \Lambda} \sim \Delta h_{ \Lambda} \times \left(\frac{ \Lambda}{\tilde\Lambda} \right)^6 v^{2+s}\ , \end{equation} which tells us the quadrupole effect from this operator is smaller than the one in $\mathcal{C}^2$ when~$\Lambda\sim\tilde \Lambda$.
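The coefficient here is half the one appearing in the force, since $\omega\propto\sqrt{F_r}$ implies $\Delta\omega/\omega \simeq \tfrac{1}{2}\,\Delta F_r/F_r$ for a small radial perturbation. A finite-difference sketch of this halving, with hypothetical numbers:

```python
import math

def omega(F, mu, r):
    # Circular-orbit frequency from the radial force balance mu * omega^2 * r = F
    return math.sqrt(F / (mu * r))

mu, r, F = 0.5, 10.0, 2.0   # hypothetical values
dF = 1e-6 * F               # small radial-force perturbation

ratio = (omega(F + dF, mu, r) - omega(F, mu, r)) / omega(F, mu, r)
half = 0.5 * dF / F         # first-order prediction: half the fractional force change
```

This is why the $2808/11$ in the acceleration becomes $1404/11$ in $\Delta\omega_{\tilde\Lambda}/\omega_{PN}$.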
For the $\tilde{\mathcal{C}}^2/\tilde \Lambda^6$ term there is also another way to change the gravitational waveform. We still have to consider the correction to the radiative coupling. The linearized gravitational wave given by the coupling to the current quadrupole is given by \cite{Maggiore:1900zz} \begin{equation}\label{eq:current_quadr_emission} \left[h_{ij}^{TT}(t,\vec x)\right]_{curr \; quad}=\frac{1}{R}\frac{4 G}{3} \Lambda_{ij,kl}(\hat n) \hat n _m \left( \epsilon^{mkp}\ddot{J}^{pl}(t-R)+\epsilon^{mlp} \ddot{J}^{pk}(t-R)\right)\ . \end{equation} Now, for circular orbits, $\vec r$ and $\vec v_{12}$ are always perpendicular, and their cross product is perpendicular to the orbital plane. In particular, defining the direction $\hat z$ such that $\epsilon^{jnm}r^n v_{12}^m=\omega r^2 \hat z ^j$, the tensor $J^{pl}$ is given by \begin{eqnarray} J^{pl}&=& - \frac{\mu(m_1 -m_2)}{2(m_1+m_2)}r^2 \omega \left(r^p \hat z^l + r^l \hat z^p \right)\\ &=&- \frac{\mu(m_1 -m_2)}{2(m_1+m_2)}r^3 \omega\left( \begin{array}{ccc} 0 & 0 & \cos (\omega t) \\ 0 & 0 & \sin (\omega t) \\ \cos (\omega t) & \sin (\omega t) & 0 \\ \end{array} \right) \\ \Longrightarrow \quad \ddot{J}^{pl} &=&-\omega^2 J^{pl} \; . \end{eqnarray} Note that the frequency of the current quadrupole is just $\omega$, not $2\omega$ as in the mass quadrupole case. We can write the contribution to the gravitational wave coming from the ``renormalization'' of the current quadrupole moment as \begin{eqnarray} \left[\Delta h_{ij}^{TT}(t,\vec x)\right]_{curr \; quad, \, \tilde{\Lambda}}&=&-\frac{36}{\pi^6}\left(\frac{2\pi}{\tilde \Lambda r}\right)^6 \frac{4G^2 (m_1+m_2)^2}{r^2} \\ \nonumber &&\times \frac{4 G}{3 R}\Lambda_{ij,kl}(\hat n) \hat n _m \left( \epsilon^{mkp}\ddot{J}^{pl}(t-R)+\epsilon^{mlp} \ddot{J}^{pk}(t-R)\right) \; , \end{eqnarray} where the second line contains the second derivative of the leading current quadrupole moment of General Relativity.
Dimensionally, this scales like \begin{equation} \left[\Delta h_{ij}^{TT}(t,\vec x)\right]_{curr \; quad, \, \tilde{\Lambda}} \sim h\times \frac{\Delta J}{J} \times v \sim h \times \left(\frac{1}{\tilde \Lambda r} \right)^6 \left(\frac{G m}{ r} \right)^2 \times v \sim h\times \left(\frac{1}{\tilde \Lambda r} \right)^6 v^5\ , \end{equation} where $h$ above is the size of the leading quadrupole radiation. As we can see, \begin{equation} \left[\Delta h_{ij}^{TT}(t,\vec x)\right]_{curr \; quad, \, \tilde{\Lambda}} \gg \left[\Delta h_{ij}^{TT}(t,\vec x)\right]_{quad, \, \tilde{\Lambda}} \; . \end{equation} Notice that, if one is interested just in the emitted power, $\sim h^2$, the contribution from the current quadrupole has a polarization orthogonal to the one emitted by the mass quadrupole, and therefore it interferes only with the (subleading) current quadrupole emission of General Relativity. We therefore have \begin{equation} [\Delta P]_{curr \; quad, \, \tilde{\Lambda}} \sim \dot h^2 v^{6}\sim P_{N}\, v^6 \sim \frac{\left[\Delta P\right]_{quad, \, \tilde{\Lambda}}}{v^s}\gg \left[\Delta P\right]_{quad, \, \tilde{\Lambda}}\ , \end{equation} and so the leading observable effect from the $\tilde{\mathcal{C}}^2/ \tilde{\Lambda}^6$ term for a binary inspiral comes dominantly from the corrected radiation coupling. We can perform a similar analysis for the $\mathcal{C}\tilde{\mathcal{C }}$ operator. The leading effect for the amplitude of the emitted gravitational waves comes from the modified current quadrupole.
Applying the formula~(\ref{eq:current_quadr_emission}) to our case, we have \begin{eqnarray} \left[\Delta h_{ij}^{TT}(t,\vec x)\right]_{curr \; quad, \, {\Lambda}_-}&=&\frac{63}{8\pi^6}\left(\frac{2\pi}{ \Lambda_- r}\right)^6 \frac{4G^2 (m_1+m_2)^2}{r^2} \\ \nonumber &&\times \frac{4 G}{3 R}\Lambda_{ij,kl}(\hat n) \hat n _m \left( \epsilon^{mkp}\ddot{Q}^{pl}(t-R)+\epsilon^{mlp} \ddot{Q}^{pk}(t-R)\right) \; , \end{eqnarray} which, at the parametric level, scales as \begin{equation} \left[\Delta h_{ij}^{TT}(t,\vec x)\right]_{curr \; quad, \, {\Lambda_-}} \sim h \times \left(\frac{1}{ \Lambda_- r} \right)^6 \left(\frac{G m}{ r} \right)^2 \sim h\times \left(\frac{1}{ \Lambda_- r} \right)^6 v^4\ . \end{equation} Notice however that, if we are interested in the emitted power, the polarization of this contribution is orthogonal to the one associated to the mass quadrupole, as it comes from the current quadrupole. Moreover, upon averaging over time and angle, the interference between $\left[\Delta h_{ij}^{TT}(t,\vec x)\right]_{curr \; quad, \, {\Lambda}_-}$ and its Newtonian counterpart, $\left[ h_{ij}^{TT}(t,\vec x)\right]_{curr \; quad, \, {N}}$, vanishes. At the subleading level, there is a correction to the mass quadrupole that scales as $\Delta Q/Q\sim v^5$. However, similarly to what happened for the current quadrupole, the form of the induced mass quadrupole is such that it does not interfere with the Newtonian one after time- and angle-integration. Therefore, the leading contribution to the total emitted power comes from the correction to the potential, which is of order \begin{equation} V_{\Lambda_-}\sim \frac{G m^2}{r} \frac{1}{(\Lambda_- r)^6} \left(\frac{G m}{r}\right)^3 a\ , \end{equation} where $a=\sqrt{S_i S^i}/(G M^2)$ is the spin parameter in the Kerr metric of the faster spinning black hole.
This implies $\Delta \omega/\omega\sim v^6 a$, and in turn we have \begin{equation} [\Delta P]_{ quad, \, \Lambda_-} \sim P_{N}\, v^6\ a \sim P_{N}\, v^{5+s}\ , \end{equation} where we used that $a\sim v^{s-1}$. Independently of the radiated power, there is a possibly interesting effect that one might consider, in relation to the evolution of the spins of black hole binary systems. Depending on the sign of the coefficient of the $\mathcal{C}\tilde{\mathcal{C }}$ operator, this might lead to an enhancement or suppression of the component of the spin along the direction between the two black holes, an effect that can build up over many orbits and could induce a very striking property for the spins of coalescing black holes, in contrast with the standard GR prediction. A careful study of such an effect requires taking into account spin-orbit and spin-spin couplings to the same order in $v$, which is beyond the scope of this paper. A summary of the parametric dependence of the various contributions is given in Table~\ref{tab:vcount}. However, one should not naively disregard subleading contributions. For example, one could consider the following effect. All the above arguments apply to the instantaneous wave detection. However, for real gravitational wave detectors, detection is a more complicated process that involves the analysis of many cycles of the source starting from some initial condition. Therefore, observationally it could be that we are particularly sensitive to effects that, although naively smaller in terms of the amplitude of the emitted gravitational wave, build up with time (such as, for example, the aforementioned effect on the spin alignment with the orbital plane), and in particular can affect the initial conditions at the observational window. For example, for the lightest objects, such as neutron star binaries, the sources can stay in the LIGO band for many cycles during their inspiral phase.
Since studying these effects requires knowledge of the post-Newtonian evolution at the relevant order, which is not at our disposal here, we leave the study of these additional effects to future work. This is also why we cannot give a sharp assessment of the implications of the first LIGO-VIRGO detections on our EFT, and in particular on the scales $\Lambda$'s. The second event~\cite{Abbott:2016nmj} has probably too low a signal-to-noise ratio. In the first event~\cite{TheLIGOScientific:2016qqj}, the signal-to-noise ratio is dominated by the highly relativistic phase, for which we do not have (yet) predictions. However, it is probably true that $\Lambda$'s such that $\Lambda r_s\sim 1$, where $r_s$ is the Schwarzschild radius of the final black hole, would give order one modifications to the signal. Therefore, $\Lambda$'s such that $\Lambda \sim 1/r_s\sim 10^{-2} $km$^{-1} $ are probably excluded. $\Lambda$'s such that $\Lambda \lesssim 1/r_s\sim 10^{-2} $km$^{-1}$ are probably also excluded, because the merger phase occurs beyond the regime of validity of the EFT, and so we expect order one corrections (though, as we stressed, this depends on the UV completion)~\footnote{These bounds appear to be confirmed by naively extrapolating the bounds on the PN parameters of~\cite{Yunes:2016jcc}, where both events seem to contribute comparably. In fact, even though our effect is 2PN, the shift in the phase produced scales as $v^{16}/(\Lambda r_s)^6$, so that one can rescale the bounds obtained on the 8th order PN parameter in~\cite{Yunes:2016jcc}. By very naively extrapolating the results of Fig.~4 of~\cite{Yunes:2016jcc} we find that indeed the bound is $\Lambda\gtrsim 10^{-1}/r_s$. However, one should not over-interpret this bound, as one should have perturbative control of the theory in all the regions in $r$ where the signal-to-noise ratio is relevant. Therefore, a dedicated analysis appears to us to still be needed.}.
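The quoted number follows from the Schwarzschild radius of the final black hole. Assuming a remnant mass of roughly $62\,M_\odot$ for the first event, a back-of-the-envelope sketch:

```python
G_SI = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m / s
M_sun = 1.989e30      # solar mass, kg

M_final = 62 * M_sun                 # assumed remnant mass
r_s = 2 * G_SI * M_final / c**2      # Schwarzschild radius in metres
inv_rs = 1.0 / (r_s / 1000.0)        # 1 / r_s in km^-1
```

This gives $1/r_s\approx 5\times10^{-3}\,{\rm km}^{-1}$, i.e.\ of order $10^{-2}\,{\rm km}^{-1}$ as quoted in the text.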
A detailed study of the current bounds on $\Lambda$'s and forecasts for constraints (or measurements!) from future observations is left to future work. \begin{table} \centering \begin{tabular}{c|c|c|c} \hline & \multicolumn{2}{ |c| }{Parity Even}& Parity Odd \\ \hline Operator &$\mathcal{C}^2$ & $\tilde{\mathcal{C }}^2$ & $\mathcal{C}\tilde{\mathcal{C }}$ \\ \hline $\Delta V/V$ & $v^4$ & $v^7 a$ & $v^6 a$ \\ $\Delta \omega/\omega$ & $v^4$ & $v^7 a$ & $v^6 a$ \\ $\Delta Q_{ij}/Q_{ij}$ & $v^4$ & $-$ & $v^5$ \\ $\Delta J_{ij}/J_{ij} $ & $-$ & $v^4$ & $v^3$ \\ \hline $\Delta P/P$& $v^4$ & $v^6$ & $v^6 a$ \\ \hline $\Delta h/h $& $v^4$ & $v^5$ & $v^4$ \\ \hline \end{tabular} \caption{Summary of the relative contribution of the parity even and parity odd operators to various quantities, in comparison to the leading contribution to the same quantity in General Relativity. Only the dependence on $v$ and $a$, the spin parameter in the Kerr metric of the fastest spinning black hole, is shown (there is always, respectively, a factor of $1/(\Lambda r)^6$, $1/(\tilde \Lambda r)^6$ or $1/(\Lambda_- r)^6$ in each column, left implicit). $a$ is related to the spin vector of a black hole $S_i$ as $a=\sqrt{S_i S^i}/(G M^2)$. The `$-$' sign indicates that the contribution is subleading and was not estimated.} \label{tab:vcount} \end{table} \subsubsection*{Finite size operators} So far in this section, we have neglected the contribution of the finite size operators present in the point particle action in (\ref{eq:point}), such as the one proportional to $d_2^{(1)}$ (see~\cite{Goldberger:2004jt} for a list of the leading finite-size operators). These operators do contribute to the effective potential and multipoles of the one-body effective action. An estimate gives an induced relative correction to the quadrupole and potential of order $\frac{d_2^{(1)}}{M_{\rm pl}^2 r^5}$.
The size of this contribution crucially depends on the coefficients, such as $d_2^{(1)}$, whose value is determined by the UV description of the system. For black holes in pure GR, all these coefficients are fixed in terms of the mass and spin of the black hole, and are parametrically of the form $d_2^{(1)}\sim m r_s^4$, where $r_s$ is the Schwarzschild radius. This makes this contribution scale as~$v^{10}$. In the case of black holes in our EFT, we should consider two limits (see Fig.~\ref{fig:scaleLambda21111}). If $\Lambda\gtrsim 1/r_s$, then we can compute these terms within the validity of the EFT, and find that $d_2^{(1)}\sim m r_s^4 (\Lambda r_s)^{-6}$, so that the induced $\Delta h/h\sim v^{10} (\Lambda r_s)^{-6}$. In this regime, this effect is leading with respect to the one we computed. However, this is also the regime where the post-Newtonian result is very small, and furthermore exactly the regime where the perturbative numerical simulation discussed in Sec.~\ref{sec:numerical} can be performed all the way to the merger (as the effect of our operators is perturbative up to the horizon). We therefore do not compute this contribution in this regime. When instead $\Lambda\lesssim 1/r_s$, the black hole solution cannot be derived within the regime of our EFT, and therefore the calculation of the finite-size coefficients, such as $d_2^{(1)}$, depends on the unknown UV completion. In our setup, in order to avoid constraints from small-scale experiments, it is crucial to assume the `softness' of the UV completion (see Sec.~\ref{sec:action}). Since at $r\sim 1/\Lambda\gtrsim r_s$ we are still in the post-Newtonian regime, the gravitational field is still weak, and therefore our softness assumption implies that at distances shorter than $1/\Lambda$, effects suppressed by powers of $1/\Lambda$ disappear. This implies that the induced finite-size terms scale as $d_2^{(1)}\sim m r_s^4$, without additional powers of $1/\Lambda$.
In this case, the effect from the finite-size operators is dominant over the ones we compute only for large distances $r\gtrsim 1/(\Lambda^2 r_s)$. Therefore, the results presented in this section give the leading effect in the parametric window \begin{equation} r_s\lesssim \frac{1}{\Lambda}\lesssim r\lesssim \frac{1}{\Lambda^2 r_s}\ , \end{equation} which is the most interesting region, as it is where the effect is the largest. Similar conclusions apply for the operators suppressed by $\tilde\Lambda$ and $\Lambda_-$. If we now pass to neutron stars, we can see that a similar discussion holds. We can argue that the induced modifications to the ordinary contact terms from our EFT are even smaller than in the case of the black hole: since the system is never completely relativistic, our UV-softness assumption suggests that there is a further suppression. Therefore, these corrections contribute at an order smaller than $v^{10}$, and we can neglect them as in the case of the black holes. \begin{figure}[!ht] \centering \includegraphics[trim = 5cm 8cm 3cm 9cm, clip, width=0.3\textwidth]{scales.pdf} \caption{A schematic view of the scales in our theory. The blue solid circle shows the Schwarzschild radius of a black hole ($r_S \sim G M$), and the black solid circle shows the orbit of the second black hole with radius $r$. The dashed and dotted red circles are two possible choices of the scale $\Lambda$ for a given black hole, $1/\Lambda\gtrless r_S$, which lead to qualitatively different contributions, as discussed in the text. \label{fig:scaleLambda21111} } \end{figure} \section{Constraints from other experiments\label{sec:otherexp}} Experimental measurements of our effective action can be classified into two categories depending on the strength of the measured gravitational potential: weak field or strong field.
We will first summarize current measurements in weak gravity systems by discussing their current precision and the one required to probe our effective operators. Secondly, we will discuss the only known strong gravity systems before the gravitational wave events at LIGO: the X-ray binaries. \subsection{Weak gravity systems} Weak gravity systems are systems where the gravitational potential is much smaller than unity in natural units. They can contain strongly relativistic objects like neutron stars and black holes as long as the gravitational field experienced by the test mass is weak. Phenomenologically, weak gravity systems correspond to the situation where the distance $r$ is so large that $ v^2 \sim G M/r$ is small. In these systems, in order for measurements to constrain the higher dimensional operators, one needs experimental measurements with precision at least of order $v^4$ in the case of the $\mathcal{C}^2$ term, at least $v^6$ in the case of the $\tilde{\mathcal{C }}^2$ term, and at least $v^6a$ in the case of $\mathcal{C}\tilde{\mathcal{C }}$ (see Table~\ref{tab:vcount}). The most relevant of the current experimental tests of gravity in weak gravity systems are weak equivalence principle tests~\cite{Wagner:2012ui,Smith:1999cr,Dimopoulos:2006nk} and indirect measurements of gravitational waves through orbital decay~\cite{Taylor:1982zz}. In the following, we will summarize the current measurements. Earth based experiments, in particular the E\"ot-Wash experiment, constrain violations of the weak equivalence principle through measurement of the differential acceleration of beryllium and titanium in the Earth's gravitational field to a precision of $(a_{\rm Be} - a_{\rm Ti})/a = (0.3 \pm 1.8) \times 10^{-13}$~\cite{Wagner:2012ui,Smith:1999cr}. However, as we described, within our assumptions the violation of the weak equivalence principle from our effective Lagrangians is negligibly small.
Lunar laser ranging accurately measures the distance between the Earth and the Moon~\cite{Turyshev:2006gm}. The system has a typical $v\sim 10^{-5}$. Because of our UV-softness assumption, which we discussed at length in Sec.~\ref{sec:action}, the effect of our operators is never larger than the one obtained at $r\sim 1/\Lambda$. Therefore the effect is at most of order $v^4 \sim 10^{-22}$. This means that it is negligibly small, independently of the value of $\Lambda$. Neutron star binaries are binary star systems where two neutron stars orbit around a common center of mass. Two famous systems are the Hulse-Taylor pulsar and the double pulsar PSR J0737-3039. The closest of these binary systems seen to date have orbital periods of a few hours. The period decay rate of binary neutron star systems is measured to $10^{-6}$ precision. The corrections due to the higher dimensional operators are at most at the level of $v^4 \sim 10^{-13}$, far below current precision. Similar measurements of the orbital decay rate can be done with another type of compact binary object, the low mass X-ray binaries (LMXB), for example A0620-00~\cite{McClintock} and XTE J1118+480~\cite{Orosz:2004ac}. However, the orbital decay in these systems is not yet measured to high precision, and therefore, with a companion orbital period of a few hours, these systems do not place constraints on our theory unless the precision of the measurement reaches $v^4\sim (GM/T)^{4/3}$, with $T$ being the orbital period. For the strongest system, this reaches $\sim 10^{-11}$. Short distance modifications to the gravitational force are measured down to distances as small as a few microns, much smaller than the $1/\Lambda$'s that are of interest for the LIGO-like experiments.
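The lunar estimate above can be reproduced from the orbital velocity of the Moon; a sketch using standard values for the Earth's gravitational parameter and the mean Earth-Moon distance:

```python
import math

GM_earth = 3.986e14   # Earth's gravitational parameter, m^3 / s^2
c = 2.998e8           # speed of light, m / s
r_moon = 3.844e8      # mean Earth-Moon distance, m

v = math.sqrt(GM_earth / r_moon) / c   # Moon's orbital velocity in units of c
v4 = v**4                              # maximal size of the effect
```

The result, $v^4\sim 10^{-22}$, matches the estimate in the text.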
However, as we argued in the introduction and in Sec.~\ref{sec:action}, since these experiments are performed with very light sources~\cite{Hoskins:1985tn,Kapner:2006si,Geraci:2008hb} (for a review, see~\cite{Adelberger:2009zz}), our `UV-softness' assumption implies that the effect of our extension to GR is negligibly small in these cases. To conclude, current measurements of weak gravity systems do not constrain the parameter space of our theory, due to the smallness of the gravitational binding energy ($\sim G M m/r$) compared with the energy of the lighter object. In particular, the higher dimensional operators violate the strong equivalence principle in a very special way: gravitational fields have additional couplings among themselves. In weak gravity systems, the gravitational energy is a very small component of the total energy density that gravitates, and therefore one needs to perform very precise experiments in order to measure these effects. \subsection{Strong gravity system: X-ray binaries} X-ray binaries are binary systems that consist of a massive star and a compact object: a neutron star or a black hole. In this section, we will discuss a type of X-ray binary where the emission from the accretion of the massive star onto a black hole can be observed to good precision~\cite{Reynolds:2013qqa}. The emission profile of the accretion disk can be measured with continuum fitting and X-ray relativistic reflection methods. These measurements of the emission profile of the accretion disk can be used to determine the innermost stable orbit, and provide an alternative measurement of the mass and the spin of the compact object, compared to the measurement of the orbital period of the binary. The measurements of X-ray binaries, especially those of GRS 1915+105 and Cygnus X-1, can be used to put constraints on deviations from the Kerr metric of a black hole~\cite{Bambi:2014nta,Bambi:2014oca}.
For example, the mass of the host black hole of Cygnus X-1 can be determined by measuring the orbital period of the massive star up to sub-leading corrections due to spin-orbit couplings. In this case, the sub-leading terms due to spin and our higher dimensional operators are both velocity suppressed, since the massive star is at a location where the gravitational field is already weak, and are much beyond the current sensitivity of orbital period measurements, which is at the percent level. The spin of the black hole and new corrections from our higher dimensional operators can furthermore in principle be determined by measuring the innermost orbit of the same (or similar) Kerr black hole through measurement of the X-ray emission from the accretion disk~\cite{Reynolds:2013qqa}. The $\tilde{\mathcal{C }}^2$ term corrects the Kerr metric in a way that is proportional to the spin of the black hole, because $\tilde{\mathcal{C }}$ vanishes for the Schwarzschild metric, while the $\mathcal{C}^2$ term corrects the metric even in the absence of spin. Focusing first on the ${\mathcal{C}}^2$ operator, the Newtonian potential and the potential in eq.~(\ref{Lambda_pot}) can be reorganized, in the limit $m_2\ll m_1$, in the following way \begin{align} V_\Lambda = \frac{G m_1 m_2}{r}\left(1 + \frac{2}{\pi^6}\left(\frac{2 G m_1}{r}\right)^2 \left(\frac{2 \pi}{\Lambda r}\right)^6 \right)\ . \end{align} It is clear that there is a sizable correction for $\Lambda r\sim1$, which could well be probed by X-ray observations. However, the region $\Lambda r\sim 1$ is exactly where our EFT is supposed to break down. Our EFT is indeed an expansion in $\Lambda r\gg1$, and predictions for $\Lambda r\sim 1$ strongly depend on the UV completion. Of course, given the strong dependence on $\Lambda r$, the effect quickly becomes very small as we make $\Lambda r\gg 1$.
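The steep $r$-dependence of this correction can be made explicit: at fixed $\Lambda$ the fractional correction falls as $r^{-8}$. A small sketch with hypothetical values ($G=c=1$):

```python
import math

def frac_correction(m1, r, lam, G=1.0):
    # Fractional correction to the Newtonian potential in the m2 << m1 limit
    return (2 / math.pi**6) * (2 * G * m1 / r)**2 * (2 * math.pi / (lam * r))**6

m1, lam = 1.0, 0.5        # hypothetical values
near = frac_correction(m1, 10.0, lam)
far = frac_correction(m1, 20.0, lam)
```

Doubling the radius suppresses the correction by $2^8=256$: two powers from the $(2Gm_1/r)^2$ factor and six from the cutoff suppression.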
Therefore, overall, it appears to us that these are potentially very interesting probes, if the associated observations are able to control the statistics and the potential systematic effects associated to the astrophysical matter present in the environment, so that they can set limits also when $\Lambda r$ is safely larger than one (for recent reviews, see~\cite{Reynolds:2013qqa,McClintock:2013vwa}). Computing the modifications to the metric due to our extension of gravity is rather straightforward, by solving the perturbative extended Einstein equations~(\ref{eq_temp_source}). However, it is clear that to compute observable quantities for these systems, many more astrophysical ingredients are needed, which we leave for future work. Similarly, we can show that the $\tilde{\mathcal{C }}^2$ term generates corrections to the leading order Newtonian potential depending on the spin of the black hole. The size of the correction is~$\mathcal{O} (10^{-2})$ for a maximally rotating black hole with $\frac{2\pi}{\Lambda r} \sim 1$. With current measurements, the observational effect of the above operators in an X-ray binary system is quite degenerate with the spin of the black hole. Therefore, unless the corrections due to the higher dimensional operators are such that the best fit dimensionless black hole spin parameter is measured to be larger than unity ($a/m > 1$), such an effect is in practice experimentally indistinguishable. Finally, let us mention the constraint coming from a system where we usually observe the strong field regime of GR, which is cosmology. Here, the most powerful bound comes from BBN, which however constrains our length scales $1/\Lambda$ to be shorter than just about a thousand kilometers. This is clearly a subleading constraint. Of course one might wonder what happens in our setup to the early universe cosmology, for example to inflation. The constraint from BBN leaves some room in energy for inflation to happen within the validity of our EFT.
However, it could also be that inflation happens at higher energies. In this case, the answer will depend on the UV completion, which is not at our disposal. In particular, if the UV completion follows the ``softness'' assumptions we made in this paper, then it might be easier to expect that inflation will happen in the usual way. Similar considerations apply to other strong field phenomena that might happen in the universe. Clearly, it would be interesting to study these aspects further. \section{Conclusion\label{sec:conclusions}} The recent discovery of gravitational waves from a black hole merger by the LIGO-VIRGO collaboration opens up a new observational window on the universe and the laws that determine it. One of the characteristics associated with these observations is that black holes probe the strong field regime of gravity, where the deviation of the metric from Minkowski is order one. Only in the cosmological setting has this regime of gravity been probed in a comparably precise way, but, as we have described in the paper, cosmology is not yet directly sensitive to very high values of the curvature. We have therefore constructed the most general effective field theory for a modification of General Relativity that satisfies the following constraints. The first is that it is testable with gravitational wave observations. This means that the effect of new physics should not be largely suppressed with respect to the curvature scale of the probed compact objects. This forces our higher dimension operators to be suppressed by a scale of order of a few inverse km. Second, this modification of GR must not be ruled out by already existing experiments. This forces our theory not to alter the coupling to matter. We argue that this choice is stable under radiative corrections. Furthermore, this same requirement imposes a strong condition on the UV completion of our EFT. 
We need to assume that the physical effect of our extension to GR {\it saturates} at the scale where our EFT breaks down. We argue that this is both essential for the viability of the EFT and a rather common phenomenon in field theory. Third, it must not violate any widely accepted principle of physics, such as Lorentz invariance, unitarity and locality. In particular, this implies that our theory does not allow for superluminal propagation of signals, which in turn forces the leading operators in our EFT to take the form of four contracted Riemann tensors, with some restrictions on the relative sign and size of the coefficients. Fourth and last, but not least, we restrict our EFT not to have any additional light degrees of freedom beyond the two helicity-two states of the usual graviton. While this is a limitation (one that can be rather easily fixed), as experiments like LIGO-VIRGO can also probe theories with additional degrees of freedom with masses smaller than a few inverse km, our EFT represents the most general extension of GR in the UV without additional degrees of freedom. Therefore, by testing our EFT, one is guaranteed to investigate a vast class of physically consistent theories all at once. After setting up the EFT formalism, we have calculated the effect of the leading higher dimensional operators in the inspiral phase of a black hole merger. We have done this by adapting the EFT for extended objects, which was formulated for GR, to our theory. The resulting effects, which amount to a shift in the phase, amplitude and polarization of the emitted waves, though small corrections compared to the leading gravitational wave emission, can in principle be extracted from future gravitational wave events. For this reason, it will be worthwhile to compute the actual modification to the waveform that our EFT produces in the inspiral phase. The merger phase, with $v\approx 1$, probes regions where our higher dimensional operators have the largest effect on $h$. 
It is very important to calculate their impact in this phase. We have argued and highlighted a procedure according to which currently available numerical codes should be modifiable to compute these effects. \vspace{0.15cm} Of course, detecting a deviation from GR in the form of our EFT would represent a revolution in physics, though quite an unexpected one. \subsubsection*{Acknowledgments} We thank S.~Dimopoulos, W.~East, G.~Giudice, P.~Graham, M.~Okounkova, M.~Mirbabayi, L.~Shao, L.~Stein, M.~Zaldarriaga and S.~Zhiboedov for interesting conversations and comments on the draft. We thank Frans Pretorius for carefully reading and commenting on a preliminary draft. SE and LS are partially supported by DOE Early Career Award DE-FG02-12ER41854. JH and VG are partially supported by NSF grant PHYS-1316699. \section*{Appendices}
package com.example.network;

/**
 * Created by roro on 2015/7/1.
 */
public class Response {

    private String result;

    public Response() {
    }

    public String getResult() {
        return result;
    }

    public void setResult(String result) {
        this.result = result;
    }
}
Q: "me left wanting more by a narrative"

> Given this considerable dichotomy, between the me that was significantly impressed by Mann's obvious talent, and the more emotional, "enjoyment-centric" me left wanting more by a narrative that seemed dry and lifeless, I've resolved to revisit this work in a few years (it's only 150 pages) for a follow up.

Source: https://www.goodreads.com/book/show/53061.Death_in_Venice

I would like to ask you to help me with understanding the passage in bold from the above excerpt. I understand that the author expressed the experience of reading Mann's novel: one side of his reader personality was impressed, the other had objections. What does "left wanting" mean in this context? Does the word "left" function as a verb?

A: The writer is saying that there were two 'mes': the 'me' who was impressed by Mann's talent, and the 'me' that was not satisfied. 'Left' is the past participle of the verb 'leave', here functioning as an adjective. To leave someone wanting more of something is to give that person a certain amount, but not enough, so that they are not satisfied.

> I was hungry, but my mother only gave me a small sandwich, so I was left wanting more.

Participle adjectives

A: There's an old English StackExchange question that asks something similar: https://english.stackexchange.com/questions/153742/what-does-leaves-you-wanting-more-mean

This is an idiomatic expression - "left X wanting more" can mean at least two opposite things, as the commenters on that question mentioned:

* X was dissatisfied/disappointed by the subject and wished there were more substance that would have improved the situation.
* X was so wowed by the subject that they couldn't get enough, and at the end, they were ready to have more of it.

The meaning here seems to be the first - the second "me" is disappointed by Mann's writing.

(And yes, if you couldn't already tell, "left" is a verb - the past tense or past participle of "to leave".)
\section{Introduction} The subject of thermodynamics is notoriously difficult for mathematicians. V.I. Arnold [Ar] famously put it in a nutshell as follows: \begin{quote} Every mathematician knows that it is impossible to understand any elementary course in thermodynamics. \end{quote} He continues by explaining that \begin{quote} the reason is that [the] thermodynamics is based on a rather complicated mathematical theory, on [the] contact geometry. \end{quote} It is the purpose of this note to present an axiomatisation which is mathematically transparent, avoids anthropomorphisms, is elementary (no contact geometry) and preserves all of the structure of the classical theory. It also allows us to carry out explicit computations for the classical cases (ideal gas, van der Waals gas) in a simple and general way which works for virtually any of the standard models for real gases. We illustrate this for an equation of state suggested by Feynman to allow for the fact that, for real gases, the adiabatic index is far from being constant, and for a new model which combines the advantages of the van der Waals and the Feynman gas. We also discuss a theme which seems to us to be of eminent practical significance but which we have never seen treated in the literature---namely, how much information is required to recreate the equations of state of a substance. Thus we show that knowledge of all of the isotherms and just two adiabats (dually, all of the adiabats and just two isotherms) suffices. Our system of axioms has two basic ingredients---a measure space, i.e., a set with a $\sigma$-finite positive measure (which describes the possible states of a thermodynamic system), and two pre-orderings on it, \lq \lq warmer than'' and \lq \lq adiabatically accessible from''. We show that if these orderings satisfy several physically plausible conditions, then they are induced by numerical functions---empirical temperature and empirical entropy in the thermodynamical case. 
We then show that an area condition which was made explicit by the distinguished economist P.A. Samuelson ensures the existence of essentially unique absolute temperature and entropy. These have the property that the four Maxwell relations then hold. If we then interpret these as integrability conditions, we can deduce the existence of the four energy type functions of thermodynamics. This is in contrast to many standard treatments where these relations are deduced from the existence of the energy functions via the Schwarz Lemma. In a final section, we present a unified and systematic approach to the classical thermodynamical identities between derived quantities and discuss some known and new models of real gases in much more detail than one finds in standard treatments. \section{Samuelson's vision} Samuelson noted that classical thermodynamics and economics are related by a common search for an optimising basis for observed behaviour. In thermodynamics the observed isotherms and adiabats are hypothesised to be derived from the minimisation of a scalar quantity \lq \lq energy''. In economics, the observed input demand functions are hypothesised to be derived from the maximisation of the scalar quantity \lq \lq profits''. Deriving a test for these hypotheses then becomes a common task of both disciplines. Thus we can interpret results of Maxwell as establishing the equivalence of the existence of an energy function which is minimised with the fact that the isotherms and the adiabatics fulfill a simple and natural geometric condition which we called in an earlier paper the \lq \lq $S$-condition''---see below for a precise formulation of the latter. Such a condition is certainly implicit in Maxwell's argument. However, it is not stated explicitly. 
Samuelson claims no priority for having noticed via this diagram that constrained optimisation implies this relationship, but we have been unable to find another reference stating the equilibrium condition in such a geometrically simple way. Thus, although the above area condition appears {\it implicitly} in many areas, we have found no previous instance of its explicit formulation in the literature. In [Co1] we obtained a number of equivalent formulations of this condition, notably a rather lengthy partial differential equation in two arbitrarily chosen functions which have the given curves as contours (corresponding to empirical temperature and entropy in the thermodynamical situation). We then showed that when this equation is satisfied there are canonical recalibrations of empirical temperature and entropy for which the Jacobian is identically \lq \lq 1'', and this implies the existence of an energy function and the validity of the Maxwell relations for the recalibrated quantities. Note that, as in Maxwell's original treatment, we derive his relations from an area condition in $(p,V)$-space, the existence of recalibrations with the $J=1$ condition being a consequence of the $S$ condition. This paper can thus be viewed as an attempt to rehabilitate and perhaps clarify the Maxwell approach. We shall use this theory to discuss existence and uniqueness of solutions to the above partial differential equation. In particular, we show that one family of curves (say the isotherms), together with two members of the other family, uniquely determines the family of adiabats (for a more precise mathematical formulation, see below). The Maxwell/Samuelson area condition thus establishes an important duality between isotherms and adiabats and, more generally, between any two suitable families of level curves which are derived by minimizing energy in two different constraint regimes. 
These facts can be used to derive some explicit formulae for the dual functions (adiabats given isotherms, isotherms given adiabats) in two standard textbook cases, the ideal gas and the van der Waals gas. Recently there has been renewed interest in the derivation of the Maxwell relations in thermodynamics from the Jacobian identity $\dfrac {\partial (p,V)}{\partial (T,S)} = 1$ (see, for example, [Ri]). As is well known, this identity means that the corresponding map from the $(T,S)$-plane into the $(p,V)$-plane is area preserving, and so this approach links thermodynamics to such areas as geometrical mechanics, where area-preserving mappings play a central role. Maxwell was well aware that his relations had a geometric foundation and indeed he used Euclidean geometry \lq \lq to get his four identities in an amazingly obscure way'' [Am]. Perhaps because of its opacity, Maxwell's use of geometry to derive his relationships seems to have largely disappeared from the physics textbooks. In the discipline of economics, however, exactly this obscure geometrical argument of Maxwell was the subject of the acceptance speech of Paul Samuelson, the first American Nobel laureate in Economics. \section{Orderings and the axiomatics} In this section we discuss briefly a topic which is probably the basic problem of the theory of measurement---when is a physical quantity described (in a meaningful fashion) by a number? If this can be done, then the values of the quantity in question can be compared (as in \lq \lq warmer than'', \lq\lq worth more than'' etc.) 
and the basic question is when the converse holds, i.e., when is such an ordering induced by a numerical function (\lq \lq temperature'', \lq \lq price'')? This has been examined and re-examined countless times (to our knowledge, the first rigorous formulation of a mathematical theorem of this sort was due to Debreu [De], who gave a sufficient condition for a preordering to be induced by a utility function). We give a brief discussion, firstly for the sake of completeness and secondly because we would like to emphasise what we regard as the central point, namely that the real line has a simple and elegant characterisation as an ordered space. The latter fact is well known (although perhaps not quite as well known as it should be), but we have never seen it used on the problem we are now addressing. The opening pages of Maxwell's treatise [Ma] give a lucid treatment of temperature as an ordering. \subsection{Orderings and utility functions} Suppose that we are given a set $\Omega$ and a surjective mapping $f$ from it onto the real line. (Since we are only concerned with the order-theoretic aspects of the latter, we can, at will, replace it by any order-isomorphic set, for example an open interval, in particular the half-line). Then $f$ induces a preordering $\leq_f$ where we define: $x\leq_f y$ if and only if $f(x)\leq f(y)$. This preordering has the following properties: \begin{enumerate} \item it is total and has no largest or smallest element; \item it is order complete; \item $\Omega$ contains a countable, order dense subset. \end{enumerate} For our purposes it will be convenient to restate these properties in terms of the family of predecessor sets of elements. 
Thus, we define, for each $\alpha \in \Omega$ the sets $$A_\alpha=\{x\in \Omega : x\leq_f \alpha\} , \qquad U_\alpha =\{x\in \Omega : x<_f \alpha\},$$ which have the following properties: \begin{enumerate} \item The $A_\alpha$ are distinct and totally ordered by inclusion; \item the countable subfamily $\{A_q: q \in \text {\bf Q}\}$ is order dense; \item $\bigcup A_\alpha = \Omega$, $\bigcap A_\alpha = \emptyset$; \item The family of the $A_\alpha$ is closed under intersections and for each $\alpha$, $$A_\alpha=\bigcap \{A_\beta:\beta > \alpha\}.$$ \end{enumerate} The family of the $U$'s satisfies the corresponding properties, except that 4. is replaced by \begin{itemize} \item[4'.] the family is closed under unions and for each $\alpha$, $U_\alpha=\bigcup \{U_\beta:\beta < \alpha\}$. \end{itemize} Further, if $\alpha < \beta <\gamma$, then $$A_\alpha \subset U_\beta\subset A_\beta\subset U_\gamma\subset A_\gamma.$$ (Note that if we start with a family $A_\alpha$ as above, and define $U_\alpha=\bigcup \{A_\beta:\beta < \alpha\}$, then the above conditions are fulfilled). We remark at this point that in this paper the inclusion $\subset$ will always be exclusive, i.e., $A \subset B$ implies that $A$ and $B$ are distinct. \noindent We are interested in the following converse statements: \begin{theor} Suppose that we are given a family $\cal A$ of subsets of $\Omega$ which is totally ordered by inclusion, is closed under arbitrary intersections and satisfies the properties \begin{enumerate} \item if $A \in \cal A$, then $A=\bigcap\{B\in {\cal A}, A \subset B\}$; \item there is a countable subset ${\cal A}_0$ which is order dense in $\cal A$; \item $\bigcap \cal A = \emptyset$ and $\bigcup {\cal A} = \Omega$. 
\end{enumerate} Then there is a surjective mapping $f$ from $\Omega$ onto {\bf R} such that $\cal A$ is the family $$\{f\leq \alpha: \alpha \in \text{\bf R}\}.$$ \end{theor} The simple proof of this result follows from the following standard order-theoretical characterisation of the real line: \begin{theor} Suppose that we have a set $A$ with a total ordering such that \begin{enumerate} \item $A$ has neither a smallest nor a greatest element; \item $A$ has a countable, order-dense subset; \item $A$ is order complete. \end{enumerate} Then $A$ is order-theoretically isomorphic to the real numbers. \end{theor} This in turn follows easily from the following characterisation of the rationals, which is due to Cantor: \begin{theor} Suppose that we have a set $A$ with a total ordering so that \begin{enumerate} \item $A$ has neither a smallest nor a greatest element; \item $A$ is countable; \item $A$ is dense in itself (i.e., if $x<z$ in $A$, then there is a $y\in A$ with $x<y<z$). \end{enumerate} Then $A$ is order-theoretically isomorphic to the rational numbers. \end{theor} The idea behind the proof of theorem 1 is now simple. We introduce the following equivalence relationship on $\Omega$: $x\sim y$ if and only if $x$ and $y$ are in exactly the same sets of $A\in \cal A$, i.e., for each $A$, $x\in A$ if and only if $y\in A$. Then the quotient space $\Omega|_\sim$ has a natural order structure which satisfies the properties which characterise the real line. The required mapping $f$ is then the canonical one from $\Omega$ onto the quotient space. An important point is that the $f$ in theorem 1 is not uniquely determined. We can replace it by any $F= \phi \circ f$ where $\phi$ is an arbitrary order isomorphism of the line. We call such an $F$ a {\it recalibration} of $f$. In the general situation there is no canonical choice of $F$. This will be crucial in the following. There are two refinements of theorem 1 which will be of particular interest to us. 
Firstly, if $\Omega$ is provided with a suitable $\sigma$ algebra and each $A\in\cal A$ is measurable (i.e., a member of the algebra---we then say that the ordering is {\it measurable}), $f$ will be measurable. Secondly, if $\Omega$ is a topological space and each $A \in \cal A$ is closed and, further, for each $A \in \cal A$, $U=\bigcup \{B\in {\cal A}: B \subset A\}$ is open, then $f$ is continuous. There are corresponding conditions which ensure semi-continuity. \subsection{The axiomatics} We are now in a position to state the four axioms which describe the mathematical structure of a thermodynamical theory: \begin{enumerate} \item The states of a thermodynamical system are specified by the points of a set $\Omega$ with a positive $\sigma$-finite measure $\mu$. \item $\Omega$ is provided with two families $\cal A_{\text{temp}}$ and $\cal A_{\text{ent}}$ of measurable subsets which satisfy the conditions of theorem 1 (and so the preorderings are induced by numerical functions which we denote by $t$ and $s$). \item for each $A_{-1}\subset A_0 \subset A_1$ in $\cal A_{\text{temp}}$ and $B_{-1}\subset B_0 \subset B_1$ in $\cal A_{\text{ent}}$ we have \begin{eqnarray*} &\mu((A_1\setminus A_0)\cap(B_1\setminus B_0)) \mu((A_0\setminus A_{-1})\cap(B_0\setminus B_{-1}))\\ &\qquad \qquad \qquad = \mu((A_1\setminus A_0)\cap(B_0\setminus B_{-1})) \mu((A_0\setminus A_{-1})\cap(B_1\setminus B_0)).\end{eqnarray*} \item Condition 3. means that if $t$ and $s$ are the (measurable) functions which induce these orderings, then the image measure of $\mu$ in $\text{\bf R}^2$ under the mapping $\omega\mapsto(t(\omega),s(\omega))$ splits multiplicatively. We further assume that this measure is equivalent to Lebesgue measure on the plane (equivalent in the sense of being mutually absolutely continuous). 
\end{enumerate} It follows from this that for each $A$ in $\cal A_{\text{temp}}$ or in $\cal A_{\text{ent}}$, $\mu (A \setminus \bigcup \{B\in {\cal A}: B \subset A\})=0$, and that for each pair $A \subset A_1$ from $\cal A_{\text{temp}}$ and $B \subset B_1$ from $\cal A_{\text{ent}}$ the quantity $$\mu((A_1\setminus A)\cap(B_1\setminus B))$$ is strictly positive and finite. We remark that these axioms are physically natural and plausible. (In classical thermodynamics, $\Omega$ is $(p,V)$-space and the measure is interpreted as mechanical work). Since this article is, despite its title, one in mathematics rather than in physics, we will not go into this in detail; but we emphasise that the area condition, in particular, is not a {\it deus ex machina} inserted to save the day but has a natural physical justification. A further point, which is important more for philosophical reasons, is that the axioms do not explicitly refer to the real numbers (compare the axioms for Euclidean geometry, particularly in the form as perfected by Hilbert). \subsection{The existence of absolute temperature and entropy} If we assume only the first two of the above axioms, $s$ and $t$ are, like the function $f$ in the general result, not uniquely determined. A crucial point of our treatment is that in the presence of conditions 3. and 4. above, there are (essentially) unique such choices, which we call the {\it canonical recalibrations}. For condition 4. means that the image measure has the form $a(u)b(v)\,du\,dv$, where we denote the coordinates in $\bf R^2$ by $(u,v)$ and $a$ and $b$ are locally Lebesgue-integrable functions whose reciprocals are also locally integrable. Now if we replace the two functions $t$ and $s$ by the recalibrations $T=\phi \circ t$ and $S=\psi \circ s$, where $\phi$ is a primitive of $a$ and $\psi$ of $b$, then we obtain the following result: \begin{theor} Suppose that the above axioms are satisfied. 
Then we can choose the functions $T$ and $S$ so that the mapping $\omega \mapsto (T(\omega),S(\omega))$ is area-preserving. \end{theor} This choice is unique up to suitable affine transformations (loosely speaking, we can choose the zero point and a change of scale---c.f. the difference between the Celsius and Fahrenheit systems). In thermodynamics, these canonical calibrations are called {\it absolute temperature} and {\it entropy} (as opposed to empirical temperature and entropy). We now proceed to show that these axioms imply the usual contents of elementary treatments of thermodynamics. As we shall see shortly, this choice of calibration is crucial since the fact that the area condition holds is equivalent to each (and hence all) of the four Maxwell relations. Since the latter can be interpreted as integrability conditions, they ensure the existence of the four energy type functions of thermodynamics. (Once again, these are purely mathematical facts, but the underlying motivation from thermodynamics is the principle of Joule-Maxwell on the mechanical equivalence of heat). \section{Samuelson configurations} \subsection{The area condition} We emphasise at this point that up till now smoothness (except in the very mild form of measurability) has played no part in our considerations, neither in the formulation of the axioms nor in the derivation of the canonical recalibrations. This is as it should be, for philosophical reasons but also because the presence of phase transitions makes it clear that in real substances we can and should expect the isotherms and adiabats to have \lq \lq corners''. We now turn to the case where they {\it are} smooth, i.e., we suppose that $\Omega$ is the plane $\text{\bf R}^2$ or some suitable subspace (in thermodynamics usually the positive quadrant) and that the functions which induce the ordering are smooth (in the sense of being infinitely differentiable) as are their level curves. 
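In this smooth setting, the canonical recalibration of Theorem 4 can be checked symbolically for the ideal-gas calibration that appears later in the text (a sketch using sympy; the empirical potentials $t=xy$, $s=xy^\gamma$ and the recalibrated entropy are taken from the ideal-gas formulas below):

```python
import sympy as sp

x, y, gamma = sp.symbols('x y gamma', positive=True)

# Empirical ideal-gas calibration: t = x*y, s = x*y**gamma.
t = x * y
s = x * y**gamma

# Jacobian of (x, y) -> (t, s): it equals (gamma - 1)*s, so the image
# measure splits multiplicatively through s alone.
J_ts = sp.diff(t, x) * sp.diff(s, y) - sp.diff(t, y) * sp.diff(s, x)
print(sp.simplify(J_ts - (gamma - 1) * s))   # -> 0

# Canonical recalibration: T = t, S = ln(s)/(gamma - 1), which reproduces
# the recalibrated entropy (ln x + gamma*ln y)/(gamma - 1) used below.
T = t
S = sp.log(s) / (gamma - 1)

# Now (x, y) -> (T, S) is area-preserving: Jacobian identically 1.
J_TS = sp.diff(T, x) * sp.diff(S, y) - sp.diff(T, y) * sp.diff(S, x)
print(sp.simplify(J_TS))   # -> 1
```

The logarithm thus appears automatically as the primitive that flattens the $s$-dependent factor of the Jacobian.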
It is convenient to use the mathematically neutral notation $x$ and $y$ for the coordinates in the plane, which we shall initially regard as the independent variables, and $u$ and $v$ for the two (potential) functions. As shown in [Co1], the area condition is equivalent to the fact that the functions $u$ and $v$ satisfy a certain non-linear partial differential equation of the third order which is displayed explicitly there. Families of level curves satisfying this partial differential equation are thus of great interest in the study of optimizing systems. In the following, we concentrate on the two foliations consisting of the level curves of $u$ and $v$, i.e., the isotherms and adiabatics in the thermodynamical context. The area condition can then be formulated as follows. The plane (or a suitable part thereof) is foliated by two families of curves---the level curves of two potential functions $u$ and $v$. We assume that these are transversal at each point, i.e., the Jacobian $J = (u_x v_y - u_y v_x)$ never vanishes. Since the regions we consider are connected (in the topological sense), $J$ cannot change sign. Hence there is no essential loss of generality if we assume that it is always strictly positive. Locally the families of curves form a network which is topologically equivalent to the standard network of the plane induced by the parallels to the $x$ and $y$ axes (i.e., the case where $u (x,y) = x$ and $v(x,y) = y$). 
We say that the foliations satisfy {\bf condition $S$} (or that $v$ is $S$-transversal to $u$ or that the $v$-curves are $S$-transversal to the $u$ curves) if the following holds: for any choice of values $c_{-1} < c_0 < c_1$, and $d_{-1} < d_0 < d_1$ respectively, we have $$\mbox{area $A$/area $B$ = area $C$/area $D$}$$ where \begin{eqnarray*} A& =&\{ (x,y) : c_{-1} < u(x,y) < c_0, d_0 < v(x,y) < d_1\},\\ B &=&\{ (x,y) : c_{0} < u(x,y) < c_1, d_0 < v(x,y) < d_1\},\\ C& =&\{ (x,y) : c_{-1} < u(x,y) < c_0, d_{-1} < v(x,y) < d_0\},\\ D& =&\{ (x,y) : c_{0} < u(x,y) < c_1, d_{-1} < v(x,y) < d_0\}. \end{eqnarray*} In order to avoid topological problems, we assume that the values of the $c$'s and $d$'s are sufficiently close for the above condition on the network to be satisfied. This means that the condition we are considering is a local one, as it should be if it is to be equivalent to a partial differential equation. However, it is easy to obtain a global form from the local one. We refer to the configuration consisting of two foliations which are $S$-transversal as a {\it Samuelson configuration}. The $S$-configuration consisting of the adiabats and isotherms of the ideal gas is one of the most iconic images of modern science. The precise relation of our result to this question will be made more explicit below. For obvious reasons, we discuss the case of the isotherms for an ideal gas and for a van der Waals gas in some detail. In particular, we show that if a function $v$ is $S$-transversal to the function $u = xy$ (i.e., the potential defining the isotherms of an ideal gas) and {\it two} of $v$'s level curves have the form $x y^\gamma = \text {constant}$ for the same $\gamma$, then all of them have this form, i.e., the adiabatics are precisely those for the ideal gas with exponent $\gamma$. \subsection{Thermodynamical notation} Although this article is one on mathematics and not on physics, its main motivation comes, of course, from thermodynamics. 
For this reason, we recall the standard notation and concepts from classical thermodynamics for the reader's convenience. The coordinates $x$ and $y$ in the neutral notation correspond to $p$ and $V$ in Gibbsian thermodynamics. The choice of the latter as independent variables is natural since these two quantities can be directly measured. Also, the natural measure on this space (mathematically speaking, two-dimensional Lebesgue measure) has a natural physical interpretation (mechanical work). Our starting point is the situation where we are given the temperature $T$ and the entropy $s$ as functions of the pressure $p$ and the volume $V$. We use the lower case $s$ to indicate that this is empirical entropy. Absolute entropy will be denoted by $S$. (The standard models do not require a recalibration of temperature, so that there will be no need at this point to distinguish between lower and upper case $T$---however, we consider below an interesting model due to Feynman where we {\it shall} require such a recalibration). We use the following dictionary to jump between the purely mathematical notation and the thermodynamical one: $u$ corresponds to $T$, $v$ to $s$, $p$ to $x$ and $V$ to $y$. For example, the thermodynamical equations $$T = p V, \quad s = pV^\gamma $$ \noindent of the ideal gas correspond to $$u(x,y) = x y, \quad v(x,y) = x y^\gamma.$$ For reasons which will be clear shortly, we use the recalibrated form $$u(x,y) = x y,\quad v(x,y) = \frac 1 {\gamma -1}\left (\ln x + \gamma \ln y \right )$$ of these equations. It is a consequence of the Maxwell relations that one can define the following four energy type functions (whose definitions we repeat for the reader's orientation; they can be found in any textbook on thermodynamics, e.g. [La]). Firstly, the {\bf energy $E = E(S,V)$} is a function of entropy and volume. 
From this one derives the quantities $T$ (temperature) and $p$ (pressure) by the equations $$T = \frac{ \partial E(S,V)}{\partial S}, \quad p = - \frac{ \partial E(S,V)}{\partial V}.$$ Analogously, one has the {\bf enthalpy} $H = H(p,S)$, from which one derives the quantities $$T = \frac{ \partial H(p,S)}{\partial S}, \quad V = \frac{ \partial H(p,S)}{\partial p};$$ the {\bf free energy} $F = F(T,V)$, from which one gets $$S = - \frac{ \partial F(T,V)}{\partial T}, \quad p = - \frac{ \partial F(T,V)}{\partial V};$$ and the {\bf free enthalpy} $G = G(p,T)$, which gives $$S = - \frac{ \partial G(p,T)}{\partial T}, \quad V = \frac{ \partial G(p,T)}{\partial p}.$$ In the German-language literature, e.g. [La], one employs $\Phi$ for $G$ and $W$ for $H$. We emphasise that in our treatment the logical development is reversed---the existence of such functions is a consequence of our axiom system, since it follows from the Maxwell relations, which in turn are equivalent to the validity of the $S_1$-condition (see below). \subsection{Canonical recalibrations} We saw above that if the level curves of $u$ and $v$ satisfy the $S$ condition, then we can find (essentially unique) recalibrations $U = \phi \circ u$ and $V = \psi \circ v$ (where $\phi$ and $\psi$ are diffeomorphisms between, say, intervals of the real line) so that the Jacobian is identically one. If we assume that $u$ and $v$ are so calibrated, then this means that the diffeomorphism $(x,y) \mapsto (u(x,y),v(x,y))$ of the plane (or a suitable subset thereof) is area preserving. In this case we say that the functions $u$ and $v$ satisfy the {\bf $S_1$-condition}, or that $v$ is {\bf $S_1$-transversal} to $u$. (The recalibration for the ideal gas which was used above arose in this way). We now come to the crucial point in our argument. 
If we write the basic equations $u=f(x,y)$, $v=g(x,y)$ in differential form, i.e., as $$du=f_1\, dx+f_2\, dy,\quad dv=g_1\, dx+g_2\, dy$$ ($f_1$, $f_2$ are the partials with respect to $x$, $y$ etc.), we can solve for $du$ and $dx$, say, to get $$du=\frac{f_1}{g_1}\, dv-\frac J{g_1}\, dy, \quad dx=\frac 1{g_1}\, dv-\frac{g_2}{g_1}\, dy,$$ where $J$ is the Jacobi-determinant $f_1g_2-f_2g_1$, and so we see that the condition $J=1$ is equivalent to the Maxwell relation $\dfrac{\partial u}{\partial y}\Big |_v=-\dfrac{\partial x} {\partial v}\Big |_y$ which is an integrability condition and ensures the existence of a function $h$ of the two variables $y$ and $v$ such that $x$ and $u$ are the solutions of the equations $$x - \hat f(y,v) = 0,\quad u - \hat g(y,v) = 0, $$ where $\hat f(y,v) = - \dfrac {\partial h}{\partial y}$ and $\hat g(y,v) = \dfrac {\partial h}{\partial v}$. (We are using the standard conventions employed in thermodynamics---thus $\dfrac{\partial u}{\partial y}\Big |_v$ denotes the partial derivative of $u$, regarded as a function of $y$ and $v$, with respect to $y$). The proof of this result employs the inverse function theorem and so the precise statement is local. The same remark applies to many of the following enunciations. We shall call such a function $h$ a {\it geometric energy function} since in certain situations where the foliations arise as the level curves of suitable physical quantities it corresponds to the energy of a system. However, in such situations, the energy function satisfies some structural properties (monotonicity, convexity) which have natural physical interpretations, and these are of no direct relevance in our considerations below. In a similar manner, we will talk of geometrical adiabatics associated with families of isotherms, or geometrical isotherms associated with families of adiabatics respectively.
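This equivalence is easy to check by machine. The following Python sketch is an illustration only: the value $\gamma = 1.4$ and the sample points are assumptions made for the example, and the closed-form inverses used can be obtained by hand for the ideal gas (they reappear in the section on the five basic models below). It verifies numerically that the recalibrated ideal-gas pair has Jacobian one and that the two sides of the Maxwell relation agree:

```python
import math

GAMMA = 1.4  # illustrative adiabatic index (an assumption, not from the text)

def u(x, y):   # recalibrated "temperature": u = x*y
    return x * y

def v(x, y):   # recalibrated "entropy": v = (ln x + gamma*ln y)/(gamma-1)
    return (math.log(x) + GAMMA * math.log(y)) / (GAMMA - 1)

def partial(fn, wrt, x, y, h=1e-6):
    """Central-difference partial derivative of fn(x, y)."""
    return ((fn(x + h, y) - fn(x - h, y)) / (2 * h) if wrt == 'x'
            else (fn(x, y + h) - fn(x, y - h)) / (2 * h))

def jacobian(x, y):
    return (partial(u, 'x', x, y) * partial(v, 'y', x, y)
            - partial(u, 'y', x, y) * partial(v, 'x', x, y))

# Closed-form inverses, solvable by hand for the ideal gas:
def x_of(y, vv):
    return math.exp((GAMMA - 1) * vv) * y ** (-GAMMA)

def u_of(y, vv):
    return math.exp((GAMMA - 1) * vv) * y ** (1 - GAMMA)

y0, v0 = 2.0, 0.3
h = 1e-6
du_dy_const_v = (u_of(y0 + h, v0) - u_of(y0 - h, v0)) / (2 * h)
dx_dv_const_y = (x_of(y0, v0 + h) - x_of(y0, v0 - h)) / (2 * h)

print(jacobian(2.0, 3.0))             # close to 1
print(du_dy_const_v, -dx_dv_const_y)  # the two sides of the Maxwell relation
```

The same check works verbatim for any other calibrated pair $(u,v)$; only the two defining functions need to be replaced.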
The existence of the above energy functions is one of those facts which have been discovered and rediscovered time and again in the history of mathematics. We have traced it as far back as to Gau\ss{} [Ga] who used it to describe all equivalent projections (in the sense of mathematical cartography) and it appears in contact geometry (under the name of a generating function). Of course, as remarked above, it has long been used in thermodynamics. It follows from the above observation that we have a remarkable symmetry (corresponding to the Maxwell relations in thermodynamics). If we start with a given energy function, we can as above calculate $u$ and $v$ as functions of $x$ and $y$. The energy function arises from the process of replacing $x$ and $y$ as independent variables by $y$ and $v$. There are four such possibilities (the interesting ones for us are those with $y$ and $v$, $x$ and $v$, $y$ and $u$ and $x$ and $u$ respectively as independent variables), each of which is associated with an \lq \lq energy function'' (as we noted above, in thermodynamics they are called energy, free energy, enthalpy and free enthalpy respectively). Hence any one such function automatically defines three others. (In fact, the situation is more complicated than described here. This is due to the fact that we are relying on global solvability of the corresponding non-linear equations. The general results we use employ the inverse function theorem and so only guarantee local solubility. In many concrete situations which we compute, we do, of course, have global invertibility and hence the kind of symmetry evoked here). In the case of an ideal gas, the permutations of the various variables can be computed by hand and are valid globally---we include the formulae below for completeness. We have also added a more general case since it displays the fact that the familiar presence of the logarithm in the expression for the entropy of an ideal gas is in a certain sense unique to this case. 
Already the van der Waals gas offers difficulties here and we shall shortly develop an alternative method of computing these energy functions which is often more practical and does not require us to compute these permutations. \section{Thermodynamical identities---an {\it anthologie raisonn\'ee}} \subsection{The basic machinery} We suppose that $u$ and $v$ are given as functions $f$ and $g$ of $x$ and $y$. Thus $u=f(x,y)$, $v=g(x,y)$ and, when $J=1$, simple manipulations with differential forms prove the basic identities: $$\begin{array}{lclclcclclcl} du&=&f_1\, dx&+&f_2\, dy, &&dv&=&g_1\, dx&+&g_2\, dy\\ &&&&&&&&&&\\ dx&=&g_2\, du&-&f_2\, dv, &&dy&=&-g_1\, du &+&f_1\, dv\\ &&&&&&&&&&\\ du&=&\frac{f_2}{g_2}\, dv&+&\fbox{\text{ $\frac 1{g_2}$}}\, dx, &&dy&=&\fbox{\text{$ \frac 1{g_2}$}}\, dv&-&\frac{g_1}{g_2}\, dx\\ &&&&&&&&&&\\ du&=&\frac{f_1}{g_1}\, dv&-&\fbox{\text{$ \frac 1{g_1}$}}\, dy, &&dx&=&\fbox{\text{$ \frac 1{g_1}$}}\, dv&-&\frac{g_2}{g_1}\, dy\\ &&&&&&&&&&\\ dv&=&\frac{g_1}{f_1}\, du&+&\fbox{\text{$ \frac 1{f_1}$}}\, dy, &&dx&=&\fbox{\text{$ \frac1{f_1}$}}\, du& -&\frac{f_2}{f_1}\, dy\\ &&&&&&&&&&\\ dv&=&\frac{g_2}{f_2}\, du&-&\fbox{\text{$ \frac 1{f_2}$}}\, dx, &&dy&=&\fbox{\text{$ \frac 1{f_2}$}}\, du&-&\frac{f_1}{f_2}\, dx\end{array}$$ where we have highlighted the expressions which correspond to the Maxwell relations.
\noindent In thermodynamic notation these are $$\begin{array}{lclcclcl} dT&=&f_1\, dp+f_2\, dV,&&dS&=&g_1\, dp+g_2\, dV\\ dp&=&g_2\, dT-f_2\, dS,&&dV&=&-g_1\, dT+f_1\, dS\\ dT&=&\dfrac{f_2}{g_2}\, dS+\dfrac 1{g_2}\, dp, &&dV&=&\dfrac 1{g_2}\, dS-\dfrac{g_1}{g_2}\, dp\\ dT&=&\dfrac{f_1}{g_1}\, dS-\dfrac 1{g_1}\, dV,&&dp&=&\dfrac 1{g_1}\, dS-\dfrac{g_2}{g_1}\, dV\\ dS&=&\dfrac{g_1}{f_1}\, dT+\dfrac 1{f_1}\, dV,&&dp&=&\dfrac 1{f_1}\, dT-\dfrac{f_2}{f_1}\, dV\\ dS&=&\dfrac{g_2}{f_2}\, dT-\dfrac 1{f_2}\, dp,&&dV&=&\dfrac 1{f_2}\, dT-\dfrac{f_1}{f_2}\, dp \end{array}$$ where we are using the key: $u\leftrightarrow T$, $v\leftrightarrow S$, $x\leftrightarrow p$, $y\leftrightarrow V$ introduced above. \noindent In order to isolate the underlying patterns, we now use a numerical code. Thus $$\begin{array}{ccccc} u&\rightarrow&3&\leftarrow&T\\ v&\rightarrow&4&\leftarrow&S\\ x&\rightarrow&1&\leftarrow&p\\ y&\rightarrow&2&\leftarrow&V.\end{array}$$ \noindent Partial derivatives will be denoted by triples in brackets. $(3,1,2)$, for example, denotes $\frac{\partial u}{\partial x}|_y$ in the neutral notation, $\frac{\partial T}{\partial p}|_V$ in the thermodynamical one. In general, $(i,j,k)$ denotes the partial derivative of variable $i$, regarded as a function of the $j$-th and $k$-th variables, with respect to the $j$-th variable.
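Since the third entry of the triple records which variable is held fixed, a tiny numerical example may help fix ideas. The following Python sketch (ideal gas with $\gamma = 1.4$; both the value and the sample point are illustrative assumptions) computes $(3,1,2)$ and $(3,1,4)$ at the same point: both are derivatives of $u$ with respect to $x$, but they differ because different variables are held constant:

```python
import math

GAMMA = 1.4  # illustrative value (an assumption)

def v_of(x, y):
    return (math.log(x) + GAMMA * math.log(y)) / (GAMMA - 1)

# (3,1,2): derivative of u = x*y with respect to x at constant y.
def d312(x, y, h=1e-6):
    return ((x + h) * y - (x - h) * y) / (2 * h)

# (3,1,4): derivative of u with respect to x at constant v; here we move
# along a level curve of v, so y must be adjusted as x varies.
def y_on_level(x, v):
    # invert v = (ln x + gamma*ln y)/(gamma-1) for y
    return math.exp(((GAMMA - 1) * v - math.log(x)) / GAMMA)

def d314(x, y, h=1e-6):
    v = v_of(x, y)
    u_plus = (x + h) * y_on_level(x + h, v)
    u_minus = (x - h) * y_on_level(x - h, v)
    return (u_plus - u_minus) / (2 * h)

x0, y0 = 2.0, 3.0
print(d312(x0, y0), d314(x0, y0))  # different values: the held-fixed variable matters
```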
By reading off from the above list, we can express each partial derivative of the form $(i,j,k)$ in terms of $f_1,f_2,g_1,g_2$ as follows: $$ (3,1,2)=f_1,\quad (3,2,1)=f_2,\quad (4,1,2)=g_1,\quad (4,2,1)=g_2;$$ $$ (1,3,4)=g_2,\quad (2,3,4)=-g_1,\quad (1,4,3)=-f_2,\quad (2,4,3)=f_1;$$ $$ (3,4,1)=\frac{f_2}{g_2},\quad (2,4,1)=\frac 1{g_2},\quad (3,1,4)=\frac 1{g_2},\quad (2,1,4)=-\frac{g_1}{g_2};$$ $$ (4,3,1)=\frac{g_2}{f_2},\quad (2,3,1)=\frac 1{f_2},\quad (4,1,3)=-\frac 1{f_2},\quad (2,1,3)=-\frac{f_1}{f_2};$$ $$ (3,4,2)=\frac{f_1}{g_1},\quad (1,4,2)=\frac 1{g_1},\quad (3,2,4)=-\frac 1{g_1},\quad (1,2,4)=-\frac{g_2}{g_1};$$ $$ (4,3,2)=\frac{g_1}{f_1},\quad (1,3,2)=\frac 1{f_1},\quad (4,2,3)=\frac 1{f_1},\quad (1,2,3)=-\frac{f_2}{f_1}.$$ \noindent We can then express any derivative $(a, b, c)$ in terms of ones of the form $(d, 1, 2)$ or $(e, 2, 1 )$. Thus the four derivatives with $x$ and $v$ as independent variables are as follows: $$\begin{array}{rcr} (3,4,1)&=&\dfrac{(3,2,1)}{(4,2,1)}\\ (2,4,1)&=&\dfrac 1{(4,2,1)}\\ (3,1,4)&=&\dfrac 1{(4,2,1)}\\ (2,1,4)&=&-\dfrac{(4,1,2)}{(4,2,1)}\end{array}$$ Then, as above, we can introduce four energy functions $E^{13}$, $E^{14}$, $E^{23}$, $E^{24}$ such that $dE^{24}=u\, dv-x\, dy$, $dE^{14}=u\,dv+y\,dx$, $dE^{13}=-v\, du+y\, dx$, $dE^{23}=-v\, du-x\, dy$ (the superfixes correspond to the independent variables---thus for $E^{13}$ these are $x$ and $u$, i.e., $1$ and $3$). We will discuss these in more detail below where the rationale of our notation will be explained. In terms of the classical notation: $$\begin{array}{lcrcrl} dE&=&T\, dS&-&p\, dV &\mbox{(energy)}\\ dF&=&-S\, dT&-&p\, dV &\mbox{(free energy)}\\ dG&=&-S\, dT&+&V\, dp &\mbox{(Gibbs potential)}\\ dH&=&T\, dS&+&V\, dp &\mbox{(enthalpy),}\end{array}$$ i.e., $E^{13}=G$, $E^{23}=F$, $E^{14}=H$ and $E^{24}=E$.
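That these four differential forms really are exact differentials (so that the energy functions exist, at least locally) can be spot-checked numerically: closedness means equality of the mixed partials of the coefficient functions. The following Python sketch does this for $dE^{24} = u\,dv - x\,dy$ in the case of the ideal gas, with $\gamma = 1.4$ and the sample point as illustrative assumptions:

```python
import math

GAMMA = 1.4  # illustrative value (an assumption)

# For the ideal gas, dE^{24} = u dv - x dy = (u*g1) dx + (u*g2 - x) dy,
# where g1, g2 are the partials of v = (ln x + gamma*ln y)/(gamma-1).
def P(x, y):   # coefficient of dx
    return (x * y) * (1 / ((GAMMA - 1) * x))

def Q(x, y):   # coefficient of dy
    return (x * y) * (GAMMA / ((GAMMA - 1) * y)) - x

def dP_dy(x, y, h=1e-6):
    return (P(x, y + h) - P(x, y - h)) / (2 * h)

def dQ_dx(x, y, h=1e-6):
    return (Q(x + h, y) - Q(x - h, y)) / (2 * h)

# Equal mixed partials: the form is closed, so E^{24} (the energy E) exists locally.
print(dP_dy(2.0, 3.0), dQ_dx(2.0, 3.0))
```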
If we arrange the energy functions in lexicographic order, i.e., as $E^{13}$, $E^{14}$, $E^{23}$, $E^{24}$ and denote them by $5$, $6$, $7$ and $8$ in this order, then we can incorporate them into our system. For it follows from the definitions and simple substitutions that \begin{eqnarray*} dE^{13}&=& (y-v f_1)\, dx - v f_2\, dy\\ dE^{14}&=&(y+u g_1)\, dx + u g_2\, dy\\ dE^{23}&=&-v f_1\, dx -(x+vf_2)\, dy\\ dE^{24}&=&ug_1\, dx+(ug_2-x)\, dy \end{eqnarray*} and so $$\begin{array}{cclcccl} (5,1,2)& =& y-gf_1, && (5,2,1)&=& -gf_2\\ (6,1,2)&=&y+fg_1,&& (6,2,1)&=&fg_2\\ (7,1,2)&=&-gf_1,&& (7,2,1)&=&-x-gf_2\\ (8,1,2)&=&fg_1,&& (8,2,1)&=&-x+fg_2. \end{array}$$ One of the potentially irritating features of the thermodynamical identities is that many are related by a simple swapping of the variables while this is accompanied by changes of sign which seem at first sight to be random. The simplest example is displayed by the four Maxwell relations. We can systemise such computations by introducing the symbol $[a,b;c,d]$ for the Jacobi determinant of the mapping $(c,d)\mapsto (a,b)$, i.e., $$[a, b;c, d] = (a, c, d)(b, d, c) - (a, d, c)(b, c, d).$$ The determinant then takes care of the sign. For example $[3,4;1,2]$ is the Jacobian $\dfrac{\partial(u,v)}{\partial(x,y)}$ and is therefore $1$ (which here denotes the number $1$), while $[3, 2;4, 1]$ is $\dfrac{\partial(u,y)}{\partial(v,x)}$ and is therefore $-\dfrac{f_1}{g_2}=-\dfrac{(3, 1, 2)}{(4, 2, 1)}$. \noindent Note that there are $1,680$ such Jacobians. However, lest the reader despair, we then have the following simple rules for manipulating these expressions which allow us to express them all in terms of our primitive quantities ($f$ and $g$ together with their partials and, of course, $x$ and $y$).
$$[a, b;c, d] = -[b, a;c, d] =-[a, b;d, c]$$ $$[c, d;a, b] = \dfrac1{[a, b;c, d]}$$ $$(a,b,c)=[a,c;b,c]$$ Further useful rules for computation are $$ [a,b;c,d]=\dfrac{[a,b;e,f]}{[c,d;e,f]},$$ in particular, $$ [a,b;c,d]=\dfrac{[a,b;e,b]}{[c,d;e,b]}$$ and $$ [a,b;c,d]=\dfrac{[a,b;1,b]}{[c,d;1,b]}.$$ Using these rules, we can compute any of the 336 expressions of the form $(i,j,k)$ (for $i$, $j$ and $k$ running from $1$ to $8$) by routine computations. For example, if we wish to compute $(8,3,5)$ then we proceed as follows. Firstly, $$ (8,3,5)=[8,5;3,5]=\dfrac{[8,5;1,2]}{[3,5;1,2]}.$$ Now $[8,5;1,2]=(8,1,2)(5,2,1)-(8,2,1)(5,1,2)$ and $[3,5;1,2]=(3,1,2)(5,2,1)-(3,2,1)(5,1,2)$. Hence, finally $$(8,3,5)=\frac{(8,1,2)(5,2,1)-(8,2,1)(5,1,2)}{(3,1,2)(5,2,1)-(3,2,1)(5,1,2)} $$ and so can be expressed in terms of $f$ and $g$, together with their partials. \noindent As a simple example, we can compute again the basic formulae $$(4, 3, 1) = \dfrac{g_2}{f_2}, \quad (4, 1, 3) = -\dfrac1{f_2}, \quad (2, 3, 1)=\dfrac1{f_2},\quad(2, 1, 3)= -\dfrac {f_1}{f_2}.$$ For example $$(4, 3, 1)=[4, 1;3, 1]=\dfrac{[4, 1;1,2]}{[3, 1;1, 2]}= \dfrac{g_2}{f_2}$$ and the other three terms can be computed analogously. \subsection{Higher derivatives} Some of the thermodynamical identities involve higher derivatives and we indicate briefly how to incorporate these into our scheme. We use the self-explanatory notation $((a,b,c),d,e)$ for second derivatives. Thus $((3,1,2),2,1)$ is just $f_{12}$. Note that this notation allows for such derivatives as $\left(\dfrac{\partial}{\partial T} \left(\dfrac{\partial E}{\partial p} \right )_V\right )_S $ which is $((8,1,2),3,4)$. Once again, we can express all such derivatives (there are now 18,816 of them) in terms of $x$, $y$, $f$, $g$ and their partials (now up to the second order) using the chain rule. For $$((a,b,c),i,j)=((a,b,c),1,2)(1,i,j)+((a,b,c),2,1)(2,i,j) $$ and the terms $((a,b,c),1,2)$, $((a,b,c),2,1)$, $(1,i,j)$ and $(2,i,j)$ can be dealt with using the above tables (the first two by differentiating the expression for $(a,b,c)$ in terms of the primitives).
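These rules are well suited to mechanical computation. The following Python sketch is purely illustrative (we take the ideal gas with $\gamma = 1.4$, and the sample point is an arbitrary choice): it implements the database of derivatives $(i,1,2)$ and $(i,2,1)$ for $i = 1,\dots,8$, the bracket $[a,b;c,d]$ and the derivative $(a,b,c)$, and recomputes the value $(4,3,1) = g_2/f_2$ obtained above:

```python
import math

GAMMA = 1.4  # illustrative adiabatic index (an assumption)

def primitives(x, y):
    """f, g and their first partials for the recalibrated ideal gas."""
    f = x * y
    g = (math.log(x) + GAMMA * math.log(y)) / (GAMMA - 1)
    return f, g, y, x, 1 / ((GAMMA - 1) * x), GAMMA / ((GAMMA - 1) * y)

def base(i, x, y):
    """The pair ((i,1,2), (i,2,1)): partials of variable i w.r.t. x and y."""
    f, g, f1, f2, g1, g2 = primitives(x, y)
    return {1: (1.0, 0.0), 2: (0.0, 1.0),
            3: (f1, f2), 4: (g1, g2),
            5: (y - g * f1, -g * f2),     # E^{13} = G
            6: (y + f * g1, f * g2),      # E^{14} = H
            7: (-g * f1, -x - g * f2),    # E^{23} = F
            8: (f * g1, -x + f * g2)}[i]  # E^{24} = E

def bracket(a, b, c, d, x, y):
    """[a,b;c,d] computed via the quotient formula."""
    a12, a21 = base(a, x, y); b12, b21 = base(b, x, y)
    c12, c21 = base(c, x, y); d12, d21 = base(d, x, y)
    return (a12 * b21 - a21 * b12) / (c12 * d21 - c21 * d12)

def deriv(a, b, c, x, y):
    """(a,b,c) = [a,c;b,c]: derivative of a w.r.t. b with c held fixed."""
    return bracket(a, c, b, c, x, y)

x0, y0 = 2.0, 3.0
_, _, f1, f2, g1, g2 = primitives(x0, y0)
print(deriv(4, 3, 1, x0, y0), g2 / f2)   # should coincide
print(deriv(8, 3, 5, x0, y0))            # an "exotic" derivative, equally routine
```

Any of the 336 first-order derivatives can be evaluated in this way at any point of the domain; for second derivatives one would extend `base` by the table given below.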
\subsection{Derived quantities and thermodynamical identities} The reason why there is a plethora of thermodynamical identities is simple. A large number of significant (and also insignificant) quantities can be expressed or defined as simple algebraic combinations of a very few (our primitive quantities $x$, $y$, $f$, $g$ and their partials). Hence there are bound to be many relationships between them. Our strategy to verify (or falsify) an identity is to use the above methods to express both sides in terms of these quantities and check whether they agree. Of course, there are myriads of such quantities and identities and we can only bring a sample. Thus we have $$c_V =T \left(\frac{\partial S}{\partial T} \right )_V, $$ the heat capacity at constant volume, and $$c_p = T \left(\frac{\partial S}{\partial T} \right )_p, $$ the heat capacity at constant pressure. In our formalism, $c_V=f (4,3,2)$ and $c_p=f(4,3,1)$, and so, from our tables, $$c_V=f\dfrac{g_1}{f_1}, \quad c_p=f\dfrac{g_2}{f_2} .$$ Hence for the important quantities $\gamma =\dfrac {c_p}{c_V}$ and ${c_p}-{c_V}$ we have $\gamma =\dfrac {f_1 g_2}{f_2 g_1}$ and ${c_p}-{c_V}=f\dfrac1{f_1 f_2}.$ Further examples are $$l_V = \left(\dfrac{\partial S}{\partial V} \right )_T=(4,2,3), $$ the latent heat of volume increase, and $$l_p = \left(\dfrac{\partial S}{\partial p} \right )_T=(4,1,3), $$ the latent heat of pressure increase. Further definitions are: $$m_V = \left(\frac{\partial S}{\partial V} \right )_p = (4,2,1) $$ and $$m_p = \left(\frac{\partial S}{\partial p} \right )_V=(4,1,2). $$ The coefficient of volume expansion at constant pressure is $$\alpha_p = \frac 1 V \left(\frac{\partial V}{\partial T} \right )_p= \frac 1 y (2,3,1) $$ and the isothermal bulk modulus of elasticity is $$B_T = -V \left(\frac{\partial p}{\partial V} \right )_T=-y(1,2,3).$$ Then $K_T = \dfrac 1{B_T}=\dfrac {-1} {y(1,2,3)}$ is the isothermal compressibility.
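As a concrete illustration, the following Python sketch evaluates $c_V$, $c_p$ and their ratio from the formulae just derived; the choice of the ideal gas, the value $\gamma = 1.4$ and the sample point are assumptions made for the example:

```python
import math

GAMMA = 1.4  # illustrative adiabatic index (an assumption)

def heat_capacities(x, y):
    """c_V = f*g1/f1 and c_p = f*g2/f2 for the recalibrated ideal gas."""
    f = x * y                       # u = f(x, y)
    f1, f2 = y, x
    g1 = 1 / ((GAMMA - 1) * x)      # partials of v = (ln x + gamma*ln y)/(gamma-1)
    g2 = GAMMA / ((GAMMA - 1) * y)
    return f * g1 / f1, f * g2 / f2

c_V, c_p = heat_capacities(2.0, 3.0)
print(c_V, c_p, c_p / c_V, c_p - c_V)
# For the ideal gas these reduce to 1/(gamma-1), gamma/(gamma-1), gamma and 1
# at every point, consistent with gamma = f1*g2/(f2*g1) and c_p - c_V = f/(f1*f2).
```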
We illustrate our method by verifying the simple identity $$c_p-c_V=T\left(\dfrac{\partial p}{\partial T} \right)_V \left(\dfrac{\partial V}{\partial T} \right)_p .$$ Using the tables above, we can easily compute both sides in terms of our primitive expressions and get $\dfrac f{f_1f_2}$ in each case. \subsection{Computing $(a,b,c)$ and $((a,b,c),d,e)$} We can summarise these results in the following formulae: $$[a,b;c,d]=\frac{(a,1,2)(b,2,1)-(a,2,1)(b,1,2)}{(c,1,2)(d,2,1)-(c,2,1)(d,1,2)} $$ and so $$(a,b,c) =[a,c;b,c]=\frac{(a,1,2)(c,2,1)-(a,2,1)(c,1,2)}{(b,1,2)(c,2,1)-(b,2,1)(c,1,2)},$$ which allow us to systematically compute any of the derivatives of the form $(a,b,c)$ in terms of our primitives $x$, $y$, $f$, $g$, $f_1$, $f_2$, $g_1$ and $g_2$, using the above database for expressions of the form $(a,1,2)$ and $(a,2,1)$. For the second derivatives we substitute $$\phi=\dfrac{(a,1,2)(c,2,1)-(a,2,1)(c,1,2)}{(b,1,2)(c,2,1)-(b,2,1)(c,1,2)}$$ into the formula $$(\phi,d,e)=\frac{(\phi,1,2)(e,2,1)-(\phi,2,1)(e,1,2)}{(d,1,2)(e,2,1)-(d,2,1)(e,1,2)} $$ to compute $((a,b,c),d,e)$ in terms of our primitive terms (this time with the first and second derivatives of $f$ and $g$). The advantage of these formulae is, of course, that one can write a simple programme to compute them. (It is always tacitly assumed in the above formulae that the appropriate conditions which allow a use of the inverse function theorem hold). It remains only to produce the corresponding database for second derivatives, i.e., to express all of the non-trivial quantities of the form $((a,b,c),d,e)$ with $b$, $c$, $d$ and $e$ either $1$ or $2$ in terms of $x$, $y$ and $f$ and $g$ and their partials. Of course, $((3,1,2),1,2)$, $((3,1,2),2,1)$, $((3,2,1),1,2)$ and $((3,2,1),2,1)$ are just $f_{11}$, $f_{12}$ (twice) and $f_{22}$. Similar identities hold for the partials of $g$.
Further, \begin{eqnarray*} ((5,1,2),1,2)&=&-g_1f_1-gf_{11};\\ ((5,1,2),2,1)&=&1-g_2f_1-gf_{12};\\ ((5,2,1),1,2)&=&-g_1f_2-gf_{12};\\ ((5,2,1),2,1)&=&-g_2f_2-gf_{22}.\end{eqnarray*} Note that the two expressions for the mixed partial coincide, since $f_1g_2-f_2g_1=1$. Similarly, \begin{eqnarray*} ((6,1,2),1,2)&=&f_1g_1+fg_{11};\\ ((6,1,2),2,1)&=&1+f_2g_1+fg_{12};\\ ((6,2,1),1,2)&=&f_1g_2+fg_{12};\\ ((6,2,1),2,1)&=&g_2f_2+fg_{22};\end{eqnarray*} \begin{eqnarray*} ((7,1,2),1,2)&=&-g_1f_1-gf_{11};\\ ((7,1,2),2,1)&=&-g_2f_1-gf_{12};\\ ((7,2,1),1,2)&=&-1-g_1f_2-gf_{12};\\ ((7,2,1),2,1)&=&-g_2f_2-gf_{22};\end{eqnarray*} and \begin{eqnarray*}((8,1,2),1,2)&=&g_1f_1+fg_{11};\\ ((8,1,2),2,1)&=&g_1f_2+fg_{12};\\ ((8,2,1),1,2)&=&-1+f_1g_2+fg_{12};\\ ((8,2,1),2,1)&=&g_2f_2+fg_{22}.\end{eqnarray*} We emphasise that the numerical code for the various thermodynamical quantities is a mere construct to facilitate their computation (ideally with the aid of suitable software) and that the final goal is to express them all in terms of the basic quantities ($x$, $y$, $f$, $g$ and the partials of the latter). It is then a routine matter to translate these into the standard terminology of thermodynamics if so required. \subsection{A notational survival kit} In this treatment we have used three notations---the mathematically neutral symbols $x$, $y$, $u$ and $v$ etc. (to develop the mathematical theory which is independent of any reference to thermodynamics), the standard thermodynamical terminology $p$, $V$ etc. (for readers interested in the thermodynamical interpretation) and finally the numerical code (to systematise the computations of derived quantities and our approach to thermodynamical identities). For the convenience of the reader we give a dictionary of the relationships between them: $$\begin{matrix} 1&2&3&4&5&6&7&8\\ p&V&T&S&G&H&F&E\\ x&y&u&v&E^{13}&E^{14}&E^{23}&E^{24}.
\end{matrix}$$ Of course, $p$ is pressure, $V$ volume, $T$ temperature, $S$ entropy and $G$, $H$, $F$ and $E$ are free enthalpy, enthalpy, free energy and energy respectively. \section{Existence and uniqueness of $S$-transversals} In this section, we consider in more detail the restraints which are imposed on families of curves by the Samuelson area condition. We begin with a special situation which we can compute directly and then show how to reduce the general case to it. We show that, given any family of level curves, there always exist families of intersecting level curves which satisfy the area condition, and the method of proof allows us to compute these curves explicitly in many interesting cases. Speaking loosely, this means that in thermodynamics to every family of isotherms (adiabats) there correspond families of possible adiabats (isotherms) (this subject is discussed more carefully below). Since the area condition is not very demanding, there are infinitely many collections of level curves which are $S$-transversal to a given family; but we show that if we are given two curves which are transversal to the latter (in the differential geometric sense) then they can be embedded in an essentially unique fashion into a system of $S$-transversal curves. This allows us, for example, to write down all families which are $S$-transversal to the set of isotherms for the van der Waals gas. We include some examples which we found to be of interest in a later section. \subsection{A special case} We begin by investigating the questions of existence and uniqueness when one of the families consists of lines parallel to one of the axes, in this case, the $x$-axis. For this example, elementary computation shows that, as claimed, we can always describe all other possible families of level curves which are $S$-transversal to the first one. Moreover, in this case, it is also straightforward to show that knowledge of two curves determines the whole family. 
Remarkably, as we show in the next section, the general case can be reduced to this one, thus allowing a simple derivation of the basic theorems. If the first family of curves is calibrated, i.e., they are the level curves of a particular potential function $u$, then any {\it single} curve suffices to determine the second family. So let the $v$-foliation consist of the lines parallel to the $x$-axis (i.e., where $v(x,y) = y$), where in order to avoid topological difficulties we suppose that our potentials are defined on a product of intervals. (We are exchanging the roles of $u$ and $v$ here and will compute all $u$-functions which are transversal to this $v$). Then we know that if the foliation induced by the potential $u$ is $S$-transversal, we can recalibrate $u$ and $v$ so that the Jacobian is identically one. If we assume that the $u$ foliation has already been recalibrated and that the recalibration of $v$ is $v(x,y) = c(y)$, then a straightforward computation shows that $J = c'(y)u_x$ and so $u$ must have the form: $$u(x,y) = a(y) x + b(y)$$ where $b$ is an arbitrary smooth function of one variable and $a$ is such that $a(y) c'(y) = 1$. We can state this formally as follows: \begin{theor} The function $u$ is $S_1$-transversal to the function $v$ = $c(y)$ if and only if $u$ has the form $u(x,y) = a(y) x + b(y)$ where $a(y) = \dfrac 1 {c'(y)}$. (Note that we are assuming here that $c$ is a diffeomorphism between two intervals of the line). The function $u$ is $S$-transversal to the function $v = y$ if and only if $u$ has the form $$u(x,y) = \phi (a(y) x + b(y))$$ where $\phi$ is a diffeomorphism between two intervals of the line, and $a$ and $b$ are any two smooth functions of one variable, for which $a$ has no zeros. 
\end{theor} Since it will often be convenient to switch the roles of $u$ and $v$, or $x$ and $y$ respectively, in the above, we document the corresponding formulae: $$u(x,y) = c(x), \quad v(x,y) = a(x) y + b(x).$$ We now consider the question of uniqueness in the above situation. By virtue of the general theory developed in the next section, this will suffice to cover the general case. Our starting point is the typical pair of $S_1$-transversal functions $$u(x,y) = a(y) x + b(y), \quad v(x,y) = c(y)$$ for arbitrary (generic) functions $a$ and $b$ (of one variable), with $a c' = 1$ (i.e., $c$ is a primitive of $\dfrac 1a$) for the case where the $v$-lines are the parallels to the $x$-axis. We now suppose that we have two transversals to the $x$ axis which we want to incorporate into a family of geometric adiabatics. We can suppose that the curves correspond to the values $u = 0$ and $u = 1$. If $c = 0$, then we have $$x = - \frac {b(y)}{a(y)}$$ and, for $c = 1$, $$x = \frac {1 - b(y)}{a(y)}.$$ We note now that if the $u$-level curves are to be $S_1$-transversal to the parallels to the $x$-axis, then they are transversal in the differential geometrical sense and so can be regarded as the graphs of functions (more precisely, $x$ as a function of $y$). Hence if we suppose that the \lq \lq adiabatics" $c = 0$, $c = 1$ have the form $x = f_0(y)$, and $x = f_1(y)$, then a simple computation shows that the general adiabatic has the form $u = c$, where $$u(x,y) = \frac x{f_1(y) - f_0(y)} - \frac {f_0(y)}{f_1(y) - f_0(y)}.$$ \subsection{A reduction} We now show that the general case can be reduced to the previous special case. We begin with the remark that if we have any smooth non-vanishing function $f$ of two variables, say on the product of two intervals, then we can always find a smooth vector field $(u,v)$ which has $f$ as its Jacobi function. 
Probably the easiest way to do this, as was pointed out to us by Michael Schm\"uckenschl\"ager, is to use a field of the form $$u(x,y) = \phi(x,y),\quad v(x,y) = \psi (y)$$ with $\phi$ a smooth function of two variables, $\psi$ one of one variable. (Such fields are called Knothe fields). The above form also has the advantage of leaving the level curves $y = d$ invariant. The Jacobian of the above function is $\phi_x (x,y) \psi'(y)$ and we can, of course, easily choose the two free functions in such a way that this product gives $f$. For example, in the case where $f$ is the constant function $1$, then we can take for $\psi$ any smooth function of one variable and then $\phi$ is determined up to a function of $y$ alone , i.e., has the form $\phi(x,y) = \frac 1 { \psi'(y)} x + \chi (y)$ where $\chi$ is an arbitrary smooth function of one variable\footnote{When we include such a formula we are, of course, tacitly assuming that the operations carried out on the generic functions involved are legitimate. In this case this means explicitly that we are assuming that the derivative of $\psi$ never vanishes , i.e., that $\psi$ is a diffeomorphism. This type of situation will occur frequently in the following and since it would be tedious to state the explicit assumptions on the generic functions which arise, we will rely on the reader to fill in the details.} Using this result, we can prove the following: \begin{theor} Suppose that we have a foliation of part of the plane by the level curves of a suitable function $u(x,y)$ (with non-vanishing gradient) which is defined on a domain (i.e., an open, connected subset) $G$ in $\bf R^2$. Then we can linearise $u$ locally by means of an area-preserving mapping. 
More precisely, for each point $(x_0,y_0)$ in $G$ we can find a neighbourhood $\tilde G$ of the point in $G$ and a function $v(x,y)$ on $\tilde G$ which is such that the mapping $(x,y) \mapsto (u(x,y),v(x,y))$ is area-preserving and maps the lines $u = c$ onto lines parallel to the $y$-axis. \end{theor} This is another result which is part of mathematical folklore. In order to prove it, we start with a foliation consisting of the level curves of a potential $u$ and a point $(x_0,y_0)$. Since the gradient of $u$ never vanishes, we can find a function $v$ which is transversal to $u$ in a neighbourhood of this point and we introduce the new variables $X = u(x,y), \quad Y = v(x,y)$. By the inverse function theorem we can suppose that this can be solved to obtain $x$ and $y$ as smooth functions of $X$ and $Y$, say $x = a(X,Y), \quad y = b(X,Y)$. We now introduce further new variables $\tilde X$ and $\tilde Y$ of the form $$\tilde X = \phi (X),\quad \tilde Y = \psi (X,Y)$$ for suitable smooth functions $\phi$ and $\psi$ of one and two variables respectively. Then elementary calculations show that the Jacobian of $\tilde X$ and $\tilde Y$ with respect to the variables $x$ and $y$ (but expressed in terms of the variables $X$ and $Y$) is $$\phi'(X)\psi_2(X,Y) J(X,Y)$$ where $J(X,Y)$ is the Jacobian of $(u,v)$ with respect to $x$ and $y$, expressed as a function of $X$ and $Y$ {\it via} $a$ and $b$ , i.e., $J(X,Y) = \bar J(a(X,Y),b(X,Y))$ where $\bar J(x,y) = u_x(x,y)v_y(x,y) - u_y(x,y)v_x(x,y)$. We can clearly arrange for this to be identically one by using the freedom in the choice of $\phi$ and $\psi$ and this completes the proof. \subsection{The general situation} Using these results, we can now extend the existence and uniqueness results from the special case in which one of the foliations is parallel to the axis to any given $u$-foliation. 
Since the method is explicit we can also use it to find all possible $S$-transversal systems for several interesting special types of $u$-curves. The method used is as follows: Suppose that we can find an area-preserving mapping which maps the $u$-curves onto the lines parallel to the $x$-axis. Then we can transfer the above example to this situation. We remark that this is in a certain sense a rigorous justification for a ploy of Maxwell's, who argued from this special situation (for reasons of simplicity), assuming that his conclusions then carried over to the general case (cf. the passage: \lq \lq For the sake of the distinctness in the figure, I have supposed the substance to be partly in the liquid and partly in the gaseous state, so that the isothermal lines are horizontal, and easily distinguished from the adiabatic lines, which slope downward to the right. The investigation, however, is quite independent of any such restriction as to the nature of the working substance'', [Ma], p. 155). We state this formally as a theorem: \begin{theor} Suppose that we are given a foliation of the plane which we take to be the level curves of a suitable function $u$. Then there exists (locally) a family of curves (the level curves of a potential $v$) which are $S$-transversal to the level curves of $u$. Furthermore, given any two curves which are transversal to the level curves of $u$, there exists a unique family of $S$-transversal curves which includes the given two. \end{theor} \subsection{Examples of the uniqueness and existence results} We bring some explicit computations in connection with the question of the existence and uniqueness of $S$-transversals for some simple cases. \paragraph{The ideal gas:} We begin with the case of the adiabatics for the ideal gas. In this case we use the new variables $X=x y,\quad Y= \dfrac1{\gamma-1}\log\left (xy^\gamma \right)$ to reduce to the simple case of transversals to the parallels to the coordinate axes.
Then we have that two functions $u$ and $v$, where $v$ is a recalibration of $xy^\gamma$, are $S_1$-transversal if and only if they have the form $$u(x,y)=a\left(\dfrac1{\gamma-1}(\ln x+\gamma\ln y) \right )x y+ b\left (\dfrac1{\gamma-1}(\ln x+\gamma\ln y) \right ),$$ $$v(x,y)=c\left(\dfrac1{\gamma-1}(\ln x+\gamma\ln y) \right )$$ where $c$ is a primitive of $\dfrac 1a$. Similarly, two functions $u$ and $v$, where $u$ is a recalibration of $xy$, are $S_1$-transversal if and only if they have the form $$u(x,y)=c(xy),\quad v(x,y)=a(xy)\left (\dfrac1{\gamma-1}\left (\ln x +\gamma \ln y\right )\right )+ b(x y)$$ where $c$ is again a primitive of $\dfrac 1a$. From these formulae it is easy to give the general form of functions $u$ which are $S$-transversal to the adiabatics of the ideal gas resp. functions $v$ which are $S$-transversal to its isotherms. At this point we bring a concrete example related to the ideal gas which was constructed to answer a question of Samuelson. Suppose that we are given $v(x,y) = x y$ and require an $S$-transversal function $u$ which interpolates between the curves $x y^2 = 1$ and $x y^3 = 10$ (i.e., two adiabatics corresponding to distinct cases of the ideal gas). Then a simple computation shows that $$u(x,y) = \frac {2\ln (xy^2)}{\ln (10 x y)}$$ is such that the contour $u = 0$ is the first curve, while $u = 1$ is the second one. Interestingly, this then forces $u$ to contain adiabatics of all the intermediary exponents, as the reader can easily verify. (This is an example where the representations are only valid locally, since any two curves of the form $xy^2=c$ and $xy^3=d $ will cross). Analogous considerations lead to the following result: \begin{theor} Let $v$ be $S$-transversal to $u = xy$. Then if two level curves of $v$ have the form $xy^\gamma$ constant for a fixed $\gamma$, $v$ is a recalibration of $xy^\gamma$. \end{theor} \paragraph{The van der Waals gas:} We now turn to the van der Waals equation.
In order to simplify the notation, we use the following solutions of the Maxwell relationships: $$u=\left(x+\frac 1{y^2}\right)(y-1);$$ $$v=\dfrac 1{\gamma-1}\left (\ln\left(x+\frac 1{y^2}\right)+\gamma \ln(y-1)\right ).$$ In this case we use the new variables $$X=\left(x+\frac 1{y^2}\right)(y-1)$$ and $$Y=\dfrac 1{\gamma-1}\left (\ln\left(x+\frac 1{y^2}\right)+\gamma \ln(y-1)\right ) $$ to reduce to the simple case. Then we have that two functions $u$ and $v$, where $v$ is a recalibration of $Y$, are $S_1$-transversal if and only if they have the form $$u(x,y)=a(Y)X+ b(Y),\quad v(x,y)=c(Y)$$ where $c$ is a primitive of $\dfrac 1a$. Similarly, two functions $u$ and $v$, where $u$ is a recalibration of $X$, are $S_1$-transversal if and only if they have the form $$u(x,y)=c(X),\quad v(x,y)=a(X)Y+ b(X)$$ where $c$ is again a primitive of $\dfrac 1a$. From these formulae it is again easy to give the general form of functions $u$ which are $S$-transversal to the adiabatics of the van der Waals gas, or functions $v$ which are $S$-transversal to its isotherms. Similar methods can be applied to the Feynman gas, which is discussed below. \section{Five basic models} We conclude by collecting some explicit computations for various gas models, beginning with the ideal gas. \subsection{The ideal gas} Here the recalibration is $u=xy$, $v=\dfrac1{\gamma-1}\ln\left(xy^\gamma\right)$. In this case we can explicitly compute the relationships, which are obtained by permuting the variables, to get: $x = e^{(\gamma-1)v} y^{-\gamma},\quad u = e^{(\gamma-1)v} y^{-\gamma+1}$; $x = \dfrac u y, \quad v = \dfrac1{\gamma-1}\ln u +\ln y$; $y = \dfrac u x, \quad v =\dfrac\gamma{\gamma-1}\ln u -\ln x $; $y = e^{\frac{\gamma-1}\gamma v} x^{-\frac{1}{\gamma}}, \quad u = e^{\frac{\gamma-1}\gamma v}x^{\frac{\gamma-1}{\gamma}}$. The reader can check that the four Maxwell relations are indeed valid and thus compute the corresponding energy functions.
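Such checks are easily automated. The following Python sketch (with the illustrative choices $\gamma = 1.4$ and an arbitrary sample point, both assumptions made for the example) verifies one of the permuted relations by a round trip: it computes $(u,v)$ from $(x,y)$ and then recovers $x$ and $v$ from $u$ and $y$ via $x = u/y$ and $v = \frac1{\gamma-1}\ln u + \ln y$:

```python
import math

GAMMA = 1.4  # illustrative value (an assumption)

def forward(x, y):
    """(x, y) -> (u, v) for the recalibrated ideal gas."""
    return x * y, (math.log(x) + GAMMA * math.log(y)) / (GAMMA - 1)

# Permutation with u and y as the independent variables:
def from_u_y(u, y):
    x = u / y
    v = math.log(u) / (GAMMA - 1) + math.log(y)
    return x, v

x0, y0 = 2.0, 3.0
u0, v0 = forward(x0, y0)
x1, v1 = from_u_y(u0, y0)
print(x1, v1)   # should reproduce x0 and v0
```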
We shall shortly describe a simpler and more systematic way of doing this. \subsection{A generalisation of the ideal gas} We now consider a simple generalisation of the ideal gas. This will presumably not describe any real gas (but see the remarks on Nernst's law below): we include it since the results are particularly transparent. The starting point is the equation $$u = x^a y^b, \quad v = x^c y^d$$ with $a \neq b$, $c \neq d$ and $ad-bc \neq 0$. \noindent The canonical recalibration is \begin{eqnarray*}U&=&\frac 1{\sqrt J}\left(\frac 1{d-c-J+1}\right)x^{a(d-c-J+1)}y^{b(d-c-J+1)}\\ V&=&\frac 1{\sqrt J}\left(\frac 1{a-b-J+1}\right)x^{c(a-b-J+1)}y^{d(a-b-J+1)}\end{eqnarray*} where $J=ad-bc$, and so, for the special case $J=1$ (which we can always achieve by means of a simple recalibration),\\ \begin{eqnarray*}U&=&\frac 1{d-c}x^{a(d-c)}y^{b(d-c)}\\ V&=&\frac 1{a-b}x^{c(a-b)}y^{d(a-b)}.\end{eqnarray*} We have included this example since the results have a pleasing simplicity and symmetry (see, in particular, the further computations below). The case of the ideal gas can be obtained by setting $a=c=1$ and letting $b$ tend to $1$, but there are some subtleties involved, as the presence of the logarithmic term in the recalibration of the ideal gas would suggest. \subsection{The van der Waals gas} The natural calibrations are $$u = \left(x+\dfrac a {y^2}\right)(y-b)\qquad v = \dfrac 1{\gamma-1}\ln \left( \left(x+\dfrac a {y^2}\right)(y-b)^\gamma \right).$$ In this case the computations of $E^{23}$ and $E^{24}$ can be carried out by hand and we get: $$x = \frac{u}{y-b}-\frac a{y^2},\quad v=\ln(y-b)+\frac 1{\gamma-1} \ln u, $$ and $$x=e^{(\gamma-1)v}(y-b)^{-\gamma},\quad u=e^{(\gamma-1)v}(y-b)^{1-\gamma}. $$ Again this can be used to compute the two energies, but see below for a more systematic treatment which gives all such functions. \subsection{The Feynman gas} Here $u=x y$ and $v=x y^{\gamma(xy)}$ for a function $\gamma$ of one variable.
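For this model the recalibrated pair (given below) is $U=\phi(xy)$, $V=\ln(x\,y^{\gamma(xy)})$, with $\phi$ a primitive of $\frac1{\gamma-1}$; a direct computation shows that this pair has unit Jacobian with respect to $(x,y)$, matching the $J=1$ normalisation used for the generalised ideal gas above. The sketch below verifies this by finite differences; the particular function $\gamma$ and the numerical primitive are illustrative assumptions:

```python
import math

def gamma_fn(t):
    # Hypothetical temperature-dependent adiabatic index (an assumption for illustration)
    return 1.4 + 0.05 * math.tanh(t - 6.0)

def phi(t, t0=1.0, n=4000):
    # Numerical primitive of 1/(gamma - 1), integrated from t0 (midpoint rule)
    h = (t - t0) / n
    return sum(h / (gamma_fn(t0 + (k + 0.5) * h) - 1.0) for k in range(n))

def U(x, y):
    return phi(x * y)

def V(x, y):
    return math.log(x) + gamma_fn(x * y) * math.log(y)

def jacobian(f, g, x, y, h=1e-5):
    # Central-difference approximation of f_x g_y - f_y g_x
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return fx * gy - fy * gx

print(jacobian(U, V, 2.0, 3.0))  # close to 1
```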
This example was introduced by Feynman (see [Fe]) to cope with the fact that, in a real gas, the adiabatic index depends on temperature. This is a case where one genuinely requires the equation in [Co1] to verify that it is a Samuelson configuration. This turns out to be the case and the recalibrations (which are not computed by Feynman) are $$U = \phi(x y) \qquad V=\ln\left(x y^{\gamma(x y)}\right)$$ where $\phi$ is a primitive of $\dfrac 1{\gamma-1}$. This is the first example which we have met where we genuinely have to recalibrate temperature (i.e., Boyle's law holds only in the weak form that $pV$ is constant for constant temperature). For a discussion of the relevance of such recalibrations, see Chang [Ch]. The recalibrations introduced provide at least a qualitative explanation of the diagram on p. 78 of this reference, which displays comparative data of Le Duc on spirit thermometers. Here we can solve for the cases where $u$ and $x$, or $u$ and $v$, are the independent variables. (We would like to thank P.F.X. M\"uller, who pointed out this passage in Feynman's text to us.) \subsection{A synthesis} We can include all of the above (except the second example) in the form: $$u = \left(x+\frac a{y^2}\right)(y-b) \qquad v= \left(x+\frac a{y^2}\right)(y-b)^{\gamma(u(x,y))}$$ with recalibrations $$U = \phi\left(\left(x+\frac a{y^2}\right)(y-b)\right) \qquad V= \ln\left(\left(x+\frac a{y^2}\right)(y-b)^{\gamma(\phi^{-1}(U(x,y)))}\right).$$ Once again, $\phi$ is a primitive of $\dfrac1{\gamma-1}$. This is another case where it seems hopeless to check that this represents a Samuelson configuration without the theory and computational methods developed here. \subsection{A gallimaufry of formulae} For completeness, we now present a list of the expressions $(i,1,2)$ and $(i,2,1)$ for the substances introduced above.
We emphasise again that we include these results since they allow us to compute the various energy functions and further thermodynamical quantities without explicitly calculating the various permutations of the variables implicit in the definitions. We have found no indication in the literature that this is possible. Again we start with the ideal gas: \paragraph {The ideal gas:} For the ideal gas \begin{eqnarray*} (5,1,2) &=&y-\frac{y \log \left(x y^\gamma\right)}{\gamma-1};\\ (5,2,1) &=&-\frac{x \log \left(x y^\gamma\right)}{\gamma-1};\\ (6,1,2)&=&\frac{y \gamma}{\gamma-1};\\ (6,2,1)&=&\frac{\gamma x}{\gamma-1};\\ (7,1,2)&=&-\frac{y \log\left(x y^\gamma\right)}{\gamma-1};\\ (7,2,1) &=&-x - \frac{x \log\left(x y^\gamma\right)}{\gamma-1};\\ (8,1,2)&=&\frac y {\gamma-1};\\ (8,2,1)&=&\frac{x}{\gamma-1}. \end{eqnarray*} \paragraph {The generalisation of the ideal gas:} Here $$(5,1,2)= \dfrac {by} {b-a}, \qquad (5,2,1) = \dfrac{bx}{b-a},$$ $$ (6,1,2)=\dfrac{dy}{d-c},\qquad (6,2,1)=\dfrac{dx}{d-c},$$ $$(7,1,2)=\dfrac{ay}{b-a}, \qquad (7,2,1)=\dfrac{ax}{b-a},$$ $$(8,1,2)= \dfrac{cy}{d-c},\qquad (8,2,1)=\dfrac{cx}{d-c}.$$ Hence the energy functions are given by $E^{12}=\dfrac{bxy}{b-a}$, $E^{13}=\dfrac{dxy}{d-c}$, $E^{23}=\dfrac{axy}{b-a}$ and $E^{24}=\dfrac{cxy}{d-c}$. Note the pleasing symmetry of these results. We have found them useful as a litmus test for the validity of thermodynamical identities. We remark here that although this model may not correspond to any real gas, it does have the advantage that it satisfies Nernst's law---the third law of thermodynamics in the precise form given in [La]---i.e., that if we express the entropy $S$ as a function of $p$ and $T$ or of $V$ and $T$, then we get the form $S= P_1(p)T^n$ or $S= P_2(V)T^m$ for suitable positive indices $n$ and $m$, where $P_1$ is a function of pressure and $P_2$ a function of volume. We know of no other explicit model for a real gas which has this property.
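Returning briefly to the generalisation of the ideal gas: for the $J=1$ form of its canonical recalibration one can check by hand that the Jacobian $\partial(U,V)/\partial(x,y)$ equals $1$. The sketch below confirms this numerically for one arbitrarily chosen set of exponents with $ad-bc=1$ (the exponents and sample point are assumptions for illustration):

```python
import math

# Illustrative exponents chosen so that J = a*d - b*c = 1
a, b, c, d = 3.0, 1.0, 1.0, 2.0 / 3.0
assert math.isclose(a * d - b * c, 1.0)

def U(x, y):
    # U = x^{a(d-c)} y^{b(d-c)} / (d-c), the J = 1 recalibration above
    return (1.0 / (d - c)) * x**(a * (d - c)) * y**(b * (d - c))

def V(x, y):
    # V = x^{c(a-b)} y^{d(a-b)} / (a-b)
    return (1.0 / (a - b)) * x**(c * (a - b)) * y**(d * (a - b))

def jacobian(x, y, h=1e-6):
    # Central-difference approximation of U_x V_y - U_y V_x
    Ux = (U(x + h, y) - U(x - h, y)) / (2 * h)
    Uy = (U(x, y + h) - U(x, y - h)) / (2 * h)
    Vx = (V(x + h, y) - V(x - h, y)) / (2 * h)
    Vy = (V(x, y + h) - V(x, y - h)) / (2 * h)
    return Ux * Vy - Uy * Vx

print(jacobian(2.0, 3.0))  # close to 1
```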
\paragraph {A non-example} Continuing on the theme of Nernst's law, we note that we have examined the Feynman model in this respect (for natural choices of $\gamma$) and found that it again failed to reproduce this phenomenon---the problem lies in the logarithmic term in the recalibration of entropy. In view of the above remark, it was then tempting to combine the Feynman model and the above generalisation of the ideal gas, i.e. to consider the case $$u=x^a y^b,\quad v=x^c y^{\gamma(xy)}.$$ Unfortunately, these functions do not normally satisfy the $S$-condition. Despite this disappointment, this computation at least shows the usefulness of the P.D.E. characterisation of the latter condition, in particular that it can be used to eliminate possible models which cannot be recalibrated to satisfy the Maxwell relations. \paragraph {The van der Waals gas:} \begin{eqnarray*} (5,1,2)&=&\frac{y-b}{\gamma-1};\\ (5,2,1)&=&-\frac{\left(-\dfrac{2 (y-b) a}{y^3}+\dfrac{a}{y^2}+x\right) \log \left(\left(\dfrac{a}{y^2}+x\right) (y-b)^\gamma\right)}{\gamma-1};\\ (6,1,2)&=&y-\frac{(y-b) \log \left(\left(\dfrac{a}{y^2}+x\right) (y-b)^\gamma\right)}{\gamma-1};\\ (6,2,1)&=&-\frac{\left(-\dfrac{2 (y-b) a}{y^3}+\dfrac{a}{y^2}+x\right) \log \left(\left(\dfrac{a}{y^2}+x\right) (y-b)^\gamma\right)}{\gamma-1};\\ (7,1,2)&=&y-\frac{(y-b) \log \left(\left(\dfrac{a}{y^2}+x\right) (y-b)^\gamma\right)}{\gamma-1};\\ (7,2,1)&=&\frac{(y-b)^{1-\gamma} \left(\gamma \left(\dfrac{a}{y^2}+x\right) (y-b)^{\gamma-1}-\dfrac{2 a (y-b)^\gamma}{y^3}\right)}{\gamma-1};\\ (8,1,2)&=&\frac{y-b}{\gamma-1};\\ (8,2,1)&=&\frac{(y-b)^{1-\gamma} \left(\gamma \left(\dfrac{a}{y^2}+x\right) (y-b)^{\gamma-1}-\frac{2 a (y-b)^\gamma}{y^3}\right)}{\gamma-1}-x.\\ \end{eqnarray*} \paragraph {The Feynman gas:} \begin{eqnarray*} (5,1,2)&=&y-y \log \left(x y^{\gamma(x y)}\right) \phi'(x y);\\ (5,2,1)&=&-x \log \left(x y^{\gamma(x y)}\right) \phi'(x y);\\ (6,1,2)&=&\frac{\phi(x y) \left(y^{\gamma(x y)}+x \log (y) \gamma'(x y) y^{\gamma(x y)+1}\right) y^{-\gamma(x y)}}{x}+y;\\ (6,2,1)&=&\phi(x y) \left(\frac{\gamma(x y)}{y}+x \log (y) \gamma'(x y)\right);\\ (7,1,2)&=&-y \log \left(x y^{\gamma(x y)}\right) \phi'(x y);\\ (7,2,1)&=&-x \log \left(x y^{\gamma(x y)}\right) \phi'(x y)-x;\\ (8,1,2)&=&\frac{y^{-\gamma(x y)} \phi(x y) \left(y^{\gamma(x y)}+x \log (y) \gamma'(x y) y^{\gamma(x y)+1}\right)}{x};\\ (8,2,1)&=&\phi(x y) \left(\frac{\gamma(x y)}{y}+x \log (y) \gamma'(x y)\right)-x. \end{eqnarray*} In reading Feynman's treatment, one gains the impression that he is tacitly assuming that the formulae for his model are obtained simply by plugging a variable $\gamma$ into those for the ideal gas. The presence of terms involving the derivative of $\gamma$ in the above shows that this is not the case (for example, in the formulae for the important quantities $c_p$, $c_V$ and their difference and quotient). It is an easy task to compute the above quantities for the combined Feynman and van der Waals gas (using Mathematica), but the results are too elaborate to be included here. \section{Final remarks} The mathematics of thermodynamics has never ceased to fascinate mathematicians, who generally experience a sense of unease at the standard representations, in particular of the laws of thermodynamics as an axiom system (see, for example, [Se] for a critical evaluation). There have been many attempts to put them on a solid basis. We mention, in particular, Caratheodory [Ca1] and [Ca2], Lieb and Yngvason [Li] and Truesdell [Tr]. We have, of course, been influenced by these treatments and, inevitably, there are certain common points. However, we believe that our approach is sufficiently original to justify its presentation. Thus in [Li] the ordering \lq \lq adiabatically accessible from'' is centre stage but apart from that the method is completely different.
We know of two systematic approaches to thermodynamical identities (Bridgman [Br] and Jayne [Ja]) and they have influenced our treatment. Thus the idea of using Jacobians to derive identities can be found in the latter\footnote{In Jayne's notation $[A,B]$ corresponds to our $[A,B;x,y]$ for an unspecified pair of quantities $x$ and $y$. The latter can be freely chosen for any specific computation and so are not explicitly documented in his symbolism. Regardless of this choice, we always have that our $[A,B;C,D]$ is his $[A,B]/[C,D]$, which establishes the relationship between our notation and that of Jayne.}. In conclusion, we would like to express our gratitude to Iain Fraser and Elena Kartashova, who read and commented on an earlier version of our manuscript.
## Purely Radial Functions With Zero Divergence

For any purely radial vector field, $\mathbf{F} = F_r \mathbf{e_r}$. If $F_r = r^n$, are there any values of $n$ for which $\mathbf{\nabla} \cdot \mathbf{F} = 0$?

Yes there are.
$$\mathbf{\nabla} \cdot (r^n \mathbf{e_r}) = \frac{1}{r^2} \frac{\partial}{\partial r} (r^2 r^n) = \frac{1}{r^2} \frac{\partial}{\partial r} (r^{2+n}) = \frac{(2+n)r^{1+n}}{r^2} = (2+n)r^{n-1}$$

This is only zero if $n = -2$; then $\mathbf{F} = \frac{K}{r^2} \mathbf{e_r}$. Notice that this is only true if $r \neq 0$.
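This computation can be confirmed numerically by evaluating the divergence of $\mathbf{F} = r^n \mathbf{e_r}$ in Cartesian coordinates with central differences. A minimal sketch (the sample point is an arbitrary choice):

```python
import math

def div_radial(n, p, h=1e-5):
    # Numerical divergence of F = r^n e_r at the point p, via central differences
    def F(q):
        r = math.sqrt(sum(c * c for c in q))
        return [r**(n - 1) * c for c in q]  # r^n * (q / r), componentwise
    d = 0.0
    for i in range(3):
        qp, qm = list(p), list(p)
        qp[i] += h
        qm[i] -= h
        d += (F(qp)[i] - F(qm)[i]) / (2 * h)
    return d

p = (1.0, 2.0, 2.0)  # a point with r = 3
# div(r^n e_r) = (2 + n) r^(n-1): zero only for n = -2
print(div_radial(-2, p))  # close to 0
print(div_radial(1, p))   # close to 3
```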
Dedication

To the memory of all those who died in London's first Blitz, whether on the ground or in the air.

CONTENTS

Introduction
The Growing Aerial Threat to London
The Zeppelin Raids – The Men That Mattered
Preparing for the Zeppelin War
The 1915 Zeppelin Raids
The 1916 Zeppelin Raids
The 1917 Zeppelin Raids
The End of the Zeppelin War
The Coming of the Bombers
The Bomber Raids – The Men That Mattered
Preparing for the Bomber Blitz
The 1917 Bomber Raids
The 1918 Bomber Raids
The Aftermath of the First Blitz
Select Bibliography
Appendix 1: In Touch With London's First Blitz
Appendix 2: Chronology
Appendix 3: The Forces Engaged in London's First Blitz

**RANK ABBREVIATIONS**

**British:**
General: Gen
Lieutenant-General: Lt Gen
Major-General: Maj Gen
Lieutenant Colonel: Lt Col
Major: Maj
Captain: Capt
Lieutenant: Lt
Flight Lieutenant: Flt Lt
Second-Lieutenant: 2nd Lt
Sub-Lieutenant: Sub-Lt
Flight Sub-Lieutenant: Flt Sub-Lt

**German:**
Generalleutnant: GenLt
Korvettenkapitän: Kvtkpt
Hauptmann: Hptmn
Kapitänleutnant: Kptlt
Oberleutnant: Oblt
Oberleutnant-zur-See: Oblt-z-S
Leutnant: Lt
Vizefeldwebel: Vfw
Unteroffizier: Uffz

Part of a poster from 1915, depicting a raid on London (National Army Museum).

At the beginning of the war many in Britain feared immediate aerial attack by Germany's much-vaunted fleet of airships. Despite this German propaganda picture highlighting these fears, it would not be until January 1915 that the first Zeppelin bombs fell on Britain.
Two days after Britain declared war on Germany, the editor of _The Times_ newspaper informed his readers that the enemy boasted a force of 11 airships serving with their armed forces – but he claimed, reassuringly, that only two were capable of reaching Britain. The following day, 7 August, preparations for the air defence of London began when a single, unarmed aircraft took up station at Hendon, a north-western suburb of the city. Then, the next day, the Admiralty added to the defence by assigning three 1-pdr 'pom-pom' guns to anti-aircraft duties in Whitehall, close to the seat of government. But no attack materialized. In fact almost ten months would pass before German airships finally appeared menacingly over the streets of London. Yet even then Britain had little answer to the threat, and not until the late summer of 1916 could the armed forces offer a serious and deadly response. Even before Germany launched her Zeppelin offensive against London, however, the German army held high hopes of sending bomber aircraft against the city. In the early weeks of the war, as the German army pushed the Allies back and the 'race to the sea' was underway, Wilhelm Siegert, commander of the army's Flieger-Battalion Nr. 4 proposed the creation of a force to open a strategic bombing campaign against the hub of the British Empire and seat of its government. The limited range of the aircraft available at that time, however, meant that any bombing unit required a base close to the French port of Calais, which offered the shortest route to London. Confident of reaching that goal the Oberste Heeresleitung (OHL) _–_ Army High Command _–_ approved the creation of a force for the task, allowing Siegert to select the best pilots and observers, and within a few weeks they were formed at an airfield about 6 miles south of Ostend, at Ghistelles (Gistel) in occupied Belgium. 
Formed as Fliegerkorps der Obersten Heeresleitung, this highly secret unit adopted the unlikely codename Brieftauben-Abteilung Ostende – the Ostend Carrier Pigeon Detachment – to disguise its identity. But when, in November 1914, the German advance ground to a halt with Calais out of reach, the OHL shelved Siegert's plan, though it was not forgotten. The aeroplane's time would come, but for now, Germany waited for its airship fleets to strike the first blow against London from the air. THE GROWING AERIAL THREAT TO LONDON GERMANY TAKES TO THE AIR The father of rigid airship development was Count Ferdinand von Zeppelin. Others had experimented with the principles of lighter-than-air flight, but it was Zeppelin's creation, _Luftschiff_ (airship) _Zeppelin 1,_ which first took to the air in July 1900, that offered the greatest promise. His first flight preceded that of the Wright brothers by three years when in 1903 they coaxed their flimsy Wright Flyer into the air for the first manned, controlled and powered flight by a heavier-than-air machine – an aeroplane. Although both tentative and primitive, these skyward leaps marked the first steps on a journey that would bring airship and aeroplane to battle over the darkened streets of London just 12 years later. Count Zeppelin continued to develop his airships and, despite a number of potentially crushing setbacks, he persevered and engendered massive support from the German people. After fire destroyed his fourth airship in 1908, a public fund raised 6 million Marks to enable him to continue his work. This influx of money financed the creation of the Luftschiffbau Zeppelin GmbH (Zeppelin Airship Company) and also the Deutsch Luftschiffahrts-Aktien-Gesellschaft (DELAG), the world's first commercial airline. DELAG airships soon became a common sight over Germany. 
The population stared skywards in admiration and wonder; Count Zeppelin's airships became a source of national pride and, for many onlookers, they provided a highly visible demonstration of German technical superiority. The German military had already shown an interest in the count's work and in 1909 the army purchased two of his airships, numbering them Z.I and Z.II. After a period of evaluation, the army ordered two more. Not to be outdone, the navy placed its first order in April 1912, with a second following in 1913. Zeppelin, however, was not the only builder of rigid airships. In 1909 a rival company set up business. Fronted by Johann Schütte, Professor of Naval Architecture at Danzig University, and funded principally by an industrialist, Karl Lanz, Luftschiffbau Schütte-Lanz GmbH built its first airship in 1911 before selling its second model, SL.2, to the army in 1914. Although more streamlined, Schütte-Lanz airships were not greatly dissimilar in appearance to the Zeppelin, although there was one significant structural difference. While the Zeppelin Company based construction on a latticed aluminium framework – later replaced by duralumin, an aluminium alloy – those of Schütte-Lanz utilized laminated plywood. However, this wooden construction did not meet favour with senior naval officers who felt it liable to suffer catastrophic failure under continual exposure to moisture in operations over the North Sea. As such, Schütte-Lanz airships generally received more favour from the army than the navy. But in Britain when the raids began, to those on the ground looking up in awe and horror as these vast dirigibles (steerable airships) passed overhead, all airships were simply 'Zeppelins'. At the start of the war Germany possessed 11 airships, as the editor of _The Times_ had correctly informed his readers; ten commanded by the army – of which three were former DELAG commercial airships now used mainly for training new crews – and one by the navy. 
Ferdinand von Zeppelin (1838–1917). Zeppelin retired from the army in 1891 and concentrated on airship development. In 1900 he launched his first steerable rigid airship, based on original designs of aviation pioneer David Schwarz, who had died three years earlier. 'NO LONGER AN INACCESSIBLE ISLAND' In Britain, concern over Germany's airship programme grew until in 1908 the government authorized an examination of the threat posed by airships and what advantage Britain might gain from their use. As a result, the Admiralty began a programme to build its own rigid airship, hoping to evaluate the threat in practical ways. The resultant airship, _His Majesty's Airship 1r_ , also known as _Naval Airship No. 1_ – or popularly as _'The Mayfly'_ – was, like its namesake in nature, short-lived, wrecked by a squall in September 1911 before she even flew. At the same time, aeroplane development, a late starter in Britain, was gradually advancing. The first recognized aeroplane flight in Britain had only taken place in October 1908, and lasted just 27 seconds. In comparison, the following year, the French aviator Louis Blériot made his dramatic flight across the English Channel, prompting the visionary author H.G. Wells to echo the earlier warning of the newspaper baron, Lord Northcliffe, when he wrote, 'in spite of our fleet, this is no longer, from the military point of view, an inaccessible island.' Gradually the British military turned its attention towards aviation. In April 1911 the Balloon Section of the Royal Engineers disbanded to reform as the Air Battalion, while their headquarters at Farnborough was renamed the Army Aircraft Factory. The Royal Navy also began experimenting with aircraft and a year later the efforts by both organizations were concentrated in a single Royal Flying Corps, with a military wing, a naval wing, a flying school at Upavon in Wiltshire and the base at Farnborough, renamed again as the Royal Aircraft Factory. 
It soon became clear, however, that this imposed relationship between the army and navy flyers was an uneasy one. Despite the army upholding its traditional responsibility to protect the homeland, the military wing revealed in June 1914, on the eve of war, that there was still no aerial home defence organization and, furthermore, all existing squadrons were committed to providing aerial reconnaissance for any British Expeditionary Force (BEF) destined for Europe. After an uneasy relationship lasting just two years the naval wing left the Royal Flying Corps (RFC) in July 1914 to set up its own independent organization, governed directly by the Admiralty, and named the Royal Naval Air Service (RNAS). Shortly after the commencement of the war, the Admiralty formally accepted responsibility for the home defence tasks it had already been performing. As the clouds of war gathered, both the RFC and RNAS gradually increased their strength, acquiring a diverse range of aircraft with which to fight this very first war in the air. The next four years witnessed remarkable advancements in aviation and in the methods available to defend against aerial attack. THE AIR WAR TAKES OFF Although the perceived menace presented by Germany's airship fleet at the beginning of the war was great, by the autumn of 1916 its threat to London was virtually over. Zeppelins had failed to deliver the anticipated killer blow. But Germany did not let matters rest there; the effect on morale of bombing London remained a great prize. Despite its losses, their navy remained committed to the development of airships to counter Britain's improved defences, but the army, disillusioned, turned its attention to the potential offered by aeroplanes to carry an effective bomb load to London. By the spring of 1917 aeroplanes were finally available with the required range. From airfields in Belgium the air assault on London stepped up a notch, signalling the advent of the capital's first Blitz. 
But that lay in the future. For now, London awaited the attack of the Zeppelin raiders. THE ZEPPELIN RAIDS – THE MEN THAT MATTERED **First Lord of the Admiralty, Winston S. Churchill** While serving in government as Home Secretary, Churchill displayed a keen interest in naval development, leading to his appointment as First Lord of the Admiralty in October 1911. Fascinated by the developing science of aviation he enrolled for flying lessons in 1913. Although senior army figures saw the future of aircraft in a purely passive reconnaissance role, Churchill quickly recognized its offensive potential. He was also a prime mover in the separation of the naval wing from the Royal Flying Corps and the formation of the RNAS. On 29 July 1914 he decreed that naval aircraft should regard defence against aerial attack their prime responsibility and expressed his opinion that London, the Woolwich Arsenal and the naval dockyards at Chatham and Portsmouth were all prime targets for attack. Then, in September 1914, with the RFC in Belgium with the BEF, Churchill officially accepted responsibility for home defence on behalf of the Admiralty. At a time when many were still coming to terms with this new danger from the skies, Churchill created the initial defence of Britain against aerial attack, with his first line of defence centred on the RNAS squadron based at Dunkirk in France. Winston Churchill (1874–1965). Churchill took up the post of First Lord of the Admiralty in 1911 and continued until May 1915. He was an enthusiastic supporter of aviation development and was the prime mover in establishing London's earliest aerial defence. In January 1915, Churchill outlined the latest plans for London's defence, asserting that within the London-Sheerness-Dover triangle about 60 rifle-armed aeroplanes stood permanently on standby to repel air invaders – and added, bullishly, that some pilots were even prepared to ram Zeppelins in the air! 
Churchill, for one, did not see a future for the Zeppelin; 'this enormous bladder of combustible and explosive gas... these gaseous monsters' as he once described them. For a time though it looked as if he had misjudged their potential, but by the end of the war the military career of the airship was over. However, Churchill was not to oversee their final demise. As the prime architect of the disastrous Gallipoli campaign he was removed from office in May 1915. **Major-General David Henderson, RFC** After army service in the Sudan in 1898, Henderson served in the Anglo-Boer War as an intelligence officer under Sir George White, enduring the siege of Ladysmith. From October 1900 to September 1902 he served under Lord Kitchener as director of military intelligence, and on his return from the war Henderson published highly respected books on intelligence and reconnaissance. Confirming his reputation as an adventurous spirit, Henderson gained his pilot's licence in 1911 at the age of 49. He immediately developed a strong belief in the future of air power. In September 1913, having been selected to represent the army in discussions on the development of aviation, he was appointed Director-General of Military Aeronautics at the War Office, exerting control over all aspects of the military wing of the RFC – recruitment, training and equipment. On the outbreak of war in 1914, Henderson went to France as head of the RFC. However, the rapid expansion of the organization combined with the workload generated as director of military aviation affected his health, and in August 1915 he handed over control at the front to Brigadier-General Hugh Trenchard and returned to the War Office. He faced much criticism for the failure of the RFC to stop the Zeppelin raids on London in 1915, but he persevered, developing the RFC until it was a match for both airship and aeroplane raiders. 
Towards the end of the war he also played a hugely important role in the amalgamation of the RFC and RNAS into a single Royal Air Force in April 1918. Maj Gen David Henderson, RFC (1862–1921). A former army intelligence officer, in 1911 (aged 49) he learned to fly, making him the world's oldest pilot at that time. Appointed to the RFC in September 1912, he became Director of Military Aeronautics a year later. **Admiral Sir Percy Scott, RN** Sir Percy Moreton Scott joined the Royal Navy as a cadet in September 1866 and progressed to the rank of Admiral in 1913. Throughout his career he devoted much of his time to the improvement of naval gunnery. In October 1899, Scott, as captain of HMS _Terrible_ , arrived in South Africa shortly after the commencement of the Second Anglo-Boer War and came to public attention when he designed and built land mountings for 4.7in guns from his ship, enabling them to serve at the Siege of Ladysmith. Scott repeated this feat the following year when _Terrible_ arrived in China during the Boxer uprising. He continued his involvement in naval gunnery until he retired in 1913. In November 1914 the Admiralty recalled Scott as a gunnery advisor, and he also became involved in anti-submarine work. Then, following the poor anti-aircraft gun response to the London Zeppelin raid of 8/9 September 1915, he was tasked with creating an effective gun defence for the capital. Admiral Sir Percy Moreton Scott, Royal Navy (1853–1924). Scott, a gunnery expert, was called upon to take command of London's anti-aircraft guns after their poor showing during Heinrich Mathy's raid in L.13 on the night of 8/9 September 1915. **Konteradmiral Paul Behncke** As the Deputy Chief of the German Naval Staff, Behncke became one of the most vociferous supporters for a bombing campaign against London. 
As early as August 1914, following the advance of the German army into Belgium, Behncke proposed the construction of airship bases on the Belgium coast to facilitate raids against Britain. He highlighted the importance of London as a target and expected these raids 'to cause panic in the population which may possibly render it doubtful that the war can be continued'. Yet Behncke's proposals met opposition at the highest level – from Kaiser Wilhelm II. With his close ties to the British royal family and the genuine belief, like so many others, that the war would soon be over, the Kaiser forbade the bombing of Britain. Despite, this Behncke continued to press for an air campaign, yet it was not only the opposition of the Kaiser which prevented them taking place. Even by October 1914, neither the army nor the navy were yet in a position to begin raiding as they lacked suitable bases and had few airships. Behncke continued lobbying, along with others, for permission to bomb London and, under increasing pressure, the Kaiser finally accepted the reality of a limited bombing campaign on England in January 1915. However, on the Kaiser's insistence, London remained excluded. Undeterred, Behncke kept up his campaign and produced a list of recommended targets that included the Admiralty buildings, Woolwich Arsenal, the Bank of England and the Stock Exchange. In February, the Kaiser accepted the London docks as a legitimate target, but still clung naively to the notion that he could specifically forbid attacks on residential areas, royal palaces and important monuments. Yet the combination of unsophisticated bombing methods and the proximity of countless tightly packed streets around the docks meant these restrictions were impossible to observe. Then, in May 1915, the Kaiser approved bombing east of the Tower of London, followed in July by the inclusion of the whole of London. Behncke finally had his wish. Konteradmiral Paul Behncke, Reichskriegsmarine (1866–1937). 
As Deputy Chief of the Naval Staff he was one of the most persistent campaigners in persuading the Kaiser to approve aerial bombing raids on London. He highlighted the importance of London as a target and in particular the Admiralty buildings in Whitehall, as well as the docks. (Colin Ablett) **Fregattenkapitän Peter Strasser** Strasser joined the German navy as a 15 year old, before entering the naval academy at Kiel. He made good progress through the ranks, serving on a number of ships between 1897 and 1902, during which time he became expert in naval gunnery. After this period at sea, Strasser joined the Navy Office as a gunnery specialist, but in 1911 he volunteered for aviation training. Two years later, in September 1913, Kvtkpt Strasser was offered command of the Naval Airship Division. Strasser, displaying his inspirational leadership qualities to the full, quickly galvanized the moribund division and instilled fresh confidence and pride into his men. He then made his mark on the naval hierarchy, pushing his ideas for the development of the Airship Division all the way to the top at a time when some were considering disbanding the division altogether, eventually earning promotion to Fregattenkapitän. Although a harsh disciplinarian, Strasser also took great care of his men, and those that passed through his rigorous training schedule developed a fine _esprit de corps_ and hero-worship for their commander. Despite numerous setbacks during the war, Strasser never lost his unswerving faith in the Zeppelin, although Vizeadmiral Reinhard Scheer reined in his aggressive independence a little following his appointment as Commander-in-Chief of the High Seas Fleet in January 1916. However, Strasser's devotion to the Airship Division received recognition in November 1916 with his appointment as Führer der Luftschiffe, carrying the equivalent rank of Admiral second class. Fregattenkapitän Peter Strasser, Reichskriegsmarine (1876–1918). 
Strasser took command of the Naval Airship Division in September 1913 following the death of its commander, Kvtkpt Friedrich Metzing, in the crash of Zeppelin L.1. An inspirational and charismatic leader, he regularly flew on missions to understand first-hand the difficulties his crews experienced, although many considered him a 'Jonah' as those airships often returned early with mechanical problems. In spite of mounting Zeppelin losses, he maintained an unshaken belief in the value of airships right to the end.

This folded leaflet, distributed with the _Daily News_, offers simple advice for householders to employ during an air raid. Inside the leaflet, the _Daily News_ offers readers free 'Zeppelin Bombardment Insurance' – for those who subscribe to the newspaper.

PREPARING FOR THE ZEPPELIN WAR

The development of airships opened the path for a new branch of warfare: strategic bombing. With aviation still a new science, there were no established rules or tactics for aerial conflict. As such, the role of aviators on both sides constantly evolved, each devising and implementing strategies and counter-strategies in response to changing circumstances.

BRITISH STRATEGY AND TACTICS

At the outbreak of war, the Royal Flying Corps mustered five squadrons, of which four were active and had departed for France with the BEF. No. 1 Squadron, formerly assigned to airships, remained behind at Brooklands in Surrey, to be re-equipped with aeroplanes. On paper, the RFC claimed about 190 aircraft, but many of these were unfit for service. The four squadrons that went to France took 63 aircraft of various types; of those left in Britain, perhaps as few as 20 were fit for service. The RNAS established a number of air stations along the coast, mainly between the Humber and the Thames, where it based its 39 aeroplanes, 52 seaplanes and a flying boat – but perhaps only half of these were operational.
Amongst its eclectic collection the RNAS included one Vickers F.B.4 'Gunbus', the only fighter aircraft in Britain at that time, mounting a single machine gun in the observer's cockpit. In August 1914, as part of its commitment to the defence of London, the RNAS took over Hendon airfield in north-west London. On 5 September 1914 Churchill outlined his home defence plan. He announced that the front line – formed by the RNAS in France – would engage enemy airships close to their own bases, attack those bases and establish aerial control over a wide area extending from Dunkirk. An aerial strike force formed the second line, located 'at some convenient point within a range of a line drawn from Dover to London, with local defence flights at Eastchurch (Isle of Sheppey) and Calshot (Southampton Water)'. Other aircraft with an interceptor role occupied stations along the east coast. The RNAS pilots based at Hendon formed the final line in the airborne defence plan. Other defensive moves saw additional anti-aircraft guns assigned to key military installations, while instructions for the implementation of a blackout came into effect on 1 October. The Commissioner of Police issued an order that all powerful outside lights be extinguished, and that street lights be either extinguished or the tops of the lamps shaded with black paint. Railway lighting was to be reduced to a minimum, lights inside shops and other premises were to be shaded, and lights on buses and trams were to be just sufficient for the collection of fares. Notices also instructed the public to ensure they had well-fitting curtains. The war, as a journalist observed, had ushered in a new dark age:

For the first time Londoners knew what it was to be hampered a little by the darkness. They took it in the Londoners' way, good-humouredly, which is almost more than might have been expected, because the idea that his city might be in danger has scarcely yet penetrated the Londoner's easy going mind...
taking the whole thing as a not very good, but tolerable joke.

In October 1914, the RNAS requested RFC assistance in defending London. In response, four aircraft from No. 1 Squadron were dispatched, two to Hounslow and two to Joyce Green (near Dartford). By the end of the year the Admiralty defence plan had been formalized, with about 40 RNAS aircraft based at 12 stations covering the approaches to Britain's east coast between Killingholme (near Grimsby) in the north and Dover in the south. In addition, over 20 seaplanes remained on stand-by. This system provided a two-tier defence. Since it took a long time for an aircraft to climb to a height where it could engage an enemy airship, the plan anticipated that those aircraft based inland would receive enough warning of a raid on London to ascend to meet it, while in the meantime those on the coast would attain the altitude needed to intercept the raiders on their return journey. Aircraft production increased rapidly to meet the demand from both the RFC and RNAS, and by the end of 1914 they had together ordered almost a thousand new aircraft. The Royal Aircraft Factory at Farnborough had been working on producing a stable and easy-to-fly machine for reconnaissance and scouting duties. This resulted in the B.E. (Blériot Experimental) series, of which the government approved the production of the BE2c variant in great quantities. However, when fast and nimble German fighters began to appear on the Western Front, the slow and steady BE2c became an easy victim. Back in England, and relegated to home defence duties, these very same qualities, combined with new developments in armaments, eventually saw the BE2c develop into an excellent night-flying anti-Zeppelin platform.

The Imperial War Museum's BE2c (Blériot Experimental 2c)

The Royal Aircraft Factory-designed BE2c had a 90 hp engine, a wing span of 36ft 10in and was 27ft 3in long. Outclassed on the front line in France, it proved an ideal anti-Zeppelin night-fighter at home.
Although London could now, on paper at least, boast an aerial defence force, in reality it offered little opposition to raiding airships. The British understood that the use of highly inflammable hydrogen as a lifting gas, contained in a number of separate gas cells within the rigid framework, presented a great weakness in airship design, but struggled for a means to exploit this. Ordinary bullets carried little threat, merely puncturing individual gas cells with only a limited immediate effect on overall performance. Therefore, at the beginning of the war, many pilots flew into action armed with single-shot, breech-loading Martini-Henry cavalry carbines, of Zulu War vintage, firing a new 'flaming bullet'. This .45-calibre bullet contained an incendiary compound, but pilots struggled to hold their aircraft steady while using both hands to fire the carbine. A number of bombs were available for use against airships. The main problem with these was that the pilot needed to coax his aircraft up above the hostile airship to drop them, and yet the available aircraft did not have the ability to out-climb the airships. The aerial armoury included the 20lb explosive Hales bomb and 10lb and 20lb incendiary bombs. Another weapon in this early arsenal was the fearsome-sounding 'Fiery Grapnel'. This device comprised a four-pointed hook, loaded with explosives, that was trailed on a cable below the aircraft until, hopefully, it caught on the outer skin of an airship and could be detonated. Never a favourite with the pilots, it was never tested in combat. To boost the initial provision of guns in Whitehall, London received a further ten weapons detailed to serve in an anti-aircraft role, but all generally lacked the elevation or range to hit airships. Special constables, enrolled in the Royal Naval Volunteer Reserve, manned eight of these weapons – three 6-pdr Hotchkiss guns and five 1-pdr 'pom-poms' – but both types had an effective anti-aircraft range below a Zeppelin's operational height.
In addition, the Royal Marines manned two 3in naval guns in the capital, one at Tower Bridge, and the other in Green Park. Behind this flimsy defensive façade, London lay open and exposed to attack.

GERMAN PLANS

At the outbreak of war the German army had ten operational airships (nine Zeppelins and one Schütte-Lanz), although this figure included the three commercial DELAG Zeppelins acquired for training purposes. The army assigned four airships to the Western Front and three to the east. The navy had just one airship, L.3, which came into service at the end of May 1914. The army, tactically naive when it came to the deployment of their airships, had lost three of those in the west within three weeks of the start of the war. All were claimed by enemy fire while flying at relatively low level. Before the month was out they had also lost one at the Battle of Tannenberg in the east. Just two Zeppelins and one Schütte-Lanz remained operational, along with the navy's single airship, and with that detailed for naval patrol work, any threat to Britain from the air had temporarily evaporated. New airships, ordered before the war, gradually began to arrive and by the end of August 1914 the navy doubled its strength with the acquisition of L.4. This was the first of ten M-class airships, evenly distributed between the army and navy, based on the same design as the pre-war L.3. The order reached completion in February 1915. With more airships on the production line, the requirement now was for sheds in which to house them. Early in 1913 the navy selected a remote spot at Nordholz, near Cuxhaven, to build an airship base, complete with four revolving sheds; this provision would enable take-off whatever the wind direction. While Nordholz was prepared, the navy rented a hangar at Fuhlsbüttel, near Hamburg. The army initially based their airships in Belgium for service in the west but later relocated to Germany.
With increasing demands from the armed services, the Zeppelin Company proposed to produce a bigger airship by adapting a commercial design they had been working on. This design introduced duralumin (an aluminium alloy) to replace aluminium, which made the framework even lighter without a reduction in strength. This model, the P-class, with a hull of 536ft, would be 18ft longer than the L.3 design and, with a 61ft diameter, would be 13ft wider. This additional size increased the gas capacity from 880,000 to 1,126,000 cubic feet. The effect of the greater capacity was to increase the ceiling of the new Zeppelins from about 6,500 to 10,500ft. Adding a fourth engine increased speed from 52 to 63mph. The new design also increased crew safety and comfort by providing fully enclosed gondolas (the cars suspended from the hull) for the first time. The armed services readily accepted the new design and ordered 22, while plans to build new sheds went ahead too, the navy expecting those at Tondern and Hage to be ready by the end of the year. Both services also placed new orders with Schütte-Lanz. The army took delivery of LZ.38, the first of the eagerly awaited P-class Zeppelins, on 3 April 1915. The navy waited another five weeks for their first vessel, the L.10, and the order was complete by the end of 1915. The first of the navy's new sheds at Nordholz opened in late January 1915, at Tondern in late March and at Hage in April.

The revolving shed at Nordholz was originally 597ft long and designed to hold two airships. As larger airships were ordered, the angular extensions were added to increase the length to 656ft. One officer described Nordholz as 'the most God-forsaken hole on earth.'

Yet while the army and navy awaited delivery of their new airships, Kaiser Wilhelm firmly blocked any attempt to bomb England, despite the determined efforts of men like the Deputy Chief of the Naval Staff, Konteradmiral Paul Behncke, backed by Kvtkpt Peter Strasser.
While the German airship commanders strained at the leash to bomb England, prevented from doing so by Kaiser Wilhelm, British airmen experienced no such restrictions. Heeding Churchill's directive that the defence of Britain started at the airship bases in Europe, the RNAS launched a successful raid on the army airship shed at Düsseldorf on 8 October 1914, incinerating Zeppelin Z.IX. Another raid, on 21 November, daringly targeted Friedrichshafen, the home of the Zeppelin Company, causing some damage although it narrowly missed destroying the navy's new L.7 as it approached completion. Then, on Christmas Day 1914, the British attempted an ambitious combined air and sea operation, with seaplanes attacking the nearly completed Nordholz sheds. The raid failed and was to be the last of its kind, but the Germans were not aware of this. The German navy, concerned that these raids would destroy their airships before they had even begun to attack England, increased the pressure on Kaiser Wilhelm to sanction air attacks. Finally, on 9 January 1915, he gave his qualified approval. There were to be no attacks on London, but the Thames estuary and east coast of Britain were now legitimate targets.

Bomb damage in Bartholomew Close caused during the raid of 8/9 September 1915. Kptlt Heinrich Mathy dropped a single 300kg high-explosive bomb from Zeppelin L.13, the largest yet dropped on London. The bomb killed two men, gouged a great hole in the ground and shattered surrounding business premises. (IWM, LC 30)

THE 1915 ZEPPELIN RAIDS

THE CAMPAIGN BEGINS

The Kaiser's approval immediately spurred the German Naval Airship Division into action, with Strasser ordering four of his airships to attack England on 13 January 1915. However, bad weather forced the abandonment of the raid. The weather was probably Britain's greatest ally in restricting German determination to bomb the nation into submission.
Heavy rain absorbed by the outer envelope of an airship added tons of extra weight, as did snow and ice, forcing the ships dangerously low. When ice froze on the propellers, sharp fragments flung backwards with terrific force could puncture the outer envelope and internal gas cells. Thunder, lightning and fog each offered their own dangers. Strong headwinds could bring progress to a halt, while crosswinds could blow an airship miles off course. Navigation itself was basic, with each airship carrying a magnetic compass and steering by dead reckoning over the sea. This process used a combination of speed and compass direction to calculate a position, but the direction and speed of the wind greatly affected accuracy. Over land, crews used maps to identify ground features and towns, often illuminating them with magnesium-burning parachute flares. However, if the wind pushed the ship off course then landfall over England was easy to misjudge, adding confusion to later aerial identification of targets. From April 1915 airship commanders benefited from the use of radio bearings to pinpoint their position, but the accuracy often left much to be desired. At the same time, British stations were able to intercept and plot these transmissions, forcing airship commanders to keep communications to a minimum. There was also the constant threat of mechanical breakdown. After all these dangers had been surmounted, one more obstacle remained: the aircraft and anti-aircraft guns of Britain's Home Defence organization. Initially, though, this opposition was limited. After the first aborted mission, Strasser ordered a second six days later, on the night of 19/20 January 1915. This time two navy M-class Zeppelins, L.3 and L.4, successfully crossed the North Sea, but were blown off course from their intended target of the Humber. Instead, unopposed by the RNAS and RFC, they released their bombs over Great Yarmouth, King's Lynn and a number of small Norfolk villages.
Konteradmiral Paul Behncke received the news of this first successful Zeppelin raid with great delight, enthusiastically shared by the German population. Back in England, shocked families mourned the deaths of four innocent civilians while 16 others received treatment for their injuries. On 12 February, in the face of constant pressure, the Kaiser relented further and declared the London docks a permitted target. Two weeks later navy Zeppelin L.8 flew from Düsseldorf to attack London, but strong headwinds made the attack impossible. She sought refuge at Gontrode army airship base near Ghent in Belgium, but then, on her return journey, a combination of small-arms fire and engine failure forced her down and strong winds destroyed her on the ground. The Army Airship Service launched its four available airships against England on 17 March, but heavy fog over the English Channel forced them back. Returning to base, one airship suffered damage on landing and three days later, while attacking Paris, enemy fire forced down another. Then, on 13 April, anti-aircraft fire brought down a third vessel, LZ.35, near Ypres. Fortunately for the army, they took delivery of the last of their M-class airships – LZ.37 – in March, followed by their interim O-class LZ.39 and the first of their P-class airships – LZ.38 – in April 1915. With the army licking its wounds, the onus returned to Strasser and the Naval Airship Division. On 14 April Kptlt Heinrich Mathy in L.9 bombed targets around Blyth and Wallsend in the north-east of England, but the damage was negligible. One British aircraft took off from RNAS Whitley Bay and patrolled over Newcastle but, with no searchlights in operation, L.9 escaped undetected. The following day three airships – L.5, L.6 and L.7 – set out to raid the Humber, but strong winds again blew them off course. L.5 and L.6 released bombs over Suffolk and Essex before returning to base.
Strasser came to the conclusion that his older M-class Zeppelins, with their limited endurance and lifting capabilities, were not up to the task of attacking England, and put further plans on hold until his new P-class airships were ready. Now the pendulum swung back to the Army Airship Service, and in particular to Hptmn Erich Linnarz.

Army Zeppelin LZ.38, the first P-class to enter service. LZ.38 was 536ft long and powered by three 210 hp Maybach engines (later P-class vessels had four engines). Commanded by Hptmn Erich Linnarz, she was the first airship to bomb London.

Linnarz, appointed to command the first of the new Zeppelins, began very careful planning, for the general feeling amongst airship crews was that the Kaiser would soon have to declare London open to airship attack. On 29/30 April Linnarz bombed Ipswich and Bury St Edmunds from LZ.38; thick coastal mist prevented RNAS Yarmouth from opposing the raid. Linnarz returned on the night of 9/10 May. This time he made two successful bombing runs over Southend on the south coast of Essex. LZ.38 continued its probing raids and on the night of 16/17 May dropped bombs on Ramsgate and Oxney, near Dover in Kent. For the first time a searchlight illuminated a Zeppelin raid and a defence pilot – Flt Sub-Lt Redford Mulock, RNAS – actually saw a Zeppelin, although LZ.38 easily out-climbed him. Linnarz bombed Southend again on the night of 26/27 May, but with little advance warning, the five RNAS aircraft that took off were unable to climb high enough before LZ.38 turned for home. The sum total of material damage accumulated on these four raids only amounted to about £17,000 (1915 value), but they claimed the lives of six civilians and caused injuries to a similar number. Although not destruction on a grand scale, it proved valuable experience for Linnarz and the crew of LZ.38, experience they were about to put to devastating effect.
THE FIRST LONDON RAID – THE ARMY CLAIMS THE PRIZE

In May 1915 the Kaiser, under constant pressure, gave his reluctant approval for the bombing of the British capital east of the Tower of London. Overlooking a number of stipulations imposed by the Kaiser, the Army Airship Service prepared to lead the way. Around dusk on the evening of Monday 31 May, Linnarz ascended in LZ.38 from the base at Evère, just north of Brussels, while LZ.37 took off from Namur. Damage to its outer envelope forced LZ.37 to return early, but, unhindered, Linnarz flew over Margate at 9.42pm, heading for the now-familiar landmark of Southend. From there he steered a westerly course for the capital. It was now almost ten months since Britain had declared war on Germany. The feared Zeppelin onslaught on London had not materialized and the tentative probes at East Anglia, Essex and Kent had little effect on Londoners. Although the streets remained darkened, most people went about their lives as normal. The Metropolitan Police received notification of an impending raid at about 10.55pm. While they were still absorbing this unexpected news, a shocked sub-inspector of the Special Constabulary observed a Zeppelin approaching Stoke Newington Station. Moments later bombs began to fall. The first bomb on London, an incendiary, fell just south of the railway station on a house at 16 Alkham Road, the home of a clerk, Albert Lovell. It smashed through the roof, setting fire to the bedroom and back room on the top floor. The bewildered Mr Lovell, his wife, children and two guests tumbled from the house without injury and the fire brigade arrived promptly to extinguish the blaze. Linnarz passed on over Stoke Newington High Street before turning onto a course heading south, directly on a line leading towards the Tower of London and parallel with Stoke Newington Road/Kingsland Road in the direction of Hoxton. In Cowper Road, a Mr C.
Smith was in bed when he heard 'a terrible rushing of wind and a shout of "Fire" and "The Germans are here."' He rushed his children down to the basement, then went outside to find the neighbouring house on fire. An incendiary bomb had crashed through the roof; passing through the top floor it set fire to a first-floor bedroom where Samuel Leggatt's five children were sleeping. Leggatt fought his way into the room, suffering burns to his face and hands, but, helped by neighbours, pulled four of his children to safety and led them away to hospital. Tragically, in the confusion, the family believed neighbours had taken in the youngest child, three-year-old Elsie, but a policeman later discovered her blackened body under debris in the room. Another of the Leggatts' daughters, Elizabeth May, died in hospital a few days later.

The first house in London to be bombed from the air: 16 Alkham Road, in Stoke Newington.

Linnarz, in LZ.38, continued flying south, deploying a heavy concentration of bombs as he went. Two incendiaries fell on 187 Balls Pond Road, a three-storey house owned by a builder, Thomas Sharpling. A police constable who saw the bombs fall said that he 'heard the sound of machinery in the air, and suddenly the house burst into flames.' Sharpling and his family scrambled clear while a lodger leapt from a window into a blanket, but later searchers discovered the charred bodies of two other lodgers kneeling by their bed as if in prayer: Henry Good, a 49-year-old labourer, and his wife Caroline.

A Zeppelin incendiary bomb. The interior was packed with a mixture of Benzol, tar and Thermite, which burned at an extremely high temperature, easily setting wood and combustible materials alight. The outside of the bomb was covered with tarred rope which helped keep the liquid contents inside and the fire burning. The bomb in the photo is that which struck 16 Alkham Road and was retained by the occupier for many years.
Other incendiary bombs fell relatively harmlessly in Southgate Road, yet they provided a rude awakening for the residents. One of them, Mr A.B. Cook, later recalled that people were unaware what was happening at first: 'People flung up their windows and saw an astonishing sight, the roadway a mass of flames... Flames reached a height of 20ft... The sky was red with the light of flames.' In Southgate Grove the sound of the bombs affected a fragile 67-year-old spinster, Eleanor Willis, so badly that she died from shock three days later. LZ.38 continued wreaking destruction through the streets of Hoxton and passed over Shoreditch High Street. Here, at 11.08pm, three incendiaries fell on the roof of the Shoreditch Empire music hall, where a late performance was in progress. The manager calmly addressed the audience, who then left in an orderly manner as the band 'played lively airs'. Further along the High Street bombs fell on Bishopsgate Goods Station, then, within a mile of the Tower of London, Linnarz turned away to the south-east and bombed Spitalfields. As LZ.38 crossed the Whitechapel Road and then headed east over Commercial Road, bombs hit a bonded warehouse full of whisky and a synagogue, before two explosive bombs fell in the roadway in Christian Street; 12 passers-by received injuries and one, an eight-year-old boy called Samuel Reuben, who was on his way home from the cinema, died. One of the badly injured, Leah Lehrman, died two days later. These bombs fell only 600 yards from the London Western Dock.

London's first Zeppelin raid – Nevill Road, Stoke Newington

Shortly after 11pm on Monday 31 May 1915, Zeppelin LZ.38 appeared unannounced over Stoke Newington in north London. The commander, Erich Linnarz, later described the tense moment as he prepared to release the first bombs on the capital: 'My finger hovered on the button that electrically operated the bombing apparatus. Then I pressed it. We waited.
Minutes seemed to pass before, above the humming song of the engines, there rose a shattering roar... A cascade of orange sparks shot upwards, and a billow of incandescent smoke drifted slowly away to reveal a red gash of raging fire on the face of the wounded city.'

The first bomb fell on Alkham Road, the next in Chesholm Road, then Dynevor Road before LZ.38 steered over Nevill Road. An incendiary bomb crashed through the roof of an outbuilding at the back of the Nevill Arms, but failed to ignite. Two houses further on, at No. 27, another incendiary smashed through the roof causing a tremendous conflagration. Five rooms were gutted and two badly damaged. Alfred West, the 26-year-old son of the owner, suffered burns to his face. The fire was eventually extinguished by the police and neighbours as LZ.38 continued on its path of destruction. Large crowds gathered in the streets throughout the bombed area but the police reported that the behaviour was generally good and no panic ensued. However, there was an incident where a mob attacked a Russian nightwatchman as he left the burning premises of a bamboo furniture manufacturer in Hoxton Street, believing him to be German and the cause of the fire. Tension simmered the following day (1 June), and anti-German feeling ran high in Hoxton and Shoreditch where mobs attacked and damaged a number of shops owned by persons believed to be of German nationality.

Linnarz turned north-east and, passing over Stepney, dropped four explosive and two incendiary bombs, which caused only minor damage. Almost 3 miles further on he dropped an incendiary over Stratford at about 11.30pm, which smashed through the roof of 26 Colegrave Road, passing through the bedroom of Peter Gillies and his wife, within 5ft of where they lay asleep. A neighbour who saw the bomb fall said, 'I heard the droning of an aeroplane but I could not see anything. According to the noise it came lower and then I saw the bomb drop.
It was simply a dark object and I saw it drop through the roof of number 26.' Then, half a mile further on, LZ.38 dropped five bombs over Leytonstone, causing minor damage before heading back towards Southend and out over the coast near Foulness. The main bombing run, from Stoke Newington to Stepney, lasted 20 minutes. The Fire Brigade attended 41 fires; members of the public extinguished others. Seven premises were completely burnt out, but the largest fire occurred at 31 Ivy Street, Hoxton, gutting a cabinetmaker's and timber yard. The Fire Brigade calculated material damage for the night at £18,596, with seven killed. Some 3,000lb of bombs were dropped (the police recorded 91 incendiary and 28 explosive bombs and two grenades). The night of 31 May was dark, with no moon, and, although the atmosphere remained fairly clear, no searchlights located LZ.38, no guns opened fire, and hardly anyone actually saw her as she passed over the capital. The RNAS managed to get 15 aircraft airborne, but only one pilot, flying from Rochford near Southend, saw LZ.38. Engine trouble forced him down before he could climb high enough to engage. There could be no hiding the fact: a Zeppelin had passed freely over London and, facing no opposition, had bombed civilian targets at will, before departing without a shot fired in return. The German government in Berlin enthusiastically but falsely claimed that the raid 'threw numerous bombs on the wharves and docks of London.' In Britain, the government slapped an immediate press restriction on reporting airship raids, limiting coverage to official communications.

FIRST BLOOD

The German Army Airship Service took the laurels for the first successful raid on London, a fact not well received by the Navy Airship Division. The navy now prepared to send its first P-class Zeppelin, L.10, into action against Britain. On the afternoon of 4 June, L.10 and the Schütte-Lanz airship SL.3 set off, but only L.10 headed for London.
The commander of L.10, Kptlt Klaus Hirsch, misjudged his position and, believing he could not reach the city, instead bombed what he thought was the naval base at Harwich. Hirsch in fact had been carried south-west by strong winds and his bombs actually fell on Gravesend, within easy reach of the city. Fog hampered British defensive sorties that night and neither L.10 nor SL.3, which sought targets in the north of England, encountered any opposition. Two days later, on the night of 6/7 June, another raid took place, and for the first time both the navy and army sent airships out on the same night. The navy sent Kptlt Mathy in L.9 to 'attack London if possible, otherwise a coastal town according to choice'. Weather conditions forced Mathy to switch his target to Hull, where his bombs caused widespread devastation and claimed 26 lives. The army raid that night comprised three Zeppelins – LZ.37, LZ.38 and LZ.39 – and resulted in a fresh disaster for the Army Airship Service. Hauptmann Linnarz knew the route to London well now, but LZ.38 developed engine trouble early in the flight, forcing him to return to his base at Evère. Meanwhile, LZ.37 and LZ.39 ran into thick fog over the North Sea and abandoned the raid too. Advised by the Admiralty of their return, four aircraft of No. 1 (RNAS) Squadron, based at airfields near Dunkirk in France, set out to intercept them or bomb their Belgian sheds. Linnarz had already docked LZ.38 when two Henri Farman aircraft arrived over Evère. Their bombs destroyed the shed and with it LZ.38, only six days after it had successfully bombed London. Meanwhile, one of the other pilots, Flt Sub-Lt R.A.J. Warneford, flying a Morane-Saulnier Parasol, caught sight of the returning LZ.37 and turned in pursuit. Her commander, Oblt van der Haegen, made a dash for his base, attempting to keep his assailant at bay with machine-gun fire. As LZ.37 descended, Warneford climbed above her and released six bombs over the doomed airship.
LZ.37 exploded into a mass of burning flame and crashed down to earth, onto the Convent of St. Elisabeth in Ghent, killing two nuns and a civilian. Miraculously, one of her crew survived. After an eventful return journey Warneford became an instant hero, an antidote to the growing anger in Britain caused by the inability of the home defences to engage the Zeppelin raiders. He immediately received the award of the Victoria Cross, but did not live long to enjoy his success; ten days after his exploits he was killed when his plane crashed near Paris. The vulnerability of the Belgian hangars now became apparent and outweighed the benefit they offered of a shorter route to England. Both the army and navy abandoned any further plans for their regular use. The navy returned to the offensive on the night of 15/16 June. Two P-class Zeppelins, L.10 and L.11, left Nordholz and headed for Tyneside, but only L.10 reached the target. On his return the commander of L.10, Kptlt Klaus Hirsch, reported to Strasser that the evening had never really become dark, pointing out that the June and July nights were too short to provide effective cover for air attacks. Strasser agreed with Hirsch and the initial flurry of raiding by the naval airships ended. The army, meanwhile, with its last two operational airships (Z.XII and LZ.39) dispatched to the eastern front, temporarily had no offensive capability. The success of Warneford in bringing down LZ.37 with bombs confirmed for many in authority that this remained the most likely method of destroying airships. A theory much in evidence suggested that a layer of inert gas surrounded the hydrogen cells contained within the outer envelope, preventing their ignition by incendiary bullets. As such, the belief became prevalent that an airship could only be destroyed by a major trauma caused by an explosive bomb – a theory seemingly confirmed by Warneford's singular success.
In fact, ten days before the destruction of LZ.37, the War Office informed the RFC that it believed the 'flaming bullets' were 'useless against Zeppelins'. However, incendiary bullets used in combination with explosive bullets were the answer, but it was not until 1916 that the authorities finally recognized this. In Britain the uneasy relationship between the War Office and Admiralty as to the responsibility for Home Defence continued. When Arthur Balfour replaced Churchill as First Lord of the Admiralty in May 1915, just before the first London raid, he felt the defence of London was not a naval responsibility. In June, the Admiralty requested that the War Office take on the role and, after some posturing, the War Office finally stated that they hoped to be able to fulfil the obligations for home defence by January 1916. In June 1915 the RFC had 20 aircraft detailed to support the RNAS in the defence of London. These flew from Brooklands, Farnborough, Dover, Hounslow, Joyce Green, Northolt, Shoreham and Gosport. All 20 carried an armament of bombs, except two Vickers Gunbuses based at Joyce Green that mounted machine guns and two BE2c aircraft at Dover fitted with the Fiery Grapnel. Unfortunately, nine of these aircraft were the unsuitable Martinsyde S1 Scout, unsteady in the air, with a low ceiling and sluggish climbing ability. Despite this necessary co-operation, relations between the War Office and the Admiralty were not always harmonious, and, against this lack of a unified defence, the Zeppelins returned in August 1915 after a two-month absence. By this time the Kaiser had relented under pressure and approved unrestricted bombing of London.

THE SECOND LONDON RAID – THE NAVY STRIKES

Fresh from operations with the fleet, the Naval Airship Division resumed its air campaign against Britain on the night of 9/10 August and, despite launching four P-class Zeppelins – L.10, L.11, L.12 and L.13 – against London, none reached the target.
Oberleutnant-zur-See Werner Peterson in L.12 caused minor damage in Dover, but, illuminated by a searchlight, he came under anti-aircraft fire. With two gas cells punctured, L.12 began to lose height. Limping homewards, Peterson ordered all excess weight overboard to lighten his ship, but she came down in the sea off Zeebrugge. A torpedo boat towed her into Ostend where British pilots made a number of unsuccessful attempts to bomb her. While she was being lifted on to the dock by crane, the front portion of L.12 burst into flames. Peterson was left to salvage what he could from the rear portion which remained in the water.

This photograph was taken from the command gondola of L.11 at the start of the raid of 9/10 August 1915 against London. The other Zeppelins are, from left to right, L.10, L.12 and L.13. None reached the capital and L.12 was damaged by anti-aircraft fire.

Undeterred and making the most of the dark skies of the new moon, Strasser authorized another raid on London three days later, on the night of 12/13 August. Zeppelin L.9 joined the three survivors of the previous raid, but a combination of strong headwinds and engine problems prevented any of them reaching the city. Only L.10 reached England, where it bombed Harwich, but four aircraft that ascended from RNAS Yarmouth failed to intercept her. Eleven weeks had now passed since the Army Airship Service had successfully bombed London, and Strasser was unceasing in his determination to strike an equal blow for the navy. He launched his next raid on the dark and moonless night of 17/18 August, sending L.10, L.11, L.13 and L.14 against the capital. The frustrated Kptlt Mathy in L.13 turned for home early again with engine trouble, the third time in three raids for the new airship. Kptlt Alois Böcker, commanding L.14, also returned with engine problems.
Further south L.11, commanded by Oblt-z-S Horst von Buttlar, flew across Kent, dropping bombs at Ashford and on villages near Faversham before setting course for home, although his report falsely claimed great success in bombing Woolwich, some 40 miles from Ashford. Elsewhere, however, Strasser could take comfort, for finally a navy Zeppelin had reached London.

The stricken L.12 being towed back to Ostend. The RNAS flew nine sorties in an attempt to bomb the wreck, but, encountering heavy anti-aircraft fire, all were unsuccessful and one pilot, Flt Lt D.K. Johnston, was killed.

Oberleutnant-zur-See Friedrich Wenke brought L.10 in over the coast about 6 miles north of Felixstowe at around 9.00pm. Steering southwards, he skirted Felixstowe, avoided Harwich, and followed the River Stour to Manningtree in Essex. From there he steered by the railway line to Colchester and passed over Witham at about 9.50pm, before skirting the north of Chelmsford at Broomfield. From there L.10 headed west towards Waltham Abbey. The two RNAS aircraft at Chelmsford did not get airborne until 45 minutes after L.10 had passed, but over Waltham Abbey a searchlight caught the airship and the anti-aircraft gun stationed there managed to fire off two rounds before L.10 moved out of range and headed for London. Wenke later reported that the London searchlights found it very difficult to hold him in their beams at his height of 10,200ft (almost two miles). However, he then appears to have become disorientated, possibly confusing the great six-mile line of reservoirs running down the Lea Valley from Waltham to Walthamstow with the line of the River Thames. For in his report he stated that he was flying a little to the north of the Thames and began his bombing run between Blackfriars and London Bridges. Perhaps the roads running between the reservoirs added to his confusion, appearing like the Thames bridges from altitude in the dark.
The anti-aircraft gun at Edmonton opened fire with no result, then his first bomb, an incendiary, fell at 10.32pm as he flew over Lloyd Park in Walthamstow. Flying south over Hoe Street he dropped two incendiaries south of Hoe Street Station, followed by a string at the junction with the Lea Bridge Road. Here bombs destroyed four flats on Bakers Avenue, and damaged 20 tenements at Bakers Almshouses; three incendiary bombs that landed on the Leyton Tram Depot at 10.37pm caused serious fires. An explosive bomb also landed in the road between the almshouses and the depot, ripping up tramlines and causing damage to the depot. Another explosive bomb landed on the Midland Road Station at Leyton causing significant local damage, and others fell close by as L.10 continued on a south-east line across the streets of Leyton. One, which exploded in Claude Road, killed three members of the same family, while two explosive bombs that dropped in Oakdale Road and Ashville Road killed two people and injured 20, as well as badly damaging 30 houses and smashing the windows of another 123 properties. Wenke then steered over Leytonstone, where three incendiaries in Lincoln Street gutted St Augustine's Church, just a few yards from where Linnarz's final bombs had fallen during the first London raid. Wenke's final bombs landed at about 10.43pm on the open space of Wanstead Flats. As L.10 steered away in the direction of Brentwood those left in her wake evaluated the damage. Seven men, two women and a child were dead, with another 48 people sustaining injuries. The London Fire Brigade estimated material damage to property at £30,750.

Friedrich Wenke reported that L.10's initial bombs fell between Blackfriars and London Bridge, as recorded in this illustration that appeared in a German newspaper. Wenke was wrong: his bombs fell near the great reservoirs in the Lea Valley.

Approaching Chelmsford, Wenke released two final bombs, but one failed to explode.
The two aircraft that took off from Chelmsford after L.10 passed on the way to London were still in the air when she returned. These aircraft, Caudron G.3s, were tricky to handle at the best of times and not ideal for night flying. Flight Sub-Lieutenant H.H. Square flew in pursuit of L.10, but was unable to claw his way up high enough and abandoned the chase. Both Square and the pilot of the other Caudron, Flt Sub-Lt C.D. Morrison, suffered bad accidents on landing and their aircraft were destroyed. L.10 escaped that night, but, as with LZ.38, success was short lived. Returning from a North Sea patrol 16 days later under the command of Kptlt Hirsch, she flew into a tremendous thunderstorm. It seems that lightning ignited leaking hydrogen, causing L.10 to explode and crash into an area of tidal flats off Cuxhaven; the entire crew perished.

Zeppelin L.10, the first of the Naval Airship Division airships to bomb London. L.10 entered service in May 1915, based at Nordholz. She participated in five raids on England before lightning destroyed her on 3 September 1915.

THE THIRD RAID – SOUTH-EAST LONDON TARGETED

Avoiding the period of the full moon, raids on Britain commenced again on the night of 7/8 September. This time the army returned to the fray, with three airships heading for London. A heavy ground mist blanketed the coastal airfields, thwarting any attempts to oppose the raid. LZ.77, commanded by Hptmn Alfred Horn, came in over the Essex coast at about 10.55pm just south of Clacton, but quickly became lost. Having flown erratically over Essex and Suffolk for a few hours, Horn eventually unloaded six bombs over villages around Framlingham and departed over Lowestoft at about 2.25am. The other two airships were more successful, though there is evidently much confusion in the reports of the routes they took over London, mainly because they both passed through the same area.
It seems likely that Hptmn Richard von Wobeser steered the recently rebuilt SL.2 in over the coast at Foulness at about 10.35pm, and took a westerly course over Billericay and Chigwell before turning south over Tottenham and heading for the Thames. Flying over the Millwall Docks at about 11.35pm, von Wobeser dropped 11 bombs, all of which landed on the western side of the Isle of Dogs, along the line of West Ferry Road. One explosive bomb landing in Gaverick Street demolished three houses and injured 11 people. Only one bomb caused slight damage to the dock itself and an incendiary landed on a sailing barge moored off the dock entrance; both men on board later died of their injuries. SL.2 then crossed the Thames, turned eastwards, and dropped an incendiary on the Foreign Cattle Market, Deptford, which the Army Service Corps used as a depot, destroying some boxes of tea and bags of salt. However, a short distance further on a bomb dropped on the home of 56-year-old William Beechey, at 34 Hughes Fields, killing him, his wife Elizabeth and three of their children aged between three and 11. Continuing eastwards, von Wobeser steered SL.2 over Greenwich, where eight incendiaries fell, four of them harmlessly in Greenwich Park. He continued over Charlton where he dropped more incendiaries, and finally to Woolwich where a last explosive bomb landed close to the Dockyard Station. The Woolwich anti-aircraft guns only received notice of the approach of SL.2 at 11.50pm and opened fire two minutes later, loosing off four rounds, but had no time to switch on the searchlight. The gunners estimated her to be flying at about 8,000ft and travelling at between 50 and 60mph. By 11.54pm, SL.2 was out of range, crossing back over the Thames and passing close to the Royal Albert Dock. She headed out on a north-east course, passing over Bradwell-on-Sea at 1.38am on the morning of 8 September.
This German propaganda postcard published in 1915 may illustrate SL.2 attacking the Isle of Dogs on 7 September 1915. Most of the 11 bombs fell along West Ferry Road, while one demolished three houses in Gaverick Street and another hit a sailing barge.

Shortly before midnight, as von Wobeser's raid ended, Hptmn Friedrich George, commanding LZ.74, approached the northern outskirts of London. Having made landfall over Clacton at about 10.40pm, George took a westerly course, flying over Mersea Island and Chipping Ongar to Broxbourne. There he turned south and, on approaching London, he released 45 bombs to lighten his ship. George believed he was over Leyton, but his bombs fell on Cheshunt, some 15 miles north of central London, causing significant damage among the horticultural nurseries and large houses that proliferated in the area. The anti-aircraft gun at Waltham Abbey opened fire at 11.55pm; its crew estimated LZ.74 to be flying at a height of 9,000ft and travelling at about 40mph, but the searchlight was unable to get a fix on the target. LZ.74 continued south and passed out of range of the gun at 11.59pm. Later accounts suggest George had dropped all but one of his bombs on Cheshunt, but official reports indicate that he must have retained almost half his load. Following a course due south, Hptmn George brought LZ.74 directly over the City of London. Shortly after midnight he dropped one sighting incendiary in the Fenchurch Street area, causing a small fire in a bonded warehouse. Then, passing directly over the Tower of London, the Zeppelin followed a course towards the south-east, dropping two explosive bombs on Keetons Road, Bermondsey, within half a mile of the Surrey Commercial Docks and, in Ilderton Road, Rotherhithe, another fell on a house let out in tenements, killing six and injuring five people.
LZ.74 then turned towards New Cross, dropping another nine bombs and causing more death and destruction, before departing London on a south-easterly course, reaching the Bromley/Chislehurst area at about 12.35am. There, LZ.74 turned north-east and passed close to the Purfleet anti-aircraft guns, which opened fire at 12.53am. The searchlight only caught her momentarily, her speed estimated at 40mph and height at 10,000ft. The gun ceased firing two minutes later. LZ.74 was still flying at between 40 and 50mph when she attracted more anti-aircraft fire, from Harwich at 2.11am, before she passed out to sea four minutes later. In total 18 people were killed in the raid and at least 28 injured. However, official estimates put material damage at only £9,616.

THE FOURTH RAID – CENTRAL LONDON BLASTED

The success of the Army Airship Service raid immediately stung Strasser into action. The following night he launched L.11, L.13 and L.14 against London, while the older L.9 headed north and bombed the chemical plant and ironworks at Skinningrove, between Redcar and Whitby. Weather conditions were favourable for once, and there were high hopes of success as the London-bound airships set out. However, only an hour into the flight L.11 developed engine trouble and returned to base at Nordholz. Kptlt Böcker in L.14 had reached Norfolk when he too encountered problems with his engines. Realizing he could not reach London, he eventually offloaded his bombs around East Dereham, 14 miles west of Norwich, then set course for home.

Kptlt Heinrich Mathy. Mathy transferred to the Airship Division in January 1915 and became the best-known of all the airship commanders. He took part in 15 raids against England, four of these on London.

This left the spotlight on Kptlt Heinrich Mathy, the 32-year-old commander of L.13 – and he did not disappoint Strasser. On his three previous flights Mathy had returned early with engine problems, but this time there would be no recurrence.
L.13 made landfall over King's Lynn at about 8.45pm. Mathy followed the line of the River Ouse and Bedford Level Canal to Cambridge, from where the glow on the southern horizon illuminated the route to London. From Cambridge, Mathy appears to have followed the road running through Buntingford to Ware in Hertfordshire, before he circled to the north-west of London and set his course for the city. Coming in over the suburb of Golders Green, Mathy dropped two explosive and ten incendiary bombs at about 10.40pm, damaging three houses as he checked his bombsight. Following the Finchley Road for a while, L.13 then veered off over Primrose Hill and Regent's Park. By-passing Euston Station at a height of about 8,500ft, he slowed his speed to 37mph and dropped his first bomb on central London, an incendiary, which fell on Woburn Square in Bloomsbury at about 10.45pm. Continuing over Russell Square, he dropped more incendiaries before releasing his first explosive bomb; it landed in the central gardens of Queen's Square, just missing the surrounding hospital buildings, but shattering hundreds of windows. Approaching Holborn, L.13 released a number of bombs close to Theobalds Road. One damaged the offices of the National Penny Bank and blew out the front of The Dolphin Public House on the corner of Lamb's Conduit Passage. The blast killed 23-year-old Henry Coombs who was standing by the entrance to the pub and injured 16 others. Having bombed Gray's Inn, Mathy then steered a little to the north over Gray's Inn Road, dropping one explosive and two incendiary bombs on Portpool Lane, severely damaging a number of tenements, killing three children and injuring about 25 other people. Twisting to the south-east over Clerkenwell Road, the Zeppelin meted out more damage in Leather Lane and Hatton Garden, and badly damaged buildings in Farringdon Road between Cross Street and Charles Street (now Greville Street).

From there L.13 passed over Smithfield Market and entered the City of London, the financial heart of Britain. Amongst his bombload Mathy carried a single massive 300kg bomb, the first unleashed on Britain. He called it the 'Love Gift' and dropped it in the middle of his bombing run; it fell on Bartholomew Close, just a short distance from St Bartholomew's Hospital (St Bart's) and blasted a hole in the ground eight feet deep. All around was destruction. Fire gutted a printing works while the concussion of the blast shattered shopfronts, scattering battered remnants of stock across the road. Two men emerging from a public house started to run for cover as they saw L.13 overhead but they 'were blown to pieces' by the blast. The clock hanging in the close offered silent witness to the destruction – it stopped at 10.59pm. From the control gondola of L.13 Mathy watched the bomb fall and observed: 'The explosive effect... must be very great, since a whole cluster of lights vanished in its crater.' Now, having passed just to the north of St Paul's Cathedral, at least ten more incendiary bombs rained down on the narrow streets surrounding the Guildhall: Wood Street, Addle Street, Basinghall Street and Aldermanbury. However, despite fierce fires breaking out, which gutted at least two warehouses, the historic Guildhall escaped harm. Most of London's anti-aircraft guns had been firing away at L.13 from about 10.50pm with no effect, and an urgent message issued from central control three minutes later exasperatedly stated: 'All firing too low. All shells bursting underneath. All bursting short.' An official memorandum later stated: 'Ideas both as to the height and size of the airship appear to have been somewhat wild.' However, the guns, and 20 searchlights that Mathy counted, may have proved distracting because he passed within 300 yards of the Bank of England without taking any action.
An American reporter, William Shepherd, who witnessed the scene wrote: Among the autumn stars floats a long, gaunt Zeppelin. It is dull yellow – the colour of the harvest moon. The long fingers of searchlights, reaching up from the roofs of the city are touching all sides of the death messenger with their white tips. Great booming sounds shake the city. They are Zeppelin bombs – falling – killing – burning. Lesser noises – of shooting – are nearer at hand, the noise of aerial guns sending shrapnel into the sky. Shepherd watched as one shell burst quite close, and someone next to him shouted, 'Good God! It's staggering!', but the airship moved steadily on. Next, L.13 crossed London Wall, where Alfred Grosch was working at the telephone exchange. He recalls what he saw as he looked out of the window: A streak of fire was shooting down straight at me, it seemed, and I stared at it hardly comprehending. The bomb struck the coping of a restaurant a few yards ahead, then fell into London Wall and lay burning in the roadway. I looked up, and at the last moment the searchlight caught the Zepp, full and clear. It was a beautiful but terrifying sight.

The burnt-out premises of Messrs Glen and Co., Woollen Merchants, at 4–5 Addle Street. A single incendiary bomb dropped by L.13 caused this damage.

Mathy now approached Liverpool Street Station preparing a horrific finale. Just outside Broad Street Station, only 50 yards from the entrance to Liverpool Street Station, an explosive bomb smashed into a No. 35A bus, passing over the driver's head, down through the floor to explode under the conductor's platform at the rear. The driver was wandering in the road in shock, staring at his hand from which a number of fingers were missing, the conductor was dead and the passengers, all thrown to the front of the bus, were 'shockingly injured and killed.' Other bombs fell around the station, causing great destruction around Norton Folgate and the southern end of Shoreditch High Street.
One bomb landed in the street and blasted a passing No. 8 bus. It killed the driver and eight passengers. Another bomb blew a hole in the roadway over a railway tunnel, severing the water main and damaging the electricity and gas mains. The last anti-aircraft gun in central London ceased firing at 11.00pm, but as Mathy steered away northwards, the gun on Parliament Hill put a shell uncomfortably close to him as he passed over Edmonton, persuading him to climb to a little over 11,000ft as he turned for home.

Two buses were hit by bombs dropped from L.13. This No. 8 bus was in Norton Folgate, north of Liverpool Street Station, when a bomb exploded in the road, killing the driver, F. Kreppel, and eight passengers.

Only three BE2cs, from RNAS Yarmouth, took to the air but they did not see L.13 and one pilot, Flt Sub-Lt G.W. Hilliard, died in a landing accident. The damage inflicted was the highest recorded for any single airship raid of the war, London suffering to the extent of £530,787. Amongst the rubble, the bodies of 22 Londoners awaited recovery and 87 more bore injuries that would remind them of this terrible night for the rest of their lives.

CONCERNS FOR LONDON'S DEFENCE

Concerns over London's vulnerability to aerial attacks increased with each incursion over the capital. Four raids had hit the city, and, during the last, thousands observed L.13 sailing relatively unmolested over the heart of London. No aeroplanes appeared in opposition, while falling shrapnel from anti-aircraft shells fired at the airship caused more damage on the ground than in the air. Politicians demanded answers, the newspapers posed questions, dubbing the night 'Murder by Zeppelin', and the public felt alone and unprotected in the face of the previously unimagined horrors of aerial bombardment. However, although the Germans predicted that the bombing would cause panic on the streets of the city, they were wrong.
It did nevertheless engender a universal anger amongst the population, shocked that Germany could indiscriminately target women and children in this way. From all quarters there arose a demand for a significant counter to the Zeppelin menace. In response, the Secretary of State for War, Lord Kitchener, ordered Maj Gen David Henderson, Director-General of Military Aeronautics and commander of the RFC, to his office. Kitchener, under great pressure himself, demanded of Henderson, 'What are you going to do about these airship raids?' Even though Henderson pointed out that the defence of London was in the hands of the RNAS, Kitchener threatened to hold Henderson personally responsible if the RFC did not oppose the next raid. Accordingly, on 9 September, Henderson ordered BE2cs to both Writtle (near Chelmsford) and Joyce Green and began to overhaul his resources. Matters were stirring in the corridors of the Admiralty too. In September, in an effort to improve the situation, they appointed the gunnery expert Admiral Sir Percy Scott, recently recalled from retirement, as sole commander of London's artillery defence. Scott wasted no time in attending to his task. A quick inventory told him that his command amounted to 12 guns manned by part-time crews; he ignored the ineffective and outdated 'pom-poms'. He immediately sent to France for a 75mm auto-cannon, a gun far in advance of anything available at that time in Britain. With this weapon, an anti-aircraft gun mounted on an automobile chassis, he formed the nucleus of a mobile anti-aircraft battery. From all available sources, he pressed guns into service and at the same time established fixed gun positions with linked searchlight stations, while recruiting and training the personnel to operate them. Fortunately for London, three more attempted raids on consecutive nights in September all failed and it was a month before a Zeppelin reached London again.

A Vickers 3-pdr quick-firing gun mounted on a Lancia chassis.
The mobile anti-aircraft battery set up headquarters at Kenwood House in Hampstead under Lt Col Alfred Rawlinson, an army officer recently appointed to the Royal Naval Volunteer Reserve as deputy to Sir Percy Scott. Major-General Henderson meanwhile sent out reconnaissance parties to find suitable sites for new forward airfields positioned astride the north-eastern approaches to London. He secured farmland at Suttons Farm near Hornchurch and at Hainault Farm near Romford, adding them to the RFC roster. In addition, an observer cordon was organized to operate beyond the forward airfields, in telephone communication with the War Office. In a very short time portable canvas hangars arrived at the new airfields, landing grounds were marked out and a group of newly qualified pilots, awaiting overseas postings, reported for duty on London's new front line. Initial plans only required this hastily arranged response to provide cover from 4–12 October 1915, but an extension was authorized. The day after it had originally been due to expire, the Zeppelins returned to the capital.

THE FIFTH RAID – 'THEATRELAND' AND THE ARTILLERY RESPONSE

It had been Strasser's intention to commence raids on Liverpool in October, but the weather forecast for the night of 13/14 October precluded that. Instead, he launched five Zeppelins against London. Alongside L.11, L.13 and L.14, Strasser now had two new airships, L.15 and L.16, both fitted with four new 240hp engines, an improvement on the 210hp versions carried by the others. The airship fleet planned to rendezvous over the North Sea prior to launching the attack, but with no sign of Oblt-z-S von Buttlar's L.11, Heinrich Mathy, leading the raid from L.13, ordered the other ships to move off. They reached the coast of north-east Norfolk near Bacton between 6.20pm and 6.45pm; then at North Walsham, about 5 miles inland, the fleet encountered mobile machine-gun fire, the new first line of defence.
As the four airships continued towards London, the leading trio gradually drew away from Oblt-z-S Werner Peterson in L.16 and lost contact. The Admiralty received early advice of the impending raid via reports from the North Sea lightships and increased radio traffic. At 5.30pm the six RFC airfields around London received a warning order of Zeppelin activity. This was followed at 6.55pm by an order for Northolt, Joyce Green, Suttons Farm and Hainault Farm to have an aircraft on stand-by; an hour later each airfield received instructions to get an aircraft into the air if weather permitted. Thick ground fog prevented any take-off from Northolt or Suttons Farm, but 2nd Lt F.H. Jenkins ascended from Hainault Farm at 8.00pm in a BE2c, followed 20 minutes later by Lt R.S. Tipton from Joyce Green. As these two aircraft laboured up into the sky – the BE2c taking some 50 minutes to climb to 10,000ft – the Zeppelins continued on their way. It appears that L.13 and L.15 kept more or less together, flying over Thetford and Saffron Walden before diverging near Bishop's Stortford. Kptlt Böcker in L.14 had already separated from this pair, heading towards the Thames estuary where he intended to pass to the east of London before swinging around and approaching from the south. Mathy planned to circle around the west of the capital and come in from the south-west, while Kptlt Joachim Breithaupt in L.15, in his first raid on London, followed the shortest route in from the north. At about 8.45pm, Breithaupt approached Broxbourne, Hertfordshire. A 13-pdr anti-aircraft gun opened fire on the looming airship, to which she replied with extraordinary accuracy, dropping three bombs and knocking over the gun crew, damaging a Royal Engineer lorry and destroying the officer's car. L.15 passed over Potters Bar, High Barnet, Elstree and then Wembley, before finally turning eastwards. Releasing ballast, she rose to 8,500ft and headed for the centre of London.
As he progressed, Breithaupt kept Hyde Park on his port side until he approached the Thames close to the Houses of Parliament. As the anti-aircraft gun in Green Park opened fire on L.15 and two searchlights found her, a journalist watching her progress noted that 'she looked a thing of silvery beauty sailing serenely through the night'. The famous landmarks of London were clear even from a mile and a half up in the sky and Breithaupt ordered bombing to commence at Charing Cross Station at 9.35pm.

Kptlt Joachim Breithaupt. Breithaupt took command of L.15 on 12 September 1915, having commanded L.6 for the previous four months. The 'Theatreland' raid of 13 October 1915 was the first time he had flown over Britain.

Just at that moment, an army officer on leave from Flanders was driving along the Strand in a taxi when the driver suddenly came to a halt, got out and ran off. The officer looked up at the sky and later recalled: Right overhead was an enormous Zeppelin. It was lighted up by searchlights, and cruised along slowly and majestically, a marvellous sight. I stood gaping in the middle of the Strand, too fascinated to move. Then there was a terrific explosion, followed by another and another.

The first bomb fell in Exeter Street, just off the Strand, in the heart of London's 'Theatreland'. The bomb hit a corner of the Lyceum Theatre, causing limited damage inside but killing one person and injuring two others in the street. Another bomb fell seconds later close to the corner of Exeter and Wellington streets. An interval was in progress at the Lyceum and many of the audience were buying refreshments from street traders and at The Old Bell public house. The bomb gouged a large crater in the road and fractured a gas main. As the dust settled, amid the debris, rubble and flames lay 17 bodies while another 21 people sustained terrible injuries. A third bomb fell in Catherine Street near the Strand Theatre.
Scenes of devastation and horror confronted the theatregoers as they emerged into the bomb-scarred streets, but high above them Breithaupt continued on his path, disconnected from the trail of destruction below. He later recalled: 'The picture we saw was indescribably beautiful – shrapnel bursting all around (though rather uncomfortably near us), our own bombs bursting, and the flashes from the anti-aircraft batteries below.' Indeed, for the first time a Zeppelin commander was aware of a significant barrage of anti-aircraft fire.

The scene of devastation at the junction of Wellington and Exeter Streets in Covent Garden after Breithaupt's raid. The building on the right is the Old Bell public house. The bomb here killed 17 and injured 21.

From Catherine Street, L.15 continued to Aldwych, where two bombs killed three people and injured 15. Incendiary bombs then fell on the Royal Courts of Justice as L.15 turned onto a northerly course before more explosive bombs fell on Carey Street and Lincoln's Inn, an explosive bomb in Old Square badly damaging the 17th-century stained-glass window of Lincoln's Inn Chapel. Chancery Lane suffered next, then L.15 crossed over Holborn, dropping more incendiaries and an explosive bomb on Gray's Inn before turning east again and releasing incendiaries over Hatton Garden and one in Farringdon Road, close to where bombs had fallen in the raid of 8/9 September. Breithaupt now steered L.15 towards the City of London. Unknown to him, at the very same time Cdr Rawlinson (Sir Percy Scott's deputy) had been involved in a hair-raising dash across London from Wormwood Scrubs to the Honourable Artillery Company grounds near Moorgate with the new French 75mm auto-cannon. He swung the gun into action just as Breithaupt approached. With no time to lose, Rawlinson quickly estimated range and height and gave the order to fire. The high-explosive shell burst short of the target, but it was immediately clear to Breithaupt that this was something new.
He swiftly released two bombs, which fell in Finsbury Pavement, and started to climb. By the time the gun fired a second shot L.15 was over Aldgate, where she dropped an explosive bomb that landed on Minories, partly demolishing a hotel, bank and restaurant as well as causing damage to numerous other buildings in Houndsditch and Aldgate High Street. Rawlinson's second round burst above L.15, forcing Breithaupt to release water ballast to enable him to climb rapidly. Before turning away to the north, Breithaupt dropped two more explosive bombs, narrowly missing the Royal Mint. The raid had lasted only ten minutes. Damage caused on 13 October by an explosive bomb at Gray's Inn, one of London's four Inns of Court and home to many top barristers' chambers. (IWM, HO.5) In addition to Rawlinson's gun, Woolwich had fired 137 rounds, and guns at Clapton Orient football ground, Nine Elms, West Ham, Finsbury Park, Parliament Hill, Green Park, Tower Bridge, King's Cross, Foreign Office, Blackheath, Honor Oak, Barnes and Waterloo had added their firepower. Percy Scott's new defences had made their presence felt. However, the aerial response was less successful. A combination of ground mist and engine problems contrived to restrict the RFC response, while no RNAS aircraft flew defence sorties that night. Breithaupt's raid was by far the most successful of the night. Heinrich Mathy in L.13 had passed to the west of London, but lost his way, dropping 12 bombs around Guildford and Shalford in Surrey, while attempting to locate the Hampton waterworks. Then, flying eastwards, he unexpectedly found himself in close proximity to Böcker near Oxted. The corner of Minories and Aldgate High Street. The bomb partly demolished the London and South Western Bank and the hotel above, also severely damaging a restaurant next door. A woman injured in the explosion subsequently died.
Having crossed the Thames estuary, Böcker, in L.14, also lost his way, flying south until he reached the English Channel at Hythe. Circling over the nearby Otterpool Camp at about 9.05pm, he released eight bombs, killing 14 soldiers, wounding 12 and also killing 16 army horses. Böcker then turned back inland and, after dropping bombs on Frant and Tunbridge Wells, he encountered Mathy and L.13 at about 10.40pm. The two airships exchanged signals then diverged; Böcker headed north-west towards Croydon while Mathy flew north. At 11.19pm Böcker dropped 13 explosive bombs near the busy railway junction at East Croydon, but only one caused minor damage to the track; the rest demolished or damaged nearby houses, killing and injuring a number of civilians. From there Böcker turned eastwards intending to head home, but near Bromley, he almost collided with L.13. Some accounts claim that the two commanders later exchanged words over the incident. After this encounter Mathy flew on to attack Woolwich, although he thought he was attacking the Royal Victoria Dock on the other side of the Thames. L.13 flew in slowly, coming under intense anti-aircraft fire from the Woolwich guns. The first bomb dropped at 11.50pm and two or three minutes later it was all over. Although three explosive bombs and over 20 incendiaries hit the artillery barracks and arsenal, they recorded little significant damage. Casualties amounted to four men wounded in the barracks and nine in the arsenal, one of whom later died from his injuries. Bomb damage at the rear of 92 Chamber Street, close to the Royal Mint. Four people (Frederick Coster, John Wilshan, Reuben Pizer and Mary Hearn) were injured in the explosion. Three explosive bombs dropped on Oval Road, East Croydon, from Kptlt Böcker's L.14 killed three civilians, wrecked six houses in the street and damaged four others. The remaining two Zeppelins, L.16 and L.11, never got close to London.
Peterson in L.16 lost touch with the others on the journey across East Anglia and at 9.35pm, as L.15 was bombing London, he came under anti-aircraft fire from the gun at Kelvedon Hatch, south of Chipping Ongar. Perhaps with memories still fresh of the raid of 9/10 August – when, in command of L.12, he was forced down into the sea by anti-aircraft fire – Peterson turned away from London and dropped nearly 50 bombs on Hertford, 20 miles north of the city. In his report he incorrectly claimed hits on extensive factories or railway premises in East London. Von Buttlar in L.11 missed the rendezvous over the North Sea and came in over the coast about an hour after the others. He dropped a few bombs over the countryside of north-eastern Norfolk and returned home. In all, total casualties for the raid amounted to 71 killed and 128 injured (in London and Croydon the total was 47 killed and 102 injured). Despite the improvement in London's defences, the Naval Airship Division suffered no casualties, although ground fire may have caused some engine damage to L.15, contributing to an eventful homeward voyage. The Zeppelins did not come again in 1915. The arrival of the new moon in November brought the return of darkened skies, but with them came strong gales. Then, in both December 1915 and January 1916, the new moon heralded an extended period of fog, rain, sleet and snow.

THE 1916 ZEPPELIN RAIDS

A PERIOD OF CONSOLIDATION

In the lull between the 1915 and 1916 raids, both the German army and navy took delivery of more P-class airships. However, the navy lost the new L.18 in November, just ten days after commissioning her, in an accident at her home base at Tondern. In December, deliveries of ten new Q-class Zeppelins began, five going to each service. These new Zeppelins were basically P-class airships with the addition of two extra gas cells, increasing the length from 536ft to 585ft and improving lifting capacity and ceiling.
Of more interest to Strasser, though, was the order, placed in July 1915, for the next development in airship design, the R-class, better known in Britain as the Super Zeppelins. These six-engined monsters were 650ft long and increased the lifting capacity and operational ceiling even further. In Britain, the RFC's temporary defence arrangements for London ended, and by 26 October those pilots recently drafted in to serve on the forward airfields departed to other duties. A number of conferences then took place between the War Office and the Admiralty about the future responsibility for the defence of London. Finally, on 10 February 1916, an agreement stipulated that enemy aircraft approaching Britain were the navy's responsibility. Then, once they crossed the coastline, the responsibility passed to the army – and the RFC. Plans to reinstate October's temporary defence plan for London on a permanent basis received approval and a proposal for the formation of ten home defence squadrons was accepted. The War Office, now responsible for the defence of London, also adopted Sir Percy Scott's plan for two gun rings around the capital, with a third ring of searchlights, known as 'aeroplane lights', beyond them. Initially Lord French, Commander-in-Chief, Home Forces, exercised command of the London guns through seven area sub-commanders. In February 1916 the War Office recalled Lt Col M. St L. Simon, R.E., from France to supervise the construction of gun and searchlight positions in the London area. Later, in December 1916, he became Anti-Aircraft Defence Commander, London. L.32, one of the new R-class airships, known by the British as the Super Zeppelins. With six 240hp engines and a length of 649ft 7in, these vessels were 113ft longer than the P-class and 64ft longer than the interim Q-class. A weakness in the patrol pattern flown by the RFC during the October raid was recognized and solved.
With the second aircraft from each airfield not taking off until the first had landed, a gap appeared in the protective cover. By extending the patrols from 90 minutes to two hours, with the second aircraft beginning its ascent 30 minutes before the first was due to land, this gap closed and ensured unbroken air cover during a raid. Also in February 1916, the RFC grouped all the aircraft assigned to the London defences in No. 19 Reserve Aeroplane Squadron (RAS), with headquarters at Hounslow.

THE RAIDS RECOMMENCE

On the night of 31 January/1 February 1916, before all the defensive changes were in place, the Naval Airship Division launched nine Zeppelins on their biggest raid so far. The primary target this time was Liverpool, but difficult weather conditions and engine failures resulted in the fleet ranging over a wide area of the Midlands and the North, dropping bombs on what seemed appropriate targets. Despite 22 RFC and RNAS aircraft taking to the air, they met with no success in the thick foggy weather, while the rudimentary take-off and landing provisions made the whole process fraught with danger. Only six landed again without incident, and two pilots suffered fatal injuries in crash-landings. The German forces suffered too; Zeppelin L.19, on her first raid over England, came down in the North Sea on the return journey and her crew were lost in controversial circumstances. However, as far as the Naval Airship Division was concerned, Britain lay as open and vulnerable to their attacks as ever. As the RFC and RNAS prepared for the next Zeppelin onslaught, work continued on the development of weapons to counter the threat. The principal armament remained the 20lb Hales bomb and a 16lb incendiary device. In February 1916 a new missile joined this limited arsenal – the Ranken dart – but, again, this weapon required a height advantage before use. More importantly, work was progressing on the development of explosive and incendiary bullets.
Until now the Lewis gun had generally offered little threat to enemy airships, but that would soon change. However, this highly significant leap forward in the war against the Zeppelins was still a few months away. March 1916 brought another respite for London when the German navy withdrew five of their newest airships in an attempt to solve the recurring problems of engine failure. In the meantime, in appalling weather, three of the older vessels raided the north of England on the night of 5/6 March, causing significant damage, particularly in Hull. Snowstorms and strong winds prevented any aircraft opposition.

A SHIFT IN FORTUNE

At the end of March the Zeppelins came again. This time, on the night of 31 March/1 April 1916, both the army and navy launched attacks. The army airships raiding East Anglia achieved nothing; three Zeppelins set out, but only one, LZ.90, came inland as far as Ipswich. She returned home without unloading her bombs. The navy dispatched seven Zeppelins, with London as their target. Despite four of the most experienced Zeppelin commanders – Mathy, Böcker, Peterson and Breithaupt – taking part, none reached London, although both Böcker in L.14 and Peterson in L.16 claimed they had. Böcker, with Strasser on board, dropped his bombs on towns in Essex but claimed Tower Bridge as his target. Peterson falsely claimed hits on Hornsey in north London, but in fact his bombs fell on Bury St Edmunds. Elsewhere, L.22 – one of the new Q-class Zeppelins – switched targets and attacked Cleethorpes in Lincolnshire, where a bomb falling on a church hall killed 32 soldiers of the Manchester Regiment billeted inside, and injured another 48. The ever-present problem of mechanical failure forced L.9 and L.11 to return early, but the journey proved particularly dramatic for the crews of L.13 and L.15.

AN AIRFIELD AT NIGHT – ZEPPELIN ALERT!

An RFC pilot is shown here preparing for a Zeppelin patrol in early 1916.
The devastating Zeppelin raids of September 1915 led to an urgent demand for the RFC to do more to oppose future raids. Accordingly, land was sought for two new airfields on the north-eastern approaches to London. Suitable farmland was acquired in Essex near Hornchurch at Suttons Farm, designated Landing Ground No. II, and Hainault Farm, near Romford, designated Landing Ground No. III. Suttons Farm comprised 90 acres of corn stubble bounded by low hedges. Two BE2c aircraft were dispatched to Suttons Farm, with one pilot to be on stand-by each night for anti-Zeppelin duty. Besides carrying bombs fixed in racks under the wings, the BE2c also carried an upward-firing Lewis gun. Two canvas hangars designated for Suttons Farm were erected on 3 October and a landing ground marked, outlined with flares. Initially these were just old petrol cans with the tops cut off, half filled with petrol and cotton waste then set alight. By arranging the lines of flares in a specific order, individual landing grounds could be identified by disorientated pilots from the air. Although originally intended only as a temporary airfield, Suttons Farm became a permanent base, eventually home to a flight of No. 39 (Home Defence) Squadron – the most successful Zeppelin-fighting squadron of the war. Mathy, in L.13 en route for London, attacked an explosives factory near Stowmarket, but a hit from a 6-pdr anti-aircraft gun holed two of L.13's gas cells. He turned for home, losing gas as he went. The crew jettisoned equipment and the remaining bombs to lighten the ship, allowing L.13 to limp back to base at Hage. Breithaupt, who had bombed London so successfully the previous October, was not so lucky this time. L.15 flew in over Dunwich in Suffolk at 7.45pm, following a course to the Thames via Ipswich and Chelmsford. His route took him directly into the area defended by No. 19 RAS. Notified of the Zeppelin's approach, 2nd Lt H.S.
Powell took off from Suttons Farm at 9.15pm, quickly followed by 2nd Lt A. de Bathe Brandon from Hainault Farm and 2nd Lt C.A. Ridley from Joyce Green. As these pilots urged their BE2c planes up into the night sky, Breithaupt turned towards Woolwich. Six minutes after take-off, Ridley spotted L.15 caught in a searchlight ahead of him, but several thousand feet higher. He attempted to close and opened fire with his Lewis gun at extreme range, but then the searchlight lost contact, and so did Ridley. When the searchlights picked up Breithaupt again, the anti-aircraft guns on the stretch of the Thames between Purfleet and Plumstead exploded into action. At about 9.45pm the Purfleet gun scored a direct hit. 2nd Lt Brandon had also spotted her and, as she turned away from the guns and lights, he set an interception course. L.15 started to lose height and the crew quickly established that two gas cells were virtually empty and two others leaking; to lighten the ship over 40 bombs were dropped on open ground near Rainham. At about 9.55pm Brandon closed with L.15 over Ingatestone, about 15 miles north-west of Purfleet, and from a position about 300 or 400ft above, he released three Ranken darts as the machine guns on the upper platform of L.15 opened fire at him. The darts missed the target, so Brandon came around again and prepared an incendiary bomb, but, fumbling in the dark, he took his eyes off the target and almost overshot. Having failed to find the launching tube, he rested the incendiary bomb in his lap and dropped more darts without result. Turning to make a third attack, the inexperienced Brandon – with only 30 hours' flying time behind him – became confused by the speed of action, found himself flying away from L.15 and lost contact. Zeppelin L.15 was brought down about 15 miles north of Margate, following damage inflicted by the Purfleet anti-aircraft gun. The crew made desperate attempts to keep her aloft but she had lost too much hydrogen.
The crew of L.15 were eventually rescued and taken to Chatham aboard the destroyer HMS _Vulture_. These pictures show two of the crew under guard; the original captions identify the men as a warrant officer (left) and leading mechanic (right). Breithaupt was now free of pursuit, but was in a bad way. Jettisoning all excess weight, L.15 continued to lose height, and as he approached the coast he began to doubt whether he could nurse the ailing ship to Belgium. At 10.25pm he sent a last radio message – 'Require immediate assistance between Thames and Ostend – L.15' – then threw the radio overboard and flew out over Foulness. Just after 11.00pm the stress to L.15's frame proved too much, and at 2,000ft, following 'an ominous crack', her back broke and she crashed into the sea about 15 miles north of Margate. One of the crew, Obersignalmaat Willy Albrecht, drowned; the others clambered up onto the top of the outer covering and waited. After five hours floundering uncomfortably at sea, a British destroyer rescued Breithaupt and the surviving 16 members of the crew, taking them to Chatham as prisoners of war. Attempts to tow L.15 failed and the wreckage finally sank off Westgate, near Margate. The attack of 31 March/1 April marked the start of a run of five consecutive nights of raiding. Three of these were targeted on London, but because of strong winds only one airship, army Zeppelin LZ.90 commanded by Oblt Ernst Lehmann, got close. On 2 April he dropped 65 incendiary and 25 explosive bombs as he approached Waltham Abbey, causing only minimal damage to a farm, breaking windows, roof tiles and killing three chickens. Then, just after midnight, as the Waltham Abbey anti-aircraft guns opened up with a heavy bombardment, Lehmann turned for home. Seven aircraft from No. 19 RAS took off to intercept LZ.90 but only one claimed a sighting.
Strasser realized his raids were not having the effect he had originally anticipated, but he retained absolute belief that his airships would eventually bring Britain to its knees. To ensure he retained the support he needed, Strasser allowed the issue of reports such as that released to the Kaiser after the raid of 31 March/1 April. It falsely claimed success against specific targets in London including an aeroplane hangar in Kensington, a ship near Tower Bridge, fires in West India Docks and explosions at Surrey Docks as well as the destruction of a munitions boat at Tilbury Docks with massive casualties.

REORGANIZATION AND RE-ARMAMENT

In March 1916 those aircraft defending London were placed under the new No. 18 Wing, commanded by Lt Col Fenton Vesey Holt. Three weeks later, on 15 April, No. 19 RAS became No. 39 (Home Defence) Squadron. Its various detachments, currently spread around the outskirts of London, were concentrated at Suttons Farm and Hainault Farm. The headquarters flight remained for the time being at Hounslow, where all training continued to take place until a new airfield could be located north-east of London. As the squadron quickly began to take shape, it received the welcome news in June that home defence squadrons were finally able to divorce themselves entirely from training responsibilities, which they had until now combined with their defensive duties. With this positive change they became part of No. 16 Wing, which in July was simply designated Home Defence Wing. While these pilots honed their skills, elsewhere technical developments were finally about to provide them with a weapon to strike fear into the hearts of the Zeppelin crews. Unknown to those men, who flew into battle suspended beneath more than a million cubic feet of highly inflammable hydrogen gas, British aircraft would soon be hunting them armed with machine guns firing a deadly combination of explosive and incendiary bullets.
Initial trials of John Pomeroy's .303 explosive bullet in June 1915 failed to convince the RFC authorities of its practicality. Later, in October, another bullet with both explosive and incendiary effects, designed by Flt Lt F.A. Brock (of the famous fireworks family), underwent trials with the RNAS. After further trials in February 1916, the Admiralty placed an order. Pomeroy persevered with his own bullet, and in May the RFC requested an initial batch while also ordering 500,000 of the Brock bullet; a similar-sized order for Pomeroy's bullet followed in August 1916. At least one aircraft from No. 39 Squadron used part of the trial batch of Brock bullets in action on 25 April. That same month the RFC also tested a phosphorus incendiary bullet produced by an engineer, J.F. Buckingham. All these bullets needed further enhancement and none stood out as being superior to the others, but they showed great promise. Orders for the Buckingham bullet followed too, and in June 1916 a new tracer bullet, the Sparklet – developed by the makers of the Sparklet soda siphon – was added to the arsenal. And this was the answer. Hydrogen only becomes flammable when mixed with oxygen; a combination of these new bullets would, it was hoped, blow a hole in the gas bags, letting hydrogen escape and mix with air, then a following incendiary bullet would ignite the combustible mixture.

THE LAST RAIDS OF SPRING

Bad weather thwarted an attempt by naval Zeppelins to attack London on the night of 24/25 April 1916. The following day the army sent five Zeppelins on a course for the city. Despite good weather, only Hptmn Erich Linnarz, the man who had captained LZ.38 on the first successful bombing of London 11 months earlier, came close to reaching the target. Linnarz now commanded LZ.97, one of the new Q-class Zeppelins, and was determined to reach London again. Coming in over West Mersea at about 10.00pm on 25 April, he followed the course of the Blackwater river inland.
Passing Chelmsford, he headed west until, at about 10.45pm, he dropped over 40 incendiary bombs on a line from Fyfield to Chipping Ongar in Essex. These caused virtually no damage. Then, 15 minutes later, having steered a south-west course and believing he was over London, Linnarz began to bomb again. His second-in-command, Oblt Lampel, recalled the feelings of the crew at that moment: [The Commander's] hand is on the buttons and levers. 'Let go!' he cries. The first bomb has fallen on London! We lean over the side. What a cursed long time it takes between release and impact while the bomb travels those thousands of feet! We fear that it has proved a 'dud' – until the explosion reassures us. Already we have frightened them; away goes the second, an incendiary bomb. It blazes up underneath and sets fire to something, thereby giving us a point by which to calculate our drift and ground speed. But Linnarz's crew had miscalculated, and this second batch of bombs actually dropped over Barkingside, some 8 miles north-east of the city. LZ.97 followed a curving route southwards towards Newbury Park as searchlights flicked to and fro across the sky. Oberleutnant Lampel described them 'reaching after us like gigantic spiders' legs; right, left and all around.' Then the guns opened up. LZ.97 circled over Seven Kings then headed back towards the east, dropping a single bomb on Chadwell Heath. However, Linnarz was not yet out of danger. Barkingside lay in the midst of the airfields of the newly organized No. 39 Squadron. With word of the Zeppelin's approach, two aircraft took off from both Suttons Farm and Hainault Farm. Captain A.T. Harris (later Air Marshal Arthur 'Bomber' Harris), commanding B Flight at Suttons Farm, was first up at 10.30pm and 15–20 minutes later he saw the searchlights reaching out to the north. At 7,000ft he observed LZ.97 turning and climbing over Seven Kings. 
Struggling up to 12,000ft, Harris made for the Zeppelin, which passed over him 2,000ft higher up. In spite of the long range, he opened fire with his Lewis gun, but almost immediately the new Brock explosive ammunition jammed. He turned, got behind Linnarz's ship, cleared his gun and fired again – but once more it jammed. Then, as he worked to clear it a second time, his BE2c slipped off course and the target disappeared into the blackness of the night. The other pilot from Suttons Farm, 2nd Lt William Leefe Robinson, took off about 15 minutes after Harris. Then, having climbed to 7,000ft and attracted by the sweeping searchlights, he caught sight of LZ.97. Climbing towards her, he opened fire, but estimated the target to be 2,000ft or more above him. Three times he got into position below LZ.97, but each time his gun jammed; he fired off only 20 rounds before losing sight of her. Linnarz and LZ.97 escaped, but it was a sobering experience for Oblt Lampel who later wrote: 'It is difficult to understand how we managed to survive the storm of shell and shrapnel.' After the departure of LZ.97, the skies over London were empty for many weeks. The navy Zeppelins' commitment to the German fleet in connection with the Battle of Jutland (31 May–1 June 1916) and then the advent of short summer nights prevented any more raids on Britain for almost three months. 2nd Lt William Leefe Robinson. Robinson transferred from the Worcestershire Regiment to the RFC in March 1915. After initially serving as an observer in No. 4 Squadron in France, he qualified as a pilot at Upavon in September 1915. A year later he was awarded the V.C. and became a national celebrity.

LONDON'S AERIAL DEFENCE MAKES READY

With this lull in the German offensive, the RFC was able to continue its reorganization.
However, in June 1916, with the approach of the Allied offensive on the Somme, the demands for more aircraft on the Western Front led to a reduction in the February home defence proposal from ten to eight squadrons, but even then fewer than half the aircraft required were available to bring these squadrons up to strength. Further pressure reduced this force again in July to six squadrons, with the promise of additional squadrons later in the year to compensate. No. 39 Squadron was in fact one of the few up to full strength, with 24 aircraft, including six of the new BE12. This single-seat version of the BE2c had an improved engine, giving it a better rate of climb and, for the first time on home defence, a Vickers machine gun fitted with interrupter gear. This allowed firing through the propeller arc – a major improvement on the upward-firing, bracket-mounted Lewis used on the BE2c. In August, No. 39 Squadron finally grouped all three flights on the north-eastern approaches to London as the Hounslow flight took up residence at a new airfield at North Weald Bassett. At the end of June positive feedback on RFC trials of the new bullets paved the way for pilots to discard bombs from their armament. The recommended load for a BE2c pilot was now a Lewis gun firing a mixture of explosive, incendiary and tracer ammunition along with a box of Ranken darts. However, the RNAS steadfastly refused to abandon bombs entirely. On the ground, great improvements were apparent in the number of guns and searchlights available, but at 271 guns and 258 searchlights these figures remained far short of the planned national levels of 490 each.

RETURN OF THE RAIDERS

The navy airships returned to the offensive on the night of 28/29 July. This raid was remarkable only in the fact that it saw the arrival over England of the first R-class or Super Zeppelin, L.31, commanded by Heinrich Mathy.
In the pipeline for over a year, the design had suffered a number of production delays, but finally it was ready. At some 650ft long and with a diameter of 78ft, its 19 gas cells contained almost 2 million cubic feet of hydrogen, a vast increase over the 1.2 million of the Q-class and the 1.1 million of the P-class. This increase in gas capacity allowed the Super Zeppelins to climb to 17,400ft. However, at an operational height of 13,000ft the six engines could reach 60mph when loaded with between three and four tons of bombs. While hopes were high for this long-awaited addition to the Zeppelin fleet, their arrival over Britain coincided with the introduction of explosive and incendiary bullets to home defence squadrons. This ten-Zeppelin raid caused virtually no damage. Mechanical problems caused four to return early and fog severely restricted the impact of the others. However, it proved a useful exercise for Mathy and his new ship. Eight naval Zeppelins followed up with a raid two days later. All headed for the east coast, except Mathy in L.31 who steered for London, but unpredicted high winds disrupted the attack leaving L.31 to wander briefly over Kent before returning to Nordholz. The naval airships set out again on the night of 2/3 August. Following a similar pattern, five headed for East Anglia as Mathy in L.31 made another strike for London. As in the previous raid, Mathy only reached Kent, where vigorous defensive fire from batteries on the south coast forced him away. Zeppelin L.31 was the first of the 'Super Zeppelins' to appear over Britain, on 28 July 1916. The photo shows L.31 flying over the German battleship _Ostfriesland,_ one of those that bombarded Scarborough, Hartlepool and Whitby in December 1914. Mathy was joined by another of the Super Zeppelins – L.30, commanded by Horst von Buttlar, now promoted to Kapitänleutnant – on the night of 8/9 August, when nine naval airships raided the north-east coast, Hull in particular suffering badly. 
A short lull followed as the cycle of the full moon passed; then, on the night of 24/25 August the navy Zeppelins returned.

THE SIXTH LONDON RAID – THE SUPER ZEPPELINS REACH THE CAPITAL

Thirteen naval Zeppelins set out on the sixth London raid. L.16 and L.21 came in over Suffolk and Essex, where they caused minor damage before turning for home. L.32, another Super Zeppelin on its first raid over England and with Strasser on board, reached Kent. Greatly delayed by strong winds at altitude, the commander, Werner Peterson, decided it was too late to strike for London, so having flown along the coast from Folkestone to Deal he dumped his bombs at sea and returned. Nine others dropped out with mechanical difficulties or through delays caused by the strong winds. Only one airship made for London: L.31 commanded by Heinrich Mathy. Mathy appeared off Margate at 11.30pm, and for once the bad weather worked in his favour. The night was wet with extensive low cloud, and, although the engines could be heard, L.31 became visible only momentarily between gaps in the cloud. Mathy followed the line of the Thames and, having passed between North Woolwich and Beckton, he turned south-west over Blackwall. This took him over the Millwall Docks on the Isle of Dogs, where he dropped his first bombs on or adjoining West Ferry Road. These destroyed a number of small houses and an engineering works, falling only a few yards from those dropped by SL.2 on 7 September 1915. Crossing to the south bank of the Thames, bombs fell in Deptford on the Foreign Cattle Market, home to the Army Service Corps' No. 1 Reserve Depot. They also caused severe damage to the London Electric Supply Company and the Deptford Dry Dock. Mathy continued, following the south bank of the Thames back to the east until, over Norway Street in Greenwich, he turned south and dropped bombs on the railway station.
The following morning the stationmaster turned up for work proudly displaying 'a wonderful black eye' and a face covered with scores of minute cuts, caused by a shower of stone and shell fragments from when the bomb had exploded. Besides causing much damage to the station, the bomb also blasted a hole in the wall of the public house opposite and inflicted superficial damage on a cluster of almshouses. Other houses in Greenwich suffered too as L.31 passed over towards Blackheath; there, an explosive bomb partly demolished a shop and house in the inappropriately named Tranquil Vale. Three other explosive bombs and an incendiary fell on the Horse Reserve Depot of the Army Service Corps, injuring 14 soldiers. From Blackheath L.31 continued to Eltham, where a bomb blasted a house in Well Hall Road, killing the occupants. Mathy then steered a north-east course, dropping bombs on Plumstead, where one demolished a house at 3 Bostall Hill, killing a family of three. The raid commenced just after 1.30am on the morning of 25 August and was over about ten minutes later, during which time 36 explosive and eight incendiary bombs rained down. The low cloud made it very difficult for the ground defences to home in on L.31 and it appears that no searchlights located her until she passed over Eltham. Only after L.31 had completed its path of destruction across south-east London did the anti-aircraft guns begin to blast the first of 120 rounds skywards. However, one observer reported that all shells were bursting over the target. Second-Lieutenant J.I. MacKay of No. 39 Squadron was the only RFC pilot to catch even a brief glimpse of L.31 that night, before Mathy headed for home; he was pursued out to sea by two RNAS pilots from Eastchurch and Manston. Although the raid of 24/25 August had successfully reached London, it failed to penetrate to the heart of the city. 
Instead, the bombs fell largely on poor housing on the south-east outskirts, although the damage was estimated at £130,000 – the greatest single damage was caused by the bombs dropping on the workshops, offices and stores of Le Bas & Co, an industrial company based in West Ferry Road. Altogether nine civilians died and about 40 soldiers and civilians were injured. As far as Strasser was concerned, this marked the start of a big effort for the raiding period planned between 20 August and 6 September. However, L.31 would not be available for service again for another month, as a rough landing necessitated extensive repairs. Another raid had come and gone. While the increased gunfire offered the public some comfort, what they really wanted to see was a Zeppelin, one of the 'baby-killers', brought down before their own eyes. Wealthy industrialists and newspaper editors offered up monetary rewards for the first Zeppelin brought down over Britain, but that goal remained elusive.

THE TIDE TURNS – THE LOSS OF SL.11

Strasser's next big raid, aimed at London on 2/3 September, coincided with one planned by the army. That night, a total of 12 navy airships (11 Zeppelin and a Schütte-Lanz) and four army airships (three Zeppelin and a Schütte-Lanz) set out to bomb the capital. It was the largest single raid of the war, but it ended in disaster for the Army Airship Division. In fact the whole raid turned out badly. The naval Zeppelins encountered rain, hail and snowstorms over the North Sea, widely dispersing their attack. One bombed the East Retford gasworks in Nottinghamshire and another caused most of the night's casualties when bombing Boston in Lincolnshire, while at least six others wandered largely ineffectively over East Anglia. Only three reached Hertfordshire, north of the capital, and these were preparing to strike London when events forced them to change course and make for home. That night the army airships found themselves centre stage.
This is believed to be the wooden-framed SL.11 under construction at Leipzig between April and August 1916. SL.11, commanded by Hptmn Wilhelm Schramm, officially entered service on 12 August. The raid of 2/3 September was its first over Britain. Heavy rain squalls over the North Sea forced one of the army Zeppelins, LZ.97, to turn back before crossing the coast. Another, LZ.90, came inland at Frinton on the Essex coast, penetrating as far as Haverhill in Suffolk, where it dropped six bombs before turning away and flying out north of Yarmouth. Oberleutnant Ernst Lehmann, commanding LZ.98, came inland over New Romney on the Kent coast. Lehmann had almost reached London during the raid of 2/3 April earlier in the year, but the Waltham Abbey guns forced him to turn back on that occasion. This time he approached London across Kent, passing Ashford, Maidstone and Sevenoaks, bearing towards Woolwich. The other army airship to penetrate inland was SL.11. Making landfall over Foulness, Essex, at about 10.40pm, she steered north-west across Essex into Hertfordshire, with the intention of sweeping around London and approaching the capital from the north-west. The newly commissioned SL.11 was on her first mission; her commander, London-born Hptmn Wilhelm Schramm, had previously led LZ.93 on two unsuccessful raids against the capital in April. The British authorities, intercepting a great volume of radio traffic, were aware that a raid was imminent. No. 39 Squadron received orders to commence patrolling at about 11.00pm. Second-Lieutenant William Leefe Robinson, now commanding B Flight, was first up from Suttons Farm at 11.08pm in a BE2c. He began the long climb to 10,000ft to patrol the line from his home base to the airfield at Joyce Green. Within the next five minutes, Lt C.S. Ross took off from North Weald in a BE12, flying the line to Hainault Farm, while 2nd Lt A. de Bathe Brandon, flying a BE2c from Hainault Farm, covered the line to Suttons Farm. 
Neither Ross nor Brandon saw any sign of enemy airships and returned to their home airfields, where Ross made an emergency landing and crashed. Second-Lieutenant J.I. MacKay took over his patrol from North Weald and 2nd Lt B.H. Hunt replaced Brandon in the air. At 1.07am, 2nd Lt F. Sowrey ascended from Suttons Farm to take up Robinson's patrol, but Robinson had not yet started to descend. At 1.10am, flying at 12,900ft, he noticed searchlights attempting to hold a Zeppelin in their beams to the south-east of Woolwich. The airship was Lehmann's LZ.98. Anti-aircraft guns opened on the airship, turning it away to the east where it dropped bombs near Gravesend. Robinson estimated he was flying about 800ft above the Zeppelin and, preferring to maintain this advantage, closed only slowly on his target for the next ten minutes. However, his quarry steered into clouds and disappeared from view. Having lost his target Robinson turned away and, sighting the landing flares at Suttons Farm in the distance, headed for home. At about 1.50am, some 15 minutes into his homeward flight, Robinson noticed a red glow over north-east London. Although well overdue back at Suttons Farm, he thought the glow could be the result of bombing and flew on to investigate. He was correct. After a circuitous route, Wilhelm Schramm in SL.11 set his course for the centre of London and, passing to the south of St Albans, he released a string of bombs between London Colney and South Mimms at about 1.20am. Schramm then continued towards Enfield before heading south in the direction of Southgate, dropping a few bombs as he went, before changing course again, heading west towards Hadley Wood where he dropped two bombs at about 1.45am. Schramm then set course for the centre of London once more, but as he passed over Hornsey a searchlight picked him out and the anti-aircraft gun in Finsbury Park immediately opened fire. 
The central London guns soon joined in and then, as SL.11 veered away from the immediate danger and headed towards Tottenham, the east London guns opened up too, adding to the great crescendo of noise over the city. Londoners in their thousands, awoken by the storm of shot and shell the like of which they had never heard before, tumbled from their beds, peering up at the drama being enacted in the night sky. Over Wood Green the searchlights lost Schramm's ship in clouds and, now free of their hold, he resumed bombing as he passed over Edmonton; but, when a searchlight caught him again, the crowds gathering all over London cheered vociferously. Now, flying at around 11,000ft, SL.11 released more bombs over Ponders End and Enfield Highway at about 2.14am, before the ever-vigilant Waltham Abbey area searchlights and guns locked on to her. Unknown to the crew they had but ten minutes to live. Robinson already had SL.11 in his sights. Elsewhere, MacKay and Hunt had also turned their aircraft towards this illuminated target. Following his earlier experience with LZ.98, Robinson decided to abandon height advantage and put his nose down to close with the airship as quickly as possible, while SL.11 attempted to gain height and shrug off the net of light beams that held her tight. At only 27ft in length, Robinson's BE2c was dwarfed by the 570ft bulk of SL.11 looming above him, yet some of those watching from below momentarily glimpsed him as he flitted through the searchlight beams. Turning some 800ft below SL.11, Robinson flew a path from bow to stern directly under the airship and emptied a drum of mixed Brock and Pomeroy ammunition into her from his upward-firing Lewis gun. To his dismay the burst of fire made no impact other than to alert the airship crew to his proximity. They immediately opened fire, their Parabellum and Maxim machine guns spitting out 'flickering red stabs of light' in the dark.
Undeterred, Robinson returned to the attack, this time firing off another ammunition drum all along one side of SL.11, but again with no effect. As he prepared to make a third attack a shell from the anti-aircraft gun at Temple House exploded very close to SL.11 and may have damaged one of the engine gondolas, but then the searchlights lost her again, causing the guns to cease firing as Robinson swung in to the attack. A postcard issued at the time supposedly showing SL.11 over London (possibly suggesting Bruce Castle, Tottenham). However, publishers keen to cash in on the popularity of such postcards often reproduced them as depicting different raids; this picture later appeared as L.31 approaching London on the night of 1/2 October. Another contemporary postcard depicting SL.11 held by searchlights and attacked by anti-aircraft guns. An earlier version of this card depicted the 8/9 September 1915 raid over London. With the airship now at 12,000ft he took up a position behind her and about 500ft below, before emptying a third drum of mixed ammunition into one point of the rear underside. For a moment nothing happened, and then Robinson reported: 'I had hardly finished the drum before I saw the part fired at glow. In a few seconds the whole rear part was blazing.' SL.11 was doomed. Taking urgent evasive action, Robinson avoided the rapidly blazing airship as it started to fall. Lieutenant MacKay saw SL.11 burst into flames while still a mile from the target, but Lt Hunt had closed to 200 yards and was preparing to commence his own attack when she exploded. In the sudden flare of light Hunt caught a glimpse of another airship less than a mile away but lost her in the glare. This was L.16, and her commander, Kptlt E. Sommerfeldt, reported that 'a large number of searchlights... had seized an airship travelling from south to north, which was being fired on from all sides with shrapnel and incendiary ammunition... 
It caught fire at the stern, burned with an enormous flame and fell.' At least five other scattered naval airships saw the destruction of SL.11 from a distance and turned for home. Members of the local fire brigade douse the smouldering remains of SL.11 while RFC men recover what they can from the wreckage. One fireman, asked for his opinion of Robinson, commented: 'He's given us plenty to do this night, he have, but us don't begrudge it. Us'd turn out any durned night for a month if us had a working job like this afore us.'

THE ATTACK ON SL.11

In the early hours of 3 September 1916, a BE2c, piloted by 2nd Lt William Leefe Robinson, attacked German Army airship SL.11. It became one of the most celebrated aerial duels of the Zeppelin war. Robinson's first two passes were unsuccessful. Because the Lewis gun on the BE2c was fitted to fire upwards, pilots would attempt to make their attack from beneath the target. Observers on the ground noted Robinson's aircraft flitting through the searchlight beams, banking 'as it turned almost on its beam ends in wheeling round, in its efforts to secure an advantage over its gigantic foe.' As Robinson manoeuvred into position the crew of SL.11 opened up with their machine guns. The top gun platform, merely a shallow recess in the outer envelope, was the most exposed position on any airship. It was normal for at least one man to remain here on lookout throughout the flight. This could involve endless hours in freezing temperatures, the only shelter from the buffeting winds provided by a small canvas screen that shielded the guns. Access to the gun platform was via a hatch at the top of a ladder that ascended through the structure of the airship. The favoured gun was the air-cooled Parabellum MG.14, firing a 7.92mm bullet. Robinson's third attack destroyed SL.11, the first airship to be brought down on British soil. The burning wreckage of SL.11 came to earth at the village of Cuffley, Hertfordshire. There were no survivors. 
A vast crowd of Londoners watched the flaming descent of the stricken airship, its flames illuminating the darkness 30 miles away. As it plummeted to earth the crowds erupted, giving vent to 'defiant, hard, merciless cheers.' It was as though the threat of the Zeppelins, under which Londoners had lived since the war began, had disappeared in that blinding flash of burning hydrogen. People danced in the streets, hooters sounded, bells rang and trains blew their whistles; in the morning thousands upon thousands celebrated 'Zepp Sunday' by joining the great exodus to Cuffley to see the charred remains of the once-mighty airship for themselves. TOP LEFT: The first in a series of four postcards depicting the final moments of SL.11. The original given caption was 'Airman Attacks – 2.18am'. TOP RIGHT: The second postcard in the sequence, titled, 'Well Alight – 2.20am'. BOTTOM LEFT: The third card, 'Nearing the End – 2.22am'. BOTTOM RIGHT: The final card in the series, 'Final Rapid Fall – 2.25am'. An eyewitness described the plummeting airship as 'an incandescent mantle at white heat and enveloped in flame.' Although the authorities were aware that the destroyed airship was a Schütte-Lanz, they saw a benefit in doing nothing to dispel the belief that the wreckage was that of a Zeppelin. To the public the name Schütte-Lanz meant little, but everyone knew and hated the Zeppelins, the despised and feared 'baby-killers'. Accordingly, the victim became known as Zeppelin L.21. Five days later, an instant celebrity, William Leefe Robinson received the Victoria Cross from King George V in a ceremony at Windsor Castle – and from the various substantial financial rewards he received, he bought himself a new car. The loss of SL.11 struck at the failing heart of the Army Airship Service and they never raided Britain again. Disbanded within a year, the army turned its attention to the Gotha bomber. 
Strasser, however, was determined to continue the offensive as soon as the moon entered its next dark cycle towards the end of September, still convinced that his airships could strike an effective blow against Britain. Part of the mass of wire from SL.11 that remained after the wooden framework was devoured in the fire. The Red Cross sold off pieces of the wire for a shilling each to raise money for the war-wounded. (Colin Ablett)

THE END APPROACHES

Although Strasser remained confident, morale took a further blow with the destruction on 16 September of L.6 and L.9, now serving as training ships, following an explosion in their Fuhlsbüttel shed. In the meantime, the naval crews appeared happy to accept that the wooden construction of SL.11 may in some way have contributed to her demise, for at this time no one in Germany knew that explosive/incendiary bullets igniting the hydrogen had been the cause of her destruction. On the night of 23/24 September, Strasser was ready to launch his airships against Britain once more. In all, 12 were detailed for the raid. Eight of the older airships were to strike against the Midlands and North, while four Super Zeppelins – L.30, L.31, L.32 and L.33 – received orders for London. The only significant action by the northern group was the bombing of Nottingham by L.17; the rest made little or no impact. Kptlt Alois Böcker. Previously commander of L.5 and L.14, Böcker took command of the new L.33 on 2 September 1916. The raid on London in the early hours of 24 September was L.33's first and last. The Super Zeppelins took off from their bases at Ahlhorn and Nordholz around lunchtime on 23 September. Von Buttlar in L.30 and Böcker in L.33 were to approach along the more traditional eastern routes, while Mathy in L.31 and Peterson in L.32 came in on the less anticipated southern route over Kent and Surrey. The first to claim a successful bombing run over London was von Buttlar in L.30. 
He had previously filed false reports detailing raids on the capital in August and October 1915, and this appears to be another. It seems more likely that L.30 never crossed the coastline and dropped her bombs at sea. Kapitänleutnant Alois Böcker, having previously commanded L.5 and L.14, was now at the helm of the navy's latest Zeppelin, L.33, on her first raid. Böcker crossed the coast at Foulness at about 10.40pm and steered a familiar course. Fifty minutes later he passed Billericay and then, turning south over Brentwood, he flew close to Upminster and dropped four sighting incendiary bombs prior to releasing six explosive bombs close to 39 Squadron's Suttons Farm airfield at 11.50pm. Word of the approaching Zeppelin reached the airfields late and only two pilots got aloft at 11.30pm; both were still climbing when L.33 passed over and out of sight. Still undetected, Böcker dropped a parachute flare at 11.55pm south of Chadwell Heath as he attempted to determine his position. A searchlight caught L.33 briefly, but lost contact before any guns could engage. However, the flare does not seem to have aided Böcker in establishing his location, for after continuing to Wanstead he turned away from London. Then, at 12.06am, he changed direction again. Now heading south-west, he passed between the guns at Beckton and North Woolwich, before twisting to the north-west and steering towards West Ham. At 12.10am the West Ham gun opened fire as L.33 began unloading bombs on the unsuspecting streets of East London. The Black Swan public house in Bow Road, destroyed by one of L.33's bombs. The landlord, E.J. Reynolds, escaped injury, as did his wife, but the bomb killed his two adult daughters, his mother-in-law and a 1-year-old granddaughter. Böcker reported that his first bomb fell close to Tower Bridge, but he was actually approaching Bromley, just over 2 miles away, where a bomb on St Leonard's Street severely damaged four houses and killed six of the occupants. 
Steering westwards, he continued his bombing run but suddenly L.33 shuddered. As the first bombs fell, the guns at Victoria Park, Beckton and Wanstead opened up on L.33. The volume and accuracy of their fire shook the Zeppelin, even though it was flying close to 13,000ft. It was probably a shell from either Beckton or Wanstead that exploded close to L.33 at about 12.12am, smashing into one of the gas cells behind the forward engine gondola, while other shell splinters slashed their way through another four cells. Böcker immediately released water ballast in an attempt to gain height and turned back to the north-east, dropping bombs as he went. These caused serious damage to a Baptist chapel and a great number of houses in Botolph Road, while a direct hit on the Black Swan public house in Bow Road claimed four lives, including two of the landlord's children. Böcker steered away over the industrial buildings of Stratford Marsh, where his final bombs caused severe damage to a match factory and the depot of an oil company. A contemporary postcard showing L.33, viewed from outside London, being hit by an anti-aircraft shell while over Bromley-by-Bow. The original caption reads: 'A nasty jar for the Baby Killers'. The wounded Zeppelin passed over Buckhurst Hill at 12.19am before continuing towards Chelmsford. In spite of the frantic efforts of the crew to repair the shell damage, L.33 began to lose height. As she approached Kelvedon Hatch, flying at about 9,000ft, the searchlight picked her up and the gun there opened fire, possibly inflicting further damage. Having been in the air for almost an hour, 2nd Lt Alfred de Bathe Brandon spotted L.33 from some distance away as she bombed East London. Brandon, the same pilot who had come close to bringing down L.15 six months earlier, was unlucky this time too. 
As he closed with the target, his automatic petrol pump failed, requiring him to pump by hand while loading a drum of ammunition on his Lewis gun and controlling the aircraft at the same time. During his attack on L.15 six months earlier, he ended up with an incendiary bomb resting in his lap. This time, as he raised the gun it jerked out of its mounting and fell, coming to rest across the cockpit. By the time he fitted it back into position, while still pumping fuel, he realized he had flown under and past the Zeppelin. He turned to attack, but, approaching from the bow this time, the two aircraft closed so quickly that Brandon was unable to take aim before the target flashed past. Undeterred, he turned again and approached from the rear port side, firing a whole drum of mixed Brock, Pomeroy and Sparklet ammunition. Frustratingly, he saw the Brock rounds bursting all along the side of L.33 but without apparent effect. Loading another drum of ammunition, he turned again, but after firing just nine rounds his Lewis gun jammed. He then attempted to climb above her but lost her in the clouds. His pursuit and attack had lasted 20 minutes. On board L.33 the crew were making every effort to keep her aloft. Close to Chelmsford, the crew began throwing any removable objects overboard, including guns and ammunition, but this did not arrest the descent. Böcker hoped at least to get to the coast where he could sink his ship, but already close to the ground a gust of wind forced him down. Shortly after 1.15am, L.33 landed in a field close to the village of Little Wigborough. All 21 of the crew survived the landing; whereupon they set fire to L.33 before forming up and marching off down a country lane in a half-hearted attempt to reach the coast. 2nd Lt Alfred de Bathe Brandon. A New Zealander, Brandon qualified as a lawyer in England in 1906. 
When war broke out he left his father's legal firm in New Zealand, returned to England, learnt to fly and qualified as an RFC pilot in December 1915. (Colin Ablett) The glow of the fire attracted the attention of Special Constable Edgar Nicholas, who, cycling to the scene, discovered Böcker and his men marching towards him. In a somewhat surreal situation, Böcker asked Nicholas in English how far it was to Colchester. The constable told him, but recognizing a foreign accent decided to cycle along behind them, chatting with one of the crew. Another 'Special' and a police officer on leave appeared as the group approached the village of Peldon and the three men decided to escort the crew to the post office where they found Constable Charles Smith of the Essex Police. Smith formally arrested Böcker and the crew. He then received instructions to escort the prisoners towards Mersea Island; calling in another eight special constables, he set off with this strange group and led the crew of L.33 into captivity. A dramatic reconstruction of Brandon's attack on L.33.

L.32 – A SUPER ZEPPELIN DESTROYED

Meanwhile, Mathy in L.31 and Peterson in L.32 came in from the south. The experienced Mathy crossed the coastline over Rye at about 11.00pm and pursued a direct course for London, passing Tunbridge Wells at 11.35pm and Caterham at 12.15am. Ten minutes later, over Kenley, he released four high-explosive bombs. All fell in a line, probably intended as sighting bombs to allow the crew to judge their ground speed, but they still caused damage to three houses and injured two people. The Croydon searchlight located L.31 at about 12.30am but parachute flares, dropped to illuminate the ground below, effectively blinded the crew. About six minutes later lights picked up L.31 once more and the Croydon anti-aircraft gun opened fire, but another parachute flare caused the searchlight to lose contact again. 
Mathy flew on for just over 5 miles before dropping four bombs over open land near Mitcham. Then, over Streatham, he unleashed a murderous salvo of 17 explosive and 24 incendiary bombs. These fell between Streatham Common Station and Tierney Road, to the north of Streatham Hill Station; the blasts killed seven, including the driver, conductor and four passengers of a tram, and wounded another 27. From Streatham, L.31 followed the line of Brixton Hill, unloading another 16 bombs (eight explosive and eight incendiary) on Brixton, killing seven and injuring 17. Mathy dropped just one more bomb south of the Thames, which landed in Kennington Park, then ceased as he crossed the river near London Bridge. He flew right across the city, the prime target area, and only resumed bombing at 12.46am along the Lea Bridge Road, Leyton, seven and a half miles from Kennington, dropping ten explosive bombs. Another eight people died here with 31 injured; many houses and shops were damaged. It seems likely that Mathy had miscalculated his position, for in his report he claimed to have bombed Pimlico and Chelsea, then the City and Islington. Flying at 13,000ft (about two and a half miles), he may have mistaken Streatham/Brixton for Pimlico/Chelsea, with this confusion extended as he bombed Leyton. Despite this destruction, local anti-aircraft guns were unable to locate L.31; a mist had risen and smoke from fires in East London caused by L.33's bombs contributed to greatly reduced visibility. Mathy steered for home, passing close to Waltham Abbey at 1.00am before turning towards Harlow and following an unhindered course for Norfolk and the North Sea. The pursuit of L.33 across Essex drew attention away from L.31 – and then at about 1.10am, as Mathy passed Bishop's Stortford, a vast explosion filled the sky some 20 miles away to the south-east. Special Constable Edgar Nicholas. 
When one of the crew asked him in English if he thought the war was nearly over, Nicholas gave the now-clichéd reply, 'It's over for you anyway'. (Colin Ablett) Oberleutnant-zur-See Werner Peterson, commanding L.32, had approached the Kent coast with Mathy, but encountered engine problems. While L.31 headed inland, L.32 remained circling slowly over the coast for about an hour. Finally, at about 11.45pm, Peterson steered inland and was observed from Tunbridge Wells at 12.10am. About 20 minutes later L.32 dropped a single incendiary over Ide Hill near Sevenoaks. However, as Peterson approached Swanley a searchlight illuminated his ship. To distract the light he dropped seven explosive bombs, which only succeeded in smashing a number of windows in the town. A light mist south of the Thames, however, helped shroud his movements and no gun engaged him as, flying at about 13,000ft, he moved away and crossed the river east of Purfleet at about 1.00am. North of the Thames the mist cleared, and, as L.32 flew on, the searchlights at Beacon Hill and Belhus Park caught her almost immediately. The guns at Tunnel Farm were first to open fire at 1.03am, then, as L.32 dropped five explosive and six incendiary bombs on the village of Aveley, the guns at Belhus Park engaged her too. At 1.08am, over South Ockendon, Peterson released ten explosive and 19 incendiary bombs, but the damage they caused was slight. At about the same time another five searchlights locked on and more guns opened fire. Inevitably, all this action in the sky attracted the attention of the pilots of No. 39 Squadron. The London Fire Brigade report on the bomb dropped by L.31 on Estreham Road, by Streatham Common Station, states that three houses were demolished, one partly demolished and one severely damaged. A 74-year-old woman was killed, with nine adults and five children injured. The looming skeleton of L.33 by New Hall Cottages, Little Wigborough, Essex. 
There was so little hydrogen left in the gas cells that the attempt to destroy the ship by fire left the framework almost intact, providing a useful source of information for the British authorities. Still in the air following his unsuccessful attack on L.33, 2nd Lt Brandon caught sight of L.32 held in the searchlights and turned towards her. Second-Lieutenant J.I. MacKay had taken off from North Weald later than Brandon, and was about 35 minutes into his patrol when he also saw L.32. However, as they both homed in, a blinding light filled the sky as the raider exploded in a roaring inferno. Second-Lieutenant Frederick Sowrey took off in his BE2c from Suttons Farm at 11.30pm, with orders to patrol between there and Joyce Green. At about 12.45am and from a height of 13,000ft he observed a Zeppelin south of the Thames. Sowrey turned towards it and gradually closed, as bombs fell on Aveley and South Ockendon. As Sowrey swept in, searchlights still held L.32, but the guns were no longer firing. The bomb that landed on Baytree Road, Brixton, demolished No. 19 and partly demolished those on either side. The house was the home of music hall artist Jack Lorimer. The bomb killed his housekeeper-cum-nanny and the youngest of his three sons. Rescuers pulled the two other sons from the wreckage; one, 8-year-old Maxwell Lorimer, went on to earn fame as the entertainer Max Wall. Oblt-z-S Werner Peterson. Peterson took command of L.32 on 7 August 1916, having previously commanded L.7, L.12 and L.16. On L.32's first raid on 24/25 August, she got no further than the Kent coast; on 2/3 September Peterson bombed Ware in Hertfordshire. At 1.10am, as Peterson was heading for home, Sowrey appeared out of the darkness, positioned himself below L.32 and, throttling down his engine to keep pace with the airship, opened fire with his Lewis gun. A whole drum of mixed bullets sprayed along the underside of the vast airship with no effect. 
As he turned to reposition himself for a second attack the machine guns on L.32 spat out their bullets in response. Undeterred, Sowrey slid back into a position beneath her and fired off a second drum of ammunition, traversing the belly of the craft, but again his bullets failed to set her alight. Second-Lieutenant Brandon, who had been closing on L.32, wrote in his report that he 'could see the Brock bullets bursting. It looked as if the Zepp was being hosed with a stream of fire.' MacKay closed in too. He saw Sowrey empty his first two drums and, although he was at long range, fired a few shots himself. Then he saw Sowrey fire a third drum. This time Sowrey concentrated his fire in one area and a fire took hold inside, possibly caused by a burning petrol tank, for bullet holes riddled one of the tanks recovered from the wreckage. Flames swiftly spread throughout the airship, bursting through the outer envelope in several places. An eyewitness recalled that 'the flames crept along the back of the Zeppelin, which appeared to light up in sections... until it was burning from end to end.' Then, as at Cuffley three weeks earlier, 'the people cheered, sirens started screeching, factory whistles commenced to blow, and in a moment all was pandemonium.' L.32 sagged in the middle, forming a V-shape before plummeting to earth in an incandescent mass. A contemporary postcard originally captioned 'Hot Stuff' showing L.32 in flames over Essex after the attack by Frederick Sowrey in his BE2c. Another eyewitness described the demise of L.32 as it fell: 'Those few moments afforded a wonderful spectacle. Flames were bursting out from the sides and behind, and, as the gasbag continued to fall, there trailed away long tongues of flame, which became more and more fantastic as the falling monster gained impetus.' The burning wreckage finally crashed to earth at Snail's Hall Farm in Great Burstead, just south of Billericay, Essex. 
As with the Cuffley wreck, thousands made a pilgrimage to see it. The bodies of the crew were collected together in a nearby barn – many horribly burnt. Oberleutnant Peterson, however, had jumped to his death. Back in Germany there could be no hiding from the fact that this was a serious setback for the Naval Airship Division. With two of his new Super Zeppelins lost, Strasser ordered another raid the following day, but stressed that his commanders should exercise caution if the sky over Britain was clear. Only two, L.30 and L.31, headed for London, but a cloudless sky spelt danger and they unsuccessfully sought other targets. For the next few days bad weather kept the airship crews at home. 2nd Lt Frederick Sowrey. Aged 23, Sowrey had received a commission in the Royal Fusiliers and was wounded at the Battle of Loos in 1915. Having recovered, he transferred to the RFC and was eventually posted to No. 39 Squadron on 17 June 1916.

L.31 AND THE DEATH OF HEINRICH MATHY

The next test came on 1 October 1916. Eleven naval Zeppelins received orders to attack Britain that night, with targets specified as London and the Midlands. Strong winds and thick cloud over the North Sea prevented four airships from passing inland. Once over Britain the remaining raiders encountered cloud, rain, snow, ice, hail and mist, and this seriously hampered navigation. Only one, Mathy's L.31, approached London. The wreckage of L.32 at Snail's Hall Farm, Great Burstead, near Billericay, Essex. There were no survivors. A medical officer reported that all but three of the bodies of the crew were 'very much burned... Several had their hands and feet burned off, nearly all had broken limbs.' L.31 came in over the Suffolk coast near Lowestoft at about 8.00pm and followed a course for London. As he approached Chelmsford at about 9.45pm, the searchlight at Kelvedon Hatch locked on to Mathy's ship. 
Turning away to the north-west, he made a wide sweeping detour, passing Harlow, Stevenage and Hatfield before turning east. At about 11.10pm, as he closed on Hertford, he silenced his engines and drifted silently with the wind towards Ware, presumably hoping to avoid the attention of London's northern defences. However, about 20 minutes later as he approached Cheshunt with the engines back on, the Enfield Lock searchlight picked up L.31, quickly attracting another five beams. At 11.38pm the Newmans anti-aircraft gun opened fire, followed a minute later by the Temple House gun. The decision to jump or burn was never far from the mind of any Zeppelin crew as the war progressed. Peterson chose to jump to his death. The original caption suggests the impact of Peterson's body left this impression in the ground. This sudden outbreak of gunfire attracted the ever-alert pilots of No. 39 Squadron. Orders to commence patrolling only arrived a little before 10.00pm and the first three pilots were still climbing to operational height as L.31 approached and passed their patrol lines unseen. As the lights caught the beleaguered airship, lieutenants MacKay, Payne and Tempest all turned towards it. Another pilot, P. McGuinness, who had taken off at 11.25pm to patrol a line from North Weald to Hendon, also saw L.31 in the searchlights about 20 minutes later and joined the chase. But it was 2nd Lt Wulstan Tempest who caught her first. A contemporary postcard showing a Zeppelin held by searchlights. The scene would have been similar when the searchlights caught L.31 over Cheshunt as the guns at Newmans and Temple House opened fire. TO BURN OR TO JUMP – THE DEATH OF HEINRICH MATHY On the night of 1 October 1916, 2nd Lt Wulstan Tempest, in a BE2c, attacked and shot down Zeppelin L.31, commanded by Kptlt Heinrich Mathy, the most revered of all the Zeppelin commanders. 
Since the introduction of explosive and incendiary bullets for use with the Lewis gun, the advantage in aerial combat had swung to the pilots of the RFC and RNAS. With the realization that they were now extremely vulnerable, German aircrew began to dwell on the last great question – if your Zeppelin is on fire and there is no hope of survival, do you jump to your death or burn in the wreckage? When Heinrich Mathy was asked this question he replied, 'I won't know until it happens.' Elsewhere, one of his crew confessed that the old cheerfulness had disappeared: 'We discuss our heavy losses... Our nerves are on edge, and even the most energetic and determined cannot shake off the gloomy atmosphere... It is only a question of time before we join the rest. Everyone admits that they feel it... If anyone should say that he was not haunted by visions of burning airships, then he would be a braggart.' When the time came, with the fire spreading rapidly through the gas cells contained within the outer envelope of L.31, Mathy chose to jump to his death. 2nd Lt Wulstan Tempest. Aged 25, Tempest joined the army in November 1914 and, as an officer in the King's Own Yorkshire Light Infantry, was wounded at Second Ypres in May 1915. In June 1916 he qualified as an RFC pilot and joined B Flight, No. 39 Squadron at Suttons Farm. Tempest estimated he was about 15 miles away when the guns opened fire. As he closed to within 5 miles of the target, Tempest realized he was at a height well above the Zeppelin and found the anti-aircraft shells bursting uncomfortably close to his BE2c. Then, as he passed through one of the searchlights, someone on board L.31 saw his approach and Mathy immediately ordered the release of 24 explosive and 26 incendiary bombs to lighten the ship and gain height. The bombs landed on Cheshunt, seriously damaging four houses and breaking windows and doors in 343 others as well as destroying 40 horticultural glasshouses. 
Fortunately for the residents of the town, casualties were restricted to one woman slightly cut by flying glass. L.31 zigzagged away westwards, rapidly gaining height. As Tempest closed to launch his attack, his pressure petrol pump failed and he had to hand-pump furiously to press home the attack before L.31 could climb out of reach. He recorded with relief that he was now beyond the range of the anti-aircraft guns – the last gun ceased firing at 11.50pm. Flying straight at the oncoming airship, Tempest flew under its belly, firing off a short burst of mixed Pomeroy, Buckingham and standard ammunition with no effect. He quickly turned until he was flying underneath in the same direction as L.31 and gave her another burst, but again there was no result other than to draw machine-gun fire from the crew. He banked and then sat under her tail, from where the machine guns were unable to reach him. Although he had almost begun to despair of bringing her down, he attacked again and, in his words, 'pumped lead into her for all I was worth.' The tangle of wreckage that was once Heinrich Mathy's L.31. The burning ship broke up and fell in two main sections a few hundred feet apart. This section, guarded by a cordon of soldiers, lies impaled on an oak tree in a misty Oakmere Park, Potters Bar. As this third burst penetrated the outer skin of L.31, Tempest recalled: 'I noticed her begin to go red inside like an enormous Chinese lantern and then a flame shot out of the front part of her and I realized she was on fire.' L.31 shot about 200ft up in the air then began to fall, 'roaring like a furnace', directly towards Tempest who dived as hard as he could to get out of the way of the burning wreck. Tempest returned to Hainault Farm but, feeling 'sick, giddy and exhausted', wrecked his own aircraft on landing, although he suffered only minor injuries. 
L.31 crashed to earth at Potters Bar in Hertfordshire, only a few miles from the scene of William Leefe Robinson's victory at Cuffley. All 19 of the crew of L.31 perished. Many, including Heinrich Mathy – the most respected and successful of all the Zeppelin commanders – jumped to their deaths rather than be burnt alive. This had been his 15th raid over England and his death struck like a dagger to the heart of the Naval Airship Division. In a letter to Mathy's widow, Peter Strasser described his fallen comrade as a man of 'daring, of tireless energy... and at the same time a cheerful, helpful and true comrade and friend, high in the estimation of his superiors, his equals and his subordinates.' ZEPPELIN LOSSES MOUNT Zeppelins did not return to England again until the night of 27/28 November 1916, when, avoiding London, they selected targets in the industrial Midlands and the North. Yet the result was much the same. L.34, commanded by Kptlt Max Dietrich, was shot down in flames by 2nd Lt I.V. Pyott of No. 36 Squadron, based at Seaton Carew, and crashed into the sea near the mouth of the River Tees. Further south, about 10 miles off the coast from Lowestoft, the joint efforts of two aircraft piloted by Flt Lt Egbert Cadbury and Flight Sub-Lt Edward Pulling, RNAS, shot down L.21 commanded by Kptlt Kurt Frankenburg. There were no survivors from either wreck. It was this Zeppelin that the authorities had already claimed was shot down at Cuffley. On that same day – 28 November – largely unnoticed, a single German LVG C.IV aircraft appeared over London in broad daylight, dropping six 10kg bombs between Brompton Road and Victoria. The bombs injured ten. It was a sign of things to come, although few realized it at the time. An already bad end to 1916 got even worse when the Naval Airship Division lost three more airships on 28 December. First, SL.12, although damaged, survived a bad landing at the Ahlhorn base, but strong overnight winds destroyed her. 
Then, at Tondern, an equipment failure caused the ground crew to lose control of L.24 as she came in to land, whereupon she smashed against the shed and burst into flames, which also engulfed the neighbouring L.17. Although the army had lost faith in the airship's ability to carry the war to Britain, Strasser, driven by his unshakeable belief and now appointed Führer der Luftschiffe, remained positive. He insisted on new airships of improved performance for the navy, demanding a greater ceiling to enable the airships to operate above the range of the now-lethal British aircraft. Indeed, while the overall material effect of the airship raids was limited, by the end of 1916 they had resulted in the commitment of some 17,000 British servicemen to home defence duty. The army, meanwhile, began to look again at bomber aircraft as a means of striking more effectively against London. One of many postcards produced to supply public demand for souvenirs to commemorate the destruction of four airships between 3 September and 1 October 1916. This one was titled, 'The End of the Baby-Killer'. THE 1917 ZEPPELIN RAIDS THE ARRIVAL OF THE 'HEIGHT CLIMBERS' The first of the new S-Class Zeppelins (in the form of L.42) entered naval service on 28 February 1917. She had an operational ceiling of 16,500ft and the ability to climb to about 21,000ft (4 miles high), way beyond the reach of the anti-aircraft guns and aircraft allocated to home defence. However, to attain these great heights the new models traded power for altitude, fitting five engines instead of six. With existing Super Zeppelins also altered to fulfil these new requirements, the British dubbed this new class of airship the 'Height Climbers'. The first raid of 1917 took place on the night of 16/17 March, with London as its target. 
The force, made up of L.42 and four converted Super Zeppelins, encountered fierce 45mph winds from the north-west that blew them south and none penetrated further inland than Ashford in Kent. On the night of 23/24 May, six Height Climbers targeted London again. Adverse winds at high altitude disrupted the raid and no airships reached the city. The closest, L.42, turned back over Braintree in Essex, some 40 miles away. All the crews suffered badly from the intense cold and experienced the debilitating effects of altitude sickness encountered at these great heights. Two days after this raid Germany launched its first major aeroplane raid on London, with 21 twin-engine Gotha bombers crossing the Essex coastline in daylight. Only a heavy cloud build-up over the capital prevented them from reaching London, but it marked a dramatic change in the air war over the city. Having studied the report of the 23/24 May Zeppelin raid, the Kaiser voiced the opinion that 'the day of the airship is past for attacks on London.' However, strong representations from the naval authorities persuaded him to approve their continuation, but only 'when the circumstances seem favourable.' Strasser decided they were favourable on 16/17 June 1917. Three days earlier the Army's Gotha bombers had made a highly destructive raid on London in broad daylight. L.48 – DEATH THROES Strong winds and engine problems prevented all but two of the six Zeppelins detailed for the 16/17 June raid from reaching England. These winds held L.42 over Kent, where she bombed Ramsgate before heading for home; one of her bombs struck lucky, hitting a naval ammunition store. The other raider, L.48, commanded by Kptlt Franz Eichler, but with Kvtkpt Viktor Schütze (the new commander of the Naval Airship Division since Strasser's appointment as Führer der Luftschiffe) on board, experienced serious engine problems and her compass froze. 
Unable to reach London, L.48 attempted to bomb Harwich naval base then turned north, dropping to 13,000ft to take advantage of tailwinds to compensate for the lack of engine power. At this height, and in a lightening summer sky, the air defences easily located L.48. Three aircraft from Orfordness Experimental Station, as well as a BE12 of No. 37 Squadron, all saw L.48 heading towards the coast and gave chase. Three of the aircraft scored hits as they swarmed around the lone airship. Minutes later L.48, the most recent addition to the navy's airship fleet, commissioned only 26 days earlier, crashed in flames in a field at Holly Tree Farm, Theberton, Suffolk. Miraculously, three members of the crew survived the crash, but Viktor Schütze was not one of them. L.48, one of the new type dubbed 'Height Climbers' by the British, entered service on 23 May 1917, based at Nordholz. To hinder searchlights, the undersides were painted black. Commanded by Kptlt Franz Eichler, the raid of 16/17 June was her first time over England. (IWM, Q.58467) RNAS personnel sifting through the wreckage of L.48 at Holly Tree Farm, Theberton, Suffolk in 1917. The site again came under close scrutiny between April and June 2006 when an archaeological dig of the crash site took place. The Naval Airship Division never directly targeted London again. However, in one of its most disastrous raids of the war for the Division, when high winds played havoc with the raiders, the last bombs dropped on London from a Zeppelin struck the capital on the night of 19/20 October 1917. THE SILENT RAID The naval airships had already undertaken raids against northern England on 21/22 August and the Midlands and the North again on 24/25 September 1917 without great success. During this period, the twin-engine Gotha bombers were now attacking London, joined at the end of September by the massive Staaken 'Giants', designed by the Zeppelin Company. 
Then, on 19 October, 13 airships set out to attack targets in industrial northern cities such as Sheffield, Manchester and Liverpool; it was the last large-scale airship raid of the war. Two vessels failed to take off, while the other 11 encountered vicious headwinds once they had climbed over 16,000ft. The high winds battered the airships off course and reduced their ground speed to a crawl, making it almost impossible for the commanders to ascertain their positions. Kapitänleutnant Waldemar Kölle, commanding L.45, aimed for Sheffield but found himself moving rapidly southwards and reported that 'precise orientation from the ground was impossible... no fixed points could be discerned.' He dropped a number of bombs that fell on Northampton. Then, just before 11.30pm, the crew became aware of a large concentration of dim lights extending before them for some distance. Kölle's second-in-command, Lt Schütz, shouted 'London!' and for the first time Kölle realized how far off course L.45 had travelled. Wasting no time, he immediately released a number of bombs that fell in north-west London, causing damage to the Grahame-White Aviation Company at Hendon Aerodrome and on cottages nearby. Hendon experienced more damage before L.45, continuing on a south-east course, dropped two explosive bombs near Cricklewood Station. The great height of L.45, coupled with a thin veil of cloud, meant Kölle's progress towards the centre of the capital remained unseen and unheard by those on the ground. Therefore no guns opened fire, and the attack became known as the 'silent raid'. One of the crew described their experience over London: The Thames we just dimly saw from the outline of the lights; two great railway stations I thought I saw, but the speed of the ship running almost before the gale was such that we could not distinguish much. We were half frozen, too, and the excitement was great. It was all over in a flash. 
The last bomb was gone and we were once more over the darkness and rushing onwards. Members of the crew of L.45, the last to bomb London. After she dropped her bombs on London, strong winds carried her over France and, with only two engines still working, her commander, Kptlt Waldemar Kölle, brought her down near Sisteron, where the crew surrendered. In fact, these randomly dropped bombs proved devastating. The first fell without warning in Piccadilly, close to Piccadilly Circus in the heart of London's West End. The massive 300kg bomb blasted a hole in the road about 12ft in diameter, fracturing two gas mains and pipes carrying telephone cables. The blast smashed the whole of the front of the fashionable department store Swan & Edgar's, with damage extending into Regent Street, Jermyn Street and Shaftesbury Avenue amongst others. Many people were in the streets, unaware of the impending danger, and were caught in the explosion. Flying shrapnel, debris and glass scythed down 25, of whom seven died, including three soldiers on leave. One woman was so disfigured by the blast that she was eventually identified only by her clothes and jewellery. L.45 careered on. The next bomb fell in Camberwell, on the corner of Albany Road and Calmington Road, demolishing three homes as well as a doctor's surgery and a fish and chip shop. Many other buildings were seriously damaged. The blast killed ten, including four children; another ten children were amongst the 24 injured. The final bomb dropped by L.45 landed in Glenview Road, Hither Green, demolishing three houses and inflicting less serious damage on other houses in the surrounding roads, but it claimed a high cost in human life. The bomb killed another ten children – seven of these from one family – and five women, while six people needed treatment for their injuries. However, for the crew of L.45, their rather precarious position was just about to get worse. 
Having dropped to 15,000ft to get below the fierce winds at high altitude, Kölle managed to make some headway eastwards. However, near Chatham shortly after midnight, L.45 encountered 2nd Lt Thomas Pritchard of No. 39 Squadron, flying his BE2e from North Weald. Only able to get his aircraft up to 13,000ft, Pritchard fired at L.45 anyway. He missed the target, but Kölle climbed rapidly to escape the pursuer and was caught again in the gales; once more, the wind swept L.45 southwards. One engine then broke down, and in the intense cold it proved impossible to repair. One member of the crew retired with frostbite while many others suffered altitude sickness. The winds drove L.45 across France where she lost another two engines and was fortunate to survive an encounter with French anti-aircraft guns. With fuel almost exhausted and only two engines still working, L.45 had no chance of getting back to Germany, and so Kölle brought her down in a riverbed near Sisteron in southern France. The crew set fire to their ship and surrendered to a group of French soldiers. Although L.45 dropped only a few bombs on the London area, their effect was devastating. The 300kg bomb that fell in Camberwell demolished 101 and 103 Albany Road and 1 Calmington Road, and severely damaged a great number of others in the area. The bomb killed ten, including Emma, Alice, Stephen and Emily Glass, and injured 23. Other ships of the attacking force suffered similar fates. L.44 came down in flames, destroyed by anti-aircraft guns while attempting to cross the front-line trenches in an effort to get back to Germany. Kapitänleutnant Hans-Karl Gayer brought L.49 down in a wood in France, where soldiers captured her before the crew could destroy her. Having lost two engines, the commander of L.50, Kptlt Roderich Schwonder, attempted to ground his ship, but a rough landing tore off the forward control gondola before she took back to the air. 
Most of the crew leapt to safety but the wind carried L.50 away and she was last seen drifting over the Mediterranean with four men still on board. Of the seven airships that did limp back to Germany, one, L.55, sustained serious damage during a forced landing and had to be dismantled. In a disastrous raid the Naval Airship Division lost five Zeppelins. But if the gale-force winds had not taken a hand, it might have been one of the most successful of the war, for some 78 British aircraft took to the skies in defensive sorties but not one was able to climb high enough to engage the attacking force. THE END OF THE ZEPPELIN WAR Even the overly confident Strasser saw the disastrous outcome of the 19/20 October raid as a major setback. However, the following month engines became available – designed specifically to combat the strong winds encountered at high altitude – and Strasser's confidence returned. All new airships were to be equipped with the new engine and existing vessels re-fitted. However, before the re-equipped fleet could even contemplate returning to Britain, another disaster struck. On 5 January 1918 a fire broke out at the Ahlhorn airship base – the headquarters of the Naval Airship Division – in one of the massive sheds housing L.47 and L.51. In the great conflagration that followed, the flames engulfed four Zeppelins and one Schütte-Lanz, along with four of the all-important double-sheds – effectively putting the base out of service. Strasser launched only four raids against Britain in 1918, the last year of the war. None of these attempted to target London, choosing instead targets in the Midlands and northern England. The final airship raid took place on 5 August 1918. Led by Strasser in person aboard the navy's latest Zeppelin, L.70, five airships approached the Norfolk coast. Caught at only 13,000ft, L.70 was pounced on by two aircraft of the new amalgamated Royal Air Force. Moments later she 'plunged seaward a blazing mass.' 
Strasser, the life and soul of the Naval Airship Division, and the driving force behind the airship raids on Britain, died in action with the rest of the crew. At the start of the war, both in Germany and in Britain, belief in the threat posed to London by the German airships was great. In the early months of the war, London lay exposed, with only a limited defensive capability, but the airship fleet was not in a position to exploit this weakness. From May 1915 to the end of the year, the airship raids on London faced little significant opposition but gradually the defences of the city improved. During 1916, the network of searchlights, anti-aircraft guns and observation posts increased dramatically while the escalation in aircraft production further strengthened the defence. With the defending aircraft now organized into home defence squadrons flown by night-flying-trained pilots, and with the introduction of explosive and incendiary bullets, from September 1916 the advantage swung dramatically away from the airships. That they kept flying over Britain after this change of circumstances says much for the courage of their officers and crews. Although the Zeppelins had failed to achieve the goals set for them, the effect on morale of bombing London remained a great prize for Germany. Despite its losses, the Navy remained committed to the development of airships to counter Britain's improved defences almost to the end of the war, but the army, disillusioned, from September 1916 turned its attention to the potential offered by aeroplanes to carry an effective bomb load to London. It was a change of direction that signalled an escalation in London's first Blitz. THE COMING OF THE BOMBERS The shelving of Wilhelm Siegert's 1914 plan for aeroplane raids on London saw the activities of the squadron he created refocused on objectives closer at hand, bombing targets behind the Allied lines, until the spring of 1915 when it briefly redeployed to the Eastern Front before returning in July 1915. 
In December 1915 the squadron became Kampfgeschwader 1 der OHL – Battle Squadron 1 of the Army High Command – generally abbreviated to Kagohl 1. For the next eight months Kagohl 1 flew bombing missions, reconnaissance patrols and escort duties over Verdun and later the Somme until, in August 1916, its six _Kampfstaffeln_ (flights) – abbreviated to _Kasta_ – were split into two separate _Halbgeschwader_ (half squadrons). Halbgeschwader 1 remained on the Somme while Halbgeschwader 2 redeployed to the Balkans. A reorganization of Germany's army air service in late 1916 saw the appointment of Ernst von Hoeppner as its supreme commander. At the same time, the army's doubts about the ability of its airships to carry the war to London were confirmed by the loss of SL.11 in early September. But a new weapon was now available, one that allowed Hoeppner to confidently resurrect Siegert's 1914 plan for an aeroplane bombing campaign against London – the G-type bomber. He formed the unit selected to carry out this mission, Kampfgeschwader 3 der OHL (Kagohl 3), on a nucleus of Halbgeschwader 1. At just this time, as Germany was planning this new means of striking at the morale of the British population, Britain, convinced that the menace of the Zeppelin raids was largely over, began reducing its home defences to support the growing demands for manpower on the Western Front and in other theatres. A few months later, largely unopposed, German bombers were flying over the streets of London in broad daylight, trailing death and destruction in their wake. THE BOMBER RAIDS – THE MEN THAT MATTERED **Generalleutnant Ernst Wilhelm von Hoeppner** Born in January 1860, Ernst Wilhelm von Hoeppner joined the army as a junior officer in a dragoon regiment in 1879 at the beginning of an impressive military career that saw him hold a number of high-profile regimental, field and staff commands. 
At the outbreak of war in 1914 Hoeppner was chief of the general staff of III Armee, and over the next two years he held various other senior field and staff commands. Then the OHL, having emerged battered from the maelstrom of the Verdun and Somme campaigns, decided that the _Fliegertruppen_ – the army aviation arm – needed re-forming under a general officer with command over all aspects of army aviation. The outcome was the creation of the Luftstreitkräfte and, on the recommendation of Erich Ludendorff, Generalquartiermeister of the German Army, Hoeppner, with no aviation background, was appointed Kommandierender General der Luftstreitkräfte – conveniently abbreviated to Kogenluft – on 12 November 1916. Kogenluft Ernst Wilhelm von Hoeppner (left) meeting one of his airmen. His decision to end the Army Zeppelin raids on London opened the way for the launch of Operation _Türkenkreuz_, the Gotha bomber raids on the city. Warming to his new role, Hoeppner issued a memorandum shortly after his appointment. It stated that he considered airship raids on London no longer viable and as such he planned to open bombing raids against the city with aeroplanes as soon as possible. The aims of such raids were to strike at the morale of the British population, to disrupt war industry and communications, and to impede the cross-Channel supply routes. He stated – a little optimistically – that the G-type bomber aircraft were ready and soon the massive R-type would join them. However, he ended his memorandum on a cautionary note. He stated that raids by the G-type aircraft '... can only succeed provided every detail is carefully prepared, the crews are practised in long-distance overseas flight and the squadron is made up of especially good aeroplane crews. Any negligence and undue haste will only entail heavy losses for us, and defeat our ends.' The man given the task of carrying out these orders was Ernst Brandenburg. Hauptmann Ernst Brandenburg. 
Personally selected for the task by Hoeppner, he became commander of Kagohl 3, the _Englandgeschwader_, in March 1917. Brandenburg's calm and calculating manner made him an ideal choice. (David Marks) **Hauptmann Ernst Brandenburg** Born in West Prussia in June 1883, Brandenburg joined the infantry as a young man, becoming a Leutnant in 1908. Three years later he attended an aviation training course before returning to his regiment, 6. Westpreussischen Infanterie-Regiment Nr. 149, with whom he went to war in 1914. Promoted to Hauptmann in November 1914, Brandenburg received a severe wound the following year while serving in the trenches. After his recovery, in November 1915, like so many other soldiers no longer fit to return to the front line, he joined the army's air service. He adapted well to his new role as an observer, flying in two-seater aircraft over the front line, and his abilities as an organizer and administrator shone through, quickly bringing him to the attention of his superiors. Following his appointment as Kogenluft in late 1916, Hoeppner personally selected Brandenburg to command the squadron destined to lead the strategic air campaign against London – Kagohl 3. He took up his new command on 5 March 1917, aged 33, and started with a blank piece of paper; there were no guidelines. Brandenburg created an intensive training programme for the crews that would form his squadron: he sent them to learn the skills needed for navigation over large expanses of open sea, while they also absorbed the technicalities of formation flying, a tactic considered necessary for the defensive strength of the raiding squadron over hostile territory. Brandenburg also insisted that all aircraft allocated to his squadron were test-flown for at least 25 hours and that his crews all carried out 20 landings, half in daylight and half after dark. 
Finally, in May 1917, Brandenburg was ready to lead his squadron – now unofficially known as the _Englandgeschwader_ (England squadron) – into battle. **Hauptmann Richard von Bentivegni** Born in Rendsburg in Schleswig-Holstein in August 1889, Bentivegni joined the army in March 1905, in 8. Thüringisches Infanterie-Regiment Nr. 153. In August 1906, two days before his 17th birthday, he became a Leutnant and remained so until he volunteered to join the _Schutztruppe_ in German East Africa in 1911. He returned to his regiment at the beginning of August 1914 when he became a company commander, then, in November 1914, while serving on the Western Front, he received promotion to Oberleutnant. However, in September 1915 he transferred to Flieger Ersatz Abteilung Nr. 9 at Darmstadt where he trained as an airman. He completed training in December 1915 and moved to Armeeflugpark Nr. 13 where he awaited an active appointment. Then, in January 1916 he joined Feldflieger Abteilung Nr. 28 at the front. Two months later he was promoted to Hauptmann and then, in September 1916, transferred to the Riesenflugzeug Abteilung to train on the giant R-type aircraft before joining Riesenflugzeug Abteilung (Rfa) 501 on the Eastern Front in October 1916, becoming commander of the squadron the following month. In July 1917 Rfa 501 relocated to Berlin where it trained on the new Staaken R.VI 'Giant' before arriving in Belgium in September 1917, when Bentivegni prepared to join the _Englandgeschwader_ in the air assault on London. Maj Gen Edward Ashmore was a very single-minded and determined character. When taking flying lessons in 1912 he would arrive at the airfield at dawn, push his way to the front of the queue and stay up in the air over his allotted time, before dashing to the War Office to start work at the normal time. **Major-General Edward Ashmore** Edward Bailey Ashmore, born in London in 1872, joined the Royal Artillery in 1891, having passed through the Royal Military Academy. 
Having seen action in the Anglo-Boer War, Ashmore attended Staff College before joining the general staff in 1908. An interest in aviation saw him take flying lessons in 1912 and, after passing the course at the RFC's Central Flying School, he joined the reserve of the RFC in January 1913. As a staff officer with the RFC when war broke out the following year, Ashmore held a home administrative posting before taking command of 5th Wing based at Gosport in April 1915. Four months later he found himself in France in command of 1st Wing, followed by command of 1st Brigade, RFC, then later 4th Brigade, during the Somme campaign. At the end of 1916 Ashmore returned to his roots, transferring back to the Royal Artillery. Following the poor showing offered up by the home defences during the daylight raids on London in the summer of 1917, an official review took steps towards revitalizing London's defences. One of its recommendations called for 'a senior officer of first-rate ability and practical air experience' to command the whole defence of London: aircraft, anti-aircraft guns, searchlights and observation posts. Ashmore, described as 'a brilliant combination of airman and artillery officer' fitted the bill perfectly. Recalled from Flanders, where he commanded the artillery of 29th Division, he became commander of the newly created London Air Defence Area (LADA) on 5 August 1917. He later commented sardonically, 'The fact that I was exchanging the comparative safety of the Front for the probability of being hanged in the streets of London did not worry me.' **Lieutenant-Colonel Thomas Charles Reginald Higgins** Thomas Higgins, born in Buckinghamshire in July 1880, attended Dartmouth Naval College before joining HMS _Camperdown_ as a midshipman in 1897. However, in 1900 Higgins transferred to the Army, serving as a lieutenant in the King's Own Royal Regiment in the Anglo-Boer War. 
He went on to serve in Nigeria with the West African Frontier Force, 1904–13, during which time, in 1911, he was one of an early batch of army officers to gain his flying certificate. Higgins applied to join the RFC shortly after its formation, but with the officer complement full, he found himself fighting in France until wounded early in the war. In 1915 Higgins did transfer to the RFC, quickly becoming a flight commander before his appointment as commander of the newly created No. 19 Reserve Aeroplane Squadron in England in February 1916, with responsibility for all the widely distributed aircraft committed to the defence of London. In April 1916 the squadron was renamed No. 39 (Home Defence) Squadron, which he commanded until June 1916 when he became Inspector of Home Defence. Then, in February 1917, with the rank of lieutenant-colonel, Higgins took command of Home Defence Wing which became Home Defence Group (11 squadrons and one depot squadron) in March 1917. Later, as the Home Defence organization expanded in response to the German bomber offensive, the group reorganized as Home Defence Brigade (14 squadrons and one depot squadron) and eventually became VI Brigade in October 1917.

**Lieutenant-Colonel Maximilian St Leger Simon**

Simon, the son of a physician/surgeon, was born in Malacca, Straits Settlement (now Malaysia), in 1876 and entered the Royal Military Academy in 1893. Two years later he received a commission in the Royal Engineers where he specialized in submarine mining and studied coastal searchlights. He served in Singapore, England and Canada before returning to England in 1910. The following year Simon became a staff officer at the War Office, where he remained until he received a brevet lieutenant-colonelcy in late 1915 and headed for France with the 197th (Land Drainage) Company, RE.
Then, in February 1916, when the War Office took over responsibility for London's defence from the Admiralty, they recalled Simon and placed him in a position to supervise the construction of gun and searchlight positions around the city. Later, in December 1916, he became Anti-Aircraft Defence Commander, London.

A British aerial reconnaissance photograph of the Kagohl 3 airfield at Sint-Denijs-Westrem, the home airfield of Kasta 13 and 14. Later, in September 1917, the 'Giants' of Rfa 501 shared the airfield.

A view of a Gotha pilot's cockpit and the forward gun position occupied by the aircraft commander. The passage to the right of the pilot allowed the commander to move within the aircraft.

PREPARING FOR THE BOMBER BLITZ

GERMAN PLANS

With the appointment of Hoeppner as Kogenluft in November 1916, the plan to commence an aeroplane bombing campaign began to take shape, based on the new G-type _Grosskampfflugzeug_ (large battle aeroplane) series. In September 1916 the Gothaer Waggonfabrik AG, formerly a builder of railway carriages, received approval for production to commence on their latest aircraft design, the G.IV. Developed from the earlier G.II and G.III, the G.IV, generally known as the Gotha, was the aircraft the army had been waiting for. Powered by two 260hp Mercedes engines, the Gotha G.IV could maintain a speed of 80mph in favourable conditions with an impressive ceiling of around 18,000ft. Despite its uncomfortable open cockpits and 78ft wingspan, it flew well, was manoeuvrable and could carry a bomb load of between 300 and 400kg and two or three 7.92mm Parabellum machine guns for defence. Most importantly, it had the range to reach London and return to bases in occupied Belgium. Its weak point was its instability when coming in to land without the ballast of bombs and fuel. The three-man crew consisted of the commander, pilot and rear-gunner. The commander, an officer, occupied the front nose position.
He was responsible for navigation and acted as observer, bomb-aimer and front-gunner. The pilot could be either an officer or senior NCO, while the rear-gunner was often a junior NCO. An innovation on the G.IV was a slanting 'tunnel' built through the fuselage, which gave the rear-gunner the added advantage of being able to fire downwards at attacking aircraft taking advantage of the traditional blind spot below the tail.

Diagram showing the evolution of the _Englandgeschwader_.

Initially the load of the Gotha G.IV in the daylight raids on London consisted of a mixture of 50kg explosive bombs and 12.5kg explosive or incendiary bombs. The larger bomb was about 5ft long with a diameter of 7in and either armed for detonation on impact or with a delay fuse which allowed the bomb to penetrate through a building before exploding. However, estimates indicate that up to a third of 50kg bombs failed to explode and another 10 per cent detonated in mid-air. The actual bomb load varied, but on the early daylight raids a typical load of 300kg would be four 50kg and eight 12.5kg bombs. Later in the campaign the Gothas mainly utilized the 50kg explosive bomb with a limited number of the 100kg type, as well as incendiaries. When he first took office, Hoeppner anticipated that 30 Gotha G.IVs would be available to begin attacks on London by 1 February 1917 and he further noted that development of the _Riesenflugzeug_ (giant aeroplane), or the R-type, was progressing well, anticipating that these even larger and more powerful aircraft would soon be added to the weapons at his disposal. Halbgeschwader 1 returned to Ghistelles and reformed as Kagohl 3. The three existing Kasta, 1, 4 and 6, became Kasta 13, 14 and 15 of the new squadron and were boosted by three more: Kasta 16 joined immediately, with Kasta 17 and 18 in place by July 1917. Each _Kasta_ consisted of six aircraft, giving a squadron strength of 36 aircraft, plus three allocated to the HQ.
New airfields were under construction for the squadron around Ghent. But there were delays; the first airfields, Melle-Gontrode and Sint-Denijs-Westrem, were not ready until April 1917, followed in July by Mariakerke and Oostakker. Although anticipated in February, the first of the squadron's aircraft did not arrive at Ghistelles until March 1917. The following month Kasta 13 and 14 transferred to their new airfield at Sint-Denijs-Westrem, while Kasta 15 and 16, along with the HQ, moved to Melle-Gontrode. Yet Kagohl 3 was still not ready to begin its work. Throughout the training period the crews experienced engine problems with their new aircraft, requiring the rest of April to improve though not completely rectify these problems. And then there was the fuel issue. Tests proved that the engines would consume their full capacity of 175 gallons of petrol on even the most direct return flight to London. Any deviation or evasion tactics would exhaust the onboard supply and imperil a safe return. Therefore reserve fuel tanks were authorized for all the squadron's aircraft, their fitting causing further delay. But by mid-May Brandenburg announced that his squadron was ready to make its first attack on London. There were ongoing delays fitting the reserve fuel tanks, but he reasoned that a refuelling stop near the coast would allow the topping up of the existing tanks, thus granting a little leeway. So all was ready; the crews of the _Englandgeschwader_ now just waited impatiently for the advent of good weather before launching Operation _Türkenkreuz_ (_Turk's Cross_) – the code-name for the attack on London.

BRITISH PLANS

There was a genuine belief in Britain that, after the successes against Zeppelin raiders in the autumn of 1916, the aerial threat was over.
Even that audacious raid by a single aeroplane on 28 November, ending close to Victoria station, failed to cause any undue concern amongst the military and prompted little comment, but the press issued a cautionary warning about future aeroplane attacks. But when no more Zeppelins – or aeroplanes – appeared over the capital for the rest of 1916 or in the early weeks of 1917, the fear of aerial raids largely evaporated. At the opening of 1917 the home defences were those instigated and developed since the War Office had taken over the responsibility for the defence of London from the Admiralty in February 1916. It was a defence system designed to oppose the night-time raids by German airships. The Home Defence Wing of the Royal Flying Corps (RFC) contained 11 squadrons assigned to the defence of Britain, of which four defended the approaches to London: Nos. 37, 39, 50 and 78. The four 'London' squadrons each had an establishment of 24 aircraft, the rest set at 18. However, as a snapshot, a report dated 7 March 1917 showed that the 'London' squadrons could muster only 64 aircraft out of an establishment of 96, and ten of those were undergoing repairs. And because of the nature of night-time defence against airships, the aircraft allocated to these squadrons were older, slower, more stable aircraft, such as the BE2c, BE12 and FE2b. The tactics were simple; each aircraft operated individually, flying along pre-set patrol lines hunting for Zeppelins caught in searchlights as they approached the city. The aircraft did not carry radios as the Admiralty opposed their introduction claiming they would interfere with Navy signals. It was one of many flashpoints between the Admiralty and the War Office in their troubled relationship in the field of aviation. 
However, stretched as the Home Defence squadrons were, with the diminishing threat from enemy airships and an ever-growing demand for aircraft and personnel on the Western Front, moves were afoot to reduce the number even more. Early in February 1917, Lt Gen David Henderson, commander of the RFC, advised that he urgently required two new night-flying squadrons for service in France. While the aircraft would be available at the beginning of March, Henderson now asked for the transfer of 36 trained pilots from the Home Defence squadrons to fly them, with an additional nine pilots each month as replacements, adding that 'the diminished risk from Zeppelin attack amply justifies this temporary reduction'. Three days later the War Cabinet approved the transfer.

The 3in 20cwt anti-aircraft gun, the standard weapon of the London air defences. Here it is shown on a 'trailer' mount designed and constructed by the Royal Navy Anti-Aircraft Mobile Brigade.

Control of the anti-aircraft gun defences of London had rested with Lt Col M. St L. Simon, RE, from December 1916. Almost immediately Simon found his command reduced. The plan of his predecessor, Admiral Sir Percy Scott, included two gun rings around London, one 5 miles out and the other 9 miles from the centre, each gun position mounted with twin guns, supported by an outer ring of searchlights – 'aeroplane lights'. The plan required 84 guns in 42 gun positions, but in January 1917 cuts reduced the total available to 65 following the Admiralty's demand for guns to arm merchant ships in the battle against the German U-boats. As a result Simon abandoned the original plan. Only three double-gun stations remained, the other 39 downgraded to single-gun positions and the remaining 20 guns relocated to bolster the defences on the north and eastern approaches to the capital.

In December 1915, replaced as commander of the BEF, Field Marshal Lord French returned to Britain as Commander-in-Chief, British Home Forces. He presided over the gradual reduction in Britain's aerial defence capability, which left it exposed when Germany began aeroplane raids in 1917.

And then one final dramatic decision reduced London's effective defence further. At a high-ranking meeting on 6 March 1917, attention focused on further Home Defence cuts to allow redeployment of manpower to the Western Front. Field Marshal Lord French, commander-in-chief of British Home Forces, then made a remarkable recommendation, one that received immediate approval: 'No aeroplanes or seaplanes, even if recognized as hostile, will be fired at, either by day or night, except by those anti-aircraft guns situated near the Restricted Coast Area which are specially detailed for the purpose.' With AA guns no longer on 24-hour alert, big reductions in manning levels followed. However, Lt Col Simon, who had been working on a plan to oppose future aeroplane attacks, remained unconvinced about the end of the aerial threat and, without official approval, completed his defence plan before filing it away for possible future use. It was against this scaled-down defence system that the _Englandgeschwader_ was about to open its campaign.

Damage caused during the raid of 24 September 1917 – the first raid of the Harvest Moon offensive. The bomb here, at 144a King's Cross Road, killed 13-year-old James Sharpe and injured seven others. Having helped his mother carry his brothers and sisters across the road to a shelter, James returned to help his invalid grandfather just as the bomb exploded, burying him under the rubble of the building. He died from a fractured skull. (IWM HO 72)

THE 1917 BOMBER RAIDS

THE CAMPAIGN BEGINS

Having informed Hoeppner in mid-May that he was ready to launch his first attack on London, Brandenburg then faced the frustration of a period of bad weather which prevented him from carrying out the plan.
In fact the British weather, which had proved an implacable opponent to the Zeppelin raids, continued to dog the bomber raids too. Weather forecasting in the early years of the 20th century was simplistic in comparison with modern satellite systems, and in 1917 weather systems approaching Britain over the Atlantic remained unknown to German forces. Good weather, wind speeds and directions over the North Sea could be predicted with some accuracy, but what was to come over England could not. In fact, before Brandenburg could launch his first raid, another daring attack by a single aircraft, an Albatross C VII of Feldflieger Abteilung Nr. 19, on the night of 6/7 May did reach London. The crew dropped five 10kg bombs between Hackney and Holloway, killing one man and causing two injuries, before returning unmolested to Belgium. However, on 24 May 1917, Brandenburg received a positive forecast for the following day and with that he issued orders for the first bomber squadron raid on London. Twenty-three Gotha G.IVs set off for London, but thick cloud cover blanketing the city forced Brandenburg to turn away and head home via secondary targets in Kent. The bombs intended for London caused casualties of an unprecedented level, mainly on the unsuspecting population of Folkestone and the military camp at Shorncliffe; 95 were killed and 195 injured. The defensive response was confused, uncoordinated and ineffective. Only specified coastal anti-aircraft batteries opened fire – as ordered on 7 March – and despite over 70 aircraft taking to the air, only one got close enough to engage. The stiffest opposition came from RNAS aircraft based in the Dunkirk area, who encountered the returning raiders and claimed one Gotha shot down over the sea, while another crashed on landing near Bruges, killing the crew.

Gotha G.IV aircraft of Kagohl 3 preparing for a raid on England in 1917. Bad weather caused Brandenburg to abort the first two planned raids on London, but the third, on 13 June, delivered a devastating blow against the city. (Colin Ablett)

The raid caused a public outcry. Makeshift arrangements called for training squadrons, aircraft acceptance parks and experimental stations to make aircraft available for patrols, and another 20 aircraft were drafted into the Home Defence squadrons. A conference then followed to 'report upon the defence of the United Kingdom against attack by aeroplanes', yet it achieved little. To speed up the transmission of accurate information, 24 trained anti-aircraft observers were withdrawn from France and redeployed on lightships anchored in the approaches to the Thames estuary, but other than that it was felt anything else would have a detrimental effect on front-line aircraft requirements. The question of fitting wireless transmitters into RFC aircraft again foundered in the face of Admiralty opposition. Inconclusive discussions also took place at the conference about the practicality of a public air raid warning system. No warning system had been used in London during the Zeppelin raids. A second raid followed on 5 June with similar results. Weather conditions forced Brandenburg to turn away and head for secondary targets along the Thames estuary. His bombs were hitting home as Home Defence pilots still struggled up to operational height. There was, however, one beacon of light for the defenders; the eight coastal AA guns around Shoeburyness and Sheerness opened fire on the raiders and brought down one of the Gothas, which crashed into the estuary. Then, three days later, on 7 June, an order cancelled the three-month-old restriction on general anti-aircraft fire. Lieutenant-Colonel Simon immediately dusted off his plan, pigeon-holed since March, and prepared for the inevitable raid on London that everyone knew must follow. It came on the morning of Wednesday 13 June 1917.
LONDON'S FIRST DAYTIME RAID

With a good forecast from his weather officer, Brandenburg prepared his crews for a third attempt on London. All aircraft now had the reserve fuel tanks fitted so it would be a direct flight – he chose a morning departure as there was a possibility of thunderstorms later. On 13 June, 20 aircraft took off from the two airfields near Ghent, but very quickly two turned back with engine problems. The rest continued on course; the mood was buoyant. One later wrote, 'We can recognise the men in the machine flying nearest us, and signals and greetings are exchanged. A feeling of absolute security and indomitable confidence in our success are our predominant emotions.' Shortly after, one Gotha left the formation and turned southwards towards the Kent coastal town of Margate, on which it dropped five bombs; moments later it was gone. Word of the raid reached Home Forces GHQ, followed swiftly by news of a large formation of aircraft approaching the Essex coast. At this point three more Gothas left the formation, two peeling off towards Shoeburyness, where they dropped six bombs before heading home, while the other crossed to the south of the Thames and followed a course towards Greenwich, believed to be on a photo-reconnaissance mission. Brandenburg continued towards London with the remaining 14 aircraft in two formations flying abreast on a wide front. The noise they created was so great that those on the ground claimed they heard it ten minutes before the aircraft came into view. Yet most who came out into their gardens on this warm, hazy, summer's morning to watch these 'silver specks' flying overhead presumed them to be friendly aircraft and watched in admiration as they passed.

The recovery of the wreckage of Gotha G.IV 660/16, shot down by anti-aircraft guns along the Thames estuary on 5 June. Only the Gotha's gunner, Uffz Georg Schumacher, survived.
THE FIRST DAYLIGHT RAID, 25 MAY 1917

This illustration shows a Gotha G.IV about to take part in the first attempt by Kagohl 3 to bomb London on Friday 25 May 1917. Twenty-three Gothas set out for London that day. However, thick cloud cover over the city forced the formation to seek secondary targets in Kent. The bombs intended for London caused casualties of an unprecedented level, mainly on the unsuspecting population of Folkestone and at the Shorncliffe military camp. The aircraft shows the standard pale blue finish used on daylight bombing raids, with pale grey engine compartments. Defensive armament generally consisted of two 7.92mm Parabellum machine guns, one in the front cockpit and one in the rear (hidden behind rear wing struts). Mesh guards fitted to prevent the rear gunner shooting off the rear-mounted propeller blades resulted in a limited lateral field of fire. The usual bomb load on daylight raids amounted to 300kg, typically made up of six 50kg bombs or a combination of four 50kg and eight 12.5kg bombs. The senior crew member occupied the cockpit in the nose, from where he controlled navigation, observation and bomb-aiming, and also operated a front machine gun. All of the three-man crew were seated in the open, making long flights uncomfortable and requiring them to be well protected from the wind and cold. The Gotha was a reliable aircraft in flight, it had good manoeuvrability and its two 260hp Mercedes engines gave a maximum speed of 87mph. However, the great flaw in the Gotha design was its instability when landing at the completion of a mission without the ballast provided by bombs and fuel, which resulted in a high percentage of losses in landing accidents.

News of the approach of the Gothas reached Nos. 37, 39 and 50 squadrons at 10.53am. Twenty minutes later the additional formations instructed to assist in defence received orders to take off.
At 11.24am the 3in 20cwt AA gun at Romford became the first of the London guns to open fire, followed by the Rainham gun at 11.30am – but others struggled to locate the target in the hazy sky. One of the Gotha commanders described the moment: 'Suddenly there stand, as if by magic here and there in our course, little clouds of cotton, the greetings of enemy guns. They multiply with astonishing rapidity. We fly through them and leave the suburbs behind us. It is the heart of London that must be hit.' Moments later the first bomb dropped, harmlessly, on an allotment in North Street, Barking, immediately followed by another seven that fell in East Ham. Two in Alexandra Road damaged 42 houses, killing four and injuring 11. Another bomb fell on the Royal Albert Docks, where it killed eight dockworkers and damaged buildings, vehicles and a railway truck. The City of London was now clearly in view directly ahead and Kagohl 3 closed up into a wide-diamond formation. One of the commanders looked out entranced as though a tourist: 'We see the bridges, the Tower of London, Liverpool [Street] Station, the Bank of England, the Admiralty's palace – everything sharply outlined in the glaring sunlight'. At 11.35am, over Regent's Park, Brandenburg fired a signal flare and the whole formation turned to the east, back towards the City. Once over their target, 72 bombs rained down in the space of two minutes within a 1-mile radius of Liverpool Street Station – stretching from Clerkenwell in the west to Stepney in the east and from Dalston in the north to Bermondsey in the south. Even as the bombs fell people rushed outside or grabbed a vantage point to see this bewildering, confusing spectacle. An American journalist, travelling across the City on a bus, observed that:

From every office and warehouse and tea shop men and women strangely stood still, gazing up into the air. The conductor mounted the stairs to suggest that outside passengers should seek safety inside. Some of them did so.
'I'm not a religious man,' remarked the conductor, 'but what I say is, we are all in God's hands and if we are going to die we may as well die quiet.' But some inside passengers were determined that if they had to die quiet they might as well see something first, and they climbed on top and with wonderstruck eyes watched the amazing drama of the skies. Three bombs hit Liverpool Street Station. One fell on the edge of platform No. 9, blasting apart a passenger carriage and causing two others to burn ferociously just as the train was about to depart for Hunstanton. A second bomb fell close by, striking carriages used by military doctors. Casualties in the station rapidly mounted to 16 killed and 15 injured. Siegfried Sassoon, the war poet, was at the station that day while on leave and considered, 'In a trench one was acclimatized to the notion of being exterminated and there was a sense of organized retaliation. But here one was helpless; an invisible enemy sent destruction spinning down from a fine weather sky.' Other bombs fell all around. At 65 Fenchurch Street two bombs partially demolished the five-storey office building while claiming 19 lives and injuring another 13. Thomas Burke, working in his third-floor office, heard 'ominous rumbles' and then:

... came two deafening crashes. The building swayed and trembled. Two big plate-glass windows came smashing through. Deep fissures appeared in the walls, and I was thrown to my knees... Looking out of my window on to a street that seemed enveloped by a thick mist... a girl, who had been standing in a doorway of a provision shop, next door, having now lost both her legs... a certified accountant, who had offices near mine, lying dead beside his daughter, who had tried to help him.

Damage to the Mechanics Shop at the Royal Mint, Tower Hill, on 13 June 1917. The blast, at 11.38am, killed four men and injured 30 others.
Of nine men working on the roof of a brass foundry just to the west of Liverpool Street station, eight were killed and, not far away in Central Street, a policeman prevented a number of female factory workers from dashing into the street just as a bomb exploded and killed him. Countless other dramatic and tragic stories emerged from the few minutes of horror that descended on London that summer's morning – but there was one above all others that left an indelible mark. Having passed over the city, those aircraft still carrying bombs unloaded them on east London as they departed. Tragically one fell on the Upper North Street School in Poplar. The 50kg bomb smashed through three floors of the building, killing two children in its path, before exploding on the ground floor in a classroom crammed full with 64 infants. Once the dust and debris had settled, rescuers pulled the mangled bodies of 14 children from the wreckage, along with 30 more injured by the blast, two of whom later died. Another bomb landed on a school in City Road but failed to detonate. Yet, as Kagohl 3 completed their mission, they were still largely unmolested. Although some 94 individual defensive sorties were flown by the RFC and RNAS, the time it took to gain the Gothas' operating height, and the short time the enemy formation was over the city, meant that only 11 got close enough to the departing raiders to open fire – all without serious effect. One of these, a Bristol Fighter from No. 35 (Training) Squadron, finally caught up with three straggling Gothas over Ilford, Essex. Flown by Capt C.W.E. Cole-Hamilton, with Capt C.H. Keevil as observer, it closed to attack, but in the exchange that followed a bullet pierced Keevil's neck and killed him. Defenceless, Cole-Hamilton turned sharply away and headed for home. Eleven of the London AA guns opened fire on the raiders but scored no success.
All the Gothas returned safely to their bases, having caused £125,953 worth of material damage in London, killing 162 and injuring 426 – this raid inflicting the highest single casualty total of the campaign on the city – and leaving the Home Defence organization exposed and largely powerless in its wake.

REACTION AND RESPONSE

The feeling of outrage amongst the public was great and the clamour for reprisal raids on German towns gained voice. Zeppelins had approached under the cloak of darkness, but the Gothas appeared brazenly in broad daylight. In addition, the question of the lack of public air raid warnings was also the subject of much debate – one that the government struggled to decide upon. It was clear that being out in the streets during a raid was more dangerous than being under cover, yet when enemy aircraft appeared people ran out into the streets to watch them. Would even more of the curious go into the streets, the government considered, if they knew a raid was imminent, risking their own safety and hindering the movement of the emergency services?

A newspaper photograph taken four days after the first Gotha raid on London, which killed 18 schoolchildren in Poplar. The original caption reads, 'School children practice what to do in an air raid – at a given signal all lie down flat'.

The following day Brandenburg flew to Germany, ordered to report to Supreme Headquarters to relate the details of the raid to the Kaiser; he received the Pour le Mérite (the 'Blue Max') for his achievement. But on his homeward journey on 19 June disaster struck. The engine of his two-seater Albatross stalled and the aircraft crashed, killing his pilot and, although Brandenburg was dragged alive from the wreckage, it proved necessary to amputate one of his legs. The news stunned the previously jubilant crews of Kagohl 3. In London the War Cabinet met on the afternoon of the raid and again the following day to consider this new threat.
A demand for a dramatic increase in the strength of the RFC was, after discussion, finally approved in July, but this was long term. In the meantime, a further meeting already planned for 15 June took place with Field Marshal Sir Douglas Haig and Maj Gen Hugh Trenchard present, respectively commanding the Army and RFC on the Western Front. Reluctantly Trenchard approved the temporary detachment of two front-line squadrons to take part in enhanced patrols on each side of the English Channel; No. 56 Squadron (equipped with the SE5a) moved to Bekesbourne near Canterbury and No. 66 Squadron (Sopwith Pups) relocated to Calais. But Trenchard stressed the importance of the return of both squadrons to him by 5 July. Haig needed the RFC at full strength to support his major attack at Ypres, intended to push through and clear the Germans from the Belgian coast.

The monument to those killed at Upper North Street School, in Poplar Recreation Ground, paid for by public subscription. The monument bears the names of all 18 victims, the majority of whom were under six years old.

Meanwhile, Higgins, commanding Home Defence Brigade, RFC, was beginning to receive new, more efficient aircraft for his squadrons, as Sopwith Pups, Sopwith 1½ Strutters, SE5as and Armstrong-Whitworth FK8s joined his roster. This meant that pilots familiar only with the BE types needed a period of retraining. At the same time the RNAS at Eastchurch received a batch of new Sopwith Camels previously earmarked for France. However, Lt Col Simon's request for an additional 45 AA guns, to bolster the thin defences on the eastern approaches to the capital and complete his previously shelved plans, failed because neither the guns nor the men to crew them were available.
Further meetings also took place regarding the implementation of public air raid warnings, but again the government blocked their introduction – citing this time, amongst other reasons, evidence that munitions workers alerted to air raids had left their workplace and often did not return after the threat had passed – having a negative effect on war production. Back in Belgium a new man arrived at Melle-Gontrode, the headquarters of Kagohl 3. Hauptmann Rudolph Kleine took up his position as Brandenburg's replacement in late June and waited, like his predecessor, for a suitable break in the weather. With no option presenting itself for London, Kleine ordered an attack on the naval town of Harwich on 4 July, which proved successful. But by now Trenchard's loan of two squadrons was over. The day after the Harwich raid No. 56 Squadron headed back to France, having, as one of their pilots put it, 'stood by, perfectly idle'. No. 66 Squadron left Calais a day later, and then, inevitably, the day following their departure, Kagohl 3 launched its next attack on London.

THE SECOND DAYLIGHT RAID – SATURDAY 7 JULY

Kleine chose an early take-off again, and reduced each aircraft's bomb load to allow his formation to fly faster and higher. Word of the approach of the formation of 22 Gothas, transmitted early by observers on the Kentish Knock lightship, meant that 15 minutes later defence aircraft were taking off, enabling some to engage Kagohl 3 on the way to London. One Gotha wheeled away with engine problems, making a brief bombing run over Margate before heading home. The main body crossed the coastline near the mouth of the river Crouch, flying in close formation at about 12,000ft, heading west towards the landmark of Epping Forest before beginning to climb for their bombing run. No. 37 Squadron, directly in the Gothas' flight path, had at least 11 aircraft in the air, but realistically only its four Sopwith Pups could hope to engage the Gothas.
Three of them attacked the formation: one pilot gave up his attack when his guns jammed, another suffered engine problems and a third abandoned his attack because of a combination of both.

A Bristol Fighter on display at the Shuttleworth Collection. The addition of the 'Brisfit', far superior to old BE-types that formed the bulk of the defence force in June 1917, gave Home Defence pilots a chance to engage the Gothas on equal terms.

Following the injury to Brandenburg, command of Kagohl 3 passed to Hptmn Rudolph Kleine (left) on 23 June. Here Kleine is with his adjutant, Oblt Gerlich. Whereas Brandenburg was calm and calculating, Kleine proved to be impatient and rash at times.

Other pilots with high-performance aircraft closed to engage but many suffered problems with guns jamming that day. Despite this increased attention and the fire of AA guns along their route, the Gothas continued on their course without significant distraction and, as they got closer to their target, Kleine tightened up the formation. Kleine led, then behind came two flights of eight aircraft, side by side, extended for about a mile, with the remaining four bringing up the rear. Once over Epping Forest, Kleine signalled the formation to begin its turn towards the city. The morning was bright and sunny with a light haze in the eastern sky. Before the guns opened and the bombs began to drop many onlookers, watching the approaching flight, described it in picturesque terms, likening it to a flock of birds, while a journalist wrote:

To the spectator, in the midst of a quiet orderly London suburb, busily engaged in its Saturday shopping, it seemed ludicrously incredible that this swarm of black specks moving across the summer sky was a squadron of enemy aircraft, laden with explosive bombs waiting to be dropped into 'the brown' of London's vast expanse of brick and mortar.

Moments later the peace of that Saturday morning was broken.
At 10.21am the AA gun at Higham Hill opened fire, followed by Wanstead two minutes later. The guns at Palmers Green, Finchley, Highbury and Parliament Hill then opened up too. As the Gothas banked the formation appeared to open out into two groups, that on the left passing south over Tottenham while those to the right continued westwards before turning south-east as they approached Hendon. The first guns of the Western AA district opened fire at 10.26am, the guns at Tower Bridge and Hyde Park joined in at 10.30am. Another journalist takes up the story, '... for five minutes the noise was deafening. Shells bursting in the air left puffs of black smoke, which expanded and drifted into one another. It seemed impossible that the raiders could escape being hit. Machines were often hidden in the smoke, but always they came through safely.'

Members of the public and a policeman seek shelter during the raid on Saturday 7 July. A newspaper reported that buses stopped while the passengers and crew got off and dashed into buildings before retaking their seats when the Gothas had passed.

Bomb damage caused on 7 July to the German Gymnasium in Pancras Road, between King's Cross and St Pancras stations. Built in 1864–65 by the German Gymnastics Society, it is believed to be the first purpose-built gymnasium in the country.

With the increase in gunfire the Gotha formations opened out and began evasive tactics. The first bomb fell on Chingford, followed by a handful more falling in Tottenham and Edmonton inflicting limited damage to property. Then, in Stoke Newington – the scene of the first London Zeppelin raid – the human tragedies began. Four bombs fell close together, in Cowper Road, Wordsworth Road and two in Boleyn Road. In Cowper and Wordsworth roads the bombs severely damaged three houses and another 60 suffered lesser damage, but the bomb that exploded in Boleyn Road delivered a more deadly effect.
William Stanton was in the road when the bomb fell: 'About 10.30am someone shouted in the street, "The Germans!" I looked up and saw the aeroplanes. People were running everywhere. There was a terrible explosion, and a hundred yards away three houses were blown to the ground.' The explosion killed a 12-year-old grocer's delivery boy as he cycled past and a naturalized German baker and his wife died while working in their shop; seven other lives were lost in the blast and nine injured. Over 50 buildings, many let out as tenements, also suffered damage. The raiding aircraft continued southwards over Dalston, Hoxton and Shoreditch before reaching the City where they turned to the east and continued bombing as they set course for home. Meanwhile the western part of the formation was now closing in on the City too. Bombs fell close to King's Cross Station and around Bartholomew Close, Little Britain, Aldersgate Street and Barbican, causing significant destruction, while a number of fires also broke out. One bomb, falling on the roof of the Central Telegraph Office in St Martin's Le Grand, caused significant damage to the two top floors of this large building. More bombs fell in and around Fenchurch Street, Leadenhall Street and Billingsgate Fish Market, while another landed in Tower Hill, close to the Tower of London.

Aerial view of London taken at 14,200ft during the raid of 7 July. Geographic key: 1: River Thames. 2: St Paul's Cathedral. 3: Smithfield Market. 4: Finsbury Circus. 5: Liverpool Street Station. Bomb key: A: Central Telegraph Office. B: Aldersgate Street. C: Little Britain. D: Bartholomew Close. E: Golden Lane. F: Whitecross Street. G: Chiswell Street. (IWM Q 108954)

The Tower Hill bomb exploded outside offices where some 80 people were sheltering. Those inside heard a deafening crash followed by 'a blinding flash, a chaos of breaking glass' then dust, soot and fumes filled the air.
The blast killed eight and injured 15 of those taking shelter, while outside, the explosion left 'three horses lying badly wounded and bleeding'. A fireman used his axe to put the horses out of their misery. The sound of the raid caused shoppers in neighbouring Lower Thames Street to seek shelter in an alleyway to the side of The Bell public house, but a bomb brought a neighbouring house and part of the pub crashing down, burying the shoppers under the rubble. Eventually rescuers recovered the bodies of four men and dragged seven injured from the wreckage, including a child. A few of the raiding aircraft extended across the Thames on their eastward flight dropping bombs close to London Bridge station. The final bombs fell at about 10.40am in Whitechapel, Wapping and the Isle of Dogs as Kleine led Kagohl 3 away from London. The RFC now had 79 aircraft of 20 different types in the air, while the RNAS put up 22 aircraft. As the incoming formation had headed for London, Higgins redirected individual flights to a position off the north Kent coast where he hoped they would intercept the raiders on their return journey. A series of confused individual attacks harried the Gothas, who began to draw their formation together. Many pursuers reported problems with jammed guns or an inability to keep up with the raiders. One who did get into close combat, Capt J. Palethorpe, piloting a DH4 from a Testing Squadron, with Air Mechanic F. James as observer, engaged a leading Gotha as it headed across Essex towards the coast. Palethorpe's Vickers gun jammed but he kept up with the formation, allowing James to engage three enemy aircraft with his Lewis gun. James fired off seven drums of ammunition in all and, closing in to within 30 or 40 yards of one, he fired into it until it began to emit smoke. But before they could see the outcome a bullet struck Palethorpe 'in the flesh of the hip' and, with blood running down to his boots, he turned sharply away and landed safely at Rochford.
Another crew, flying a No. 50 Squadron Armstrong Whitworth FK8, one of those waiting to intercept the returning Gothas, closed to engage over the North Sea. Flying at 14,000ft, the pilot, 2nd Lt F.A.D. Grace with observer, 2nd Lt G. Murray, attacked one Gotha without effect, then attacked a group of three, but turned away because of the intensity of the return fire. Spotting a straggler flying below his FK8, Grace then pounced on this new target, as he later recalled: 'We dived at it, firing our front gun, range 800 yards, as we got closer on a zig-zag course, and when between 600 and 400 yards, we got on its starboard side and above. The observer opened fire on it, with good results, as we saw black smoke coming from the centre section, and the H.A. [hostile aircraft] dived into the sea.' The Gotha remained on the surface for a while, and although Grace and Murray circled, attempting to alert surface craft, with fuel running low, they reluctantly turned away. Neither the crew nor aircraft were recovered. Kleine led his formation on a wider return flight in an attempt to avoid the Dunkirk squadrons and in this he was successful, but RNAS pilots from Manston pursued his formation most of the way back to Belgium. Damage from incessant attacks forced one Gotha down on the beach at Ostend and Kagohl 3 lost three others, wrecked on their airfields, a combination of many factors, including enemy action, strong winds, lack of fuel and the Gotha's inherent instability when unladen. British aircrew suffered too with two aircraft shot down.

Large crowds gather after a bomb hits the roof of the Central Telegraph Office on the morning of 7 July. The bomb damaged the top two floors, injured four and falling masonry killed a sentry in the street.

Back in London there was concern over the number of anti-aircraft shells that had landed on the city adding to the casualties; the London Fire Brigade recorded the fall of 103 shells.
Total casualties in the capital reached 54 killed and 190 injured. Of these, ten were killed and 55 injured by this 'friendly fire'. The raid brought a wide variety of reactions. Sections of the bombed population turned against immigrants in their midst, considering many with foreign names to be 'Germans'. Riots broke out in Hackney and Tottenham, where mobs wrecked immigrant houses and shops. Moreover, such was the anti-German feeling that four days later King George V, with the unfortunate addition of Gotha in his family name (Royal House of Saxe-Coburg-Gotha), issued a proclamation announcing a change of name to the Royal House of 'Windsor'.

A historic photograph showing the 21 Gothas of the _Englandgeschwader_ that reached London on 7 July as they flew over Essex on their return flight.

Feelings of anger were rife against the Government too. The War Cabinet held a series of meetings between 7 and 11 July. Frustrations at the removal of the two 'loaned' squadrons just hours before the raid were voiced and in response a squadron currently forming for service in France was instead earmarked for home service. Another squadron, No. 46, which was operating on the Western Front, received orders temporarily redeploying it to England for Home Defence (10 July–30 August). Discussions followed highlighting the limited response to the raid; these resulted in approval for the formation of a committee to consider Home Defence arrangements and the organization of aerial operations.

Lt Gen Jan Christian Smuts. The former Boer guerrilla leader came to London in 1917 as the South African delegate to the Imperial War Cabinet. Having impressed with his sharp intellect and analytical brain, Smuts was invited to join the War Cabinet.

LONDON MAKES READY

The committee was unusual in that it really revolved around just one man, the former Boer guerrilla leader, now Lt Gen Jan Christian Smuts.
Having exhaustively interviewed all the senior officers involved, he produced a detailed report on the air defences eight days later. In it he highlighted the flaws in the current system of defence and recommended that a single officer 'of first-rate ability and practical air experience be placed in executive command of the air defence of the London area', bringing together the RFC, AA guns and Observation Corps under a united command. Smuts also called for additional AA guns and the rapid completion and training of three new day-fighter squadrons (Nos. 44, 61 and 112) and a general increase in aircraft committed to the defence of the capital, allowing for the creation of a reserve. This first part of the report received swift approval leading to the creation of the London Air Defence Area (LADA) on 31 July. The command included all gun batteries in the south-east from Harwich to Dover and inland to London, the nine RFC squadrons allocated to Southern Home Defence wing (Nos. 37, 39, 44, 50, 51, 61, 75, 78 and 112) and all observation posts east of a line drawn between Grantham in Lincolnshire and Portsmouth on the Hampshire coast. The man chosen for the job was Maj Gen Edward B. Ashmore, a former senior RFC officer and currently commander of the artillery of 29th Division. The raid of 7 July also raised the question of a public air raid warning again. This time the government felt unable to block the demand and a system of marine distress maroons would in future announce incoming daytime raids, combined with police alerts which would also extend into the evening.

AN ENGLISH SUMMER

While this reorganization was under way London was free from attack. The _Englandgeschwader_ waited for its next opportunity but the weather reports were not favourable for attacks on the city. However, clear skies over the coast prompted attacks on Harwich and Felixstowe on 22 July, which resulted in 13 deaths and 26 injuries.
As July passed into August the weather effectively blocked Kleine's ambition and granted Ashmore much-needed time to improve London's defences. The first three weeks of the month heralded a typical English summer – rain and high winds! It proved a disastrous month for Kleine. On 12 August, at short notice, he ordered an attack on Chatham. Strong winds delayed the formation forcing it to attack Southend and Margate instead. Engaged by AA guns and pursued by the RFC and RNAS back across the North Sea, heavy casualties followed: one Gotha shot down at sea, one forced down and crashed near Zeebrugge with four more wrecked in landing accidents. Kleine ordered two more raids in August on south-eastern coastal towns. Both ended in disaster. On 18 August strong winds forced the formation of 28 Gothas way off course and, with fuel running low, the raid was abandoned. Driven by the wind towards neutral Holland, it appears that Kleine lost as many as nine aircraft to a combination of Dutch AA fire, shortage of fuel and crash landings. Certainly, when the dogged Kleine ordered the squadron airborne again four days later, he could muster only 15 aircraft. This raid, on 22 August, proved the futility of the continuance of the daylight campaign. Alerted early to the approach of Kagohl 3, the coastal AA guns were ready and RNAS aircraft in Kent were in the air and waiting. Five Gothas, including Kleine's, turned back early with engine problems; the rest ran into a determined defence and turned for home after bombing Margate, Ramsgate and Dover. Three Gothas were shot down, two probably by RNAS aircraft and one by AA fire. While the weather conditions had protected the principal target, London, Kleine lost 18 of Kagohl 3's aircraft in these August daytime raids.

Following the raid of 7 July the government authorized a system of air raid warnings.
On 14 July an announcement confirmed that in future marine distress maroons fired from fire stations would alert the public of approaching enemy aircraft during daylight hours. In conjunction with the maroons, policemen on foot, on bicycles and in cars would tour the streets carrying placards emblazoned with 'take cover', accompanied by whistles, bells or motor horns. (Colin Ablett)

THE 'DAYLIGHT' DEFENCES TIGHTEN

The resolute Lt Col Simon submitted a new request for more guns in July, backed by the RFC. He asked for the construction of a new ring of gun stations 25 miles out from the centre of London, able to put up a barrage of shells to break up the attacking formations before they reached the capital, making them less formidable targets for the RFC to pick off. The scheme required 110 guns covering the north, east and south approaches to the city, but again the request failed. With no new guns available Simon implemented his plan as best he could with ten guns transferred to the eastern approaches from other London stations and a further 24 withdrawn from other duties and redeployed for the defence of London. With the maroons initially restricted to warn of daytime raids only, the police system continued to warn the population at night – both systems incorporating the use of the police to announce the 'all clear', assisted by Boy Scouts blowing bugles.

Evidence of Kagohl 3's disastrous raid on 22 August 1917. The smouldering wreckage of a Gotha crewed by Oblt Echart Fulda, Uffz Heinrich Schildt and Vfw Eichelkamp lying on Hengrove golf course, Margate. All three died.

In an effort to stem the incidents of AA guns firing at Home Defence aircraft, Ashmore announced the creation of the 'Green Line', a fixed line drawn inside the line of the new outer barrage. Outside the Green Line guns had priority; inside the line priority switched to the defending aircraft.
And while the RFC practised the intricacies of flying in formation, elsewhere four BE12 aircraft became the first to be fitted with wireless-telegraphy equipment, enabling them to transmit, but not receive, Morse messages to ground stations detailing the movements of enemy formations. Moves also commenced to improve telephone communications between Horse Guards and observer posts, airfields and AA gun positions. In addition, a new operations room was under development at Spring Gardens, by Admiralty Arch, Westminster. August also witnessed a significant development in the history of British aviation. During this month Smuts released the second part of his report, _Air Organization and the Direction of Aerial Operations_. It considered the future of air power and with great insight stated, 'As far as can at present be foreseen, there is absolutely no limit to the scale of [the air service's] future independent war use. And the day may not be far off when aerial operations with their devastation of enemy lands and destruction of industrial and populous centers on a vast scale may become the principal operations of war.' The report concluded by making a strong recommendation for combining the RFC and RNAS into a single air service and urging its swift implementation. It marked the birth of the Royal Air Force, which finally came into being on 1 April 1918.

THE SWITCH TO NIGHT BOMBING

However, all these plans, designed to counter daylight raids, were about to become redundant; the daylight offensive was over. The dramatic losses incurred on recent raids indicated to Kleine the futility of continuing on this course. Therefore, the _Englandgeschwader_ prepared for a switch to night flying but retained a hope that the new Gotha variant, the G.V., would revitalize the daylight offensive. These hopes ended when the G.V. failed to offer any significant improvement in performance over the G.IV.
After a period of intensive night-flying training, plans were ready in early September 1917 for Kagohl 3 to return to the offensive. On the night of 3/4 September, before Ashmore's new arrangements were in place, a force of four Gothas attacked Margate, Sheerness and Chatham; it was at the last of these towns that the heaviest casualties occurred when two bombs fell on a drill hall at the naval barracks. When the dust settled 138 naval ratings lay dead amidst the rubble while colleagues dragged clear the 88 injured. The opposition that night proved negligible. With further good weather, the following day Kleine announced a return to London. At about 8.30pm on 4 September, the first of 11 Gothas took to the air at five-minute intervals. The formations of the daylight campaign were finished; now the aircraft flew singly to avoid collisions in the dark. Inevitably two dropped out with engine problems, leaving the other nine to feel their way with difficulty to England. The first came inland at 10.40pm, the last at 12.10am. Observers struggling to interpret the engine sounds in the dark submitted numerous exaggerated claims about the strength of the incoming Gothas. Eighteen RFC aircraft took off, but only the four Sopwith Camels of No. 44 Squadron and an FK8 of No. 50 Squadron stood any real chance of interception; even so, no pilots effectively engaged the incoming bombers that night. However, the AA gun at Borstal near Rochester proved more effective. Held for some minutes by a searchlight, the gun targeted a Gotha at a height estimated at 13,000ft and opened fire at 11.27pm. The gun commander, 2nd Lt C. Kendrew, RGA, reported that the Gotha 'was apparently disabled by our gun fire... A direct hit was then scored and it was observed to fall almost perpendicularly for a short distance turning over and over'. An exhaustive search discovered no wreckage, leading to the presumption that the aircraft came down in the Medway or Thames Estuary and sank.
Of the remaining eight Gothas that came inland reports show that just five reached London.

The Gotha G.V. Hopes that the G.V would deliver a superior performance to the G.IV and enable Kagohl 3 to resume daytime bombing were soon dashed. Although it offered a small increase in speed, it did so with a lower rate of climb. (IWM Q 67219)

A Sopwith Camel. During the raid on Chatham on 3 September 1917, three Sopwith Camels of No. 44 Squadron, arguably the best day-fighter possessed by the RFC, took to the air at night and confounded current opinion that they were too tricky to fly in the dark.

The moon was two days beyond full so the sky was bright over the capital when the first Gotha arrived, although a thin haze hindered the work of the searchlight crews. The lead aircraft dropped its bombs over West Ham and Stratford at around 11.25pm. One fell on an unoccupied factory that had until recently been used as an internment camp for German nationals. Another bombed between Greenwich Park and Woolwich at about 11.45pm. When the first AA guns in London opened fire '... many people rushed for shelter. Those nearer the tubes went to the stations in all stages of undress and were conveyed in the lifts to the underground platforms. There were hundreds of women and children and scores of men who made for these places of refuge.' A third Gotha appeared at 11.52pm over Oxford Circus in central London and dropped a 50kg bomb, causing serious damage to Bourne and Hollingsworth's store on East Castle Street. Its next bomb fell in Agar Street, off the Strand, outside the main entrance to the Charing Cross Hospital. A man, H. Stockman, was about to take shelter in a hotel entrance opposite where two others already stood. But at that moment, 'a woman came up with terror written on her face'. Mr Stockman, realizing there was room only for three, indicated to the woman to take the place, for which she thanked him profusely.
Moments later the bomb exploded and blew the gallant Stockman to the ground. When he looked up he realized the woman was dead. An RFC officer on leave, standing on the corner of Agar Street, rushed to help and, seeing the demolished front of the hotel, stepped into the building. There he found '... two Colonial [Canadian] soldiers sitting dead in their chairs. One had been killed by a piece of the bomb, which went through the back of his head and out of the front of his Army hat, taking the cap badge with it.'

The former factory of Messrs. Wm. Ritchie, Jute Spinners and Weavers, in Carpenters Road, Stratford, bombed at about 11.25pm on 4 September. The factory, vacant since 1904, had been in use as an internment camp until June 1917.

Besides these three deaths, another ten lay injured, including three soldiers, an American naval rating and a policeman. Across the Strand a bomb damaged the roof of the Little Theatre, converted into a soldiers' canteen by the Canadian YMCA, and another fell in Victoria Embankment Gardens, just missing the Hotel Cecil. Then, seconds later, a fourth 50kg bomb of this salvo exploded on the Victoria Embankment, close to Cleopatra's Needle, just as a single-decker tram passed. The blast seared through the tram, killing the driver and two passengers but the bewildered conductor, Joseph Carr, staggered clear. A few bombs dropped on Wanstead at around 11.55pm, then, about 30 minutes later, the fifth Gotha appeared. At about 12.30am bombs dropped on Edmonton, followed by explosions in Tottenham, Hornsey, Crouch End and Upper Holloway, where a bomb demolished the laundry of the Islington Workhouse in St John's Road. Another fell harmlessly in Highgate, just east of Hampstead Heath, followed by two bombs, both of 12kg, which landed in Kentish Town. One of these, falling in Wellesley Road, damaged the doors and windows of 15 houses but claimed lives too. A witness reported seeing '...
a flash in the air, and immediately afterwards there was a tremendous explosion. A dense volume of smoke was rising from the road'. As the smoke cleared the bodies of a soldier, home on hospital leave, and a woman with her five-year-old child were revealed, dead in the passageway of the house where they lived. The soldier had just pushed his mother clear, saving her life, when the bomb exploded. Further bombs landed in Primrose Hill and Regent's Park before the two final bombs fell close to Edgware Road at about 12.50am. One of these, in Norfolk Crescent, killed a woman, while the other exploded in the air above Titchborne Street (no longer there, it was just south of the present Sussex Gardens). The blast caught an 11-year-old girl as she walked to the end of the road to see if the 'all-clear' had sounded and bowled her along the street. Having convinced herself she was still alive, she made her way home. Only then did she realize, 'I had a hole through my knee. Also, the frock I was wearing had 15 holes in it where I had been whirled along and struck by shrapnel.' The bomb injured 16 people caught in the blast, one of whom later died, damaged 33 houses in Titchborne Street, and blew out the windows of 12 shops in Edgware Road. That sound of smashing glass signalled the end of the raid and silence gradually returned to the skies over London.

A bronze sphinx guarding Cleopatra's Needle on the Victoria Embankment. At about midnight on 4 September a 50kg bomb exploded close to the ancient monument, the blast hitting a passing tram, killing the driver and two passengers. Scars from the bomb can still be seen today on the sphinx and pedestal.

THE FIRST NIGHT-TIME RAID, TUESDAY 4 SEPTEMBER 1917

The skies over London had been free of German aircraft for almost nine weeks, but that changed on the night of 4 September 1917 when a new phenomenon struck the capital – the first night-time Gotha raid. That evening five Gothas reached the capital.
One dropped its first bomb over Oxford Circus shortly before midnight and flew on towards the Thames. The next three bombs all fell close to the Strand. Shortly before they landed, a single-decker 'G' Class tram, No. 596, had crossed Westminster Bridge and stopped before continuing along the Victoria Embankment. The driver, Alfred Buckle, felt uneasy that night and, according to his conductor, Joseph Carr, was keen to get his shift over. At Westminster Buckle asked if he could move off in front of another tram and, having shunted onto the line, proceeded along the Victoria Embankment, where he heard the sound of exploding bombs near the Strand. Keen to get clear he accelerated but, just as he passed Cleopatra's Needle, a 50kg bomb exploded on the pavement between the tram and the ancient monument. The blast smashed through the pavement, destroying a gas main, damaging the base of the Needle and an adjacent sphinx. The blast seared through the tram, killing Buckle and the two passengers – a man and a woman – and blew Joseph Carr from one end of the tram to the other, but he staggered out onto the street and survived. Eight passers-by were also injured by the blast. An eyewitness recalled that Buckle 'appeared to kneel down suddenly, still pulling at his controls. I saw him fall, and that his legs had been blown off: so while dying his last thoughts were to stop his tram.'

The demolished laundry of the Islington Workhouse in St John's Road (now St John's Way) in Upper Holloway. A 50kg bomb landed at around 12.30am on 5 September but there were no casualties.

DEFENSIVE IMPROVEMENTS

The switch to night-bombing presented Britain's defences with a new threat. The War Cabinet called again on the indefatigable Smuts and on 6 September he produced another report. Smuts doubted the ability of aircraft to engage enemy raiders at night, but recommended the use of more powerful searchlights, hoping to dazzle the incoming pilots.
He also supported the idea just proposed by Ashmore for a balloon barrage, 'a wire screen suspended from balloons and intended to form a sort of barrage in which the enemy machine navigated at night will be caught'. Experiments began and approval was given for 20 of these screens, although ultimately only ten were raised, the first in October 1917. After the first moonlit raid, the weather turned in favour of the defenders, granting them time to hone the defensive arrangements to meet this new threat. Investigations into new methods of sound-location continued and Ashmore's plan for the Green Line became operational. No British aircraft were to fly beyond the outer gun line or over London. The AA guns were now authorized to consider any aircraft in these areas as hostile. Within these cleared areas Simon developed a new system of barrage fire which directed guns to fire 'curtains' of shellfire in specific locations, with these walls of fire extending over 2,500ft from top to bottom, targeted at varying heights between 5,000 and 17,000ft. With each barrage screen fixed by map reference, a coordinator directed different barrages to commence firing as an enemy aircraft progressed across the plotting table. Once held by searchlights the AA guns could switch from barrage fire to direct fire against the enemy bomber.

A diagram illustrating the new barrage fire system, producing curtains of concentrated fire in the path of incoming aircraft. Barrages were fixed on map coordinates and bore code-names such as Jig-saw, Bubbly, Knave of Hearts and Cosy Corner.

While grounded by the weather, Kleine received the news that a new squadron was about to join the attacks on London. In September 1917 the OHL transferred Riesenflugzeug Abteilung (Rfa) 501 from the Eastern Front, via Berlin, to Belgium. Riesenflugzeug Abteilung 501 had been flying early versions of the R-type – _Riesenflugzeug_ (Giant aircraft) – since August 1916.
Now the commander of Rfa 501, Hptmn Richard von Bentivegni, received R-types of the latest design, the R.VI, designed by the Zeppelin works at Staaken (as well as the single model R.IV and R.V types). Somewhat ungainly in appearance, the Staaken R.VI, with its crew of seven, was advanced for its time, boasting an enclosed cockpit for the two pilots, navigational aids including wireless telegraphy (W/T) equipment, and a large bomb load, including bombs of 100, 300 and even 1,000kg. On 22 September the first aircraft of Rfa 501 arrived at Sint-Denijs-Westrem, sharing the airfield with two flights of Kagohl 3, and began to prepare their aircraft for their first raid on London. They were not yet operational when favourable weather was forecast for Monday 24 September, the start of an intense period of bombing – later known as the Harvest Moon Offensive – during which six raids took place over a period of eight days.

'Giant' R.33, an R.VI type, designed by the Zeppelin works at Staaken. These huge four-engine aircraft, significantly larger than any Luftwaffe aircraft that attacked London in the Second World War, had a wingspan of 138ft, not far short of twice the size of a Gotha.

A Gotha G.V being loaded with two 100kg and five 50kg bombs, a total weight of 450kg – just under half a ton – a fairly typical weight for a night raid. (David Marks)

THE HARVEST MOON OFFENSIVE

Sixteen Gothas set out on the raid but three turned back with technical problems. The remaining 13 crossed the English coastline between Orfordness and Dover. The wide-ranging courses of these attacks meant the 30 RFC aircraft that took off to intercept the raid – including the first use of Biggin Hill airfield by the RFC – saw nothing, and similarly the searchlights struggled to pick out the raiders.
However, only three bombers penetrated inland to London, six contented themselves with a bombardment of Dover while the remaining four dropped their bombs on coastal targets in south Essex and north Kent. Those that did battle through to London found the new AA barrage fire system in operation. The first to approach did so over the eastern suburbs and dropped its first bomb, an incendiary, on Lodore Street, just off East India Dock Road, around 8.05pm, followed quickly by a couple more on Poplar, just north of the West India Docks. Then it crossed the Thames and dropped four bombs on Rotherhithe and Deptford before turning away and heading east. The effectiveness of the new barrage impressed those watching on the ground: 'Everyone agreed that the intensity of the bombardment from the anti-aircraft guns was the greatest yet experienced... A searchlight succeeded in finding one of the raiders... Shrapnel was bursting all around, and more than once it looked as if the aeroplane would be brought crashing to earth... after a shell had burst in front of him he banked steeply and made off in the opposite direction, followed by a violent bombardment, until he disappeared from view.' The other two Gothas came in over north London around 8.35pm, dropping a mixture of explosive and incendiary bombs on Islington before heading towards the centre of the city. An explosive bomb that fell in King's Cross Road caused much local damage and killed 13-year-old James Sharpe. Elsewhere a bomb exploded outside the Bedford Hotel on Southampton Row, Bloomsbury. A doctor, R.D. MacGregor, on his way to have dinner at the hotel, heard the bomb falling and, diving through the door, shouted a warning to a small group gathered there. Doctor MacGregor survived but the bomb killed 13, including three hotel staff, and injured 22. Damage nearby was extensive. 
The bomb made a hole in the roadway some 4ft deep, the force of the explosion blowing out all the windows in front of the building, even to the sixth storey, and shattering the glass in most of the houses on either side of the street for several hundred yards. Moments later another 50kg bomb crashed through the glass roof of the Royal Academy in Piccadilly, causing considerable damage to the building and, even before the dust had settled, the next bomb fell at the north-east corner of Green Park.

The blasted façade of the Bedford Hotel in Southampton Row. A 50kg bomb landed in the road outside the hotel at about 8.55pm on 24 September, killing 13 and injuring 22. All the windows in the hotel are smashed.

The tally for the night was 13 explosive bombs and 19 incendiaries, with a total of 14 killed and 49 injured. Reports showed that one Gotha crashed on landing in Belgium, possibly having suffered damage from AA guns on the homeward journey. However, the increased anti-aircraft barrage fire resulted in the police recording damage caused by 73 AA shells: one, landing in Cloudesley Road, Islington, injured five. Another dramatic change that night was the number of people who rushed to the nearest Underground station when the police gave the 'take cover' warning shortly before 8.00pm. The government estimated 100,000 Londoners went underground that night; the trend was to continue and grow. When the warning sounded again the following night the crowding in the Underground was beginning to be a problem, with the authorities growing concerned by the exodus from the East End. _The Times_ blamed it on the 'alien population of the East-end' who, they claimed, arrived in family groups to camp out on the platforms 'as early as 5 o'clock in the afternoon'.

The Gothas returned the following night, Tuesday 25 September. Fifteen Gothas set out to attack London, but this time only one dropped out with technical problems.
Crossing the coast between Foulness and Dover from about 7.00 to 7.45pm, most settled on targets on the north-east Kent coast, such as Margate and Folkestone, with only three penetrating to the south-eastern corner of London. One of these arrived later than the first two, dropping three bombs over Blackheath, all of which failed to explode, and one on Charlton Park before turning away in the face of the barrage. Twenty defence aircraft took off, but again all but one failed to locate the incoming aircraft. Unfortunately for those living in the area where the other two aircraft dropped their bombs, it appears no 'take cover' warning reached them and many were out in the streets when the bombs began to fall. One of the first landed in Marcia Road, just off the Old Kent Road, shortly before 8.00pm. The bomb landed in the street, smashing a gas main and wrecking about 20 houses and 'in the whole length of it there was not a pane of glass left intact'. A woman living on the top floor of one of the houses had rushed upstairs to turn off the gas when she saw through a window 'a great ball of flame falling towards us'. She remembered no more until, having crashed down to the ground floor, she heard her sister calling as helping hands dragged her carefully from the rubble. She survived with injuries just to her legs. Others in the street were not so lucky; three died and another 16 were injured, one of whom later died. Just a few yards away in Old Kent Road another bomb fell directly on Tew's Bakery. The owner of the business had constructed a bunker of heavy flour sacks in the bakehouse under his shop. The family were just sitting down to supper when, as one of the baker's daughters explained, a cry went up in the street. '"They're here!" We all made a rush to our bakehouse, as did many neighbours. The moment we were there there was a terrifying crash, and we knew the house had been struck. 
The dreadful noise and the sudden darkness; the choking dust; the screaming; the continuous tumbling of the ruin above us, we shall always remember.' It was a terrible experience, but all 17 people huddled in the bake-house survived uninjured to tell their story. The Gotha dropped further 50kg bombs close by in Mina Road, Odell Street, Coburg Road and Goldie Street. The accompanying aircraft dropped a string of 16 incendiary bombs in New Cross and Deptford, with one landing uncomfortably close to the South Metropolitan Gas Company off Old Kent Road. At about 8.15pm a Sopwith 1½ Strutter of No. 78 squadron with Capt D. J. Bell and 2nd Lt G.G. Williams on board was flying south of Brentwood when it came under attack. The enemy aircraft was flying east, presumably returning from London. The Sopwith took up the chase and for 15 minutes kept it in sight, firing frequent bursts before the target disappeared from view. The following day the press carried a story about a substantial amount of petrol falling in Essex, suggesting damage to one of the Gotha's fuel tanks. Certainly one Gotha failed to return, lost over the sea, possibly having run out of fuel. The weather then turned against Kagohl 3 once more, with the return of rain and heavy cloud. But in London the nightly exodus to the Underground continued. In fact the skies over the Belgian airfields did not begin to clear until the afternoon of Friday 28 September. Kleine gathered 25 Gothas for the raid and this time Bentivegni had two of his R-type 'Giants' ready too. It was the largest force yet assembled against London. However, serious doubts surfaced about the weather just before take-off, forcing Kleine and Bentivegni to issue orders to their crews to turn back if they encountered solid cloud cover. Fifteen did just that and only three Gothas and the two 'Giants' claimed to have dropped any bombs, but none got close to London. 
The cloud cover meant few anti-aircraft guns opened fire and those that did were just firing in the general direction of the engine noise. Oberleutnant Fritz Lorenz, commanding Kasta 14 of the _Englandgeschwader_, left a rather poetic description of his flight over England on this occasion: 'Probing in vain, the searchlights painted large yellow saucers in the clouds below us. Where a devil's cauldron of bursting shrapnel had never let a machine pass without inflicting at least some hits, there prevailed this time in this silvery solitude a peace which was like something out of a fairy tale.' But this peaceful interlude soon came to an abrupt end. The journey turned from fairy tale to nightmare for many. Three crews never returned, believed shot down by AA guns at Deal, Ramsgate and Sheppey, while five Gothas crashed in Belgium and one in Holland – a third of all attacking aircraft lost.

Yet even though no aircraft reached London there were casualties in the city. In anticipation of the incoming raid, the 'take cover' warning went out just before 8.00pm. Suddenly there was a rush by about 200 to 300 people for the entrance of the Underground at Liverpool Street Station. Four policemen on duty there tried to stem the tide, but as one reported, 'there was a panic, they lost their heads'. By the time the police had fought their way down the crowded stairs, where 'people were packed like sardines', they found a heap of eight bodies. According to a newspaper report the following day: 'Eight cases of injury were reported... chiefly of broken limbs and body injuries. One elderly woman was crushed to death and another, whose breastbone was fractured, is not expected to live. A child reported to be killed was taken away on an ambulance.'

A cutaway diagram showing a Gotha commander using his Goertz bombsight. In reality most commanders released their bombs over London without worrying too much about careful aiming.
THE ARRIVAL OF THE 'GIANTS'

Although the raid had made a large dent in the strength of Kagohl 3, the relentless Kleine ordered another attack the next evening, Saturday 29 September. He mustered just seven Gothas while Rfa 501 prepared three R.VI 'Giants' for the attack. Over England the force encountered cloud while a low ground mist hampered the Home Defence squadrons. Many observers, searchlight and gun crews were confused by the sheer noise generated by the massive four-engine 'Giants', whose existence was not yet general knowledge, submitting reports that mistook single aircraft for groups of incoming Gothas. The RNAS sent three aircraft up from the airfield at Manston while the RFC put 30 aircraft in the air, but there were only three brief sightings of hostile aircraft. German sources report that just two Gothas and one of the R.VI 'Giants' – R.39 – reached the capital. The exact courses taken by the raiders over the capital are hard to define but one headed much further west than usual, dropping a bomb on Notting Hill and two on Putney Common (now Barnes Common) in south-west London, possibly attempting to extinguish a searchlight based there. One of these killed a married couple, 47-year-old George Lyell and his wife, who were walking on the common when the bomb landed. The blast left Lyell's body 'about six yards from where the bomb fell, and the woman on the other side of the road'. A string of five 50kg bombs landed in a line from Waterloo Station to Kennington. The bomb at Waterloo caused extensive damage around the station while another just south of the station, in Mead Row, injured five people and severely damaged 14 houses. A further bomb fell on the lawn of the Bethlem Hospital Lunatic Asylum in Lambeth (now home to the Imperial War Museum); the patients and staff had a miraculous escape, as reported in the press: 'Then a remarkable thing happened. One of the patients shouted to the others to lie flat on the ground, and they did so.
Immediately there was a loud report and a crashing of glass, woodwork and stone... the bomb had fallen in the grounds of the building, not fifty yards away. Hundreds of windows had been shattered... Yet there was not a single casualty.' North of the Thames there were two main concentrations of bombs, one on an east–west line across Haggerston and Dalston where nine 12kg explosive bombs fell, and another running north-west across Islington towards Hampstead Heath.

A rare photograph of the flight crew and ground crew of 'Giant' R.39. The man in the centre of the front row is Dietrich Freiherr von Lentz, the senior pilot. The commander of R.39, von Bentivegni, is missing from the photo. (Collection DEHLA)

In Dalston two bombs fell in Shrublands Road. One exploded in a back garden killing two children, William and Ethel Lee, who were in the kitchen with their parents. In another house, at 34 Mortimer Road, two women were sheltering with nine children when a bomb exploded, again in the back garden. The blast smashed through the kitchen window killing a soldier's wife, 32-year-old Mabel Ward, and her six-year-old son Percy. However, the worst casualty list of the night came in Holloway at The Eaglet public house. On hearing the 'take cover' warning, the landlord, Edward Crouch, sent his wife and child down to the cellar to take cover. A number of customers and passers-by rushed down too but Crouch remained in the bar counting the takings. Moments later a 50kg bomb 'struck the wooden cellar flap just outside the entrance, penetrated to the cellar, and exploded forcing everything upwards'. The stunned landlord's last memory was of a terrific crash and the floor blown to pieces. The devastating explosion killed his wife and three others and injured 32. The intense anti-aircraft barrage probably forced the raiders to turn away early.
One newspaper reported that the bombardment was 'unprecedented in this country' and went on to remind its readers that 'Shrapnel cases and bullets, if fired into the air, must obviously fall somewhere, and it is utter folly for people to stand about in the streets, in open doorways, or at windows.' This was a very real danger. That night the police recorded 276 anti-aircraft shells falling on London; two landing in Chiswick High Road injured 11, one in Goldhawk Road, Shepherd's Bush, killed a man, and elsewhere across the city another 13 people were injured. Estimates suggest some 300,000 people took shelter in Underground stations that night. The attacking aircraft did not get away without loss either. One Gotha crashed in Holland, resulting in internment for the crew, and it appears likely that the Dover AA guns shot down another over the sea.

If the hard-pressed defenders hoped for a lull in the attacks to enable them to take stock they were to be disappointed; the following night – 30 September – the raiders returned again. It was the night of the full moon, the weather in London had been good and the population waited nervously, expecting another attack. Kleine mustered just 11 Gothas for the attack, and then, as usual, once airborne one quickly dropped out. The attacking aircraft came in between 6.45 and 8.15pm. The RFC flew 35 defensive sorties in response with the RNAS putting up two aircraft from Manston. According to an RFC report, 'Three pilots thought that they saw Hostile machines and two of these pilots opened fire', but without result. Once again the over-worked gun barrage prepared to deflect the attack. Out in the western sub-command of the London guns, Lt Col Alfred Rawlinson bleakly summed up his command: owing to cuts in personnel he had '... the very smallest number of men which would suffice to work the guns... these men were necessarily of indifferent physique, such as did not permit their employment at the Front...
they were hurriedly and recently trained... there were no reserves, and it was therefore necessary to keep every man, however exhausted, at his post at all costs...'

The Eaglet public house, on the corner of Seven Sisters Road and Hornsey Road. During the raid on 29 September a bomb exploded in the cellar causing over 30 casualties.

On the night of 30 September Rawlinson reported that a number of his guns each fired 'over 500 rounds'. With the barrels becoming red hot, and despite pouring cold water over them, he had to call 'cease fire' at times to assist cooling. The hot barrels also caused rounds to jam. In an attempt to get the guns firing again as soon as possible he issued the extremely dangerous non-regulation order, 'Jam another round in behind it and fire it out'. According to one newspaper account, 'it was just half past seven when the distant booming of the guns was heard' and the reporter watched and listened as, 'Closer and closer came the noise of guns until it developed into a veritable roar.' The barrage fire was intense and threatening to the attacking crews. Reports indicate that only six aircraft reached London that night and their stay over the capital was brief and generally ineffective; the police and fire service recorded only 22 explosive and 14 incendiary bombs. Most of the damage occurred in East London where bombs fell in Wanstead, Poplar, Plaistow and Barking. At Fairfoot Road, Bromley-by-Bow, 'a narrow thoroughfare of two-storeyed residences', a bomb fell on No. 3, completely demolishing it, killing an 80-year-old man, and injuring a sailor on leave, his wife, another woman and a child. It also damaged another 12 houses in the road. About three-quarters of a mile to the south, a bomb on Southill Street, Poplar, injured nine and two bombs exploded around the Midland Railway cleaning sheds at Durban Road, West Ham, damaging three locomotives.
In north London a string of bombs fell across Archway and Highgate with the last three falling in Parliament Fields where they damaged a cricket pavilion. Finally, in south-east London two explosive and two incendiary bombs fell in and around the Woolwich Royal Dockyard, with most damage occurring around Trinity Road. In spite of the intensity of the aerial barrage – just over 14,000 shells fired over London and south-east England (9,000 by LADA) at the ten raiding aircraft – and the claims of a Dover gun crew to have brought down a homeward-bound Gotha, all aircraft appear to have returned safely. Total casualties of the raid were one killed and 19 injured by bombs, with another two killed and 12 injured by AA shells. Undeterred, Kleine ordered Kagohl 3 to attack again the next night, Monday 1 October. In London that night the weather was fine; there was good visibility with little wind and no clouds, but a ground mist increased during the night hindering German navigation and preventing some British squadrons getting airborne. Kleine dispatched 18 Gothas but it appears six turned back. The leading aircraft crossed the coast at 6.50pm, the last not until two hours later. The RFC managed to get 18 aircraft in the air but only one pilot caught a brief glimpse of a raider. The AA guns experienced problems too. The constant firing over the last few days meant many were running short of ammunition and others – with a lifespan estimated at 1,500 rounds – were coming to the end of their usefulness. To preserve them as long as possible an order restricted each burst of barrage fire that night to last no longer than one minute. A combination of the barrage fire and ground mist meant that perhaps only six Gothas arrived over London, dropping bombs between 8.00 and 10.00pm. 
The first bomb fell in north London near the Edmonton Gas Works, followed by one that fell directly in the Serpentine in Hyde Park, then four 50kg bombs landed in Belgravia and Pimlico close to Victoria Station. One of these bombs, dropping in Glamorgan Street, claimed the lives of four friends sheltering from the raid. Frederick Hanton and Leo Fitzgerald, both 18, and George Fennimore and Henry Greenway, both 17, all played for the same football team and all died in the blast, killed by splinters from the bomb. A few bombs landed in the Highbury–Finsbury Park area; one in Canning Road, Highbury, killed Harriet Sears, a 78-year-old woman, and damaged 36 houses. The most damage though was concentrated within a few streets in Shoreditch, between Haggerston and Hoxton. In a matter of seconds 16 50kg bombs fell on a north–south line parallel with the Kingsland Road. The police records show that these bombs damaged about 770 houses (damage ranging from houses demolished to windows smashed); they also killed four people living in Hows Street and injured eight in Maria Street, six in Caesar Street, two in both Laburnum Street and Pearson Street, and one in Nichol Square. Shortly after 10.00pm the last raider departed the skies over London and 'the firing practically ceased, only an occasional distant boom of the guns being heard, and all was quiet at 10.20.' It was of course anything but a quiet night for those left searching for casualties amongst the piles of rubble that had moments before been homes. Total losses in London that night amounted to 11 killed and 41 injured, with material damage in and around the capital estimated at £44,500. Although Londoners could not know it at the time, the raid of 1 October 1917 marked the end of the Harvest Moon offensive. A dramatic change in the weather put a halt to further raids by Kagohl 3 for the next four weeks. 
Five raids had reached London in the eight days of the offensive, yet from a German viewpoint the results were disappointing; the authorities recorded 151 bombs in London (94 explosive and 57 incendiary) with material damage estimated at £117,773 and casualties confirmed at 50 killed and 229 injured. However, the raids were producing some of the effects outlined in the orders for Operation _Türkenkreuz_; production of munitions at Woolwich Arsenal fell significantly during the raids: on the night of 24 September the production of .303 rifle ammunition fell by 84 per cent and the following night it was down by 77 per cent. And many Londoners, a great number of whom now regularly slept on crowded and insanitary station platforms, were suffering from stress and shattered nerves.

Caesar Street (now Nazrul Street), near Kingsland Road, suffered badly on the night of 1 October. Three 50kg bombs fell on the street, demolishing numbers 21, 23 and 41, causing major damage to 11 houses and lesser damage to 40 others.

A BRIEF RESPITE

In this welcome lull, efforts to improve the ammunition supply to the AA guns were successful and the poor condition of the guns received attention; each month 20 worn-out guns were scheduled for relining. In addition, the production output of 3in 20cwt AA guns for October, earmarked for arming merchant ships, was reassigned to the London defences, while local authorities began requisitioning buildings with suitable basements as shelters and the press were told to moderate their reports of the bombing. The first two of the proposed balloon aprons, approved to cover the approaches to London from Tottenham in the north around the east via Wanstead, Ilford, Barking and Plumstead to Lewisham, south-east of the city, were in operation in early October. Both were in Essex, one about a mile south-east of Barking and the other about 2 miles east of Ilford.
To aid sound detection, experiments were under way with a 'sound reflector' cut into the chalk cliffs near Dover – first used during the night of the 1 October raid. Other experiments using pairs of horns fixed on vertical and horizontal arms – sound locators – proved that when accurately adjusted they could give a trained operator an indication of the height, direction and distance of incoming aircraft. And after much public pressure the government authorized a committee to investigate retaliatory bombing of German towns and cities. The RFC and RNAS were already bombing the Kagohl 3 airfields in Belgium and at the end of September these nagging attacks on Sint-Denijs-Westrem and Melle-Gontrode forced Kleine to redistribute the _Kasta_ around the Ghent airfields. On 4 October, in a brief respite from the stresses of command, Kleine received the Pour le Mérite in recognition of his recent London raids. Meanwhile, while Kleine waited for a forecast of good weather, the pilots of the RFC continued to familiarize themselves with the complexities of flying their latest fighter aircraft at night.

Balloon apron cables extended for 1,000 yards, held aloft by three balloons. From the cable, 1,000ft-long steel wires hung down, 25 yards apart, their purpose being to force attacking aircraft up to a predictable height where the AA guns could concentrate their fire, as well as presenting an obstacle to incoming raiders.

THE BOMBERS RETURN

With an improvement in the weather, Kleine prepared the _Englandgeschwader_ for an attack on the night of 31 October, the day after the full moon. He mustered 22 Gothas, and this time they all reached England. Kleine hoped the staggered take-off pattern would ensure a constant flow of raiders over London for a three-hour period, and with half the bombs loaded this time being incendiaries he hoped to start major fires all over the city.
The first aircraft crossed the Kent coast at about 10.45pm, the rest came inland, singly or in pairs, over the next two-and-a-half hours. Many, however, were pushed north by crosswinds, abandoned London as a target and attacked towns in Kent instead. German reports claim ten continued to the capital but there were only three main areas that were bombed, suggesting that three aircraft reached south London with perhaps another two getting as far as Erith on the south-eastern approaches. Certainly some 11 explosive and 20 incendiary bombs fell on Erith and neighbouring Slade Green at around 11.45pm – of these, two of the explosive bombs and six of the incendiaries were duds. Over London clouds began gathering at about 10,000ft, hindering the searchlights and guns – this resulted in the LADA guns firing only 2,000 shells. In the city '... the warning was soon followed by the report of distant gunfire, heavy, rapid and muffled. Presently nearer guns joined in the bombardment, and then for nearly a couple of hours sleep was impossible.' An attack developed over the Isle of Dogs at about 12.45am, possibly by two aircraft, when a bomb that dropped on Maria Street, just off West Ferry Road, damaged about 100 houses. Further bombs fell on Greenwich Park, where an incendiary just missed the Royal Observatory and three others fell between the entrance to the Blackwall Tunnel and South Metropolitan Gas Works, but two of them, both incendiaries, failed to ignite. Another incendiary fell on the works of a paint company based on the Thames at Charlton where it burnt out a storeroom, before eight explosive bombs dropped harmlessly over the Belvedere marshes as the aircraft headed home. The final attack of the night developed over Tooting in south-west London at about 1.30am, marked by a steady string of 13 explosive bombs between there and Streatham. In Crockerton Road a bomb killed two people standing in an open doorway and injured two others. 
Half a mile further on, in Romberg Road, three died, one of them 13-year-old Boy Scout Alfred Page, killed while waiting to go out and sound the 'all clear' on his bugle. The bomb also killed his father and injured a woman and two children. An eyewitness recalled that 'the roof of the house, the walls, and the furniture were reduced to a chaotic mass of brick and plaster, wood and iron'. From Streatham the aircraft appears to have headed out in a north-easterly direction, dropping bombs on Deptford, Surrey Docks, Millwall Docks and Plaistow as it went.

For Kleine, however, the results were again disappointing. Although 22 aircraft had reached England, only 40 explosive and 37 incendiaries (12 of which failed to ignite) were recorded in London, amounting to damage calculated at just £9,536, of which £2,000 was on outlying Erith. Casualties were minimal too: eight killed and 19 injured (five of these by AA shells). And Kleine's disappointment did not end there. Back in Belgium the returning bomber crews found their arrival coinciding with the appearance of a rolling bank of fog. A returning Gotha circled the airfield and later one of the crew recalled that '... the fog showed no sign of thinning, but staying airborne for much longer was impossible; our fuel reserves were almost exhausted... We sank into the fog, with no feeling for our positioning in the air, dropping into the unknown... Death lurked below us, ready to pounce. Seconds lasted for eternities, then: There! The ground! We were safe!' Other crews were not so fortunate; five crashed and wrecked their aircraft attempting landings in the fog. Bad weather meant it would be over a month before Kleine could try for London again. While he waited for a gap in the weather, Kleine concentrated on intensive training for his crews; many new men had arrived and were struggling with the demands and stresses of the bombing campaign.
In London the population settled into the rhythm of the moon cycle, linking the arrival of the full moon with a return of the raiders, and as the time of the next full moon drew closer – 28 November – Londoners turned once more to the safety of the Underground stations. But good weather did not coincide and Kleine's men did not come. The moon was in its last quarter when Kleine received news of a break in the weather – meaning a very dark night by which to navigate. However, he grasped the opportunity and readied Kagohl 3, and for only the second time Rfa 501 joined the raid, with two Staaken 'Giants'. Disappointed by the previous raid's failure to set London burning, this time the great majority of bombs loaded were incendiaries.

THE FAILURE OF THE FIRESTORM

The night of 5/6 December 1917 was freezing on the ground in London – there was frost and ice on windows – and in the air the cold must have been almost unbearable. In Belgium 19 Gothas and two 'Giants' set course for England; three Gothas turned back early. The first crossed the Kent coastline at about 1.30am on 6 December and many of the raiding force targeted Kent with their bombs, including one of the 'Giants'. Later the other 'Giant' came inland but it too shied away from London. Although taken by surprise with the raid developing in the early hours of a dark morning, the RFC still put up 34 aircraft, including No. 39 Squadron's two new high-performance Bristol Fighters, but no pilots located any enemy aircraft. The first of the six Gothas to penetrate the London defences approached the outer barrage at about 4.30am. When the first bombs began to fall at around 4.45am most Londoners were still asleep in their beds. About 40 incendiaries rained down across Westminster and Chelsea but just over half caused no damage at all, while the rest caused only minor damage.
Two other curving, almost parallel, lines of incendiaries were dropped: one from Shaftesbury Avenue over Bloomsbury and Clerkenwell towards Hoxton, Bethnal Green and Mile End, while the other followed a line from Somerset House in Aldwych up through Holborn, Farringdon, along Old Street, over Spitalfields and Whitechapel. These two bombing runs dropped about 90 incendiaries between them yet only three serious fires resulted. The most dramatic was just north of Liverpool Street Station, in Curtain Road at the junction with Worship Street. There a single bomb burnt out a cabinetmaker's factory and that of L. Rose and Co., producers of Rose's lime cordial. The London Fire Brigade estimated material damage caused by this fire at £45,400. A second serious conflagration occurred at 113 Whitechapel Road where the flames also engulfed a number of adjoining premises involved in the clothes trade, causing damage estimated at £16,385. The third fire was at the Acorn Works in Henry Street (now Roger Street) off Gray's Inn Road. A bomb here set alight a range of buildings, recording damage estimated at £13,500. Other bombing runs took place over south-west London (about 54 bombs: three explosive and 51 incendiary) in Lambeth, Kennington and Battersea as well as Clapham, Brixton and Balham, while in south-east London 68 bombs (nine explosive and 59 incendiaries) fell in Lewisham, Brockley, Sydenham, Dulwich and Lee. Despite dropping over 120 bombs south of the Thames the results were extremely limited. A fire at the Sunnybank Laundry by Vauxhall Park caused £2,000 of damage to the premises. Explosive bombs caused damage to a tenement in Burgoyne Road, Stockwell, while another landed in a garden in Paradise Road, close to Stockwell station, injuring three children. The most deadly blast took place when a 50kg explosive bomb dropped in College Road, Dulwich. 
Here, at a property owned by the British Red Cross Society, the bomb exploded in a room where the caretaker's wife, Edith Howie, and her 13-year-old niece were sleeping, killing both. Her husband, who was in the kitchen at the time, reported that the 'force of the bomb was such as to blow my wife and the child through the roof.'

The single model R.IV-type Staaken 'Giant', the R.12, with its crew. Unlike the R.VI-type, which had four engines, the R.IV had six – two 160hp Mercedes D.III and four 220hp Benz Bz.IV – driving three propellers.

The barrage had once again given the German crews plenty to think about and for those inexperienced in the task often proved effective in turning them from their intended course. And for the first time it appears one of the London guns inflicted critical damage to a Gotha over the city. _The Times_ reported that '... a series of shell-bursts culminated in one which apparently struck a raider. Loud cheers were raised, and cries of "Got him!" The enemy machine was seen to wobble and descend slowly to the north.' Shrapnel peppered the Gotha, crewed by Lt S.R. Schulte, Lt P.W. Barnard and Vfw B. Senf, damaging the port radiator. This gradually caused the engine to overheat and, once it caught fire, the crew knew they would have to try to land. They then came under fire from a mobile AA gun at Herne Bay and also a Lewis gun based at Bekesbourne airfield, which may have scored hits, but by then the damage was already done; the Gotha crash-landed in a field near Canterbury. The crew set fire to the wreckage of their aircraft and surrendered to a local special constable.

The types of P.u.W aerial bomb available to the German air force in 1918. From left to right: 50kg, 100kg, 300kg, 1,000kg, while the soldier in the centre holds a 12.5kg bomb.

Another aircraft, a Gotha G.V that failed to reach London, had a propeller shot away by AA gunfire while over Canvey Island, Essex. Looking for somewhere to land, the pilot, Gemeiner J.
Rzechtalski, steered towards the lights of Rochford airfield but they clipped a tree on their approach and crashed on a nearby golf course. The pilot and his fellow crew members, Lt R. Wessells and Vizefeldwebel O. Jakobs, crawled from the wreckage into the arms of their surprised captors. Later, when a group of officers was inspecting the wreck, they lost a valuable prize: a signal pistol picked up by one of them went off accidentally and set the petrol-soaked wreck ablaze.

Another Gotha failed to return, presumed forced down and lost at sea, while two more limped back and crash-landed in Belgium; a final aircraft crashed as it landed at its home airfield. Kleine lost six of his 16 attacking aircraft, yet, despite dropping over 260 incendiary and 13 explosive bombs, total damage in the London area was estimated at £92,303 of which about half occurred at one site. The bombs killed two civilians and injured seven, while falling AA shells killed a man in Wanstead and injured eight others across London.

Kleine's great hope to set London ablaze had failed. The problem was that so many of the incendiary bombs fell in roadways or gardens where they burnt out, and those that did ignite could be extinguished by a determined person with water or sand if they could get to them early enough; others just failed to ignite. Ten years later, Major Hilmer von Bülow, a historian of the German air force, wrote:

A great deal of time was spent over the design of these incendiary bombs, on whose effect on the densely populated London area such high hopes were based. The bomb was a complete failure. During the two night raids on England, on the 31st of October and the 6th of December, 1917, large numbers of these bombs were dropped, both times with no success. The sound idea of creating panic and disorder by numbers of fires came to nothing owing to the inadequacy of the material employed.
In Germany technicians were working on a new foolproof incendiary bomb – the Elektron bomb – although it would not be ready for deployment until August 1918. But Hptmn Rudolph Kleine was not to see it developed. Six days after the 6 December raid, Kleine led Kagohl 3 on a raid against British encampments near Ypres. Pounced upon by a patrol of No. 1 Squadron, RFC, Kleine's crew lost the battle; later a German soldier discovered Kleine's body on the ground, his Pour le Mérite still around his neck. The loss of Kleine did nothing to help the failing morale of Kagohl 3, already struggling with the constant attrition amongst its crews. Temporary command passed to Oblt Richard Walter, Kagohl 3's senior flight commander.

The wreckage of one of the two Gothas shot down in the early hours of 6 December 1917. One landed at Sturry near Canterbury, the other crashed on a golf course while attempting an emergency landing at Rochford airfield. All crew members survived.

A SUCCESS IN THE SKY

On Tuesday 18 December, with Walter less than a week in command, forecasters alerted him to a break in the weather and, although his predecessors had relied on the light of the full moon, he took the opportunity and launched his first attack on this dark night. That night Kagohl 3 did so under a new designation: Bombengeschwader 3 der OHL (Bogohl 3). In London a thin mist hung over the Thames, with the moon described as a 'thin sickle clear cut in an inky sky'. Fifteen Gothas set out on the raid with two dropping out before reaching England. On this occasion a lone Staaken 'Giant' – the R.12, the single R.IV type – also joined the raiding force. Reports estimate that six Gothas evaded the barrages and made it through to London, where bombing runs took place between 7.15 and 8.30pm. R.12 reached the capital later, just after 9.00pm.
With the unanticipated appearance of enemy aircraft on a dark night the warning system was caught off guard and notification of the imminent raid was received too late to send out warnings. The first bomb appears to have fallen on 187 Westminster Bridge Road, followed moments later by another on the Victoria Embankment, close to Cleopatra's Needle, just a short distance from where a previous bomb fell on 4 September. The bomb killed three women and Henry King, a 38-year-old special constable, as they stood at a tram stop. An incendiary bomb landed on Murdoch's piano manufacturers at 91 Farringdon Road causing a huge fire. A further cluster fell close to the junction of King's Cross Road and Pentonville Road at about 7.33pm, damaging over 120 houses, killing two children and injuring 22 civilians and a policeman before the aircraft flew out over Hackney Downs, where one final bomb dropped at about 7.45pm.

A second aircraft followed a similar line, dropping bombs around Temple, Chancery Lane, Lincoln's Inn, Gray's Inn and ending the run over Kentish Town where it dropped three explosive bombs. Other bombs fell in Goswell Road, Aldersgate Street and a couple in Whitechapel. South of the Thames, a small cluster fell in Bermondsey and Walworth, with most damage occurring in Spa Road, Bermondsey, where four explosive bombs fell close together. Two of these caused extensive damage to buildings owned by the Salvation Army where people took shelter during the raids. A Mrs Gibbons who lived in the road rushed there with her five children and waited until '... a loud explosion occurred and the lights went out; the suspense was awful as we waited expecting the roof to fall in. In the midst of the confusion a voice shouted, "No lights! I'll shoot the first man to light a match!" We afterwards learned that a gas main had burst.'

The engineers' compartment in the nose of 'Giant' R.12, containing the two 160hp Mercedes engines.
Unlike the R.VI-types, the R.IV had an open cockpit for the pilot which can be seen on the left.

Commander of the Gotha that bombed Bermondsey was Oblt G. von Stachelsky, with Lt Friedrich Ketelsen and Gefreiter A. Weissmann as crew. It flew in over Essex where Capt G.W. Murlis-Green, the commanding officer of No. 44 Squadron, closed in to attack, attracted by searchlights and the Gotha's two exhaust flares. His first attack failed when the muzzle flash from his guns temporarily blinded him. Ketelsen flew on and moments later Stachelsky began releasing his bombs over Bermondsey. With his night vision restored, Murlis-Green made two more attempts to get into a position to attack, but although he reported his tracers entering the Gotha's fuselage, both times his muzzle-flash forced him away. Every time he attacked, caught in searchlights, he was targeted by Stachelsky with the front machine gun, but he commented that his adversary's 'tracers were always very wide of the mark'. He then closed in for a fourth attack; this time there was no searchlight. He emptied the rest of his ammunition drum into the Gotha at which point it dived steeply in front of him, and as he turned to get out of the way, he was caught in the slipstream, which sent his Sopwith Camel into a spin. By the time he regained control the Gotha had disappeared.

But the bomber was in trouble. Bullets had damaged the starboard engine, which finally burst into flames halfway back to the coast. Ketelsen hoped to coax the Gotha back on one engine but it soon became clear that was impossible and he ditched in the sea off the coast at Folkestone at about 9.00pm. An armed trawler prepared to pick up the crew and with Stachelsky and Weissmann already safely on board, Ketelsen slipped from his precarious position on the upper wing and drowned.

While this drama played out, the final attack of the night began as 'Giant' R.12 flew across London and dropped a huge 300kg bomb on Lyall Street, Belgravia.
The bomb landed in the roadway gouging a great crater 30 by 20 by 7ft deep. It damaged gas and water mains and about 20 houses close by, including breaking the windows of the Russian Embassy, but there were no casualties. It appears that the rest of the bomb load were incendiaries and these fell across Belgravia, Pimlico, Lambeth – where four bombs fell close to Lambeth Palace, residence of the Archbishop of Canterbury – and three dropped on or around Southwark Cathedral, but this resulted only in slight damage to the roof. The final incendiary bomb appears to have fallen on Billingsgate Fish Market where it caused a small fire.

A total of 43 explosive and 39 incendiary bombs were recorded falling on London, causing material damage estimated at £225,000, the largest single total since the Zeppelin raid of 8/9 September 1915 (£530,000). The final London casualty checks recorded 13 killed and 79 injured. But again, Bogohl 3 sustained heavy losses. With one Gotha confirmed shot down it was clear that the British defences were beginning to come to terms with the night-bomber raids. And back in Belgium the mournful toll continued: two Gothas were lost when they burst into flames on landing and five others sustained damage.

PREPARATIONS FOR A NEW YEAR

Yet Ashmore was not complacent and demanded improvements. The attack by Murlis-Green had been the first recognized successful aerial engagement since the night raids began – one success from countless sorties. Two new Home Defence squadrons were forming to join the roster: Nos 141 and 143. The balloon apron was being extended although it never comprised the 20 sections originally envisaged; in the end there were just ten. Improvements in aerial gunnery followed: the new Neame gunsight included an illuminated ring that a Gotha filled at 100 yards' range and work was under way to eliminate the muzzle-flash causing pilots' temporary loss of night vision.
A new bullet became available too for use in the Lewis gun – the RTS (Richard Threlfall and Sons) with explosive and incendiary capabilities. The aircraft available for Home Defence improved too as more high-performance types such as the Sopwith Camel, Sopwith Pup, SE5a and the two-seater Bristol F.2B – the Bristol Fighter or 'Brisfit' – joined the squadrons, all capable opponents for the Gothas and 'Giants'.

The ground defences were overhauled too. A new system of anti-aircraft barrage fire superseded 'curtain' fire. The 'polygon' barrage aimed to surround enemy aircraft with shell bursts instead of presenting a line of fire across their path. Other improvements saw searchlights regrouped and their command system redefined while the observer posts of the Observer Corps, except those on the coast, passed to police control in December 1917. This was in reaction to concerns that those manning them consisted mainly of soldiers unfit for service overseas who in many cases lacked the alertness required for this vital role in the defence system.

Demands for more aircraft followed too. At this point estimates showed that 89 day-fighters and 69 night-fighters defended London – although as some aircraft appeared in both sections the efficient strength was about 100 aircraft.

Then, one final change followed. The raid on 18 December had caught the defences off guard, preventing the issue of an effective warning to the public. Angry scenes followed and in response the government granted the extension of the use of warning maroons. Previously authorized to warn only of incoming daylight raids, now they could alert the public to the approach of hostile aircraft up to 11.00pm, although later alarms were permitted in instances where the police did not have enough warning to tour the streets with their placards.

Breakdown of VI (Home Defence) Brigade, RFC.

While all this was under way Bogohl 3 and Rfa 501 waited impatiently for another chance to strike at London.
However, London experienced an extremely cold January with thick blankets of fog wrapped protectively around the city. Back in Germany a new massive 1,000kg explosive bomb was ready, although only one 'Giant' was adapted to carry it, and intensive work continued to perfect the new Elektron incendiary bomb.

Damage caused by the 100kg bomb that fell on Savoy Mansions, close to Victoria Embankment, early on 29 January. The Air Board had occupied the premises until the Air Ministry replaced that organization at the beginning of the month.

THE 1918 BOMBER RAIDS

THE LONG ACRE TRAGEDY

At last the sky looked clear again, and on Monday 28 January 1918 preparations for the first raid of the year were in full swing. But a change was coming and, after 13 Gothas had taken off in their now-usual staggered pattern, fog closed in around the Belgian airfields and prevented the rest from following. The fog extended far out to sea too, forcing six of the Gothas to turn back. Two 'Giants' joined the raid, but one soon turned back with engine trouble. The Gothas crossed the coastline between 7.55 and 8.25pm, with just three attacking London. The other four settled for the less risky option of attacking coastal towns in north-east Kent.

The night sky was clear and bright over London when, at about 8.00pm, the night-time air raid warning maroons exploded in the sky for the first time. In Shoreditch big queues were building up outside two music halls and a cinema for the evening performances. These sudden aerial explosions, which many presumed to be German bombs, took the crowds by surprise, causing a rush towards Bishopsgate railway goods yard, which served as a vast air raid shelter. However, as the mass of people struggled to push through the narrow gates, panic set in. When order was finally restored the casualty list recorded 14 killed and 12 injured, crushed and trampled in the press. Another panic, at Mile End railway station, resulted in injuries to two women.
The first Gotha appeared over east London at about 8.45pm, having weaved its way through the defensive barrage and released a series of explosive bombs on Poplar, Limehouse and Stepney before passing over Shadwell and across the Thames by Cannon Street Station. It resumed bombing over Vauxhall where four bombs killed three men and injured three men, four women and three children.

Twenty minutes later a second Gotha appeared over London, dropping its first bomb at about 9.15pm in a garden in Gore Road, south Hackney, damaging eight houses. The next two bombs fell close together in Holborn causing considerable damage to a printworks before the aircraft turned northwards, dropping bombs close to Euston, King's Cross and St Pancras railway stations, with the last bomb dropping at 9.30pm.

For twenty minutes all was quiet but then, at about 9.50pm, a Gotha G.V, crewed by Lt Friedrich von Thomsen, and Unteroffiziere Karl Ziegler and Walter Heiden, appeared over north-west London and quickly unloaded six bombs on a curving line along Belsize Road towards Maida Vale, before turning eastwards and setting course for home. With a clear sky and a definite improvement in the coordination of London's defences, searchlights picked up the returning Gotha and, as it approached Romford, two No. 44 Squadron Sopwith Camels, piloted by Capt George Hackwill and 2nd Lt Charles Banks, observed its progress and turned to engage. A tremendous running battle developed between the three aircraft as the Camels swooped in to attack and then withdrew again, recovered their positions before attacking again – the whole drama watched closely from below. Then, as Banks turned away with mechanical problems, Hackwill made a fresh attack and this time he met with success – the Gotha shuddered, flames spread and the aircraft went down, crashing at Frund's Farm at Wickford in Essex.
After this last Gotha turned away from London, just before 10.00pm, the guns fell silent over the city, and, although the all-clear did not sound, many took advantage of the lull and left the safety of the shelters to make their way home. However, London's ordeal was not yet over. R.12, the single R.IV-type Staaken 'Giant', crossed the coastline at about 10.25pm and after circling over Suffolk for some time was now heading towards London. Near Harlow a Bristol Fighter from No. 39 Squadron, flown by Lt J.G. Goodyear and 1st AM W.T. Merchant, attacked R.12 and, after a few ineffectual exchanges, bullets from one of R.12's six or seven machine guns splattered along the Brisfit wounding Merchant in the arm and smashing the main petrol tank. With his engine stopped Goodyear turned away and expertly glided down to North Weald airfield to make a perfect landing.

The wreckage of Gotha G.V 938/16, brought down at Frund's Farm at Wickford in Essex on 28 January, following repeated attacks by two Sopwith Camels of No. 44 Squadron. All three of the crew died.

Undeterred, R.12 continued towards London, approaching the city at about 12.15am. The 'Giant' encountered heavy barrage fire but dropped its first bombs on Bethnal Green and Spitalfields, killing one and injuring 18, these bombs also demolishing three houses and damaging over 300. R.12 then crossed the Thames and began turning until it recrossed the river by Waterloo Bridge where a bomb dropped in the water. The next smashed into Savoy Mansions causing considerable damage to the building; moments later bombs landed in the Flower Market at Covent Garden, Long Acre, Bedford Place and Hatton Garden before R.12 dropped two final bombs on Bethnal Green and set course for home. But it was the bomb that fell in Long Acre that left the most traumatic mark on London that night.
The basement of Odhams Printing Works, a four-storey building in Long Acre with 10in thick concrete floors on the two lower levels, was an official air raid shelter. People started arriving just after 8.00pm when the maroons fired their warning. The bomb dropped by R.12 was a massive 300kg of high explosive; it missed the building but smashed through the pavement and exploded in one of the basement rooms. The blast shook the foundations and fire quickly spread through huge rolls of newsprint stored there. Some of those sheltering in the basement stumbled, bewildered, from the building as fire crews, policemen, ambulances and soldiers rushed to help. One woman, haunted by what she witnessed, recalled that 'there were shrieks and cries and blood and shattered walls and burning wood and bodies stretched on the floors'.

The devastation caused by the 300kg bomb dropped on Odhams Printing Works, Long Acre – an official air raid shelter. One of the huge printing presses has fallen through the floor and rolls of paper hang precariously on the upper floors. The tally of 38 killed and 85 wounded was the most caused in London by a single bomb throughout the war.

But as the rescuers began to pull people from the rubble, one of the outer walls gave way, collapsing inwards, and the weight falling on the heavy printing presses forced the floors to collapse and crush down on the basement. A boy, J. Sullivan, who had been in the shelter playing with two friends, was knocked unconscious by the first blast. His recollections tell of some of the horrors encountered by those who survived: '... when I regained my senses it was like a nightmare. Everything seemed to be alight and falling on me. I was pinned to the ground with a piece of machine across my legs. My two playmates were missing and no trace was ever found of them. I can vividly remember women and children, bleeding and burning, lying near me, and one woman with her dress blazing actually ran over me.'
The devastation was immense and the rubble so extensive that it was not until March, some six weeks later, that the last two bodies were recovered. The final toll was 38 killed (nine men, 19 women and ten children) and 85 injured (43 men, 28 women and 14 children), the most casualties caused by a single bomb on London during the war. Total casualties for the raid on the city amounted to 65 killed and 159 injured (nine of which were caused by anti-aircraft shells). As well as the Gotha shot down in Essex, Bogohl 3 also lost another four aircraft in landing accidents.

THE NIGHT OF THE 'GIANTS'

The following night, Tuesday 29 January, without support from Bogohl 3, Rfa 501 launched four of its 'Giants' against London alone. Only three – all of the R.VI-type – reached England: R.25, R.26 and R.39. The last to cross the coastline was R.26, but having developed engine problems over Essex it returned to base. The first of the two remaining aircraft, R.39, came inland at about 10.05pm and encountered Capt Arthur Dennis, No. 37 Squadron, flying a BE12b. Dennis, flying at 12,000ft, moved to 'fairly close range' and attacked. But after a furious exchange of fire, his aircraft became caught up in the 'Giant's' powerful slipstream and he lost sight of his adversary.

R.39 continued towards London after the engagement but it may have lost its bearings as it was over north-west London before it eventually turned southwards. Then, following a tortuous course over the south-western suburbs it appeared over the Old Deer Park, Richmond and Syon Park at about 11.30pm, dropping 16 incendiary and two explosive bombs which caused little or no damage. R.39 then turned east and began dropping an extended string of explosive bombs. The first, tragically, demolished a house in Whitestile Road, Brentford. George Bentley was on his way home when he '... saw an aeroplane caught in the beam of a searchlight.
At the same moment a man a few yards in front of me dived to the ground and shouted to me to lie down, which I did... There were three deafening thuds and flashes.'

THE FIRST GOTHA SHOT DOWN ON BRITISH SOIL, MONDAY 28 JANUARY 1918

Having completed a bombing run over north-west London, a Gotha G.V – shown in the irregular four-coloured hand-painted night camouflage pattern – crewed by Lt Friedrich von Thomsen, Uffz Karl Ziegler and Walter Heiden, turned for home. Over Essex two Sopwith Camel pilots of No. 44 Squadron, Capt George Hackwill and 2nd Lt Charles Banks, observed the Gotha and turned to attack. Many on the ground watched the combat develop at about 10,000ft. One report told how one of the Camels swooped from above and took up a position behind and below the Gotha on its left, but probably not more than 25 yards away. The other Camel got under the Gotha's tail and a '... Tremendous machine-gun fire broke out immediately from all three machines.' The Gotha pilot tried to shake off his pursuers 'but the two British pilots kept a grip on him.' The Gotha 'fought hard, but the British machines hung on, firing for all they were worth'. This running fight continued for about 10 miles. Banks then experienced some mechanical problems and turned away, while Hackwill continued his attack. Then, as the report concluded, a burst from Hackwill hit the Gotha critically and it started to burn. The Gotha fell as a 'bright ball of flame' and crashed not far from a farmer's house. When the farmer reached the crash site the wreckage was ablaze and he reported that he 'could see by its light the charred body of a German, and two others were observed burning in the aeroplane.' For their involvement in bringing down the first Gotha on British soil, both Hackwill and Banks received the Military Cross.

This superb photo shows Staaken 'Giant' R.39 – an R.VI-type – at Scheldewindeke, with R.25 in the background.
R.39, commanded by the leader of Rfa 501, Hptmn Richard von Bentivegni, was the only aircraft raiding England adapted to carry the 1,000kg bomb. (Collection DEHLA)

Bentley rushed to his house, which had suffered in the blast, but, finding his family were uninjured, he then ran to Whitestile Road where he discovered that the bomb had hit the house of a friend, Sgt Maj Kerley, serving with the Middlesex Regiment. As he stared at the devastation he heard a groan. Bentley clambered over the rubble, then, he later recalled: 'I pulled and wriggled my way into the cellar, which was full of gas and water, and in the darkness came across a young woman, only just alive. Most of her clothes were blown off her. With help I managed to get her to the surface, but by that time she was dead.' By next morning the bodies of Sgt Maj Kerley's wife, May, their five children, aged three months to 12 years, a 22-year-old niece – Hilda Kerley – and an elderly woman lodger had all been recovered from the wreckage.

After the bombs on Brentford, R.39 dumped eight in close concentration on the Metropolitan Water Board Works by Kew Bridge on the north bank of the River Thames. These bombs caused considerable damage in the area and claimed the lives of two men at the waterworks. Another two dropped in Chiswick High Road, damaging 72 houses before the last fell in Park Road, Chiswick, after which R.39 crossed the Thames and took a homeward course south of the river. Three RFC pilots attempted to engage the returning 'Giant' but it shrugged off all their attacks.

The other 'Giant' to reach London, R.25, came inland at about 10.50pm. The crew brushed off an attack from a No. 37 Squadron BE2e and then, 20 minutes later, a Sopwith Camel of No. 44 Squadron observed R.25 near North Benfleet and attacked. The pilot, 2nd Lt R.N. Hall, tried to close in but every time he did his guns jammed. Another pilot from the squadron, 2nd Lt H.A.
Edwardes, joined the attack and fired three bursts before his guns jammed; selflessly he then switched on his fuselage light and flew above the 'Giant' to attract other pilots. R.25 'kept turning sharply to the left and right losing height' as it continued towards London. Two more No. 44 Squadron pilots attacked: 2nd Lt T.M. O'Neill and squadron commander Major Gilbert Murlis-Green. O'Neill experienced frustrating problems with his guns as he attacked before losing R.25 in the dark, while Murlis-Green, thwarted at first by the 'short and accurate bursts' of machine-gun fire aimed at him, eventually got below the 'Giant's' tail and opened fire at what he believed was 50 yards' range. However, confusion caused by the Neame gunsight meant that he later discovered the range was far greater. Pilots knew that a Gotha filled the gunsight ring at 100 yards, but little was yet known about the 'Giants'; at almost twice the size of a Gotha, a 'Giant' was much further away when it filled the ring. Murlis-Green reported later that, '... all my R.T.S. [ammunition] looked as if it was detonating on the fuselage of the hostile machine. I kept my triggers pressed and fired one complete double drum of R.T.S. and three quarters of a drum from my second gun. At any moment I expected the hostile machine to burst into flames.' But R.25 continued on its course. When Murlis-Green later discussed the incident with his pilots they informed him that his bullets were bursting prematurely, at about 100 yards, still short of the target.

This whole experience, however, must have been sobering for the crew of R.25, for next, with horror, they saw one of the balloon aprons looming up directly ahead. They turned sharply away, released their entire load of 20 explosive bombs over Wanstead shortly after midnight, where they caused negligible damage, and turned for home. Having landed safely back in Belgium the much-relieved and fortunate crew of R.25 discovered 88 bullet holes in their aircraft.
CHELSEA – THE 1,000KG BOMB

In February, Bogohl 3 welcomed back their former commander Ernst Brandenburg after a period of convalescence following his crash just over seven months earlier. Now walking with difficulty on an artificial leg, he found his former command much demoralized by the regular losses they were experiencing, particularly in landing accidents. He immediately suspended any further action by Bogohl 3 and ordered replacement aircraft to return the squadron to full strength. Therefore, when the next air raid set out on the evening of 16 February, the pilots of Rfa 501 continued the assault on London alone.

Five 'Giants' set out but, encountering strong winds, three switched to the secondary target of Dover, leaving just the more powerfully engined R.12 and R.39 to continue towards London. On board R.39 hung a single bomb; weighing 1,000kg, it was 13ft long and the heaviest type dropped from the air during the war. The two crossed the coast around 9.40pm with R.12 flying a few minutes ahead.

At about 10.15pm, R.12, commanded by Oblt Hans-Joachim von Seydlitz-Gerstenberg and piloted by Lt Götte, was approaching Woolwich in south-east London at a height of 9,500ft when suddenly before them loomed a section of the balloon apron. Götte made a desperate attempt to avoid the steel cables but his starboard wing made contact and threw R.12 dramatically to the right before it fell out of control to the left. With immense coolness Götte throttled down all engines then opened up the two port engines which allowed him, after plummeting 1,000ft, to regain control and steer away to the south-west. During those anxious moments one of the mechanics saved himself from being thrown out of his engine nacelle only by holding on to the forward exhaust manifold, severely burning his hands. The violent manoeuvres shook free two 300kg bombs which exploded in Woolwich.
One demolished a building in Artillery Place, killing five and injuring two; the other blasted a great crater in the road by the parade ground of the Royal Artillery Barracks and damaged St George's Garrison Church; it also killed a nurse and an Australian soldier. A quick inspection of R.12 showed that the encounter with the balloon apron, although no doubt terrifying for the crew, had inflicted only minor damage to the aircraft so they turned over Beckenham, offloaded their remaining eight bombs and turned for home. The bombs fell in a group near Shortlands railway station causing only minor damage in the vicinity.

The leader of Rfa 501, Richard von Bentivegni, commanded R.39, now specially adapted to carry a single massive 1,000kg bomb. He believed he dropped the bomb east of the City but it actually fell in Chelsea, on the north-east wing of the Royal Hospital, home of the Chelsea Pensioners. The bomb killed an officer of the hospital staff and four of his family, but rescuers discovered three children still alive beneath the rubble and debris of the blast. The dark moonless night aided the two 'Giants' on their way home and although 60 defensive sorties were flown that night there were only three brief sightings. All returned in one piece although R.33, one of those forced back early, limped home on one engine.

On 16 February at about 10.10pm R.39 dropped its single 1,000kg bomb on the north-east wing of the Royal Hospital, Chelsea. The bomb killed five, injured three, destroyed three buildings within the hospital grounds, severely damaged another and caused slight damage to another 200 buildings in the vicinity.

'A FINE PIECE OF SHOOTING'

Despite having only one aircraft available after the experiences of the previous night, Bentivegni ordered R.25 – one of the R.VI-types – to raid London alone on 17 February.
R.25 came inland at about 9.45pm and managed to avoid the attention of the RFC on the inward journey, but encountered a stiff barrage fire from the AA guns near Gravesend, Kent. Taking evasive action, R.25 circled around before dropping an incendiary bomb on Slade Green near Erith. Then it continued over Bexley and Eltham before dropping the first of its 19 explosive bombs, on Newstead Road, Lee, and left an evenly spaced trail of destruction right across London, with bombs falling in Hither Green, Lewisham, New Cross, Peckham, Camberwell, Southwark, New Fetter Lane in the City, Holborn and ending, devastatingly, on St Pancras Station.

Such was the accuracy of the bombing run that a British analysis of the raid was generous in its praise of the tight grouping of five bombs on the station, describing it as '... by far the most accurate and concentrated fire ever yet brought to bear on any target in London, either by day or night and was a fine piece of shooting by the man responsible for it.' This achievement is often attributed to Lt Max Borchers – however, the credit is due to Hans-Wolf Fleischhauer.

Yet beyond this professional appreciation lay the personal tragedies of the raid. The first 12 bombs caused varying levels of damage to hundreds of properties, killed a soldier home on leave in Searlees Road, Southwark, and injured eight other people. However, the situation at St Pancras was entirely different. The five bombs fell within seconds of one another on the station and adjoining Midland Hotel. One of the bombs struck a tower of the hotel sending heavy masonry crashing down through the building and three exploded on the station. The two that claimed most casualties landed either side of an archway leading through to the platforms where the double blast killed a number of people taking shelter from the raid.

On the night of 17 February 'Giant' R.25 – an R.VI type – made a solo raid on London.
The accurate concentration of five bombs on St Pancras Station brought technical appreciation from a British observer who described it as 'a fine piece of shooting'.

A nurse, one of the first rescuers to arrive, reached the archway that had taken the blast: 'About ten bodies lay there, terribly mutilated; and two or three soldiers must have been among the victims, for we found two caps, three swagger canes and a limb with a puttee on it – all that remained of them, so far as I could see.'

Others had lucky escapes. A family staying in the Midland Hotel sought shelter and eventually found themselves, with others, down in the coal bay of the hotel. Then came a terrific crash followed by darkness. One of the family recalled that '... showers of splintering glass fell around us, and coal dust fell, it seemed, by the ton. Those of us who were not lying on the ground bleeding and groaning were practically choking... How long we stayed like that I do not know.' Eventually rescued, as they staggered from the darkened coal bay a grisly spectacle confronted them: 'When we moved we seemed to be ankle-deep in broken glass; and as we left the ruins matches flickered, and in their light a ghastly sight met our eyes. Dead and injured lay everywhere... Picking our way carefully, we at last came out into the air, to see in the distance flames leaping up to the sky.' Within the station and hotel the final toll reached 20 killed (including one soldier) and 22 injured (including five soldiers, a sailor and a policeman).

The RFC had 69 aircraft in the sky searching for R.25, but the great noise generated by the aircraft's engines caused wildly conflicting accounts of numbers of enemy aircraft and their position. More than one of the aircraft on patrol found themselves subject to 'friendly fire'. Captain Cecil Lewis of No. 61 Squadron took it in his stride. In his report he wrote 'Several times I was caught in searchlight beams and over Benfleet was fired at. Shooting very good.
Burst exactly at my height (11,000 feet) and put several holes in my machine.'

Here a Bristol Fighter and SE5a (below) take part in a flying display at the Shuttleworth Collection. Captain Cecil Lewis (author of _Sagittarius Rising_), No. 61 Squadron, was flying an SE5a on 17 February 1918 when twice attacked over Essex by 'friendly fire'.

Later that patrol another RFC pilot attacked Lewis. Unfazed, he added, 'Judging the machine had made a mistake I put my machine into a spin and cleared'. The extremely cool commander of R.25 was quick to realize the costly effect his solo raid had on the British defences. The London guns alone fired off about 3,800 shells; in his report he wrote 'An attack by a single [Giant] is sufficient to alert the entire British defence system and to cause the expenditure of vast quantities of ammunition. It is seemingly from nervousness that not only anti-aircraft guns in the vicinity of the aircraft but also some 30km distant were being fired blindly into the air.' Such was the nervous state of the British defences that a full-scale false alarm took place on the following night – 18 February – with 55 defensive sorties flown by the RFC and thousands of shells blasted aimlessly into the sky. In fact, the raiders did not return for almost three weeks.

THE AURORA BOREALIS RAID

Up until now, the aircraft of Rfa 501 had occupied the airfield at Sint-Denijs-Westrem, formerly the home airfield of Kasta 13 and 14 of Kagohl 3, and still the base for Armeeflugpark Nr. 4, but on 7 March Rfa 501 moved to a new airfield at Scheldewindeke, south of Ghent. It is surprising, therefore, amidst the bustle of the move and on a moonless night, that Bentivegni ordered a raid that same evening. Six 'Giants' took off, one dropped out leaving five to carry out the raid, but only three attacked London: R.13 – the single R.V type – and two R.VI types, R.27 and R.39. Again, R.39 carried a single 1,000kg bomb. The crews found navigation difficult that night.
Hauptmann Arthur Schoeller, commanding R.27, left a fascinating report of the raid: 'We approach the coast; the night is so dark that the coastline below us is but a mere suggestion. Under us is a black abyss, no waves are seen, no lights of surface vessels flicker as we head for the Thames estuary at Margate. On our right, in the distant north, is our only light, the weak pulsating glow of the aurora borealis. Ahead of us a black nothingness.'

As the aircraft progressed clouds developed and thickened and it was only when searchlights illuminated the clouds below that Schoeller realized he was over England. Requesting wireless bearings he discovered he was south-east of London so he turned towards the city, crossed the Thames and, turning southwards over Hampstead, released his bombs just after midnight. The first two fell in the Belsize Park area, followed by three more in St John's Wood. There a bomb on New Street (now Newcourt Street) demolished two houses, killing two families, before another exploded in the road outside Lord's Cricket Ground. Here the blast killed a soldier of the Royal Horse Artillery and Lt Col F.H.A. Wollaston, Rifle Brigade, who was on leave from service in Palestine.

R.27 continued south, crossed back over the Thames, circled in the face of heavy AA fire, then, having one more devastating blow to land, dropped a 100kg bomb in Burland Road, just west of Clapham Common. A mother and daughter heard the bomb explode:

... there was a dreadful shriek through the air and a terrible thud. A big bomb had landed in the middle of the road. It took the front of four houses clean out, ours being one. My mother had to shake me to make me speak. It just seemed as if we were waiting for the end. Mess, and pandemonium; water rushing everywhere, and the smell of gas, for it had hit the main, and there was a great flame in the road.

R.27 headed home, but with the Belgian coast in sight all four engines seized.
A quick investigation revealed that the fuel lines had frozen because of 'water-contaminated gasoline'. Too late to thaw them, Schoeller realized he must crash, but thanks to the great gliding capabilities of the 'Giant' he managed to reach land. Then, using flares to illuminate the ground below, all he could see were trenches and hollows. Considering his options, Schoeller determined his best action, aware that if he hit any obstacle he risked annihilation: 'Therefore, by pulling sharply on the controls I stall the aircraft letting it fall almost vertically against the ground. With a mighty impact it hits in front of a wide ditch. The right landing gear collapses and the right lower wing shatters, but no crew member is injured.'

The terrible destruction caused by the 1,000kg bomb dropped by R.39 on Warrington Crescent, Maida Vale. Amongst the 12 killed in the blast was Lena Ford, who, in 1914, wrote the lyrics for Ivor Novello's hugely popular wartime song, 'Keep The Home Fires Burning'.

Another aircraft, one of those that did not reach London and believed to be R.36, also made an emergency landing in Belgium and was wrecked. Back in London, as R.27 had commenced its bombing run, R.39, carrying the single 1,000kg bomb, had already completed its mission. The recipient of this steel-encased metric ton of destruction was Warrington Crescent, a quiet residential street in Maida Vale, just over half a mile from Paddington Station.

The Reverend William Kilshaw, living half a mile away, was sheltering in his basement listening to 'the barking of the anti-aircraft guns' as the clock approached midnight: 'Suddenly the darkness of the room was broken in upon by a vivid flash... and a terrific roar caused the whole house to tremble. Our top window panes fell to the ground with a shattering noise... After the bombardment we cautiously went to the window, to behold a scene reminiscent of what one reads of the Great Fire of London.
The sky to the east was lurid with flames, in which dense smoke poured: the smitten district was afire.' The bomb smashed through the roof and dividing wall between Nos. 63 and 65, four-storey Victorian houses, and detonated inside. The blast destroyed the two buildings and the two adjoining. The houses, 'all solidly built, were utterly wrecked, the four houses reduced to hideous piles of wreckage'. The bomb also caused serious damage to another 20 buildings and slight damage to 400 in the surrounding area. It took a number of days before all the bodies were accounted for, the final tally reaching 12 killed and 33 injured. Amongst those killed was an American woman, 48-year-old Lena Ford; it was she who, in 1914, wrote the lyrics to the hugely popular wartime song 'Keep The Home Fires Burning'.

The third of the 'Giants' to reach London, R.13, encountered engine problems as it approached London from the north after midnight. Bombs fell in fields north of Golders Green, at Mill Hill and Whetstone. The last of these exploded at about 12.30am in a garden in Totteridge Lane causing severe damage to houses close by, killing a man and injuring three men, six women and a child. In all, 157 buildings were damaged before R.13 turned away and the crew nursed it home on three of its five engines. Although the RFC flew 42 defensive sorties, fewer than on previous raids due to mist over some of the more eastern airfields, there were no sightings of Rfa 501's raiding aircraft. The evening ended tragically for the RFC when Capt Alex Kynoch, No. 37 Squadron, flying a BE12, and Capt Clifford Stroud, No. 61 Squadron, in an SE5a, collided over Rayleigh, Essex, resulting in the deaths of both pilots.

THE AGONY OF THE 'GIANTS'

London readied itself for the next raid, a raid that people now feared could come at any stage of the moon. However, the skies over London remained empty for the rest of the week, then the month, and the next month too.
The reason for the lack of enemy activity over London was the launch of the German army's massive spring offensive, the 'Kaiserschlacht', on the Western Front on 21 March 1918; the army needed all squadrons to support the great advance. It was not until May, after the push to the Channel ports ground to a halt, that attention turned back to London. Then, the largest raid of the war set out for the capital, but not before another night of great loss for the German raiders.

Bentivegni planned a raid with Rfa 501 for the night of 9 May and Brandenburg was keen to launch Bogohl 3 too. However, Brandenburg's weather officer predicted heavy fog that night and advised against it. Brandenburg heeded the advice and duly informed Bentivegni, but the commander of Rfa 501, determined to the point of recklessness, ignored the advice and went ahead. The weather forecast proved accurate and with fog closing in the four 'Giants' were recalled to Scheldewindeke. Although fog now completely smothered the airfield, the returning aircraft ignored advice to fly to alternate sites and all tried to land anyway. R.32 crashed and exploded on landing and all but one of the crew were killed. R.26 flew into the ground whereupon it burst into flames, again with only one survivor, and R.29 was wrecked when it hit some trees; fortunately the crew survived. Only R.39, with Bentivegni on board, managed a successful landing.

THE WHITSUN RAID

The population of London was enjoying a pleasant Whitsun Bank Holiday weekend on Sunday 19 May 1918. Good weather and an absence of German bombers in the skies over the city for ten weeks promoted a relaxed mood. The skies over London that night were clear while 'a lazy breeze scarcely rustled the young leaves of the trees in the garden squares'. Into this peaceful night Germany launched its largest air raid of the war. While Bogohl 3 focused on bombing missions on the Western Front, Brandenburg always watched for favourable weather over England.
The report he wanted finally arrived on 19 May and he wasted no time in preparing 38 Gothas for the raid; this time two single-seater Rumpler aircraft led the way to check the weather ahead. Elsewhere, following the disaster earlier in the month, Bentivegni could add only three 'Giants' to this great aerial armada. Even so, it was London's largest air raid of the war.

Reassured that the weather ahead was clear, the first of the Gothas came inland over the north Kent coast just after 10.30pm; the last appeared around midnight. As with previous raids a number of aircraft were forced to turn back and it appears that 28 Gothas and the three 'Giants' made it inland. Reports from observer posts all over the south-east swamped LADA headquarters, stretching the telephone system to the limit. For the first time aircraft of the recently amalgamated Royal Air Force took off to oppose the raiders; they flew 88 sorties and soon the skies over Kent and Essex buzzed like a hornets' nest.

Captain Quintin Brand, No. 112 Squadron, took off at 11.15pm in a Sopwith Camel and, attracted by searchlight activity, quickly spotted the exhaust flares of a westbound Gotha over Faversham. The Gotha opened up with its machine guns as Brand closed to 50 yards and fired two 20-round bursts from his own guns in return, hitting and stopping the Gotha's starboard engine. The Gotha, crewed by Oblt R. Bartikowski and Vizefeldwebels F. Bloch and H. Heilgers, turned sharply to the north-east and attempted to evade the attack while losing height. Then Brand '... Followed E.A. [enemy aircraft] down and closed to 25 yards and fired three bursts of about 25 rounds each. E.A. burst into flames and fell to pieces'. Although the flames from the burning Gotha enveloped his Camel and scorched his face and moustache, Brand followed the burning wreck down to 3,000ft until he saw it crash on the Isle of Sheppey at 11.26pm. Brand had only been in the air for 11 minutes.
The AA guns along the Thames Estuary were pounding the skies too and it appears only 18 aircraft battled their way through the barrage, the rest repulsed by the onslaught. The Metropolitan Police recorded 72 explosive bombs over a wide area, with just a handful landing in the central area, roughly grouped as follows:

11.30pm – Catford, Sydenham, Bexley, Bexleyheath
11.40pm – Poplar, Walthamstow, Sidcup
11.45pm – St James's, Bethnal Green, Tottenham, Manor Park
11.50pm – Lewisham, Bromley
11.55pm – Old Kent Road, Rotherhithe, Kilburn
12.00am – Forest Gate, Stratford, Peckham, Bexleyheath, Chislehurst
12.10am – Hither Green, Lewisham, Lee, Islington, Kentish Town
12.15am – Kentish Town, Gospel Oak, Limehouse
12.20am – Regent's Park, Marylebone, West Ham, East Ham, Plaistow, Canning Town
12.30am – City, Shoreditch, Hackney, East Ham, Barking
12.40am – Dalston

The most casualties caused by a single bomb that night were in Sydenham, where a 100kg bomb fell in Sydenham Road. The bomb demolished two houses, which incorporated a dairy and bakery, as well as damaging 46 others, claiming the lives of 18 and injuring 14. These casualty figures include three soldiers killed and another 12 injured, all part of a motor transport depot billeted in buildings opposite where the bomb fell. A former soldier of the Royal Army Medical Corps, P. Leach, was one of the first on the scene and thought the scene that confronted him was 'like a battlefield afterwards, with the dead and dying: and there were civilians lying dead in the gutter'. Having torn up sheets to make bandages he got to work: 'I found a Sergeant Oliver with his leg crushed to pulp, and I attended to him first. Others were getting the dead out of the shops. Then I attended to one little mite of a girl who had lost both of her feet.'

THE LARGEST, AND LAST, RAID OF THE WAR, 19/20 MAY 1918

The Whitsun Bank Holiday raid proved to be the last of the war. A Bristol Fighter of No.
39 Squadron picked up one of the Gothas just after midnight, flying north of Hainault at 10,000ft. The aircraft, flown by Lt Anthony Arkell with Air Mechanic Albert Stagg as observer/gunner, was about 1,000ft higher and dived to attack. Arkell, just 19 years old, brought the Bristol to a position about 200 yards behind the Gotha under its tail to allow Stagg to fire off half a drum. Arkell then 'zoomed up' and fired a long burst from his Vickers before dropping back down to allow Stagg to engage again. Arkell reported that 'All this time Gotha was firing back with tracer.' Arkell then moved in closer, sitting under the Gotha's tail allowing Stagg to fire off two more drums of ammunition before he 'zoomed up again' and fired another long burst from his forward firing gun. The two aircraft were dropping all the time and, when at 1,500ft Stagg opened fire again, his bullets struck home and the Gotha's starboard engine burst into flames.

As the Gotha plummeted all three of its crew jumped to their deaths. The body of 27-year-old Hans Thiedke was found on an allotment in Brooks Avenue, that of Paul Sapkowiak, also aged 27, in 'a ditch some 300 yards south of the aeroplane wreckage' and the body of 20-year-old Wilhelm Schulte a quarter of a mile to the south 'in the next field on the bank of a ditch'. Nearby residents crowded into the streets to witness the final moments of the London raider. The burning wreckage lay spread over 100 yards of open ground between Roman Road and Beckton Road on the outskirts of East Ham.

The scene of devastation in Sydenham Road after the raid of 19/20 May 1918 where Delahoy's Dairy had stood. Amongst the 18 killed were five members of the Delahoy family, while next door the same bomb killed Rose Westley, aged 47, her son, a niece and her sister-in-law.

The wreckage of the Gotha brought down in East Ham at 12.20am on 20 May 1918, victim of a No.
39 Squadron Bristol Fighter, the squadron's first 'kill' since their successes against Zeppelin raiders in September/October 1916.

About 15 minutes later three bombs landed close together in Bethnal Green. Two fell on the premises of Allen & Hanbury, wholesale chemist and druggist, and one in neighbouring Corfield Street. The three bombs claimed the lives of a man and two women as well as injuring seven men, eight women and two children while causing massive damage; a third of the factory was destroyed, 14 houses seriously damaged and 227 houses suffered minor damage. Total casualties in London that night amounted to 48 killed and 172 injured. But the bombers, having spent as little time as possible over London, then faced a return journey fraught with danger. That night the LADA guns fired over 30,000 rounds skywards.

A No. 39 Squadron Bristol Fighter caught a Gotha flying at 10,000ft north of Hainault, at about 12.05am. Following a running fight the Gotha smashed into the ground on open land on the outskirts of East Ham.

Elsewhere Major F. Sowrey, No. 143 Squadron, who had previously shot down a Zeppelin in September 1916, engaged a Gotha V at about 12.25am returning from bombing Peckham and Rotherhithe. Flying an SE5a, Sowrey closed and fired off two drums of Lewis gun ammunition and, despite the Gotha's evasive tactics, closed again to open with his Vickers. But an engine stall caused a spin and by the time he recovered control Sowrey had lost sight of the Gotha – but it appears he had wounded the pilot, Vfw Albrecht Sachtler. Doubtful of reaching Belgium the crew began searching for somewhere to land when a Bristol Fighter of No. 141 Squadron, crewed by lieutenants Edward Turner and Henry Barwise, pounced on the struggling Gotha. The first burst, fired by the observer, Barwise, hit the Gotha's port engine as it was attempting to reach the illuminated landing ground at Frinsted, Kent.
Attacked again, the Gotha dived, defending itself by firing its rear gun in short bursts at the Brisfit. Then Barwise's gun jammed and, encountering engine problems, Turner pulled away and gave up the attack. However, the Gotha was now in a bad way and despite Sachtler's best efforts it crashed between Frinsted and Harrietsham at about 12.45am. Only the rear gunner, Uffz Hermann Tasche, who suffered a broken arm, survived the landing.

Another of the Gothas brought down on 19/20 May. In total three were shot down over land, one made a forced landing and two were confirmed as shot down over the sea. The small arrow indicates an unexploded 50kg bomb with its nose buried in the ground. (Colin Ablett)

A fourth Gotha met with disaster near Clacton after the pilot came down low to clear cloud cover in an attempt to establish its position. An engine problem meant he was unable to check the descent and, having unloaded the bombs, he made an emergency landing; the commander, Lt Wilhelm Rist, was killed. Anti-aircraft guns also shot down two Gothas off the coast but claims for a third went unconfirmed; one more crashed on landing as it returned to Belgium.

Quiet now returned to the skies over London. The city's population, now reassured by the very visible aerial response to this latest attack, steeled themselves for the next alarm and the defenders confidently waited for their next test. A week passed, then a month, two months then three, but the raiders did not return; in fact they never came again.

THE AFTERMATH OF THE FIRST BLITZ

THE END OF THE CAMPAIGN

For Brandenburg, losses on this Whitsun raid were high as the defences demonstrated increasing efficiency, but he and Bentivegni still planned to continue raiding England. However, on 27 May 1918, the German Army launched an attack on the Aisne and both Bogohl 3 and Rfa 501 were committed to supporting the attack. They planned two raids in July 1918 but the OHL cancelled them.
However, by August the powerful Elektron incendiary bomb, weighing only 1kg and able to be dropped in vast numbers, was ready. Plans to unleash a fearsome firestorm on Paris and London were in place and in September, with the war going against Germany, the OHL initially took the decision to launch the firebombs. The crews prepared for this desperate attack, bombs were loaded, then, at the very last minute an order arrived cancelling the raid. Since June 1918, following agitation at home, British aircraft had carried out raids on German towns and cities. Now, recognizing the end of the war was near and fearing even greater reprisals against a German civilian population whose morale was already disintegrating, the OHL cancelled the order to unleash the firestorm. For London's civilian population the war was over.

At the beginning of the war, Germany had hoped her bombing raids would break Londoners' morale and force them to demand that the government sue for peace. For the capital's population the raids – first by Zeppelins and then by bomber aircraft – undoubtedly caused sleepless nights, stress, anxiety, fear and anger, but they never induced the people to demand peace; instead, the raids brought forward strident cries for retaliation.

Although the plans to crush London's morale failed, the raids did have other intended effects on Britain's war effort. On two occasions raids caused fighting squadrons to be withdrawn from the Western Front and many new high-performance aircraft, urgently needed elsewhere, were committed to Home Defence. The raids also caused dramatic reductions in munitions production at times, and anti-aircraft guns, ammunition, searchlights and manpower were all required to maintain the defence system to combat these raids, remaining in place until the end of the war despite urgent demands for their redeployment elsewhere.

Germany had hoped her air raids would crush the morale of the civilian population but in this aim they failed.
Instead, many civilians demanded retaliation against German towns and cities and, in countless numbers, they bought 'comic' postcards to send to friends and family offering a wry view of the air raids.

THE COST OF THE WAR

Yet this came at a price for the German aircrews. As a direct consequence of planned raids on London, Germany lost seven airships (L.12, L.15, SL.11, L.32, L.33, L.31 and L.48) over Britain or her coastal waters, and 60 Gothas: 24 shot down or missing and 36 destroyed or seriously damaged by crash landings in Belgium. In addition two 'Giants' were also lost in bad landings after raids and another three suffered a similar fate after an aborted mission against London.

The airship raids on Britain claimed 557 lives and caused injuries to 1,358 men, women and children, with material damage estimated at £1.5 million (1914–18 value), with just under £1 million of this inflicted on London. Some 26 raids targeted the capital, but only nine actually reached the central target area, the others deflected by bad weather or hampered by mechanical failure. These raids killed 181 and injured 504, or 36 per cent of the total British casualties in this phase of London's first aerial war. Of the aircraft engaged in defending the capital against the airships, 15 crashed, and six pilots were lost in these bad landings.

In comparison, the Gotha and 'Giant' raids caused material damage estimated at £1.4 million, of which £1.2 million occurred in the London region. These raids killed 837 people and injured 1,991. Of these casualties, 486 deaths and 1,432 injuries occurred in London (68 per cent of total). Of the remaining losses most were inflicted on coastal towns in Kent and Essex. Of those aircraft committed to defending London, 21 were lost due to enemy action or damaged in crash landings, with the loss of five pilots and one observer.

As a weapon of war, the airship was short-lived.
At the beginning of the war they had seemed to many to be the most advanced and terrifying weapons in existence; by the end of the war, airships as long-range bombers were consigned to the pages of history books – a flawed concept, an aviation cul-de-sac. Yet even though far more of London's population suffered at the hands of the Gotha and 'Giant' bombers, the Zeppelins, or 'Zepps' as they became known to Londoners, held a terrible fascination for the civilian population. Even when under attack, people were drawn into the streets to watch the skies above as these 'gaseous monsters', as Churchill dubbed them, passed overhead, looking on in awe and horror in equal measure. Despite the passage of time, this haunting fascination remains to this day – long after the greater impact made on London by these early bombers has largely been forgotten.

LONDON'S LEGACY

Those in authority did not forget the danger presented by the bomber raids. The lessons learnt in dealing with this threat proved extremely beneficial to Britain in the long term. They made the government acutely aware of the necessity to maintain an in-depth aerial defence system. The need to realign the divergent paths taken by the RFC and RNAS led to the creation of the Royal Air Force in April 1918, and a sophisticated central operations room evolved at Spring Gardens in central London during the final year of the war. Here sat the hub of an exclusive military telephone network that relayed information from 25 sub-control centres, each in direct contact with the AA gun batteries, searchlight companies, balloon aprons, aerodromes and observer posts in its own area.
In the operations room, information received from the sub-commands was fed via telephone headsets to a team of plotters working on a large map table, who moved symbols representing enemy aircraft across the map, all overseen from a gallery above by Ashmore, commander of the London Air Defence Area, Higgins, commanding VI (Home Defence) Brigade, and a senior police representative. Ashmore was able to speak directly to the sub-commanders and Higgins to the fighter wing HQs, while the police representative had direct lines to the emergency services. According to Ashmore, the system worked very well:

From the time the observer at one of the stations in the country saw a machine over him, to the time when the counter representing it appeared on my map, was not, as a rule, more than half a minute.

It was the system that, with the addition of Radar, provided the country with its defence against the Luftwaffe when German aircraft returned to British skies in the summer of 1940.

SELECT BIBLIOGRAPHY

Castle, H.G., _Fire Over England_, London (1982)
Castle, I., _London 1914–17: The Zeppelin Menace_, Oxford (2008)
Castle, I., _London 1917–18: The Bomber Blitz_, Oxford (2010)
Castle, I., _The Zeppelin Base Raids: Germany 1914_, Oxford (2011)
Cole, C. & Cheesman, E.F., _The Air Defence of Britain, 1914–1918_, London (1984)
Fegan, T., _The 'Baby Killers' – German Air Raids on Britain in the First World War_, Barnsley (2002)
Fredette, Major R.H., _The First Battle of Britain 1917–1918_, London (1966)
Griehl, M. & Dressel, J., _Zeppelin! The German Airship Story_, London (1990)
Hanson, N., _First Blitz_, London (2008)
Hyde, A.P., _The First Blitz_, Barnsley (2002)
Jones, H.A., _The War In The Air_, Volume 3 (pub. 1931, reprinted Uckfield 2002)
Jones, H.A., _The War In The Air_, Volume 5 (pub. 1935, reprinted Uckfield 2002)
Morris, J., _German Air Raids on Britain 1914–1918_ (pub.
1925, reprinted Dallington, 1993)
Poolman, K., _Zeppelins Over England_, London (1960)
Raleigh, W., _The War In The Air_, Volume 1 (pub. 1922, reprinted Uckfield 2002)
Rawlinson, A., _The Defence of London, 1915–1918_, London (1923)
Rimmel, R.L., _Zeppelin! A Battle for Air Supremacy in World War I_, London (1984)
Robinson, D.H., _The Zeppelin in Combat_, Atglen (USA, 1994)
Stephenson, C., _Zeppelins: German Airships 1900–40_, Oxford (2004)
White, C.M., _The Gotha Summer_, London (1986)

APPENDICES

APPENDIX 1: IN TOUCH WITH LONDON'S FIRST BLITZ

Inevitably, much of London has changed in the 100 years since this first aerial conflict over the city during the First World War. The destruction caused by the Blitz of 1940–41, the V1 and V2 rockets of 1944–45, as well as the subsequent and ongoing redevelopment of the city, has erased much of early twentieth-century London. Even though many roads and buildings have disappeared from great tracts of the capital, there are still a few reminders that link us with this turbulent time if you know where to look.

The first bomb dropped on London, by Zeppelin LZ.38 on 31 May 1915, fell on 16 Alkham Road, Stoke Newington. Despite setting fire to the roof and upstairs rooms, the house still stands. Unfortunately Hackney Council erected a plaque in the 1990s to mark this historic moment, but placed it half a mile away on the wall of 31 Nevill Road, incorrectly identifying it as the first house bombed in the war – and giving the wrong date. A new plaque, however, was erected by Hackney Council in Alkham Road in May 2015, to mark the centenary of London's first air raid, and the old one removed.

Kapitänleutnant Heinrich Mathy's raid of 8/9 September 1915 has left the most indicators of its passing. A small plaque encircled by paving in one of the central lawns marks the spot where his explosive bomb landed in Queen's Square, Bloomsbury.
Moments later another bomb fell outside the Dolphin public house on the corner of Lamb's Conduit Passage and Red Lion Street. The clock in the pub stopped as the bomb exploded and remains in situ, with its hands frozen in time for many years at 10.49pm. However, in more recent times the hands have slipped to 10.40pm. Interestingly, back in 1990 the group Crosby, Stills & Nash released an album, _Live It Up_ , which featured a track called _After The Dolphin_ , the lyrics inspired by the raid that night. Further along the route a plaque on the wall of 61 Farringdon Road commemorates the destruction of that building during the raid and its subsequent rebuilding. Another plaque, on the wall of the chapel in Lincoln's Inn, records the explosion of a bomb dropped by Kptlt Breithaupt from L.15 on 13 October 1915. The bomb shattered the seventeenth-century stained glass window, and the walls below still bear the scars of the blast. Outside London, in Cuffley, Hertfordshire, a monument erected by donations from readers of a national newspaper commemorates William Leefe Robinson's deed in bringing down SL.11 on the night of 2/3 September 1916, and his subsequent death in 1918. His grave is located in All Saints Church cemetery at Harrow Weald. The bodies of all the Zeppelin crews shot down over Britain – as well as the Gotha crews – now lie in peace in the tranquil setting of the German Military Cemetery at Cannock Chase, Staffordshire. Perhaps the most poignant reminder of the First World War bomber raids is the memorial in Poplar Recreation Ground, East India Dock Road, recording the deaths of 18 young schoolchildren killed at the Upper North Street School, Poplar, when the first Gotha daylight raid took place on 13 June 1917. Another reminder of that first bomber raid is a ceramic plaque in memory of PC Alfred Smith on the wall at Postman's Park (entrance in Aldersgate Street, EC1), who died while saving the lives of a group of factory girls in Central Street, EC1. 
The twisted remains of a bomb that fell on the church of St Edmund the King and Martyr in Lombard Street, EC3, now forms a unique memorial to the second daylight raid on 7 July 1917. Although the church is now the London Centre for Spirituality, the bomb remains preserved in a glass case, located where the altar once stood. In September 1917 the Gothas switched to night-time bombing and, during the first moonlight raid on 4/5 September, a bomb landed on the Victoria Embankment, a few feet from Cleopatra's Needle. Gouges from shell fragments still scar the ancient Egyptian obelisk, plinths and right-hand sphinx. The plaque on one of the plinths incorrectly refers to the raid as the 'first raid' on London by aeroplanes; it was the first _night-time_ raid. Later that month, on the night of 24 September, a bomb landed in the roadway in Southampton Row, WC2 outside the Bedford Hotel. The hotel has been completely rebuilt in the intervening years but a framed plaque outside remembers the 13 killed and 22 injured in the blast. Moments later the same Gotha released a 50kg bomb, which smashed through the glass roof of Gallery IX at the Royal Academy, Burlington House, Piccadilly. The blast shattered the room and a small round plaque at the entrance to the gallery now marks the event. The surrounding marble also shows heavy scarring caused by the shell fragments. In Lincoln's Inn, not far from the plaque commemorating the Zeppelin bomb dropped on 13 October 1915, is another, outside 10 Stone's Buildings. The brass plaque commemorates a bomb that fell on 18 December 1917. A small white disc set in the tarmac marks the point where the bomb exploded and the walls show significant shrapnel damage. 
Finally, in Chelsea, a plaque on the wall of the north-east wing of the Royal Hospital commemorates its destruction by a 1,000kg bomb (incorrectly recorded on the plaque as a 500lb bomb) in February 1918, its rebuilding in 1921, its destruction again in 1945 by a V2 rocket and its subsequent rebuilding in 1965. Examples of the aircraft employed by the RFC, RNAS and RAF against the German bombers are on display at the RAF Museum, Hendon (www.rafmuseum.org.uk). There you can see a restored Sopwith Camel, SE5a and Sopwith Triplane, a rebuilt Bristol Fighter and a reproduction Sopwith 1½ Strutter. But perhaps the most interesting is a Sopwith Pup. This restored aircraft uses 60–70 per cent of the original Pup N5182, which flew from RNAS Walmer and Dover against the Gotha raids of 25 May and 5 June 1917. In addition, the Shuttleworth Collection in Bedfordshire (www.shuttleworth.org) regularly includes a Sopwith Pup, Sopwith Triplane, SE5a and Bristol Fighter in its air shows. And near Maldon in Essex, the former No. 37 Squadron First World War airfield at Stow Maries (www.stowmaries.org.uk) is undergoing extensive renovation, with many of the original buildings still standing, and has plans for regular flights by First World War aircraft.

APPENDIX 2: CHRONOLOGY

**1914**

**4 Aug** – Britain declares war on Germany.
**8 Aug** – London's first three AA guns positioned in Whitehall.
**5 Sep** – Winston Churchill, First Lord of the Admiralty, outlines his Home Defence plan as the Admiralty accepts responsibility for the aerial defence of London.
**1 Oct** – Instructions for the implementation of a blackout come into effect.
**24 Dec** – A German seaplane drops the first bomb from the air on Britain. It lands in Dover.

**1915**

**9 Jan** – Kaiser Wilhelm gives official approval for air attacks on Britain – but excludes London as a target.
**19/20 Jan** – Navy Zeppelins L.3 and L.4 bomb Great Yarmouth, King's Lynn and a number of Norfolk villages during the first Zeppelin raid on Britain.
**12 Feb** – The Kaiser includes London Docks in legitimate targets.
**3 Apr** – The Army Airship Service takes delivery of the first of the new 'P-class' Zeppelins – LZ.38.
**5 May** – Kaiser Wilhelm approves London, east of Tower of London, as a legitimate target.
**31 May/1 Jun** – Army Zeppelin LZ.38 makes the first airship raid on London.
**6/7 Jun** – Flt Sub-Lt R.A.J. Warneford (RNAS) destroys Army Zeppelin LZ.37 over Belgium. Zeppelin LZ.38 is bombed in its shed at Evère by Flt Lt J.P. Wilson and Flt Sub-Lt J.S. Mills (RNAS).
**20 Jul** – Unrestricted bombing of London approved by the Kaiser.
**17/18 Aug** – Navy Zeppelin L.10 bombs Walthamstow, Leyton, Leytonstone and Wanstead.
**7/8 Sep** – Army airships SL.2 and LZ.74 bomb south-east London.
**8/9 Sep** – Navy Zeppelin L.13, on a course from Bloomsbury to Liverpool Street Station, causes the most material damage of all the airship raids on London.
**12 Sep** – Admiral Sir Percy Scott appointed commander of London's gunnery defence.
**13/14 Oct** – Three Navy Zeppelins (L.13, L.14 and L.15) attack London and outskirts. Bombs fall from Covent Garden to Aldgate as well as on Woolwich and East Croydon. Highest casualties from a single Zeppelin raid.

**1916**

**10 Feb** – Responsibility for the aerial defence of London passes from the Admiralty to the War Office.
**31 Mar/1 Apr** – Navy Zeppelin L.15 brought down by anti-aircraft fire in sea north of Margate, Kent.
**April** – RFC places order for Buckingham incendiary ammunition.
**15 Apr** – No. 19 Reserve Aeroplane Squadron reformed as No. 39 (Home Defence) Squadron and concentrated on the north-eastern approaches to London.
**May** – RFC places orders for Brock explosive/incendiary and Pomeroy explosive ammunition.
**30 May** – The first of the R-class Super Zeppelins – L.30 – is commissioned into navy service.
**24/25 Aug** – Navy Zeppelin L.31 attacks Isle of Dogs, Greenwich, Eltham and Plumstead.
**2/3 Sep** – Army airship SL.11 is the first to be shot down over mainland Britain.
**September** – Army airships cease raiding Britain.
**September** – Approval given to commence production of the Gotha G.IV bomber.
**23/24 Sept** – In a raid by Navy Zeppelins, L.33 is brought down at Little Wigborough, Essex, after bombing East London. L.32 shot down near Billericay, Essex. L.31 bombs Streatham, Brixton and Leyton.
**1/2 Oct** – L.31 shot down over Potters Bar, Hertfordshire.
**28 Nov** – A single German LVG C.IV drops bombs on London in daytime between Knightsbridge and Victoria.
**December** – Lt Col M. St. L. Simon appointed Anti-Aircraft Defence Commander, London.

**1917**

**January** – Britain begins to scale down the London air defences in belief of a 'diminished risk from Zeppelin attack'.
**28 Feb** – L.42, the first of the S-class Zeppelins – the Height Climbers – is commissioned into naval service.
**5 Mar** – Hptmn Ernst Brandenburg appointed commander of Kagohl 3, the bomber squadron created to attack London.
**6/7 May** – A single Albatross C.VII drops bombs at night on London between Holloway and Hackney.
**25 May** – First attempted aircraft squadron raid on London, redirected on Folkestone.
**5 Jun** – Second attempted squadron raid on London abandoned due to weather conditions. Sheerness and Shoeburyness bombed instead. First Gotha shot down by AA fire.
**13 Jun** – First daylight raid on London by Gotha bombers. Highest casualties on London from a single raid (162 killed, 426 injured).
**16/17 Jun** – The German navy launches its last – planned – Zeppelin raid on London. L.48 is shot down over Theberton, Suffolk, as the raid fails to reach its target.
**19 Jun** – Brandenburg, commander of Kagohl 3, injured in air crash resulting in amputation of a leg.
**23 Jun** – Hptmn Rudolph Kleine appointed commander of Kagohl 3.
**7 Jul** – Second, and final, daylight raid on London by Kagohl 3.
**19 Jul** – Release of first part of Gen Jan Smuts' report on Home Defence.
**8 Aug** – Maj Gen Ashmore appointed commander of LADA.
**17 Aug** – Release of second part of Smuts' report, recommending creation of a single air service.
**28 Aug** – Home Defence Group upgraded to Home Defence Brigade.
**4/5 Sep** – First night-time raid on London by Kagohl 3.
**6 Sep** – Smuts' report on night raids.
**22 Sep** – Arrival of Rfa 501 in Belgium, flying R-type 'Giants'.
**24 Sep** – Kagohl 3 commences 'Harvest Moon Offensive', the first of five raids on London in eight days.
**29 Sep** – First London raid involving both Gotha and 'Giant' aircraft.
**1/2 Oct** – Last raid of 'Harvest Moon Offensive'.
**19/20 Oct** – Navy Zeppelin L.45, driven off course by high winds, drops bombs on Hendon, Cricklewood, Piccadilly, Camberwell and Hither Green, the last Zeppelin bombs to fall on London.
**31 Oct/1 Nov** – Seventh night raid on London by aeroplanes.
**6 Dec** – Eighth night aeroplane raid on London.
**12 Dec** – Rudolph Kleine killed in action. Temporary command of Kagohl 3 passes to Oblt Richard Walter.
**18 Dec** – Ninth night aeroplane raid on London. Highest material damage inflicted in an aeroplane raid (£225,000).
**18 Dec** – Kagohl 3 re-designated Bogohl 3.

**1918**

**January** – Britain establishes the Air Ministry.
**28/29 Jan** – Tenth night aeroplane raid on London. A bomb in Long Acre causes the most casualties in the capital inflicted by a single bomb (38 killed – 85 injured).
**29/30 Jan** – Eleventh London night raid by aeroplanes.
**Early Feb** – Ernst Brandenburg resumes command of Bogohl 3 and suspends further action to allow squadron to regain full strength.
**16 Feb** – Rfa 501 carries out the twelfth night aeroplane raid alone. First 1,000kg bomb dropped on London.
**17 Feb** – A single 'Giant' carries out the thirteenth night aeroplane raid on London.
**7/8 Mar** – Fourteenth night aeroplane raid on London, flown by Rfa 501.
**1 Apr** – RFC and RNAS amalgamate to form Royal Air Force (RAF).
**19/20 May** – Bogohl 3 and Rfa 501 combine for the fifteenth night aeroplane raid on London – 'the Whitsun Raid'. Largest and final raid of the war.
**5 Aug** – Führer der Luftschiffe Peter Strasser killed when Zeppelin L.70 shot down off the Norfolk coast.

APPENDIX 3: THE FORCES ENGAGED IN LONDON'S FIRST BLITZ

THE ZEPPELIN RAIDS 31 MAY/1 JUNE 1915 German force **Two German airships** Army Zeppelin LZ.38 (Hptmn Erich Linnarz) – bombed London Army Zeppelin LZ.37 – returned early British defensive sorties **RNAS – 15 aircraft** Chingford: BE2a, BE2c and Deperdussin Dover: four aircraft (type unknown) Eastchurch: Avro 504B, Blériot Parasol, BE2c and Sopwith Tabloid Hendon: Sopwith Gunbus Rochford: Blériot Parasol Westgate: Sopwith Tabloid and Avro 504B 17/18 AUGUST 1915 German force **Four German airships** Navy Zeppelin L.10 (Oblt-z-S Friedrich Wenke) – bombed London Navy Zeppelin L.11 – reached England Navy Zeppelins L.13, L.14 – returned early British defensive sorties **RNAS – 6 aircraft** Chelmsford: Two Caudron G.3 Yarmouth: Sopwith two-seater Scout and two BE2c Holt: one aircraft (type unknown) 7/8 SEPTEMBER 1915 German force **Three German airships** Army Schütte-Lanz SL.2 (Hptmn Richard von Wobeser) – bombed London Army Zeppelin LZ.74 (Hptmn Friedrich George) – bombed London Army Zeppelin LZ.77 – reached England British defensive sorties **RNAS – 3 aircraft** Felixstowe: BE2c Yarmouth: BE2c and Sopwith two-seater Scout 8/9 SEPTEMBER 1915 German force **Three German airships** Navy Zeppelin L.13 (Kptlt Heinrich Mathy) – bombed London Navy Zeppelins L.9, L.14 – reached England British defensive sorties **RNAS – 7 aircraft** Redcar: Caudron G.3 and two BE2c Yarmouth: three BE2c _Kingfisher_ (trawler): Sopwith Schneider (seaplane) 13/14 OCTOBER 1915 German force **Five German airships** Navy Zeppelin L.
13 (Kptlt Heinrich Mathy) – bombed London Navy Zeppelin L.14 (Kptlt Alois Böcker) – bombed London Navy Zeppelin L.15 (Kptlt Joachim Breithaupt) – bombed London Navy Zeppelins L.11, L. 16 – reached England British defensive sorties **RFC – 5 aircraft** Joyce Green: one BE2c (two sorties) Hainault Farm: two BE2c Suttons Farm: two BE2c 24/25 AUGUST 1916 German force **Four German airships** Navy Zeppelin L.31 (Kptlt Heinrich Mathy) – bombed London Navy Zeppelins L.16, L.21, L.32 – reached England Navy Zeppelins L.14, L.13, L.23 and three others – returned early Navy Schütte-Lanz SL.8, SL.9 – returned early British defensive sorties **RNAS – 9 aircraft** Eastchurch: two BE2c Felixstowe: two Short 827 Grain: two BE2c Manston: BE2c and two Sopwith 1½ Strutter **RFC – 7 aircraft** No. 39 Squadron: North Weald: two BE2c Suttons Farm: two BE2c Hainault Farm: one BE2c No. 50 Squadron: Dover: two BE2c 2/3 SEPTEMBER 1916 German force **16 German airships** Army Schütte-Lanz SL.11 (Hptmn Wilhelm Schramm) – bombed London Navy Zeppelins L.11, L.13, L.14, L.16, L.21, L.22, L.23, L.24, L.30, L.32 – reached England Navy Schütte Lanz SL.8– reached England Army Zeppelin LZ.90, LZ.98 – reached England Army Zeppelin LZ.97 – returned early Navy Zeppelin L. 17 – returned early British defensive sorties **RNAS – 4 aircraft** Grain: Farman F.56 Yarmouth: BE2c Bacton: BE2c Covehithe: BE2c **RFC – 10 aircraft** No. 33 Squadron: Beverley: BE2c No. 39 Squadron: North Weald: BE12 and BE2c Suttons Farm: two BE2c Hainault Farm: two BE2c No. 50 Squadron: Dover: three BE2c 23/24 SEPTEMBER 1916 German force **12 German airships** Navy Zeppelin L.31 (Kptlt Heinrich Mathy) – bombed London Navy Zeppelin L.33 (Kptlt Alois Böcker) – bombed London Navy Zeppelins L.13, L.14, L.17, L.21, L.22, L.23, L.30, L.32 – reached England Navy Zeppelins L. 
16, L.24 – returned early British defensive sorties **RNAS – 13 aircraft** Cranwell: BE2c Eastchurch: three BE2c Manston: two BE2c Yarmouth: Short 184, two BE2c and two Sopwith Baby Bacton: BE2c Covehithe: BE2c **RFC – 12 aircraft** No. 33 Squadron: Beverley: BE2c No. 39 Squadron: North Weald: two BE2c Suttons Farm: BE2c Hainault Farm: two BE2c No. 50 Squadron: Dover: two BE2c Bekesbourne: BE2c and one unknown type No. 51 Squadron: Thetford: two aircraft (type unknown) 19/20 OCTOBER 1917 **German force** **11 German airships** Navy Zeppelin L.45 (Kptlt Waldemar Kölle) – bombed London Navy Zeppelins L.41, L.44, L.46, L.47, L.49, L.50, L.52, L.53, L.54, L.55 – reached England British defensive sorties **RNAS –11 aircraft** Cranwell: BE2e Frieston: BE2c Manston: three BE2c Yarmouth: BE2c Bacton: BE2c Burgh Castle: three BE2c Covehithe: BE2c **RFC – 66 aircraft** No. 33 Squadron: Scampton: two FE2b and three FE2d Kirton-Lindsey: three FE2b and three FE2d Elsham: two FE2d Gainsborough: FE2d and FE2b No. 37 Squadron: Goldhanger: BE2d, two BE2e and BE12 Stow Maries: four BE2e No. 38 Squadron: Leadenham: two FE2b Buckminster: two FE2b Stamford: four FE2b No. 39 Squadron: North Weald: seven BE2e and Martinsyde G.102 (attached) Biggin Hill: BE2c, BE12 and BE12a No. 50 Squadron: Bekesbourne: BE2e and three BE12 No. 51 Squadron: Mattishall: two FE2b Tydd St Mary: two FE2b Marham: two FE2b No. 75 Squadron: Hadleigh: three BE2e and BE12 Harling Road: two BE2e and BE12 Elmswell: BE2e No. 
76 Squadron: Copmanthorpe: three BE2e and BE12 Helperby: BE2e and BE12 * * * THE BOMBER RAIDS 13 JUNE 1917 **Kampfgeschwader 3 der Oberste Heeresleitung (Kagohl 3)** 20 Gothas – 2 returned early **Royal Naval Air Service (RNAS) – 33 aircraft** Dover: 3 x Sopwith Pup, 1 x Sopwith Baby Eastchurch: 2 x Bristol Scout, 1 x Sopwith 1½ Strutter Felixstowe: 2 x Sopwith Schneider, 5 x Sopwith Baby Grain: 2 x Sopwith Pup, 2 x Sopwith Baby Manston: 4 x Bristol Scout, 1 x Sopwith Pup, 2 x Sopwith Triplane Westgate: 4 x Sopwith Baby Walmer: 4 x Sopwith Pup **Royal Flying Corps (RFC) – 55 aircraft** No. 37 Squadron: 1 x BE2e, 1 x BE12, 1 x BE12a, 1 x RE7, 5 x Sopwith 1½ Strutter No. 39 Squadron: 1 x BE2c, 2 x BE2e, 3 x BE12, 3 x BE12a, 1 x FK8 No. 50 Squadron: 1 x BE2c, 1 x BE12, 2 x BE12a, 5 x FK8, 1 x RE8, 1 x Vickers ES1 No. 65 Squadron: 2 x DH5 No. 78 Squadron: 1 x BE12a No. 98 Depot Squadron (DS): 1 x BE2d, 1 x BE2e, 1 x BE12a No. 35 Training Squadron (TS): 2 x Bristol Fighter No. 40 TS: 2 x Sopwith Pup No. 62 TS: 1 x Sopwith Pup No. 63 TS: 1 x Sopwith Pup No. 2 Aircraft Acceptance Park (AAP): 2 x DH4, 1 x DH5 No. 8 AAP: 1 x DH4, 1 x DH5, 1 x FE8, 1 x Sopwith 1½ Strutter, 1 x Bristol Fighter, 1 x RE8 Orfordness Experimental Station: 2 x Sopwith Triplane, 2 x DH4 7 JULY 1917 **Kagohl 3** 24 Gothas – 2 returned early **RNAS – 22 aircraft** Dover: 1 x Sopwith Pup, 2 x Sopwith Baby Eastchurch: 2 x Sopwith Camel Grain: 1 x Sopwith Pup Manston: 1 x Sopwith Pup, 3 x Sopwith Camel, 4 x Sopwith Triplane, 3 x Bristol Scout Walmer: 5 x Sopwith Pup **RFC – 81 aircraft** No. 37 Squadron: 2 x BE12, 3 x BE12a, 1 x BE2e, 1 x RE7, 6 x Sopwith Pup, 6 x Sopwith 1½ Strutter No. 39 Squadron: 3 x BE12, 3 x BE12a, 2 x SE5, 1 x FK8 No. 50 Squadron: 2 x BE12a, 6 x Sopwith Pup, 1 x FK8, 1 x Vickers ES1, 3 x unrecorded aircraft No. 78 Squadron: 5 x BE12a No. 35 TS: 2 x Bristol Fighter No. 40 TS: 1 x Sopwith Pup, 2 x unrecorded aircraft No. 56 TS: 1 x Spad No. 62 TS: 1 x Sopwith Pup No. 
63 TS: 2 x Sopwith Pup No. 198 DS: 1 x Vickers FB12c No. 2 AAP: 5 x DH4, 1 x DH5 No. 7 AAP: 1 x FE8 No. 8 AAP: 3 x Bristol Fighter, 1 x FE2d, 2 x FE8, 1 x DH5, 1 x FK8 Orfordness Experimental Station: 1 x Bristol Fighter, 1 x Sopwith Triplane, 1 x DH2, 1 x FE2b, 1 x Sopwith 1½ Strutter, 1 x FK8, 1 x RE8 Martlesham Heath Testing Squadron: 2 x Sopwith Camel, 1 x DH4 4/5 SEPTEMBER 1917 **Kagohl 3** 11 Gothas – 2 returned early **RFC – 18 aircraft** No. 37 Squadron: 1 x BE2d, 3 x BE2e, 1 x BE12a No. 39 Squadron: 3 x BE2e, 3 x BE12 No. 44 Squadron: 4 x Sopwith Camel No. 50 Squadron: 2 x BE12, 1 x FK8 24 SEPTEMBER 1917 **Kagohl 3** 16 Gothas – 3 returned early **RFC – 30 aircraft** No. 37 Squadron: 4 x BE2e, 2 x BE12 No. 39 Squadron: 4 x BE2e (inc. one W/T tracker aircraft), 1 x BE12, 1 x BE12a No. 44 Squadron: 3 x Sopwith Camel No. 50 Squadron: 5 x BE12 (three W/T tracker aircraft), 1 x BE2e, 2 x FK8 No. 78 Squadron: 2 x FE2d, 2 x Sopwith 1½ Strutter Orfordness Experimental Station: 3 x unrecorded aircraft 25 SEPTEMBER 1917 **Kagohl 3** 15 Gothas – 1 returned early **RNAS – 2 aircraft** Manston: 2 x BE2c **RFC – 18 aircraft** No. 37 Squadron: 1 x BE2d, 2 x BE2e, 1 x BE12 No. 39 Squadron: 4 x BE2e (one W/T tracker aircraft), 1 x BE2c, 1 x BE12a No. 44 Squadron: 3 x Sopwith Camel No. 50 Squadron: 1 x BE2e No. 78 Squadron: 1 x FE2d, 3 x Sopwith 1½ Strutter 29 SEPTEMBER 1917 **Kagohl 3** 7 Gothas – 3 returned early **Riesenflugzeugabteilung 501 (Rfa 501)** 3 Giants – 0 returned early **RNAS – 3 aircraft** Manston: 3 x BE2c **RFC – 28 aircraft** No. 39 Squadron: 2 x BE2c, 6 x BE2e, 2 x BE12 (both W/T tracker aircraft), 1 x BE12a No. 44 Squadron: 4 x Sopwith Camel No. 50 Squadron: 4 x BE12, 3 x BE2e, 1 x FK8 No. 78 Squadron: 1 x FE2d, 3 x Sopwith 1½ Strutter Orfordness Experimental Station: 1 x Martinsyde F 1 30 SEPTEMBER 1917 **Kagohl 3** 11 Gothas – 1 returned early **RNAS – 2 aircraft** Manston: 2 x BE2c **RFC – 31 aircraft** No. 37 Squadron: 4 x BE2e (one W/T tracker) No. 
39 Squadron: 4 x BE2e (two W/T trackers), 2 x BE12, 1 x BE12a No. 44 Squadron: 8 x Sopwith Camel No. 50 Squadron: 1 x BE2e, 2 x BE12 (one W/T tracker), 2 x FK8 No. 78 Squadron: 1 x FE2d, 5 x Sopwith 1½ Strutter Orfordness Experimental Station: 1 x Martinsyde F 1 1 OCTOBER 1917 **Kagohl 3** 18 Gothas – 6 returned early **RFC – 18 aircraft** No. 37 Squadron: 1 x BE2e, 1 x BE12 No. 39 Squadron: 2 x BE2e, 2 x BE12a No. 44 Squadron: 7 x Sopwith Camel No. 78 Squadron: 1 x FE2d, 4 x Sopwith 1½ Strutter 31 OCTOBER/1 NOVEMBER 1917 **Kagohl 3** 22 Gothas – 0 returned early **RNAS – 5 aircraft** Eastchurch: 2 x Sopwith 1½ Strutter Manston: 2 x Sopwith 1½ Strutter, 1 x DH4 **RFC – 45 aircraft** No. 37 Squadron: 6 x BE2e (one W/T tracker) No. 39 Squadron: 5 x BE2e, 3 x BE12 (one W/T tracker) No. 44 Squadron: 13 x Sopwith Camel No. 50 Squadron: 3 x BE12 (two W/T trackers), 1 x BE12a, 4 x FK8, 1 x BE2e No. 78 Squadron: 6 x Sopwith 1½ Strutter, 3 x Sopwith 1½ Strutter SS (single-seater conversion) 6 DECEMBER 1917 **Kagohl 3** 19 Gothas – 3 returned early **Rfa 501** 2 Giants – 0 returned early **RFC – 32 aircraft** No. 37 Squadron: 1 x BE2d, 6 x BE2e, 1 x BE12 No. 39 Squadron: 2 x BE2e, 2 x Bristol Fighter, 3 x BE12 No. 44 Squadron: 6 x Sopwith Camel No. 50 Squadron: 3 x BE12 (all W/T trackers), 4 x FK8 No. 78 Squadron: 4 x Sopwith 1½ Strutter SS 18 DECEMBER 1917 **Kagohl 3 – now redesignated Bombengeschwader 3 der OHL (Bogohl 3)** 15 Gothas – 2 returned early **Rfa 501** 1 Giant – 0 returned early **RFC – 46 aircraft** No. 37 Squadron: 3 x BE2e, 1 x BE12, 2 x BE12b No. 39 Squadron: 2 x BE2e, 4 x Bristol Fighter No. 44 Squadron: 8 x Sopwith Camel No. 50 Squadron: 3 x BE12, 6 x FK8 No. 61 Squadron: 4 x SE5a No. 
78 Squadron: 9 x Sopwith 1½ Strutter SS, 1 x Sopwith 1½ Strutter, 1 x BE2e (W/T tracker), 2 x BE12 28/29 JANUARY 1918 **Bogohl 3** 13 Gothas – 6 returned early **Rfa 501** 2 Giants – 1 returned early **RNAS – 6 aircraft** Dover: 4 x Sopwith Camel, 1 x Sopwith 1½ Strutter Eastchurch: 1 x Sopwith Camel **RFC – Detailed returns not available** No. 37 Squadron: 15 sorties flown No. 39 Squadron: 10 sorties flown No. 44 Squadron: 25 sorties flown No. 50 Squadron: 11 sorties flown No. 61 Squadron: 9 sorties flown No. 75 Squadron: 5 sorties flown No. 78 Squadron: 22 sorties flown 29/30 JANUARY 1918 **Rfa 501** 4 Giants – 1 returned early **RNAS – 7 aircraft** Dover: 4 x Sopwith Camel, 1 x Sopwith 1½ Strutter Walmer: 2 x Sopwith Camel **RFC – 69 aircraft** No. 37 Squadron: 3 x BE2e, 3 x BE12, 3 x BE12b No. 39 Squadron: 8 x Bristol Fighter (one W/T tracker), 1 x BE2e (W/T tracker) No. 44 Squadron: 15 x Sopwith Camel No. 50 Squadron: 3 x BE12 (inc. one W/T tracker), 4 x BE12b (one W/T tracker), 5 x FK8 No. 61 Squadron: 7 x SE5a No. 75 Squadron: 1 x BE2e, 1 x BE12, 1 x BE12b No. 78 Squadron: 6 x Sopwith Camel, 3 x Sopwith 1½ Strutter SS, 3 x BE12, 1 x BE12a, 1 x BE12b 16 FEBRUARY 1918 **Rfa 501** 5 Giants – 1 returned early **RFC – 56 aircraft** No. 37 Squadron: 1 x BE2d, 5 x BE2e, 3 x BE12, 1 x BE12a No. 39 Squadron: 7 x Bristol Fighter No. 44 Squadron: 12 x Sopwith Camel No. 50 Squadron: 3 x BE12 (one W/T tracker), 3 x BE12b (one W/T tracker) No. 61 Squadron: 7 x SE5a No. 78 Squadron: 7 x Sopwith Camel, 1 x Sopwith 1½ Strutter SS No. 141 Squadron: 4 x BE12 (one W/T tracker) No. 143 Squadron: 2 x FK8 17 FEBRUARY 1918 **Rfa 501** 1 Giant – 0 returned early **RFC – 66 aircraft** No. 37 Squadron: 5 x BE2e, 3 x BE12 (one W/T tracker), 4 x BE12b No. 39 Squadron: 7 x Bristol Fighter No. 44 Squadron: 12 x Sopwith Camel No. 50 Squadron: 2 x BE2e, 4 x BE12 (two W/T trackers), 1 x BE12a, 1 x BE12b No. 61 Squadron: 8 x SE5a No. 
78 Squadron: 9 x Sopwith Camel, 1 x Sopwith 1½ Strutter SS No. 141 Squadron: 4 x BE12, 1 x BE12b No. 143 Squadron: 4 x FK8 7/8 MARCH 1918 **Rfa 501** 6 Giants – 1 returned early **RFC – 41 aircraft** No. 37 Squadron: 2 x BE2e, 2 x BE12, 3 x BE12b No. 39 Squadron: 8 x Bristol Fighter No. 44 Squadron: 4 x Sopwith Camel No. 50 Squadron: 3 x BE12 (two W/T trackers), 3 x BE12b No. 61 Squadron: 6 x SE5a No. 78 Squadron: 3 x Sopwith Camel No. 112 Squadron: 1 x Sopwith Camel No. 141 Squadron: 3 x BE12 No. 143 Squadron: 3 x FK8 19/20 MAY 1918 **Bogohl 3** 38 Gothas – 10 returned early **Rfa 501** 3 Giants – 0 returned early **RAF – 86 aircraft** No. 37 Squadron: 5 x BE12, 2 x BE12a, 2 x BE12b, 1 x SE5a No. 39 Squadron: 8 x Bristol Fighter (one W/T tracker) No. 44 Squadron: 11 x Sopwith Camel No. 50 Squadron: 1 x BE12 (W/T tracker), 1 x BE12b (W/T tracker), 7 x SE5a No. 61 Squadron: 9 x SE5a No. 78 Squadron: 10 x Sopwith Camel No. 112 Squadron: 12 x Sopwith Camel No. 141 Squadron: 7 x Bristol Fighter No. 143 Squadron: 10 x SE5a **ACKNOWLEDGEMENTS** My fascination with the first air war over London was ignited many years ago by a plaque on a building in Farringdon Road, Clerkenwell. It referred to a Zeppelin raid on the night of 8/9 September 1915 and it set me on a course which eventually resulted in the publication of two books by Osprey, London 1914–17: The Zeppelin Menace and London 1917–18: The Bomber Blitz. Now, as we reach the centenaries of those ground-breaking air raids on London, Osprey has decided to republish updated versions of these books in this single volume. This book was only possible because of the diligent and methodical work of numerous anonymous clerks of the Royal Flying Corps, Royal Naval Air Service, Metropolitan Police and London Fire Brigade who meticulously filed away every letter, report and document they received until eventually, years later, they found their way to the National Archives in Kew, London. 
Without their efforts this book would have been very different and the maps a shadow of what they are. It never ceases to amaze me that the mere click of a button on a computer keypad at Kew can put those 100-year-old documents directly in one's hands just 20 minutes later. Throughout the time working on these books I received open and generous help from a number of individuals. I must therefore particularly thank Colin Ablett for access to his library and permission to use photos from his collection. In Austria my friend Martin Worel meticulously produced German translations for me and I am indebted to Marton Szigeti in Germany who has been generous with his vast knowledge of the German 'Giants'. And back in England I must mention David Marks. He shares my fascination with all things dirigible and has granted me open access to items in his unparalleled collection. Finally I would like to thank Christa Hook, the artist who produced the artwork that features so prominently in this book. I sent Christa childish scribbles and notes and she returned them as works of art. First published in Great Britain in 2015 by Osprey Publishing, PO Box 883, Oxford, OX1 9PL, UK PO Box 3985, New York, NY 10185-3985, USA E-mail: info@ospreypublishing.com Osprey Publishing, part of Bloomsbury Publishing Plc This electronic edition published in 2015 by Bloomsbury Publishing Plc © 2015 Osprey Publishing Ltd. Previously published as: Campaign 193, _London 1914–17_ and Campaign 227, _London 1917–18_ , both also by Ian Castle. All rights reserved You may not copy, distribute, transmit, reproduce or otherwise make available this publication (or any part of it) in any form, or by any means (including without limitation electronic, digital, optical, mechanical, photocopying, printing, recording or otherwise), without the prior written permission of the publisher. 
Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages. Every attempt has been made by the Publisher to secure the appropriate permissions for material reproduced in this book. If there has been any oversight we will be happy to rectify the situation and written submission should be made to the Publishers. A CIP catalogue record for this book is available from the British Library. Ian Castle has asserted his right under the Copyright, Designs and Patents Act, 1988, to be identified as the Author of this Work.

ISBN: 978-1-4728-1529-3
ePub ISBN: 978-1-4728-1531-6
PDF ISBN: 978-1-4728-1530-9

Cartography by The Map Studio (Zeppelin raids) and bounford.com (Bomber raids)
Artwork by Christa Hook

**Front cover:** A Zeppelin airship over London, illuminated by searchlights, 1916. (Topfoto)

Osprey Publishing supports the Woodland Trust, the UK's leading woodland conservation charity. Between 2014 and 2018 our donations will be spent on their Centenary Woods project in the UK.

**www.ospreypublishing.com**

1. Cover
2. Title Page
3. Dedication
4. CONTENTS
5. Introduction
6. The Growing Aerial Threat to London
7. The Zeppelin Raids – The Men That Mattered
8. Preparing for the Zeppelin War
9. The 1915 Zeppelin Raids
10. The 1916 Zeppelin Raids
11. The 1917 Zeppelin Raids
12. The End of the Zeppelin War
13. The Coming of the Bombers
14. The Bomber Raids – The Men That Mattered
15. Preparing for the Bomber Blitz
16. The 1917 Bomber Raids
17. The 1918 Bomber Raids
18. The Aftermath of the First Blitz
19. Select Bibliography
20. Appendix 1: In Touch With London's First Blitz
21. Appendix 2: Chronology
22. Appendix 3: The Forces Engaged in London's First Blitz
23. ACKNOWLEDGEMENTS
24. eCopyright
Nauener Platz is a square in Berlin, Germany, in the Gesundbrunnen district, on the border with the Wedding district, in the administrative borough of Mitte. It was laid out at the end of the 19th century. The Nauener Platz station on underground line U9 is located at the square.
Summerdale may refer to:

Places
- Summerdale, Alabama
- Summerdale, Pennsylvania
- Summerdale (Neighborhood), Philadelphia, PA

Other
- the Summerdale scandals
Q: Why is $u: [0,r] \times \mathbb{R}^n \to \mathbb{R}^n: (x,z) \mapsto \int_0^x f(v(s,z))\,ds$ continuous?

Why is $$u: (G := [0,r] \times \mathbb{R}^n) \to \mathbb{R}^n: (x,z) \mapsto \int_0^x f(v(s,z))\,ds$$ continuous, where we endow the domain with the product metric, and where $$v: G \to \mathbb{R}^n; \quad f: \mathbb{R}^n \to \mathbb{R}^n$$ are continuous functions?

Attempt: Probably this follows easily from some fact about the product metric/topology, but I can't see it. I was messing around with the fundamental theorem of calculus, and it is easy to see that for every fixed $z$ the associated function $u_z$ is differentiable, and therefore continuous, but I don't think this is sufficient to deduce continuity of $u$.

Context of this problem: a book on differential equations.

A: First notice that, since $v:G\to \mathbb{R}^n$ and $f:\mathbb{R}^n\to\mathbb{R}^n$ are continuous, so is $h \equiv f\circ v : G\to \mathbb{R}^n$. Let $e_x(\delta),e_z(\delta)$ be any functions such that $e_x(\delta),e_z(\delta)\to 0$ as $\delta\to 0$. Continuity of $u$ is equivalent to saying that $$ u(x+e_x(\delta),z+e_z(\delta))\to u(x,z) \quad \text{ as } \quad \delta\to 0.$$ Now we can write $$ u(x+e_x(\delta),z+e_z(\delta)) = \int_0^{x+e_x(\delta)}h(s,z+e_z(\delta))\,ds = \int_0^{x}h(s,z+e_z(\delta))\,ds + \int_x^{x+e_x(\delta)}h(s,z+e_z(\delta))\,ds = A(\delta)+B(\delta). $$ Since $h$ is continuous over the compact set $[0,r]\times \overline{B_1(z)}$ (a set chosen so that it contains the points $(s,z+e_z(\delta))$ for $\delta$ small enough), we can apply the Dominated Convergence Theorem and deduce that $$ A(\delta)\xrightarrow{\delta\to 0} \int_0^x h(s,z)\,ds = u(x,z). $$ On the other side, $h$ is bounded over compact sets because it is continuous, so we find that $$ |B(\delta)| \leq \left|\int_x^{x+e_x(\delta)}|h(s,z+e_z(\delta))|\,ds\right| \leq \|h\|_{L^\infty([0,r]\times \overline{B_1(z)})}\, |e_x(\delta)| \xrightarrow{\delta\to 0} 0. $$ This shows that $u(x+e_x(\delta),z+e_z(\delta))\to u(x,z)$, i.e. $u$ is continuous.
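To make the statement concrete, here is a small numerical sketch in Python. The choices $f(y) = \sin y$ and $v(s,z) = s + z$ are assumed example functions (not from the question), picked only because both are continuous; the sketch approximates $u(x,z) = \int_0^x f(v(s,z))\,ds$ by a Riemann sum and checks that a small perturbation of $(x,z)$ changes $u$ only slightly:

```python
import math

def f(y):
    # assumed example: f(y) = sin(y), continuous on R
    return math.sin(y)

def v(s, z):
    # assumed example: v(s, z) = s + z, continuous on [0, r] x R
    return s + z

def u(x, z, n=10000):
    # left Riemann sum approximating the integral of f(v(s, z)) ds over [0, x]
    h = x / n
    return sum(f(v(i * h, z)) * h for i in range(n))

# continuity check: nearby inputs give nearby outputs
base = u(1.0, 0.5)
near = u(1.0 + 1e-4, 0.5 + 1e-4)
assert abs(near - base) < 1e-3
```

For these particular choices $u(x,z) = \cos z - \cos(x+z)$ in closed form, so the approximation can also be checked against the exact value.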
<html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>C.87 WikiPublishTask</title> <link rel="stylesheet" type="text/css" href="book.css"> <meta name="generator" content="DocBook XSL Stylesheets V1.78.1"> <link rel="home" href="index.html" title="Phing User Guide"> <link rel="up" href="app.optionaltasks.html" title="Appendix C. Optional tasks"> <link rel="prev" href="VersionTask.html" title="C.86 VersionTask"> <link rel="next" href="XmlLintTask.html" title="C.88 XmlLintTask"> </head> <body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"> <div class="navheader"> <table width="100%" summary="Navigation header"> <tr> <th colspan="3" align="center">C.87 WikiPublishTask</th> </tr> <tr> <td width="20%" align="left"><a accesskey="p" href="VersionTask.html">Prev</a> </td> <th width="60%" align="center">Appendix C. Optional tasks</th> <td width="20%" align="right"> <a accesskey="n" href="XmlLintTask.html">Next</a></td> </tr> </table> <hr> </div> <div class="sect1"> <div class="titlepage"> <div> <div><h2 class="title" style="clear: both"><a name="WikiPublishTask"></a>C.87 WikiPublishTask</h2></div> </div> </div> <p>This task can publish a Wiki document via the Wiki WebAPI. For now it supports only the <a class="ulink" href="http://www.mediawiki.org/" target="_top">MediaWiki</a> engine. </p> <p>The <a class="ulink" href="http://www.php.net/manual/en/book.curl.php" target="_top">cURL</a> extension is required. 
</p> <div class="table"><a name="idp55980448"></a> <p class="formal-object-title"><span class="label">Table C.117: </span><span class="title">Attributes</span></p> <div class="table-contents"> <table summary="Attributes" border="1"> <colgroup> <col class="name"> <col class="type"> <col class="description"> <col class="default"> <col class="required"> </colgroup> <thead> <tr> <th>Name</th> <th>Type</th> <th>Description</th> <th>Default</th> <th>Required</th> </tr> </thead> <tbody> <tr> <td><code class="literal">apiUrl</code></td> <td><code class="literal">String</code></td> <td>Wiki API URL (e.g. http://localhost/wiki/api.php)</td> <td>n/a</td> <td>Yes</td> </tr> <tr> <td><code class="literal">apiUser</code></td> <td><code class="literal">String</code></td> <td>Wiki API user name</td> <td>n/a</td> <td>No</td> </tr> <tr> <td><code class="literal">apiPassword</code></td> <td><code class="literal">String</code></td> <td>Wiki API user password</td> <td>n/a</td> <td>No</td> </tr> <tr> <td><code class="literal">id</code></td> <td><code class="literal">Integer</code></td> <td>ID of page that will be changed</td> <td>n/a</td> <td rowspan="2">One of these attributes is required.</td> </tr> <tr> <td><code class="literal">title</code></td> <td><code class="literal">String</code></td> <td>Title of page that will be changed. 
Can also be used as page identifier</td> <td>n/a</td> </tr> <tr> <td><code class="literal">content</code></td> <td><code class="literal">String</code></td> <td>Content of published page</td> <td>n/a</td> <td>No</td> </tr> <tr> <td><code class="literal">mode</code></td> <td><code class="literal">String</code></td> <td>Edit mode (overwrite, prepend, append)</td> <td>append</td> <td>No</td> </tr> </tbody> </table> </div> </div> <br class="table-break"> <div class="sect2"> <div class="titlepage"> <div> <div><h3 class="title"><a name="idp56014288"></a>C.87.1 Example</h3></div> </div> </div> <pre class="programlisting">&lt;wikipublish apiUrl="http://localhost/wiki/api.php" apiUser="testUser" apiPassword="testPassword" title="Some Page" content="Some content" mode="prepend"/&gt; </pre> </div> </div> <div class="navfooter"> <hr> <table width="100%" summary="Navigation footer"> <tr> <td width="40%" align="left"><a accesskey="p" href="VersionTask.html">Prev</a> </td> <td width="20%" align="center"><a accesskey="u" href="app.optionaltasks.html">Up</a></td> <td width="40%" align="right"> <a accesskey="n" href="XmlLintTask.html">Next</a></td> </tr> <tr> <td width="40%" align="left" valign="top">C.86 VersionTask </td> <td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td> <td width="40%" align="right" valign="top"> C.88 XmlLintTask</td> </tr> </table> </div> </body> </html>
Q: How to get link to third party site in 'about channel' section via python

I want to save the third-party links from a YouTube channel's "About" section to a text document. I tried to do it through the requests library, but Google only returned links to its privacy and security pages, and I did not find anything about this in the YouTube API documentation. Can anyone help with this?

A: This isn't possible to get using the YouTube API. I actually found myself needing to do the same thing as you and was not able to, because the YouTube API lacked the necessary functionality (hopefully it will be added soon!). I see you mentioned Python; my only solution is in Node, but I will give a full explanation so you can base your code on it.

In order to get the banner links without the YouTube API, we need to scrape the data. Since YouTube uses client-side rendering, we need to scrape the JSON configuration from the page source. There is a variable defined inside a script tag called ytInitialData, which is a big JSON string with a massive amount of information about the channel, the viewer, and YouTube's configuration. We can find the banner links by parsing through this JSON.
const request = require("request-promise").defaults({
    simple: false,
    resolveWithFullResponse: true
})

const getBannerLinks = async () => {
    return request("https://www.youtube.com/user/pewdiepie").then(res => {
        if (res.statusCode === 200) {
            // Pull the ytInitialData JSON blob out of the page source
            const parsed = res.body.split("var ytInitialData = ")[1].split(";</script>")[0]
            const data = JSON.parse(parsed)
            const links = data.header.c4TabbedHeaderRenderer.headerLinks.channelHeaderLinksRenderer
            const allLinks = links.primaryLinks.concat(links.secondaryLinks || [])
            const parsedLinks = allLinks.map(l => {
                // YouTube wraps outgoing links in /redirect?q=<target>, so resolve the
                // path against the site origin and read the "q" query parameter
                const url = new URL(l.navigationEndpoint.commandMetadata.webCommandMetadata.url, "https://www.youtube.com")
                return {
                    link: url.searchParams.get("q"),
                    name: l.title.simpleText,
                    icon: l.icon.thumbnails[0].url
                }
            })
            return parsedLinks
        } else {
            // Error/ratelimit - Handle here
        }
    })
}

The way the links are scraped is as follows:

*We make an HTTP request to the channel's URL
*We parse the body with split to extract the JSON string that the banner links are inside
*We parse the JSON string into a JSON object
*We extract the links from their JSON section (it's a big JSON object: data.header.c4TabbedHeaderRenderer.headerLinks.channelHeaderLinksRenderer)
*Because there are two types of links (primary, the one that shows its text, and secondary, links that don't show their text) we have to concatenate them together so we can map through them
*We then map through the links and read the q query parameter with URL/searchParams, since YouTube wraps its outgoing links in a redirect (most likely for security reasons), and then extract the name and icon too using their appropriate objects

This isn't a perfect solution; should YouTube update/change anything on their front end, this could break your program easily. YouTube also has rate limits, so if you're trying to mass scrape you'll run into 429/403 errors.
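Since the question asked for Python, here is a rough standard-library translation of the same idea. The JSON paths mirror the Node answer above and are assumptions about YouTube's current page structure, so they can break at any time:

```python
import json
from urllib.parse import parse_qs, urlparse
from urllib.request import Request, urlopen

def extract_banner_links(html):
    """Parse the ytInitialData blob out of a channel page and return its banner links."""
    raw = html.split("var ytInitialData = ")[1].split(";</script>")[0]
    data = json.loads(raw)
    links = data["header"]["c4TabbedHeaderRenderer"]["headerLinks"]["channelHeaderLinksRenderer"]
    all_links = links.get("primaryLinks", []) + links.get("secondaryLinks", [])
    results = []
    for link in all_links:
        redirect = link["navigationEndpoint"]["commandMetadata"]["webCommandMetadata"]["url"]
        # YouTube wraps outgoing links in /redirect?q=<real-url>
        query = parse_qs(urlparse(redirect).query)
        results.append({
            "link": query.get("q", [redirect])[0],
            "name": link["title"]["simpleText"],
            "icon": link["icon"]["thumbnails"][0]["url"],
        })
    return results

def get_banner_links(channel_url):
    # A User-Agent header makes the request look like an ordinary browser hit
    req = Request(channel_url, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(req) as resp:
        return extract_banner_links(resp.read().decode("utf-8"))
```

Calling get_banner_links with a channel URL would then return the same list of dicts as the Node version; the same caveats about front-end changes and 429/403 rate limiting apply.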
Open Vallejo (https://openvallejo.org/2019/02/10/how-civil-rights-violations-starve-vallejo-of-essential-services/)

Police violence costs Vallejo millions, starving city of essential services

By Geoffrey King | February 10, 2019

[Photo: Carolyn Drake / Magnum Photos. Painted-over graffiti on a wall in Vallejo, California.]

A version of this essay was published to Medium on Feb. 10, 2019. Vallejo police killed 20-year-old Willie McCoy the previous evening.

Over the last year, facts have come to light in Vallejo that give increasing credence to fully re-imagining, and potentially disbanding and reconstituting, the Vallejo Police Department. Civil rights violations by the Vallejo Police Department have proved devastating for the municipality and its residents. They are remarkable even when compared to much larger departments, according to public records and press reports. The high cost of settlements in frequent civil rights cases forced the city to leave its insurance pool last year, and Vallejo's insurance costs have increased dramatically, choking millions of dollars in the city's budget that could be spent on community services. In this place, structural violence and physical violence are intimately linked. Public records reveal that since 2003, Vallejo and its insurers have paid or approved at least $10.1 million in misconduct-related judgments or settlements involving Vallejo police. More than $4.5 million has been paid out for such claims since 2016. A July 2018 investigation by the East Bay Express found Vallejo paid more in civil rights judgments and settlements per officer than any of the nine Bay Area agencies it surveyed. For example, according to the Express, the City and County of San Francisco paid $1.3 million for civil rights claims in 2015–2017. Averaged across its approximately 3,000 sworn police officers and sheriff's deputies, civil rights claims cost San Francisco approximately $433 per officer in those three years.
Public records show that Vallejo paid $2 million for a fatal shooting during this same time period, for a per-officer amount of approximately $20,000. Of the two cities, as of 2017 Vallejo had a lower overall crime rate, and the violent crime rates were comparable. "The most successful litigation defense is not to have been sued in the first place." — Bill Baer, U.S. Department of Justice Also in July 2018, Vallejo withdrew from the California Joint Powers Risk Management Authority municipal insurance pool, of which it had been a member since 1987. The CJPRMA saved the city millions of dollars in direct payouts for police misconduct from 2003–2018. But that proved costly for other members of the pool: prior to its exclusion from the program, Vallejo's 5-year loss history equaled 22% of total losses. Vallejo's losses were "large and disproportionate" compared to other members of the pool, a trend that continued even after Vallejo's bankruptcy. Vallejo was effectively forced out of the CJPRMA on Dec. 11, 2017, when the insurance pool's directors voted unanimously to increase the city's self-insurance retention amount from $500,000 to $2,500,000. Vallejo announced its withdrawal from the program on Feb. 28, 2018, just before the pool covered the bulk of one last large settlement for the city — a $2.5 million defamation action brought by a kidnapping and sexual assault survivor who was publicly accused of fabricating her story. Vallejo's exclusion from the insurance pool required the city to switch to new and ultimately more expensive insurance. According to the 2018–2019 city budget, Vallejo's "insurance and risk management challenges" will increase its insurance costs by $2 million each year. This represents a projected 24% increase in insurance costs over the next five years. The city projects a deficit beginning this year. The Village Market liquor and Grocery on Sonoma Blvd. 
In other words, Vallejo will now have to set aside $2 million more each year to cover potential settlements and judgments against the city. This is $2 million that could be spent on schools, parks, roads, and social services. It could pay for approximately 8 additional police officers, who make upward of $200,000 per year in salary, overtime and benefits, while at the same time providing better training and accountability mechanisms. Instead this money will sit encumbered, waiting for the next civil rights lawsuit against the police department. Several lawsuits are working their way through the federal court system. Recent cases have involved fatal shootings by officers; others allegedly resulted in amputations, broken bones, unlawful arrests, and Taser shocks — including to a family dog. Two more lawsuits are expected over incidents caught on video in 2018 and 2019, both involving the same officer. And less than 24 hours ago, Vallejo police officers killed a man they say was armed and initially unresponsive as he sat in a car in a Taco Bell drive-through. The numbers cited above do not include the costs associated with any future civil rights investigations or lawsuits by the California or U.S. Departments of Justice, the business and reputational impacts on the city, or the challenge of solving crimes in the face of public mistrust. They do not begin to capture the untold heartache people suffer, or the negative impact on the community as a whole, when those sworn to uphold the law break it instead. Vallejo can better protect its fiscal health and its residents' civil rights by modernizing its policing strategies. In the words of Bill Baer, then the third-highest ranking official at the U.S. Department of Justice, "the most successful litigation defense is not to have been sued in the first place, and when one is sued, it helps a great deal when law enforcement efforts are guided by evidence-based best practices." 
Baer was discussing the final report of the President's Task Force on 21st Century Policing, which found that "Any [crime] prevention strategy that unintentionally violates civil rights, compromises police legitimacy or undermines trust is counterproductive from both ethical and cost-benefit perspectives. Ignoring these considerations can have both financial costs (e.g., lawsuits) and social costs (e.g., loss of public support)." The President's Task Force report is one potential roadmap for building a police force that is modern, constitutional and just. We have already tried looking the other way, with increasingly predictable, tragic and costly results. Vallejo deserves and must do better. Thanks to Scott Morris, John Glidden, and Jean Likover for doing so much of the original research that inspired and informed this post. About Geoffrey King Geoffrey King is the executive editor of Open Vallejo. Prior to founding Open Vallejo, Geoffrey worked as an attorney and journalist focused on free expression, open government, press freedom and privacy. He is a proud native of Vallejo, California. More by Geoffrey Donate Securely Open Vallejo lawsuit makes police policies public Vallejo scraps Mare Island development guarantees, to sell off public land 'Domestic violence bordering on torture': records reveal years of allegations against Vallejo councilmember Vallejo police bend badges to mark fatal shootings For Vallejo officers who use force, a pattern of promotions and awards Vallejo appoints interim city attorney court sanctioned for fraud
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,613
\section{Introduction} The absorbing state phase transition (APT) \cite{Hinrichsen} is the most studied non-equilibrium phase transition of the last few decades. Unlike their equilibrium counterparts, these systems do not obey the detailed balance condition, as the absorbing configurations of the system can be reached by the dynamics but cannot be left. Thus, by tuning a control parameter, these systems can be driven from an active phase to an absorbing one where the dynamics ceases. On one hand the non-equilibrium dynamics generically makes analytical treatment of these systems highly nontrivial, giving rise to a varied class of distributions as well as a rich variety of novel correlations, and on the other hand the non-fluctuating disordered phase, being unique to APT, leads to an unconventional critical behaviour. The most robust universality class of APT is directed percolation (DP) \cite{DPRev}, which is observed in the context of synchronization \cite{synchro}, damage spreading \cite{damage}, depinning transitions \cite{depin}, catalytic reactions \cite{catalytic}, forest fires \cite{fire}, extinction of species \cite{bioEvol}, etc. Recently, DP critical behaviour has been observed experimentally \cite{DPExp} in liquid crystals. It has been conjectured \cite{DPConjecture} that, in the absence of any special symmetry, an APT with a fluctuating scalar order-parameter belongs to DP. Models involving more than one species of particles can have interesting features \cite{MultDP, AbhikPK}. Some of these models also show multi-criticality in the sense that the densities of different species may vanish at the critical point following power-laws with different exponents. In the one-dimensional coupled directed percolation process \cite{MultDP}, where the transmutation is hierarchical, the order-parameter exponents for different species are found to be $\beta= 0.27, 0.11, \dots,$ with the first value being that of DP.
A similar feature has been observed numerically in the roughening transition occurring in growth models with adsorption and desorption at boundaries \cite{Mukamel}. In this article we show that simple diffusion of hardcore particles on a lattice can undergo a multi-critical absorbing phase transition when additional constraints or particle interactions are introduced. The model we investigate here is a variant of the assisted hopping models, where hardcore particles hop to one of the neighbours with rates that generally depend on the distance of the moving particle from its nearest occupied neighbour \cite{rdd, CTTP&CLG,assist1}; the steady state weights of some of these models are known exactly \cite{Oliviera,CLG_exact2, rdd}. We restrict ourselves to a special case, where the diffusion of particles is additionally constrained not to increase the inter-particle separation beyond a fixed positive integer $(n+1)$. The steady state weights of the models in this class, parameterized by the integer $n$, can be written in a matrix product form. This allows us to obtain the spatial correlation functions exactly. In particular, the density of $0$-clusters of size $0\le k<n$ vanishes at the critical point following power-laws with $k$-dependent exponents. Thus, the cluster density $\phi_k$ for each $k$ can be considered an order-parameter of the system, in addition to the natural order parameter $\rho_a$, namely the activity density. Our careful numerical study of the decay of $\phi_k$s from a natural initial condition \cite{FES, sourish}, which is hyperuniform \cite{hexner}, shows that the dynamical exponents $\alpha, \nu_t, z$ do satisfy scaling relations separately for each $k$.
\section{The Model} The model is defined on a one dimensional periodic lattice of size $L$ with sites labeled by $i=1,2\dots L.$ Each site can be occupied by at most one particle and correspondingly there is a site variable $s_i =1,0$ that represents the presence or absence of the particle at site $i.$ The dynamics of the model is given by \begin{eqnarray} 10^{k}10^{m}1 &\longrightarrow 10^{k+1}10^{m-1}1 & {\rm if} ~ k<n, m \ge1\cr &\longrightarrow 10^{k-1}10^{m+1}1 & {\rm if} ~ m<n, k \ge1 \label{eq:rate} \end{eqnarray} where a particle moves to the right or left vacant neighbour, chosen independently, if the move does not increase the inter-particle separation beyond $n+1$ ($n$ being a fixed integer parameter of the model). Clearly, the total number of particles $N = \sum_{i=1}^L s_i,$ or equivalently the density $\rho= N/L$, is conserved. A schematic description of the dynamics is given in Fig. \ref{fig:cartoon}. Alternatively, the dynamics of the model can be considered as constrained diffusion of hardcore particles. The constraint comes from the fact that the diffusing particle's distance, measured from the nearest particle, does not exceed $(n + 1).$ We further refer to this model as the constrained diffusion model (CDM). In fact, recently a similar assisted hopping model has been introduced and solved exactly \cite{rdd}, where particle hopping depends on the inter-particle separation but, unlike in the CDM, particles there can hop by one or {\it more} steps across the empty regions. In this constrained diffusion model, particles which are surrounded from both sides by other particles, or by $0$-clusters of size $\ge n,$ are inactive as they cannot move; all other particles are active. Thus, the system has many absorbing configurations where all particles are inactive. It is important to note that the dynamics allows the length of any $0$-cluster to decrease, but only those of length less than $n$ can grow. Thus it is evident that when $\rho \simeq 0$, i.e.
when the average separation between neighbouring particles is large, all the small $0$-clusters (size $<n$) of the system tend to grow in size until they reach the maximum $n$. In this case, the number of particles is not enough to reorganize the distances between the neighbouring particles below $(n+1)$, forcing the system to fall into an absorbing configuration. On the other hand, for large density the system has a large number of clusters of size $<n$ which would grow at the expense of the larger ones, but all of them cannot reach the maximum value $n$. Thus, all large clusters (size $>n$), if present in the initial state, would eventually be destroyed and the system remains active forever; this is surely the case when $\rho>\frac{1}{n+1}.$ Clearly one expects an absorbing phase transition to occur at some density $\rho\le \frac{1}{n+1}.$ We see later (in Eq. (\ref{eq:rho_c})) that the critical density is in fact $\rho_c=\frac{1}{n+1}.$ \begin{figure}\vspace*{.2 cm} \begin{center} \includegraphics[width=7 cm]{cartoon.eps} \caption{Schematic description of the model: Particles surrounded from both sides by other particles, or by $0$-clusters of size $\ge n$, are inactive, whereas all other particles are active. For $n=3$, the active particles of a typical configuration are marked as $l,r,a$ depending on whether they can move to the left, to the right, or in both directions. A $0$-cluster of size $>n$ (marked with a `$\{$') can appear in the initial condition of an active phase, but it eventually disappears as the system reaches the stationary state.} \label{fig:cartoon} \end{center} \end{figure} Let us consider the system with $\rho> \frac{1}{n+1}$ where the steady state is certainly active. The initial configurations of the system in this case may consist of several $0$-clusters of size $>n$, but all these configurations are {\it non-recurring}: once the system leaves them by destroying the large clusters, it never visits them again.
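As an illustration of the dynamics in Eq. (\ref{eq:rate}) (ours, not part of the original paper), a random-sequential-update simulation of the constrained hops can be sketched as follows; lattice size, density and update scheme are arbitrary choices made for the sketch:

```python
import random

def attempt_hop(s, n):
    """Pick a random particle and direction on the ring s (list of 0/1) and try one
    constrained hop; return True if the hop was actually performed."""
    L = len(s)
    i = random.choice([j for j in range(L) if s[j] == 1])
    d = random.choice([-1, 1])
    if s[(i + d) % L] == 1:
        return False                    # hardcore exclusion: target site occupied
    # the gap left behind grows by one; the rule forbids it exceeding n
    k = 1
    while s[(i - d * k) % L] == 0:
        k += 1                          # k - 1 is the gap behind the mover
    if k > n:
        return False                    # hop would create a 0-cluster longer than n
    s[i], s[(i + d) % L] = 0, 1
    return True
```

Started from two adjacent particles at density $1/6 < \rho_c = 1/3$ (for $n=2$), such a run conserves $\sum_i s_i$ and quickly freezes into an absorbing configuration where no further hop is allowed, in line with the argument above.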
The stationary state of the system only consists of configurations which are {\it recurring}, where all $0$-clusters are of size $n$ or less. Thereby, in the steady state, if the dynamics (\ref{eq:rate}) allows a particle to move from left to right it also allows the reverse, i.e. a move from right to left. Since both hopping rates are unity, the steady state satisfies the detailed balance condition with a stationary weight $w(C)=1$ for all recurring configurations. Thus, representing the configurations as $C \equiv \{10^{m_1}10^{m_2} \dots 10^{m_N}\},$ we have \begin{equation} w(\{10^{m_1}10^{m_2} \dots 10^{m_N}\} ) = \left\{ \begin{array}{cc} 1 & \forall ~m_i\le n \cr 0 & {\rm otherwise} \end{array}\right.\label{eq:weight} \end{equation} where the second line ensures that the steady state weights of the {\it non-recurring} configurations are zero. The corresponding probability is then \begin{equation} P_N(\{s_i\}) = \frac{w(\{s_i\})}{ \Omega_N}; \Omega_N= \sum_{\{s_i\}} w(\{s_i\} ) \delta(\sum_i s_i - N). \ee Here, $\Omega_N$ is the number of recurring configurations of a system of size $L$ having $N$ particles. It is customary to work in the grand canonical ensemble (GCE) where the density of the system can be tuned by a fugacity $z$; the partition function in the GCE is $Z = \sum _{N=0}^\infty \Omega_N z^N.$ To proceed further, we make an ansatz that the steady state weights of the configurations can be expressed in a matrix product form, \begin{equation} w(\{10^{m_1}10^{m_2} \dots 10^{m_N}\} ) = {\rm Tr}[ DE^{m_1}\dots DE^{m_N}], \ee where the matrices $D$ and $E$ represent $1,0$ respectively. All we need for the matrix formulation to work is to find a representation of $D$ and $E$ that correctly generates the steady state weights given by Eq. (\ref{eq:weight}).
The matrix formulation is very useful here, as one can simply set \begin{equation} E^m =0 ~ {\rm for} ~ m > n \label{eq:condition1} \ee to ensure that the probability of all non-recurring configurations is $0.$ Further, let us assume that the matrix $D = |\alpha\rangle\langle \beta|,$ where $|\alpha\rangle, \langle \beta|$ are yet to be determined. Now, the recurring configurations are equally likely if \begin{equation} \langle \beta| E^m |\alpha\rangle=1 ~ {\rm for} ~ 0\le m \le n. \label{eq:condition2} \ee Together, Eqs. (\ref{eq:condition1}) and (\ref{eq:condition2}) are satisfied by the following $(n+1)$ dimensional matrices \small{ \begin{equation} E=\sum_{k=1}^{n} |k\rangle\langle k+1|;~ |\alpha\rangle = \sum_{k=1}^{n+1} |k\rangle;~ | \beta \rangle = |1\rangle;~ D= |\alpha\rangle\langle \beta| \label{eq:matrices} \ee} Now, we can write a grand canonical partition function, \begin{equation} Z_L(z) = {\rm Tr}[T(z)^L] ~~ {\rm where}~~ T(z)= z D + E \ee where the fugacity $z$ controls the particle density $\rho.$ The weight of the configuration having no particles is ${\rm Tr}[E^L]=0$ for $L>n$ (from Eq. (\ref{eq:condition1})). Thus, $Z_L(z)$ is the sum of the weights of all other configurations which have at least one particle. {\small \begin{equation} Z_L(z) = z \sum_{k=1}^L{\rm Tr}\left[E^{k-1}D T^{L-k}\right]= z\sum_{k=1}^L\bra \beta T^{L-k} E^{k-1} \ket \alpha. \label{eq:ZL} \ee } For any specific $n,$ $Z_L(z)$ can be calculated explicitly.
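These conditions are easy to verify numerically. The following sketch (ours, not from the paper) builds the $(n+1)$-dimensional representation of Eq. (\ref{eq:matrices}) with plain Python lists and checks that the trace in the matrix product ansatz reproduces the weights of Eq. (\ref{eq:weight}):

```python
def matmul(A, B):
    # naive list-of-lists matrix product, sufficient for small (n+1)-dim matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def weight(gaps, n):
    """Tr[D E^{m_1} D E^{m_2} ... D E^{m_N}] for the configuration 10^{m_1}10^{m_2}...10^{m_N}."""
    d = n + 1
    E = [[1 if j == i + 1 else 0 for j in range(d)] for i in range(d)]  # shift matrix, E^{n+1} = 0
    D = [[1 if j == 0 else 0 for j in range(d)] for i in range(d)]      # |alpha><beta| with beta = |1>
    M = [[1 if i == j else 0 for j in range(d)] for i in range(d)]      # identity
    for m in gaps:
        M = matmul(M, D)
        for _ in range(m):
            M = matmul(M, E)
    return sum(M[i][i] for i in range(d))                               # trace
```

For $n=2$, every configuration whose $0$-clusters are all of size $\le 2$ gets weight $1$, and any configuration containing a $0$-cluster longer than $n$ gets weight $0$, as required.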
We prefer to use a generating function (or, equivalently, the partition function of the system in the variable length ensemble (VLE)), \begin{eqnarray} &&\mathscr Z(z,\gamma)= \sum_{L=1}^\infty \gamma^L Z_L(z) = \bra \beta\frac{\gamma z}{{\cal I}-\gamma T} \frac{1}{{\cal I}-\gamma E} \ket \alpha,\cr &&~~~= \gamma z \frac{ g'(\gamma)}{1- z g(\gamma)} ~;~ g(x) = \sum_{k=0}^n x^{k+1}= x \frac{x^{n+1} -1}{x-1} \label{eq:Zzg} \end{eqnarray} where $z$ and $\gamma$ together determine the macroscopic variables \begin{eqnarray} &&\langle L\rangle = \frac{\gamma}{\mathscr Z} \frac{\partial \mathscr Z}{\partial \gamma } = 1 + \gamma z \frac{g'(\gamma)}{1- z g(\gamma) } +\gamma \frac{g''(\gamma)}{g'(\gamma)}\cr &&{\rm and}~~~~\langle N\rangle = \frac{z}{\mathscr Z} \frac{\partial \mathscr Z} {\partial z}= \frac{1}{1- z g(\gamma)}. \label{eq:LN} \end{eqnarray} The thermodynamic limit $\langle L\rangle \to \infty$, where the VLE is expected to be equivalent to the GCE, corresponds to $ z \to 1/ g(\gamma),$ and in this limit the particle density is \begin{equation} \rho(\gamma) = \frac{\langle N\rangle}{\langle L\rangle} = \frac{1}{\gamma}\frac{g(\gamma)}{g'(\gamma)}. \label{eq:rho} \ee Since both $g(\gamma)$ and $\gamma g'(\gamma)$ are polynomials of order $(n+1)$, the density $\rho$ must be finite as $\gamma \to \infty$, which corresponds to the limit $z\to 0,$ as $z= 1/ g(\gamma).$ \begin{equation} \lim_{z\to 0} \rho(z) \equiv \lim_{\gamma \to \infty } \rho(\gamma) = \frac{1}{n+1} + \frac{1}{(n+1)^2} \frac{1}{\gamma} +\mathscr O ( \frac{1}{\gamma^2}) \label{eq:crit} \ee This proves that the critical density is \begin{equation} \rho_c= \frac{1}{n+1}, \label{eq:rho_c}\ee and the system goes to an absorbing state when $\rho<\rho_c.$ Further, Eq. (\ref{eq:crit}) indicates that, near the absorbing transition, \begin{equation} \gamma^{-1} \simeq (n+1)^2 (\rho-\rho_c). 
\ee In Fig. \ref{fig:n2_rho}(a) we have plotted $\rho$ as a function of $\gamma^{-1}$ for $n=2,$ where the inset shows $z \equiv g(\gamma)^{-1}$ as a function of $\gamma^{-1}$. Figure \ref{fig:n2_rho}(b) shows the plot of $\rho(z)$. Clearly, in the limit $z\to 0$, or equivalently when $\gamma\to \infty$, $\rho \to \frac 1 3$, indicating that an absorbing phase transition occurs at $\rho_c=\frac 1 3.$ \begin{figure}\vspace*{.2 cm}\begin{center} \includegraphics[width=7 cm]{n2C.eps} \caption{(a) For $n=2,$ the density $\rho$ and the fugacity $z= g(\gamma)^{-1}$ (inset) are shown as a function of $\gamma^{-1}$ following Eq. (\ref{eq:rho_n2}). The parametric plot of $\rho$ as a function of $z$ is shown in (b).} \label{fig:n2_rho}\end{center} \end{figure} \section{Multicriticality} At the critical density $\rho_c$ all $0$-clusters are of length $n.$ Thus as $\rho\to \rho_c$ from above, i.e. in the active phase $\rho> \rho_c$, the number of $0$-clusters having size $k<n$ must individually vanish. Defining the density of such clusters as $\phi_k,$ we have \begin{eqnarray} &&\phi_k=\langle 10^k 1 \rangle = \frac{\gamma^{k+2} z^2}{\mathscr Z(z,\gamma)} {\rm Tr}[ DE^kD \frac{1}{{\cal I}-\gamma T}]\cr &&~ = \frac{\gamma^{k+2} z^2}{\mathscr Z(z,\gamma)} \bra \beta E^k \ket \alpha \bra \beta\frac{1}{{\cal I}-\gamma T} \ket \alpha = \rho z \gamma^{k+1} \label{eq:phi_k} \end{eqnarray} for $0 \le k <n.$ Here, in the last step we have used the fact that \begin{equation} \bra \beta\frac{1}{{\cal I}-\gamma T} \ket \alpha = \frac{g(\gamma)}{\gamma -\gamma z g(\gamma)} ~{\rm and} ~ \bra \beta E^k \ket \alpha =1. \ee In the thermodynamic limit, $z \to g(\gamma)^{-1},$ we have \begin{equation} \phi_k =\rho\frac{ \gamma^{k+1}}{g(\gamma)} = \frac{ \gamma^{k}}{g'(\gamma)}\ee and in the critical limit $\gamma \to \infty$ (where $g(\gamma) \simeq \gamma^{n+1}$), \begin{eqnarray} \phi_k \simeq \frac{\gamma^{k-n}}{n+1} \simeq (n+1)^{2(n-k)-1} (\rho-\rho_c)^{\beta_k} ~;~ \beta_k = n-k. 
\label{eq:near_crit} \end{eqnarray} \begin{figure}\vspace*{.2 cm}\begin{center} \includegraphics[width=7 cm]{n2B.eps} \caption{(a) For $n=2,$ $\phi_k= \langle 10^k1\rangle$ are shown as functions of $\rho$ for $k=0,1,2.$ Clearly, $\phi_{0,1}$ vanish as $\rho\to \rho_c= 1/3$ whereas $\phi_2 \to (1-\rho_c)/2$. (b) Log-scale plot of $\phi_{0,1,2}$ as a function of $\rho-\rho_c$ gives slopes $\beta_k = 2-k$. The dashed line corresponds to the near-critical approximation of the $\phi_k$s, given by Eq. (\ref{eq:near_crit}).} \label{fig:n2}\end{center} \end{figure} In Fig. \ref{fig:n2}(a) we have plotted the $\phi_k$s for $n=2,$ as a function of the density $\rho$. Both $\phi_{0}$ and $\phi_1$ vanish as $\rho\to \rho_c=\frac{1}{3}$ and thus each of them can be considered as an order-parameter that describes the APT. However, $\phi_2$ does not vanish, and at the critical point $\phi_2= (1-\rho_c)/2$ because there is an exact correspondence $1-\rho= \sum_{k=0}^n k \phi_k$ which holds for any $n, \gamma.$ Also at $\gamma=1,$ which corresponds to density $\frac {2}{(n+2)}$ (from Eq. (\ref{eq:rho})), all $\phi_k$ take the same value $\frac 2{(n+1)(n+2)}$ (from Eq. (\ref{eq:phi_k})). Thus for $n=2,$ the $\phi_k$s cross each other at $\rho=\frac1 2.$ In Fig. \ref{fig:n2}(b) we have shown the $\phi_k$s as a function of $\rho- \frac1 3$ in log-scale; both $\phi_{0}$ and $\phi_1$ show power laws as a function of $\Delta = \rho-\rho_c$, suggesting that $\phi_{0,1} \sim \Delta^{\beta_{0,1}}$ with $\beta_0=2$ and $\beta_1=1.$ Coming back to general $n$, all the $\phi_k$ with $k=0,1, \dots n-1$ vanish as $\rho\to \rho_c$ following $\phi_k \simeq (\rho-\rho_c)^{\beta_k}$ with exponents $\beta_k = n-k.$ The natural question is then whether the other exponents associated with the $\phi_k$s get modified such that the standard scaling relations are obeyed. The answer is affirmative, as we will discuss in detail.
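As a quick numerical cross-check of Eqs. (\ref{eq:rho}) and (\ref{eq:phi_k}) (ours, not part of the paper), one can evaluate $\rho(\gamma)=g(\gamma)/\gamma g'(\gamma)$ and $\phi_k=\gamma^k/g'(\gamma)$ for $n=2$ and confirm both the critical density $\rho_c=1/3$ and the order-parameter exponents $\beta_k=n-k$:

```python
def g(x, n):
    # g(x) = x + x^2 + ... + x^(n+1)
    return sum(x**k for k in range(1, n + 2))

def g_prime(x, n):
    # g'(x)
    return sum(k * x**(k - 1) for k in range(1, n + 2))

def rho(gamma, n):
    # particle density as a function of gamma, Eq. (rho)
    return g(gamma, n) / (gamma * g_prime(gamma, n))

def phi(k, gamma, n):
    # density of 0-clusters of size k in the thermodynamic limit
    return gamma**k / g_prime(gamma, n)
```

For $n=2$, $\rho(\gamma)\to 1/3$ as $\gamma\to\infty$, and comparing two large values of $\gamma$ the ratio of $\phi_k$s scales as $(\rho-\rho_c)^{n-k}$, i.e. $\beta_0=2$ and $\beta_1=1$.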
But let us remind ourselves that, besides these $n$ observables $\phi_k$, there is a natural order-parameter $\rho_a,$ the density of active particles, which conventionally characterizes the APT. Since in the steady state the inactive particles have both of their neighbouring $0$-clusters of size $0$, or both of size $n,$ the density of active particles is \begin{eqnarray} \rho_a = \sum_{k_1,k_2=0}^n \psi_{k_1,k_2} -\psi_{0,0}-\psi_{n,n} \cr {\rm where} ~ \psi_{k_1,k_2} = \langle 10^{k_1}10^{k_2}1\rangle = \rho z^2 \gamma^{k_1+k_2+2}, \end{eqnarray} and $0\le k_1,k_2\le n.$ Now, for a thermodynamic system $z \to 1/g(\gamma)$ and in the critical limit (as $\gamma \to \infty$), \begin{eqnarray} \rho_a &=& \frac{\rho}{g(\gamma)^2} \left(g(\gamma)^2 - \gamma^2- \gamma^{2n+2}\right) \cr &\sim & \rho_c (\rho- \rho_c) + {\cal O} \left( (\rho- \rho_c)^2\right). \nonumber \end{eqnarray} Thus, the natural order-parameter exponent associated with $\rho_a$ is $\beta=1.$ \begin{figure} \vspace*{.2 cm}\begin{center} \includegraphics[width=6.5 cm]{corr.eps} \caption{For $n=2,$ the density correlation function $C(r)$ calculated from Monte Carlo simulations for $\rho=\frac{10}{29} \simeq 0.345$, a value close to the critical density $\rho_c=1/3,$ is compared with the analytical results calculated using Eqs. (\ref{eq:Cr}), (\ref{eq:theta}), (\ref{eq:rho_n2}). 
} \label{fig:corr}\end{center} \end{figure} To calculate the other static exponents $\nu$ and $\eta$ we study the correlation functions, first the density correlation function \begin{eqnarray} C(r)& =& \langle s_i s_{i+r+1}\rangle - \rho^2 \cr &=& \frac{\gamma^2 z^2}{\mathscr Z(z,\gamma)} \bra \beta (\gamma T)^{r} \ket \alpha \bra \beta \frac{1}{{\cal I}-\gamma T} \ket \alpha -\rho^2 \cr &=&\frac{\rho\gamma}{g(\gamma)} \bra \beta (\gamma T)^{r} \ket \alpha -\rho^2 \end{eqnarray} Similarly, correlations of the order-parameters can be calculated using the variables $s^k_i$, which take the value $1$ only when the $i$-th site is occupied and exactly $k$ sites to its right are vacant (thus, $\phi_k = \langle 10^k1\rangle= \langle s^k\rangle$), \begin{eqnarray} C_k(r) &=& \langle s^k_i s^k_{i+r+1}\rangle -\phi_k^2 = \frac{\gamma^{2k+4} z^4}{\mathscr Z(z,\gamma)} \bra \beta E^k \ket \alpha^2 \cr &&~~~\times \bra \beta (\gamma T)^{r} \ket \alpha \bra \beta \frac{1}{{\cal I}-\gamma T} \ket \alpha -\phi_k^2\cr &=& \frac{\rho\gamma^{2k+3}}{g^3(\gamma)} \bra \beta (\gamma T)^{r} \ket \alpha-\phi_k^2. \end{eqnarray} Clearly, the $r$-dependence of $C(r)$ and $C_k(r)$ comes from the same factor $\bra \beta (\gamma T)^{r} \ket \alpha$, and the detailed structure of these correlation functions depends on the eigenvalues of $T.$ The eigenvalues can be calculated explicitly for any given $n,$ but first let us extract some general results. The characteristic equation of $T$ is \begin{equation} \lambda^{n+1} -z \sum_{k=0}^n \lambda^k =0, \label{eq:char}\ee which is equivalent to $z g(\lambda) = \lambda^{n+2}.$ Since $g(x)$ satisfies the identity $g(\frac{1}{x}) = \frac{g(x)}{x^{n+2} }$, using $z= g(\gamma)^{-1}$ one can check that $\lambda= \gamma^{-1}$ is one of the solutions of the characteristic equation. 
Again, since the characteristic polynomial changes sign only once, Descartes' rule of signs implies that there is exactly one positive real eigenvalue; since $T$ is a nonnegative matrix, its spectral radius is itself an eigenvalue (Perron-Frobenius), so the largest eigenvalue of $T$ (in modulus) is $\lambda_{1}= 1/\gamma.$ Assuming that the eigenvalues $\{\lambda_k\}$ are ordered such that $\lambda_1 > |\lambda_2|\ge\dots \ge |\lambda_{n+1}|$ (moduli are taken as, generically, the eigenvalues can be complex), we write \begin{equation} \bra \beta T^{r} \ket \alpha = A_1 \left( \lambda_1^r + \sum_{k=2}^{n+1} A_k \lambda_k^{r}\right)\n \ee where the $A_k$ are constants, independent of $r$. Since the correlation function $C(r)$ vanishes in the $r\to \infty$ limit, we must have $A_1 = \rho g(\gamma)/\gamma,$ which results in the asymptotic forms of the correlation functions, \begin{equation} C(r)\simeq \rho^2 A_2 (\gamma \lambda_2)^{r}~;~ C_k(r)\simeq \phi_k^2 A_2 (\gamma \lambda_2)^{r}. \ee If $\lambda_2$ is complex, then $\lambda_3$ must be $\lambda_2^*$, because complex roots of real polynomials appear in conjugate pairs. Taking $\lambda_{2,3} = \bar \lambda e^{\pm i \theta},$ the correlation functions can be written as \begin{equation} C(r)\simeq \rho^2 A_2 (\gamma \bar \lambda)^{r} \cos(r\theta) ~;~ C_k(r)\simeq \phi_k^2 A_2 (\gamma \bar \lambda)^{r}\cos(r\theta). \label{eq:Cr} \ee Let us calculate the correlation functions explicitly for $n=2,$ where the eigenvalues of the transfer matrix $T=zD+E,$ with $z^{-1} =g(\gamma)= \gamma+ \gamma^2 + \gamma^3$ and $D,E$ given by Eq. 
(\ref{eq:matrices}) are \begin{equation} \lambda=\{\frac{1}{\gamma}, \bar\lambda e^{\pm i \theta} \}~;~\bar\lambda= \sqrt{\frac{\gamma}{g(\gamma)}}; \quad \tan(\theta) = \frac{\sqrt{3+ 2\gamma + 3\gamma^2}}{1+\gamma}\label{eq:theta} \ee This leads to \begin{equation} \rho = \frac{1+ \gamma + \gamma^2}{1+ 2 \gamma + 3 \gamma^2}\quad {\rm and} \quad \phi_k = \frac{\gamma^k}{1+ 2 \gamma + 3 \gamma^2}. \label{eq:rho_n2} \ee Thus in this case the spatial correlation functions show damped oscillations of period $2\pi/\theta.$ We calculate the density correlation function of the CDM with $n=2$ at density $\rho=\frac{10}{29} \simeq 0.345,$ which is close to the critical density $\rho_c=1/3,$ and plot $C(r)$ as a function of $r$ in Fig. \ref{fig:corr}. We compare this with the analytic result of Eq. (\ref{eq:Cr}), using $\gamma = 10.8$ (which corresponds to $\rho=\frac{10}{29}$ via Eq. (\ref{eq:rho_n2})). The oscillations are consistent with $\theta = 1.03$ calculated from Eq. (\ref{eq:theta}). It is important to note that, for any $n,$ all the $C_k(r)$s have the same $r$ dependence, suggesting a unique length scale $\xi = -1/\ln( \gamma \bar \lambda).$ At the critical point $(\gamma \to \infty),$ the eigenvalues $\lambda_k$ approach $\frac{1}{\gamma} e^{2\pi i k/{(n+1)}}$ and thus $|\lambda_k|/\lambda_1 \to 1,$ resulting in a diverging correlation length $\xi$. 
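These spectral statements can be verified numerically from the characteristic polynomial alone; the sketch below (Python; a consistency check, not part of the derivation) uses the $n=2$ cubic $\lambda^3-z(1+\lambda+\lambda^2)=0$. Since the product of its three roots equals $z$ and the positive root is $1/\gamma$, the complex pair has squared modulus $z\gamma=\gamma/g(\gamma)$; the code also checks the $\tan\theta$ formula of Eq. (\ref{eq:theta}) and the linear growth $\xi\sim\gamma$, i.e. $\nu=1$:

```python
import numpy as np

def spectrum(gamma):
    """Roots of the n = 2 characteristic equation lambda^3 - z(1+lambda+lambda^2) = 0."""
    z = 1.0 / (gamma + gamma**2 + gamma**3)   # z = 1/g(gamma)
    roots = np.roots([1.0, -z, -z, -z])
    real = np.abs(roots.imag) < 1e-10
    return roots[real].real.max(), roots[~real]   # positive root, complex pair

gamma = 10.8                                  # value used for Fig. corr
g = gamma + gamma**2 + gamma**3
lam1, pair = spectrum(gamma)

assert abs(lam1 - 1 / gamma) < 1e-9                  # lambda_1 = 1/gamma
assert abs(abs(pair[0])**2 - gamma / g) < 1e-9       # product of roots fixes the modulus
theta = np.pi - abs(np.angle(pair[0]))               # the pair lies in the left half-plane
assert abs(np.tan(theta) - np.sqrt(3 + 2*gamma + 3*gamma**2) / (1 + gamma)) < 1e-6

# Correlation length xi = -1/ln(gamma*lam_bar) grows linearly with gamma (nu = 1)
def xi(gamma):
    _, pair = spectrum(gamma)
    return -1.0 / np.log(gamma * abs(pair[0]))

slope = (np.log(xi(1e4)) - np.log(xi(1e3))) / np.log(10.0)
assert abs(slope - 1) < 0.01
```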
Near the critical point we may write, to leading order, $(1-\gamma \bar \lambda) \propto \frac{1}{\gamma};$ thus, the correlation length $\xi \sim \gamma \sim (\rho-\rho_c)^{-\nu},$ with $\nu=1.$ Also, since the correlation functions are expected to decay as $r^{-(d-2+\eta)},$ for this one dimensional model ($d=1$) we get $\eta =1.$ \begin{figure}\vspace*{.2 cm}\begin{center} \includegraphics[width=7 cm]{alpha_k.eps} \caption{ At the critical point the order-parameters $\phi_k(t)$ for $k=0,1,\dots ,n-1$ decay as $t^{-\alpha_k}$ where $\alpha_k= \frac{n-k}{2}.$ In (a) and (b) we show the decay of the $\phi_k(t)$s, from a natural initial condition (see text for details), for $n=2$ and $n=3$ respectively (the respective system sizes are $3\times 2^{14}$ and $2^{16}$). } \label{fig:alpha}\end{center} \end{figure} \begin{figure}\vspace*{.3 cm}\begin{center} \includegraphics[width=7 cm]{z.eps} \caption{ Scaling collapse of the order-parameters $\phi_{0,1}$ for $n=2$ following Eq. (\ref{eq:collapse_n2}). At the critical point, $\phi_k(t) t^{\alpha_k}$ is a universal function of $tL^{-z}$. (a) and (b) show the data collapse for $k=0$ and $k=1$ respectively, for system sizes $L= 300\times (1,2,4,8).$ Here we take $z=2$ and use $\alpha_k$ as a fitting parameter; data collapse is observed in (a) for $\alpha_0 =1.02$ and in (b) for $\alpha_1=0.5.$ } \label{fig:z}\end{center} \end{figure} Now let us turn our attention to the dynamic exponents at the critical point. At the critical point, every particle has exactly $n$ vacant sites to its right. If we add an extra particle, it breaks one of the $0$-clusters into two, each having size $<n,$ creating some active particles in the system. 
It is easy to see that these active particles perform unbiased random walks, exploring a typical region of size $ \sqrt t$ in time $t.$ Thus, the dynamic exponent is $z=2.$ Now, assuming the scaling relation $\nu_t= \nu z,$ we expect $\nu_t=2.$ Since the $\phi_k$s vanish at the critical point, it is natural to expect that their decay from an active initial condition follows a power-law, \begin{equation} \phi_k(t) \sim t^{-\alpha_k} ~; ~ \alpha_k = \frac{\beta_k}{\nu_t} = \frac{n-k}{2}. \ee Of course, we have assumed the scaling relations to hold here, while their validity has been questioned \cite{sourish,violation} in similar models. Thus it is necessary to verify from numerical simulations whether the scaling relations are indeed valid here. To measure the decay exponent at the critical density $\rho_c$ corresponding to any $\phi_k,$ one must carefully choose initial configurations with some nonzero $\phi_k$ which possess the natural correlations of the critical state. It has been argued \cite{FES, hexner} and verified in many models of APT \cite{sourish, GDP} that the critical absorbing state is hyperuniform, i.e., the variance of the density in the critical state is sub-linear in the volume (here the length $L$). Usually densities in hyperuniform states are anti-correlated, and thus it is useful to study the decay from configurations which already possess the natural correlations of the critical state. Such natural initial conditions can be generated following the prescriptions given in Ref. \cite{FES}. In the CDM, starting from the absorbing configuration of $1$s separated by $n$ zeros, we allow particles to diffuse stochastically for a very short time (say, $0.1$ MCS) to create an active state and then turn on the dynamics. The decay of $\phi_k(t)$ for $n=2$ and $3$ is plotted in Fig. \ref{fig:alpha}(a) and (b) respectively in log-scale; the data consistently show that $\alpha_k= (n-k)/2.$ We also calculate the dynamical exponent $z$ from the finite size corrections. 
At the transition point, \begin{equation}\phi_k(t,L) = t^{-\alpha_k} {\cal F}_k (\frac t{L^z}). \label{eq:collapse_n2}\ee Starting from the natural initial condition, we measure $\phi_k(t,L)$ for different $L$ and plot $\phi_k(t,L) t^{\alpha_k}$ as a function of $\frac t{L^z}$ in Fig. \ref{fig:z}(a) and (b), respectively for $k=0$ and $1$, taking $z=2.$ A good data collapse confirms that $z=2.$ Note that the fluctuations and the small deviation of $\alpha_0=1.02$ from the expected value $1$ can be attributed to the small numerical value of $\phi_0.$ \section{Mapping to misanthrope process} We must mention that the CDM can be mapped to a misanthrope process in one dimension \cite{EvansBeyondZRP}, where particles do not obey a hardcore restriction and hop, one at a time, from a site (usually called a box) to one of its neighbours with a rate that depends on the occupation numbers of both the departure and the arrival sites. In this mapping, $1$s are considered as boxes, each carrying exactly as many particles as the number of vacant sites in front of them. 
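Under this mapping the exponents $\beta_k$ become transparent. The sketch below (Python; a numerical check only, using the grand-canonical box weights $F(x)=\sum_{k=0}^n x^k$ and occupation probabilities $\phi_k=x^k/F(x)$ quoted below, with the illustrative choice $n=3$) confirms $\phi_k\sim(n-\eta)^{n-k}$:

```python
from math import log

n = 3
F   = lambda x: sum(x**k for k in range(n + 1))            # single-box weight F(x)
eta = lambda x: sum(k * x**k for k in range(n + 1)) / F(x) # mean particles per box
phi = lambda k, x: x**k / F(x)                             # P(box holds k particles)

def exponent(k, x1=1e3, x2=1e4):
    """Log-log slope of phi_k versus (n - eta) in the x -> infinity limit."""
    return (log(phi(k, x2)) - log(phi(k, x1))) / (log(n - eta(x2)) - log(n - eta(x1)))

# exponent(k) approaches n - k for k = 0, ..., n-1, matching beta_k of the CDM
```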
Thus, the dynamics of the CDM translates to hopping of a single particle from a box to a neighbour, with the restriction that the hop must not increase the occupation of the target box beyond $n.$ The system falls into an absorbing state (where all boxes contain $n$ or more particles) when the number of particles per box $\eta = (L-N)/N$ exceeds $\eta_c=n.$ In the active phase, thus, all boxes contain $\le n$ particles and the partition function in the GCE is $Z(x) = F(x)^N$ where $F(x)=\sum_{k=0}^n x^k.$ The corresponding density is then $\eta(x) = \frac{1}{F(x)} \sum_{k=0}^n k x^k.$ The order-parameters $\phi_k$ are simply the steady state probabilities that a box contains $k$ particles; $\phi_k = x^k/F(x)$ vanishes as $x^{k-n}$ in the $x\to \infty$ limit, or equivalently $ \phi_k\sim (n- \eta)^{n-k},$ as in this limit $\eta \sim n - 1/x.$ Although the $\phi_k$s can be calculated efficiently in the box-particle picture, it is rather difficult to calculate the correlation functions in general, as the information about particle ordering is lost in the mapping. In such cases, it is useful to write the steady state in matrix product form \cite{UrnaPK}, whenever possible. \section{Summary} In summary, we study diffusion of hardcore particles on a one dimensional periodic lattice, where particle movement is constrained such that the inter-particle separation cannot increase beyond $(n+1)$. Thus particles which are surrounded on both sides either by other particles or by $0$-clusters of size $\ge n$ are immobile or inactive, whereas all other particles are active. Initial distances between two neighbouring particles, if larger than $(n+1),$ can thus only decrease, provided one of the particles is active. This constrained diffusion model (CDM) undergoes an absorbing state phase transition when the density is lowered below a critical value $\rho_c= \frac 1 {n+1}.$ Interestingly, besides the activity density $\rho_a,$ the APT here can be characterized by the steady state densities of $0$-clusters of size $0\le k<n$ (i.e. 
$\phi_k = \langle 10^k1\rangle = \langle s^k_i\rangle)$ which vanish simultaneously at $\rho_c.$ We show that the steady state of the CDM can be written as a matrix product, which helps us obtain the static critical exponents exactly: as $\rho$ approaches $\rho_c$ from the active side, $\rho_a \sim (\rho-\rho_c)^\beta$ with $\beta=1,$ whereas the other order-parameters vanish as $\phi_k \sim (\rho-\rho_c)^{\beta_k}$ with $\beta_k= n-k.$ This multicritical behaviour is characterized by the correlation exponents $\nu =1=\eta,$ the same for all $\phi_k$s, as $\langle s^k_i s^k_{i+r} \rangle \sim e^{-r/\xi}$ with $\xi \sim (\rho-\rho_c)^{-1}.$ The steady state dynamics of the CDM in the active phase is only unbiased diffusion of particles, leading to a dynamical exponent $z=2.$ Thus, assuming that the scaling relations $\nu_t= z\nu$ and $\alpha = \beta/\nu_t$ hold, one expects that $\nu_t=2$ is independent of $k$ whereas $\alpha \equiv \alpha_k= (n-k)/2.$ We verified the scaling relations explicitly from careful Monte Carlo simulations of the model by measuring $z$ and $\alpha_k$ for $n=2,3.$ In these simulations, the major difficulty is to choose initial conditions that retain the natural correlations of the stationary state, which we overcome by using natural initial conditions \cite{FES}. Multicritical phase transitions are not specific to absorbing phase transitions; they have been observed in many other contexts. Some examples in equilibrium include eight-vertex solid-on-solid models \cite{Baxter,Huse}, the $N$-state Potts model \cite{McCoy}, antiferromagnetic spin chains \cite{Kedar}, etc. Such behaviour has also been observed in multi-species directed percolation processes \cite{MultDP} and in growth models with adsorption \cite{Mukamel}. In all these models, the critical point can be characterized by many order-parameters, each corresponding to a particular kind of order, but they all vanish at the same critical point. Exactly solvable models are a step forward in understanding the nature of the transition. 
It would be interesting to look for perturbations which could produce different ordered phases of the CDM at different densities. {\it Acknowledgement:} The authors acknowledge Amit K. Chatterjee for helpful discussions. PKM thankfully acknowledges financial support from the Science and Engineering Research Board, India (Grant No. EMR/2014/000719).
We offer you the best of services in astrology with an easy-to-use web-based interface. The intimacy turns into a fight with some elements of melodrama. Sensitive Pisces depends on Aquarius too much and wants him to demonstrate his love all the time. As a result, Aquarius feels a lack of freedom. So these two signs have different objectives. A relationship may seem promising in the beginning, but it won't be successful anyway. Are you really a Manglik?
Ward County () is a county in the state of North Dakota, United States. Its area is 5,325 km², and its population is 68,946 (2017). The county seat is the city of Minot. References
These Terms of Service (the "Agreement") are an agreement between Eclarian LLC ("Eclarian" or "us" or "our") and you ("User" or "you" or "your" or "subscriber" or "customer"). This Agreement sets the general terms and conditions of your use of the products and services available to you through Eclarian and the Eclarian.com website (collectively, the "Services"). By using these Services, you agree to be bound by this Agreement. Those who do not agree to abide by the terms and conditions of this Agreement are specifically not authorized to use or access the Services. All services provided by Eclarian are to be used for lawful purposes only. Transmission, storage, or presentation of any information, data or material in violation of any United States Federal, State or Local law is prohibited. This includes, but is not limited to: copyrighted material, material we judge to be threatening or obscene, material that jeopardizes national security, or material protected by trade secret or other laws. You agree to indemnify and hold harmless Eclarian from any claims resulting from the subscriber's use of the Services which damages the subscriber or any other party, including Eclarian. Content that is deemed to not meet these standards will be removed without prior notice to the subscriber. If Services are terminated due to a violation of these Terms of Service, absolutely no refund will be available to the customer. Eclarian allows a certain level of flexibility for customers to create their own passwords and account authorization practices and procedures. You are responsible for using strong, complex passwords and securing access to your account credentials. You must protect the information used to authenticate your account. Eclarian will not be liable for any damages, direct or indirect, that result from unauthorized account access, password compromise or hacking. Eclarian provides support to customers via phone, email, and in person at our office. 
Contact Eclarian via phone at 616-209-9134. Email your support request to support@eclarian.com. You may schedule a visit at our office using one of these methods. The company is located at 4675 32nd Ave, Suite 2, Hudsonville, MI 49426. If your support request requires paid support, Eclarian will inform you before the work begins, unless you have an existing support agreement in place that supersedes these general Terms of Service. The web hosting services provided by Eclarian are not normally subject to long-term contracts, unless otherwise specified by your specific agreement with Eclarian. Most services are considered "month to month" in nature. If a refund is requested, Eclarian will use reasonable efforts, best practices, and follow company policy in responding to all refund requests. Subscriber acknowledges that from time to time the services covered under these Terms of Service may be inaccessible or inoperable for various reasons, including maintenance or upgrades, administrative actions, malfunctions, and causes beyond Eclarian's control or which are not reasonably foreseeable by Eclarian, including the interruption or failure of telecommunications or digital transmission links, hostile hackers and network attacks, or network congestion or other failures. Eclarian will seek to provide 24 hours' advance notice to subscribers for scheduled maintenance and downtime. Eclarian has no responsibility for downtime resulting from a user's actions. If a customer payment is late, a late fee of $25 may be added to the customer's account. Eclarian reserves the right to add late fees if invoices are left unpaid past the due date in any way deemed necessary to facilitate collection. A returned check penalty fee of $25 or more may be charged for any check dishonored by the customer's bank or financial institution. A credit card charge back fee of $25 or more may be charged to any customer's account for any charge back received by their bank or financial institution. 
If the customer wishes a fee to be waived, the customer must submit a support ticket with documentation as to why the fee should be waived, and Eclarian will determine (at our sole discretion) whether to waive the fee. Late payments may result in temporary suspension or termination of services. If a customer does not pay an invoice by the due date listed on the invoice, Eclarian may suspend or terminate the customer's services without warning or notification. Eclarian is not responsible for interruption in services due to unpaid invoices and account balances. Eclarian may wait to un-suspend an account until a check clears, if the customer pays by check. Eclarian may suspend or terminate customer accounts due to overdue payment or unpaid invoices. If you as the customer fail to pay an invoice before the due date listed on the invoice, Eclarian reserves the right to suspend or disconnect service(s) without further warning or notification. If services are disconnected for non-payment, the customer must pay all past due charges and late fees in order to reconnect or un-suspend service. If a customer account is suspended or terminated for non-payment, the customer may be charged a reconnection fee to re-enable the account(s). Eclarian shall not be liable to the customer for harm caused by or related to the customer's services or inability to utilize the services, unless caused by willful misconduct. Eclarian shall not be liable to the customer for lost profits, or for indirect, special, incidental, consequential or punitive damages. Eclarian provides all products and services "As Is," without warranty of any kind, whether express or implied, and disclaims all implied warranties, including but not limited to the implied warranties of merchantability and fitness for a particular purpose. The customer shall be solely responsible for the selection, use, and suitability of any product or service chosen by the customer. 
Customer is solely responsible for ensuring the secure nature of passwords, credentials, accounts, and server security. These Terms of Service are subject to change without any prior notification. By opening and using an account or any Eclarian service, you agree to these Terms of Service. All prices are nonrefundable and nonnegotiable. You can review the most current version of the Terms of Service at any time on this web page. We reserve the right, at our sole discretion, to update, change or replace any part of these Terms of Service by posting updates and changes to our website. It is your responsibility to check our website periodically for changes. Your continued use of or access to our website or the Service following the posting of any changes to these Terms of Service constitutes acceptance of those changes. Customer hereby agrees to indemnify, defend, and hold harmless Eclarian and its respective successors and assigns from and against any loss, expense, cost (including reasonable attorney and other professional fees), or damages of whatsoever kind or nature incurred or suffered by reason of any breach by the other party of any of the covenants, obligations or agreements contained in these Terms of Service. © Eclarian LLC. All rights reserved.
The 2017 Red Bull Air Race World Series was the twelfth season of the Red Bull Air Race World Championship. Aircraft and pilots Master Class Pilot changes Former champion Nigel Lamb retired from the sport after the final round of the 2016 season. The 2015 Challenger Class champion Mika Brageot made his debut in the Master Class. Challenger Class All Challenger Cup pilots used the Extra 330LX. Race calendar and results On 16 December 2016 it was announced that the first round would be held in Abu Dhabi, on the Persian Gulf, on 10–15 February. Championship standings Master Class Challenger Class References External links Air sports
\section{Introduction} The abundance of hosts of a vector-borne disease could influence the dilution or amplification of the infection. In \cite{keesing2010impacts}, the authors discuss several examples where loss of biodiversity increases disease transmission. For instance, West Nile virus is a mosquito-transmitted disease and it has been shown that there is a correlation between low bird density and amplification of the disease in humans \cite{allan2009ecological,swaddle2008increased}. One of the suggested explanations of this phenomenon is that the competent hosts persist as biodiversity is lost, while the density of the species that reduce pathogen transmission declines. This is the case of Lyme disease in North America, which is transmitted by the blacklegged tick \textit{Ixodes pacificus}. The disease has the white-footed mouse \textit{Peromyscus leucopus} as a competent host, which is abundant in both low-diversity and high-diversity ecosystems. On the other hand, the opossum \textit{Didelphis virginiana}, which is a suboptimal host and acts as a buffer of the disease, is scarce in low-diversity forests \cite{keesing2009hosts,LoGiudice2008impact}. Symmetrically, the dilution effect hypothesizes that increases in the diversity of host species may decrease disease transmission \cite{ostfeld2000biodiversity}. The diluting effect of the individual and collective addition of suboptimal hosts is discussed in \cite{johnson2010diversity}. For example, the transmission of \textit{Schistosoma mansoni} to the target snail host \textit{Biomphalaria glabrata} is diluted by the inclusion of decoy hosts. These decoy hosts are individually effective at diluting the infection. However, it is interesting to note that their combined effect is less than additive \cite{johnson2012parasite,johnson2009community}. The objective of this paper is to study the behavior of a vector-borne disease with multiple hosts when changes in biodiversity occur. 
More precisely, we present a mathematical framework that simultaneously explains why the cumulative effect of decoy hosts is less than additive and how competent and resilient hosts amplify the disease. To model a vector-borne disease with multiple hosts we use a dynamical system based on \cite{dobson2004population}. We suggest a mathematical interpretation of competent and suboptimal hosts using the basic reproductive number of the cycle formed by the host and the vector. Furthermore, we assume that the abundances of the hosts follow a conservation law given by community constraints, with which we attempt to capture how a disturbance of the ecosystem leads to changes in the densities of the hosts. We also give a mathematical interpretation of what a resilient species is using the conservation law. In this way, we are able to measure the effect on the dynamics of the disease of different changes in biodiversity. We show that in the case of endemic diseases these effects are determined by the effectiveness of the hosts in transmitting the disease and the resistance of the hosts to biodiversity changes. In section \ref{smodel} we present the variables and the equations of the model. Section \ref{sresults} is divided into three subsections. In subsection \ref{scompetent} we derive some properties of the basic reproductive number and we show how an endemic state implies the existence of a competent host. From these properties we explain why the combined effect of decoy hosts is less than additive and how biodiversity loss can entail amplification of the disease. Subsection \ref{sconstraints} introduces the community constraints that lead us to a definition of a resilient host. In subsection \ref{singlecompetent} we consider the case of an endemic disease with a unique competent host. We discuss the conclusions from our results in section \ref{sconclusions}. The mathematical justifications are given in the Appendix, section \ref{sappendix}. 
\section{The model} \label{smodel} We propose a mathematical model of a vector-borne disease that is spread among a vector $V$ and hosts $H_i$, $i= 1, \ldots,k$. We suppose that each population is divided into susceptible individuals ($S_V$ susceptible vectors and $S_{H_i}$ susceptible hosts) and infectious individuals ($I_V$ infectious vectors and $I_{H_i}$ infectious hosts). Let $N_V$ and $N_{H_i}$ represent the total abundances of vectors and hosts respectively. The dynamics of the disease will be studied by means of the basic reproductive number, as we are interested in the capacity of a pathogen to spread in an ecosystem. Modification of the ecosystem entails changes in the abundances of the hosts. After these changes are brought about, the ecosystem will settle into a stable pattern of constant abundances. We are interested in understanding the basic reproductive number when the ecosystem reaches these steady states. Therefore we will assume that the abundances of the vector and hosts are constant in time, i.e. $\dot{N_V} = \dot{S_V} + \dot{I_V}=0$ and $\dot{N_{H_i}} = \dot{S_{H_i}} + \dot{I_{H_i}}=0$ for $i=1,\ldots,k$. In that way, it suffices to consider as state variables only the numbers of infectious individuals. We define the total number of hosts as $N_H= \sum_{i=1}^{k}N_{H_i}$. Our model is a system of ordinary differential equations for the infectious populations of hosts and vectors: \begin{equation} \begin{cases} \dot{I_{H_i}} = \beta_{VH_i} I_V \dfrac{S_{H_i}}{N_{H}} - \delta_{H_i} I_{H_i},\quad i = 1, \ldots, k\\ {}\\ \dot{I_V} = \sum_{i=1}^k \beta_{H_iV} I_{H_i}\dfrac{N_{H_i} }{N_H}\dfrac{S_V}{N_V} - \delta_V I_V.\\ \end{cases} \label{ecompleto} \end{equation} We assume frequency-dependent transmission and that the vector does not have a preference for a specific host; hence the number of contacts between the vector and the hosts is diluted by the total population of hosts. 
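The threshold behavior of this system can be illustrated numerically. The sketch below (Python; the parameter values are hypothetical, chosen only for illustration) linearizes (\ref{ecompleto}) at the disease-free equilibrium ($I_V=I_{H_i}=0$, $S_{H_i}=N_{H_i}$, $S_V=N_V$) and checks that its stability agrees with the condition $\mathcal{R}_0<1$ for the next-generation $\mathcal{R}_0$ quoted in the next subsection:

```python
import numpy as np

def dfe_check(beta_VH, beta_HV, delta_H, delta_V, D):
    """Compare DFE linear stability with the next-generation threshold R0 < 1."""
    R0_sq = np.sum(beta_VH * beta_HV / (delta_V * delta_H) * D**2)
    k = len(D)
    J = np.zeros((k + 1, k + 1))        # state ordering: (I_H1, ..., I_Hk, I_V)
    J[:k, :k] = -np.diag(delta_H)       # removal of infectious hosts
    J[:k, k] = beta_VH * D              # infection of hosts by vectors
    J[k, :k] = beta_HV * D              # infection of vectors by hosts
    J[k, k] = -delta_V                  # removal of infectious vectors
    stable = np.max(np.linalg.eigvals(J).real) < 0
    return R0_sq, stable

D = np.array([0.5, 0.5]); dH = np.array([1.0, 1.0])

# Subcritical example: both hosts transmit weakly
r2, s = dfe_check(np.array([0.8, 0.3]), np.array([0.8, 0.3]), dH, 1.0, D)
assert r2 < 1 and s

# Supercritical example: one strongly competent host destabilizes the DFE
r2, s = dfe_check(np.array([4.0, 0.3]), np.array([4.0, 0.3]), dH, 1.0, D)
assert r2 > 1 and not s
```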
We also assume that there are no intraspecies infections and that there is no interspecies infection between hosts, or that these are negligible. Therefore, the only means of infection is through contact with the vectors, as Fig. \ref{complete} shows. \begin{figure}[!hbp] \centering \includegraphics[scale = 0.5]{hosts} \caption{The node $V$ represents the infectious vector and the nodes $H_i, i=1,\ldots,k $ represent the infectious reservoirs.} \label{complete} \end{figure} The parameters of the model are presented in Table \ref{param1}. Note that we could alternatively assume that infected hosts gain immunity after recovering. In such a case the model would yield the same next-generation matrix (see Appendix \ref{ssngm}), and since our analysis depends entirely on this matrix we would obtain the same results. \begin{table} \centering \begin{tabular}{lll} \hline\noalign{\smallskip} Parameter & Definition & Units \\ \noalign{\smallskip}\hline\noalign{\smallskip} $\beta_{VH_i}$ & Transmission rate from $V$ to $H_i$ & $[H_i]/([time]*[V])$ \\ & in the cycle formed by $V$ and $H_i$ & \\ {}\\ $\beta_{H_iV}$ & Transmission rate from $H_i$ to $V$ & $[V]/([time]*[H_i])$\\ & in the cycle formed by $V$ and $H_i$ & \\ {}\\ $\delta_V$ & Mortality rate of infected vectors & $1/[time]$ \\ {}\\ $\delta_{H_i}$ & Mortality rate of infected hosts $H_i$ & $1/[time]$ \\ \noalign{\smallskip}\hline \end{tabular} \caption{Parameters of the model described by equations (\ref{ecompleto}).}\label{param1} \end{table} \section{Results} \label{sresults} \subsection{Properties of the basic reproductive number and the existence of competent hosts}\label{scompetent} We define the basic reproductive number $\mathcal{R}_0^{H_i}$ of the cycle formed by host $H_i$ and the vector $V$ by \begin{equation*} (\mathcal{R}_0^{H_i})^2= \dfrac{\beta_{VH_i}}{\delta_V}\dfrac{\beta_{H_iV}}{\delta_{H_i}}. 
\end{equation*} The quantity $\mathcal{R}_0^{H_i}$ is the basic reproductive number of the epidemiological model (\ref{ecompleto}) when $N_H=N_{H_i}$. It corresponds to the average number of secondary cases produced by a single infected host $H_i$ in an otherwise susceptible population when the only cycle taken into account is the interaction between $V$ and $H_i$. In this setting, the infection will spread in the population if $\mathcal{R}_0^{H_i} > 1$, and it will disappear if $\mathcal{R}_0^{H_i}<1$. Therefore, we say that a host $H_i$ is competent if $\mathcal{R}_0^{H_i} \geq 1$ and suboptimal if $\mathcal{R}_0^{H_i}<1$. In general, taking into account all cycles, if $D_i = \frac{N_{H_i}}{N_H}$ is the density of the host $H_i$ in the total population of hosts, then the basic reproductive number $\mathcal{R}_0$ of the whole system is given by \begin{equation*} \mathcal{R}_0^2 = \sum_{i=1}^k (\mathcal{R}_0^{H_i})^2 D_i^2 \end{equation*} (see (\ref{r0msimple}) in the Appendix). Note that this implies that the combined effect of decoy hosts is less than additive. The quantity $\mathcal{R}_0$ is a convex function of $D_1, \ldots, D_{k}$. We have $D_i \geq 0$ for $i = 1, \ldots, k$ and $\sum_{i=1}^kD_i = 1$. Using Lagrange multipliers, we obtain that the minimum value of $\mathcal{R}_0$ is attained at $(D_1^* , \ldots, D_k^*)$, where $$ (\mathcal{R}_0^{H_1})^2D_1^*= \ldots = (\mathcal{R}_0^{H_k})^2D_k^*.$$ Therefore, we have $$ D_i^* = \dfrac{\frac{1}{ (\mathcal{R}_0^{H_i})^2}}{\sum_{j=1}^k \frac{1}{ (\mathcal{R}_0^{H_j})^2}}\quad\textrm{ for } i=1,\ldots,k $$ and \begin{equation}\label{eharmonic} (\mathcal{R}_0)_{\min}^2 = \dfrac{1}{\sum_{j=1}^k \frac{1}{ (\mathcal{R}_0^{H_j})^2}}= \dfrac{1}{k}H\left((\mathcal{R}_0^{H_1})^2, \ldots, (\mathcal{R}_0^{H_k})^2\right), \end{equation} where $H\left((\mathcal{R}_0^{H_1})^2, \ldots, (\mathcal{R}_0^{H_k})^2\right)$ is the harmonic mean of $(\mathcal{R}_0^{H_1})^2, \ldots, (\mathcal{R}_0^{H_k})^2$. 
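The Lagrange computation can be sanity-checked numerically. The sketch below (Python; the values of the $(\mathcal{R}_0^{H_i})^2$ are hypothetical) verifies that $D^*$ attains $(\mathcal{R}_0)^2_{\min}=1/\sum_j (\mathcal{R}_0^{H_j})^{-2}$ and that random points of the simplex never do better:

```python
import numpy as np

rng = np.random.default_rng(0)
R2 = np.array([4.0, 0.25, 1.0])       # hypothetical (R_0^{H_i})^2 for k = 3 hosts

D_star = (1 / R2) / np.sum(1 / R2)    # claimed minimizer, D_i* ~ 1/(R_0^{H_i})^2
R0sq_min = 1 / np.sum(1 / R2)         # harmonic-mean formula of Eq. (eharmonic)

R0sq = lambda D: np.sum(R2 * D**2)    # R_0^2 as a function of the host densities
assert abs(R0sq(D_star) - R0sq_min) < 1e-12

# Random points of the simplex (D_i >= 0, sum D_i = 1) never beat the minimum
for _ in range(1000):
    assert R0sq(rng.dirichlet(np.ones(len(R2)))) >= R0sq_min - 1e-12
```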
From the properties of the harmonic mean we have $$ \underset{i=1, \ldots, k}{\min}\{(\mathcal{R}_0^{H_i})^2\} \leq H\left((\mathcal{R}_0^{H_1})^2, \ldots, (\mathcal{R}_0^{H_k})^2\right) \leq k\underset{i=1, \ldots, k}{\min}\{(\mathcal{R}_0^{H_i})^2\}.$$ Using (\ref{eharmonic}), we obtain \begin{equation*} \frac{1}{k}\underset{i=1, \ldots, k}{\min}\{(\mathcal{R}_0^{H_i})^2\} \leq (\mathcal{R}_0)_{\min}^2 \leq \underset{i=1, \ldots, k}{\min}\{(\mathcal{R}_0^{H_i})^2\}. \end{equation*} From the last inequalities we can observe the following. First, the presence of a reservoir with $\mathcal{R}_0^{H_i}<1$ implies that $ (\mathcal{R}_0)_{\min}<1$. Hence, in some cases we may have $\mathcal{R}_0<1$. Furthermore, from (\ref{eharmonic}) we obtain that the larger the number of hosts, the smaller the basic reproductive number can be. This explains how high biodiversity can lead to the dilution of the disease. On the other hand, if all the reservoirs are effectively transmitting the disease ($\mathcal{R}_0^{H_i}\gg 1, i = 1, \ldots, k$) and there are few hosts ($k$ is small), then $\mathcal{R}_0>1$. This explains why, when competent host species thrive as a result of biodiversity loss, we can expect amplification of the disease, as discussed in \cite{keesing2010impacts} for the case of Lyme disease \cite{keesing2009hosts,LoGiudice2008impact} and the Nipah virus \cite{epstein2006nipah}. Furthermore, as the function $(\mathcal{R}_0)^2(D_1, \ldots, D_k)$ is convex, we have \begin{equation*} \mathcal{R}_0^2 \leq \underset{i=1, \ldots, k}{\max}\{(\mathcal{R}_0^{H_i})^2\}. \end{equation*} This inequality implies that the disease cannot be amplified beyond the basic reproductive number of the most competent host. We obtain the following theorem. \begin{theorem} \label{tcompetent} There exist values of $D_1, \ldots, D_k$ for which $\mathcal{R}_0\ge 1$ if and only if $$ (\mathcal{R}_0)_{\min} < 1 < \mathcal{R}_0^{H_i},$$ for some $i$.
In particular, under the assumption of model (\ref{ecompleto}), the endemicity of a disease implies the existence of a competent host. \end{theorem} Figure \ref{dobsonf} represents a contour plot of $\mathcal{R}_0 $ in the case of two hosts. \begin{figure}[H] \centering \includegraphics[scale = 0.3]{nivel} \caption{Contour plot of $\mathcal{R}_0 $ for a two-host system with a competent host (horizontal axis) and a suboptimal host (vertical axis). On the red line $\mathcal{R}_0 $ attains its minimum value, and $\mathcal{R}_0 $ increases as we move away from the red line.} \label{dobsonf} \end{figure} \subsection{Community constraints}\label{sconstraints} In this section we will take host interaction into consideration using community constraints. First, we will consider the case when the host abundances follow linear constraints. Second, we will show that in the study of small changes in the abundances we can linearize general constraints. \subsubsection{Linear case} Let us assume that the abundances of the hosts $N_{H_1},\ldots, N_{H_k}$ satisfy $k-1$ linear constraints: \begin{equation*} \sum_{j=1}^k a_{ij} N_{H_j} + b_i = 0,\quad\textrm{ for } i =1,\ldots, k-1, \end{equation*} for some constants $a_{ij}$, $b_i$. If the matrix $(a_{ij})_{1\le i,j\le k-1}$ is nonsingular, the abundance of every host can be expressed in terms of the abundance of the host $H_k$: \begin{equation}\label{linconstraints} N_{H_i} = -A_i N_{H_k} + B_i, \quad\textrm{ for } i=1,\ldots, k-1, \end{equation} for some constants $A_i$, $B_i$. In particular, if $A_i > 0$ in (\ref{linconstraints}), then $N_{H_k}$ increases as $N_{H_i}$ decreases. Moreover, when $A_i=-\dfrac{dN_{H_i}}{dN_{H_k}}>1$ the changes in $N_{H_i}$ are more pronounced than the changes in $N_{H_k}$. Therefore, we say that the host $H_k$ is resilient if $A_i>1$ for $i=1,\ldots, k-1$ and it is non-resilient if $0<A_i < 1$ for $i=1,\ldots, k-1$.
We have \begin{equation*} \frac{d \mathcal{R}_0}{d N_{H_k}}=D_{\mathbf{u}}\mathcal{R}_0 = {\sum_{i=1}^k u_i r_i}, \end{equation*} where $\mathbf{u}=(-A_1, \ldots, - A_{k-1},1 )$ and $r_i=\dfrac{\partial \mathcal{R}_0}{\partial N_{H_i}}= \dfrac{1}{N_H \mathcal{R}_0} \left((\mathcal{R}_0^{H_i})^2 D_i - \mathcal{R}_0^2\right) $.\\ We define the index \begin{equation*} \Gamma_k = \frac{N_{H_k}}{\mathcal{R}_0} \frac{d \mathcal{R}_0}{d N_{H_k}}. \end{equation*} The index $\Gamma_k$ measures the sensitivity of $\mathcal{R}_0$ to changes in the population $N_{H_k}$. \subsubsection{General constraints}\label{Gc} Let us assume that the abundances of the hosts $\mathbf{N}=(N_{H_1},\ldots, N_{H_k})$ satisfy the $m$ community constraints: \begin{equation*} \mathbf{F}(\mathbf{N}) = (F_1(\mathbf{N}), \ldots, F_m(\mathbf{N})) = (0, \ldots, 0)=\mathbf{0}, \end{equation*} for some $m < k$. Here $F_1,\ldots,F_m$ are real-valued differentiable functions defined wherever the values of $\mathbf{N}$ make biological sense. Let $E$ be the set of such values of $\mathbf{N}$ where the community constraints are satisfied and let $\mathbf{N}_0\in E$. Under suitable conditions (see subsection \ref{jd} in Appendix), we have $$N_{H_i} = g_i(N_{H_{m+1}}, \ldots , N_{H_k})\quad\textrm{ for } i=1,\ldots,m,$$ for some functions $g_1, \ldots, g_m$ and for $\mathbf{N}\in E$ close to $\mathbf{N}_0$. The derivatives $\dfrac{\partial g_i}{\partial N_{H_j}}$, $i = 1, \ldots, m$, $j=m+1,\ldots,k$ can be computed in terms of the derivatives of the functions $F_1, \ldots, F_m$. If $m=k-1$ and $\dfrac{\partial \mathcal{R}_0}{\partial N_{H_k}}(\mathbf{N}_0)\ne 0$ in a neighborhood of $\mathbf{N}_0$, then we have $$N_{H_i} = g_i(N_{H_k})\quad\textrm{ for } i=1, \ldots, k-1.$$ Moreover, for all $\mathbf{N}\in E$ close to $\mathbf{N}_0$ we have the approximation $$N_{H_i} = g_i(N_{H_k}) \approx -A_i N_{H_k} + B_i,$$ for some constants $A_i$, $B_i $ (see Appendix). Thus, locally we can consider linear restrictions as in (\ref{linconstraints}).
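The directional derivative $D_{\mathbf{u}}\mathcal{R}_0=\sum_i u_i r_i$ and the index $\Gamma_k$ can be sanity-checked against a finite-difference computation. The sketch below uses a hypothetical three-host system with two linear constraints; all numerical values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical three-host example with linear constraints
# N_{H_i} = -A_i N_{H_k} + B_i for i = 1, ..., k-1.
r = np.array([0.7, 0.9, 1.4])    # R0^{H_i}
A = np.array([1.5, 2.0])         # slopes A_i; A_i > 1 makes H_k resilient
B = np.array([400.0, 500.0])

def populations(Nk):
    return np.append(-A * Nk + B, Nk)

def R0(Nk):
    D = populations(Nk) / populations(Nk).sum()
    return np.sqrt(np.sum((r * D) ** 2))

Nk = 120.0
N = populations(Nk)
NH = N.sum()
D = N / NH
R = R0(Nk)

# r_i = ((R0^{H_i})^2 D_i - R0^2) / (N_H R0) and u = (-A_1, ..., -A_{k-1}, 1).
ri = (r**2 * D - R**2) / (NH * R)
u = np.append(-A, 1.0)
analytic = np.dot(u, ri)         # D_u R0 = sum_i u_i r_i

# Finite-difference derivative along the constraint set.
h = 1e-5
numeric = (R0(Nk + h) - R0(Nk - h)) / (2 * h)
assert np.isclose(analytic, numeric, rtol=1e-4)

# Sensitivity index Gamma_k = (N_{H_k} / R0) * dR0/dN_{H_k}.
Gamma_k = Nk / R * analytic
```

The agreement between `analytic` and `numeric` confirms that the total derivative along the constraint is indeed $\sum_i u_i r_i$ for this configuration.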
\subsection{The case of a single competent host}\label{singlecompetent} In this section we consider the case of an endemic disease. Theorem \ref{tcompetent} implies the existence of a competent host in this setting. We will show that, when this competent host is unique, an increase in its density implies amplification of the disease if the densities of the remaining hosts decrease. This corresponds to the case when there is a unique host that thrives with biodiversity loss and this host is competent. \begin{theorem}\label{t1competent} We assume that $\mathcal{R}_0^{H_i}<1$ for $i =1, \ldots, k-1$ and $\mathcal{R}_0^{H_k}>1$. Let $D_1, \ldots,D_k $ be such that $\mathcal{R}_0 \geq 1$. Then $$\dfrac{\partial \mathcal{R}_0}{\partial D_i} < 0\quad\textrm{ for } i=1,\ldots,k-1$$ and $$\dfrac{\partial \mathcal{R}_0}{\partial D_k} > 0.$$ In particular, under the assumption of model (\ref{ecompleto}), in the case of an endemic disease with a unique competent host, an increase in its density together with a decrease in the densities of all other hosts implies amplification of the disease. \end{theorem} \begin{proof} See section \ref{onecompetent} in Appendix. \end{proof} \begin{corollary} Under the assumption of model (\ref{ecompleto}), in the case of an endemic disease with a unique competent and resilient host, an increase in its density implies amplification of the disease. \end{corollary} Let us assume \begin{equation*} N_{H_i} = -A N_{H_k} + B_i \quad\textrm{ for } i = 1,\ldots, k-1, \end{equation*} for some constants $A$, $B_i$. If $A>1$, then the host $H_k$ is resilient and, the greater $A$ is, the more resilient $H_k$ is. We have that $\Gamma_k$ is an increasing function of $A$ (see subsection \ref{jd} in Appendix). Furthermore, taking $D_k$ and $A$ large, we have \begin{equation}\label{egammak} \Gamma_k \approx (k-1)A. \end{equation} Hence $ \Gamma_k $ increases as $k$, $D_k$ and $A$ increase.
This implies that the more resilient the host $H_k$ is, the greater its effect on $\mathcal{R}_0$ is when this host is abundant. The case $k=2$ is represented in Fig. \ref{const2}. \begin{figure}[H] \subfloat[]{\includegraphics[scale=0.325]{region2}} \subfloat[]{\includegraphics[scale=0.25]{n2roo}}\\ \centering{\subfloat[]{\includegraphics[scale=0.25]{d2ro}}} \caption{In figure (a) the host $H_2$ is the competent host (the slope of the red line is less than one). The blue and the black lines represent the linear community constraints. Along the blue line the host $H_2$ is non-resilient, whereas along the black line the host $H_2$ is resilient. In figure (b) the blue graph represents $\mathcal{R}_0$ when host $H_2$ is competent and non-resilient and the black graph represents the case when $H_2$ is competent and resilient. Close to the intersection of these graphs (where $ D_2 = 0.9$ and $ D_1=0.1$) the derivative $\dfrac{d\mathcal{R}_0}{dN_{H_2}}$ of the black graph is greater than that of the blue one. Moreover, $\Gamma_2$ is greater for the black graph than for the blue one, as predicted by (\ref{egammak}). In this simulation $\mathcal{R}_0^{H_1} = 2/3$, $\mathcal{R}_0^{H_2} = 4/3$. In figure (c), on the left side of the dashed red line (where the minimum of $\mathcal{R}_0$ is attained), there is dilution of the disease if $D_2$ increases. On the right side of the red line, there is amplification of the disease as $D_2$ increases. Moreover, the amplification of the disease above the dashed purple line (where $\mathcal{R}_0\geq 1$) is ensured by Theorem \ref{t1competent}.} \label{const2} \end{figure} \section{Conclusions} \label{sconclusions} In this paper we present a mathematical framework that explains how changes in biodiversity can lead to the dilution or amplification of a disease.
We show that the square of the basic reproductive number of the whole ecosystem is the sum of the squares of the basic reproductive numbers of the vector--host cycles, weighted by the squares of the host densities. Therefore, the cumulative effect of the hosts that buffer the disease is less than additive. Moreover, we obtain that the minimum of the squared basic reproductive number of the whole system is proportional to the harmonic mean of the squared basic reproductive numbers of the cycles. Hence, we conclude that an increase in biodiversity could dilute the disease and that a loss in biodiversity could amplify the disease. Furthermore, we obtain that a necessary condition for the endemicity of a disease is the presence of a competent host. Finally, we study the case of an endemic disease. To explain how changes in the ecosystem affect the densities of the hosts, we assume that the abundances of the hosts follow a conservation law given by community constraints. We show that in the case of small changes in abundances, general constraints can always be linearized; thus it is sufficient to consider only linear constraints. We obtain that in the case of a disease with a unique resilient and competent host, an increase in its density amplifies the infection. \section{Appendix}\label{sappendix} \subsection{Next generation matrix} \label{ssngm} We will compute $\mathcal{R}_0$ using the NGM method from \cite{van2002reproduction}.
From model (\ref{ecompleto}) we obtain the matrices $F$ and $V$ that define the NGM: $$F= \begin{pmatrix} 0 & \beta_{H_1V} D_1 & \beta_{H_2V} D_2 & \ldots & \beta_{H_kV} D_k \\ \beta_{VH_1} D_1 & 0 & 0 & \ldots & 0 \\ \beta_{VH_2} D_2 & 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \beta_{VH_k} D_k & 0 & 0 & \ldots & 0 \\ \end{pmatrix}, \quad V= \begin{pmatrix} \delta_V & 0 & 0 & \ldots & 0 \\ 0 & \delta_{H_1} & 0 & \ldots & 0 \\ 0 & 0 & \delta_{H_2} & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & \delta_{H_k} \\ \end{pmatrix}.$$ Hence, the NGM is: \begin{equation*} G = FV^{-1}= \begin{pmatrix} 0 & \frac{\beta_{H_1V}}{\delta_{H_1}} D_1 & \frac{\beta_{H_2V}}{\delta_{H_2}} D_2 & \ldots & \frac{\beta_{H_kV}}{\delta_{H_k}} D_k \\ \frac{\beta_{VH_1}}{\delta_{V}} D_1 & 0 & 0 & \ldots & 0 \\ \frac{\beta_{VH_2}}{\delta_{V}} D_2 & 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{\beta_{VH_k}}{\delta_{V}} D_k & 0 & 0 & \ldots & 0 \\ \end{pmatrix}. \end{equation*} Computing the spectral radius of the matrix $G$, we obtain that the basic reproductive number of the whole system is given by \begin{equation} \label{r0msimple} \mathcal{R}_0=\rho(FV^{-1}) = \sqrt{\sum_{i=1}^k \frac{\beta_{VH_i}}{\delta_V} \frac{\beta_{H_iV}}{\delta_{H_i}}D_i^2}. \end{equation} The disease free equilibrium (DFE) of model (\ref{ecompleto}) is $\mathbf{I}^*=(I_V^*,I_{H_1}^*,\ldots,I_{H_k}^*)=\mathbf{0}$. The following theorem explains how the basic reproductive number is related to the stability of the DFE in model (\ref{ecompleto}) \cite[Theorem 2]{van2002reproduction}. \begin{theorem} Let $\mathbf{I}^*$ be the DFE of (\ref{ecompleto}). Then, $\mathcal{R}_0<1$ implies that $\mathbf{I}^*$ is locally asymptotically stable and $\mathcal{R}_0>1$ implies that $\mathbf{I}^*$ is unstable. \label{umbral} \end{theorem} \subsection{Community constraints}\label{jd} Let $F_1,\ldots,F_m$ and $E$ be as in subsection \ref{Gc} and let $\mathbf{N}_0\in E$.
We assume that the matrix \begin{equation*} J_1 = \frac{\partial(F_1, \ldots, F_m)}{\partial(N_{H_1}, \ldots, N_{H_m})}(\mathbf{N}_0) = \left( \frac{\partial F_i (\mathbf{N}_0)}{\partial N_{H_j}}\right)_{1\le i,j\le m} \end{equation*} is invertible and let us define \begin{equation*}\label{ji} J_2 = \frac{\partial(F_1, \ldots, F_m)}{\partial(N_{H_{m+1}}, \ldots, N_{H_k})}(\mathbf{N}_0) = \left( \frac{\partial F_i (\mathbf{N}_0)}{\partial N_{H_j}}\right)_{1\le i\le m,m< j\le k}. \end{equation*} The implicit function theorem states that there exists a neighborhood in $E$ of $\mathbf{N}_0$ where we have $N_{H_i} = g_i(N_{H_{m+1}}, \ldots , N_{H_k})$ for $i = 1, \ldots, m$. Furthermore, if $\mathbf{g}= (g_1, \ldots, g_m)$, then $$\frac{\partial\mathbf{g}}{\partial N_{H_j}} = \begin{pmatrix} \frac{\partial g_1}{\partial N_{{H_j}}} \\ \vdots \\ \frac{\partial g_m}{\partial N_{H_j}} \end{pmatrix} = - J_1 ^{-1} \begin{pmatrix} \frac{\partial F_1}{\partial N_{H_j}} \\ \vdots \\ \frac{\partial F_m}{\partial N_{H_j}} \end{pmatrix},$$ for $m< j\le k$.
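The computation of $\partial\mathbf{g}/\partial N_{H_j}=-J_1^{-1}J_2$ can be illustrated numerically. The constraints $F_1$, $F_2$ below are hypothetical, chosen so that the implicit functions $g_i$ are known in closed form, which makes the finite-difference comparison transparent:

```python
import numpy as np

# Hypothetical constraints (k = 3, m = 2), purely for illustration:
#   F1(N) = N1 + 2 N3 - 10 = 0,   F2(N) = N2 + N3^2 - 30 = 0,
# so N1 = g1(N3) and N2 = g2(N3) near any point of E.
def Fvec(N):
    N1, N2, N3 = N
    return np.array([N1 + 2 * N3 - 10.0, N2 + N3**2 - 30.0])

N0 = np.array([2.0, 14.0, 4.0])          # a point on E: F(N0) = 0
assert np.allclose(Fvec(N0), 0.0)

J1 = np.array([[1.0, 0.0],               # dF/d(N1, N2) at N0
               [0.0, 1.0]])
J2 = np.array([[2.0],                    # dF/dN3 at N0
               [2 * N0[2]]])

dg = -np.linalg.solve(J1, J2).ravel()    # (dg1/dN3, dg2/dN3)

# Finite-difference check against the explicit solutions
# g1(N3) = 10 - 2 N3 and g2(N3) = 30 - N3^2.
h = 1e-6
g_h = np.array([10.0 - 2 * (N0[2] + h), 30.0 - (N0[2] + h) ** 2])
fd = (g_h - N0[:2]) / h
assert np.allclose(dg, fd, atol=1e-4)
```

Here the linear solve plays the role of $-J_1^{-1}J_2$; for general constraints $J_1$ and $J_2$ would themselves be obtained by differentiating $F_1,\ldots,F_m$ at $\mathbf{N}_0$.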
We define $$ J = \frac{\partial(F_1, \ldots, F_m)}{\partial(N_{H_1}, \ldots, N_{H_k})}(\mathbf{N}_0) = \left( \frac{\partial F_i (\mathbf{N}_0)}{\partial N_{H_j}}\right) _{1\le i\le m, 1\le j \le k}.$$ We are interested in computing $D_{\mathbf{u}} \mathcal{R}_0$ for $\mathbf{u}\in T_{\mathbf{N}_0} E$, where $$ T_{\mathbf{N}_0} E = \{\mathbf{u} \in \mathbb{R}^k \mid J \mathbf{u} = \mathbf{0} \}.$$ If $\mathbf{u} = (u_1, \ldots, u_k)$, using $u_{m+1}, \ldots, u_k$ as free variables, we have that $$ \begin{pmatrix} u_1 \\ \vdots \\ u_m \end{pmatrix} = - \sum_{j=m+1}^k u_j J_1 ^{-1} \begin{pmatrix} \frac{\partial F_1}{\partial N_{H_j}} \\ \vdots \\ \frac{\partial F_m}{\partial N_{H_j}} \end{pmatrix} = - J_1^{-1}J_2\begin{pmatrix} u_{m+1} \\ \vdots \\ u_k \end{pmatrix}.$$ Therefore, for a given set of values $u_{m+1}, \ldots, u_k$, we can obtain the values $u_1, \ldots, u_m$ and $$D_{\mathbf{u}}\mathcal{R}_0 = \sum_{i=1}^k u_i r_i,$$ where $r_i= \dfrac{1}{N_H \mathcal{R}_0} \left((\mathcal{R}_0^{H_i})^2 D_i - \mathcal{R}_0^2\right)$ is evaluated at $\mathbf{N}_0$. If we assume $m=k-1$, then there exists a neighborhood in $E$ of $\mathbf{N}_0$ where $$ N_{H_i} = g_i(N_{H_k})\quad\textrm{ for } i=1, \ldots, k-1.$$ Furthermore, $$J_2=\begin{pmatrix} \frac{\partial F_1(\mathbf{N}_0)}{\partial N_{H_k}} \\ \vdots \\ \frac{\partial F_{k-1}(\mathbf{N}_0)}{\partial N_{H_k}} \end{pmatrix} $$ and $$ \begin{pmatrix} u_1 \\ \vdots \\ u_{k-1} \end{pmatrix} = - u_k J_1 ^{-1} \begin{pmatrix} \frac{\partial F_1(\mathbf{N}_0)}{\partial N_{H_k}} \\ \vdots \\ \frac{\partial F_{k-1}(\mathbf{N}_0)}{\partial N_{H_k}} \end{pmatrix} = - J_1^{-1}J_2u_k,$$ for $(u_1, \ldots, u_{k-1}, u_k)\in T_{\mathbf{N}_0} E$. Taking $u_k =1$, we have $$\begin{pmatrix} u_1 \\ \vdots \\ u_{k-1} \end{pmatrix} = \begin{pmatrix} \frac{\partial g_1}{\partial N_{H_k}} \\ \vdots \\ \frac{\partial g_{k-1}}{\partial N_{H_k}} \end{pmatrix}.
$$ Therefore, for $\mathbf{N}\in E$ close to $\mathbf{N}_0$ we have the approximations $$N_{H_i} = g_i(N_{H_k}) \approx u_i (N_{H_k} - N_{H_k}^0) + N_{H_i}^0 = -A_i N_{H_k} + B_i,$$ for $i = 1, \ldots, k-1$, where $A_i= -u_i$ and $B_i = N_{H_i}^0 - u_i N_{H_k}^0$. \subsection{One competent host}\label{onecompetent} We assume that $\mathcal{R}_0^{H_i}<1$ for $i =1, \ldots, k-1$ and $\mathcal{R}_0^{H_k}>1$. Let $D_1, \ldots,D_k $ be such that $\mathcal{R}_0 \geq 1$. We will prove that $\dfrac{\partial\mathcal{R}_0}{\partial D_i} < 0$ for $i=1,\ldots,k-1$ and $\dfrac{\partial\mathcal{R}_0}{\partial D_k} > 0$. Using $\sum_{j=1}^{k}D_j=1$, we have $$ \dfrac{\partial\mathcal{R}_0}{\partial D_i} = \dfrac{1}{\mathcal{R}_0}\left((\mathcal{R}_0^{H_i})^2D_i- (\mathcal{R}_0^{H_k})^2D_k\right)\quad\textrm{ for }i=1,\ldots,k-1.$$ Furthermore, since $\mathcal{R}_0^2= \sum_{i=1}^k (\mathcal{R}_0^{H_i})^2 D_i^2\ge 1=\sum_{i=1}^k D_i$, we obtain $$ D_k\left((\mathcal{R}_0^{H_k})^2 D_k - 1\right) \ge \sum_{i=1}^{k-1} D_i\left(1-(\mathcal{R}_0^{H_i})^2 D_i\right) \ge 0.$$ Therefore, $$ (\mathcal{R}_0^{H_k})^2 D_k \ge 1,$$ hence $$\dfrac{\partial\mathcal{R}_0}{\partial D_i} < 0\quad\textrm{ for } i=1,\ldots,k-1$$ and $$\dfrac{\partial\mathcal{R}_0}{\partial D_k}=\dfrac{\partial\mathcal{R}_0}{\partial D_1}\dfrac{\partial D_1}{\partial D_k}>0.$$ \\ We have $$ \Gamma_k = \frac{D_k}{\mathcal{R}_0^2}\sum_{i=1}^k u_i ((\mathcal{R}_0^{H_i})^2 D_i - \mathcal{R}_0^2).$$ If $\mathbf{u} = (-A, \ldots, -A, 1)$, then \begin{equation*} \Gamma_k =\frac{D_k}{\mathcal{R}_0^2} (Ar + ((\mathcal{R}_0^{H_k})^2 D_k - \mathcal{R}_0^2)), \end{equation*} where $r = -\sum_{i=1}^{k-1} ((\mathcal{R}_0^{H_i})^2 D_i - \mathcal{R}_0^2)$. Since the hosts $H_1, \ldots, H_{k-1}$ are suboptimal, we have $r >0$, hence $\Gamma_k$ is an increasing function of $A$. If $D_k$ is large, then $D_1, \ldots, D_{k-1}$ are small and $(\mathcal{R}_0^{H_i})^2 D_i - \mathcal{R}_0^2 \approx - \mathcal{R}_0^2$ for $i = 1, \ldots, k-1$.
Therefore, $r \approx (k-1)\mathcal{R}_0^2$. Furthermore, if $(\mathcal{R}_0^{H_k})^2 D_k - \mathcal{R}_0^2 \approx 0 $ and $A$ is large, then $$ \Gamma_k \approx (k-1)A. $$
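As a consistency check of the NGM computation (\ref{r0msimple}), the sketch below builds $F$ and $V$ for randomly drawn, purely illustrative parameter values and compares the spectral radius of $G=FV^{-1}$ with the closed-form expression:

```python
import numpy as np

# Random but reproducible hypothetical parameters (not from the paper);
# the check is that rho(F V^{-1}) matches the closed-form R0.
rng = np.random.default_rng(1)
k = 4
beta_VH = rng.uniform(0.1, 2.0, k)    # beta_{V H_i}
beta_HV = rng.uniform(0.1, 2.0, k)    # beta_{H_i V}
delta_V = 0.8
delta_H = rng.uniform(0.2, 1.0, k)
D = rng.dirichlet(np.ones(k))         # host densities, sum to 1

F = np.zeros((k + 1, k + 1))
F[0, 1:] = beta_HV * D                # hosts infect the vector
F[1:, 0] = beta_VH * D                # the vector infects each host
V = np.diag(np.concatenate(([delta_V], delta_H)))

G = F @ np.linalg.inv(V)
rho = np.max(np.abs(np.linalg.eigvals(G)))

closed_form = np.sqrt(np.sum(beta_VH / delta_V * beta_HV / delta_H * D**2))
assert np.isclose(rho, closed_form)
```

Because $G$ has the star structure shown in Appendix \ref{ssngm} (nonzero entries only in its first row and first column), its nonzero eigenvalues come in a pair $\pm\sqrt{\sum_i G_{0i}G_{i0}}$, which is exactly the square root in (\ref{r0msimple}).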
\section{ Introduction } \label{sec: Introduction} Many of the physical properties of graphene are captured by a one-band tight-binding electronic Hamiltonian with a uniform, real-valued, nearest-neighbor hopping amplitude whereby: (i) electron-electron interactions are ignored; (ii) spin-orbit interactions are ignored; (iii) the electronic band structure is replaced by two conical dispersions centered about two non-equivalent points, the Dirac points, in the first Brillouin zone; and (iv) the coupling to electro-magnetic external fields is governed by the minimal substitution. For instance, graphene displays an integer quantum Hall effect (IQHE) as a function of the applied bias voltage,% ~\cite{Novoselov05,Zhang05} and it shows a universal optical conductivity.% ~\cite{Nair08} Both these properties can be understood within the non-interacting electron picture. Although most experiments observe the massless Dirac spectrum assumed in (iii), electronic instabilities in the form of single-particle spectral gaps (mass gaps in short) can be triggered by external perturbations such as some commensurate substrates,~\cite{Lanzara07} or large enough magnetic fields that can change the balance between the kinetic and the potential energy.% ~\cite{Checkelsky09,Nomura09,Chamon09} In this paper we study a number of issues pertaining to Dirac fermions in two dimensions when a mass gap is opened in the fermionic spectrum by different non-vanishing order parameters. In particular, we shall study in great detail the simpler case when there are no superconducting instabilities and spin-rotation invariance is maintained, in which case there are only 4 possible masses. We derive in this simpler case the effective action when the massive fermions are integrated out, and read from this action the fractional statistics of topological defects in the mass order parameters.
We also present a complete classification of all possible masses (36 in total) in the general case where any spin, valley, and superconducting instabilities are permitted. In the simpler spinless problem (or, more realistically, the problem when spin-rotation invariance is never broken), the 4 different masses that can be added to the two-dimensional Dirac equation representing graphene are the following. One perturbation is a staggered chemical potential, taking values $+\mu^{\ }_{\mathrm{s}}$ and $-\mu^{\ }_{\mathrm{s}}$ in the two sublattices of the honeycomb lattice of graphene, say. It opens a gap $2|\mu^{\ }_{\mathrm{s}}|$ at the two Dirac points.% ~\cite{Semenoff84} A second mass gap $2|\eta|$ arises from adding directed next-nearest-neighbor hopping amplitudes in the presence of fluxes, but such that no net magnetic flux threads a hexagonal Wigner-Seitz unit cell of graphene, say. This perturbation breaks time-reversal symmetry (TRS).% ~\cite{Haldane88} Finally, a real-valued modulation of the nearest-neighbor hopping amplitude with a wave vector connecting the two Dirac points (i.e., a Kekul\'e dimerization pattern for graphene) also opens a gap $2|\Delta|$.~\cite{Hou07} This real-valued modulation of the nearest-neighbor hoppings is parametrized by the complex order parameter $\Delta=\mathrm{Re}\,\Delta+{i}\mathrm{Im}\,\Delta$ whose phase controls the angles of the dimerization pattern. This mass corresponds to two real masses $\mathrm{Re}\,\Delta$ and $\mathrm{Im}\,\Delta$, bringing the total number of real-valued masses that conserve the electron number and spin-rotation symmetry (SRS) to four. If the order parameters $\mu^{\ }_{\mathrm{s}}$, $\eta$, and $\Delta$ are not uniform, but vary in space and contain topological textures, then midgap states in the massive Dirac spectrum can appear.
Examples are static line defects at which $\mu^{\ }_{\mathrm{s}}$ and $\eta$ change signs,~\cite{Callan85} and static point defects represented by vortices in the phase of $\Delta$.~\cite{Hou07} As occurs at a static domain wall in one-dimensional polyacetylene,~\cite{Jackiw76,Su79,Jackiw81npb} a fractional electronic charge is exponentially localized in the vicinity of a static charge $\pm1$ vortex in the phase of $\Delta$.~\cite{Hou07} The value of the fractional charge that is bound to a vortex in the phase of $\Delta$ also depends on whether the vortex is dressed with a half flux of the axial vector potential $\boldsymbol{a}^{\ }_{5}$ or not.~\cite{Chamon08a,Chamon08b} When the axial gauge flux is absent (logarithmically confined case), the value of the charge can be tuned continuously as a function of the ratio $\mu^{\ }_{\mathrm{s}}/m$ where $m:=\sqrt{|\Delta|^{2}+\mu^{2}_{\mathrm{s}}}$.% ~\cite{Chamon08a,Chamon08b} It is independent of the ratio $\mu^{\ }_{\mathrm{s}}/m$ when the axial gauge half flux is present (deconfined case), for the charge is then pinned to the rational values $Q=\pm 1/2$.% ~\cite{Chamon08a,Chamon08b} These values of the fractional charges persist as long as the magnitude of the TRS-breaking mass $|\eta|$ is smaller than the mass scale $m$.~\cite{Chamon08a,Chamon08b} There is a phase transition at $|\eta|=m$. For $|\eta|>m$ the fractional charge bound to the vortices vanishes. ~\cite{Chamon08a,Chamon08b} Just like the charge, the statistical phase $\Theta$ acquired upon the exchange of two vortices depends on whether the vortex in the phase of $\Delta$ is screened or not by the axial gauge flux. In this paper, we derive the statistical angle from the effective action obtained upon integrating out the massive fermions. 
(We thereby resolve conflicting claims about $\Theta$ in the literature.~\cite{Chamon08a,Seradjeh08,Milovanovic08}) The statistical angle depends on the interplay between the magnitude of the TRS-breaking mass $\eta$ and the magnitude $m$ of the TRS masses. There are phase transitions at the lines $|\eta|=m$ depicted in Fig.~\ref{fig:phase-diagram} that separate regions dominated by the TRS-preserving masses and those dominated by the TRS-breaking mass $\eta$. The statistics $\Theta$ jumps for both the screened and unscreened vortices at the phase boundaries. When unit vortices in $\Delta$ are screened by an axial gauge flux, they are deconfined.~\cite{Jackiw07} Their statistics is well-defined in a dynamical sense and takes universal values independent of the ratio $\mu^{\ }_{\mathrm{s}}/m$ on both sides of the transition. We show that \begin{subequations} \label{eq: main result} \begin{equation} \Theta=0 \hbox{ when $m>|\eta|$ } \end{equation} and that \begin{equation} \frac{\Theta}{\pi}= \mathrm{sgn}(\eta)\, Q^{2}= \frac{\mathrm{sgn}\,\eta}{4} \hbox{ when $|\eta|>m$. } \end{equation} \end{subequations} Along the lines $|\eta|=m$ in the zero-temperature phase diagram of Fig.~\ref{fig:phase-diagram}, the gap in the Dirac spectrum vanishes. At criticality, the notion of point-particles is moot and so is the question of their quantum numbers. A remarkable complementarity has emerged. Defects carry either a fractional charge $Q=\pm 1/2$ but no fractional statistical phase when the breaking of TRS is not too strong ($|\eta|<m$), or no fractional charge but a fractional statistics $\Theta/\pi=\pm 1/4$ when the breaking of TRS is dominant ($|\eta|>m$). When unit vortices in the order parameter $\Delta$ are not accompanied by an axial gauge flux, they are logarithmically confined.~\cite{Hou07} Although their statistics is not well-defined dynamically, it is nevertheless possible to create them and exchange them by external means.
If so, both their charges and statistics acquire a dependence on all masses $\eta$, $\mu^{\ }_{\mathrm{s}}$, and $\Delta$, which we compute analytically and test numerically in this paper. We then go beyond the simpler spinless case with only 4 masses, and we classify all 36 masses in the general case where any spin, valley, and superconducting instabilities are allowed. These 36 order parameters break up into 56 possible quintuplets of masses that add in quadrature (to a value $m^2$), and thus do not compete with one another. The Haldane mass, the generalization of the $\eta$ mass above, competes with all the other 35 masses, and thus one has generically a quantum phase transition when $|\eta|=m$. We argue that these quintuplets provide a rich playground for Landau-forbidden continuous phase transitions. We discuss in the paper how any U(1) order parameter in a quintuplet can be assigned a conserved charge and supports topological defects in the form of vortices. A pair of U(1) order parameters in a quintuplet is said to be dual if the vortices of one order parameter bind the charge of the other order parameter and vice versa. A continuous phase transition can then directly connect the two dual U(1) ordered phases through a confining-deconfining transition of their vortices. This paper is organized as follows. We define the relevant continuum Dirac Hamiltonian and review its symmetries for the simpler problem with only 4 masses, which encodes the competition between a charge-density, a bond-density, and an integer-quantum-Hall instability at the Dirac (charge neutral) point of any graphene-like two-dimensional electronic system in Sec.~\ref{sec: Hamiltonian and symmetries}. We reveal a hidden non-Abelian structure of the field theory in Sec.~\ref{sec: Path integral formulation of the model} that plays an important role when deriving the charge and statistics of quasiparticles.
The fermions are integrated out in the background of these 4 order parameters and of the U(1)$\times$U(1) gauge fields to leading order in a gradient expansion in Sec.% ~\ref{sec: Derivative expansion}. The effective low-energy and long-wavelength interacting field theory thereby obtained is an Anderson-Higgs-Chern-Simons field theory for bosonic fields: two U(1) gauge fields and one phase field. The induced fractional fermion number and the induced fractional Abelian statistical phase in the Anderson-Higgs-Chern-Simons field theory of Sec.% ~\ref{sec: Derivative expansion} are computed in Sec.~\ref{sec: Fractional charge quantum number} and Sec.~\ref{sec: Fractional statistical angle}, respectively. The numerical calculation of the fractional charges and statistical phases within a single-particle (mean-field) approximation that violates the U(1)$\times$U(1) gauge symmetry is presented in Sec.~\ref{eq: Numerical calculation of the charge and Berry phase}. A microscopic (lattice) model sharing the same U(1)$\times$U(1) gauge symmetry and low-energy, long-wavelength particle content as the Anderson-Higgs-Chern-Simons field theory is constructed in Sec.~\ref{sec: Microscopic models}. Either by enlarging the particle content of the lattice model from Sec.~\ref{sec: Microscopic models} or by allowing additional magnetic, spin-orbit, or superconducting instabilities to compete with the charge-density, bond-density, and integer-quantum-Hall instabilities in graphene-like two-dimensional systems, we are led to a classification presented in Sec.~\ref{sec: Reinstating the electron spin} of all 36 competing orders of a Dirac Hamiltonian represented by 16-dimensional Dirac matrices that encodes the quantum dynamics of electrons constrained to a two-dimensional space, as occurs in graphene at the charge-neutral point, say. We conclude in Sec.~\ref{sec: Summary} and relegate some intermediary steps to the Appendix.
\begin{figure} \includegraphics[angle=0,scale=0.6]{FIGURES/phase-diagram.eps} \caption{ Phase diagram parametrized by the TRS mass $m$ and the TRS-breaking mass $\eta$. There are three regions delimited by the boundaries $|\eta|=m$, in each of which the spectral gap does not close. The boundaries $|\eta|=m$ are lines of critical points at which the spectral gap closes. When the vortices are screened by half of an axial gauge flux, they carry a fractional fermionic charge of $|Q|=1/2$ with a vanishing statistical phase $\Theta=0$ under pairwise exchange in regions for which TRS is weakly broken, i.e., the shaded region $m>|\eta|$. Unit vortices are charge neutral but acquire a non-vanishing statistical phase $|\Theta|=\pi/4$ under pairwise exchange in regions for which TRS is strongly broken, i.e., $|\eta|>m$. [See Eqs.~(\ref{eq: main result}).] When the vortices are not screened by the axial gauge flux, the charge $Q$ acquires a dependence on the ratio of the staggered chemical potential $\mu^{\ }_\mathrm{s}$ to $m$, for $|\eta|<m$, and $Q$ vanishes for $|\eta|>m$.
The statistics also depends on which phase one sits in, but it is non-zero for any $\eta\ne 0$, and it is related to the value of the charge, as shown in Secs.~\ref{sec: Fractional statistical angle} and \ref{eq: Numerical calculation of the charge and Berry phase}.} \label{fig:phase-diagram} \end{figure} \section{ Hamiltonian and symmetries: spinless case with 4 masses } \label{sec: Hamiltonian and symmetries} The continuum model under consideration in this paper is defined by the second-quantized planar Hamiltonian $ \hat{H}:= \int d^{2}\boldsymbol{r}\, \hat{\mathcal{H}} $ where\cite{footnote: covariant covention for gradient} \begin{subequations} \label{eq: def quantum Hamiltonian} \begin{equation} \begin{split} & \hat{\mathcal{H}}:= \hat{\mathcal{H}}^{\ }_{0} + \hat{\mathcal{H}}^{\ }_{\mathrm{gauge}} + \hat{\mathcal{H}}^{\ }_{\mathrm{scalar}}, \\ & \hat{\mathcal{H}}^{\ }_{0}:= \hat{\psi}^{\dag} \boldsymbol{\alpha} \cdot \left( - {i} \boldsymbol{\partial} \right) \hat{\psi}, \\ & \hat{\mathcal{H}}^{\ }_{\mathrm{gauge}}:= \hat{\psi}^{\dag} \boldsymbol{\alpha} \cdot \left( \boldsymbol{a} + \boldsymbol{a}^{\ }_{5} \gamma^{\ }_{5} \right) \hat{\psi}, \\ & \hat{\mathcal{H}}^{\ }_{\mathrm{scalar}}:= \hat{\psi}^{\dag} \left( |\Delta| \beta e^{{i}\theta\gamma^{\ }_{5}} + \mu^{\ }_{\mathrm{s}} R + {i} \eta \alpha^{\ }_{1} \alpha^{\ }_{2} \right) \hat{\psi}.
\end{split} \label{eq: def quantum Hamiltonian a} \end{equation} The 4 components of the spinor-valued operator \begin{equation} \hat{\psi}(\boldsymbol{r})= \begin{pmatrix} \hat{\psi}^{\ }_{\mathrm{+A}}(\boldsymbol{r}) \\ \hat{\psi}^{\ }_{\mathrm{+B}}(\boldsymbol{r}) \\ \hat{\psi}^{\ }_{\mathrm{-B}}(\boldsymbol{r}) \\ \hat{\psi}^{\ }_{\mathrm{-A}}(\boldsymbol{r}) \end{pmatrix} \equiv \begin{pmatrix} \hat{\psi}^{\ }_{\ell}(\boldsymbol{r}) \end{pmatrix} \label{eq: def quantum Hamiltonian b} \end{equation} obey the equal-time fermion algebra \begin{equation} \begin{split} & \{ \hat{\psi}^{\ }_{\ell }(\boldsymbol{r} ), \hat{\psi}^{\dag}_{\ell'}(\boldsymbol{r}') \}= \delta^{\ }_{\ell,\ell'} \delta(\boldsymbol{r}-\boldsymbol{r}'), \\ & \{ \hat{\psi}^{\dag}_{\ell }(\boldsymbol{r} ), \hat{\psi}^{\dag}_{\ell'}(\boldsymbol{r}') \}= \{ \hat{\psi}^{\ }_{\ell }(\boldsymbol{r}), \hat{\psi}^{\ }_{\ell'}(\boldsymbol{r}') \}=0. \end{split} \label{eq: def quantum Hamiltonian c} \end{equation} The representation (\ref{eq: def quantum Hamiltonian b}) is here fixed by the indices $\mathrm{A}$ and $\mathrm{B}$ that distinguish the two triangular sublattices of the honeycomb lattice and the indices $+$ and $-$ that distinguish the two inequivalent Dirac points (valleys) of graphene. 
With this choice, the 4 Dirac matrices $\alpha^{x}\equiv\alpha^{1}$, $\alpha^{y}\equiv\alpha^{2}$, $\alpha^{z}\equiv\alpha^{3}\equiv R$, and $\beta$ are defined by their 4-dimensional chiral representation\cite{Itzykson80} \begin{equation} \begin{split} & \boldsymbol{\alpha}:= \begin{pmatrix} \boldsymbol{\tau} & 0 \\ 0 & - \boldsymbol{\tau} \end{pmatrix}\equiv \sigma^{\ }_{3} \otimes \boldsymbol{\tau}\equiv \begin{pmatrix} \alpha^{1}, & \alpha^{2} \end{pmatrix}, \\ & \alpha^{3}:= \begin{pmatrix} \tau^{\ }_{3} & 0 \\ 0 & - \tau^{\ }_{3} \end{pmatrix}\equiv \sigma^{\ }_{3} \otimes \tau^{\ }_{3}\equiv R, \\ & \beta:= \begin{pmatrix} 0 & \tau^{\ }_{0} \\ \tau^{\ }_{0} & 0 \end{pmatrix}\equiv \sigma^{\ }_{1} \otimes \tau^{\ }_{0}, \end{split} \label{eq: def quantum Hamiltonian d} \end{equation} where the $2\times2$ unit matrix $\tau^{\ }_{0}$ and the three Pauli matrices $\tau^{\ }_{1}$, $\tau^{\ }_{2}$, and $\tau^{\ }_{3}$ act on the sublattice indices ($\mathrm{A},\mathrm{B}$) while the $2\times2$ unit matrix $\sigma^{\ }_{0}$ and the three Pauli matrices $\sigma^{\ }_{1}$, $\sigma^{\ }_{2}$, and $\sigma^{\ }_{3}$ act on the valley indices ($+,-$). The matrix \begin{equation} \gamma^{\ }_{5}\equiv \gamma^{5}:= -{i} \alpha^{1} \alpha^{2} \alpha^{3}= \begin{pmatrix} \tau^{\ }_{0} & 0 \\ 0 & - \tau^{\ }_{0} \end{pmatrix}\equiv \sigma^{\ }_{3} \otimes \tau^{\ }_{0} \end{equation} \label{eq: def quantum Hamiltonian e} \end{subequations} \noindent acts trivially on the sublattice indices while it acts non-trivially on the valley indices, i.e., $(1\pm \gamma^{\ }_{5})/2$ is a projector on the $+$ and $-$ valley indices, respectively. In quantum electrodynamics in (3+1)-dimensional space and time, the eigenspaces of $(1\pm \gamma^{\ }_{5})/2$ define the chiral indices, a terminology that we shall also use in this paper.
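These representation-dependent statements are easy to verify numerically. The following sketch builds the matrices from the Kronecker products quoted above and checks the Dirac algebra together with the stated form of $\gamma^{\ }_{5}$:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Chiral representation: the first Kronecker factor (sigma) acts on the valley
# (+,-) index, the second (tau) on the sublattice (A,B) index
alpha = [np.kron(s3, t) for t in (s1, s2, s3)]  # alpha^1, alpha^2, alpha^3 = R
beta = np.kron(s1, s0)

anti = lambda a, b: a @ b + b @ a
I4 = np.eye(4)

# {alpha^i, alpha^j} = 2 delta_ij, {alpha^i, beta} = 0, beta^2 = 1
for i in range(3):
    for j in range(3):
        assert np.allclose(anti(alpha[i], alpha[j]), 2 * (i == j) * I4)
    assert np.allclose(anti(alpha[i], beta), 0)
assert np.allclose(beta @ beta, I4)

# gamma_5 = -i alpha^1 alpha^2 alpha^3 = sigma_3 x tau_0 = diag(1, 1, -1, -1)
gamma5 = -1j * alpha[0] @ alpha[1] @ alpha[2]
assert np.allclose(gamma5, np.kron(s3, s0))
```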
The external (background) real-valued fields $\boldsymbol{a} =(a_{1},a_{2})$, $\boldsymbol{a}^{\ }_{5}=(a_{51},a_{52})$, $|\Delta|$, $\theta\equiv -\mathrm{arg}\,\Delta$, $\mu^{\ }_{\mathrm{s}}$, and $\eta$ are space- and time-dependent fields. Their microscopic interpretation is the following. A strong uniform magnetic field (the curl of $\boldsymbol{a}$) is responsible for the integer quantum Hall effect (IQHE) in graphene.% \cite{McClure56} A vector field $\boldsymbol{a}^{\ }_{5}$ encodes changes in the curvature (ripples) of graphene,~\cite{Morozov06,Morpurgo06} and it can also encode defective coordination numbers at apical defects.~\cite{Gonzalez92,Lammert00,Pachos08} A constant $\mu^{\ }_{\mathrm{s}}$ realizes in graphene a staggered chemical potential and opens an electronic spectral gap.% \cite{Semenoff84} A constant $\eta$ realizes in graphene a directed next-nearest-neighbor hopping amplitude without net magnetic flux through the Wigner-Seitz cell of the honeycomb lattice and it also opens an electronic spectral gap.% \cite{Haldane88} A constant $\Delta$ realizes in graphene a Kekul\'e distortion of the nearest-neighbor hopping amplitude and, again, opens an electronic spectral gap.% \cite{Hou07} The 4 space- and time-independent masses $\mathrm{Re}\,\Delta$, $\mathrm{Im}\,\Delta$, $\mu^{\ }_{\mathrm{s}}$, and $\eta$ exhaust all possible ways for the opening of a spectral gap in the single-particle spectrum of the kinetic Dirac kernel $\boldsymbol{\alpha}\cdot(-{i}\boldsymbol{\partial})$, as $\beta$, $\beta\gamma^{\ }_{5}$, $R$, and ${i}\alpha^{1}\alpha^{2}$ generate the largest set of traceless and Hermitian $4\times4$ matrices that anticommute with $\boldsymbol{\alpha}\cdot(-{i}\boldsymbol{\partial})$.
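The compatibility pattern among these four masses can be verified directly (a sketch; the parameter values are arbitrary choices of ours): $\beta$, ${i}\beta\gamma^{\ }_{5}$, and $R$ anticommute pairwise, while ${i}\alpha^{1}\alpha^{2}$ commutes with all three, so the $\boldsymbol{k}=\boldsymbol{0}$ spectrum consists of the four levels $\pm m\pm\eta$ and the gap closes when $|\eta|=m$:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

a1, a2, R = kron(s3, s1), kron(s3, s2), kron(s3, s3)
beta = kron(s1, s0)
gamma5 = kron(s3, s0)
ibg5 = 1j * beta @ gamma5   # the Im(Delta) mass matrix, i beta gamma_5
haldane = 1j * a1 @ a2      # the eta (TRS-breaking) mass matrix

anti = lambda a, b: a @ b + b @ a
comm = lambda a, b: a @ b - b @ a

# beta, i beta gamma_5, R anticommute pairwise; i alpha^1 alpha^2 commutes with them
trs_masses = [beta, ibg5, R]
for i, A in enumerate(trs_masses):
    for B in trs_masses[i + 1:]:
        assert np.allclose(anti(A, B), 0)
    assert np.allclose(comm(haldane, A), 0)

# k = 0 spectrum: +/- m +/- eta; e.g. |Delta| = 0.6, mu_s = 0.8 (m = 1), eta = 0.5
D, theta, mu_s, eta = 0.6, 0.3, 0.8, 0.5
H0 = D * np.cos(theta) * beta + D * np.sin(theta) * ibg5 + mu_s * R + eta * haldane
m = np.hypot(D, mu_s)
assert np.allclose(np.linalg.eigvalsh(H0),
                   sorted([m + eta, m - eta, -m + eta, -m - eta]))
```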
The 3 masses $\mathrm{Re}\,\Delta$, $\mathrm{Im}\,\Delta$, and $\mu^{\ }_{\mathrm{s}}$ are compatible, i.e., they open the gap $2m$, where \begin{equation} m:= \sqrt{|\Delta|^{2}+\mu^{2}_{\mathrm{s}}}, \label{eq: def m} \end{equation} for $\beta$, $\beta\gamma^{\ }_{5}$, and $R$ anticommute pairwise. On the other hand, the mass $\eta$ competes with the mass $m$, as ${i}\alpha^{\ }_{1}\alpha^{\ }_{2}$ commutes with $\beta$, $\beta\gamma^{\ }_{5}$, and $R$ (the competition between $\eta$ and $m$ leads to a phase transition when $|\eta|=m$, which shall be important in the discussion of fractional statistics in this paper). The fields $\boldsymbol{a}$, $\boldsymbol{a}^{\ }_{5}$, $\Delta$, $\mu^{\ }_{\mathrm{s}}$, and $\eta$ have also appeared in the context of (a) slave-boson treatments of the antiferromagnetic spin-1/2 Heisenberg model on the square lattice in the $\pi$-flux phase,% \cite{Affleck88,Wen89,Fradkin91,Mudry94} and (b) Anderson localization for electrons hopping on a square lattice with a flux of half a magnetic flux quantum per plaquette, i.e., the square lattice with $\pi$-flux phase.% \cite{Ludwig94,Hatsugai97,Guruswamy00} \subsection{ Symmetries } \label{subsec: Symmetries} The model defined in Eq.~(\ref{eq: def quantum Hamiltonian}) possesses a number of symmetry operations that we list below and utilize in the paper.
\subsubsection{ Time-reversal symmetry } In the Heisenberg representation, \begin{equation} \hat{H}(t)\to \hat{H}(-t) \end{equation} under the anti-unitary transformation \begin{equation} \begin{split} & \hat{\psi}(\boldsymbol{r},t)\to \left( TK \hat{\psi} \right) (\boldsymbol{r},-t), \\ & \boldsymbol{a}(\boldsymbol{r},t)\to - \boldsymbol{a}(\boldsymbol{r},-t), \qquad \eta(\boldsymbol{r},t)\to -\eta(\boldsymbol{r},-t), \\ & \boldsymbol{a}^{\ }_{5}(\boldsymbol{r},t)\to \boldsymbol{a}^{\ }_{5}(\boldsymbol{r},-t), \qquad \theta(\boldsymbol{r},t)\to \theta(\boldsymbol{r},-t), \\ & |\Delta|(\boldsymbol{r},t)\to |\Delta|(\boldsymbol{r},-t), \qquad \mu^{\ }_{\mathrm{s}}(\boldsymbol{r},t)\to \mu^{\ }_{\mathrm{s}}(\boldsymbol{r},-t), \end{split} \label{eq: def time reversal} \end{equation} where complex conjugation is represented by $K$ and \begin{equation} T:= \beta\alpha^{1}\gamma^{\ }_{5}= \begin{pmatrix} 0 & \tau^{\ }_{1} \\ \tau^{\ }_{1} & 0 \end{pmatrix}\equiv \sigma^{\ }_{1} \otimes \tau^{\ }_{1}= T^{t} \label{eq: def mathcal T} \end{equation} is a unitary, Hermitian (and thus symmetric) matrix. Transformation (\ref{eq: def time reversal}) realizes reversal of time in graphene, for $T$ exchanges the two valleys while acting trivially on the sublattice indices. Moreover, transformation (\ref{eq: def time reversal}) realizes reversal of time for an effectively spinless single particle, for $T$ is symmetric. Hamiltonian $\hat{H}$ is time-reversal symmetric and can be represented by real-valued matrix elements,% \cite{footnote H is real valued} if all background fields are static while \begin{equation} a^{\ }_{1}= a^{\ }_{2}= \eta=0. 
\end{equation} \subsubsection{ Sublattice symmetry } Always in the Heisenberg representation, \begin{equation} \hat{H}(t)\to - \hat{H}(t) \end{equation} under the unitary transformation \begin{equation} \begin{split} & \hat{\psi}(\boldsymbol{r},t)\to \left( R \hat{\psi} \right) (\boldsymbol{r},t), \\ & \boldsymbol{a}(\boldsymbol{r},t)\to \boldsymbol{a}(\boldsymbol{r},t), \qquad \eta(\boldsymbol{r},t)\to -\eta(\boldsymbol{r},t), \\ & \boldsymbol{a}^{\ }_{5}(\boldsymbol{r},t)\to \boldsymbol{a}^{\ }_{5}(\boldsymbol{r},t), \qquad \theta(\boldsymbol{r},t)\to \theta(\boldsymbol{r},t), \\ & |\Delta|(\boldsymbol{r},t)\to |\Delta|(\boldsymbol{r},t), \qquad \mu^{\ }_{\mathrm{s}}(\boldsymbol{r},t)\to -\mu^{\ }_{\mathrm{s}}(\boldsymbol{r},t), \end{split} \label{eq: def SL} \end{equation} where \begin{equation} R:= \alpha^{\ }_{3}= \begin{pmatrix} \tau^{\ }_{3} & 0 \\ 0 & - \tau^{\ }_{3} \end{pmatrix}\equiv \sigma^{\ }_{3} \otimes \tau^{\ }_{3}= R^{t} \label{eq: def m,athcal R} \end{equation} is a diagonal, unitary, and Hermitian matrix. Transformation (\ref{eq: def SL}) realizes in graphene the change of sign of the single-particle wave functions on every site of the honeycomb lattice belonging to one and only one triangular sublattice. The single-particle eigenstates of the conserved Hamiltonian $\hat{H}$ obey the sublattice symmetry (SLS), a spectral symmetry by which any single-particle eigenstate $|\Psi\rangle$ with a non-vanishing energy eigenvalue $\varepsilon$ has the mirror eigenstate $R|\Psi\rangle$ with the non-vanishing energy eigenvalue $-\varepsilon$, if all background fields are static while \begin{equation} \mu^{\ }_{\mathrm{s}}= \eta=0. \end{equation} \subsubsection{ Continuous gauge symmetries } We now turn to the continuous symmetries obeyed by the Dirac Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}) in the Heisenberg representation. To this end, we make use of \begin{equation} 0= [\gamma^{\ }_{5},\boldsymbol{\alpha}]= \{\gamma^{\ }_{5},\beta\}= [\gamma^{\ }_{5},R].
\label{eq: gamma5 either commutes or anticommutes} \end{equation} The commutators and anticommutator% ~(\ref{eq: gamma5 either commutes or anticommutes}) imply that \begin{equation} \hat{H}(t)\to \hat{H}(t) \end{equation} under the U(1)$\otimes$U(1) local gauge transformation \begin{equation} \begin{split} & \hat{\psi}\to e^{{i}\left(\phi+\phi^{\ }_{5}\gamma^{\ }_{5}\right)} \hat{\psi}, \qquad \boldsymbol{a}\to \boldsymbol{a} - \boldsymbol{\partial}\phi, \\ & \boldsymbol{a}^{\ }_{5}\to \boldsymbol{a}^{\ }_{5} - \boldsymbol{\partial}\phi^{\ }_{5}, \qquad \theta\to \theta - 2 \phi^{\ }_{5}, \\ & |\Delta|\to |\Delta|, \qquad \mu^{\ }_{\mathrm{s}}\to \mu^{\ }_{\mathrm{s}}, \qquad \eta\to \eta, \end{split} \label{eq: local gauge symmetries} \end{equation} generated by the two space- and time-dependent real-valued smooth functions $\phi$ and $\phi^{\ }_{5}$. The microscopic origin of the global U(1) gauge symmetry generated by $\phi$ is conservation of the electron number in graphene. For planar graphene, the continuous global axial U(1) gauge symmetry generated by $\phi^{\ }_{5}$ is broken as soon as the curvature of the tight-binding dispersion is accounted for so that the Dirac points are no longer decoupled. We shall nevertheless impose the local axial U(1) gauge symmetry at the level of the approximation captured by the Dirac Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}) and explore its consequences in this paper. (We do provide a microscopic example of a lattice model that realizes the local axial U(1) gauge symmetry in Sec.~\ref{sec: Microscopic models}.)
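The compensating shift $\theta\to\theta-2\phi^{\ }_{5}$ can be made concrete with a small numerical check (a sketch; the angles are arbitrary choices of ours): since $\gamma^{\ }_{5}$ anticommutes with $\beta$, conjugation of the Kekul\'e mass term by $e^{{i}\phi^{\ }_{5}\gamma^{\ }_{5}}$ shifts $\theta$ by $+2\phi^{\ }_{5}$, while the $\mu^{\ }_{\mathrm{s}}$ and $\eta$ mass matrices commute with $\gamma^{\ }_{5}$ and are left invariant:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

beta = kron(s1, s0)                          # sigma_1 x tau_0
gamma5 = kron(s3, s0)                        # sigma_3 x tau_0
R = kron(s3, s3)                             # sigma_3 x tau_3
haldane = 1j * kron(s3, s1) @ kron(s3, s2)   # i alpha^1 alpha^2

def expg5(phi):
    """exp(i phi gamma_5); gamma_5 squares to the identity."""
    return np.cos(phi) * np.eye(4) + 1j * np.sin(phi) * gamma5

theta, phi5 = 0.8, 0.45   # arbitrary test angles

# Conjugation shifts theta -> theta + 2 phi5, undone by theta -> theta - 2 phi5
lhs = expg5(-phi5) @ (beta @ expg5(theta)) @ expg5(phi5)
rhs = beta @ expg5(theta + 2 * phi5)
assert np.allclose(lhs, rhs)

# The mu_s and eta masses are invariant under the axial rotation
for M in (R, haldane):
    assert np.allclose(expg5(-phi5) @ M @ expg5(phi5), M)
```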
\section{ Path integral formulation of the model with 4 masses } \label{sec: Path integral formulation of the model} For our purposes, it will be more convenient to trade the operator formalism for an effective partition function defined by integrating over the Dirac fermions in the background of the gauge fields $\boldsymbol{a}$ and $\boldsymbol{a}^{\ }_{5}$ and of the scalar fields $\Delta$, $\mu^{\ }_{\mathrm{s}}$, and $\eta$. We will demand that this effective theory captures the U(1)$\otimes$U(1) local gauge symmetry% ~(\ref{eq: local gauge symmetries}). This is possible in odd-dimensional space and time,% \cite{Fujikawa04} for the Grassmann measure can be regularized without breaking the U(1)$\otimes$U(1) local gauge symmetry of the Lagrangian. Of course, maintaining the U(1)$\otimes$U(1) local gauge symmetry can only be achieved if the phase $\theta=-\mathrm{arg}\,\Delta$ of the Kekul\'e background field $\Delta$ is also included as a dynamical field. For simplicity but without loss of generality as far as the computation of the charge quantum number and statistical phase is concerned, the masses $m$ and $\eta$ will be taken to be space- and time-independent parameters, while $\Delta$ and $\mu^{\ }_{\mathrm{s}}$ vary in space and time (with $m=\sqrt{|\Delta|^2+\mu^{2}_{\mathrm{s}}}$ constant) through $\theta\equiv -\mathrm{arg}\,\Delta$ and $\cos\alpha\equiv\mu^{\ }_{\mathrm{s}}/m$. (For simplicity, we shall further focus on the case where $\mu^{\ }_{\mathrm{s}}$ is constant in space and time, except near the vortex core, where $\Delta\to 0$, so that $\mu^{\ }_{\mathrm{s}}$ has to adjust so as to keep $m$ constant.)
Thus, we seek the effective field theory defined by the Grassmann path integral \begin{subequations} \label{eq: def Z} \begin{equation} \begin{split} & Z^{\ }_{m,\eta}[a^{\ }_{\mu},a^{\ }_{5\mu},\theta,\alpha] := \int\mathcal{D}[\bar\psi,\psi] \exp \left( {i} \int d^3x\, \mathcal{L}^{\ }_{m,\eta} \right), \\ & \mathcal{L}^{\ }_{m,\eta}:= \bar{\psi} \left( \gamma^{\mu}{i}\partial^{\ }_{\mu} - \gamma^{\mu} a^{\ }_{\mu} - \gamma^{\mu} \gamma^{\ }_{5} a^{\ }_{5\mu} - M^{\ }_{m,\eta} \right) \psi, \end{split} \end{equation} where we have also included the time-components $a^{\ }_{0}$ (TRS but SLS breaking) and ${a_{5}}_{0}$ (SLS but TRS breaking) of the U(1)$\otimes$U(1) gauge fields to maintain space and time covariance. The independent Grassmann-valued fields over which the path integral is performed are the 4-component spinors $\bar\psi$ and $\psi$. They depend on the contravariant 3-vectors $x^{\mu}=(t,\boldsymbol{r})$ [covariant 3-vectors $x^{\ }_{\mu}=(t,-\boldsymbol{r})$] and we will use the summation convention over repeated indices, $x^{\mu}y_{\mu}= x^{0}y^{0} -x^{1}y^{1} -x^{2}y^{2}$. We have defined the four gamma matrices \begin{equation} \gamma^{0}:= \beta, \quad \gamma^{1}:= \beta \alpha^{1}, \quad \gamma^{2}:= \beta \alpha^{2}, \quad \gamma^{3}:= \beta \alpha^{3}, \end{equation} for which lowering and raising of the Greek indices $\mu,\nu=0,1,2$ is achieved with the Lorentz metric $g^{\ }_{\mu\nu}=\mathrm{diag}(1,-1,-1)$. The 4 matrices $\gamma^{0}$, $\gamma^{1}$, $\gamma^{2}$, and $\gamma^{3}$ obey the usual Clifford algebra in Minkowski space in the chiral representation, i.e., $\gamma^{\ }_{5}={i} \gamma^{0}\gamma^{1} \gamma^{2}\gamma^{3}$ is diagonal.
We have also defined the matrix \begin{equation} \begin{split} & M^{\ }_{m,\eta}:= m \left( n^{\ }_{1}M^{\ }_{1} + n^{\ }_{2}M^{\ }_{2} + n^{\ }_{3}M^{\ }_{3} \right) + \eta\gamma^{\ }_{5}\gamma^{3}, \\ & M^{\ }_{1}:= 1, \qquad M^{\ }_{2}:= -{i}\gamma^{\ }_{5}, \qquad M^{\ }_{3}:= \beta \alpha^{3}\equiv \gamma^{3}, \end{split} \end{equation} for which we do not distinguish upper and lower Latin indices $\mathrm{a},\mathrm{b}=1,2,3$ as they are contracted with the Euclidean metric $\delta^{\ }_{\mathrm{a}\mathrm{b}}= \mathrm{diag}(1,1,1)$. (Notice that because space and time is (2+1)-dimensional, we can use the gamma matrix $\gamma^{3}$ to open a spectral gap by taking $M^{\ }_{3}=\gamma^{3}$.) The space and time dependencies in $M^{\ }_{m,\eta}$ follow entirely from those of the phase $\mathrm{arg}\,\Delta$. Indeed, while the masses $\eta$ and $m$ are constant in space and time, the direction of the unit vector $\boldsymbol{n}$ with the 3 components \begin{equation} n^{\ }_{1}:= \frac{|\Delta|\cos\theta}{m}, \quad n^{\ }_{2}:= -\frac{|\Delta|\sin\theta}{m}, \quad n^{\ }_{3}:= \frac{\mu^{\ }_{\mathrm{s}}}{m} \label{eq: para unit vector} \end{equation} \end{subequations} can vary in space and time. The U(1)$\otimes$U(1) local gauge symmetry~(\ref{eq: local gauge symmetries}) has become the invariance of the Lagrangian in Eq.~(\ref{eq: def Z}) under the U(1)$\otimes$U(1) local gauge transformation \begin{equation} \begin{split} & \bar\psi\to \bar\psi\, e^{-{i}(\phi-\phi^{\ }_{5}\gamma^{\ }_{5})}, \qquad \psi\to e^{{i}(\phi+\phi^{\ }_{5}\gamma^{\ }_{5})}\, \psi, \\ & a^{\ }_{ \mu}\to a^{\ }_{ \mu} - \partial^{\ }_{\mu}\phi, \quad a^{\ }_{5\mu}\to a^{\ }_{5\mu} - \partial^{\ }_{\mu}\phi^{\ }_{5}, \\& \theta\to \theta - 2\phi^{\ }_{5}.
\end{split} \label{eq: local gauge symmetries bis} \end{equation} In spite of appearances [$\bar\psi\psi\to \bar\psi\exp(2{i}\phi^{\ }_{5}\gamma^{\ }_{5})\,\psi$], the Grassmann Jacobian induced by the U(1)$\otimes$U(1) local gauge transformation~(\ref{eq: local gauge symmetries}) is unity and does not produce a quantum anomaly in (2+1) dimensions (odd space-time dimension).% ~\cite{Fujikawa04} We take advantage of the fact that $\bar\psi$ and $\psi$ are independent Grassmann integration variables to bring the algebra obeyed by the 6 matrices $\gamma^{\mu}$, $\mu=0,1,2$, and $M^{\ }_{\mathrm{a}}$, $\mathrm{a}=1,2,3$, to a form that will greatly simplify the evaluation of the partition function% ~(\ref{eq: def Z}). Under the non-unitary change of integration variable \begin{equation} \bar\psi=: \bar\chi\gamma^{\ }_{5}\gamma^{3}, \qquad \psi=:\chi, \end{equation} the partition function~(\ref{eq: def Z}) becomes \begin{subequations} \label{eq: def Z bis} \begin{equation} \begin{split} & Z^{\ }_{m,\eta} \left[ B^{\ }_{\mu}, n^{\ }_{\mathrm{a}} \right]= \int\mathcal{D}[\bar\chi,\chi] \exp \left( {i} \int d^3x\, \mathcal{L}^{\ }_{m,\eta} \right), \\ & \mathcal{L}^{\ }_{m,\eta}= \bar{\chi} \left( \Gamma^{\mu}{i}\partial^{\ }_{\mu} + \Gamma^{\mu} B^{\ }_{\mu} - m\, n^{\ }_{\mathrm{a}} \Sigma^{\ }_{\mathrm{a}} - \eta \right) \chi, \end{split} \end{equation} where the matrices \begin{equation} \Gamma^{\mu}:= \gamma^{\ }_{5} \gamma^{3} \gamma^{\mu}, \qquad \Sigma^{\ }_{\mathrm{a}}:= \gamma^{\ }_{5} \gamma^{3} M^{\ }_{\mathrm{a}}, \end{equation} obey \begin{equation} \{\Gamma^{\mu},\Gamma^{\nu}\}= 2g^{\mu\nu}, \quad [\Sigma^{\ }_{\mathrm{a}},\Sigma^{\ }_{\mathrm{b}}]= 2{i}\epsilon^{\ }_{\mathrm{abc}}\Sigma^{\ }_{\mathrm{c}}, \quad [\Gamma^{\mu},\Sigma^{\ }_{\mathrm{a}}]=0, \end{equation} for $\mu,\nu=0,1,2$ and $\mathrm{a},\mathrm{b},\mathrm{c}=1,2,3$ and we have regrouped the gauge fields into \begin{equation} B^{\ }_{\mu}\equiv b^{0 }_{\mu} +
b^{\mathrm{a}}_{\mu}\Sigma^{\mathrm{a}} \label{eq: from B's to a's and a5's d} \end{equation} following the prescription \begin{equation} \begin{split} & b^{0 }_{\mu}:= - a^{\ }_{\mu}, \qquad b^{1}_{\mu}:=b^{2}_{\mu}:=0, \qquad b^{3 }_{\mu}:= + a^{\ }_{5\mu}. \label{eq: from B's to a's and a5's e} \end{split} \end{equation} \end{subequations} Notice that \begin{equation} \Sigma^{\ }_{3}= -\gamma^{\ }_{5} \end{equation} so that the symmetry under the U(1)$\otimes$U(1) local gauge transformation~(\ref{eq: local gauge symmetries bis}) has become the invariance of the Lagrangian in Eq.~(\ref{eq: def Z bis}) under \begin{equation} \begin{split} & \bar\chi\to \bar\chi\, e^{-{i}(\phi-\phi^{\ }_{5}\Sigma^{\ }_{3})}, \qquad \chi\to e^{+{i}(\phi-\phi^{\ }_{5}\Sigma^{\ }_{3})} \chi, \\ & b^{0}_{\mu}\to b^{0}_{\mu} + \partial^{\ }_{\mu}\phi, \qquad b^{3}_{\mu}\to b^{3}_{\mu} + \partial^{\ }_{\mu}\phi^{\ }_{5}, \\ & b^{1}_{\mu}\to b^{1}_{\mu}, \qquad b^{2}_{\mu}\to b^{2}_{\mu}, \qquad \theta\to \theta -2\phi^{\ }_{5}. \end{split} \label{eq: local gauge symmetries bis bis} \end{equation} \subsection{ Hidden U(2) non-Abelian structure } \label{subsec: Hidden U(2) non-Abelian structure} To make the U(2) non-Abelian structure explicit, observe first that the mass $ mn^{\ }_{\mathrm{a}}\Sigma^{\ }_{\mathrm{a}} $ is an element of an su(2) Lie algebra. Indeed, there exists a $4\times 4$ matrix $U$ representing an element of SU(2) generated by $\Sigma^{\ }_{\mathrm{a}}$ $\mathrm{a}=1,2,3$ such that \begin{equation} m\,n^{\ }_{\mathrm{a}}\Sigma^{\ }_{\mathrm{a}}= m\,U\Sigma^{\ }_{3} U^{\dag}. 
\end{equation} We then infer that the partition functions (\ref{eq: def Z}) or, equivalently, (\ref{eq: def Z bis}) are special cases of the more general partition function \begin{subequations} \label{eq: def Z bis bis} \begin{equation} \begin{split} & Z:= \int\mathcal{D}[\bar\chi,\chi] \exp \left( {i} \int d^3x\, \mathcal{L}^{\ }_{m,\eta} \right), \\ & \mathcal{L}^{\ }_{m,\eta}:= \bar{\chi} \left( \Gamma^{\mu}{i}\partial^{\ }_{\mu} + \Gamma^{\mu} B^{\ }_{\mu} - m\, U \Sigma^{\ }_{3} U^{\dag} - \eta \right) \chi, \end{split} \end{equation} where \begin{equation} B^{\ }_{\mu}(x)= b^{0}_{\mu}(x) + b^{\mathrm{a}}_{\mu}(x) \Sigma^{\mathrm{a}}, \qquad \mu=0,1,2, \end{equation} are arbitrary elements of the Lie algebra $\mathrm{u}(2)=\mathrm{u}(1)\oplus\mathrm{su}(2)$ and \begin{equation} U(x)= e^{ {i} u^{\ }_{\mathrm{0}}(x) } e^{ {i} u^{\ }_{\mathrm{a}}(x) \Sigma^{\ }_{\mathrm{a}} }, \qquad u^{\ }_{\mathrm{0}}(x), u^{\ }_{\mathrm{a}}(x)\in\mathbb{R}, \end{equation} \end{subequations} is an arbitrary element of U(2). As the mapping between the unit vector $\boldsymbol{n}(x)$ and $U(x)$ is one to many, the Lagrangian and the Grassmann measure in Eq.~(\ref{eq: def Z bis bis}) are both invariant under the local U(2) gauge transformation \begin{subequations} \label{eq: symmetries bis bis bis} \begin{equation} \begin{split} & \bar\chi\to \bar\chi\,V^{\dag}, \qquad \chi\to V\chi, \\ & B^{\ }_{\mu}\to V B^{\ }_{\mu} V^{\dag} - {i} V^{\dag} \partial^{\ }_{\mu} V, \\ & U\to VU, \end{split} \end{equation} parametrized by the smooth space- and time-dependent \begin{equation} V(x):= e^{ {i} \left[ v^{\ }_{0}(x) + v^{\ }_{\mathrm{a}}(x) \Sigma^{\ }_{\mathrm{a}} \right] } \in\mathrm{U}(2), \end{equation} and under the global U(1)$\times$U(1) transformation \begin{equation} U\to U\, W, \qquad W:= e^{{i}\phi^{\ }_{0}} e^{{i}\phi^{\ }_{3}\Sigma^{\ }_{3}}, \end{equation} \end{subequations} parametrized by the real-valued \textit{numbers} $\phi^{\ }_{0}$ and $\phi^{\ }_{3}$. 
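The matrix algebra underlying this hidden U(2) structure can be verified numerically in the explicit chiral representation (a sketch). Note that, since each $\Sigma^{\ }_{\mathrm{a}}$ squares to the identity, the su(2) structure constants carry the Pauli-matrix normalization, $[\Sigma^{\ }_{\mathrm{a}},\Sigma^{\ }_{\mathrm{b}}]=2{i}\epsilon^{\ }_{\mathrm{abc}}\Sigma^{\ }_{\mathrm{c}}$:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

alpha = [kron(s3, s1), kron(s3, s2), kron(s3, s3)]
beta = kron(s1, s0)
gamma = [beta] + [beta @ a for a in alpha]               # gamma^0, ..., gamma^3
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]  # diagonal in this basis
I4 = np.eye(4)

# (3+1)-dimensional Clifford algebra of the four gamma matrices
g4 = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        assert np.allclose(gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu],
                           2 * g4[mu, nu] * I4)

g53 = gamma5 @ gamma[3]
Gamma = [g53 @ gamma[mu] for mu in range(3)]             # Gamma^0, Gamma^1, Gamma^2
M = [I4.astype(complex), -1j * gamma5, gamma[3]]         # M_1, M_2, M_3
Sigma = [g53 @ Ma for Ma in M]                           # Sigma_1, Sigma_2, Sigma_3

g = np.diag([1.0, -1.0, -1.0])
for mu in range(3):
    for nu in range(3):
        assert np.allclose(Gamma[mu] @ Gamma[nu] + Gamma[nu] @ Gamma[mu],
                           2 * g[mu, nu] * I4)
    for a in range(3):
        # the Gamma's commute with the Sigma's
        assert np.allclose(Gamma[mu] @ Sigma[a] - Sigma[a] @ Gamma[mu], 0)

# Pauli-like su(2) algebra, and Sigma_3 = -gamma_5
for a in range(3):
    assert np.allclose(Sigma[a] @ Sigma[a], I4)
assert np.allclose(Sigma[0] @ Sigma[1] - Sigma[1] @ Sigma[0], 2j * Sigma[2])
assert np.allclose(Sigma[1] @ Sigma[2] - Sigma[2] @ Sigma[1], 2j * Sigma[0])
assert np.allclose(Sigma[2] @ Sigma[0] - Sigma[0] @ Sigma[2], 2j * Sigma[1])
assert np.allclose(Sigma[2], -gamma5)
```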
The transformation~(\ref{eq: local gauge symmetries bis}) or, equivalently, (\ref{eq: local gauge symmetries bis bis}) is represented by the transformation% ~(\ref{eq: symmetries bis bis bis}) with $B^{\ }_{\mu}$ given in Eqs.~(\ref{eq: from B's to a's and a5's d}) and (\ref{eq: from B's to a's and a5's e}) and $U$ given by \begin{equation} U= e^{+{i}\theta\Sigma^{\ }_{3}/2} e^{-{i}\alpha\Sigma^{\ }_{2}/2} e^{-{i}\theta\Sigma^{\ }_{3}/2} \label{eq: parametrization U} \end{equation} whereby the unit vector~(\ref{eq: para unit vector}) is parametrized by \begin{equation} \boldsymbol{n}= \left( \sin\alpha\cos\theta, -\sin\alpha\sin\theta, \cos\alpha \right)^{t}. \end{equation} (Recall that $\cos\alpha:=\mu^{\ }_{\mathrm{s}}/m$, $\sin\alpha:=|\Delta|/m$, and that the phase $\theta=-\mathrm{arg}\,\Delta$ is space and time dependent.) A gradient expansion for the partition function% ~(\ref{eq: def Z bis bis}) with an arbitrary space and time dependent $U\in\mathrm{SU}(2)$ but with $B^{\ }_{\mu}=0$ and $\eta=0$ was performed by Jaroszewicz and shown to produce the effective action for the O(3) non-linear-sigma model (NLSM) modified by a Hopf term.~\cite{Jaroszewicz84,Chen89,Hlousek90,Yakovenko90,Abanov00} This Hopf term was shown by Chen and Wilczek to vanish as soon as the TRS-breaking mass $\eta$ is larger in magnitude than the TRS mass $m$. Chen and Wilczek also showed that an Abelian Chern-Simons term for a non-vanishing $b^{0}_{\mu}\equiv a^{\ }_{\mu}$ is present if and only if $|\eta|>m$. Hopf or Chern-Simons terms can cause the fractionalization of quantum numbers.
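That the parametrization~(\ref{eq: parametrization U}) indeed rotates $\Sigma^{\ }_{3}$ into $n^{\ }_{\mathrm{a}}\Sigma^{\ }_{\mathrm{a}}$ with the unit vector above can be checked numerically (a sketch; the angles are arbitrary choices of ours). Since each $\Sigma^{\ }_{\mathrm{a}}$ squares to the identity, the exponentials reduce to $e^{{i}\varphi\Sigma}=\cos\varphi+{i}\sin\varphi\,\Sigma$:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# Sigma_a = gamma_5 gamma^3 M_a worked out in the chiral representation
Sigma = [-kron(s1, s3), kron(s2, s3), -kron(s3, s0)]

def expi(phi, S):
    """exp(i phi S) for any S with S^2 = 1."""
    return np.cos(phi) * np.eye(4) + 1j * np.sin(phi) * S

theta, alpha = 0.7, 1.1   # arbitrary test angles; cos(alpha) = mu_s / m

U = (expi(+theta / 2, Sigma[2])
     @ expi(-alpha / 2, Sigma[1])
     @ expi(-theta / 2, Sigma[2]))

n = np.array([np.sin(alpha) * np.cos(theta),
              -np.sin(alpha) * np.sin(theta),
              np.cos(alpha)])

lhs = U @ Sigma[2] @ U.conj().T               # U Sigma_3 U^dag
rhs = sum(n[a] * Sigma[a] for a in range(3))  # n_a Sigma_a
assert np.allclose(lhs, rhs)
```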
Although charge fractionalization can here also be deduced from the presence of midgap single-particle states of the Dirac Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}) in static backgrounds,% ~\cite{Hou07,Jackiw07,Chamon08a,Chamon08b,Jaroszewicz84} it is natural to explore the emergence of fractional statistics under the exchange of point-like quasiparticles by examining the fully dynamical theory encoded by the partition function% ~(\ref{eq: def Z bis}). To this end, it is essential to preserve all symmetries as we did up to now. The point-like quasiparticles whose braiding statistics we shall derive are vortices~\cite{Hou07} in the dynamical phase $\theta=-\mathrm{arg}\,\Delta$, including the case when they are accompanied by axial gauge half fluxes in $a^{\ }_{5\mu}$ that screen the interactions between vortices.% ~\cite{Jackiw07} \section{ Derivative expansion and the effective action } \label{sec: Derivative expansion} It is known that the Dirac Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}) with static backgrounds can support zero modes.% ~\cite{Hou07,Jackiw07,Chamon08a,Chamon08b,Jaroszewicz84} This can be a nuisance when computing a fermion determinant. However, it is possible to elegantly dispose of this difficulty with the help of the observation made by Jaroszewicz that a non-singular U(2) gauge transformation on the Dirac kernel in the partition function% ~(\ref{eq: def Z bis bis}) can turn a single-particle midgap state into a single-particle threshold state without changing the spectral asymmetry.% ~\cite{Jaroszewicz84,Jaroszewicz86} This is achieved by redefining the Grassmann integration variables in the partition function% ~(\ref{eq: def Z bis bis}) according to \begin{equation} \bar\chi=: \bar\chi'U^{\dag}, \qquad \chi=: U\chi'.
\label{eq: U(2) trsf on chi's} \end{equation} The partition function% ~(\ref{eq: def Z bis bis}) becomes \begin{subequations} \label{eq: def Z bis bis bis} \begin{equation} \begin{split} & Z^{\prime}_{m,\eta}[B^{\prime}_{\mu}]:= \int\mathcal{D}[\bar\chi',\chi'] \exp \left( {i} \int d^3x\, \mathcal{L}^{\prime}_{m,\eta} \right), \\ & \mathcal{L}^{\prime}_{m,\eta}:= \bar{\chi}' \left( \Gamma^{\mu}{i}\partial^{\ }_{\mu} + \Gamma^{\mu} B^{\prime}_{\mu} - m\, \Sigma^{\ }_{3} - \eta \right) \chi', \end{split} \end{equation} where \begin{equation} B^{\prime}_{\mu}= U^{\dag} B^{\ }_{\mu} U + U^{\dag} {i}\partial^{\ }_{\mu} U \label{eq: def B'mu} \end{equation} \end{subequations} need not be a pure gauge because of the term $U^{\dag}B^{\ }_{\mu}U$. The symmetries~(\ref{eq: symmetries bis bis bis}) of the Lagrangian and the Grassmann measure in Eq.~(\ref{eq: def Z bis bis}) become the invariance of the Lagrangian and the Grassmann measure in Eq.~(\ref{eq: def Z bis bis bis}) under the local U(2) gauge symmetry \begin{subequations} \label{eq: symmetries bis bis bis bis} \begin{equation} \begin{split} & \bar\chi'\to \bar\chi', \qquad \chi'\to \chi', \\ & B^{\ }_{\mu}\to V B^{\ }_{\mu} V^{\dag} - {i} V^{\dag} \partial^{\ }_{\mu} V, \\ & U\to VU, \end{split} \label{eq: symmetries bis bis bis bis a} \end{equation} parametrized by the space- and time-dependent $ V(x) \in\mathrm{U}(2) $ and under the global U(1)$\times$U(1) gauge symmetry \begin{equation} \begin{split} & \bar\chi'\to \bar\chi'\,W, \qquad \chi'\to W^{\dag}\chi', \qquad U\to U\, W, \end{split} \label{eq: U(1) U(1) b} \end{equation} \label{eq: symmetries bis bis bis bis b} \end{subequations} \noindent parametrized by the space and time independent $W:= \exp({i}\phi^{\ }_{0}) \exp({i}\phi^{\ }_{3}\Sigma^{\ }_{3})$. Notice that \begin{equation} B^{\prime}_{\mu}\to W^{\dag}\,B^{\prime}_{\mu}\,W \end{equation} under the transformation% ~(\ref{eq: symmetries bis bis bis bis}). 
Evidently, the transformed Dirac fermions are local U(2) gauge singlets. Thus, by dressing the original Dirac fermions into local U(2) gauge singlets, any midgap single-particle states from the original static Dirac Hamiltonian have migrated to the threshold of the continuum part of the transformed single-particle spectrum, provided the single-particle spectral gap has not closed, i.e., $m\neq|\eta|$ in the parameter space $(m,\eta)\in\mathbb{R}^{2}$ of Fig.~\ref{fig:phase-diagram}. This dressing is achieved without changing the spectral asymmetry in any region of Fig.~\ref{fig:phase-diagram} in which the single-particle gap remains open, for the U(2) gauge transformation is not singular. The parametrization \begin{equation} \begin{split} & b^{\prime 0}_{\mu}= - a^{\ }_{\mu}, \\ & b^{\prime 1}_{\mu}= - \sin \alpha \cos \theta \left( a^{\ }_{5 \mu} - \frac{1}{2} \partial^{\ }_{\mu}\theta \right), \\ & b^{\prime 2}_{\mu}= + \sin \alpha \sin \theta \left( a^{\ }_{5 \mu} - \frac{1}{2} \partial^{\ }_{\mu}\theta \right), \\ & b^{\prime 3}_{\mu}= + \left( \cos \alpha\, a^{\ }_{5 \mu} + \frac{ 1 - \cos\alpha } { 2 } \partial^{\ }_{\mu} \theta \right), \end{split} \label{eq: paramertization B'mu} \end{equation} of $B^{\prime}_{\mu}= b^{\prime\mathrm{0}}_{\mu} + b^{\prime\mathrm{a}}_{\mu}\Sigma^{\mathrm{a}}$, where $\mu=0,1,2$, follows from inserting Eqs.~(\ref{eq: from B's to a's and a5's d}), (\ref{eq: from B's to a's and a5's e}), and (\ref{eq: parametrization U}) into Eq.~(\ref{eq: def B'mu}).
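The $\partial^{\ }_{\mu}\theta$ pieces of Eq.~(\ref{eq: paramertization B'mu}) come entirely from the pure-gauge term $U^{\dag}{i}\partial^{\ }_{\mu}U$. A finite-difference sketch of this piece (with $a^{\ }_{\mu}=a^{\ }_{5\mu}=0$ and $\alpha$ held constant; helper names are ours, standard library only):

```python
import math

I2 = [[1, 0], [0, 1]]
S2 = [[0, -1j], [1j, 0]]
S3 = [[1, 0], [0, -1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def exp_i(phi, S):
    # exp(i phi S) = cos(phi) 1 + i sin(phi) S for any Pauli matrix S.
    return [[math.cos(phi) * I2[i][j] + 1j * math.sin(phi) * S[i][j]
             for j in range(2)] for i in range(2)]

def U(alpha, theta):
    return mul(exp_i(theta / 2, S3), mul(exp_i(-alpha / 2, S2),
                                         exp_i(-theta / 2, S3)))

def pure_gauge_components(alpha, theta, h=1e-6):
    # M = i U^dag dU/dtheta, decomposed as c0 1 + c1 S1 + c2 S2 + c3 S3.
    Up, Um = U(alpha, theta + h), U(alpha, theta - h)
    dU = [[(Up[i][j] - Um[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
    M = mul(dag(U(alpha, theta)), dU)
    M = [[1j * M[i][j] for j in range(2)] for i in range(2)]
    c0 = (M[0][0] + M[1][1]) / 2
    c1 = (M[0][1] + M[1][0]) / 2
    c2 = 1j * (M[0][1] - M[1][0]) / 2
    c3 = (M[0][0] - M[1][1]) / 2
    return c0, c1, c2, c3

def pure_gauge_err(alpha, theta):
    # Expected coefficients per unit d(theta), read off the parametrization
    # with a_mu = a_{5 mu} = 0.
    want = (0.0,
            +math.sin(alpha) * math.cos(theta) / 2,
            -math.sin(alpha) * math.sin(theta) / 2,
            (1 - math.cos(alpha)) / 2)
    got = pure_gauge_components(alpha, theta)
    return max(abs(c - w) for c, w in zip(got, want))

assert pure_gauge_err(0.8, 0.5) < 1e-8
```

The numerical derivative reproduces the coefficients $b^{\prime\mathrm{a}}_{\mu}/\partial^{\ }_{\mu}\theta$ quoted above, including the geometric factor $(1-\cos\alpha)/2$ in the $\Sigma^{\ }_{3}$ channel.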
The transformation law of Eq.~(\ref{eq: paramertization B'mu}) under the local U(1)$\otimes$U(1) gauge transformation~(\ref{eq: local gauge symmetries bis}) is \begin{equation} \begin{split} & b^{\prime 0}_{\mu}\to b^{\prime 0}_{\mu} + \partial^{\ }_{\mu}\phi, \\ & b^{\prime 1}_{\mu}\to \cos(2\phi^{\ }_{5})\, b^{\prime 1}_{\mu} - \sin(2\phi^{\ }_{5})\, b^{\prime 2}_{\mu}, \\ & b^{\prime 2}_{\mu}\to \sin(2\phi^{\ }_{5})\, b^{\prime 1}_{\mu} + \cos(2\phi^{\ }_{5})\, b^{\prime 2}_{\mu}, \\ & b^{\prime 3}_{\mu}\to b^{\prime 3}_{\mu} - \partial^{\ }_{\mu}\phi^{\ }_{5}. \end{split} \label{eq: gauge trs law parametrization b'mu} \end{equation} At this stage, it is convenient to define the effective action (Lagrangian) \begin{equation} S^{\mathrm{eff}}_{m,\eta}[B^{\prime}_{\mu}]\equiv \int d^{3}x\, \mathcal{L}^{\mathrm{eff}}_{m,\eta}:= -{i}\ln Z^{\prime}_{m,\eta}[B^{\prime}_{\mu}] \end{equation} in the background field $B^{\prime}_{\mu}$ given by Eq.~(\ref{eq: gauge trs law parametrization b'mu}). This effective action is constrained by the gauge symmetries in the following way. Any transformation of the Grassmann integration variables $\bar\chi'$ and $\chi'$ with unity for the Jacobian leaves the numerical value of the partition function% ~(\ref{eq: def Z bis bis bis}) unchanged. As the Grassmann measure in the partition function% ~(\ref{eq: def Z bis bis bis}) is invariant under the local U(1)$\otimes$U(1) transformation \begin{equation} \begin{split} & \bar\chi'\to \bar\chi'\, V^{\dag}, \quad \chi'\to V\, \chi', \quad V:= e^{+{i}(\phi-\phi^{\ }_{5}\Sigma^{\ }_{3})}, \end{split} \label{eq: dressing chi'} \end{equation} it follows that \begin{equation} Z^{\prime}_{m,\eta}[B^{\prime}_{\mu}]= Z^{\prime}_{m,\eta}[ V^{\dag}B^{\prime}_{\mu}V-V^{\dag}{i}\partial^{\ }_{\mu}V]. 
\end{equation} The partition function% ~(\ref{eq: def Z bis bis bis}) thus takes the form \begin{subequations} \label{eq: L eff if m neq 0} \begin{equation} Z^{\prime}_{m,\eta}[B^{\prime}_{\mu}]= \exp \left( {i} \int d^{3}x\, \mathcal{L}^{\mathrm{eff}}_{m,\eta} \right) \end{equation} where \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta}=&\, \hphantom{+} C^{(0)}_{11} \left( b^{\prime 1 \rho} b^{\prime 1}_{\rho} + b^{\prime 2 \rho} b^{\prime 2}_{\rho} \right) \\ &\, + C^{(1)}_{00} \epsilon^{\nu\rho\kappa} b^{\prime 0}_{\nu} \partial^{\ }_{\rho} b^{\prime 0}_{\kappa} + C^{(1)}_{33} \epsilon^{\nu\rho\kappa} b^{\prime 3}_{\nu} \partial^{\ }_{\rho} b^{\prime 3}_{\kappa} \\ &\, + C^{(1)}_{11} \epsilon^{\nu\rho\kappa} \left( b^{\prime 1}_{\nu} \partial^{\ }_{\rho} b^{\prime 1}_{\kappa} + b^{\prime 2}_{\nu} \partial^{\ }_{\rho} b^{\prime 2}_{\kappa} - 2 \epsilon^{\mathrm{a}\mathrm{b}3} b^{\prime \mathrm{a}}_{\nu} b^{\prime \mathrm{b}}_{\rho} b^{\prime 3}_{\kappa} \right) \\ &\, + C^{(1)}_{03} \epsilon^{\nu\rho\kappa} b^{\prime 0}_{\nu} \partial^{\ }_{\rho} b^{\prime 3}_{\kappa} +\ldots, \end{split} \label{eq: L eff m not 0} \end{equation} \end{subequations} up to first order in a derivative expansion. This Lagrangian changes by the usual Abelian Chern-Simons boundary terms under the gauge transformation% ~(\ref{eq: gauge trs law parametrization b'mu}). The real-valued coefficients $C^{(0)}_{11}$, $C^{(1)}_{00}$, $C^{(1)}_{33}$, $C^{(1)}_{11}$, and $C^{(1)}_{03}$ are functions of the parameters $m\in\mathbb{R}$ and $\eta\in\mathbb{R}$ with $m\neq|\eta|$. A tedious calculation, summarized in Appendix% ~\ref{appsec: coefficients}, yields the values shown in Table~\ref{tab:Cs}. \begin{table*} \caption{ \label{tab:Cs} Coefficients for the effective action in Eq.~(\ref{eq: L eff if m neq 0}). The calculation leading to these values is presented in Appendix~\ref{appsec: coefficients}. 
\\} \begin{ruledtabular} \begin{tabular}{ccccc} &$C^{(0)}_{11}$& $C^{(1)}_{00} = C^{(1)}_{33}$& $C^{(1)}_{11}$& $C^{(1)}_{03}$\\ &&&\\ $|\eta|<m$& $\frac{3m^2-\eta^2}{6\pi m}$ & $0$ & $\frac{\eta}{6\pi m}$ & $\frac{1}{2\pi}\;\mathrm{sgn}\,\mu^{\ }_{\mathrm{s}}$ \\ \\ $m<|\eta|$& $\frac{m^2}{3\pi |\eta|}$ & $\frac{1}{4\pi}\;{\rm sgn}\,\eta$ & $\frac{3\eta^2-m^2}{12\pi \eta^2}\;{\rm sgn}\,\eta$ & $0$ \\ \\ \end{tabular} \end{ruledtabular} \end{table*} Observe that the coefficients $C^{(1)}_{00}$, $C^{(1)}_{33}$, and $C^{(1)}_{03}$ that multiply the terms fixed by the local U(1)$\otimes$U(1) gauge invariance in the effective Lagrangian~(\ref{eq: L eff m not 0}) can only take a discrete set of values, while the coefficients $C^{(0)}_{11}$ and $C^{(1)}_{11}$ that multiply the terms fixed by the global U(1) gauge invariance can vary continuously with $m$ and $\eta$. The case $m=0$ when TRS is maximally broken is special as the symmetry-breaking term $m\Sigma^{\ }_{3}$ drops out from the Lagrangian in Eq.~(\ref{eq: def Z bis bis bis}). The matrix $V$ in the change of Grassmann variables% ~(\ref{eq: dressing chi'}) is then not restricted to the Abelian subgroup U(1)$\otimes$ U(1) of U(1)$\otimes$SU(2) but can be arbitrarily chosen in U(2). Consequently, $C^{(1)}_{33}=C^{(1)}_{11}$ in this limit, which is consistent with the values in Table~\ref{tab:Cs}. 
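For bookkeeping, the entries of Table~\ref{tab:Cs} can be encoded as a small function; the sketch below (our own encoding, with the table entries transcribed verbatim) checks the $m\to0$ matching of $C^{(1)}_{33}$ and $C^{(1)}_{11}$ argued above, as well as the $\eta=0$ limit:

```python
import math

def sgn(x):
    return (x > 0) - (x < 0)

def coeffs(m, eta, mu_s):
    """Table of coefficients; valid away from the gap closing |eta| = m."""
    if abs(eta) < m:                       # weak TRS breaking
        return {"C0_11": (3 * m**2 - eta**2) / (6 * math.pi * m),
                "C1_00": 0.0,
                "C1_33": 0.0,
                "C1_11": eta / (6 * math.pi * m),
                "C1_03": sgn(mu_s) / (2 * math.pi)}
    else:                                  # strong TRS breaking, m < |eta|
        return {"C0_11": m**2 / (3 * math.pi * abs(eta)),
                "C1_00": sgn(eta) / (4 * math.pi),
                "C1_33": sgn(eta) / (4 * math.pi),
                "C1_11": (3 * eta**2 - m**2) / (12 * math.pi * eta**2) * sgn(eta),
                "C1_03": 0.0}

# At m = 0 (TRS maximally broken) the U(2) argument requires C1_33 = C1_11;
# the table indeed gives sgn(eta)/(4 pi) for both.
c = coeffs(0.0, 1.0, 1.0)
assert abs(c["C1_33"] - c["C1_11"]) < 1e-12
assert abs(c["C1_33"] - 1 / (4 * math.pi)) < 1e-12

# At eta = 0 (TRS intact) the axial stiffness reduces to m/(2 pi).
c = coeffs(1.0, 0.0, 1.0)
assert abs(c["C0_11"] - 1 / (2 * math.pi)) < 1e-12
assert c["C1_11"] == 0.0 and c["C1_03"] == 1 / (2 * math.pi)
```

In particular, the two Chern-Simons coefficients $C^{(1)}_{33}$ and $C^{(1)}_{11}$ indeed come out equal in the limit $m\to0$.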
These (equal) coefficients then multiply an SU(2) non-Abelian Chern-Simons term when $m=0$, and hence must be quantized~\cite{Deser82}, i.e., \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m=0,\eta}=&\, \hphantom{+} \frac{\mathrm{sgn}\,\eta}{4\pi} \epsilon^{\nu\rho\kappa} \left( \delta^{\mathrm{a}\mathrm{b}} b^{\prime \mathrm{a}}_{\nu} \partial^{\ }_{\rho} b^{\prime \mathrm{b}}_{\kappa} - \frac{2}{3} \epsilon^{\mathrm{a}\mathrm{b}\mathrm{c}} b^{\prime \mathrm{a}}_{\nu} b^{\prime \mathrm{b}}_{\rho} b^{\prime \mathrm{c}}_{\kappa} \right) \\ &\, + \frac{\mathrm{sgn}\,\eta}{4\pi} \epsilon^{\nu\rho\kappa} b^{\prime 0}_{\nu} \partial^{\ }_{\rho} b^{\prime 0}_{\kappa} +\ldots \end{split} \label{eq: L eff if m = 0 final} \end{equation} where the first line on the right-hand side is nothing but the level 1 SU(2) Chern-Simons term. In the case $\eta=0$, when TRS holds, Eq.~(\ref{eq: L eff m not 0}) simplifies to \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta=0}=&\, \hphantom{+} \frac{m}{2\pi} \left( b^{\prime 1 \rho} b^{\prime 1}_{\rho} + b^{\prime 2 \rho} b^{\prime 2}_{\rho} \right) \\ &\, + \frac{\mathrm{sgn}\,\mu^{\ }_{\mathrm{s}}}{2\pi}\; \epsilon^{\nu\rho\kappa} b^{\prime 0}_{\nu} \partial^{\ }_{\rho} b^{\prime 3}_{\kappa} +\ldots. \end{split} \label{eq: L eff if eta = 0 final} \end{equation} Notice that the second line is a double Chern-Simons term on the fields $b^{\prime 0}$ and $b^{\prime 3}$, which is also called a BF Chern-Simons theory.%
~\cite{Blau91,Hansson04} We close this section with the main intermediary step of this paper from which the fractionalization of the fermion charge and statistical phase follows.
Insertion of Eq.~(\ref{eq: paramertization B'mu}) into Eq.~(\ref{eq: L eff if m neq 0}) gives the effective action \begin{widetext} \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta}=&\, \hphantom{+} C^{(0)}_{11} \sin^2 \alpha \left( a^{\rho}_{5} - \frac{1}{2} \partial^{\rho}\theta \right) \left( a^{\ }_{5\rho} - \frac{1}{2} \partial^{\ }_{\rho}\theta \right) \\ &\, + C^{(1)}_{00} \epsilon^{\nu\rho\kappa} a^{\ }_{\nu} \partial^{\ }_{\rho} a^{\ }_{\kappa} + C^{(1)}_{33} \epsilon^{\nu\rho\kappa} \left( \cos\alpha\, a^{\ }_{5\nu} + \frac{1-\cos\alpha}{2} \partial^{\ }_{\nu} \theta \right) \partial^{\ }_{\rho} \left( \cos\alpha\, a^{\ }_{5\kappa} + \frac{1-\cos\alpha}{2} \partial^{\ }_{\kappa}\theta \right) \\ &\, + C^{(1)}_{11} \sin^2\alpha\, \epsilon^{\nu\rho\kappa} \left( a^{\ }_{5\nu} - \frac{1}{2} \partial^{\ }_{\nu} \theta \right) \partial^{\ }_{\rho} \left( a^{\ }_{5\kappa} - \frac{1}{2} \partial^{\ }_{\kappa} \theta \right) \\ &\, - C^{(1)}_{03} \epsilon^{\nu\rho\kappa} a^{\ }_{\nu} \partial^{\ }_{\rho} \left( \cos\alpha\, a^{\ }_{5\kappa} + \frac{1-\cos\alpha}{2} \partial^{\ }_{\kappa} \theta \right) + \ldots \end{split} \label{eq: final effective action} \end{equation} \end{widetext} with the local U(1)$\otimes$U(1) gauge invariance%
\begin{equation} \begin{split} & a^{\ }_{ \mu}\to a^{\ }_{ \mu} - \partial^{\ }_{\mu}\phi, \\ & a^{\ }_{5\mu}\to a^{\ }_{5\mu} - \partial^{\ }_{\mu}\phi^{\ }_{5}, \qquad \theta\to \theta-2\phi^{\ }_{5}, \end{split} \label{eq: effective local gauge symmetries final} \end{equation} for any compact and boundary-less manifold in (2+1)-dimensional space and time. Some comments are in order here. First, the coefficient $C^{(0)}_{11}$ controls the axial phase stiffness of the Anderson-Higgs contribution to the effective action. Second, each of the coefficients $C^{(1)}_{00}$, $C^{(1)}_{33}$, and $C^{(1)}_{11}$ multiplies a Chern-Simons term that is diagonal with respect to the gauge fields.
The coefficient $C^{(1)}_{03}$ is different in that regard since it couples the gauge field $a^{\ }_{\mu}$ responsible for the conservation of the fermion number to the axial gauge field $a^{\ }_{5\mu}$ on the one hand, and the axial singlet linear combination $\tilde{a}^{\ }_{5\mu}\equiv a^{\ }_{5\mu}-\partial^{\ }_{\mu}\theta/2$ on the other hand. Such an off-diagonal coupling is reminiscent of so-called BF Chern-Simons theories.% ~\cite{Blau91,Hansson04} It is the coefficient $C^{(1)}_{03}$ that controls the charge assignments in the field theory% ~(\ref{eq: final effective action}) and, for later convenience, we break its contribution to the induced fermionic charge into two pieces, \begin{equation} \begin{split} & \mathcal{L}^{\ }_{\mathrm{BF}}:= \mathcal{L}^{(1)}_{\mathrm{BF}} + \mathcal{L}^{(2)}_{\mathrm{BF}}, \\ & \mathcal{L}^{(1)}_{\mathrm{BF}}:= C^{(1)}_{03} \left(1-\cos\alpha\right) a d \tilde{a}^{\ }_{5}, \\ & \mathcal{L}^{(2)}_{\mathrm{BF}}:= - C^{(1)}_{03} a d a^{\ }_{5}. \end{split} \label{eq: def LBF} \end{equation} Here, we have introduced the short-hand notation $adb\equiv \epsilon^{\mu\nu\rho}a^{\ }_{\mu}\partial^{\ }_{\nu}b^{\ }_{\rho}$. \section{ Fractional fermion charge } \label{sec: Fractional charge quantum number} Equipped with Eq.~(\ref{eq: final effective action}) and Table~\ref{tab:Cs} we compute in this section the leading contributions in the gradient expansion to the expectation value of the conserved charge current \begin{equation} \left\langle j^{\mu} \right\rangle^{\ }_{m,\eta}:= -{i} \left. \frac{ \delta \ln Z^{\prime}_{m,\eta}[B'] } { \delta a^{\ }_{\mu} } \right|^{ }_{a^{\ }_{\mu}=0}. \end{equation} The induced fermion charge current is \begin{equation} j^{\mu}= - C^{(1)}_{03} \epsilon^{\mu\rho\kappa} \partial^{\ }_{\rho} \left( \cos\alpha\, a^{\ }_{5\kappa} + \frac{1-\cos\alpha}{2} \partial^{\ }_{\kappa} \theta \right) +\ldots. 
\label{eq: final induced fermion current} \end{equation} It obeys the continuity equation \begin{equation} \partial^{\ }_{\mu}j^{\mu}=0. \end{equation} The total induced fermionic charge \begin{equation} Q:= \int d^{2}\boldsymbol{r}\, j^{0}(\boldsymbol{r},t) \end{equation} is thus time-independent and given by \begin{equation} Q= - C^{(1)}_{03} \oint d\boldsymbol{l} \cdot \left( \cos\alpha\, \boldsymbol{a}^{\ }_{5} + \frac{1-\cos\alpha}{2} \boldsymbol{\partial} \theta \right) \end{equation} with the help of Stokes' theorem. The induced fermionic charge is \begin{subequations} \begin{equation} Q= - 2\pi\;C^{(1)}_{03} \left( \frac{n^{\ }_{\theta}}{2} + \frac{1}{2} \left( n^{\ }_{5} - n^{\ }_{\theta} \right) \cos\alpha \right) \end{equation} for the special case when the vector fields $\boldsymbol{a}^{\ }_{5}$ and $\boldsymbol{\partial}\theta$ support, on a circular boundary at infinity, the net vorticity \begin{equation} \begin{split} & a^{i}_{5}\to - \frac{n^{\ }_{5}}{2} \epsilon^{ij} \frac{r^{j}}{\boldsymbol{r}^{2}}, \qquad n^{\ }_{5}\in\mathbb{Z}, \\ & \partial^{i}\theta\to - n^{\ }_{\theta} \epsilon^{ij} \frac{r^{j}}{\boldsymbol{r}^{2}}, \qquad n^{\ }_{\theta}\in\mathbb{Z}, \end{split} \label{eq: net vorticity} \end{equation} \end{subequations} respectively. In the absence of the axial gauge flux, $n^{\ }_{5}=0$, while the condition for the axial vorticity to screen the (Kekul\'e) vorticity is $n^{\ }_{5} = n^{\ }_{\theta}$. Notice that because $C^{(1)}_{03}$ vanishes for $|\eta|>m$, there is no charge bound to the vortices in that regime.
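In the complementary regime $|\eta|<m$, where Table~\ref{tab:Cs} gives $C^{(1)}_{03}=\frac{1}{2\pi}\,\mathrm{sgn}\,\mu^{\ }_{\mathrm{s}}$, the vorticity formula above can be evaluated directly; a minimal numerical sketch (standard library only, function names ours):

```python
import math

def sgn(x):
    return (x > 0) - (x < 0)

def induced_charge(alpha, n_theta, n5, mu_s=1.0):
    # Q = -2 pi C1_03 * ( n_theta/2 + (n5 - n_theta) cos(alpha)/2 ),
    # with C1_03 = sgn(mu_s)/(2 pi) in the |eta| < m phase.
    C1_03 = sgn(mu_s) / (2 * math.pi)
    return -2 * math.pi * C1_03 * (n_theta / 2
                                   + (n5 - n_theta) * math.cos(alpha) / 2)

alpha = 0.9          # cos(alpha) = mu_s/m, sin(alpha) = |Delta|/m
# Unscreened vortex (n5 = 0): Q = -sgn(mu_s) sin^2(alpha/2) n_theta.
assert abs(induced_charge(alpha, 1, 0) + math.sin(alpha / 2) ** 2) < 1e-12
# Screened vortex (n5 = n_theta): Q = -sgn(mu_s) n_theta/2, alpha-independent.
assert abs(induced_charge(alpha, 1, 1) + 0.5) < 1e-12
```

This reproduces the two cases quoted next.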
In contrast, when $|\eta|<m$, the charge bound to the topological defect is \begin{equation} \begin{split} Q=-\mathrm{sgn}\,\mu^{\ }_\mathrm{s}\times \begin{cases} \sin^2\frac{\alpha}{2} \;n^{\ }_{\theta} \;, & \hbox{unscreened $(n^{\ }_{5} = 0$)}, \\ \\ \frac{1}{2} \;n^{\ }_{\theta} \;, &\hbox{screened $(n^{\ }_{5} = n^{\ }_{\theta}$).} \end{cases} \end{split} \label{eq: derivation of Q} \end{equation} These results are consistent with those in Refs.~\onlinecite{Hou07}, \onlinecite{Chamon08a}, and \onlinecite{Chamon08b}. \section{ Fractional statistical angle } \label{sec: Fractional statistical angle} We start from the effective partition function \begin{equation} Z^{\mathrm{eff}}_{m,\eta}:= \int\mathcal{D}[a^{\mu},a^{\mu}_{5},\theta] \exp \left( {i} \int d^{3}x\, \mathcal{L}^{\mathrm{eff}}_{m,\eta} \right) \label{eq: def Z eff m eta} \end{equation} with the Lagrangian given by Eq.~(\ref{eq: final effective action}) and the coefficients in Table~\ref{tab:Cs}. In a static approximation, i.e., if we ignore dynamics as we did when computing the fractional charge% ~(\ref{eq: derivation of Q}), vortices are independently supported by the axial gauge field $a^{\mu}_{5}$ or by the phase $\theta$. We will analyze the exchange statistics in two separate cases. The first is when the $\theta$ vortices are dynamically screened by the half fluxes in the axial gauge field $a^{\mu}_{5}$. The second case is when the axial gauge field is suppressed, and the $\theta$ vortex is unscreened; this situation does not arise from the effective Lagrangian% ~(\ref{eq: final effective action}) itself, but it can occur when one goes beyond the linearized Dirac approximation or includes other lattice effects. \subsection{ Screened vortices } \label{subsec: Screened vortices} The exchange statistics of vortices and axial gauge fluxes follows from the effective Lagrangian for the so-called vortex currents. 
One way to obtain this effective Lagrangian in the screened case is to notice that the local axial gauge invariance together with the first line in Eq.~(\ref{eq: final effective action}) provides the screening condition, for the axial gauge potential must then track the $\theta$ field and, in particular, vortices in $\theta$ must be screened by half fluxes in $a^{\mu}_{5}$. This screening can be imposed by replacing \begin{equation} \partial^{\ }_{\mu} \theta - 2 a^{\ }_{5\mu} \to 0 \label{eq:screening-cond} \end{equation} in Eq.~(\ref{eq: final effective action}). This can be justified more precisely by using the (vortex) dual description of the $XY$ model, as presented in Appendix~\ref{sec:duality}. In effect, the fluctuations away from the condition~(\ref{eq:screening-cond}), which are penalized by the finite stiffness coefficient $C^{(0)}_{11}\sin^{2}\alpha$, can be accounted for through a Maxwell term in the dual description. However, the Maxwell term does not enter the exchange statistics. Thus, we can simply use the infinite stiffness limit or, equivalently, the condition~(\ref{eq:screening-cond}). The Lagrangian given by Eq.~(\ref{eq: final effective action}) in the screening limit%
~(\ref{eq:screening-cond}) is \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta}=&\, \hphantom{+} C^{(1)}_{00}\, \epsilon^{\nu\rho\kappa} \, a^{\ }_{\nu} \, \partial^{\ }_{\rho} a^{\ }_{\kappa} \\ &\, + \frac{1}{4}\;C^{(1)}_{33}\; \epsilon^{\nu\rho\kappa} \, \left(\partial^{\ }_{\nu} \theta\right)\, \partial^{\ }_{\rho} \left(\partial^{\ }_{\kappa}\theta \right) \\ &\, - \frac{1}{2}\; C^{(1)}_{03}\; \epsilon^{\nu\rho\kappa} \, a^{\ }_{\nu}\; \partial^{\ }_{\rho} \left( \partial^{\ }_{\kappa} \theta \right) +\ldots.
\end{split} \label{eq:effective-screened-action} \end{equation} The Lagrangian can be written in terms of the vortex current \begin{equation} \bar{j}^{\mu}_{\mathrm{vrt}}:= \frac{1}{2\pi} \epsilon^{\mu\nu\lambda} \partial^{\ }_{\nu} \partial^{\ }_{\lambda} \theta, \label{eq:def-j-theta-screened} \end{equation} that obeys the conservation law \begin{equation} \partial^{\ }_{\mu}\bar{j}^{\mu}_{\mathrm{vrt}}=0, \end{equation} using the duality representation of the $XY$ model supplemented by a Chern-Simons term in (2+1) space and time as done in Appendix~\ref{sec:duality}. This leads to the Chern-Simons Lagrangian \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta} =&\, \hphantom{+} C^{(1)}_{00} \epsilon^{\nu\rho\kappa} a^{\ }_{\nu} \partial^{\ }_{\rho} a^{\ }_{\kappa} - \pi C^{(1)}_{03} a^{\ }_{\nu} \bar{j}^{\nu}_{\mathrm{vrt}} \\ &\, + \frac{1}{4} C^{(1)}_{33} \left( \epsilon^{\nu\rho\kappa} d^{\ }_{\nu} \partial^{\ }_{\rho} d^{\ }_{\kappa} + 4\pi d^{\ }_{\nu} \bar{j}^{\nu}_{\mathrm{vrt}} \right) \\ &\, +\ldots, \end{split} \label{eq: screened dual lagrangian} \end{equation} from which the statistics carried by screened quasiparticles with the current $\bar{j}^{\nu}_{\mathrm{vrt}}$ follows. This statistics depends on the coefficients in Table~\ref{tab:Cs}. We treat separately the two phases of Fig.~\ref{fig:phase-diagram}. \subsubsection{ Weak time-reversal symmetry breaking: $|\eta|<m$ } In this limit, $ C^{(1)}_{00}= C^{(1)}_{33}= 0 $ and the effective Lagrangian~(\ref{eq: screened dual lagrangian}) reduces to \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta}=&\, - \frac{1}{2} \mathrm{sgn}\,\mu^{\ }_{\mathrm{s}} a^{\ }_{\nu} \bar{j}^{\nu}_{\mathrm{vrt}} + \ldots. \end{split} \end{equation} Thus, because of the absence of the Chern-Simons terms, the statistical angle $\Theta$ under exchange of any two screened quasiparticles is bosonic, \begin{equation} \frac{\Theta}{\pi}=0. 
\end{equation} Notice that it also follows that the induced fermionic U(1) current \begin{equation} j^{\nu}= - \frac{1}{2} \mathrm{sgn}\, \mu^{\ }_{\mathrm{s}}\ \bar{j}^{\nu}_{\mathrm{vrt}} \end{equation} that couples linearly to $a^{\ }_\mu$, is tied to the vortex current. In other words, screened quasiparticles with unit vorticity are charged objects with charge $Q=\pm1/2$ as found in Refs.% ~\onlinecite{Hou07}, \onlinecite{Chamon08a}, \onlinecite{Chamon08b}, and in Sec.~\ref{sec: Fractional charge quantum number}. \subsubsection{ Strong time-reversal symmetry breaking: $|\eta|>m$ } In this limit, $ C^{(1)}_{00}= C^{(1)}_{33}= (4\pi)^{-1}\mathrm{sgn}\,\eta $ and the effective Lagrangian~(\ref{eq: screened dual lagrangian}) reduces to \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta}=&\, \hphantom{+} \frac{1}{4\pi} \mathrm{sgn}\,\eta\, \epsilon^{\nu\rho\kappa} a^{\ }_{\nu} \partial^{\ }_{\rho} a^{\ }_{\kappa} \\ &\, + \frac{1}{16\pi} \mathrm{sgn}\,\eta \left( \epsilon^{\nu\rho\kappa} d^{\ }_{\nu} \partial^{\ }_{\rho} d^{\ }_{\kappa} + 4\pi d^{\ }_{\nu} \bar{j}^{\nu}_{\mathrm{vrt}} \right) \\ &\, +\ldots. \end{split} \end{equation} Using the coefficient of the Chern-Simons Lagrangian for the gauge field $d^{\ }_{\nu}$ and its coupling to the vortex current $\bar{j}^{\nu}_{\mathrm{vrt}}$ (see appendix~\ref{sec:duality} for the relation between the statistical angle and the coefficient in front of the Chern-Simons term), the statistical angle $\Theta$ under exchange of two screened quasiparticles with unit vorticity is \begin{equation} \frac{\Theta}{\pi}= \frac{1}{4} \mathrm{sgn}\,\eta. \end{equation} Notice that the U(1) current now vanishes, i.e., screened quasiparticles carrying fractional statistics are now charge neutral. \subsection{ Unscreened vortices } \label{subsec; Unscreened vortices} We turn to the situation when the axial gauge half fluxes are suppressed, while $\theta$ vortices are still present. 
We call these vortices unscreened quasiparticles. This situation arises if, in addition to the effective Lagrangian~(\ref{eq: final effective action}) which followed from integrating out the Dirac fermions, there are terms in the effective Lagrangian due to lattice degrees of freedom that break the axial gauge symmetry. For instance, acoustic phonons and ripples in graphene can bring about the axial vector potential $a^{\mu}_{5}$; however, in these cases there is an energy penalty of the form $a^{\ }_{5\mu}a^{\mu}_{5}$ that breaks the axial gauge invariance due to contributions to the elastic energy. The case when the axial gauge potential is absent, i.e., the quasiparticles are unscreened, is implemented by the replacement \begin{equation} a^{\ }_{5\mu} \to 0 \label{eq:unscreening-cond} \end{equation} in Eq.~(\ref{eq: final effective action}). There follows \begin{widetext} \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta}=&\, \hphantom{+} \frac{1}{4}\;C^{(0)}_{11}\; \sin^2 \alpha \left(\partial^{\rho}\theta\right) \left(\partial^{\ }_{\rho}\theta\right) \\ &\, + \left( C^{(1)}_{33}\sin^2\frac{\alpha}{2} + C^{(1)}_{11} \cos^2\frac{\alpha}{2} \right) \;\sin^2\frac{\alpha}{2} \;\; \epsilon^{\nu\rho\kappa} \left( \partial^{\ }_{\nu} \theta \right) \partial^{\ }_{\rho} \left( \partial^{\ }_{\kappa} \theta \right) \\ &\, + C^{(1)}_{00}\; \epsilon^{\nu\rho\kappa} \; a^{\ }_{\nu} \; \partial^{\ }_{\rho} a^{\ }_{\kappa} \\ &\, - C^{(1)}_{03}\;\sin^2\frac{\alpha}{2}\;\; \epsilon^{\nu\rho\kappa} a^{\ }_{\nu} \partial^{\ }_{\rho} \left( \partial^{\ }_{\kappa} \theta \right) +\ldots. 
\end{split} \label{eq: final effective action unscreened} \end{equation} This Lagrangian can be dualized with the help of the vortex current%
~(\ref{eq:def-j-theta-screened}) (see appendix~\ref{sec:duality}) \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta} =&\, -\left(8\pi^2 \;C^{(0)}_{11}\;\sin^2 \alpha\right)^{-1}\; f^{\mu\nu}\,f_{\mu\nu} +c_\mu\;\bar{j}^{\mu}_{\mathrm{vrt}} \\ &+ C^{(1)}_{00}\, \epsilon^{\nu\rho\kappa} \, a^{\ }_{\nu} \, \partial^{\ }_{\rho} a^{\ }_{\kappa} - 2\pi C^{(1)}_{03}\sin^2\frac{\alpha}{2}\; a^{\ }_{\nu}\;\bar{j}^{\nu}_{\mathrm{vrt}} \\ &\, + \left( C^{(1)}_{33}\sin^2\frac{\alpha}{2} + C^{(1)}_{11} \cos^2\frac{\alpha}{2} \right) \;\sin^2\frac{\alpha}{2} \;\; \left( \epsilon^{\nu\rho\kappa} \, d_\nu\, \partial^{\ }_{\rho} d_\kappa + 4\pi\; d^{\ }_{\nu}\;\bar{j}^{\nu}_{\mathrm{vrt}} \right) +\ldots. \end{split} \label{eq:unscreened-eff-L} \end{equation} \end{widetext} (The Maxwell term $f^{\ }_{\mu\nu}f^{\mu\nu}$ is associated with the gauge potential $c^{\ }_{\mu}$; see appendix~\ref{sec:duality}.) We shall denote with $\mathcal{L}^{\ }_{c}$ the first line of Eq.~(\ref{eq:unscreened-eff-L}). The statistics carried by unscreened quasiparticles with the current $\bar{j}^{\mu}_{\mathrm{vrt}}$ follows. This statistics depends on the coefficients in Table~\ref{tab:Cs}. We treat separately the two phases of Fig.~\ref{fig:phase-diagram}.
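The coefficient of the $\theta$-sector Chern-Simons term in Eq.~(\ref{eq:unscreened-eff-L}) collects $\sin^{4}(\alpha/2)$ from the $b^{\prime3}$ piece and $\sin^{2}(\alpha/2)\cos^{2}(\alpha/2)$ from the $b^{\prime1},b^{\prime2}$ pieces of Eq.~(\ref{eq: final effective action}) at $a^{\ }_{5\mu}=0$. A short numerical sketch of this bookkeeping (function names ours):

```python
import math

def cs_coeff_direct(C33, C11, alpha):
    # b'3 contributes C33 ((1-cos a)/2)^2; b'1, b'2 contribute C11 (sin(a)/2)^2.
    return (C33 * ((1 - math.cos(alpha)) / 2) ** 2
            + C11 * (math.sin(alpha) / 2) ** 2)

def cs_coeff_quoted(C33, C11, alpha):
    # Form quoted in the dualized Lagrangian:
    # ( C33 sin^2(a/2) + C11 cos^2(a/2) ) sin^2(a/2).
    s2 = math.sin(alpha / 2) ** 2
    c2 = math.cos(alpha / 2) ** 2
    return (C33 * s2 + C11 * c2) * s2

for alpha in (0.3, 1.0, 2.5):
    assert abs(cs_coeff_direct(0.7, -0.2, alpha)
               - cs_coeff_quoted(0.7, -0.2, alpha)) < 1e-12
```

The two expressions agree identically, as required by the half-angle identities $(1-\cos\alpha)/2=\sin^{2}(\alpha/2)$ and $\sin^{2}\alpha=4\sin^{2}(\alpha/2)\cos^{2}(\alpha/2)$.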
\subsubsection{ Weak time-reversal symmetry breaking: $|\eta|<m$ } \label{subsubsec: Weak time-reversal symmetry breaking} In this limit, $C^{(1)}_{00}=C^{(1)}_{33}=0$, $C^{(1)}_{11}=\eta/(6\pi m)$, $C^{(1)}_{03}=(2\pi)^{-1}\mathrm{sgn}\,\mu^{\ }_{\mathrm{s}}$, and the effective Lagrangian~(\ref{eq:unscreened-eff-L}) reduces to \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta}=&\, \mathcal{L}^{\ }_{c} - \mathrm{sgn}\,\mu^{\ }_{\mathrm{s}}\, \sin^2\frac{\alpha}{2}\, a^{\ }_{\nu}\, \bar{j}^{\nu}_{\mathrm{vrt}} \\ &\, + \frac{\eta}{24\pi m} \sin^2\alpha \left( \epsilon^{\nu\rho\kappa} d^{\ }_{\nu}\, \partial^{\ }_{\rho} d^{\ }_{\kappa} + 4\pi\; d^{\ }_{\nu} \bar{j}^{\nu}_{\mathrm{vrt}} \right). \end{split} \label{eq:eff-un-2} \end{equation} Using the coefficient of the Chern-Simons Lagrangian for the gauge field $d^{\ }_{\nu}$ and its coupling to the vortex current $\bar{j}^{\nu}_{\mathrm{vrt}}$ (see appendix~\ref{sec:duality} for the relation between the statistical angle and the coefficient in front of the Chern-Simons term), the statistical angle $\Theta$ under exchange of two unscreened quasiparticles with unit vorticity is \begin{equation} \frac{\Theta}{\pi}= \frac{\eta}{6m}\;\sin^2\alpha = \frac{2\eta}{3m}\;|Q|(1-|Q|) \label{eq:statistical-angle-eta-smaller-m} \end{equation} by Eq.~(\ref{eq: derivation of Q}). Notice that it also follows that the induced fermionic U(1) current \begin{equation} j^{\nu}= - \mathrm{sgn}\,\mu^{\ }_{\mathrm{s}}\, \sin^2\frac{\alpha}{2}\, \bar{j}^{\nu}_{\mathrm{vrt}} \end{equation} that couples linearly to $a^{\ }_\mu$, is tied to the vortex current. In other words, unscreened quasiparticles with unit vorticity are charged objects with charge $Q=\pm\sin^2(\alpha/2)$ that varies continuously as a function of the ratio $\mu^{\ }_\mathrm{s}/m$ [see Eq.~(\ref{eq: derivation of Q})] as found in Refs.%
~\onlinecite{Hou07}, \onlinecite{Chamon08a}, and \onlinecite{Chamon08b}.
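The second equality in Eq.~(\ref{eq:statistical-angle-eta-smaller-m}) is the identity $\sin^{2}\alpha=4|Q|(1-|Q|)$ with $|Q|=\sin^{2}(\alpha/2)$; a quick numerical check (function names ours):

```python
import math

def theta_over_pi(eta, m, alpha):
    # First form quoted in the text.
    return eta / (6 * m) * math.sin(alpha) ** 2

def theta_over_pi_via_Q(eta, m, alpha):
    # Second form, written through |Q| = sin^2(alpha/2).
    Q = math.sin(alpha / 2) ** 2
    return 2 * eta / (3 * m) * Q * (1 - Q)

for alpha in (0.2, 0.9, 1.7):
    assert abs(theta_over_pi(0.5, 2.0, alpha)
               - theta_over_pi_via_Q(0.5, 2.0, alpha)) < 1e-12
```

The statistical angle thus vanishes both for $\alpha\to0$ (vanishing $|\Delta|$) and for $|Q|\to1$, and is maximal at half-integral charge.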
\subsubsection{ Strong time-reversal symmetry breaking: $|\eta|>m$ } \label{subsubsec: Strong time-reversal symmetry breaking} In this limit, $C^{(1)}_{00}=C^{(1)}_{33}=(4\pi)^{-1}\mathrm{sgn}\,\eta$, $C^{(1)}_{11}=(3\eta^{2}-m^{2})/(12\pi\eta^{2})\,\mathrm{sgn}\,\eta$, $C^{(1)}_{03}=0$, and the effective Lagrangian~(\ref{eq:unscreened-eff-L}) reduces to \begin{widetext} \begin{equation} \begin{split} \mathcal{L}^{\mathrm{eff}}_{m,\eta} =&\, \hphantom{+} \mathcal{L}^{\ }_{c} + \frac{1}{4\pi}\; \mathrm{sgn}\,\eta \; \epsilon^{\nu\rho\kappa} \, a^{\ }_{\nu} \, \partial^{\ }_{\rho} a^{\ }_{\kappa} \\ &\, +\frac{1}{4\pi}\; \mathrm{sgn}\,\eta \; \left[ \left( 1-\frac{m^2}{3\eta^2} \right) + \frac{m^2}{3\eta^2} \sin^2\frac{\alpha}{2} \right] \;\sin^2\frac{\alpha}{2}\;\; \left( \epsilon^{\nu\rho\kappa} \, d^{\ }_{\nu}\, \partial^{\ }_{\rho} d^{\ }_{\kappa} + 4\pi d^{\ }_{\nu}\;\bar{j}^{\nu}_{\mathrm{vrt}} \right) +\ldots. \end{split} \end{equation} \end{widetext} Using the coefficient of the Chern-Simons Lagrangian for the gauge field $d^{\ }_{\nu}$ and its coupling to the vortex current $\bar{j}^{\nu}_{\mathrm{vrt}}$ (see appendix~\ref{sec:duality} for the relation between the statistical angle and the coefficient in front of the Chern-Simons term), the statistical angle under exchange of two unscreened quasiparticles with unit vorticity is \begin{equation} \begin{split} \frac{\Theta}{\pi}&=\, \mathrm{sgn}\,\eta \; \left[ \left( 1 - \frac{m^2}{3\eta^2} \right) + \frac{m^2}{3\eta^2} \sin^2\frac{\alpha}{2} \right] \;\sin^2\frac{\alpha}{2} \\ &=\, \mathrm{sgn}\,\eta \; \left[ \left( 1 - \frac{m^2}{3\eta^2} \right) + \frac{m^2}{3\eta^2} |Q| \right] \;|Q|. \end{split} \label{eq: fractional Theta unscreened phase} \end{equation} Here, we have used the value of the charge $|Q|=\sin^2(\alpha/2)$ for the complementary phase $|\eta|<m$. Notice that the induced fermionic charge current $j^{\ }_{\mu}$ now vanishes. 
The unscreened quasiparticles carrying the fractional statistics%
~(\ref{eq: fractional Theta unscreened phase}) are thus charge neutral, i.e., $|Q|$ in Eq.~(\ref{eq: fractional Theta unscreened phase}) should not be confused with the (now vanishing) electronic charge of unscreened quasiparticles. We stress that the quenching of the dynamics in the axial gauge field $a^{\mu}_{5}$ implies the breaking of the axial gauge symmetry. It can be thought of as a mean-field approximation needed to interpret the numerical simulations of the Berry phase acquired by the Slater determinant of lattice fermions when one vortex is moved in a quasi-static way along a closed curve around another vortex. The quench approximation can also be justified if terms that explicitly break the axial gauge symmetry, such as a mass term for $a^{\ }_{5}$, were added to the Lagrangian~(\ref{eq: final effective action}). After all, from a microscopic point of view, axial gauge symmetry is by no means generic. The axial gauge fields can be viewed as phonon-induced fluctuations in the average separations between ions, fluctuations that an elastic theory generically induces. A mass term for these phonons cannot be ruled out by symmetry. \subsection{ Adding one more fermion to the midgap states } \label{subsec:one-more-electron} All calculations for the fractional charge and exchange statistics done so far apply at zero chemical potential $\mu=0$, and at some finite staggered chemical potential $\mu^{\ }_{\mathrm{s}}\neq0$, assuming global vortex neutrality. Global vortex neutrality is imposed to bound the energy from above in the thermodynamic limit or if periodic boundary conditions are imposed. A staggered chemical potential is needed to lift the near degeneracy between the two single-particle midgap states that are exponentially localized about a vortex and an anti-vortex, respectively, in the bond-density-wave (Kekul\'e for graphene) order parameter $\Delta$, whose separation $r$ is much larger than $1/m$.
On the one hand, when $\mu^{\ }_{\mathrm{s}}=0$, the two single-particle midgap levels are, up to exponentially small corrections in $mr$, pinned to the band center $E=0$. In the thermodynamic limit, their occupancy when $\mu=0$ is then ambiguous. On the other hand, when $\mu^{\ }_{\mathrm{s}}\neq0$, the two single-particle midgap levels get pushed in opposite directions, one to $E>0$ and the other to $E<0$ (which one goes which way depends on the sign of $\mu^{\ }_{\mathrm{s}}$). The single-particle midgap level with $E<0$ is then occupied, the other empty, when $\mu=0$ and the results of Secs.% ~\ref{sec: Fractional charge quantum number} and% ~\ref{subsec; Unscreened vortices} for the fractional charge and exchange statistics, respectively, apply. We are going to prove that when $m>|\mu|>|\mu^{\ }_{\mathrm{s}}|$, so that the two single-particle midgap levels are either both empty or both occupied, the exchange statistics is that of semions. Suppose one adds one more electron to the Dirac sea (here defined to be the Fermi sea at $\mu=0$), filling the single-particle midgap state at $E>0$. What happens to the exchange statistics? The easiest way to answer this question is by realizing that the Berry phase accumulated by a many-body wave function that can be written as a single Slater determinant (the case at hand) is just the sum of the Berry phases for single-particle states. If we fill one more level, we only need to add the Berry phase due to that single-particle state to that of the filled Dirac sea that we already computed. The contribution from the extra level can be obtained as follows. (Here we focus on the case $\eta=0$. A generalization to $\eta\ne 0$ can be similarly formulated.) A single-particle midgap wave function is localized near a vortex, i.e., its spatial extent is of order $1/m$. Details on $\Delta$ for distances much larger than $1/m$ away do not matter.
Hence, when winding another far-away vortex around the first one, the local order parameter $\Delta$ in the vicinity of the first vortex just sees its phase change by $2\pi$. This allows us to focus solely on the problem of determining what happens to the single-particle midgap wave function as the phase of the order parameter near a vortex is rotated by $2\pi$. The solution for the single-particle midgap wave function when $\mu^{\ }_{\mathrm{s}}=0$ and in the Dirac approximation was obtained in Ref.~\onlinecite{Hou07} for the unscreened vortex and in Ref.~\onlinecite{Jackiw07} for the screened vortex. In both cases, the wave function picks up a phase of $\pi$ when the phase of $\Delta$ changes by $2\pi$. If $\mu^{\ }_{\mathrm{s}}\ne0$, the result remains the same, because while the midgap level moves with $\mu^{\ }_{\mathrm{s}}$ the wave function is independent of $\mu^{\ }_{\mathrm{s}}$ (the wave function has support in only one of the sublattices $\Lambda^{\ }_{\mathrm{A}}$ or $\Lambda^{\ }_{\mathrm{B}}$ of the underlying lattice model, so the finite value of the staggered chemical potential does not perturb the single-particle midgap wave function). In conclusion, occupying one additional single-particle fermion level adds a phase of $\pi$ to the many-body Berry phase when $\eta=0$. This means that the statistical angle shifts by $\Delta\Theta=\pm\pi/2$, the statistical angle for a semion, when one fermion is added (removed) to (from) the Dirac sea. \begin{figure} \includegraphics[angle=0,scale=0.7]{FIGURES/ldos-eta-pi-flux.eps} \caption{ (Color online) The induced fermionic charge of a quasiparticle, a unit vortex in $\Delta(\boldsymbol{r})$ with or without attachment of an axial gauge half flux in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r})$, as a function of $\eta$. 
This charge is computed from the spectral asymmetry of spinless fermions hopping on the square lattice with lattice spacing $\mathfrak{a}$ and with a magnetic flux of $\pi$ in units of the flux quantum $\phi^{\ }_{0}=hc/e$ threading each elementary plaquette in the \textit{static background} of a unit vortex in $\Delta(\boldsymbol{r})$ with or without attachment of an axial gauge half flux in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r})$ and for a uniform value of $\eta$. The square lattice is $100\times100$ in units of the lattice spacing and the area used for integrating the local density of states is a square of size $50\times 50$ centered around a unit vortex. The following parameters were chosen: the hopping $t=1$, the magnitude of $\Delta(\boldsymbol{r})$ on the boundary is $\Delta^{\ }_{0}(\infty)=0.5$ while the magnitude of the staggered chemical potential is $\mu^{\ }_{\mathrm{s}}=0.1$ ($m\approx0.51$). Each (red) filled circle is the induced fermionic charge of a unit vortex in $\Delta(\boldsymbol{r})$ to which an axial gauge half flux in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r})$ is attached. The (red) dashed line for $\eta\leq m$ represents the $Q=1/2$ line. Each (blue) filled triangle is the induced fermionic charge of an unscreened unit vortex in $\Delta(\boldsymbol{r})$, i.e., the vector axial flux in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r})$ vanishes everywhere. The (blue) solid line for $\eta\leq m$ represents $Q=0.402$, i.e., the predicted value from the field theory with the input parameters. When $m\gg\eta$, the induced fractional charge vanishes. A quantum phase transition at $m=\eta$, as measured by the jump in the induced fermionic charge, is smeared by finite-size effects.
} \label{fig:charge-eta-pi} \end{figure} \section{ Numerical calculation of the charge and Berry phase } \label{eq: Numerical calculation of the charge and Berry phase} We are going to present numerical results on the charge and statistics of unscreened vortices supported by the bond-density-wave (Kekul\'e for graphene) order parameter $\Delta$ in the presence of the compatible and competing order parameters (masses when space and time independent) $\mu^{\ }_\mathrm{s}$ and $\eta$, respectively. The dependence of the induced fermionic charge of vortices in $\Delta$ on the staggered chemical potential $\mu^{\ }_\mathrm{s}$ was studied in Refs.~\onlinecite{Chamon08a} and \onlinecite{Chamon08b} (see also Ref.~\onlinecite{Weeks08} when $\mu^{\ }_{\mathrm{s}}=0$). The following numerical results with the competing mass $\eta$ are new. Our studies have been carried out for the honeycomb lattice, which is of direct relevance to graphene, and the square lattice with the $\pi$-flux phase. Both lattice models yield consistent numerical results. In this paper, only the results for the $\pi$-flux phase are presented. The relevant technical details for our numerical calculations are summarized in Appendix% ~\ref{appsec: Numerical Berry phase in the single-particle approximation}. When comparing the numerical results with our analytical results, derived from the Dirac Hamiltonian~(\ref{eq: def quantum Hamiltonian}), which is the continuum limit of the linearized lattice Hamiltonian, two important issues arise. The first one is that \textit{all} band curvature effects, present in \textit{any} microscopic lattice model, are absent in the continuum model. Here we expect that, as long as the characteristic sizes over which the order parameters vary are large compared to the size of the unscreened vortex core, \textit{static} results obtained within the continuum approximation should capture some static long-wavelength properties of the lattice model.
This first expectation can be concretely addressed by the numerical studies of the induced fermionic charge of unscreened vortices below. The second issue that arises when one starts from a lattice model is the assumed axial gauge invariance of Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}). This issue is subtle and substantial. The Dirac Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}) has a local U(1)$\times$U(1) gauge symmetry, while this symmetry is absent in graphene, say. Although the vector axial gauge field $\boldsymbol{a}^{\ }_{5}$ is realized in graphene, say through acoustic phonons generating ripples, and thus couples in an axial-gauge-invariant way to the fermions in the linear approximation, its kinetic energy is by no means required to be gauge invariant. For example, the kinetic energy of $\boldsymbol{a}^{\ }_{5}$ is expected to contain the axial-gauge-symmetry-breaking mass term $|\boldsymbol{a}^{\ }_{5}|^{2}$. It is thus difficult to justify the axial gauge invariance of the Dirac Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}) in a lattice model as simple as graphene. We do not expect predictions based on Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}) that rely crucially on the dynamics of the axial vector gauge field $\boldsymbol{a}^{\ }_{5}$ to capture the corresponding low-energy and long-wavelength dynamical properties of graphene. We will verify this expectation with lattice computations that require the dynamics of the axial gauge field, for example the induced Berry phase as one moves a composite particle made of a vortex and an axial gauge flux around another composite particle. In Sec.~\ref{sec: Microscopic models}, we will present a lattice model that, by construction, has the desired local U(1)$\times$U(1) gauge symmetry.
This model can be used to compute numerically the statistical phases of unit bond-density-wave (Kekul\'e for graphene) vortices screened by axial gauge half fluxes and to verify that non-linearities in the many-body excitation spectrum do not affect the exchange statistics of vortices separated by distances much larger than their vortex core, i.e., this is one model that regularizes Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}) on the lattice. While the system presented in Sec.~\ref{sec: Microscopic models} serves by itself as a proof of principle that one can realize the local axial gauge invariance on the lattice, the computation of the exchange statistics of vortices in this lattice model is a computational challenge in lattice gauge theory, as opposed to the much simpler exercise in exact diagonalization for any non-interacting lattice model. For this reason, we now limit the numerical studies of the statistical phases to the simpler case when $\boldsymbol{a}^{\ }_{5}\to0$, i.e., the case of unscreened vortices. In effect, we are ignoring all many-body effects imposed by the local axial gauge invariance and thus treating the problem at the mean-field level. By comparing the charge obtained from the Aharonov-Bohm effect with that obtained directly from the local density of states, we will show that this approximation is qualitatively (but not quantitatively) justified for dynamical properties of bond-density-wave (Kekul\'e for graphene) vortices, whereas it fails dramatically for dynamical properties of the axial gauge half fluxes. \subsection{Static calculation of the charge} \label{subsec: Static calculation of the charge} We begin with the study of static properties, when the vortices or axial gauge half flux tubes are not moved, so that the dynamics of the axial gauge potential is not relevant. One physical quantity that can be studied in the static limit is the induced fermionic fractional charge. 
It is obtained by summing up the local fermionic density of states in a region of space that encloses the core of the vortex. In our numerical studies, a vortex is placed at the center of the square lattice system of size $100 \times 100$ in units of the lattice spacing $\mathfrak{a}$ while a flux of $\pi$ in units of the flux quantum $\phi^{\ }_{0}=hc/e$ threads each elementary plaquette. An area of integration, $50\times 50$, centered around the vortex is used for summing the local fermionic density of states. We fixed the strength of the bond-density-wave (Kekul\'e for graphene) order parameter to $\Delta=0.5$ and the staggered chemical potential to $\mu^{\ }_{\mathrm{s}}=0.1$. Figure~\ref{fig:charge-eta-pi} shows the value of the induced fermionic charge as a function of $\eta$ with and without the axial gauge half flux. Attaching an axial gauge half flux clearly pins the fractional charge to $1/2$. Notice that there is a (smoothed) step as the mass $\eta$ becomes comparable to $m$. This is the finite-size signature of a quantum phase transition at $|\eta|=m$. The results in Sec.% ~\ref{sec: Fractional charge quantum number} are displayed in Fig.~\ref{fig:charge-eta-pi}. They correspond to sharp step functions at the transition point $|\eta|=m$. The numerical results displayed in Fig.~\ref{fig:charge-eta-pi} are consistent with the analytical results~(\ref{eq: derivation of Q}), keeping in mind that the lattices studied are finite and thus quantum transitions are smeared. In this regard, notice that the agreement between the field-theory prediction and numerics is best away from the critical point $|\eta|=m$.
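The procedure just described, filling all negative-energy levels and summing the local density of states minus the uniform background over a region enclosing the defect, can be illustrated in a minimal setting. The Python sketch below is our simplified construction, not the $\pi$-flux computation reported above: it applies the same procedure to a one-dimensional dimerized chain with a sharp domain wall and a staggered chemical potential $\mu^{\ }_{\mathrm{s}}$ that lifts the midgap degeneracy. The continuum (Goldstone-Wilczek) prediction for the bound charge is $Q=\pi^{-1}\arctan(2\delta/\mu^{\ }_{\mathrm{s}})$ modulo an integer fixed by the occupancy of the midgap level, with $2\delta$ the dimerization-induced Dirac mass.

```python
import numpy as np

def soliton_charge(N=400, t=1.0, delta=0.1, mu_s=0.04, half_window=60):
    """Charge bound to a sharp dimerization domain wall, from summing the
    occupied local density of states minus the half-filled background
    over a window centered on the wall."""
    H = np.zeros((N, N))
    for i in range(N - 1):                 # dimerized hopping; the sign of
        s = 1.0 if i < N // 2 else -1.0    # the dimerization flips at the wall
        H[i, i + 1] = H[i + 1, i] = -(t + s * delta * (-1) ** i)
    for i in range(N):                     # staggered chemical potential
        H[i, i] = mu_s * (-1) ** i         # lifts the midgap degeneracy
    E, V = np.linalg.eigh(H)
    rho = (V[:, E < 0] ** 2).sum(axis=1)   # density of the filled sea
    w = slice(N // 2 - half_window, N // 2 + half_window)
    return (rho[w] - 0.5).sum()            # charge relative to half filling
```

With the default parameters the window charge approaches the continuum value $\pi^{-1}\arctan(5)\approx0.44$ (up to the occupancy-dependent integer and small lattice corrections), a 1D analogue of the smeared plateaus in Fig.~\ref{fig:charge-eta-pi}.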
\begin{figure} \includegraphics[angle=0,scale=0.6]{FIGURES/flux-tube.eps} \caption{ (Color online) Schematics of the \textit{static} magnetic flux tubes inserted to probe the induced fermionic charge of a quasiparticle, a mass vortex with or without the attachment of an axial gauge half flux, using the Aharonov-Bohm effect in the second set-up described in the text. (a) We insert one \textit{static} magnetic flux tube (colored in red) with the flux $\phi=l\phi^{\ }_{0}$ (the flux quantum is $\phi^{\ }_{0}=hc/e$) while a quasiparticle encircles dynamically this magnetic flux with the trajectory indicated by the directed loop (colored in blue). (b) We insert two \textit{static} magnetic flux tubes (colored in red) with the fluxes $\phi= \pm l\phi^{\ }_{0}$ while a quasiparticle encircles dynamically one and only one magnetic flux tube with the trajectory indicated by the directed loop (colored in blue). } \label{fig:flux-tube} \end{figure} \subsection{Dynamic calculation of the charge} \label{subsec: Dynamic calculation of the charge} A dynamical alternative to computing the induced fermionic charge through the integrated local density of states is the following. If we take a unit vortex in $\Delta$ (with or without an accompanying axial gauge half flux) around a circle of radius $r$ that encircles a magnetic flux, then an Aharonov-Bohm phase accumulates. The value of the charge induced near the vortex follows after matching the Berry phase computed numerically to the analytical value of the Aharonov-Bohm phase. We carry out this approach in two different set-ups. In the first, we apply a uniform magnetic field to the system, i.e., we fix a given electromagnetic flux \begin{equation} l \phi^{\ }_{0}, \qquad \hbox{$l\in\mathbb{R}$, $\phi^{\ }_{0}$ the quantum of flux}, \end{equation} per elementary unit cell on the lattice.
The Aharonov-Bohm phase $\gamma^{\ }_{\mathrm{AB}}$ that is picked up depends on the radius of the path since the encircled magnetic flux scales with the area. The Aharonov-Bohm phase in this case is thus given by \begin{equation} \label{eq:AB-phase} \gamma^{\ }_{\mathrm{AB}}= 2 \pi \times Q \times (\pi r^2) \times l. \end{equation} Here, $Q$ is the charge bound to the unit vortex in $\Delta$. A second set-up is shown in Fig.~\ref{fig:flux-tube}a. We insert an electromagnetic flux tube with flux \begin{equation} l \phi^{\ }_{0}, \qquad \hbox{$l\in\mathbb{R}$, $\phi^{\ }_{0}$ the quantum of flux}, \end{equation} through the elementary unit cell on the lattice at the center of the system. All other elementary unit cells are free of any magnetic flux. We then move the unit vortex in the bond-density-wave (Kekul\'e for graphene) order parameter around a path enclosing this flux. Notice that the Aharonov-Bohm phase $\gamma^{\ }_{\mathrm{AB}}$ is independent of the path as long as it strictly contains the magnetic flux tube, i.e., the elementary unit cell at the center of the lattice. It is expected to have the value \begin{equation} \label{eq:AB-phase-tube} \gamma^{\ }_{\mathrm{AB}}= 2 \pi \times Q \times l. \end{equation} We also study the case displayed in Fig.~\ref{fig:flux-tube}b. The reason for it is that we want to ensure that compensating fermionic charges on the edges of the sample do not contribute a phase as well. In the set-up of Fig.~\ref{fig:flux-tube}b, whatever happens with the fermionic edge charges does not lead to an Aharonov-Bohm phase, because their path would encircle (even if they move) the vanishing total flux \begin{equation} \phi= l\phi^{\ }_{0} - l\phi^{\ }_{0}=0. \end{equation} The results we obtain for the Berry phase when we wind the unscreened vortices around a closed path are shown in Fig.~\ref{fig:AB-phase-mu-dependence} for the case of the first set-up (uniform applied magnetic field).
We fix the parameters $\Delta=0.5$, $r=14.5$ (in a $56\times 56$ lattice) and $\phi=0.001 \phi^{\ }_{0}$ per plaquette, and plot the charge $Q$ versus the parameter $\mu^{\ }_\mathrm{s}/\Delta$. The blue dots and red dots are the numerical results for a vortex without the axial gauge half flux and with the axial gauge half flux, respectively, while the corresponding theoretical predictions from Refs.~\onlinecite{Chamon08a} and~\onlinecite{Chamon08b} are plotted as blue and red solid lines. Notice that the analytical and numerical results agree quite well for the case of vortices unscreened by axial gauge half fluxes. As anticipated, the analytical and numerical results are not consistent for the case of screened vortices. The reason is precisely what we highlighted in the beginning of this section, i.e., that the lattice model studied numerically in this section does not contain the U(1)$\times$U(1) symmetry, i.e., the axial gauge field dynamics present in the Dirac Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}). The same issue applies to the problem of computing the exchange statistics of pairs of screened vortices. We cannot study the statistical angle of screened vortices within the approach of this section. In Sec.~\ref{sec: Microscopic models}, we will present a microscopic model that does have the U(1)$\times$U(1) gauge symmetry. However, this model cannot be studied by simply computing Slater determinants (see Appendix~\ref{appsec: Numerical Berry phase in the single-particle approximation}) as has been done so far in this section. Before closing Sec.~\ref{subsec: Dynamic calculation of the charge}, let us mention that we have checked the results summarized by Fig.~\ref{fig:AB-phase-mu-dependence} that we obtained by applying a uniform magnetic field against those obtained with a single flux tube as in Fig.~\ref{fig:flux-tube}a or with two flux tubes as in Fig.~\ref{fig:flux-tube}b.
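The two ways of reading off the induced charge from a measured Berry phase can be summarized in a few lines. The sketch below (function names are ours) simply inverts Eqs.~(\ref{eq:AB-phase}) and~(\ref{eq:AB-phase-tube}); with the parameters used above ($r=14.5$, $l=0.001$), a charge $Q=1/2$ corresponds to a Berry phase of roughly $2.1$ radians in the uniform-field set-up.

```python
import numpy as np

def charge_from_uniform_field(gamma, r, l):
    # Invert Eq. (AB-phase): gamma = 2*pi * Q * (pi r^2) * l,
    # where l*phi_0 is the flux per elementary plaquette.
    return gamma / (2.0 * np.pi * np.pi * r**2 * l)

def charge_from_flux_tube(gamma, l):
    # Invert Eq. (AB-phase-tube): gamma = 2*pi * Q * l,
    # where l*phi_0 is the flux through the single tube.
    return gamma / (2.0 * np.pi * l)
```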
\begin{figure} \includegraphics[angle=0,scale=0.75]{FIGURES/Berry-phase-charge.eps} \caption{ (Color online) The induced fermionic charge of a quasiparticle, a unit vortex in $\Delta(\boldsymbol{r})$ with or without attachment of an axial gauge half flux in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r})$, as a function of the ratio $\mu^{\ }_{\mathrm{s}}/\Delta^{\ }_{0}(\infty)$. This charge is obtained by matching the numerical Berry phase picked up when a quasiparticle hops along the closed boundary of an area that encloses a magnetic flux in the uniform background of $\mu^{\ }_{\mathrm{s}}$ to the corresponding Aharonov-Bohm phase along the lines outlined in Sec.% ~\ref{subsec: Dynamic calculation of the charge} and Appendix% ~\ref{appsec: Numerical Berry phase in the single-particle approximation}. Hopping takes place on the square lattice with the lattice spacing $\mathfrak{a}$ and a magnetic flux of $\pi$ in units of the flux quantum $\phi^{\ }_{0}=hc/e$ per elementary plaquette, there being $56\times 56$ elementary plaquettes. The closed path used to compute the Berry phase is approximately circular with the radius $r=14.5$ in units of the lattice spacing. The following parameters were chosen: the hopping $t=1$, the magnitude of $\Delta(\boldsymbol{r},t)$ on the boundary is $\Delta^{\ }_{0}(\infty)=0.5$ while the flux is $\phi= 0.001 \phi^{\ }_{0}$ through each elementary plaquette. The (red) filled circles are the induced charges of a dynamical unit vortex in $\Delta(\boldsymbol{r},t)$ to which is also attached a dynamical axial gauge half flux in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r},t)$ as a function of $\mu^{\ }_{\mathrm{s}}/\Delta^{\ }_{0}(\infty)$. The (red) dashed line is the analytical charge $Q=1/2$. 
The (blue) filled triangles are the induced charges of a dynamical unit vortex in $\Delta(\boldsymbol{r},t)$ without the axial gauge half flux in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r})$ as a function of the ratio $\mu^{\ }_{\mathrm{s}}/\Delta^{\ }_{0}(\infty)$. The (blue) solid line is the induced charge computed from Eq.~(\ref{eq: derivation of Q}) as a function of the ratio $\mu^{\ }_{\mathrm{s}}/\Delta^{\ }_{0}(\infty)$. } \label{fig:AB-phase-mu-dependence} \end{figure} \subsection{ Fractional statistics for unscreened vortices } \label{subsec: Fractional statistics for unscreened vortices} We now present the numerical value of the statistical angle $\Theta$ in units of $\pi$ acquired under the exchange of two unit unscreened vortices in the bond-density-wave (Kekul\'e for graphene) order parameter $\Delta$, which we shall call quasiparticles from now on. We have computed numerically the Berry phase $\gamma$ in units of $\pi$ accumulated when a first dynamical quasiparticle moves along a trajectory that winds once around a second static quasiparticle as outlined in Appendix~\ref{appsec: Numerical Berry phase in the single-particle approximation}. The statistical angle $\Theta$ acquired under the exchange between these two quasiparticles is then \begin{equation} \Theta= \frac{\gamma}{2}. \end{equation} Here, as we do not impose dynamically the axial gauge symmetry at the microscopic level as presented in Appendix~\ref{appsec: Numerical Berry phase in the single-particle approximation} and unlike in Sec.~\ref{sec: Microscopic models}, we only treat unscreened vortices. We have verified that $\gamma$, when computed along the lines of Appendix~\ref{appsec: Numerical Berry phase in the single-particle approximation}, does not change when axial gauge half flux tubes are attached to the vortices.
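The numerical Berry phase entering $\Theta=\gamma/2$ is evaluated, as detailed in the appendix, from overlaps of Slater determinants along a discretized closed path. A minimal sketch of this discrete evaluation is given below (our notation; for a Slater determinant the overlap of two frames of occupied orbitals is the determinant of the matrix of single-particle overlaps). As a check, it is applied to a single occupied spin-1/2 orbital dragged around a cone, for which the Berry phase is minus half the subtended solid angle.

```python
import numpy as np

def discrete_berry_phase(frames):
    # gamma = -arg prod_k det(U_k^dag U_{k+1}) for a closed discretized
    # path of frames of occupied orbitals (last frame equals the first).
    prod = 1.0 + 0.0j
    for u, v in zip(frames[:-1], frames[1:]):
        prod *= np.linalg.det(u.conj().T @ v)
    return -np.angle(prod)

# Check: one occupied spin-1/2 orbital wound around a cone of polar angle
# theta accumulates gamma = -pi*(1 - cos(theta)) modulo 2*pi.
theta = 2.0 * np.pi / 3.0
phis = np.linspace(0.0, 2.0 * np.pi, 2001)
frames = [np.array([[np.cos(theta / 2.0)],
                    [np.sin(theta / 2.0) * np.exp(1j * p)]]) for p in phis]
gamma = discrete_berry_phase(frames)
```

The product of overlap determinants is gauge invariant for a closed loop, which is what makes this discretization well suited to the lattice computations of this section.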
To compare the microscopic exchange statistics with the one computed within field theory in Sec.% ~\ref{subsec; Unscreened vortices}, we restrict the numerical computation to the half-filled case. However, we will also test the prediction of Sec.~\ref{subsec:one-more-electron} by working with one spinless fermion more than (or less than) at half-filling. We will always take $m\approx 0.5$ in Eq.~(\ref{eq: def m}). Moreover, to limit finite-size effects, we assume that $\eta\ll m$, i.e., we work well below the transition point $|\eta|=m$, where the breaking of time-reversal symmetry (TRS) is weak. The $\eta$ dependence of the Berry phase $\gamma$ with $m$ fixed is shown in Fig.~\ref{fig: statistics a} for different values of the uniform staggered chemical potential $\mu^{\ }_{\mathrm{s}}$. The magnitude of the Berry phase is seen to be independent of whether the pair of quasiparticles have the same (filled circles) or opposite (star symbols) vorticities, but it does depend on $\mu^{\ }_{\mathrm{s}}$, i.e., on the induced fractional charge $Q$ given in Eq.~(\ref{eq: derivation of Q}). The $\eta$ dependence of $\gamma$ is linear, as predicted in Sec.~\ref{subsec; Unscreened vortices}, but with slopes deviating from the theoretical predictions, i.e., Eq.~(\ref{eq:statistical-angle-eta-smaller-m}), shown as the solid or dashed lines. The agreement between the Berry phase of the microscopic model and Eq.~(\ref{eq:statistical-angle-eta-smaller-m}) is thus qualitatively but not quantitatively good. \begin{figure} \includegraphics[angle=0,scale=0.6]{FIGURES/Berry-phase-eta-dependence.eps} \caption{ (Color online) Berry phase in units of $\pi$ as a function of $\eta/m\ll1$ for fixed $m$ acquired during the exchange of two unscreened quasiparticles, i.e., unit vortices in $\Delta(\boldsymbol{r},t)$ without the attachment of axial gauge half fluxes in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r},t)$.
Numerical computations along the lines outlined in Sec.% ~\ref{subsec: Dynamic calculation of the charge} and Appendix% ~\ref{appsec: Numerical Berry phase in the single-particle approximation} were performed for spinless fermions hopping on the square lattice with lattice spacing $\mathfrak{a}$ and with a magnetic flux of $\pi$ in units of the flux quantum $\phi^{\ }_{0}=hc/e$ threading each elementary plaquette in the \textit{dynamic background} of a unit vortex in $\Delta(\boldsymbol{r},t)$ without the attachment of an axial gauge half flux in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r},t)$ and for a uniform value of $\eta$. The square lattice is $72\times 72$ and the exchange path is approximately circular with the radius $r=18.5$ in units of the lattice spacing. The following parameters were chosen: the hopping $t=1$, $m=\sqrt{\Delta^{2}_{0}(\infty)+\mu^{2}_{\mathrm{s}}}\approx 0.51$ but with two different values of $\mu^{\ }_{\mathrm{s}}$, namely $0.1$ and $0.025$. Filled circles and solid lines represent the case when the two quasiparticles carry the same unit vorticity. Stars and dashed lines represent the case when the two quasiparticles carry opposite unit vorticities. Symbols are obtained numerically while the lines are the predictions from Sec.~\ref{subsec; Unscreened vortices}. } \label{fig: statistics a} \end{figure} The microscopic Berry phase $\gamma$ as a function of the ratio $\Delta/m$, which also parametrizes $Q(\Delta,\mu^{\ }_{\mathrm{s}})$, when $\eta=0.025$ is held fixed, is shown in Fig.~\ref{fig: statistics b} as filled circles when the quasiparticles carry the same vorticities or as stars when the quasiparticles carry opposite vorticities. As expected, exchanging a pair of quasiparticles with equal unit vorticities differs solely by a sign relative to exchanging a pair of quasiparticles with opposite unit vorticities.
The lines (solid when the quasiparticles have the same unit vorticity, dashed otherwise) are given by Eq.~(\ref{eq:statistical-angle-eta-smaller-m}). Evidently, the dependence on $Q\ll1/2$ of the microscopic exchange statistics is not captured by the field theory. As discussed in Sec.~\ref{subsec:one-more-electron}, when adding (removing) one fermion to (from) half-filling, the Berry phase accumulated by a complete winding of quasiparticles of opposite unit vorticities changes by $\pi$ for the case $\eta=0$. This extra phase is the response of the single-particle midgap states to varying the phase of $\Delta$ by $2\pi$. Numerically, this assertion is confirmed directly by computing the accumulated Berry phase and obtaining $\gamma=\pm \pi$ when filling or emptying one midgap state. In summary, comparison of the microscopic Berry phase accumulated by winding an unscreened quasiparticle around a static one with the field-theory computation of the exchange statistics in Sec.~\ref{subsec; Unscreened vortices} shows that: 1)~The microscopic Berry phase $\gamma$ (and consequently the microscopic exchange statistical angle $\Theta=\gamma/2$) varies continuously as a function of $\eta$ and in a linear fashion for small $\eta$, in good agreement with the field-theory results. 2)~The slope $\gamma/\eta$ shows a monotonic dependence on the ratio $\Delta/m$, which is not in good quantitative agreement with the field-theory results. 3)~The magnitude $|\gamma|$ is independent of the relative sign of the quasiparticles' vorticities. This is expected, since a vortex and its anti-vortex can annihilate. Consequently, winding a third vortex around a vortex anti-vortex pair must accumulate a vanishing Berry phase. 4)~Microscopic semion statistics $\Theta=\pm\pi/2$ is obtained when adding (removing) one fermion to (from) the half-filled system, in agreement with the prediction from the continuum theory.
\begin{figure} \includegraphics[angle=0,scale=0.55]{FIGURES/Berry-Delta-dependence.eps} \caption{ (Color online) Berry phase in units of $\pi$ as a function of $\Delta^{\ }_{0}(\infty)/m$ for fixed $m$ and $\eta\ll m$ acquired during the exchange of two unscreened quasiparticles, i.e., unit vortices in $\Delta(\boldsymbol{r},t)$ without the attachment of axial gauge half fluxes in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r},t)$. Numerical computations along the lines outlined in Sec.% ~\ref{subsec: Dynamic calculation of the charge} and Appendix% ~\ref{appsec: Numerical Berry phase in the single-particle approximation} were performed for spinless fermions hopping on the square lattice with lattice spacing $\mathfrak{a}$ and with a magnetic flux of $\pi$ in units of the flux quantum $\phi^{\ }_{0}=hc/e$ threading each elementary plaquette in the \textit{dynamic background} of a unit vortex in $\Delta(\boldsymbol{r},t)$ without the attachment of an axial gauge half flux in $\boldsymbol{a}^{\ }_{5}(\boldsymbol{r},t)$ and for a uniform value of $\eta$. The square lattice is $72\times 72$ and the exchange path is approximately circular with the radius $r=18.5$ in units of the lattice spacing. The following parameters were chosen: the hopping $t=1$, $m=\sqrt{\Delta^{2}_{0}(\infty)+\mu^{2}_{\mathrm{s}}}=0.51$ and $\eta=0.025$. Filled circles and solid lines represent the case when the two quasiparticles carry the same unit vorticity. Stars and dashed lines represent the case when the two quasiparticles carry opposite unit vorticities. Symbols are obtained numerically while the lines are the predictions from Sec.~\ref{subsec; Unscreened vortices}.
} \label{fig: statistics b} \end{figure} \section{ Microscopic model } \label{sec: Microscopic models} We have seen in Sec.~\ref{eq: Numerical calculation of the charge and Berry phase} that the fractional charge induced by an axial gauge half flux in $\boldsymbol{a}^{\ }_{5}$ cannot be measured dynamically from the Aharonov-Bohm phase inferred from the numerical computation of a Berry phase. This is so because the local axial gauge symmetry in the continuum Hamiltonian~(\ref{eq: def quantum Hamiltonian}) is not present in the lattice model used in Sec.~\ref{eq: Numerical calculation of the charge and Berry phase}. Thus, there is a dynamical contribution that is missing and that cannot be captured by the simple models of one species of fermions hopping either on the honeycomb or $\pi$-flux lattices used in Sec.~\ref{eq: Numerical calculation of the charge and Berry phase}. For the same reason, we could not obtain numerically the exchange statistics in the case when the vortices are screened by the axial gauge potential, since the exchange of the topological defects necessarily acquires a dynamical contribution from $\boldsymbol{a}^{\ }_{5}$. We now construct a lattice model sharing the \textit{same} local U(1)$\times$U(1) symmetry and the \textit{same} particle content as the dynamical theory (\ref{eq: def Z}). The predictions for the exchange statistics of screened vortices done in Sec.~\ref{subsec: Screened vortices} should be captured by this lattice model. Unfortunately, we cannot verify this claim, for the largest system sizes that we could treat numerically are of the order of the vortex core. Consider a square lattice $\Lambda$ whose sites we denote with the Latin letters $i,j,k$, and $l$. We denote with $\hat{1}\equiv\hat{\mathbf x},\hat{2}\equiv\hat {\mathbf y}$ the two orthonormal vectors spanning the square lattice $\Lambda$ (and we will index these two vectors as $\hat\mu=\hat{1},\hat{2}$, for $\mu=1,2$). 
Links (or bonds) on the square lattice between nearest-neighbor sites $i$ and $j$ are labeled by $\langle ij\rangle$ (or simply by $ij$ when used as an index to a field defined on the links). We denote by $\Box^{\ }_{ijkl}$ the square plaquette with the corners $i$, $j$, $k$, and $l$. We define four sets of operators. There are the bosonic operators $\hat{A}^{\ }_{ij}$ and $\hat{A}^{\ }_{5ij}$ living on the links of the square lattice $\Lambda$. There are the bosonic operators $\hat{\phi}^{\ }_{i}$ and the fermionic operators $\hat{\psi}^{\ }_{i}$ living on the sites. The spinor-valued operator $\hat{\psi}^{\ }_{i}$ has here four components on which the $4\times4$ matrices defined in Eqs.~(2.1d) and (2.1e) act. These four sets of operators, together with their canonical conjugate operators, satisfy the following relations: \begin{subequations} \label{eq:alg-AA5phipsi} \begin{equation} \begin{split} & \hat{A}^{\dag}_{kl}= \hat{A}^{\ }_{kl}= -\hat{A}^{\ }_{lk}, \quad \hat{L}^{\dag}_{ij}= \hat{L}^{\ }_{ij}= -\hat{L}^{\ }_{ji}, \\ & \left[ \hat{L}^{\ }_{ij}, \hat{A}^{\ }_{kl} \right]= -{i}\left( \delta^{\ }_{ik} \delta^{\ }_{jl} - \delta^{\ }_{il} \delta^{\ }_{jk} \right), \end{split} \end{equation} \begin{equation} \begin{split} & \hat{A}^{\dag}_{5kl}= \hat{A}^{\ }_{5kl}= -\hat{A}^{\ }_{5lk}, \quad \hat{L}^{\dag}_{5ij}= \hat{L}^{\ }_{5ij}= -\hat{L}^{\ }_{5ji}, \\ & \left[ \hat{L}^{\ }_{5ij}, \hat{A}^{\ }_{5kl} \right]= -{i}\left( \delta^{\ }_{ik} \delta^{\ }_{jl} - \delta^{\ }_{il} \delta^{\ }_{jk} \right), \end{split} \end{equation} \begin{equation} \begin{split} &\hat{\phi}^{\dag}_{j}= \hat{\phi}^{\ }_{j}, \quad \hat{\Pi}^{\dag}_{i}=\hat{\Pi}^{\ }_{i}, \quad \left[ \hat{\Pi}^{\ }_{i}, \hat{\phi}^{\ }_{j} \right]= -{i}\, \delta^{\ }_{ij}, \end{split} \end{equation} and, finally, \begin{equation} \begin{split} & \left\{ \hat{\psi}^{\ }_{i}, \hat{\psi}^{\dag}_{j} \right\}= \openone^{\ }_{4}\; \delta^{\ }_{ij}, \qquad \left\{ \hat{\psi}^{\dag}_{i}, \hat{\psi}^{\dag}_{j} 
\right\}= \left\{ \hat{\psi}^{\ }_{j}, \hat{\psi}^{\ }_{i} \right\}=0, \label{eq: mathcal{F}psi} \end{split} \end{equation} with the \textit{equal-time global constraint} (half-filling constraint) \begin{equation} |\Lambda|^{-1} \sum_{i\in\Lambda} \hat{\psi}^{\dag}_{i}\hat{\psi}^{\ }_{i}=2. \label{eq: half-filling constraint} \end{equation} \end{subequations} (Since we are working with four flavors of fermions, half-filling means an average of 2 particles per site.) We define the lattice model by the quantum Hamiltonian \begin{subequations} \label{eq: def lattice model with U(1xU(1) sym} \begin{equation} \hat{H}:= \hat{H}^{\ }_{g} + \hat{H}^{\ }_{g^{\ }_{5}} + \hat{H}^{\ }_{J} + \hat{H}^{\ }_{t} + \hat{H}^{\ }_{t'} + \hat{H}^{\ }_{m}. \end{equation} Here, \begin{equation} \hat{H}^{\ }_{g}:= \frac{g^{2}}{2} \sum_{\langle ij\rangle} \hat{L}^{2}_{ij} - \frac{1}{g^{2}} \sum_{\Box^{\ }_{ijkl}} \mathrm{Re}\, e^{ {i} \left( \hat{A}^{\ }_{ij} + \hat{A}^{\ }_{jk} + \hat{A}^{\ }_{kl} + \hat{A}^{\ }_{li} \right) } \end{equation} describes a U(1) lattice gauge theory with gauge coupling $g^{2}$, \begin{equation} \hat{H}^{\ }_{g^{\ }_{5}}:= \frac{g^{2}_{5}}{2} \sum_{\langle ij\rangle} \hat{L}^{2}_{5ij} - \frac{1}{g^{2}_{5}} \sum_{\Box^{\ }_{ijkl}} \mathrm{Re}\, e^{ {i} \left( \hat{A}^{\ }_{5 ij} + \hat{A}^{\ }_{5 jk} + \hat{A}^{\ }_{5 kl} + \hat{A}^{\ }_{5 li} \right) } \end{equation} describes another U(1) lattice gauge theory with gauge coupling $g^{2}_{5}$, \begin{equation} \hat{H}^{\ }_{J}:= \frac{J^{2}}{2} \sum_{i\in\Lambda} \hat{\Pi}^{2}_{i} - \frac{1}{J^{2}} \sum_{\langle ij\rangle} \left( e^{ + {i} \left( \hat{\phi}^{\ }_{i} - \hat{\phi}^{\ }_{j} \right) + 2 {i} \hat{A}^{\ }_{5 ij} } + \hbox{H.c.} \right) \end{equation} describes a quantum rotor (XY) model with coupling $J^{2}$, and \begin{equation} \begin{split} \hat{H}^{\ }_{t}:=&\, {i} t \sum_{i\in\Lambda} \sum_{{\mu}=1,2} \hat{\psi}^{\dag}_{i} \alpha^{\ }_{\mu} e^{ {i} \hat{A}^{\ }_{i(i+\hat{\mu})} + {i} \gamma^{\ }_{5}\; \hat{A}^{\ }_{5i(i+\hat{\mu})} } \;\hat{\psi}^{\ }_{(i+\hat{\mu})} \\ & + \hbox{H.c.} \end{split} \end{equation} describes the nearest-neighbor hopping, with real-valued amplitude $t$, of 4 independent fermions per site. So far, there are 4 non-equivalent Dirac points at half-filling, which are located at $\boldsymbol{k}=(0,0)$, $(0,\pi)$, $(\pi,0)$, and $(\pi,\pi)$. This is why we have added the term \begin{equation} \begin{split} & \hat{H}^{\ }_{t^{\prime}}:= t^{\prime} \sum_{i\in\Lambda} \left[ \hat{\psi}^{\dag}_{i}\; 4 R\; \hat{\psi}^{\ }_{i} \vphantom{\sum_{i\in\Lambda}} \right. \\ & \, \left. - \sum_{{\mu}=1,2} \left( \hat{\psi}^{\dag}_{i}\; R\; e^{ {i} \hat{A}^{\ }_{i(i+\hat{\mu})} + {i} \gamma^{\ }_{5} \hat{A}^{\ }_{5i(i+\hat{\mu})} } \; \hat{\psi}^{\ }_{(i+\hat{\mu})} + \hbox{H.c.} \right) \right] \end{split} \label{eq: Wilson t' mass} \end{equation} that opens a gap of order $t'$ at the points $\boldsymbol{k}=(0,\pi),(\pi,0),(\pi,\pi)$, thus leaving $\boldsymbol{k}=(0,0)$ as the sole Dirac point. This scheme is precisely Wilson's procedure used to overcome the doubling problem in lattice gauge theories.~\cite{Wilson77} An important comment is in order, however. One reason why this prescription is not fully satisfying in lattice gauge theories is that any mismatch between the first and second terms of Eq.~(\ref{eq: Wilson t' mass}) leads to a gap at $\boldsymbol{k}=(0,0)$ as well, i.e., fine-tuning is needed to achieve the correct particle content. Here, this is not a problem, because we are interested precisely in systems in which such a gap is present. Notice in that regard that the gap at $\boldsymbol{k}=(0,0)$ that arises from a small mismatch between these two terms (a small fraction of $t'$) is much smaller than the one at the edges of the Brillouin zone (order $t'$).
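The gap structure described above is easy to verify numerically. The following sketch (Python with NumPy; not part of the model definition) diagonalizes the Bloch Hamiltonian of $\hat{H}^{\ }_{t}+\hat{H}^{\ }_{t'}$ at vanishing gauge fields, $H(\boldsymbol{k})= -2t\left(\sin k^{\ }_{1}\,\alpha^{\ }_{1}+\sin k^{\ }_{2}\,\alpha^{\ }_{2}\right) +2t'\left(2-\cos k^{\ }_{1}-\cos k^{\ }_{2}\right)R$, at the four would-be Dirac points. The explicit $4\times4$ representation of $\alpha^{\ }_{1}$, $\alpha^{\ }_{2}$, and $R$ used below is an assumption, chosen only so that the three matrices pairwise anticommute and square to one; the actual matrices are fixed in Eqs.~(2.1d) and (2.1e), outside this section, and the spectrum does not depend on the choice.

```python
import numpy as np

# Identity and Pauli matrices.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Hypothetical 4x4 representation of alpha_1, alpha_2, R: any three
# pairwise anticommuting matrices squaring to one give the same spectrum.
alpha1, alpha2, R = np.kron(s3, s1), np.kron(s0, s2), np.kron(s0, s3)

def bloch(k, t=1.0, tp=0.3):
    """Bloch Hamiltonian of H_t + H_t' at vanishing gauge fields."""
    k1, k2 = k
    return (-2*t*(np.sin(k1)*alpha1 + np.sin(k2)*alpha2)
            + 2*tp*(2 - np.cos(k1) - np.cos(k2))*R)

for k in [(0, 0), (0, np.pi), (np.pi, 0), (np.pi, np.pi)]:
    # Smallest |eigenvalue|: 0 at (0,0), 4t' at (0,pi) and (pi,0), 8t' at (pi,pi).
    print(k, np.min(np.abs(np.linalg.eigvalsh(bloch(k)))))
```

The smallest eigenvalue in magnitude vanishes only at $\boldsymbol{k}=(0,0)$ and grows to $4t'$ and $8t'$ at the zone-boundary points, confirming that the Wilson term removes three of the four Dirac points.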
Indeed, such a term due to a mismatch is actually part of the final term that we consider in the Hamiltonian, namely \begin{equation} \hat{H}^{\ }_{m}:= \sum_{i\in\Lambda} \hat{\psi}^{\dag}_{i} \left( \mu^{\ }_{\mathrm{s}} R + \Delta^{\ }_{\mathrm{k}} \beta e^{ {i} \gamma^{\ }_{5} \hat{\phi}^{\ }_{i} } + {i} \eta \alpha^{\ }_{1} \alpha^{\ }_{2} \right) \hat{\psi}^{\ }_{i}. \end{equation} \end{subequations} This contribution does indeed open a gap at the remaining Dirac point at $\boldsymbol{k}=(0,0)$. For any smooth and static boson background, the continuum limit of Hamiltonian~(\ref{eq: def lattice model with U(1xU(1) sym}) upon linearization of the fermion spectrum at the two non-equivalent Dirac points at half-filling is given by Eq.~(2.1), as is also the case with the fermion spectrum of graphene restricted to spinless fermions hopping with sufficiently smooth modulations of the hopping amplitudes. In contrast to graphene with spinless fermions, Hamiltonian~(\ref{eq: def lattice model with U(1xU(1) sym}) is invariant under the local U(1)$\times$U(1) gauge transformation \begin{subequations} \label{eq: U(1)xU(1) local gauge symmetry} \begin{equation} \begin{split} & \hat{L}^{\ }_{ij} \to \hat{L}^{\ }_{ij}, \qquad \hat{A}^{\ }_{ij} \to \hat{A}^{\ }_{ij} - \left( \chi^{\ }_{i} - \chi^{\ }_{j} \right), \\ & \hat{L}^{\ }_{5ij} \to \hat{L}^{\ }_{5ij}, \qquad \hat{A}^{\ }_{5 ij} \to \hat{A}^{\ }_{5 ij} - \left( \xi^{\ }_{i} - \xi^{\ }_{j} \right), \\ & \hat{\Pi}^{\ }_{i} \to \hat{\Pi}^{\ }_{i}, \qquad \hat{\phi}^{\ }_{i} \to \hat{\phi}^{\ }_{i}+2\xi^{\ }_{i}, \\ & \hat{\psi}^{\dag}_{i} \to \hat{\psi}^{\dag}_{i} e^{ + {i}\chi^{\ }_{i} + {i} \gamma^{\ }_{5} \xi^{\ }_{i} }, \qquad \hat{\psi}^{\ }_{j} \to \hat{\psi}^{\ }_{j} e^{ - {i}\chi^{\ }_{j} - {i} \gamma^{\ }_{5} \xi^{\ }_{j} }, \end{split} \end{equation} generated by \begin{equation} \hat{H}\to \hat{G}(\chi,\xi)\;\hat{H}\;\hat{G}^{-1}(\chi,\xi) \end{equation} with \begin{equation} \begin{split} \hat{G}(\chi,\xi):=&\,
\prod\limits_{i\in\Lambda} \exp \left[ {i} \left( \hat{\psi}^{\dag}_{i} \hat{\psi}^{\ }_{i} + \sum\limits_{{\mu}=1,2} \hat{L}^{\ }_{i(i+\hat{\mu})} \right) \chi^{\ }_{i} \right. \\ &\, \left. + {i} \left( \hat{\psi}^{\dag}_{i} \gamma^{\ }_{5} \hat{\psi}^{\ }_{i} + 2 \hat{\Pi}^{\ }_{i} + \sum\limits_{{\mu}=1,2} \hat{L}^{\ }_{5i(i+\hat{\mu})} \right) \xi^{\ }_{i} \right], \end{split} \end{equation} \end{subequations} where $\chi^{\ }_{i}$ and $\xi^{\ }_{i}$ are arbitrary real-valued numbers. The physical subspace is the set of gauge invariant states, i.e., states that are tensor products of states in the Fock space generated by the algebra of Eqs.~(\ref{eq:alg-AA5phipsi}), \begin{subequations} \label{eq: def gauge inv states} \begin{equation} \begin{split} |\Psi\rangle\equiv&\, |\Psi^{\ }_{A}\rangle \otimes |\Psi^{\ }_{A^{\ }_{5}}\rangle \otimes |\Psi^{\ }_{\phi}\rangle \otimes |\Psi^{\ }_{\psi}\rangle \end{split} \end{equation} such that the Gauss law holds globally, \begin{equation} \hat{G}^{-1}(\chi,\xi)\; |\Psi\rangle= |\Psi\rangle \end{equation} for all real-valued functions $\chi$ and $\xi$, or, equivalently, locally \begin{equation} \begin{split} & 0= \left( \hat{L}^{\ }_{i(i+\hat{1})} - \hat{L}^{\ }_{i(i-\hat{1})} + \hat{L}^{\ }_{i(i+\hat{2})} - \hat{L}^{\ }_{i(i-\hat{2})} + \hat{\psi}^{\dag}_{i} \hat{\psi}^{\ }_{i} \right) |\Psi\rangle, \\ & 0= \left( \hat{L}^{\ }_{5i(i+\hat{1})} - \hat{L}^{\ }_{5i(i-\hat{1})} + \hat{L}^{\ }_{5i(i+\hat{2})} - \hat{L}^{\ }_{5i(i-\hat{2})} \right. \\ &\hphantom{0=} \left. + \hat{\psi}^{\dag}_{i} \gamma^{\ }_{5} \hat{\psi}^{\ }_{i} + 2 \hat{\Pi}^{\ }_{i} \right) |\Psi\rangle, \end{split} \end{equation} \end{subequations} for any $i\in\Lambda$. We denote by $|\Psi^{\ }_{i,j}\rangle$ a gauge invariant state~(\ref{eq: def gauge inv states}) with two fractional charges localized around sites $i$ and $j$, respectively.
The statistical phase $\Theta$ induced by the physical process by which two fractional charges are exchanged is given by the difference between two Berry phases,~\cite{Levin03} \begin{equation} \begin{split} \Theta:=&\, \frac{1}{2} \mathrm{arg}\, \prod_{i^{\ }_{\iota}\in\mathcal{P}}^{j\subset\mathcal{P}} \left\langle \Psi^{\ }_{i^{\ }_{\iota+1},j} \right| \hat{H} \left| \Psi^{\ }_{i^{\ }_{\iota},j} \right\rangle \\ &\, - \frac{1}{2} \mathrm{arg}\, \prod_{i^{\ }_{\iota}\in\mathcal{P}}^{j\subset\bar{\mathcal{P}}} \left\langle \Psi^{\ }_{i^{\ }_{\iota+1},j} \right| \hat{H} \left| \Psi^{\ }_{i^{\ }_{\iota},j} \right\rangle. \end{split} \label{eq: U(1)xU(1) latice Berry phase} \end{equation} For both Berry phases, one fractional charge hops along the closed path $\mathcal{P}=\{i^{\ }_{\iota}\}$, while the other fractional charge is static. For the former Berry phase, $j$ is located inside the area bounded by $\mathcal{P}$, a choice that we denote by $j\subset\mathcal{P}$. For the latter Berry phase, $j$ is located outside the area bounded by $\mathcal{P}$, a choice that we denote by $j\subset\bar{\mathcal{P}}$. The dimensionality of the gauge-invariant Hilbert space scales with the dimensionality of the fermionic Hilbert space% ~(\ref{eq: mathcal{F}psi}), which itself scales exponentially fast with the number of sites. Given the half-filling constraint% ~(\ref{eq: half-filling constraint}), this limits the numerical evaluation of the right-hand side of% ~(\ref{eq: U(1)xU(1) latice Berry phase}) to lattices with linear dimensions of the order of the core size $1/m$ of the defects, i.e., on distances much too short for the right-hand side of Eq.~(\ref{eq: U(1)xU(1) latice Berry phase}) to be interpreted as the statistical angle of point-like quasiparticles. 
If we are willing to give up the local U(1)$\times$U(1) gauge invariance% ~(\ref{eq: U(1)xU(1) local gauge symmetry}), i.e., the strongly correlated nature of the problem, we can compute the contribution to the statistical phase arising from the fermion hopping. Indeed, the problem then reduces to a single-particle one for which the dimensionality of the relevant Hilbert spaces only scales linearly with the number of sites. We stress that this contribution alone violates the local U(1)$\times$U(1) gauge invariance. \begin{widetext} \begin{table*} \caption{ \label{tab: 36 masses} The 36 mass matrices with particle-hole symmetry (PHS), see Eq.~(\ref{eq: def PHS}), for the massless Dirac Hamiltonian $\mathcal{K}^{\ }_{0}$ from Eq.~(\ref{eq: def K0}) are of the form% ~(\ref{eq: 256 matrices}) and anticommute with $\mathcal{K}^{\ }_{0}$. Each mass matrix can be assigned an order parameter for the underlying microscopic model, here graphene or the square lattice with $\pi$-flux phase. The Latin subindex of the order parameter's name corresponds to the preferred quantization axis in SU(2) spin space. The pairs of numeral subindices 02 and 32 are used to distinguish the two unit vectors spanning two-dimensional space. Each mass matrix preserves or breaks time-reversal symmetry (TRS), see Eq.~(\ref{eq: def TRS}), spin-rotation symmetry (SRS), see Eq.~(\ref{eq: def SRS}), and sublattice symmetry (SLS), see Eq.~(\ref{eq: def SLS}). To any of the 36 mass matrices corresponds a ``partner'' mass matrix obtained through the involutive transformation% ~(\ref{eq: def partner mass matrix}) denoted $C$.
} \begin{ruledtabular} \begin{tabular}{llllllll} Mass matrix & Order parameter & TRS & SRS & SLS & Partner by $C$ & Order parameter by $C$& $C$ invariant\\ &&&&&&\\ $X^{\ }_{3010}$& {ReVBS} & {True} & {True} & {True} &$X^{\ }_{3010}$& {ReVBS} &{True} \\ $X^{\ }_{0020}$& {ImVBS} & {True} & {True} & {True} &$X^{\ }_{0020}$& {ImVBS} &{True} \\ $X^{\ }_{3033}$& {CDW} & {True} & {True} & {False}&$X^{\ }_{3333}$& {N\'eel}$^{\ }_{z}$ &{False}\\ &&&&&&\\ $X^{\ }_{3003}$& {QHE} & {False}& {True} & {False}&$X^{\ }_{3003}$& {QHE} &{True} \\ &&&&&&\\ $X^{\ }_{3110}$& {ReVBS}$^{\ }_{x}$& {False}& {False}& {True} &$X^{\ }_{2132}$& {ImTSC}$^{\ }_{32z}$ &{False}\\ $X^{\ }_{0210}$& {ReVBS}$^{\ }_{y}$& {False}& {False}& {True} &$X^{\ }_{1132}$& {ReTSC}$^{\ }_{32z}$ &{False}\\ $X^{\ }_{3310}$& {ReVBS}$^{\ }_{z}$& {False}& {False}& {True} &$X^{\ }_{3310}$& {ReVBS}$^{\ }_{z}$ &{True} \\ &&&&&&\\ $X^{\ }_{0120}$& {ImVBS}$^{\ }_{x}$& {False}& {False}& {True} &$X^{\ }_{1102}$& {ReTSC}$^{\ }_{02z}$ &{False}\\ $X^{\ }_{3220}$& {ImVBS}$^{\ }_{y}$& {False}& {False}& {True} &$X^{\ }_{2102}$& {ImTSC}$^{\ }_{02z}$ &{False}\\ $X^{\ }_{0320}$& {ImVBS}$^{\ }_{z}$& {False}& {False}& {True} &$X^{\ }_{0320}$& {ImVBS}$^{\ }_{z}$ &{True} \\ &&&&&&\\ $X^{\ }_{3103}$& {QSHE}$^{\ }_{x}$& {True} & {False}& {False}&$X^{\ }_{2121}$& {ImTSC}$^{\ }_{z}$ &{False}\\ $X^{\ }_{0203}$& {QSHE}$^{\ }_{y}$& {True} & {False}& {False}&$X^{\ }_{1121}$& {ReTSC}$^{\ }_{z}$ &{False}\\ $X^{\ }_{3303}$& {QSHE}$^{\ }_{z}$& {True} & {False}& {False}&$X^{\ }_{3303}$& {QSHE}$^{\ }_{z} $ &{True} \\ &&&&&&\\ $X^{\ }_{3133}$& {N\'eel}$^{\ }_{x}$& {False}& {False}& {False}&$X^{\ }_{2211}$& {ReSSC} &{False}\\ $X^{\ }_{0233}$& {N\'eel}$^{\ }_{y}$& {False}& {False}& {False}&$X^{\ }_{1211}$& {ImSSC} &{False}\\ $X^{\ }_{3333}$& {N\'eel}$^{\ }_{z}$& {False}& {False}& {False}&$X^{\ }_{3033}$& {CDW} &{False}\\ &&&&&&\\ $X^{\ }_{2211}$& {ReSSC} & {True} & {True} & {False}&$X^{\ }_{3133}$& {N\'eel}$^{\ }_{x}$ &{False}\\ $X^{\ }_{1211}$& {ImSSC} 
& {False}& {True} & {False}&$X^{\ }_{0233}$& {N\'eel}$^{\ }_{y}$ &{False}\\ &&&&&&\\ $X^{\ }_{1002}$& {ReTSC}$^{\ }_{02y}$& {True} & {False}& {True} &$X^{\ }_{1002}$& {ReTSC}$^{\ }_{02y}$ &{True} \\ $X^{\ }_{2002}$& {ImTSC}$^{\ }_{02y}$& {False}& {False}& {True} &$X^{\ }_{2302}$& {ImTSC}$^{\ }_{02x}$ &{False}\\ $X^{\ }_{1102}$& {ReTSC}$^{\ }_{02z}$& {False}& {False}& {True} &$X^{\ }_{0120}$& {ImVBS}$^{\ }_{x}$ &{False}\\ $X^{\ }_{2102}$& {ImTSC}$^{\ }_{02z}$& {True} & {False}& {True} &$X^{\ }_{3220}$& {ImVBS}$^{\ }_{y}$ &{False}\\ $X^{\ }_{1302}$& {ReTSC}$^{\ }_{02x}$& {False}& {False}& {True} &$X^{\ }_{1302}$& {ReTSC}$^{\ }_{02x}$ &{True} \\ $X^{\ }_{2302}$& {ImTSC}$^{\ }_{02x}$& {True} & {False}& {True} &$X^{\ }_{2002}$& {ImTSC}$^{\ }_{02y}$ &{False}\\ &&&&&&\\ $X^{\ }_{1032}$& {ReTSC}$^{\ }_{32y}$& {False}& {False}& {True} &$X^{\ }_{1332}$& {ReTSC}$^{\ }_{32x}$ &{False}\\ $X^{\ }_{2032}$& {ImTSC}$^{\ }_{32y}$& {True} & {False}& {True} &$X^{\ }_{2032}$& {ImTSC}$^{\ }_{32y}$ &{True} \\ $X^{\ }_{1132}$& {ReTSC}$^{\ }_{32z}$& {True} & {False}& {True} &$X^{\ }_{0210}$& {ReVBS}$^{\ }_{y}$ &{False}\\ $X^{\ }_{2132}$& {ImTSC}$^{\ }_{32z}$& {False}& {False}& {True} &$X^{\ }_{3110}$& {ReVBS}$^{\ }_{x}$ &{False}\\ $X^{\ }_{1332}$& {ReTSC}$^{\ }_{32x}$& {True} & {False}& {True} &$X^{\ }_{1032}$& {ReTSC}$^{\ }_{32y}$ &{False}\\ $X^{\ }_{2332}$& {ImTSC}$^{\ }_{32x}$& {False}& {False}& {True} &$X^{\ }_{2332}$& {ImTSC}$^{\ }_{32x}$ &{True} \\ &&&&&&\\ $X^{\ }_{1021}$& {ReTSC}$^{\ }_{y}$& {True} & {False}& {False}&$X^{\ }_{1321}$& {ReTSC}$^{\ }_{x}$ &{False}\\ $X^{\ }_{2021}$& {ImTSC}$^{\ }_{y}$& {False}& {False}& {False}&$X^{\ }_{2021}$& {ImTSC}$^{\ }_{y}$ &{True} \\ $X^{\ }_{1121}$& {ReTSC}$^{\ }_{z}$& {False}& {False}& {False}&$X^{\ }_{0203}$& {QSHE}$^{\ }_{y}$ &{False}\\ $X^{\ }_{2121}$& {ImTSC}$^{\ }_{z}$& {True} & {False}& {False}&$X^{\ }_{3103}$& {QSHE}$^{\ }_{x}$ &{False}\\ $X^{\ }_{1321}$& {ReTSC}$^{\ }_{x}$& {False}& {False}& {False}&$X^{\ }_{1021}$& {ReTSC}$^{\ }_{y}$ 
&{False}\\ $X^{\ }_{2321}$& {ImTSC}$^{\ }_{x}$& {True} & {False}& {False}&$X^{\ }_{2321}$& {ImTSC}$^{\ }_{x}$ &{True} \end{tabular} \end{ruledtabular} \end{table*} \end{widetext} \section{ More species of fermions -- classification of all masses in graphene and $\pi$-flux phase } \label{sec: Reinstating the electron spin} So far we have ignored the spin-1/2 quantum number of electrons. As long as spin is ignored, in the linear approximation% ~(2.1) of graphene restricted to spinless fermions, say, $\hat{H}^{\ }_{\mathrm{scalar}}$ exhausts all possible symmetry-breaking instabilities with a local order parameter compatible with charge conservation. The local order parameter for a charge-density wave that breaks the sublattice symmetry but preserves the time-reversal symmetry is the real-valued order parameter $\mu^{\ }_{\mathrm{s}}(\boldsymbol{r})$ (introduced by Semenoff for graphene in Ref.~\onlinecite{Semenoff84}). The local order parameter for a bond-density wave instability that preserves the sublattice and time-reversal symmetries is the complex-valued order parameter $\Delta(\boldsymbol{r})$ (the U(1) Kekul\'e order parameter introduced by Hou {\it et al.}\ for graphene in Ref.~\onlinecite{Hou07}). The local order parameter for a bond-density wave instability that breaks the sublattice and time-reversal symmetries is the real-valued order parameter $\eta(\boldsymbol{r})$ (introduced by Haldane for graphene in Ref.~\onlinecite{Haldane88}). If we reinstate spin-1/2 in the most naive way and consider two independent copies of the model in Eq.~(\ref{eq: def quantum Hamiltonian}), then the results we found for spinless electrons are modified in a trivial way. Defects bind equal values for the fractional charge for both species, up and down spin, thereby doubling the total induced fermionic charge (which is to be associated with a spin-singlet state). The same happens to the exchange statistical angle.
It is simply doubled with respect to the results in Sec.~\ref{sec: Fractional statistical angle}. However, if spin is not a good quantum number, a larger number of instabilities can occur and more masses or order parameters (other than $\mathrm{Re}\,\Delta$, $\mathrm{Im}\,\Delta$, $\mu^{\ }_{\mathrm{s}}$, and $\eta$) need to be taken into account. Thus, one must consider more generic Dirac Hamiltonians and study all their allowed masses. Topological defects in these order parameters could bind states, whose (fractional) charge and statistics would depend on the effective action (as a function of all the mass order parameters and the $a^{\ }_{\mu}$ and $a^{\ }_{5 \mu}$ fields) that is obtained upon integrating out all the species of fermions. This effective action would be the extension of the one derived in Sec.~\ref{sec: Derivative expansion} for the case of the four order parameters ($\mathrm{Re}\,\Delta$, $\mathrm{Im}\,\Delta$, $\mu^{\ }_{\mathrm{s}}$, and $\eta$). We do not fully carry out this program in this paper. Nonetheless, we classify all these masses according to the microscopic symmetries. This classification applies as well to the microscopic model of Sec.~\ref{sec: Microscopic models}. There, we chose a specific way to add Wilson masses [see Eq.~(\ref{eq: Wilson t' mass})] to selectively get rid of all but 2 Dirac points in order to recover in the long-wavelength limit Hamiltonian% ~(\ref{eq: def quantum Hamiltonian}). The set of all (64) Wilson masses can also be classified as we do below. \begin{widetext} \begin{table*} \caption{ \label{tab: 56 5-tuplets} Enumeration of the 56 distinct 5-tuplets of maximally pairwise anticommuting PHS mass matrices $X^{\ }_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}}$. The 56 5-tuplets are broken into 28 pairs related by the operation of $C$ conjugation% ~(\ref{eq: def partner mass matrix}).
} \begin{ruledtabular} \begin{tabular}{ll} 5-tuplet & Partner 5-tuplet by $C$ conjugation\\ &\\ $\left\{ \text{ReVBS}, \text{ImVBS}, \text{ReSSC}, \text{ImSSC}, \text{CDW} \right\}$ & $\left\{ \text{ReVBS}, \text{ImVBS}, \text{N\'eel}^{\ }_x, \text{N\'eel}^{\ }_y, \text{N\'eel}^{\ }_z \right\}$ \\&\\ $\left\{ \text{ImVBS}, \text{CDW}, \text{ReVBS}^{\ }_{x}, \text{ReVBS}^{\ }_{y}, \text{ReVBS}^{\ }_{z} \right\}$ & $\left\{ \text{ImVBS}, \text{N\'eel}^{\ }_z, \text{ImTSC}^{\ }_{32z}, \text{ReTSC}^{\ }_{32z}, \text{ReVBS}^{\ }_{z} \right\}$ \\ $\left\{ \text{ReVBS}, \text{CDW}, \text{ImVBS}^{\ }_{x}, \text{ImVBS}^{\ }_{y}, \text{ImVBS}^{\ }_{z} \right\}$ & $\left\{ \text{ReVBS}, \text{N\'eel}^{\ }_z, \text{ReTSC}^{\ }_{02z}, \text{ImTSC}^{\ }_{02z}, \text{ImVBS}^{\ }_{z} \right\}$ \\ $\left\{ \text{ReSSC}, \text{ImSSC}, \text{QSHE}^{\ }_x, \text{QSHE}^{\ }_y, \text{QSHE}^{\ }_z \right\}$ & $\left\{ \text{N\'eel}^{\ }_x, \text{N\'eel}^{\ }_y, \text{ImTSC}^{\ }_z, \text{ReTSC}^{\ }_z, \text{QSHE}^{\ }_z \right\}$ \\&\\ $\left\{ \text{ReVBS}, \text{ReSSC}, \text{ReTSC}^{\ }_{02x}, \text{ImTSC}^{\ }_{02y}, \text{ReTSC}^{\ }_{02z} \right\}$ & $\left\{ \text{ReVBS}, \text{N\'eel}^{\ }_x, \text{ReTSC}^{\ }_{02x}, \text{ImTSC}^{\ }_{02x}, \text{ImVBS}^{\ }_{x} \right\}$ \\ $\left\{ \text{ReVBS}, \text{ImSSC}, \text{ImTSC}^{\ }_{02x}, \text{ReTSC}^{\ }_{02y}, \text{ImTSC}^{\ }_{02z} \right\}$ & $\left\{ \text{ReVBS}, \text{N\'eel}^{\ }_y, \text{ImTSC}^{\ }_{02y}, \text{ReTSC}^{\ }_{02y}, \text{ImVBS}^{\ }_{y} \right\}$ \\ $\left\{ \text{ImVBS}, \text{ImSSC}, \text{ReTSC}^{\ }_{32x}, \text{ImTSC}^{\ }_{32y}, \text{ReTSC}^{\ }_{32z} \right\}$ & $\left\{ \text{ImVBS}, \text{N\'eel}^{\ }_y, \text{ReTSC}^{\ }_{32y}, \text{ImTSC}^{\ }_{32y}, \text{ReVBS}^{\ }_{y} \right\}$ \\ $\left\{ \text{ImVBS}, \text{ReSSC}, \text{ImTSC}^{\ }_{32x}, \text{ReTSC}^{\ }_{32y}, \text{ImTSC}^{\ }_{32z} \right\}$ & $\left\{ \text{ImVBS}, \text{N\'eel}^{\ }_x, \text{ImTSC}^{\ }_{32x}, \text{ReTSC}^{\ 
}_{32x}, \text{ReVBS}^{\ }_{x} \right\}$ \\ $\left\{ \text{CDW}, \text{ImSSC}, \text{ImTSC}^{\ }_x, \text{ReTSC}^{\ }_y, \text{ImTSC}^{\ }_z \right\}$ & $\left\{ \text{N\'eel}^{\ }_z, \text{N\'eel}^{\ }_y, \text{ImTSC}^{\ }_x, \text{ReTSC}^{\ }_x, \text{QSHE}^{\ }_x \right\}$ \\ $\left\{ \text{CDW}, \text{ReSSC}, \text{ReTSC}^{\ }_x, \text{ImTSC}^{\ }_y, \text{ReTSC}^{\ }_z \right\}$ & $\left\{ \text{N\'eel}^{\ }_z, \text{N\'eel}^{\ }_x, \text{ReTSC}^{\ }_y, \text{ImTSC}^{\ }_y, \text{QSHE}^{\ }_y \right\}$ \\&\\ $\left\{ \text{ImVBS}^{\ }_{x}, \text{QSHE}^{\ }_y, \text{ImVBS}^{\ }_{z}, \text{ReTSC}^{\ }_{32y}, \text{ImTSC}^{\ }_{32y} \right\}$ & $\left\{ \text{ReTSC}^{\ }_{02z}, \text{ReTSC}^{\ }_z, \text{ImVBS}^{\ }_{z}, \text{ReTSC}^{\ }_{32x}, \text{ImTSC}^{\ }_{32y} \right\}$ \\ $\left\{ \text{ImVBS}^{\ }_{x}, \text{QSHE}^{\ }_y, \text{ReVBS}^{\ }_{x}, \text{N\'eel}^{\ }_x, \text{QSHE}^{\ }_z \right\}$ & $\left\{ \text{ReTSC}^{\ }_{02z}, \text{ReTSC}^{\ }_z, \text{ImTSC}^{\ }_{32z}, \text{ReSSC}, \text{QSHE}^{\ }_z \right\}$ \\ $\left\{ \text{ImVBS}^{\ }_{x}, \text{ReTSC}^{\ }_{32y}, \text{ImTSC}^{\ }_{32z}, \text{ImTSC}^{\ }_{02x}, \text{ImTSC}^{\ }_x \right\}$ & $\left\{ \text{ReTSC}^{\ }_{02z}, \text{ReTSC}^{\ }_{32x}, \text{ReVBS}^{\ }_{x}, \text{ImTSC}^{\ }_{02y}, \text{ImTSC}^{\ }_x \right\}$ \\ $\left\{ \text{ImVBS}^{\ }_{x}, \text{ReTSC}^{\ }_{32z}, \text{ReTSC}^{\ }_{02x}, \text{ReTSC}^{\ }_x, \text{ImTSC}^{\ }_{32y} \right\}$ & $\left\{ \text{ReTSC}^{\ }_{02z}, \text{ReVBS}^{\ }_{y}, \text{ReTSC}^{\ }_{02x}, \text{ReTSC}^{\ }_y, \text{ImTSC}^{\ }_{32y} \right\}$ \\ $\left\{ \text{ImVBS}^{\ }_{x}, \text{ReTSC}^{\ }_{32z}, \text{ImTSC}^{\ }_{32z}, \text{ImVBS}^{\ }_{y}, \text{QSHE}^{\ }_z \right\}$ & $\left\{ \text{ReTSC}^{\ }_{02z}, \text{ReVBS}^{\ }_{y}, \text{ReVBS}^{\ }_{x}, \text{ImTSC}^{\ }_{02z}, \text{QSHE}^{\ }_z \right\}$ \\ $\left\{ \text{ImVBS}^{\ }_{x}, \text{ReTSC}^{\ }_x, \text{ImTSC}^{\ }_x, \text{CDW}, \text{ReVBS}^{\ }_{x} \right\}$ & 
$\left\{ \text{ReTSC}^{\ }_{02z}, \text{ReTSC}^{\ }_y, \text{ImTSC}^{\ }_x, \text{N\'eel}^{\ }_z, \text{ImTSC}^{\ }_{32z} \right\}$ \\&\\ $\left\{ \text{QSHE}^{\ }_y, \text{ImVBS}^{\ }_{z}, \text{QSHE}^{\ }_x, \text{ReVBS}^{\ }_{z}, \text{N\'eel}^{\ }_z \right\}$ & $\left\{ \text{ReTSC}^{\ }_z, \text{ImVBS}^{\ }_{z}, \text{ImTSC}^{\ }_z, \text{ReVBS}^{\ }_{z}, \text{CDW} \right\}$ \\ $\left\{ \text{QSHE}^{\ }_y, \text{ReTSC}^{\ }_{02y}, \text{ReTSC}^{\ }_y, \text{ImSSC}, \text{ImTSC}^{\ }_{32y} \right\}$ & $\left\{ \text{ReTSC}^{\ }_z, \text{ReTSC}^{\ }_{02y}, \text{ReTSC}^{\ }_x, \text{N\'eel}^{\ }_y, \text{ImTSC}^{\ }_{32y} \right\}$ \\ $\left\{ \text{QSHE}^{\ }_y, \text{ReTSC}^{\ }_{02y}, \text{ImTSC}^{\ }_{02y}, \text{ReVBS}^{\ }_{x}, \text{ReVBS}^{\ }_{z} \right\}$ & $\left\{ \text{ReTSC}^{\ }_z, \text{ReTSC}^{\ }_{02y}, \text{ImTSC}^{\ }_{02x}, \text{ImTSC}^{\ }_{32z}, \text{ReVBS}^{\ }_{z} \right\}$ \\ $\left\{ \text{QSHE}^{\ }_y, \text{ReTSC}^{\ }_{32y}, \text{ImTSC}^{\ }_{02y}, \text{ImTSC}^{\ }_y, \text{ReSSC}\right\}$ & $\left\{ \text{ReTSC}^{\ }_z, \text{ReTSC}^{\ }_{32x}, \text{ImTSC}^{\ }_{02x}, \text{ImTSC}^{\ }_y, \text{N\'eel}^{\ }_x \right\}$ \\&\\ $\left\{ \text{ReVBS}^{\ }_{y}, \text{N\'eel}^{\ }_y, \text{QSHE}^{\ }_x, \text{ImVBS}^{\ }_{y}, \text{QSHE}^{\ }_z \right\}$ & $\left\{ \text{ReTSC}^{\ }_{32z}, \text{ImSSC}, \text{ImTSC}^{\ }_z, \text{ImTSC}^{\ }_{02z}, \text{QSHE}^{\ }_z \right\}$ \\ $\left\{ \text{ReVBS}^{\ }_{y}, \text{ReTSC}^{\ }_y, \text{ImTSC}^{\ }_y, \text{CDW}, \text{ImVBS}^{\ }_{y} \right\}$ & $\left\{ \text{ReTSC}^{\ }_{32z}, \text{ReTSC}^{\ }_x, \text{ImTSC}^{\ }_y, \text{N\'eel}^{\ }_z, \text{ImTSC}^{\ }_{02z} \right\}$ \\ $\left\{ \text{ReVBS}^{\ }_{y}, \text{ReTSC}^{\ }_{32y}, \text{ImTSC}^{\ }_y, \text{ImTSC}^{\ }_{02z}, \text{ImTSC}^{\ }_{02x} \right\}$ & $\left\{ \text{ReTSC}^{\ }_{32z}, \text{ReTSC}^{\ }_{32x}, \text{ImTSC}^{\ }_y, \text{ImVBS}^{\ }_{y}, \text{ImTSC}^{\ }_{02y} \right\}$ \\ $\left\{ \text{ReVBS}^{\ 
}_{y}, \text{ReTSC}^{\ }_{02x}, \text{ImTSC}^{\ }_{02x}, \text{QSHE}^{\ }_x, \text{ReVBS}^{\ }_{z} \right\}$ & $\left\{ \text{ReTSC}^{\ }_{32z}, \text{ReTSC}^{\ }_{02x}, \text{ImTSC}^{\ }_{02y}, \text{ImTSC}^{\ }_z, \text{ReVBS}^{\ }_{z} \right\}$ \\&\\ $\left\{ \text{N\'eel}^{\ }_y, \text{ReTSC}^{\ }_{32y}, \text{ImTSC}^{\ }_{02y}, \text{ImTSC}^{\ }_z, \text{ImTSC}^{\ }_x \right\}$ & $\left\{ \text{ImSSC}, \text{ReTSC}^{\ }_{32x}, \text{ImTSC}^{\ }_{02x}, \text{QSHE}^{\ }_x, \text{ImTSC}^{\ }_x \right\}$ \\ $\left\{ \text{ImVBS}^{\ }_{z}, \text{ReTSC}^{\ }_{32y}, \text{ImTSC}^{\ }_{02z}, \text{ImTSC}^{\ }_z, \text{ImTSC}^{\ }_{32x} \right\}$ & $\left\{ \text{ImVBS}^{\ }_{z}, \text{ReTSC}^{\ }_{32x}, \text{ImVBS}^{\ }_{y}, \text{QSHE}^{\ }_x, \text{ImTSC}^{\ }_{32x} \right\}$ \\ $\left\{ \text{ReTSC}^{\ }_{02y}, \text{ReTSC}^{\ }_y, \text{ImTSC}^{\ }_{32z}, \text{ImTSC}^{\ }_{32x}, \text{ImVBS}^{\ }_{y} \right\}$ & $\left\{ \text{ReTSC}^{\ }_{02y}, \text{ReTSC}^{\ }_x, \text{ReVBS}^{\ }_{x}, \text{ImTSC}^{\ }_{32x}, \text{ImTSC}^{\ }_{02z} \right\}$ \\ $\left\{ \text{ReTSC}^{\ }_y, \text{ReTSC}^{\ }_{02x}, \text{ImTSC}^{\ }_z, \text{ImTSC}^{\ }_{32x}, \text{N\'eel}^{\ }_x \right\}$ & $\left\{ \text{ReTSC}^{\ }_x, \text{ReTSC}^{\ }_{02x}, \text{QSHE}^{\ }_x, \text{ImTSC}^{\ }_{32x}, \text{ReSSC} \right\}$ \end{tabular} \end{ruledtabular} \end{table*} \end{widetext} \subsection{ Classification of masses in graphene and $\pi$-flux phases } \label{subsec: Classification of masses in graphene and pi-flux phases} To describe all symmetry-breaking instabilities with a local order parameter in graphene or the square lattice with $\pi$-flux phase, we consider the Bogoliubov-de Gennes (BdG) Hamiltonian \begin{subequations} \begin{equation} \hat{H}^{\ }_{\mathrm{BdG}} = \frac{1}{2} \int d^{2}\boldsymbol{r}\, \hat{\Psi}^{\dag} \mathcal{K} \hat{\Psi} \end{equation} where $\hat{\Psi}$ is the 16-component Nambu spinor \begin{equation} \hat{\Psi}:= \begin{pmatrix} 
\hat{\psi}_{\uparrow}^{\ }, & \hat{\psi}_{\downarrow}^{\ }, & \hat{\psi}_{\uparrow}^{\dag}, & \hat{\psi}_{\downarrow}^{\dag} \end{pmatrix}^{\mathrm{t}} \end{equation} and $\hat{\psi}^{\ }_{s=\uparrow,\downarrow}$ is a 4-component fermion annihilation operator that accounts for the 2 valley and the 2 sublattice degrees of freedom. The kernel of the BdG Hamiltonian has the block structure \begin{equation} \mathcal{K}= \begin{pmatrix} \mathcal{H}^{\ }_{\mathrm{p}\mathrm{p}} & \mathcal{H}^{\ }_{\mathrm{p}\mathrm{h}} \\ \mathcal{H}^{\dag}_{\mathrm{p}\mathrm{h}} & -\mathcal{H}^{\mathrm{t}}_{\mathrm{p}\mathrm{p}} \end{pmatrix} \label{eq: BdG block structure} \end{equation} \end{subequations} where the $8\times 8$ blocks $\mathcal{H}^{\ }_{\mathrm{p}\mathrm{p}}$ and $\mathcal{H}^{\ }_{\mathrm{p}\mathrm{h}}$ act on the combined space of valley, sublattice, and spin degrees of freedom, and represent the normal and anomalous part of the BdG Hamiltonian, respectively. These blocks satisfy \begin{equation} \begin{split} & \mathcal{H}^{\dag}_{\mathrm{p}\mathrm{p}}= \mathcal{H}^{\ }_{\mathrm{p}\mathrm{p}} \qquad \mbox{(Hermiticity)}, \\ & \mathcal{H}^{\mathrm{t}}_{\mathrm{p}\mathrm{h}}= -\mathcal{H}^{\ }_{\mathrm{p}\mathrm{h}} \qquad \mbox{(Fermi statistics)}. \end{split} \label{eq: phs} \end{equation} To represent the single particle Hamiltonian $\mathcal{K}$, define the 256 16-dimensional Hermitian matrices \begin{equation} X^{\ }_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}}:= {\rho}^{\ }_{\mu^{\ }_{1}} \otimes {s}^{\ }_{\mu^{\ }_{2}} \otimes \sigma^{\ }_{\mu^{\ }_{3}} \otimes \tau^{\ }_{\mu^{\ }_{4}} \label{eq: 256 matrices} \end{equation} where $\mu^{\ }_{1,2,3,4}=0,1,2,3$. 
Here, we have introduced the four families $ {\rho}^{\ }_{\mu^{\ }_{1}}$, $ {s}^{\ }_{\mu^{\ }_{2}}$, ${\sigma}^{\ }_{\mu^{\ }_{3}}$, and $ {\tau}^{\ }_{\mu^{\ }_{4}}$ of unit $2\times2$ and Pauli matrices that encode the particle-hole (Nambu), spin-1/2, valley, and sublattice degrees of freedom of graphene or the square lattice with $\pi$-flux phase, respectively. The Dirac kinetic energy $\mathcal{K}^{\ }_{0}$ of graphene or the square lattice with $\pi$-flux phase that accounts for the BdG block structure% ~(\ref{eq: BdG block structure}) is assigned the two 16$\times$16 Dirac matrices \begin{subequations} \begin{equation} \alpha^{\ }_{1}\equiv X^{\ }_{0031}, \qquad \alpha^{\ }_{2}\equiv X^{\ }_{3032}, \end{equation} and is given by \begin{equation} \mathcal{K}^{\ }_{0}:= \boldsymbol{\alpha}\cdot(-{i}\boldsymbol{\partial}). \label{eq: def K0} \end{equation} Similarly, by introducing the 16$\times$16 Hermitian matrices \begin{equation} \beta\equiv X^{\ }_{3010}, \qquad R\equiv X^{\ }_{3033}, \qquad \gamma^{\ }_{5}\equiv X^{\ }_{3030}, \end{equation} the counterpart to $\hat{H}$ in Eq.~(2.1) is given by \begin{equation} \mathcal{K}:= \mathcal{K}^{\ }_{0} + \mathcal{K}^{\ }_{\mathrm{gauge}} + \mathcal{K}^{\ }_{\mathrm{scalar}}, \label{eq: def K Dirac} \end{equation} where \begin{equation} \begin{split} & \mathcal{K}^{\ }_{\mathrm{gauge}}:= \boldsymbol{\alpha}\cdot \left( - \boldsymbol{a} - \boldsymbol{a}^{\ }_{5}\gamma^{\ }_{5} \right), \\ & \mathcal{K}^{\ }_{\mathrm{scalar}}:= |\Delta|\beta e^{{i}\theta\gamma^{\ }_{5}} + \mu^{\ }_{\mathrm{s}} R + {i} \eta \alpha^{\ }_{1} \alpha^{\ }_{2}. 
\end{split} \label{eq: 16 dimensional Dirac single-particle Hamiltonian} \end{equation} \end{subequations} Given the Dirac kinetic term $\mathcal{K}^{\ }_{0}$, we treat $X^{\ }_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}}$ as a perturbation, \begin{equation} \mathcal{K}^{\ }_{m} := \mathcal{K}^{\ }_{0} + m X^{\ }_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}} \label{eq: def K Dirac kinectic+m} \end{equation} where $m\in \mathbb{R}$ is constant in space and time. If $X^{\ }_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}}$ anticommutes with the Dirac kinetic energy $\mathcal{K}^{\ }_{0}$, then it opens a gap in the massless Dirac spectrum of $\mathcal{K}^{\ }_{0}$. We shall call such a perturbation a mass for short. Each mass can be thought of as being induced by a breaking of a microscopic symmetry (see below). There are $64=4\times16$ mass matrices (i.e., matrices $X^{\ }_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}}$ that anticommute with $\mathcal{K}^{\ }_{0}$). Of these 64 mass matrices, only 36 satisfy the condition \begin{equation} X^{\ }_{1000}\, X^{\mathrm{t}}_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}} \, X^{\ }_{1000} = - X^{\ }_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}} \label{eq: def PHS} \end{equation} for particle-hole symmetry (PHS) and are thus compatible with the symmetry condition $(\rho^{\ }_{1}\otimes s^{\ }_{0}\otimes \sigma^{\ }_{0}\otimes \tau^{\ }_{0} \hat{\Psi})^{\mathrm{t}}=\hat{\Psi}^{\dag}$ on the Nambu spinors [i.e., compatible with Eq.\ (\ref{eq: phs})]. All mass matrices with PHS are enumerated in Table \ref{tab: 36 masses}. All 36 mass matrices from Table \ref{tab: 36 masses} can be classified in terms of the following 3 (microscopic) symmetry properties. (i) A BdG Hamiltonian has time-reversal symmetry (TRS) when \begin{equation} X^{\ }_{0211}\, \mathcal{K}^{*}\, X^{\ }_{0211}= \mathcal{K}.
\label{eq: def TRS} \end{equation} (ii) A BdG Hamiltonian has SU(2) spin rotation symmetry (SRS) when \begin{equation} \left[ X^{\ }_{3100}, \mathcal{K} \right] = \left[ X^{\ }_{0200}, \mathcal{K} \right] = \left[ X^{\ }_{3300}, \mathcal{K} \right] =0. \label{eq: def SRS} \end{equation} (iii) A BdG Hamiltonian has sublattice symmetry (SLS) when \begin{equation} X^{\ }_{0033}\, \mathcal{K}\, X^{\ }_{0033}= - \mathcal{K}. \label{eq: def SLS} \end{equation} For any lattice regularization of the BdG Hamiltonian~(\ref{eq: def K Dirac kinectic+m}) supporting two sublattices $\Lambda^{\ }_{\mathrm{A}}$ and $\Lambda^{\ }_{\mathrm{B}}$, as is the case for graphene or the square lattice with $\pi$-flux phase, the microscopic order parameter corresponding to a mass matrix satisfying the SLS~(\ref{eq: def SLS}) is a non-vanishing expectation value for a fermion bilinear with the two lattice fermions residing on the opposite ends of a bond connecting a site belonging to sublattice $\Lambda^{\ }_{\mathrm{A}}$ and another site belonging to sublattice $\Lambda^{\ }_{\mathrm{B}}$. We shall say that such a mass matrix is associated to a valence-bond solid (VBS) order parameter in analogy to the terminology used for quantum dimer models. A VBS order picks up a microscopic orientation that translates into a complex-valued order parameter in the continuum limit. Hence, we shall distinguish between the real (ReVBS) and imaginary (ImVBS) parts of the VBS. Triplet superconductivity is also possible on bonds connecting the two sublattices. The terminology TSC will then also be used. To distinguish TSC with or without TRS we shall reserve the prefixes Re and Im for real and imaginary parts. This is a different convention for the use of the prefixes Re and Im than for a VBS. Any mass matrix that does not satisfy the SLS~(\ref{eq: def SLS}) corresponds to a microscopic order parameter for which the fermion bilinear has the two lattice fermions sitting on the same sublattice. 
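These countings and symmetry assignments lend themselves to a direct numerical check. The following sketch (NumPy; not part of the original derivation) adopts the Kronecker ordering $\rho\otimes s\otimes\sigma\otimes\tau$ and the matrices $\alpha^{\ }_{1}=X^{\ }_{0031}$, $\alpha^{\ }_{2}=X^{\ }_{3032}$, $\beta=X^{\ }_{3010}$, and $R=X^{\ }_{3033}$ defined above; it reproduces the 64 gap-opening matrices, the 36 PHS-compatible ones, and the TRS/SRS/SLS properties of the three mass terms with couplings $|\Delta|$ (at $\theta=0$), $\mu^{\ }_{\mathrm{s}}$, and $\eta$.

```python
import itertools
import numpy as np

# Pauli matrices sigma_0, ..., sigma_3.
P = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def X(m1, m2, m3, m4):
    """16x16 matrix rho_{m1} (x) s_{m2} (x) sigma_{m3} (x) tau_{m4}."""
    return np.kron(np.kron(P[m1], P[m2]), np.kron(P[m3], P[m4]))

def anticommute(A, B):
    return np.allclose(A @ B + B @ A, 0)

alpha1, alpha2 = X(0, 0, 3, 1), X(3, 0, 3, 2)
X1000 = X(1, 0, 0, 0)

# A mass anticommutes with both matrices of the Dirac kinetic energy.
masses = [m for m in itertools.product(range(4), repeat=4)
          if anticommute(X(*m), alpha1) and anticommute(X(*m), alpha2)]

# PHS selects the matrices with X_1000 X^t X_1000 = -X.
phs_masses = [m for m in masses
              if np.allclose(X1000 @ X(*m).T @ X1000, -X(*m))]

print(len(masses), len(phs_masses))  # 64 36

# Symmetry tests (i)-(iii) applied to a single mass term M.
T, S = X(0, 2, 1, 1), X(0, 0, 3, 3)
spin = [X(3, 1, 0, 0), X(0, 2, 0, 0), X(3, 3, 0, 0)]

def trs(M):  # TRS holds iff T M* T = M
    return np.allclose(T @ M.conj() @ T, M)

def srs(M):  # SRS holds iff M commutes with the three spin generators
    return all(np.allclose(g @ M, M @ g) for g in spin)

def sls(M):  # SLS holds iff S M S = -M
    return np.allclose(S @ M @ S, -M)

# The three mass terms of the scalar part of the Hamiltonian.
results = {"Delta": X(3, 0, 1, 0),        # beta (theta = 0)
           "mu_s":  X(3, 0, 3, 3),        # R
           "eta":   1j * alpha1 @ alpha2}  # Haldane mass
results = {k: (trs(M), srs(M), sls(M)) for k, M in results.items()}
print(results)
```

The bond mass preserves all three symmetries, the charge-density mass breaks only SLS, and the Haldane mass breaks both TRS and SLS, in agreement with the discussion of Table \ref{tab: 36 masses}.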
Microscopic examples are charge-density waves (CDW), spin-density waves (SDW) such as N\'eel ordering, orbital currents leading to the quantum Hall effect (QHE), spin-orbit couplings leading to the quantum spin Hall effect (QSHE), singlet superconductivity (SSC), or triplet superconductivity (TSC). When SU(2) spin symmetry is broken by the order parameter, we add a subindex $x$, $y$, or $z$ that specifies the relevant quantization axis to the name of the mass matrix. Moreover, TSC with SLS must be distinguished by the 2 possible bond orientations (the underlying two-dimensional lattice has 2 independent vectors connecting nearest-neighbor sites). These 2 orientations are specified by the Pauli matrices used in the valley and sublattice subspaces, i.e., by the 2 pairs of numbers 02 and 32. Symmetry properties of all 36 PHS masses are summarized in Table \ref{tab: 36 masses}. The set of all 36 PHS masses in Table \ref{tab: 36 masses} is invariant under an involutive transformation defined by \begin{equation} \begin{split} & \hat{\Psi} \to C\hat{\Psi}, \\ & C = \rho^{\ }_{0} \otimes s^{\ }_{+} \otimes \sigma^{\ }_{0} \otimes \tau^{\ }_{0} + \rho^{\ }_{1} \otimes s^{\ }_{-} \otimes \sigma^{\ }_{2} \otimes \tau^{\ }_{2}, \end{split} \end{equation} and which we shall call $C$ conjugation to distinguish it from the particle-hole transformation~(\ref{eq: def PHS}). Here, $s^{\ }_{\pm} = (s^{\ }_{3}\pm s^{\ }_{0})/2$. 
For graphene or the square lattice with $\pi$-flux phase, this transformation corresponds to \begin{equation} \begin{split} & \hat{a}^{\ }_{\boldsymbol{r}^{\ }_{A} \uparrow} \to \hat{a}^{\ }_{\boldsymbol{r}^{\ }_{A} \uparrow}, \qquad \hat{b}^{\ }_{\boldsymbol{r}^{\ }_{B} \uparrow} \to \hat{b}^{\ }_{\boldsymbol{r}^{\ }_{B} \uparrow}, \\ & \hat{a}^{\ }_{\boldsymbol{r}^{\ }_{A} \downarrow} \to \hat{a}^{\dag}_{\boldsymbol{r}^{\ }_{A} \downarrow}, \qquad \hat{b}^{\ }_{\boldsymbol{r}^{\ }_{B} \downarrow} \to -\hat{b}^{\dag}_{\boldsymbol{r}^{\ }_{B} \downarrow}, \end{split} \end{equation} where $\hat{a}^{\dag}_{\boldsymbol{r}^{\ }_{A} s}$ and $\hat{b}^{\dag}_{\boldsymbol{r}^{\ }_{B} s}$ create an electron with spin $s=\uparrow,\downarrow$ on sublattice $\Lambda^{\ }_{\mathrm{A}}$ and sublattice $\Lambda^{\ }_{\mathrm{B}}$, respectively (see Appendix \ref{appsec: Numerical Berry phase in the single-particle approximation}). Under this transformation \begin{equation} X^{\ }_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}} \to C^{\dag} X^{\ }_{\mu^{\ }_{1}\mu^{\ }_{2}\mu^{\ }_{3}\mu^{\ }_{4}} C. \label{eq: def partner mass matrix} \end{equation} Hence, it leaves the massless Dirac kernel $\mathcal{K}^{\ }_{0}$ invariant. The organization of the mass matrices in Table \ref{tab: 36 masses} can be understood as follows. First, we preserve both SRS and charge conservation, i.e., we start with the 4 order parameters we have already encountered in the spinless case with charge conservation. There are two valence bond solids, ReVBS ($\mathrm{Re}\,\Delta$) and ImVBS ($\mathrm{Im}\,\Delta$). They have maximal symmetry and are invariant under the operation of $C$ conjugation% ~(\ref{eq: def partner mass matrix}). The CDW order parameter ($\mu^{\ }_{\mathrm{s}}$) breaks the SLS. It is mapped into the N\'eel spin-density wave with quantization axis $z$ under the operation of $C$ conjugation% ~(\ref{eq: def partner mass matrix}).
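The advertised properties of the $C$ conjugation can also be checked numerically. In the sketch below (same Kronecker conventions as above), the identification of $X^{\ }_{3333}$ with the N\'eel$^{\ }_{z}$ mass is inferred from the structure of the spinors, not quoted from Table \ref{tab: 36 masses}, and should be read as an assumption.

```python
import numpy as np

P = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def X(m1, m2, m3, m4):
    return np.kron(np.kron(P[m1], P[m2]), np.kron(P[m3], P[m4]))

# C = rho_0 (x) s_+ (x) sigma_0 (x) tau_0 + rho_1 (x) s_- (x) sigma_2 (x) tau_2,
# with s_(+/-) = (s_3 +/- s_0)/2.
s_p, s_m = (P[3] + P[0]) / 2, (P[3] - P[0]) / 2
C = (np.kron(np.kron(P[0], s_p), np.kron(P[0], P[0]))
     + np.kron(np.kron(P[1], s_m), np.kron(P[2], P[2])))

alpha1, alpha2 = X(0, 0, 3, 1), X(3, 0, 3, 2)

involutive = np.allclose(C @ C, np.eye(16))  # C is its own inverse
# C is Hermitian, so C^dagger = C in the conjugations below.
k0_invariant = (np.allclose(C @ alpha1 @ C, alpha1)
                and np.allclose(C @ alpha2 @ C, alpha2))
# The CDW mass X_3033 is mapped onto X_3333 (staggered magnetization along z).
cdw_to_neel = np.allclose(C @ X(3, 0, 3, 3) @ C, X(3, 3, 3, 3))

print(involutive, k0_invariant, cdw_to_neel)  # True True True
```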
The QHE order parameter ($\eta$) breaks both the SLS and TRS symmetries. It is invariant under the operation of $C$ conjugation% ~(\ref{eq: def partner mass matrix}). Second, we break SRS with or without either TRS or SLS while always preserving charge conservation. The breaking of SRS is achieved by choosing a preferred quantization axis, say $x$, $y$, or $z$, in SU(2) spin space. Breaking SRS while preserving SLS is achieved with spin-polarized valence-bond ordering in 6=3$\times$2 different ways, which we abbreviate by ReVBS$^{\ }_{x}$, ReVBS$^{\ }_{y}$, ReVBS$^{\ }_{z}$, ImVBS$^{\ }_{x}$, ImVBS$^{\ }_{y}$, and ImVBS$^{\ }_{z}$ in Table \ref{tab: 36 masses}. In doing so, TRS is always broken. Breaking SRS and SLS while preserving TRS is achieved through any of the 3 order parameters for the quantum spin Hall effect (QSHE) introduced by Kane and Mele in Ref.~\onlinecite{Kane05}, which we abbreviate by QSHE$^{\ }_{x}$, QSHE$^{\ }_{y}$, and QSHE$^{\ }_{z}$ in Table \ref{tab: 36 masses}. Breaking SRS, SLS, and TRS is achieved through any one of the 3 collinear magnetic orders of N\'eel type, which we abbreviate by N\'eel$^{\ }_{x}$, N\'eel$^{\ }_{y}$, and N\'eel$^{\ }_{z}$ in Table \ref{tab: 36 masses}. This brings the number of order parameters that conserve the electronic charge to 16=4+6+3+3. There are thus 20=2+6+6+6 remaining order parameters that do not conserve the electronic charge. Third, superconducting order is achieved microscopically by pairing two electrons sitting on different or identical sublattices. In the former case, SLS is preserved. In the latter case, SLS is broken. Pairing of the 2 electronic spins takes place either in a singlet or in a triplet channel. Antisymmetry under exchange of the two electrons making up a spin-singlet Cooper pair can only be achieved in an even angular momentum channel. On-site pairing is of course associated with vanishing angular momentum, so that singlet superconductivity can only be realized when SLS is broken.
This only leaves 2 possible singlet superconducting order parameters that are distinguished by whether they preserve or break TRS. They are denoted ReSSC and ImSSC, respectively. (Real and imaginary parts thus take a different meaning here than for ReVBS and ImVBS.) Fourth, a triplet superconducting order parameter, which we abbreviate by TSC in Table \ref{tab: 36 masses}, is characterized by a vector $\boldsymbol{d}$ in SU(2) spin space. This vector can point along any one of the three quantization axes $x$, $y$, and $z$ in SU(2) spin space. Moreover, it can either preserve or break TRS, for which cases we use the notations ReTSC and ImTSC, respectively, in Table \ref{tab: 36 masses}. (Real and imaginary parts thus take a different meaning here than for ReVBS and ImVBS.) When SLS is preserved by the superconducting order parameter, there are $12=2\times2\times3$ independent order parameters, where, besides the factor of 2 for TRS, a second factor of 2 arises because there are 2 directed bonds connecting nearest-neighbor sites of the two-dimensional lattice. This is abbreviated in Table \ref{tab: 36 masses} by using the index bond=02,32 in ReTSC$^{\ }_{\mathrm{bond}x}$, ReTSC$^{\ }_{\mathrm{bond}y}$, ReTSC$^{\ }_{\mathrm{bond}z}$, ImTSC$^{\ }_{\mathrm{bond}x}$, ImTSC$^{\ }_{\mathrm{bond}y}$, and ImTSC$^{\ }_{\mathrm{bond}z}$. Finally, when SLS is broken by the superconducting order parameter, there are $6=2\times3$ independent order parameters that we abbreviate by ReTSC$^{\ }_{x}$, ReTSC$^{\ }_{y}$, ReTSC$^{\ }_{z}$, ImTSC$^{\ }_{x}$, ImTSC$^{\ }_{y}$, and ImTSC$^{\ }_{z}$ in Table \ref{tab: 36 masses}. There are $12=4\times3$ order parameters that are invariant under the operation of $C$ conjugation% ~(\ref{eq: def partner mass matrix}). They can be arranged in 4 groups of 3 each. Each group of 3 obeys the same algebra.
The 4 groups of 3 are: (i) ReVBS, ImVBS, QHE; (ii) ReVBS$^{\ }_{z}$, ImVBS$^{\ }_{z}$, QSHE$^{\ }_{z}$; (iii) ReTSC$^{\ }_{02x}$, ImTSC$^{\ }_{32x}$, ImTSC$^{\ }_{y}$; (iv) ReTSC$^{\ }_{02y}$, ImTSC$^{\ }_{32y}$, and ImTSC$^{\ }_{x}$. The operation of $C$ conjugation% ~(\ref{eq: def partner mass matrix}) is a useful tool to identify the possibility of exotic topological effects. For example, we observe that the pair of SSC order parameters ReSSC and ImSSC, studied in Refs.~\onlinecite{Beenakker06} and \onlinecite{GhaemiWilczek} in the context of graphene, are conjugate by $C$ to the N\'eel order parameters $\mbox{N\'eel}^{\ }_{x}$ and $\mbox{N\'eel}^{\ }_{y}$, respectively. Furthermore, Table \ref{tab: 36 masses} indicates that several triplets of masses that obey the SU(2) algebra are related by the operation of $C$ conjugation% ~(\ref{eq: def partner mass matrix}). They are \begin{equation} \begin{split} & \left\{ \mathrm{ReVBS}, \mathrm{ImVBS}, \mathrm{CDW} \right\} \\ &\hphantom{AAAAA} \overset{C}{\longleftrightarrow} \left\{ \mathrm{ReVBS}, \mathrm{ImVBS}, \hbox{N\'eel}^{\ }_{z} \right\}, \\ & \left\{ \mathrm{ReVBS}^{\ }_{x}, \mathrm{ReVBS}^{\ }_{y}, \mathrm{ReVBS}^{\ }_{z} \right\} \\ &\hphantom{AAAA} \overset{C}{\longleftrightarrow} \left\{ \mathrm{ImTSC}^{\ }_{32z}, \mathrm{ReTSC}^{\ }_{32z}, \mathrm{ReVBS}^{\ }_{z} \right\}, \\ & \left\{ \mathrm{ImVBS}^{\ }_{x}, \mathrm{ImVBS}^{\ }_{y}, \mathrm{ImVBS}^{\ }_{z} \right\} \\ &\hphantom{AAAA} \overset{C}{\longleftrightarrow} \left\{ \mathrm{ReTSC}^{\ }_{02z}, \mathrm{ImTSC}^{\ }_{02z}, \mathrm{ImVBS}^{\ }_{z} \right\}, \\ & \left\{ \mathrm{QSHE}^{\ }_{x}, \mathrm{QSHE}^{\ }_{y}, \mathrm{QSHE}^{\ }_{z} \right\} \\ &\hphantom{AAAAAAA} \overset{C}{\longleftrightarrow} \left\{ \mathrm{ImTSC}^{\ }_{z}, \mathrm{ReTSC}^{\ }_{z}, \mathrm{QSHE}^{\ }_{z} \right\}.
\end{split} \end{equation} Vortex-like defective textures in any of these mass doublets or meron-like defective textures in any of these mass triplets display fractionalization of some suitably defined quantum numbers. Finally, the topological property that a band insulator supporting the QSHE carries an odd number of Kramers doublets on its edges carries over to the $C$ conjugate TSC. More precisely, the fact that the superconductors with the ImTSC$^{\ }_{z}$ and ReTSC$^{\ }_{z}$ order parameters are examples of $\mathbb{Z}^{\ }_{2}$ topological triplet superconductors according to Refs.~\onlinecite{Roy06}, \onlinecite{Schnyder08}, \onlinecite{Roy08}, \onlinecite{Qi09}, and \onlinecite{Kitaev08} is here a mere consequence of their $C$ conjugation with the QSHE$^{\ }_{x}$ and QSHE$^{\ }_{y}$ order parameters, respectively. \subsection{ Classification of 5-tuplets of masses in graphene and $\pi$-flux phases } \label{subsec: Classification of 5-tuplets of masses in graphene and pi-flux phases} Mass matrices that commute pairwise generate competing local order parameters. Conversely, mass matrices that anticommute pairwise generate compatible local order parameters. All but one of the PHS masses anticommute with 16 of the 36 PHS masses. The Haldane mass is unique in that it commutes with all PHS masses. There are 560 sets of three mutually anticommuting PHS masses. These triplets are generalizations of the triplet of compatible masses $\Delta=\mathrm{Re}\,\Delta+{i}\mathrm{Im}\,\Delta$ and $\mu^{\ }_{\mathrm{s}}$. Integration over the Dirac fermions in the presence of any one of these mass triplets of mass $m$ in competition with the Haldane mass $\eta$ induces an O(3) NLSM in (2+1)-dimensional space and time with or without a Hopf term for $|\eta|>m$ and $m>|\eta|$, respectively, as was derived in Ref.~\onlinecite{Chamon08a}. There are 280 sets of four mutually anticommuting PHS masses and the maximum number of pairwise anticommuting PHS mass matrices is 5.
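All of these counts, as well as the 16+20 split between charge-conserving and non-conserving order parameters discussed above, can be reproduced by brute force. The sketch below (NumPy, same conventions as before; illustrative only) builds the anticommutation graph of the 36 PHS masses and grows the sets of pairwise anticommuting masses one element at a time; charge conservation is tested as commutation with $X^{\ }_{3000}$.

```python
import itertools
import numpy as np

P = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def X(m):
    return np.kron(np.kron(P[m[0]], P[m[1]]), np.kron(P[m[2]], P[m[3]]))

def anticommute(A, B):
    return np.allclose(A @ B + B @ A, 0)

def commute(A, B):
    return np.allclose(A @ B, B @ A)

alpha1, alpha2, X1000 = X((0, 0, 3, 1)), X((3, 0, 3, 2)), X((1, 0, 0, 0))

# The 36 PHS masses, enumerated as in the previous subsection.
phs = [m for m in itertools.product(range(4), repeat=4)
       if anticommute(X(m), alpha1) and anticommute(X(m), alpha2)
       and np.allclose(X1000 @ X(m).T @ X1000, -X(m))]
M = [X(m) for m in phs]
n = len(M)  # 36

# Charge conservation <-> commutation with X_3000; SRS <-> with the spin generators.
charge = X((3, 0, 0, 0))
spin = [X((3, 1, 0, 0)), X((0, 2, 0, 0)), X((3, 3, 0, 0))]
conserving = [m for m in phs if commute(X(m), charge)]
full_srs = [m for m in conserving if all(commute(X(m), g) for g in spin)]
print(len(conserving), n - len(conserving), len(full_srs))  # 16 20 4

# Pairwise anticommutation graph of the 36 masses.
adj = [[i != j and anticommute(M[i], M[j]) for j in range(n)] for i in range(n)]
deg = [sum(row) for row in adj]
haldane = phs.index((3, 0, 0, 3))  # i alpha1 alpha2 is proportional to X_3003
print(sorted(set(deg)), deg[haldane])

def extend(tuples):
    """Grow each index-ordered tuple by one pairwise-anticommuting mass."""
    return [t + (k,) for t in tuples
            for k in range(t[-1] + 1, n) if all(adj[k][i] for i in t)]

pairs = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i][j]]
triples = extend(pairs)
quadruples = extend(triples)
quintuples = extend(quadruples)
print(len(triples), len(quadruples), len(quintuples))  # 560 280 56
print(len(extend(quintuples)))  # 0: five is indeed the maximum
```

That the Haldane mass commutes with every other mass is guaranteed algebraically: for any $M$ anticommuting with both $\alpha^{\ }_{1}$ and $\alpha^{\ }_{2}$, the two sign flips in $\alpha^{\ }_{1}\alpha^{\ }_{2}M = M\alpha^{\ }_{1}\alpha^{\ }_{2}$ compensate.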
Out of $\binom{36}{5}=376992$ possibilities, there are 56 distinct 5-tuplets of compatible PHS mass matrices. They are enumerated in Table \ref{tab: 56 5-tuplets}. (If PHS is not imposed, the maximum number of pairwise anticommuting mass matrices among the 64 mass matrices is 7. There are 288 distinct 7-tuplets of compatible mass matrices.) In the background of each of these 5-tuplets, integration over the Dirac fermions yields an O(5) NLSM in (2+1)-dimensional space and time augmented by a Wess-Zumino-Witten (WZW) term as was derived in Refs.~\onlinecite{Tanaka05a} and \onlinecite{Tanaka05b}. Defect-driven continuous phase transitions between phases of matter unrelated by symmetries (i.e., Landau-forbidden transitions) become possible whenever the quantum numbers of the defective order parameters in a given 5-tuplet are dual in the sense of BF Chern-Simons field theories.% ~\cite{Ghaemi09} We illustrate this idea with the following examples. \begin{figure} \includegraphics[angle=0,scale=0.55]{FIGURES/setup.eps} \caption{ Two setups used to induce topological defects in an order parameter that support fractional quantum numbers. } \label{fig: setup} \end{figure} \subsubsection{ VBS-SSC-CDW 5-tuplet } \label{subsubsec: VBS-SSC-CDW 5-tuplet} The 5-tuplet \begin{equation} \{ \mathrm{ReVBS}, \mathrm{ImVBS}, \mathrm{ReSSC}, \mathrm{ImSSC}, \mathrm{CDW} \} \label{eq: CDW-VBS-SSC 5-tuplet} \end{equation} embeds the triplet made of the CDW and the 2 VBS order parameters into a 5-tuplet.% ~\cite{Ghaemi09} Integration over the fermions yields an O(5) NLSM augmented by a WZW term for the corresponding 5-tuplet of bosonic fields $n^{\ }_{1}$, $n^{\ }_{2}$, $n^{\ }_{3}$, $n^{\ }_{4}$, and $n^{\ }_{5}$ obeying the constraint that they add in quadrature to unity.
The O(5) symmetry can be broken, either spontaneously or explicitly, down to the U(1)$\times$U(1) subgroup corresponding to holding $\Delta^{2}_{\mathrm{CDW}}\equiv n^{2}_{5}$, $\Delta^{2}_{\mathrm{VBS}}\equiv n^{2}_{1}+n^{2}_{2}$, and $\Delta^{2}_{\mathrm{SSC}}\equiv n^{2}_{3}+n^{2}_{4}$ fixed (except at the core of topological defects) throughout space and time. The corresponding Goldstone modes are the phases $\theta^{\ }_{\mathrm{VBS}}$ and $\theta^{\ }_{\mathrm{SSC}}$. They become charge-2 Higgs fields if the U(1)$\times$U(1) global symmetry they generate is gauged through the introduction of the axial gauge fields $a^{\mu}_{\mathrm{VBS}}$ and the electromagnetic gauge fields $a^{\mu}_{\mathrm{SSC}}$, respectively. Their dynamics is governed by the Anderson-Higgs-Chern-Simons theory% ~(\ref{eq: final effective action}) with the identifications $\theta\to \theta^{\ }_{\mathrm{VBS}}$, $a^{\mu}_{5}\to a^{\mu}_{\mathrm{VBS}}$, and $a^{\mu}\to a^{\mu}_{\mathrm{SSC}} - \partial^{\mu}\theta^{\ }_{\mathrm{SSC}}/2$. The VBS phase is destroyed when the vortices carried by the conserved topological current $j^{\mathrm{vrt}\mu}_{\mathrm{VBS}}= \epsilon^{\mu\nu\rho} \partial^{\ }_{\nu}\partial^{\ }_{\rho}\theta^{\ }_{\mathrm{VBS}}/(2\pi)$ deconfine. The SSC phase is destroyed when the vortices carried by the conserved topological current $j^{\mathrm{vrt}\mu}_{\mathrm{SSC}}= \epsilon^{\mu\nu\rho} \partial^{\ }_{\nu}\partial^{\ }_{\rho}\theta^{\ }_{\mathrm{SSC}}/(2\pi)$ deconfine. Because of the BF term in the effective action, the quasiparticles supported by $j^{\mathrm{vrt}\mu}_{\mathrm{VBS}}$ also carry a fraction of the gauge charge of the gauge fields $a^{\mu}_{\mathrm{SSC}}$, while the quasiparticles supported by $j^{\mathrm{vrt}\mu}_{\mathrm{SSC}}$ also carry a fraction of the gauge charge of the gauge fields $a^{\mu}_{\mathrm{VBS}}$. Furthermore, both types of quasiparticles are bosons (there is no TRS-breaking Haldane mass).
{}From these \textit{two} facts it follows that deconfinement of one type of quasiparticles implies confinement of the second type of quasiparticles, i.e., a direct transition between the VBS and SSC phases. An experimental setup to detect exotic quantum numbers related to the 5-tuplet% ~(\ref{eq: CDW-VBS-SSC 5-tuplet}) is given in Fig.~\ref{fig: setup}(a).\cite{Ghaemi09} We assume that graphene sits on top of a type-II s-wave SC substrate. By the proximity effect, graphene develops an SSC order. The SSC order can coexist with the CDW and VBS orders in graphene according to Eq.~(\ref{eq: CDW-VBS-SSC 5-tuplet}). An applied magnetic field perpendicular to graphene creates an Abrikosov lattice of vortices in the substrate and, by the proximity effect, in graphene. The magnetic flux tubes threading graphene pin axial charges according to Eq.~(\ref{eq: final effective action}). (See also Refs.~\onlinecite{Seradjeh08} and \onlinecite{Ghaemi09}.) Increasing the magnetic field so as to destroy SSC deconfines the axial charges, i.e., stabilizes the VBS. Conversely, destroying the VBS by the deconfinement of VBS vortices also deconfines the electric charges, i.e., stabilizes the SSC. \subsubsection{ VBS-N\'eel 5-tuplet } The operation of $C$ conjugation% ~(\ref{eq: def partner mass matrix}) on the 5-tuplet~(\ref{eq: CDW-VBS-SSC 5-tuplet}) yields the 5-tuplet \begin{equation} \{ \mathrm{ReVBS}, \mathrm{ImVBS}, \hbox{N\'eel}^{\ }_{x}, \hbox{N\'eel}^{\ }_{y}, \hbox{N\'eel}^{\ }_{z} \}.
\label{eq: Neel-VBS 5-tuplet} \end{equation} The triplet of N\'eel order parameters is here embedded into a 5-tuplet by adding the doublet of VBS order parameters.% ~\cite{Tanaka05a,Tanaka05b,Herbut07} This 5-tuplet has been discussed in the context of deconfined quantum criticality of two-dimensional $S=1/2$ quantum antiferromagnetic spin models.% \cite{Senthil04a,Senthil04b,Senthil05,Ran06, Sandvik07, Melko07, Kaul08} The 5-tuplet~(\ref{eq: Neel-VBS 5-tuplet}) is the only 5-tuplet supporting the full SU(2) symmetry of the N\'eel vector. The symmetry analysis of Sec.~\ref{subsubsec: VBS-SSC-CDW 5-tuplet} follows with the identifications $\theta^{\ }_{\mathrm{VBS}}\to \theta^{\ }_{\mathrm{VBS}}$, $\theta^{\ }_{\mathrm{SSC}}\to \theta^{\ }_{\hbox{\begin{scriptsize}N\'eel\end{scriptsize}}_{xy}}$, $a^{\mu}_{\mathrm{VBS}}\to a^{\mu}_{\mathrm{VBS}}$, $a^{\mu}_{\mathrm{SSC}}\to a^{\mu}_{\hbox{\begin{scriptsize}N\'eel\end{scriptsize}}_{xy}}$, $j^{\mathrm{vrt}\mu}_{\mathrm{VBS}}\to j^{\mathrm{vrt}\mu}_{\mathrm{VBS}}$, and $j^{\mathrm{vrt}\mu}_{\mathrm{SSC}}\to j^{\mathrm{vrt}\mu}_{\hbox{\begin{scriptsize}N\'eel\end{scriptsize}}_{xy}}$. 
\subsubsection{ SSC-QSHE 5-tuplet } The 5-tuplet \begin{equation} \{ \mathrm{ReSSC}, \mathrm{ImSSC}, \mathrm{QSHE}^{\ }_{x}, \mathrm{QSHE}^{\ }_{y}, \mathrm{QSHE}^{\ }_{z} \} \label{eq: SSC_QSHE 5-tuplet} \end{equation} embeds the triplet of QSHE order parameters into a 5-tuplet by adding the two possible SSC order parameters.% ~\cite{Grover08,RanVishwanathLee08,QiZhang08} The symmetry analysis of Sec.~\ref{subsubsec: VBS-SSC-CDW 5-tuplet} follows with the identifications $\theta^{\ }_{\mathrm{VBS}}\to \theta^{\ }_{\mathrm{SSC}}$, $\theta^{\ }_{\mathrm{SSC}}\to \theta^{\ }_{\mathrm{QSHE}_{xy}}$, $a^{\mu}_{\mathrm{VBS}}\to a^{\mu}_{\mathrm{SSC}}$, $a^{\mu}_{\mathrm{SSC}}\to a^{\mu}_{\mathrm{QSHE}_{xy}}$, $j^{\mathrm{vrt}\mu}_{\mathrm{VBS}}\to j^{\mathrm{vrt}\mu}_{\mathrm{SSC}}$, and $j^{\mathrm{vrt}\mu}_{\mathrm{SSC}}\to j^{\mathrm{vrt}\mu}_{\mathrm{QSHE}_{xy}}$. An experimental setup to detect exotic quantum numbers related to the 5-tuplet% ~(\ref{eq: SSC_QSHE 5-tuplet}) is given in Fig.~\ref{fig: setup}(b). We bring in contact a (3D) bulk type-II SSC with a material displaying the QSHE. Instead of graphene for which the spin-orbit coupling is very small, HgTe/(Hg,Cd)Te semiconductor quantum wells are suitable.% ~\cite{Bernevig-Taylor-Zhang06, Koenig07,Koenig08} Any SSC vortex in the substrate induces by proximity effect an ``$S^{\ }_{z}$ spin charge'' in the device supporting the QSHE, while any $S^{\ }_{z}$ ``spin flux'' in the device supporting the QSHE induces an electric charge. \subsubsection{ XY-N\'eel-TSC-QSHE 5-tuplet } The operation of $C$ conjugation% ~(\ref{eq: def partner mass matrix}) on the 5-tuplet~(\ref{eq: SSC_QSHE 5-tuplet}) yields the 5-tuplet% ~\cite{RanVishwanathLee08,QiZhang08} \begin{subequations} \label{eq: Neel-TSC-QSHE 5-tuplet} \begin{equation} \{ \hbox{N\'eel}^{\ }_{x}, \hbox{N\'eel}^{\ }_{y}, \mathrm{ImTSC}^{\ }_{z}, \mathrm{ReTSC}^{\ }_{z}, \mathrm{QSHE}^{\ }_{z} \}. 
\end{equation} By rotating the SU(2) spin quantization axis, i.e., by cyclic permutation of the indices $x$, $y$, and $z$, we also get the 5-tuplets \begin{equation} \{ \hbox{N\'eel}^{\ }_{y}, \hbox{N\'eel}^{\ }_{z}, \mathrm{ImTSC}^{\ }_{x}, \mathrm{ReTSC}^{\ }_{x}, \mathrm{QSHE}^{\ }_{x} \} \end{equation} and \begin{equation} \{ \hbox{N\'eel}^{\ }_{z}, \hbox{N\'eel}^{\ }_{x}, \mathrm{ImTSC}^{\ }_{y}, \mathrm{ReTSC}^{\ }_{y}, \mathrm{QSHE}^{\ }_{y} \}. \end{equation} \end{subequations} These 5-tuplets describe SLS- and SRS-breaking order parameters consisting of an easy-plane antiferromagnetic order parameter coexisting with the QSHE and TSC order parameters. The symmetry analysis of Sec.~\ref{subsubsec: VBS-SSC-CDW 5-tuplet} follows with the identifications $\theta^{\ }_{\mathrm{VBS}}\to \theta^{\ }_{\hbox{\begin{scriptsize}N\'eel\end{scriptsize}}_{xy}}$, $\theta^{\ }_{\mathrm{SSC}}\to \theta^{\ }_{\mathrm{TSC}_{z}}$, $a^{\mu}_{\mathrm{VBS}}\to a^{\mu}_{\hbox{\begin{scriptsize}N\'eel\end{scriptsize}}_{xy}}$, $a^{\mu}_{\mathrm{SSC}}\to a^{\mu}_{\mathrm{TSC}_{z}}$, $j^{\mathrm{vrt}\mu}_{\mathrm{VBS}}\to j^{\mathrm{vrt}\mu}_{\hbox{\begin{scriptsize}N\'eel\end{scriptsize}}_{xy}}$, and $j^{\mathrm{vrt}\mu}_{\mathrm{SSC}}\to j^{\mathrm{vrt}\mu}_{\mathrm{TSC}_{z}}$, say. An experimental setup to detect exotic quantum numbers related to the 5-tuplet% ~(\ref{eq: Neel-TSC-QSHE 5-tuplet}) is also given in Fig.~\ref{fig: setup}(b). Any defect in the bulk $XY$ antiferromagnet, i.e., a magnetic vortex, induces a localized midgap state that carries a fraction of the electric charge carried by the phase of the TSC in the band insulator supporting the QSHE. Any TSC vortex induces an ``$S^{\ }_{z}$ spin charge'' in the device supporting the QSHE.
[A related fractional (electrical) charge is discussed at the helical edges of the QSHE.\cite{QiTaylorZhang07}] \section{ Discussion } \label{sec: Summary} Motivated by the interplay between charge-density ($\mu^{\ }_{\mathrm{s}}$), bond-density ($\Delta=|\Delta|e^{-{i}\theta}$), and integer quantum Hall ($\eta$) instabilities in graphene-like two-dimensional electronic systems, we have computed the fractional charge and fractional statistics of both screened and unscreened quasiparticles. At the microscopic level, screened quasiparticles are here the linear superpositions of two bond-density waves ($\Delta$ and $\boldsymbol{a}^{\ }_{5}$), each of which carries a point defect. Unscreened quasiparticles are defects in one type ($\Delta$) of bond-density wave. In the long-wavelength and low-energy limit and after integrating out the fermions, the quantum dynamics of screened quasiparticles is controlled by the effective theory% ~(\ref{eq: final effective action}) of the Anderson-Higgs-Chern-Simons type involving three fields. There are two U(1) gauge fields and one phase field. The first gauge field $a^{\ }_{\mu}$ is responsible for the conservation of the total fermion number. The second gauge field $a^{\ }_{5\mu}$ is responsible for the conservation of a relative fermion number, i.e., the difference in the fermion number located at the two valleys of graphene, say, and is thus called an axial gauge field. The phase field $\theta=-\mathrm{arg}\,\Delta$ originates microscopically from the fact that bond distortions include atomic displacements away from the crystalline order that are parametrized by continuous angular degrees of freedom. Screened quasiparticles are not yet explicitly manifest in the field theory% ~(\ref{eq: final effective action}).
They appear as point particles with the conserved topological current $\bar{j}^{\mu}_{\mathrm{vrt}}= (2\pi)^{-1} \epsilon^{\mu\nu\lambda} \partial^{\ }_{\nu} \partial^{\ }_{\lambda} \theta$ that carries no axial gauge charge, once a duality transformation has been performed. The Lagrangian dual to the Lagrangian% ~(\ref{eq: final effective action}) can be presented as a Chern-Simons theory for 4 gauge fields whose $K$ matrix\cite{Wen91} is 4-dimensional and couples through a 4-dimensional charge vector to the vortex current. Because the $K$ matrix has a vanishing eigenvalue,\cite{Wen91} \textit{this dual theory is not a topological theory}, such as a BF Chern-Simons theory.\cite{Blau91,Hansson04} The vanishing eigenvalue of the $K$ matrix signals the existence of low-energy excitations, the screened quasiparticles. Their fractional charges $Q$ and statistical angle $\Theta$ can then be calculated and are presented in the phase diagram of Fig.~\ref{fig:phase-diagram}. When the U(1)$\times$U(1) local gauge symmetry holds, i.e., for screened quasiparticles that represent vortices in the phase field $\theta$ whose axial charges are dynamically screened by axial gauge half fluxes in $a^{\ }_{5\mu}$, the fractional charge $Q$ and the fractional statistical angle $\Theta$ in Fig.~\ref{fig:phase-diagram} are complementary. One is non-vanishing if and only if the other vanishes. Moreover, $Q$ and $\Theta$ are universal in the fully gapped phases for which they are non-vanishing and given by a rational number in some units.
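For orientation, a multicomponent Chern-Simons theory of this type takes the generic form (schematic; the gauge fields $b^{I}_{\mu}$, the specific 4-dimensional $K$ matrix, and the charge vector $q^{\ }_{I}$ of the dual theory are placeholders and are not reproduced here):

```latex
\begin{equation*}
\mathcal{L}^{\ }_{\mathrm{dual}}=
\frac{1}{4\pi}\,
\epsilon^{\mu\nu\lambda}\,
K^{\ }_{IJ}\,
b^{I}_{\mu}\,
\partial^{\ }_{\nu}
b^{J}_{\lambda}
+
q^{\ }_{I}\,
b^{I}_{\mu}\,
\bar{j}^{\mu}_{\mathrm{vrt}},
\qquad
I,J=1,\cdots,4.
\end{equation*}
```

When $K$ is invertible, integrating out the gauge fields $b^{I}_{\mu}$ attaches the statistical angle $\Theta=\pi\,q^{\mathrm{t}}K^{-1}q$ to the vortices; a vanishing eigenvalue of $K$ obstructs this inversion, which is how the gapless, non-topological character of the dual theory manifests itself.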
When the U(1)$\times$U(1) local gauge symmetry is broken, i.e., for unscreened quasiparticles that represent vortices in the phase field $\theta$ without the attachment of axial gauge half fluxes, the fractional statistical angle $\Theta$ is non-vanishing everywhere in Fig.~\ref{fig:phase-diagram} with a discontinuous jump at $m\equiv\sqrt{\mu^{2}_{\mathrm{s}}+|\Delta|^{2}}=|\eta|$ and a non-universal dependence on the ratios $\mu^{\ }_{\mathrm{s}}/m$ and $\eta/m$. The fractional charge $Q$ is only non-vanishing when $|\eta|<m$, where it is also non-universal. Comparing the values of $\Theta$ in Fig.~\ref{fig:phase-diagram} calculated from field theory with a numerical evaluation of $\Theta$ for an underlying microscopic (lattice) model is difficult for two reasons. Defects in the phase $\theta$ have a characteristic size of the order of $1/m$ for lattice models, i.e., they bind a fermionic charge through midgap states. The profile of defects in the axial gauge fields $a^{\ }_{5\mu}$ is power-law, i.e., they bind a fermionic charge through threshold continuum states. Thus, the linear extent of any lattice model must be much larger than $1/m$ for any reliable numerical calculation of $\Theta$. On the one hand, if we impose the U(1)$\times$U(1) local gauge invariance at the lattice level, the system sizes accessible to a numerical computation of $\Theta$ are, at best, of the order of $1/m$, i.e., too small for a comparison with field theory. On the other hand, if the U(1)$\times$U(1) local gauge invariance does not hold at the lattice level, say after performing a mean-field approximation for which the accessible system sizes are sufficient to measure $Q$ with the help of a static probe such as the spectral asymmetry, then the values of $Q$ and $\Theta$ are not universal anymore. To put it differently, the values of $Q$ and $\Theta$ measured dynamically depend sensitively on the dynamical rules used.
But these dynamical rules are model-dependent when they are not fixed by imposing the local axial gauge symmetry. The fractional charge $Q$ or the statistical angle $\Theta$ in the phase diagram of Fig.~\ref{fig:phase-diagram} disagrees with the results of Refs.% ~\onlinecite{Chamon08a}, \onlinecite{Seradjeh08}, and \onlinecite{Milovanovic08}. Although the charge assignment in Ref.~\onlinecite{Chamon08a} agrees with that in Fig.~\ref{fig:phase-diagram}, the statistical angle is ascribed the value $\Theta=\mathrm{sgn}\,(\eta)\,\pi/4$ whenever $\eta\neq0$.% ~\cite{footnote: error in Chamon08a} However, the statistical angle $\Theta$ is non-vanishing if and only if the Hopf term is present in the O(3) non-linear-sigma model derived in Ref.~\onlinecite{Chamon08a}, i.e., if and only if $|\eta|>m$, in which case full agreement with the charge and statistical angle assignments of Fig.~\ref{fig:phase-diagram} is recovered. Seradjeh and Franz in Ref.~\onlinecite{Seradjeh08} have computed the fractional charge $Q$ and fractional statistics $\Theta$ of dynamical defects in $\theta$ and $\boldsymbol{a}^{\ }_{5}$ for the field theory% ~(\ref{eq: def Z}) when $\mu^{\ }_{\mathrm{s}}=\eta=0$. Their analysis has been repeated by Milovanovic in Ref.~\onlinecite{Milovanovic08}. They found the assignments $Q=\pm1/2$ and $\Theta=\pm\pi$. Their semion statistics contradicts our result $\Theta=0$ in Fig.~\ref{fig:phase-diagram}. This discrepancy can be traced to the fact that Seradjeh and Franz used a singular chiral U(1) gauge transformation with the Pauli-Villars regularization to derive an effective action different from Eq.~(\ref{eq: final effective action}).
As we show in Appendix% ~\ref{appsec: The pitfall of singular gauge transformations}, the effective action used by Seradjeh and Franz, when suitably generalized to the case $\mu^{\ }_{\mathrm{s}}\neq\eta=0$, fails to reproduce the fractional charge% ~(\ref{eq: derivation of Q}) of quasiparticles in the presence of a flux in the $\boldsymbol{a}_5$ gauge field. Explicitly, it follows from Eq.\ 9 of Ref.~\onlinecite{Seradjeh08} that the fractional charge in the case when the mass vortex is accompanied by an axial half-flux, enforcing the screening condition $a^{\ }_{5\kappa} - \frac{1}{2}\partial^{\ }_{\kappa}\theta=0$, is $Q=0$! However, the fractional charge $Q=1/2$ [see Eq.~(\ref{eq: derivation of Q})] of screened quasiparticles is a result established from direct (static) numerical computation of $Q$ on a suitable lattice regularization of the field theory~(\ref{eq: def Z}). The charge-density ($\mu^{\ }_{\mathrm{s}}$), bond-density ($\Delta$), and integer quantum Hall ($\eta$) instabilities are the only instabilities compatible with electron-number conservation and SU(2) spin-rotation symmetry (these are, naturally, also the only four possible instabilities for the spinless case). However, there can also be superconducting instabilities or, if the electron spin is accounted for, magnetic instabilities. We have performed a systematic classification of all instabilities for the 16-dimensional free Dirac Hamiltonian induced by local order parameters that respect the Bogoliubov-de-Gennes particle-hole symmetry. We have found that the order parameter for the integer quantum Hall effect (Haldane mass $\eta$) is unique, for it competes with all other instabilities. We have also found that the largest number of coexisting order parameters is 5 and enumerated all the corresponding 5-tuplets of masses.
Each of these 5-tuplets can be thought of as a generalization of the 3-tuplet $(\mu^{\ }_{\mathrm{s}},\mathrm{Re}\,\Delta,\mathrm{Im}\,\Delta)$ that supports quasiparticles with fractional quantum numbers. These 5-tuplets provide a rich playground for Landau-forbidden continuous phase transitions. Any U(1) order parameter in a 5-tuplet can be assigned a conserved charge and supports topological defects in the form of vortices. A pair of U(1) order parameters in a 5-tuplet is said to be dual if the vortices of one order parameter bind the charge of the other order parameter and vice versa. A continuous phase transition can then connect directly the two dual U(1) ordered phases through a confining-deconfining transition of their vortices. \section*{ Acknowledgments } This work is supported in part by the DOE Grant DE-FG02-06ER46316 (C-Y. H. and C. C.). C.~M.\ acknowledges the kind hospitality of the Isaac Newton Institute, Cambridge and RIKEN. We thank the Condensed Matter Theory Visitor's Program at Boston University for support. S.~R. thanks the Center for Condensed Matter Theory at University of California, Berkeley for its support. S.~R. thanks P. Ghaemi, D.-H.\ Lee, and A. Vishwanath for useful discussions.
\documentclass{article} \usepackage{multicol} \usepackage[utf8]{inputenc} \usepackage{graphicx} \usepackage{geometry} \PassOptionsToPackage{hyphens}{url}\usepackage{hyperref} \geometry{ a4paper, left=20mm, right=20mm } \DeclareGraphicsExtensions{.png,.pdf}
\title{\vspace{-4cm}Thorlabs SM1 tube construction} \date{}
\begin{document}
\maketitle \vspace{-1cm}
Last updated: \today

URL: \url{https://www.thorlabs.hk/newgrouppage9.cfm?objectgroup_id=3307}
\begin{multicols}{2}
\section{Brief description}
They all seem to be kind of similar, so they're all in this document. The SM1 tubes help to hold 1'' diameter optical components. These optical components can be locked in position by retaining rings, which are in turn inserted into the SM1 tube by a specialised spanner wrench. SM1 tubes have 40 threads per inch, so expect 0.635\,mm per turn. Human resolution is perhaps an eighth of a turn, so that's about 0.08\,mm resolution for collimation. The travelling tubes are included as they're pretty similar. As of writing, the lab mostly has half inch long SM1 tubes (SM1L05) which are awkward to use with the 25mm efl lenses. SM1V10 is also awkward to use without that many SM1L10 tubes.
\section{Technical Specifications}
SM1 Internal Threading
\begin{tabular}{|l|l|} Min. Major diameter & 1.0350''\\ Min. Pitch diameter & 1.0188''\\ Max. Pitch diameter & 1.0234'' \\ Min. Minor diameter & 1.008''\\ Max. Minor diameter & 1.014'' \end{tabular}%
\vspace{5mm}
\noindent SM1V10
\begin{tabular}{|l|l|} External thread length & 0.81'' (20.6mm)\\ Maximum adjustment range & 1.00'' \end{tabular}%
\end{multicols}
\begin{center} \includegraphics[width=15cm]{assets/sm1rr} \\ Retaining ring \end{center}
\begin{center} \includegraphics[width=15cm]{assets/sm1v10}\\ Travelling tube; awkward to use with SM1L05 \end{center}
\end{document}
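A worked line for the pitch arithmetic above (an addition for clarity, assuming the stated 40 threads per inch):

```latex
% One full turn advances the tube by one thread pitch; an eighth of a
% turn is roughly the finest adjustment a hand can make.
\frac{25.4\ \mathrm{mm}}{40} = 0.635\ \mathrm{mm\ per\ turn},
\qquad
\frac{0.635\ \mathrm{mm}}{8} \approx 0.079\ \mathrm{mm}.
```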
Shades of Grey: PROJECT SOANE IS DEAD, LONG LIVE P.S. About a month ago something very strange happened. Nobody seems to know how, but most of the data on Project Soane was wiped clean. I didn't realise this until last week when two project members alerted me. Over the past few days, with help and support from Autodesk, I have restored the site, taking the opportunity to reorganise it with the benefit of hindsight and a much deeper understanding of the building, the architect, the historical period. There is a folder called "Current Sheets" which will be updated from time to time. This contains images exported directly from the sheets in the main model file, plus a few other images that help to give newcomers a snapshot of the current state of the model and some insight into what has been achieved. The Reference Material has been completely reorganised in a folder of that name. I think I have re-posted all the original hi-res material sourced by the project founders from the Soane Museum in London. Let me know if you notice any omissions and we will try to sort it out. There is also a lot of additional material, mostly slightly lower resolution, much of it downloaded from the Soane Museum online archive, which is THE MOST AMAZING RESOURCE. SOANE MUSEUM ONLINE ARCHIVE Please be aware that copyright for most of the images still resides with their original owners. They are provided here under the original agreement with the Soane Museum for private study and research purposes. Please respect this at all times. I have created a new folder called "Stories" where people can share work they have done which extends the core modelling project in some way or other.
I have started this off with a subfolder called "Timeline Images" which are exported from various study files that I have created over the past 2 years or so in order to better understand the sequential history of this very complex project. I have also made my first foray into the Wiki Pages section, which was originally set up by the project founders, adding a note to the welcome page, for example. As you may know, the model is now hosted on C4R, so you no longer have to download and re-upload in order to contribute. Everything is 2017 version by the way, and please ensure that you have the latest updates installed before opening anything. If you want to contribute, please leave a message on the site, and we will come to an agreement about allocation of responsibilities. I made an Excel file to record activity which will be restored shortly, hopefully in a way that enables direct editing of the worksheet. But my major contribution to the Wiki Pages is a new page based on some of the Timeline images. This gives an overview of the architectural history of the Bank from its foundation to Soane's retirement. Future pages will go into a bit more detail and extend the period into the early twentieth century when most of Soane's work was demolished to make way for a design by Sir Herbert Baker. TIMELINE WIKI So every cloud has a silver lining, and this little cloud catastrophe has given me renewed inspiration and energy to take Project Soane to the next level. Why don't you get involved? Many people have contributed to the project, not just by modelling but in all kinds of ways; the original founders deserve a very special mention, for example. There were many submissions for the rendering stage of the project and I would like to integrate this work more fully with the A360 site where collaboration is ongoing. So there are many ways to participate. There is at least one student currently developing a project based on the site, and inspired by the spirit of Project Soane.
I would really like to incorporate initiatives of this kind into the Stories section. So how would you like to contribute? Lots of possibilities. Think outside the box. Give me a shout.
The Delicate Balance of Rewards and Consequences

By Royce Flippin

On a typical day, your child receives 20 criticisms or corrections for every one accolade. So your praise and rewards can go a long way toward inspiring good behavior — if paired with fair, consistent consequences. Here's how to strike that balance. Use these strategies — like rewards for good behaviors and consistent consequences for bad behaviors — to stop defiance or negative impulsivity.

Spend Unstructured Time Together
Schedule 15 minutes each day with your child, to do whatever he wants. Playing together helps repair the parent-child bond and lays the groundwork for positive reinforcement in the future.

Make Praise Immediate and Often
Positive reinforcement is the best behavioral tool, and it is especially powerful when it comes from a parent. Look for opportunities throughout the day to praise your child. Keep praise immediate and enthusiastic, and specify the exact behavior you're commending.

Reinforce Praise with Tokens
This works especially well with young children. Tokens can be anything tangible and easily recorded — stars on a chart, coins in a jar — and should be awarded promptly for good behavior. Once a certain number of tokens are amassed, the child earns a predetermined reward, such as a video game, a sleepover at a friend's house, or a trip to the movies.

Don't Ask, Tell
Don't start your requests with "Would you mind?" or finish them with "O.K.?" Instead, make directives clear and succinct: "I notice your coat is on the floor. I'd like you to pick it up."

Insist on Eye Contact When You Talk to Your Child
That way, you prevent your kid from ignoring you, while reinforcing what you're trying to communicate. "This can be done with humor," says child psychologist Douglas Riley. "I use the phrase, 'Give me your eyeballs.'"

Let Your Children Know (Politely) That They're Not Your Equals
"I urge parents to make it clear that they own everything in their home," says Riley. "Kids are often outraged to discover this. But they need to know that you're in charge, and that access to all the nice things in life, like the phone, TV, and computer, has to be earned by showing positive behavior and a good attitude."

Set Up Consequences Ahead of Time
These consequences should involve taking away privileges, such as access to the TV, playtime with friends, or another favorite activity. Particularly bad conduct, such as hitting or other physical violence, should result in an extended time-out (30 minutes for children over 8, an hour for adolescents), in an isolated room, where the child is instructed to think about his or her behavior.

Stick to the Consequences — No Matter What
"If your child hits a sibling five times and gets punished for it only three times, he knows he's got a 40 percent chance of getting away with that behavior," says psychiatrist Larry Silver, M.D. "A parent has to be 100 percent consistent in addressing bad behavior. Otherwise, the behavior may persist or even get worse."

Tags: following directions, June/July 2005 Issue of ADDitude Magazine
The oil is fat and not less suitable for burning than olive-oil, but it gives forth a disagreeable smell. Jars of raisins were allotted by the thousands to the Nile god temple by Ramses III, as were dried dates. There are other lilies too, in flower resembling roses, which also grow in the river, and from them the fruit is produced in a separate vessel springing from the root by the side of the plant itself, and very nearly resembles a wasp's comb: in this there grow edible seeds in great numbers of the size of an olive-stone, and they are eaten either fresh or dried. Some priests associated pigs with Set, an evil god, which made most people unwilling to eat pork. Egyptians ate calves, oxen, and poultry such as duck, goose, stork, and pigeon. A small number of fruits and vegetables, such as garlic, onions, carobs, dates, and nuts, kept for quite a while; some could be preserved by drying, a technique known to the ancient Egyptians, although how often it was applied to perishable foodstuffs is unknown. But most had to be consumed when they were ripe or processed into a product that would keep. For it alone is planted with large, perfect, and richly productive olive-trees, and the oil is good when carefully prepared; those who are neglectful may, indeed, obtain oil in abundance, but it has a bad smell. ) of the House of the Million [Years belonging to the king of Upper and Lower Egypt ...... From the little that we know, it appears that Egyptian ointments were made with nut oil, but it is probable that animal as well as vegetable grease was employed for this purpose too.
Students will write a letter to someone in a career to which they aspire. The person can be famous or unknown, but it must be someone who is currently doing a job in which the student is interested. Students will inquire about skills, education, and activities they can work on now to help them be successful in that career. The materials students received for this lesson are attached to this post. They can also be found on the documents page here.
Q: Proving '$A$ open in $V\subseteq M$ (metric space) iff $A=C\cap V$ (certain open $C$ in $M$)'

I want to prove the following: Let $(M,d)$ be a metric space. Let $A\subseteq V\subseteq M$.

1) $A$ is open in $V \Leftrightarrow A = C\cap V$ (for a certain open $C$ in $M$)
2) $A$ is closed in $V \Leftrightarrow A = C\cap V$ (for a certain closed $C$ in $M$)

Questions:

* Could someone check the proof?
* 'for a certain open $C$ in $\color{Blue}{M}$.' Would this proof also work for a more specific choice of $C$? Like for a certain open $C$ in $\color{blue}{V}$. I don't really see the added value of choosing $M$ over $V$.
* Could someone give me some pointers on how to prove 2, $\Rightarrow$?

Proof 1) $\Leftarrow$: Choose $a\in A$.
$$\begin{array}{rl} & a \in A = C\cap V\\ \Rightarrow & a \in C\\ \Rightarrow & (\exists r > 0)(B_M(a,r)\subseteq C)\\ \Rightarrow & (\exists r > 0)(B_M(a,r)\cap V \subseteq C\cap V)\\ \Rightarrow & (\exists r> 0) (B_V(a,r)\subseteq A) \end{array}$$
(using $B_V(a,r) = B_M(a,r)\cap V$ and $A = C\cap V$ in the last step).

$\Rightarrow$: Choose $a\in A$.
$$\begin{array}{rl} \Rightarrow & (\exists r_a >0)(B_V(a,r_a) \subseteq A) \end{array}$$
Consider all $a\in A$; then:
$$\begin{array}{rl} & A = \bigcup_{a\in A} B_V(a,r_a)\\ \Rightarrow & A = \bigcup_{a\in A} \left[ V\cap B_M(a,r_a)\right]\\ \Rightarrow & A = V\cap\left[ \bigcup_{a\in A} B_M(a,r_a)\right] \end{array}$$
Let
$$C = \bigcup_{a\in A} B_M(a,r_a),$$
which is open as a union of open sets.

Proof 2) $\Leftarrow$:
$$\begin{array}{rl} V\setminus A &= V\setminus(C\cap V)\\ & = (V\setminus C)\cup (V\setminus V)\\ & = V\setminus C \end{array}$$
Since $C$ is closed in $M$, the set $M\setminus C$ is open in $M$, so by part 1 $V\setminus C = (M\setminus C)\cap V$ is open in $V$; hence $V\setminus A$ is open in $V$, and $A$ is closed in $V$.

$\Rightarrow$: How?

A: So far everything seems right. For the last proof, let $A$ be closed in $V$. Then $V\setminus A$ is open in $V$. By the first statement (which you already proved) there exists an open $B$ in $M$ such that $V\setminus A = B\cap V$.
Now $A = V\setminus(B\cap V) = (M\backslash B)\cap V$. And we know that $C = M\backslash B$ is closed in $M$ because $B$ is open in $M$. Sidenote: For non-metric spaces, we actually define the induced topology (the most natural and conventional one) on a subset of a topological space like this.
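As a concrete, non-rigorous illustration of part 1 (a hypothetical example of my own, not from the question): take $M=\mathbb{R}$ with $d(x,y)=|x-y|$, $V=[0,1]$, and $A=(1/2,1]$. Then $A=(1/2,2)\cap V$ is open in $V$ but not open in $M$, and one can sanity-check the ball conditions on a sampled grid:

```python
# Illustration (not a proof): M = R with d(x, y) = |x - y|, V = [0, 1],
# A = (1/2, 1].  A is open in V because A = (1/2, 2) ∩ V, even though
# A is not open in M (no ball around the point 1 stays inside A).
V = [i / 1000 for i in range(1001)]   # sample points of V = [0, 1]
A = [v for v in V if v > 0.5]         # sample points of A = (1/2, 1]

# Openness in V: for each a in A, the radius r = a - 1/2 > 0 gives
# B_V(a, r) ⊆ A, because |v - a| < r forces v > a - r = 1/2.
for a in A:
    r = a - 0.5
    ball_V = [v for v in V if abs(v - a) < r]   # sampled B_V(a, r) = B_M(a, r) ∩ V
    assert all(v > 0.5 for v in ball_V)

# Failure of openness in M: every ball B_M(1, r) contains points outside A.
for r in (0.5, 0.1, 0.01):
    outside = 1 + r / 2               # lies in B_M(1, r) but not in A = (1/2, 1]
    assert abs(outside - 1) < r and not (0.5 < outside <= 1)
```

This only checks sampled points, of course; the actual proof is the set-theoretic argument above.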
Puerto Princesa, Palawan vs Caxias Do Sul

The distance between Puerto Princesa, Palawan, Philippines and Caxias Do Sul, Brazil is approximately 17,615 km (10,946 mi). To reach Puerto Princesa, Palawan over this distance, one would set out from Caxias Do Sul on a bearing of 152.1° (SSE) and follow the great-circle arc, approaching Puerto Princesa, Palawan on a bearing of 24.5° (NNE). Puerto Princesa, Palawan has a tropical wet and dry/savanna climate with dry winters (Aw), whereas Caxias Do Sul has a subtropical highland climate with no dry season (Cfb). Puerto Princesa, Palawan is in or near the tropical moist forest biome, whereas Caxias Do Sul is in or near the warm temperate moist forest biome. The annual average temperature is 11.1 °C (20 °F) warmer. Average monthly temperatures vary by 6.6 °C (11.9 °F) less in Puerto Princesa, Palawan. The continentality subtype is extremely hyperoceanic, as opposed to barely hyperoceanic. Total annual precipitation averages 308.1 mm (12.1 in) less, which is equivalent to 308.1 l/m² (7.56 US gal/ft²) or 3,081,000 l/ha (329,379 US gal/ac) less: about 5/6 as much. The altitude of the sun at midday is overall 13.5° higher in Puerto Princesa, Palawan than in Caxias Do Sul. Relative humidity levels are 9.5% lower. The mean dew point temperature is 8.6 °C (15.5 °F) higher.

Climate Comparison Table

The table shows values for Puerto Princesa, Palawan relative to Caxias Do Sul.
(Columns run January to December, followed by the annual value.)

Average Max Temperature °C (°F): +4 (+8) | +6 (+10) | +7 (+13) | +11 (+20) | +14 (+25) | +15 (+27) | +15 (+26) | +14 (+25) | +12 (+22) | +9 (+17) | +7 (+13) | +5 (+10) | +10 (+18)
Average Temperature °C (°F): +6 (+12) | +6 (+11) | +9 (+16) | +13 (+23) | +15 (+27) | +15 (+27) | +15 (+26) | +14 (+26) | +13 (+23) | +11 (+20) | +9 (+16) | +7 (+13) | +11 (+20)
Average Min Temperature °C (°F): +4 (+8) | +4 (+7) | +5 (+10) | +9 (+17) | +12 (+22) | +14 (+26) | +14 (+26) | +13 (+23) | +12 (+21) | +10 (+18) | +8 (+15) | +7 (+12) | +9 (+17)
Average Precipitation mm (in): -108 (-4) | -124 (-5) | -145 (-6) | -91 (-4) | +27 (+1) | +18 (+1) | +27 (+1) | +23 (+1) | +17 (+1) | +34 (+1) | +64 (+3) | -51 (-2) | -308 (-12)
Average Daylight Hours & Minutes/Day: -2h 08' | -1h 20' | -0h 14' | +0h 55' | +1h 54' | +2h 23' | +2h 10' | +1h 20' | +0h 12' | -0h 57' | -1h 55' | -2h 23' | 0h 00'
Sun altitude at solar noon on the 21st day (°): -20.1 | -1.6 | +19.9 | +39 | +39 | +38.9 | +38.8 | +38.7 | +20.6 | -2 | -20.1 | -27.5 | +13.5
Relative Humidity (%): -10 | -15 | -20 | -18 | -11 | -9 | -6 | -7 | -6 | -5 | -2 | -5 | -9.5
Average Dew Point Temperature °C (°F): +4 (+7) | +2.7 (+5) | +3.9 (+7) | +8.2 (+15) | +12.2 (+22) | +12.5 (+23) | +12.8 (+23) | +12.3 (+22) | +11.1 (+20) | +9.6 (+17) | +8.2 (+15) | +6 (+11) | +8.6 (+15)
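The quoted great-circle distance and initial bearing can be reproduced with a standard haversine and forward-bearing calculation. This is a sketch: the city coordinates below are assumed approximate values (roughly 9.74° N, 118.74° E for Puerto Princesa and 29.17° S, 51.18° W for Caxias do Sul), and a spherical Earth of radius 6371 km is used, so the results differ slightly from the page's figures.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; spherical-Earth assumption

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    y = math.sin(dlam) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlam)
    return math.degrees(math.atan2(y, x)) % 360

# Assumed approximate city coordinates (lat, lon) -- not taken from the page.
caxias_do_sul = (-29.17, -51.18)
puerto_princesa = (9.74, 118.74)

dist_km = haversine_km(*caxias_do_sul, *puerto_princesa)         # roughly 17,600 km
bearing = initial_bearing_deg(*caxias_do_sul, *puerto_princesa)  # roughly 152 degrees (SSE)
```

The small discrepancy against the quoted 17,615 km comes from the assumed coordinates and the spherical (rather than ellipsoidal) Earth model.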
\section{Introduction} \textit{Ab initio} quantum Monte Carlo (QMC) techniques~\cite{2001FOU}, such as variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) have been used to compute accurate many-body wavefunctions (WF) for atomic, molecular, simple crystal, and even complex electronic systems~{\cite{2020NEE, 2020KEN, 2020NAK}}. Benchmark results are of great interest for QMC methods because these methods have the capability to match the accuracy of quantum chemical methods at a significantly lower computational cost~\cite{2001FOU}. A WF-based quantum chemical method, such as the coupled cluster with triple excitations (CCSD(T)), scales as $O(N^7)$, where \textit{N} is the number of particles~\cite{2006FEL,2021GYE}. In contrast, DMC at most scales as $O(N^4)$ and is closer to the computational scaling of $O(N^3)$ (for $N < 1000 \sim 2000$)~\cite{2020NEED} of the more traditional mean-field methods like DFT. \vspace{2mm} The accuracy and efficiency of QMC are similar to those of quantum chemical methods. Hence, benchmark tests are of great importance. A widely used set of benchmark data is the binding energies of the G2 set of molecules~\cite{1995CUR}. It contains 55 small molecules composed of elements from the first, second, and third rows of the periodic table. Accurate experimental values for the binding energies of the G2 set of molecules are available, which makes it very attractive for benchmarking. Another way to benchmark is to compare the computed total energies with the estimated exact energies. However, to evaluate the error cancellation, it is important to benchmark the binding energies. \vspace{2mm} G2 set benchmarks have been used to test several state-of-the-art \textit{ab initio} computational methods. Chemical accuracy (deviation of 1 kcal/mol from experimental estimates of binding energies) has often been used as the target accuracy. 
Benchmarking DFT methods against molecular atomization energies in previous studies resulted in a relatively large mean absolute deviation (MAD) of $\sim$40 kcal/mol for the local density approximation (LDA) and $\sim$2.5 kcal/mol for the hybrid B3LYP functional~\cite{1995BAU}. In another study, B3LYP on an extended G2 set gave an MAD of 3.11 kcal/mol~\cite{1997CUR}, with a maximum deviation of $\sim$20 kcal/mol. This large deviation suggests that DFT methods are not systematically accurate and that their accuracy may vary significantly across systems. Efforts have been made to improve the DFT estimates by determining correction factors from fitting to experimental data~\cite{2013GRI}; however, these reduce the predictive ability of the theory. Coupled cluster theory based methods such as CCSD(T) have been widely regarded as the ``gold standard'' for accuracy in quantum chemistry. Several CCSD(T) studies have shown that the method can achieve sub-chemical accuracy if large enough basis sets and multiple corrections are used~\cite{2001FEL,2006FEL,2007FEL,2008FEL, 2012HAU}. \vspace{2mm} QMC methods lie on the sweet spot of accuracy and computational cost. They are far more accurate than mean-field methods (no inherent approximation like the XC functional) and also far less computationally expensive than the WF-based quantum chemical methods. Calculation of the electronic contribution to reaction enthalpies for several small molecules conclusively demonstrated that all-electron fixed-node (FN) DMC is more accurate than CCSD(T)/cc-pVDZ and almost as accurate as CCSD(T)/cc-pVTZ~\cite{2001MAN}. For instance, Nemec \textit{et al.}~\cite{2010NEM} used all-electron FN DMC on a Slater determinant (SD) WF to obtain the binding energies of the G2 set to an MAD of 3.2 kcal/mol. Similar accuracy in binding energies was also obtained previously in DMC pseudopotential calculations~\cite{2002GRO}.
However, these simple DMC approaches (i.e., Jastrow + SD) have not been able to achieve chemical or sub-chemical accuracy, due to the residual FN errors. \vspace{2mm} In this paper, we present FN DMC benchmark results using the so-called AGPs (antisymmetrized geminal power with singlet correlation) \textit{ansatz} along with a Jastrow factor (JF). The more flexible AGPs \textit{ansatz} leads to improved nodal surfaces, which are further relaxed at the VMC level. The main outcome of this work is that the combination of a more flexible \textit{ansatz} (AGPs) and nodal surface optimization leads to a much better-quality many-body WF, which in turn leads to better DMC energies. This is very important because AGPs (even though multiconfigurational in nature~\cite{2020NAK}) is in practice as efficient as a single-determinant \textit{ansatz} and thus can be extended to much larger systems, even within the computationally demanding QMC methods. QMC also provides the added advantage of the near-ideal parallel scaling of QMC algorithms~\cite{2020NAK}. \section{Computational details} The TurboRVB~\cite{2020NAK} QMC package was used for all calculations. It employs resonating valence bond (RVB)~\cite{1973AND} WFs and allows one to choose a more flexible \textit{ansatz} than the SD \textit{ansatz}, which includes correlation effects beyond the standard SD. \subsection{Wavefunctions} The choice of the WF \textit{ansatz} plays an important role in determining the accuracy and the computational cost of QMC calculations. A many-body WF \textit{ansatz} can be written as the following product: \begin{equation} \Psi = \Phi_{{\rm{AS}}} \times \exp J, \end{equation} where $\exp J$ is the JF, and $\Phi_{{\rm{AS}}}$ is the antisymmetric part that satisfies the antisymmetry condition for fermions. Generally, a single SD is used for the antisymmetric part in QMC calculations.
An SD is simply an antisymmetrized product of single-particle electron orbitals and does not include any electron correlation by itself. \subsubsection{Jastrow factor} The Jastrow factor is a multiplier term which improves the quality of a many-body WF by providing a significant portion ($\approx 70\%$) of the correlation energy and is necessary for fulfilling Kato's cusp conditions~\cite{1957KAT}. The JF used here comprises three terms: a one-body, a two-body, and a three/four-body term ($J = J_1 + J_2 + J_{3/4}$). The one-body term is necessary to satisfy the electron-ion cusp condition. A separate one-body term is used for each element present in the molecule. It consists of a homogeneous part, \begin{equation} J_1^h(\textbf{r}_1, ..., \textbf{r}_N) = \sum_{i=1}^{N} \sum_{a=1}^{N_{at}} \left(-(2Z_a)^{3/4} u_a\left((2Z_a)^{1/4}|\textbf{r}_i - \textbf{R}_a|\right)\right), \label{hom} \end{equation} and an inhomogeneous part, \begin{equation} J_1^{inh}(\textbf{r}_1, ..., \textbf{r}_N) = \sum_{i=1}^{N} \sum_{a=1}^{N_{at}} \left( \sum_{l} M_{a, l} \chi_{a, l}(\textbf{r}_i) \right), \label{inhom} \end{equation} where $\textbf{r}_i$ denotes the electron coordinates, $\textbf{R}_a$ represents the atomic positions, $Z_a$ denotes the corresponding atomic numbers, ${N_{at}}$ is the number of nuclei, $\chi_{a, l}$ represents a Gaussian-type atomic orbital $l$ centered on atom $a$, and $M_{a, l}$ denotes the corresponding variational parameters. The function $u_a$ is defined as: \begin{equation} u_{a}(r) = \frac{1}{2b_{{\rm{e}}a}} \left( 1 - e^{-rb_{{\rm{e}}a}}\right), \label{u} \end{equation} where $b_{{\rm{e}}a}$ is a variational parameter that depends on each nucleus $a$. The one-body terms were carefully optimized at the DFT level before final optimization at the VMC level. The two-body term is necessary to satisfy the electron-electron cusp condition and is optimized at the VMC level.
\begin{equation} J_2(\textbf{r}_1, ..., \textbf{r}_N) = \sum_{i<j} v (|\textbf{r}_i - \textbf{r}_j|). \label{j2} \end{equation} The function $v$ has the following form: \begin{equation} v(r_{i, j}) = \frac{r_{i, j}}{2} (1 + b_{{\rm{ee}}}r_{i, j})^{-1}, \label{j2_d} \end{equation} where $r_{i, j} = |\textbf{r}_i - \textbf{r}_j|$ and $b_{{\rm{ee}}}$ is a single variational parameter. The three/four-body Jastrow term is defined as: \begin{equation} J_{3/4}(\textbf{r}_1, ..., \textbf{r}_N) = \sum_{i<j} \left( \sum_{a, l} \sum_{b, m} M_{\{a, l\}, \{b, m\}} \chi_{a, l}(\textbf{r}_i) \chi_{b, m}(\textbf{r}_j) \right), \label{j3/4} \end{equation} where $M_{\{a, l\}, \{b, m\}}$ represents the variational parameters, and $l$ and $m$ indicate orbitals centered on atomic sites $a$ and $b$, respectively. In the current work, the four-body Jastrow term was not used. \subsubsection{Antisymmetrized geminal power (AGP)} One way to improve the description of electron correlations and move beyond the standard SD is to explicitly include pairwise correlation among electrons. This fairly general and flexible \textit{ansatz} is called the Pfaffian WF~\cite{2006BAJ}. The Pfaffian WF is constructed from pairing functions (known as geminals). When only the singlet electron pairing terms are considered, we get the AGPs \textit{ansatz}. A generic pairing function for the AGPs, $f(\textbf{r}_i, \textbf{r}_j)$, can be written as: \begin{equation} f(\textbf{r}_i, \textbf{r}_j) = \frac{1}{\sqrt{2}} (\ket{\uparrow \downarrow} - \ket{\downarrow \uparrow}) f_s(\textbf{r}_i, \textbf{r}_j).
\label{agps_term} \end{equation} For the simpler unpolarized case, where the number of electrons $N$ is even and $N_\uparrow = N_\downarrow$, all possible combinations of singlet pairs can be written in the form of a matrix: \begin{equation} F = \begin{psmallmatrix} f(\textbf{r}_{1}^{\uparrow}, \textbf{r}_{1}^{\downarrow}) & f(\textbf{r}_{1}^{\uparrow}, \textbf{r}_{2}^{\downarrow}) & \dots & f(\textbf{r}_{1}^{\uparrow}, \textbf{r}_{N}^{\downarrow}) \\ f(\textbf{r}_{2}^{\uparrow}, \textbf{r}_{1}^{\downarrow}) & f(\textbf{r}_{2}^{\uparrow}, \textbf{r}_{2}^{\downarrow}) & \dots & f(\textbf{r}_{2}^{\uparrow}, \textbf{r}_{N}^{\downarrow}) \\ \vdots & \vdots & \ddots & \vdots \\ f(\textbf{r}_{N}^{\uparrow}, \textbf{r}_{1}^{\downarrow}) & f(\textbf{r}_{N}^{\uparrow}, \textbf{r}_{2}^{\downarrow}) & \dots & f(\textbf{r}_{N}^{\uparrow}, \textbf{r}_{N}^{\downarrow}) \end{psmallmatrix}. \label{agps_matrix} \end{equation} The determinant of the matrix \textit{F} gives the AGPs WF: \begin{equation} \Phi_{{\rm{AGPs}}} = \det F \label{agps_eq} \end{equation} In the case of a polarized system, additional unpaired molecular orbitals are added to the matrix \textit{F}. The antisymmetric AGPs WF can then be obtained by computing the determinant of \textit{F}. The singlet electron pairing terms are represented by: \begin{equation} f_s(\textbf{r}_i, \textbf{r}_j) = \sum_{a, l} \sum_{b, m} \lambda_{\{a,l\}, \{b,m\}} \phi_{a,l}(\textbf{r}_{i}) \phi_{b,m}(\textbf{r}_{j}), \label{geminal} \end{equation} where $\phi_a$ and $\phi_b$ represent atomic orbitals centered on atoms $a$ and $b$, respectively, and the indices $l$ and $m$ indicate different orbitals centered on atoms $a$ and $b$, respectively. The elements of the matrix $\lambda$ are the coefficients or the variational parameters of the WF.
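A side note, not spelled out in the original text (a standard sketch, assuming the coefficient matrix $\lambda$ is symmetric): diagonalizing $\lambda$ makes the multideterminant character of the geminal above explicit.

```latex
% Diagonalize \lambda = U \bar{\lambda} U^{T} and define natural orbitals
% \bar{\phi}_k = \sum_{a,l} U_{\{a,l\},k}\,\phi_{a,l}. The geminal then reads
f_s(\textbf{r}_i, \textbf{r}_j)
  = \sum_{k} \bar{\lambda}_k\,\bar{\phi}_k(\textbf{r}_{i})\,\bar{\phi}_k(\textbf{r}_{j}),
% so that expanding \det F produces a linear combination of Slater
% determinants, one per choice of N/2 natural orbitals (each weighted by the
% product of the chosen \bar{\lambda}_k), while the evaluation cost remains
% that of a single determinant.
```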
An important advantage of the AGPs \textit{ansatz} is that it is equivalent to a linear combination of SDs (or multideterminants) while maintaining the computational cost of a single determinant \textit{ansatz}~\cite{2020NAK}. Hence, this flexible \textit{ansatz} (greater variational freedom) could be an effective way to improve the quality of the many-body WF. \subsection{Computational workflow} The equilibrium geometries of the G2 set molecules were taken from previous benchmark studies~\cite{1997CUR,2008FEL,2005OEI} (see Table S2 in the supplementary information). The \textit{ansatz} was expanded using the triple-zeta atomic basis sets obtained from the Basis Set Exchange library \cite{2019PRI}. Exponents larger than $8Z^2$ (where \textit{Z} is the atomic number) were removed from the basis set to avoid numerical instabilities. The large-exponent orbitals cut from the basis set are implicitly included by utilizing the one-body Jastrow term~\cite{2018MAZ,2019NAK}. The basis sets used for the determinant and Jastrow expansion are listed in Table~\ref{basis_set_table}. The same basis sets were used for the VMC and LRDMC calculations. \begin{table}[htbp] \centering \caption{Basis set orbitals used for the determinant and Jastrow expansion} \label{basis_set_table} \begin{tabular}{@{}lcc@{}} \toprule Element & Det. basis & Jas. basis \\ \midrule H & 4s, 2p, 1d & 4s, 2p \\ Li & 8s, 5p, 2d, 1f & 8s, 5p, 2d \\ C & 8s, 5p, 2d, 1f & 8s, 5p, 2d \\ N & 8s, 5p, 2d, 1f & 8s, 5p, 2d \\ O & 7s, 5p, 2d, 1f & 7s, 5p, 2d \\ F & 7s, 5p, 2d, 1f & 7s, 5p, 2d \\ Na & 11s, 10p, 2d, 1f & 11s, 10p, 2d \\ Si & 11s, 9p, 2d, 1f & 11s, 9p, 2d \\ P & 11s, 9p, 2d, 1f & 11s, 9p, 2d \\ S & 11s, 9p, 2d, 1f & 11s, 9p, 2d \\ Cl & 11s, 9p, 2d, 1f & 11s, 9p, 2d \\ \bottomrule \end{tabular} \end{table} The trial WF for the Jastrow single determinant \textit{ansatz} was obtained from DFT calculations using the TurboRVB DFT module.
To improve the efficiency of the DFT calculations, we utilized the double-grid DFT algorithm, which uses a finer DFT mesh in the vicinity of the nuclei \cite{2019NAK}. For the JDFT (JF + SD with DFT orbitals) \textit{ansatz}, the JF was optimized at the VMC level, which was followed by the LRDMC projection. In LRDMC, instead of the conventional time discretization~\cite{1993UMR} of the continuous Hamiltonian, the original Hamiltonian is regularized on a lattice with step size $a$, such that ${\hat{H}}^a \to \hat{H}$ for $a \to 0${~\cite{1994TEN, 1998BUO, 2000SOR}}. Further details on the VMC and LRDMC algorithms can be found in Refs. {\citenum{2005CAS, 2006CAS, 2010CAS, 2017BEC, 2020NAK2}}. The target error bar for the DMC and VMC energies was taken as $\approx$0.3 mHa. For the VMC optimization, we used the linear~\cite{2005SOR,2007TOU,2007UMR} and stochastic reconfiguration~\cite{1998SOR,2007SOR} methods. The JDFT WF \textit{ansatz} was then converted into JsAGPs. No information loss occurs during this conversion because we are rewriting the SD \textit{ansatz} into a more flexible AGPs \textit{ansatz}, and maximum overlap between the two WFs is ensured. The JsAGPs \textit{ansatz} was then optimized at the VMC level, including both JF and nodal surface optimization. This was followed by the LRDMC projection and extrapolation to zero lattice space. The majority of the calculations were performed on the supercomputer Fugaku using 2304 CPU cores distributed across 48 nodes. To improve the efficiency of the calculations, we used TurboGenius, a Python-based wrapper for TurboRVB, which is useful for performing high-throughput calculations \cite{2020NAK}. \section{Results} \subsection{JsAGPs for N$_2$ molecule} To validate the methodology and verify whether the basis sets used were good enough to approach chemical accuracy, the energies of the N$_2$ molecule and N atom were compared with previous benchmark tests and experimental values (see Table~{\ref{n_energies_table}}).
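The extrapolation to zero lattice space mentioned above can be sketched as follows. Assuming a quadratic dependence $E(a) = E_0 + c\,a^2$ of the lattice-regularized energy on the step size (a common choice for LRDMC, stated here as an assumption since the fit form used in this work is not given), the $a \to 0$ limit is the intercept of a linear fit in $a^2$:

```python
import numpy as np

def extrapolate_lrdmc(a_values, energies):
    # linear fit of E against a^2, i.e. E(a) = E0 + c * a^2,
    # returning the a -> 0 intercept E0
    slope, intercept = np.polyfit(np.asarray(a_values) ** 2, energies, 1)
    return intercept

# synthetic lattice energies (Ha) with E0 = -109.5427 and c = 0.05,
# purely for illustration
a = np.array([0.125, 0.25, 0.5])
e = -109.5427 + 0.05 * a ** 2
e0 = extrapolate_lrdmc(a, e)  # -> approximately -109.5427
```

In practice each $E(a)$ carries a stochastic error bar, so a weighted fit would be used; the unweighted version above only illustrates the extrapolation itself.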
\begin{table*}[htbp] \caption{Nitrogen energies} \label{n_energies_table} \begin{tabular}{@{}lccccc@{}} \toprule & Method & Basis set & Atom (Ha) & Molecule (Ha) & Binding energy (eV) \\ \midrule & JDFT-VMC & cc-pVTZ & $-54.5543(2)$ & $-109.4522(3)$ & $9.35(1)$ \\ & {\color[HTML]{9B9B9B} JDFT-DMC} & {\color[HTML]{9B9B9B} cc-pVTZ} & {\color[HTML]{9B9B9B} $-54.5765(3)$} & {\color[HTML]{9B9B9B} $-109.5068(3)$} & {\color[HTML]{9B9B9B} $9.61(2)$} \\ & JsAGPs-VMC & cc-pVTZ & $-54.5614(1)$ & $-109.4702(5)$ & $9.45(1)$ \\ \multirow{-4}{*}{\begin{tabular}[c]{@{}l@{}}Current work\\ (TurboRVB)\end{tabular}} & {\color[HTML]{9B9B9B} JsAGPs-DMC} & {\color[HTML]{9B9B9B} cc-pVTZ} & {\color[HTML]{9B9B9B} $-54.5785(4)$} & {\color[HTML]{9B9B9B} $-109.5165(3)$} & {\color[HTML]{9B9B9B} $9.92(1)$} \\ \midrule & \begin{tabular}[c]{@{}c@{}}JDFT-DMC$^a$\end{tabular} & cc-pVQZ & $-54.5765(2)$ & $-109.5065(4)$ & $9.61(2)$ \\ & {\color[HTML]{9B9B9B} CCSD(T)$^b$} & {\color[HTML]{9B9B9B} --} & {\color[HTML]{9B9B9B} --} & {\color[HTML]{9B9B9B} --} & {\color[HTML]{9B9B9B} $9.85(1)$} \\ & Fermi net$^c$ & -- & $-54.58882(6)$ & $-109.5388(1)$ & $9.828(5)$ \\ & {\color[HTML]{9B9B9B} \begin{tabular}[c]{@{}c@{}}JSD-DMC\\ pseudopotential$^d$\end{tabular}} & {\color[HTML]{9B9B9B} cc-pV5Z} & {\color[HTML]{9B9B9B} --} & {\color[HTML]{9B9B9B} --} & {\color[HTML]{9B9B9B} $9.573(4)$} \\ & Estimated exact$^e$ & -- & $-54.5892$ & $-109.5427$ & $9.91$ \\ \multirow{-6}{*}{Previous reports} & {\color[HTML]{9B9B9B} Experimental$^f$} & {\color[HTML]{9B9B9B} --} & {\color[HTML]{9B9B9B} --} & {\color[HTML]{9B9B9B} --} & {\color[HTML]{9B9B9B} $9.91$} \\ \bottomrule \end{tabular} \vspace{2mm} \textsuperscript{\emph{a}} Reference{~\citenum{2010NEM}}; \textsuperscript{\emph{b}} Reference{~\citenum{2008FEL}}; \textsuperscript{\emph{c}} Reference{~\citenum{2020PFA}}; \textsuperscript{\emph{d}} Reference{~\citenum{2012PET}}; \textsuperscript{\emph{e}} Reference{~\citenum{2005BYT}}; \textsuperscript{\emph{f}} 
Reference{~\citenum{1999FEL}}. \end{table*} The JsAGPs-DMC binding energy shows excellent agreement with the experimental value~\cite{1999FEL}. It is better than the value computed using CCSD(T)~\cite{2008FEL} and the value computed recently using a neural-network-based \textit{ansatz} (called Fermi net)~\cite{2020PFA}. The correlation energies recovered for the N$_2$ molecule at the JsAGPs-DMC level and the JDFT-DMC level were $\approx$95\% and $\approx$93\%, respectively. At the VMC level, for JsAGPs, $\approx$86\% of the correlation energy was recovered. The computed JDFT-DMC energies are in excellent agreement with the ones computed by Nemec \textit{et al.}~\cite{2010NEM}. Clearly, in the case of nitrogen, a triple-zeta basis set for orbital and Jastrow expansion was good enough. The best total (closest to the estimated exact) energies are the ones computed by Pfau \textit{et al.} using Fermi net~\cite{2020PFA}. The JsAGPs binding energy, however, is more accurate than that obtained by Pfau \textit{et al.} This shows that the JsAGPs \textit{ansatz} allows remarkable cancellation of errors when the difference between the molecular and atomic energies is computed. Interestingly, the N$_2$ binding energy computed using the Jastrow Slater determinant (JSD) \textit{ansatz} by Petruzielo \textit{et al.}~\cite{2012PET} could not approach CCSD(T)-level accuracy. Hence, for the JSD \textit{ansatz}, Jastrow and nodal surface optimization alone is not sufficient to improve the quality of the WF, and the standard JSD might be inadequate for this purpose. \subsection{Application to the G2 set} Our JDFT-DMC binding energies (Fig.~\ref{Figure1}) were first compared with the ones obtained by Nemec \textit{et al.}~\cite{2010NEM} (JDFT-DMC, Qz basis set) and Petruzielo \textit{et al.}~\cite{2012PET} (JSD-DMC, 5z basis set). The total energy deviations for molecules and atoms are shown in Fig.~\ref{Figure3} and Fig.~\ref{Figure4}, respectively.
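The binding energies in Table~\ref{n_energies_table} follow directly from the tabulated total energies. For example, with the JsAGPs-DMC values the raw electronic atomization energy of N$_2$ is $2E(\mathrm{N}) - E(\mathrm{N}_2) \approx 9.78$ eV; the difference from the tabulated $9.92(1)$ eV is consistent with the zero-point and relativistic + spin-orbit corrections described in the figure captions. A minimal sketch of this arithmetic:

```python
HARTREE_TO_EV = 27.211386  # CODATA hartree -> eV conversion

def binding_energy_ev(e_molecule_ha, e_atoms_ha):
    # electronic binding (atomization) energy:
    # sum of atomic total energies minus the molecular total energy
    return (sum(e_atoms_ha) - e_molecule_ha) * HARTREE_TO_EV

# JsAGPs-DMC total energies (Ha) from the nitrogen table
be = binding_energy_ev(-109.5165, [-54.5785, -54.5785])
print(round(be, 2))  # -> 9.78 (electronic only, no ZPE/relativistic corrections)
```

Error bars would propagate in quadrature over the three stochastic total energies, which is omitted here for brevity.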
The results obtained for the JDFT-DMC binding energies are in good agreement with the ones obtained by Nemec \textit{et al.} Most binding energies obtained using the JDFT \textit{ansatz} were within a deviation of $\pm$0.25 eV ($\approx\pm$5.8 kcal/mol) from the experimental values, although very few were in the chemical accuracy range. The JDFT binding energies had an MAD of $\approx$3.2 kcal/mol, which is quite close to the value of $\approx$3.13 kcal/mol reported by Nemec \textit{et al.} An MAD of 2.9 kcal/mol was reported in another FN DMC (atomic cores treated with pseudopotentials) G2 set benchmark by Grossman~\cite{2002GRO}. This overall agreement with previous JDFT benchmark tests indicates that FN DMC provides ``near chemical accuracy'' and that the primary source of error is the fixed (unoptimized) nodes. There could be other sources of error, such as the basis set used for orbital expansion. However, this can be ruled out because Nemec \textit{et al.} used a larger basis set (Qz) than that used in the current study, and nonetheless the errors in the binding energies are quite similar. Comparison of the JDFT binding energies with the JSD binding energies (obtained by Petruzielo \textit{et al.}) shows that optimizing the nodal surfaces improves the DMC binding energy estimates over the ones obtained using DFT or mean-field nodal surfaces. \begin{figure*}[] \centering \includegraphics[width=0.85\linewidth]{./figs/jdft_binding_cas_champ.pdf} \caption{Deviation of the DMC binding energies from the experimentally obtained values for the JDFT \textit{ansatz} with the triple-zeta basis set. Zero-point energies and relativistic + spin-orbit effects were corrected for before computing the deviations between the DMC and experimental values. Values obtained by Nemec \textit{et al.}~\cite{2010NEM} (JDFT-Qz) and Petruzielo \textit{et al.}~\cite{2012PET} (JSD-5z) are also plotted for comparison. The MAD for the JDFT \textit{ansatz} is $\approx$3.2 kcal/mol, and the MAD values obtained by Nemec \textit{et al.} and Petruzielo \textit{et al.} are 3.13 kcal/mol and 2.1 kcal/mol, respectively.} \label{Figure1} \end{figure*} \begin{figure*}[] \centering \includegraphics[width=0.85\linewidth]{./figs/binding_jdft_agps.pdf} \caption{Deviation of the DMC binding energies from the experimentally obtained values for the JDFT and JsAGPs \textit{ansatz}. The error bars are shown within the markers. The MADs from the experiment for the JDFT and JsAGPs \textit{ansatz} are $\approx$3.2 kcal/mol and $\approx$1.6 kcal/mol, respectively. Zero-point energies and relativistic + spin-orbit effects were corrected for before computing the deviations between the DMC and experimental values.} \label{Figure2} \end{figure*} \begin{figure*}[] \centering \includegraphics[width=\linewidth]{./figs/molecular_energies.pdf} \caption{Deviation of the VMC and DMC total energies of molecules from the estimated exact energies. Values obtained by Nemec \textit{et al.}~\cite{2010NEM} are plotted for comparison.} \label{Figure3} \end{figure*} \begin{figure*}[] \centering \includegraphics[width=\linewidth]{./figs/atomic_energies.pdf} \caption{Deviation of the VMC and DMC total energies of atoms from the estimated exact energies. Values obtained by Nemec \textit{et al.}~\cite{2010NEM} are plotted for comparison.} \label{Figure4} \end{figure*} AGPs, which is a more flexible \textit{ansatz}, allows for better nodal surfaces. Fig.~{\ref{Figure2}} shows a comparison of the DMC binding energies between the JDFT and the more flexible JsAGPs \textit{ansatz}. Total energies are shown in Fig.~\ref{Figure3} and Fig.~\ref{Figure4}. Clearly, the best variational energies were obtained when LRDMC was applied to the JsAGPs \textit{ansatz}, both for the atoms and the molecules. This means that the nodal surfaces of the JsAGPs WF are better than those of JDFT, and hence considerably more correlation energy is recovered.
For example, in the case of atoms, an average of 95.6\% of the correlation energy was recovered at the DMC level for the JsAGPs WF, while 93.7\% was recovered using the JDFT. In the case of binding energies, JsAGPs is clearly better than JDFT. The MADs using the JsAGPs and JDFT \textit{ansatz} were 1.6 kcal/mol and 3.2 kcal/mol, respectively, demonstrating a clear superiority of the JsAGPs. For almost all the molecules, the error was within 5.0 kcal/mol, and chemical accuracy was achieved for 26 molecules in the G2 set. These results are not only better than the ones obtained using JDFT (with nodal surfaces from DFT) but also better than the case wherein the SD nodal surfaces were optimized by Petruzielo \textit{et al.}~\cite{2012PET}. The better binding energies indicate that JsAGPs not only provides better variational energies but also improves error cancellation. \section{Discussion} It is interesting to note that the total energy deviation from the exact energies tends to increase as we move towards heavier and more complex molecules (i.e., second-row atoms and the corresponding molecules tend to show a larger deviation in energies). However, this trend does not generally appear in the case of binding energy deviations (from experimentally obtained values). This could be because the deviations in total energies (from the exact energies) of molecules and atoms are primarily due to the core electrons, and when energy differences are computed to calculate binding energies, these deviations cancel out~\cite{2010NEM}. For a few molecules (e.g., Si$_2$H$_6$ and CO$_2$), the JsAGPs binding energies are worse than their JDFT counterparts. It turns out that although AGPs leads to better nodal surfaces (and lower DMC energies) than JDFT for all molecules and atoms, the quality of the nodal surface depends upon the chemical structure of the molecules and atoms, and hence the error cancellation is not always predictably better.
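The MAD figures quoted throughout are simple averages of absolute binding-energy deviations, with chemical accuracy conventionally taken as 1 kcal/mol. A sketch with hypothetical deviations (the per-molecule values live in the figures, not in this text):

```python
EV_TO_KCAL_MOL = 23.0605  # 1 eV is approximately 23.06 kcal/mol

def mad_kcal_mol(deviations_ev):
    # mean absolute deviation of binding-energy errors, converted eV -> kcal/mol
    return sum(abs(d) for d in deviations_ev) / len(deviations_ev) * EV_TO_KCAL_MOL

def chemically_accurate(deviations_ev):
    # count entries within "chemical accuracy" (1 kcal/mol)
    return sum(abs(d) * EV_TO_KCAL_MOL <= 1.0 for d in deviations_ev)

# hypothetical deviations (eV), for illustration only
devs = [0.02, -0.05, 0.10, -0.01, 0.07]
mad = mad_kcal_mol(devs)
n_accurate = chemically_accurate(devs)
```

Applied to the full G2 deviation sets, this is the computation behind the quoted 1.6 kcal/mol (JsAGPs) and 3.2 kcal/mol (JDFT) values.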
\vspace{2mm} It is important to compare the JsAGPs QMC results presented here with those of alternative approaches. We tried to reduce the FN error by optimizing the nodal surfaces at the variational level with a single determinant \textit{ansatz} (JsAGPs), although there are other approaches, such as (1) preparing a better nodal surface with a single SD or (2) employing the multideterminant approach. \vspace{2mm} (1): Wang \textit{et al.} benchmarked the performance of FN DMC with a single Slater--Jastrow WF using natural orbitals, and obtained MADs of 2.7 and 2.8 kcal/mol using MP2/CCSD(T) and CASSCF/B3LYP, respectively~\cite{2019WAN}. These values are only marginally better than the MAD of $\approx$3.2 kcal/mol obtained in this work using the JDFT \textit{ansatz} with LDA nodal surfaces and worse than the value obtained using the JsAGPs \textit{ansatz}. In another recent study, short-range XC functionals (from DFT) were combined with selected configuration interaction, which led to trial WFs with lower FN energies~\cite{2020SCE}. The lowest MAD obtained was $\sim$2.06 kcal/mol, which is better than the MAD obtained in this study using the JDFT \textit{ansatz} with LDA nodal surfaces and worse than the value obtained using the JsAGPs \textit{ansatz}. Considering these results, this approach is not as promising as our single-determinant approach. \vspace{2mm} (2): The multideterminant approach seems to be the most promising one in terms of accuracy. Morales \textit{et al.}~\cite{2012MOR} were able to achieve sub-chemical accuracy (MAD of 0.8 kcal/mol) in the binding energies of the G2 set. In another study, chemical accuracy (and almost exact energies) could be achieved for the ionization potentials of several atoms from Li to Mg using full configuration interaction~\cite{2010BOO}.
Yao \textit{et al.} used the recently developed semistochastic heat-bath configuration interaction method to obtain excellent atomization energies for the G2 set with an MAD of 0.46 kcal/mol~\cite{2020YAO}. The MADs obtained with the multideterminant approach are better than the value obtained using the JsAGPs \textit{ansatz} in this study. A way to achieve higher accuracy using the single-determinant \textit{ansatz} is to use a more flexible \textit{ansatz} such as the Pfaffian{~\cite{2006BAJ}}. In fact, Genovese \textit{et al.}~{\cite{2020GEN, 2020GEN2}} demonstrated that the binding energies are significantly improved by using the Pfaffian \textit{ansatz} for molecules where triplet correlations are important. A benchmark study with a more flexible \textit{ansatz} like the Pfaffian is an interesting direction for future work. \section{Conclusions} To conclude, we have demonstrated the effectiveness of the AGPs \textit{ansatz} (built from electron pairing functions, or geminals) in VMC and LRDMC calculations using the TurboRVB QMC package. Using the cc-pVTZ basis set and the AGPs \textit{ansatz}, the LRDMC calculations for atoms recovered, on average, 95.6\% of the correlation energy. The binding energies computed using AGPs had an MAD of $\sim$1.6 kcal/mol from the experimental values. Chemical accuracy was achieved for several molecules, and the error was within $\pm$5 kcal/mol for almost all the molecules. Apart from using the more flexible AGPs \textit{ansatz} to improve the WF nodes, we also optimized the nodal surfaces at the VMC level. This led to lower DMC energies and better error cancellation in the binding energies. These results are quite encouraging, as they show that allowing more variational freedom by systematically utilizing more flexible \textit{ansatz} is a viable path towards more accurate QMC calculations.
We believe that our work represents an important step towards showing that a combination of a more flexible single-determinant \textit{ansatz} (JsAGPs) and nodal surface optimization can match the accuracy of quantum chemistry packages, with the added advantages of excellent scaling and lower computational cost. \section{Conflict of interest} The authors declare no conflict of interest. \section{Code availability} \label{sec:code} TurboRVB is available from Sandro Sorella's web site [\url{https://people.sissa.it/~sorella/}] upon reasonable scientific request. \begin{suppinfo} The following files are available free of charge. \begin{itemize} \item supplement.pdf: Computed total energies and references for molecular geometries \end{itemize} \end{suppinfo} \begin{acknowledgement} The computations in this work were mainly performed using the supercomputer Fugaku provided by RIKEN through the HPCI System Research Projects (Project IDs: hp210038 and hp220060). The authors are also grateful for the computational resources at the Research Center for Advanced Computing Infrastructure at the Japan Advanced Institute of Science and Technology (JAIST). A.R. is grateful to MEXT Japan for support through the MEXT scholarship. R.M. is grateful for financial support from MEXT-KAKENHI (22H05146, 21K03400 and 19H04692), from the Air Force Office of Scientific Research (AFOSR-AOARD/FA2386-17-1-4049;FA2386-19-1-4015), and from JSPS Bilateral Joint Projects (JPJSBP120197714). K.H. is grateful for financial support from MEXT-KAKENHI, Japan (JP19K05029, JP21K03400, JP21H01998, and JP22H02170), and the Air Force Office of Scientific Research, United States (Award Number: FA2386-20-1-4036). K.N. acknowledges financial support from the JSPS Overseas Research Fellowships, from a Grant-in-Aid for Early Career Scientists (Grant No.~JP21K17752), and from a Grant-in-Aid for Scientific Research (Grant No.~JP21K03400).
This work was supported by the European Centre of Excellence in Exascale Computing (TREX-Targeting Real Chemical Accuracy at the Exascale). This project received funding from the European Union's Horizon 2020 Research and Innovation program under Grant Agreement No. 952165. We dedicate this paper to the memory of Prof. Sandro Sorella (SISSA), remembering him as one of the most influential contributors to the quantum Monte Carlo community in the past century, and in particular for deeply inspiring this work with the development of the ab initio QMC code TurboRVB. \end{acknowledgement} \clearpage \twocolumn \section{Computed total energies using various \textit{ansatz}} Total energies of atoms and molecules computed using various \textit{ansatz} are listed in the tables below. For molecules, the geometry references are also provided. {\footnotesize \begin{table}[htbp] \centering \caption{Total energies [Ha] for atoms} \label{atom_energies_table} \begin{tabular}{lllllr} \toprule Element & DMC-JsAGPs & DMC-JDFT & VMC-JsAGPs & DMC-JDFT$^+$ & Est.
exact$^*$ \\ \midrule Li & -7.47815(7) & -7.4779(2) & -7.47777(6) & -7.47802(6) & -7.4781 \\ C & -37.8371(2) & -37.8305(3) & -37.8255(2) & -37.8302(3) & -37.8450 \\ N & -54.5785(4) & -54.5765(3) & -54.5614(1) & -54.5766(2) & -54.5892 \\ O & -75.0550(3) & -75.0521(2) & -75.0331(2) & -75.0525(2) & -75.0673 \\ F & -99.7199(3) & -99.7179(2) & -99.6920(2) & -99.7181(2) & -99.7339 \\ Na & -162.2424(3) & -162.2415(3) & -162.2146(2) & -162.2397(2) & -162.2546 \\ Si & -289.3344(3) & -289.3293(3) & -289.2817(2) & -289.3276(4) & -289.3590 \\ P & -341.2262(3) & -341.2251(3) & -341.1740(2) & -341.2225(6) & -341.2590 \\ S & -398.0709(3) & -398.0653(9) & -398.0065(2) & -398.0659(4) & -398.1100 \\ Cl & -460.1050(2) & -460.0999(4) & -460.0365(2) & -460.0949(4) & -460.1480 \\ \bottomrule \end{tabular} \end{table}} {\footnotesize \begin{longtable} [htbp] {llllllr} \caption{Total energies [Ha] for molecules} \label{molecular_energies_table}\\ \toprule species\_list & Geometry & DMC-JsAGPs & DMC-JDFT & VMC-JsAGPs & DMC-JDFT$^+$ & Est. exact$^*$ \\ \midrule \endfirsthead \caption[]{Total energies [Ha] for molecules} \\ \toprule species\_list & Geometry & DMC-JsAGPs & DMC-JDFT & VMC-JsAGPs & DMC-JDFT$^+$ & Est. 
exact$^*$ \\ \midrule \endhead \midrule \multicolumn{7}{r}{{Continued on next page}} \\ \midrule \endfoot \bottomrule \endlastfoot LiH & a& -8.0699(3) & -8.070(1) & -8.06945(7) & -8.0704(2) & -8.070529 \\ BeH & a& -15.2467(5) & -15.2454(2) & -15.2445(2) & -15.2460(2) & -15.246761 \\ Li$_2$ & a& -14.9937(3) & -14.9905(7) & -14.9932(2) & -14.9917(2) & -14.995084 \\ CH & a& -38.4724(3) & -38.4632(3) & -38.4603(3) & -38.4638(3) & -38.478863 \\ CH$_2$\_singlet & a& -39.1280(3) & -39.1173(4) & -39.11534(7) & -39.1189(3) & -39.133920 \\ CH$_2$\_triplet & a& -39.1420(3) & -39.1401(4) & -39.1271(2) & -39.1413(2) & -39.149059 \\ NH & a& -55.2104(3) & -55.2078(2) & -55.1929(1) & -55.2085(3) & -55.222744 \\ CH$_3$ & a& -39.8276(3) & -39.8273(3) & -39.8101(1) & -39.8277(2) & -39.835829 \\ NH$_2$ & a& -55.8698(3) & -55.8652(2) & -55.8517(1) & -55.8671(3) & -55.879554 \\ OH & a& -75.7248(1) & -75.7204(5) & -75.7033(2) & -75.7222(3) & -75.737497 \\ CH$_4$ & a& -40.5086(2) & -40.5072(4) & -40.49311(8) & -40.5075(3) & -40.515269 \\ H$_2$O & a& -76.4270(2) & -76.4225(5) & -76.4052(2) & -76.4240(3) & -76.439167 \\ HF & a& -100.4478(3) & -100.4422(5) & -100.4230(2) & -100.4439(3) & -100.459713 \\ NH$_3$ & a& -56.5539(2) & -56.5495(4) & -56.53521(9) & -56.5525(3) & -56.564731 \\ LiF & a& -107.4214(3) & -107.4187(3) & -107.3991(2) & -107.4187(2) & -107.434307 \\ CN & b& -92.6950(3) & -92.6881(3) & -92.6556(1) & -92.6891(4) & -92.722961 \\ C$_2$H$_2$ & a& -77.3175(3) & -77.3137(3) & -77.2862(2) & -77.3137(4) & -77.332381 \\ CO & b& -113.3038(3) & -113.2937(3) & -113.2676(2) & -113.2930(4) & -113.326318 \\ HCN & a& -93.4107(3) & -93.4023(3) & -93.3777(2) & -93.4048(4) & -93.431085 \\ N$_2$ & b& -109.5165(3) & -109.5068(3) & -109.4702(5) & -109.5065(4) & -109.542697 \\ HCO & a& -113.8342(3) & -113.8272(4) & -113.7954(2) & -113.8278(4) & -113.856915 \\ NO & b& -129.8696(3) & -129.8608(3) & -129.82208(8) & -129.8612(4) & -129.900576 \\ C$_2$H$_4$ & a& -78.5735(2) & -78.5676(4) & -78.5427(2) & 
-78.5672(4) & -78.588951 \\ H$_2$CO & a& -114.4876(3) & -114.4802(3) & -114.4509(2) & -114.4803(4) & -114.509104 \\ O$_2$ & b& -150.2949(3) & -150.2874(4) & -150.2483(2) & -150.2898(4) & -150.327203 \\ C$_2$H$_6$ & a& -79.8120(1) & -79.8107(2) & -79.7818(2) & -79.8098(5) & -79.826875 \\ F$_2$ & a& -199.4989(3) & -199.4852(8) & -199.4449(2) & -199.4868(4) & -199.530110 \\ H$_2$O$_2$ & a& -151.53355(9) & -151.5257(2) & -151.4846(2) & -151.5274(4) & -151.564235 \\ H$_4$CO & a& -115.7081(2) & -115.7058(4) & -115.6710(2) & -115.7066(4) & -115.730774 \\ N$_2$H$_4$ & a& -111.8531(3) & -111.8488(5) & -111.8146(2) & -111.8502(5) & -111.877672 \\ CO$_2$ & b& -188.5658(3) & -188.5580(3) & -188.4982(2) & -188.5537(6) & -188.601471 \\ SiH$_2$\_singlet & c& -290.5836(4) & -290.5739(5) & -290.5372(2) & -290.5727(4) & -290.601705 \\ SiH$_2$\_triplet & a& -290.5490(3) & -290.5438(2) & -290.50460(9) & -290.5433(4) & -290.568877 \\ PH$_2$ & c& -342.4759(4) & -342.4674(3) & -342.4233(2) & -342.4676(5) & -342.503299 \\ SiH$_3$ & a& -291.2015(4) & -291.1933(3) & -291.1556(2) & -291.1947(4) & -291.222022 \\ H$_2$S & a& -399.3620(2) & -399.3596(3) & -399.2930(2) & -399.3575(4) & -399.402426 \\ HCl & a& -460.7743(2) & -460.7673(2) & -460.7028(2) & -460.7639(5) & -460.819376 \\ PH$_3$ & a& -343.1141(3) & -343.1098(2) & -343.0539(2) & -343.1068(5) & -343.147839 \\ SiH$_4$ & a& -291.8530(3) & -291.8518(3) & -291.7936(2) & -291.8506(5) & -291.873733 \\ CS & a& -436.1801(3) & -436.1656(2) & -436.1024(2) & -436.1620(6) & -436.228940 \\ SiO & a& -364.6950(3) & -364.6811(4) & -364.6225(2) & -364.6822(5) & -364.733068 \\ SO & b& -473.3246(8) & -473.3113(3) & -473.2348(2) & -473.3126(6) & -473.378253 \\ ClO & b& -535.2580(4) & -535.2449(4) & -535.1553(2) & -535.2422(6) & -535.320350 \\ CH$_3$Cl & a& -500.0746(5) & -500.0636(4) & -499.9848(2) & -500.0618(6) & -500.123907 \\ ClF & a& -559.9217(3) & -559.9080(5) & -559.8215(2) & -559.9057(6) & -559.982138 \\ H$_4$CS & a& -438.6654(3) & -438.6584(4) & 
-438.5790(2) & -438.6556(6) & -438.711801 \\ HOCl & a& -535.9203(3) & -535.9079(4) & -535.8239(2) & -535.9070(6) & -535.979997 \\ SO$_2$ & a& -548.5908(3) & -548.5752(4) & -548.4945(2) & -548.5704(7) & -548.658618 \\ Na$_2$ & c& -324.5103(4) & -324.5066(6) & -324.4556(2) & -324.5032(4) & -324.536291 \\ NaCl & a& -622.5069(4) & -622.4977(5) & -622.4105(2) & -622.4913(6) & -622.560207 \\ Si$_2$ & a& -578.7899(3) & -578.7805(3) & -578.6889(2) & -578.7735(6) & -578.838636 \\ P$_2$ & a& -682.6364(4) & -682.6247(3) & -682.5193(2) & -682.6191(7) & -682.704451 \\ S$_2$ & a& -796.3076(4) & -796.2935(3) & -796.1822(2) & -796.2869(8) & -796.384237 \\ Cl$_2$ & a& -920.3023(3) & -920.2878(6) & -920.1569(2) & -920.2803(6) & -920.389991 \\ Si$_2$H$_6$ & c& -582.5298(5) & -582.5160(8) & -582.4238(2) & -582.5134(8) & -582.566752 \\ \end{longtable}} \begin{footnotesize} $^+$Nemec \textit{et al.}, J. Chem. Phys. 2010, 132, 034111 \\ $^*$Bytautas \textit{et al.}, J. Chem. Phys. 2005, 122, 154110 \\ $^a$Curtiss \textit{et al.}, J. Chem. Phys. 1997, 106, 1063--1079 \\ $^b$Feller \textit{et al.}, J. Chem. Phys. 2008, 129, 204105 \\ $^c$O'Neill \textit{et al.}, Mol. Phys. 2005, 103, 763--766 \end{footnotesize} \end{document}
\section{Introduction} Discrete orthogonal polynomials form a distinguished and well-established part of the theory of orthogonal polynomials, and classical monographs have been devoted to this type of orthogonal polynomials. For example, the classical case is discussed in detail in \cite{NSU}, and the Riemann--Hilbert problem was considered in order to study asymptotics and applications in \cite{baik}. For an authoritative discussion of the subject see \cite{Ismail,Ismail2,Beals_Wong,walter}. Semiclassical discrete orthogonal polynomials, where a discrete Pearson equation is fulfilled by the weight, have been treated extensively in the literature; see \cite{diego_paco,diego_paco1,diego,diego1} and references therein for a comprehensive account. For some specific types of weights of generalized Charlier and Meixner types, the corresponding Laguerre--Freud type equations for the coefficients of the three-term recurrence have been studied; see for example \cite{clarkson,filipuk_vanassche0,filipuk_vanassche1,filipuk_vanassche2,smet_vanassche}. This paper is a sequel of \cite{Manas_Fernandez-Irrisarri}, in which we used the Cholesky factorization of the moment matrix to study discrete orthogonal polynomials $\{P_n(x)\}_{n=0}^\infty$ on the homogeneous lattice. We considered semiclassical discrete orthogonal polynomials with weights subject to a discrete Pearson equation and, consequently, with moments constructed in terms of generalized hypergeometric functions. A banded semi-infinite matrix $\Psi$, named the Laguerre--Freud structure matrix, that models the shifts by $\pm 1$ in the independent variable of the sequence of orthogonal polynomials $\{P_n(x)\}_{n=0}^\infty$ was given.
We also showed in that paper that the contiguous relations for the generalized hypergeometric functions translate into symmetries for the corresponding moment matrix, and that the 3D Nijhoff--Capel discrete Toda lattice \cite{nijhoff,Hietarinta} describes the corresponding contiguous shifts for the squared norms of the orthogonal polynomials. In \cite{Manas} we gave an interpretation of the contiguous transformations for the generalized hypergeometric functions in terms of simple Christoffel and Geronimus transformations. Using the Geronimus--Uvarov perturbations, we obtained determinantal expressions for the shifted orthogonal polynomials. In this paper we consider the generalized Charlier, Meixner and type I Hahn discrete orthogonal polynomials, analyze the Laguerre--Freud structure matrix $\Psi$, and derive from its banded structure and its compatibility with the Toda equation and the Jacobi matrix a number of nonlinear equations for the coefficients $\{\beta_{n},\gamma_n\}$ of the three-term recursion relations $zP_n(z)=P_{n+1}(z)+\beta_n P_n(z)+\gamma_n P_{n-1}(z)$ satisfied by the orthogonal polynomial sequence. These nonlinear recurrences for the recursion coefficients are of the type $ \gamma_{n+1} =F_1 (n,\gamma_n,\gamma_{n-1},\dots,\beta_n,\beta_{n-1},\dots)$ and $\beta_{n+1 }= F_2 (n,\gamma_{n+1},\gamma_n,\dots,\beta_n,\beta_{n-1},\dots)$, for some functions $F_1,F_2$. Magnus \cite{magnus,magnus1,magnus2,magnus3}, referring to \cite{laguerre,freud}, named these types of relations Laguerre--Freud relations. There are several papers describing Laguerre--Freud relations for the generalized Charlier, generalized Meixner and type I generalized Hahn cases; see \cite{smet_vanassche,clarkson,filipuk_vanassche0,filipuk_vanassche1,filipuk_vanassche2,diego}. The layout of the paper is as follows. After this introduction, Section \ref{S:Charlier} is devoted to the generalized Charlier orthogonal polynomials.
In particular, the tridiagonal Laguerre--Freud structure matrix is given in Theorem \ref{teo:generalized Charlier}, and in Proposition \ref{pro:Charlier Laguerre-Freud} we find a set of Laguerre--Freud equations that we prove to be equivalent to those of \cite{smet_vanassche}. Here we also derive third- and second-order ODEs for the coefficient $\gamma_n$; see Theorems \ref{teo:third order} and \ref{teo:second order}, respectively. Then, in \S \ref{S:Meixner} we treat the generalized Meixner orthogonal polynomials, and in Theorem \ref{teo:generalized Meixner} we find the tetradiagonal Laguerre--Freud structure matrix, giving (after some simplifications) in Theorem \ref{teo:Laguerre-Freud generalized Meixner} the corresponding Laguerre--Freud relations. A comparative discussion with the Laguerre--Freud relations given by Smet \& Van Assche \cite{smet_vanassche} is also performed. Finally, in \S \ref{S:Hahn} we discuss the type I generalized Hahn orthogonal polynomials, see \cite{diego,diego_paco}, giving in Theorem \ref{teo:Hahn} the pentadiagonal Laguerre--Freud structure matrix; in Theorem \ref{teo:compatibilityI_Hahn} we discuss the corresponding Laguerre--Freud relations, and a comparison with the results of Dominici \cite{diego} and Filipuk \& Van Assche \cite{ filipuk_vanassche2} is given; see also \cite{Dzhamay_Filipuk_Stokes}. Now, to complete this Introduction, we give a brief summary of the basic facts regarding discrete orthogonal polynomials and then a brief account of the relevant facts of \cite{Manas_Fernandez-Irrisarri}. \subsection{Basics on orthogonal polynomials} Given a linear functional $\rho_z\in\mathbb{C}^*[z]$, the corresponding moment matrix is \begin{align*} G&=(G_{n,m}), &G_{n,m}&=\rho_{n+m}, &\rho_n&= \big\langle\rho_z,z^n\big\rangle, & n,m\in\mathbb{N}_0:=\{0,1,2,\dots\}, \end{align*} with $\rho_n$ the $n$-th moment of the linear functional $\rho_z$.
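As a purely numerical aside (not part of the paper's algebraic machinery), the moments of a discrete weight and the recursion coefficients $\beta_n,\gamma_n$ of $zP_n(z)=P_{n+1}(z)+\beta_n P_n(z)+\gamma_n P_{n-1}(z)$ can be generated by the classical Stieltjes procedure; for the classical Charlier weight $w(k)=a^k/k!$ this recovers the known values $\beta_n = n + a$ and $\gamma_n = na$:

```python
import math

def recurrence_coefficients(weight, support, n_max):
    # Stieltjes procedure: build the monic orthogonal polynomials on a
    # (truncated) discrete support and read off beta_n, gamma_n from
    #   z P_n = P_{n+1} + beta_n P_n + gamma_n P_{n-1}
    w = [weight(k) for k in support]

    def inner(p, q):
        return sum(wk * pk * qk for wk, pk, qk in zip(w, p, q))

    p_prev = [0.0] * len(support)   # P_{-1} = 0
    p_curr = [1.0] * len(support)   # P_0 = 1
    h_prev, betas, gammas = None, [], []
    for n in range(n_max):
        h = inner(p_curr, p_curr)   # squared norm H_n
        beta = inner([x * p for x, p in zip(support, p_curr)], p_curr) / h
        gamma = h / h_prev if h_prev is not None else 0.0   # gamma_0 unused
        betas.append(beta)
        gammas.append(gamma)
        p_next = [(x - beta) * pc - gamma * pp
                  for x, pc, pp in zip(support, p_curr, p_prev)]
        p_prev, p_curr, h_prev = p_curr, p_next, h
    return betas, gammas

# classical Charlier weight w(k) = a^k / k! on k = 0, 1, 2, ... (truncated)
a = 1.0
betas, gammas = recurrence_coefficients(lambda k: a**k / math.factorial(k),
                                        range(60), 5)
```

The truncation of the support at $k=59$ is harmless here since the weight decays superexponentially; semiclassical weights such as the generalized Charlier and Meixner ones can be handled the same way.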
Suppose that the moment matrix is such that all its truncations, which are Hankel matrices, $G_{i+1,j}=G_{i,j+1}$, \begin{align*} G^{[k]}=\begin{pNiceMatrix} G_{0,0}&\Cdots &G_{0,k-1}\\ \Vdots & & \Vdots\\ G_{k-1,0}&\Cdots & G_{k-1,k-1} \end{pNiceMatrix}=\begin{pNiceMatrix}[columns-width = 0.5cm] \rho_{0}&\rho_1&\rho_2&\Cdots &\rho_{k-1}\\ \rho_1 &\rho_2 &&\Iddots& \rho_k\\ \rho_2&&&&\Vdots\\ \Vdots& &\Iddots&&\\[9pt] \rho_{k-1}&\rho_k&\Cdots &&\rho_{2k-2} \end{pNiceMatrix} \end{align*} are nonsingular; i.e., the Hankel determinants $\varDelta_k:=\det G^{[k]}$ do not vanish, $\varDelta_k\neq 0$, $k\in\mathbb{N}_0$. In this case, we have monic polynomials \begin{align}\label{eq:polynomials} P_n(z)&=z^n+p^1_n z^{n-1}+\dots+p_n^n, & n&\in\mathbb{N}_0, \end{align} with $p^1_0=0$, fulfilling the orthogonality relations \begin{align*} \big\langle \rho, P_n(z)z^k\big\rangle &=0, & k&\in\{0,\dots,n-1\},& \big\langle \rho, P_n(z)z^n\big\rangle &=H_n\neq 0, \end{align*} and $\{P_n(z)\}_{n\in\mathbb{N}_0}$ is a sequence of orthogonal polynomials, i.e., $ \big\langle\rho,P_n(z)P_m(z)\big\rangle=\delta_{n,m}H_n$ for $n,m\in\mathbb{N}_0$. The symmetric bilinear form $\langle F, G\rangle_\rho:=\langle \rho, FG\rangle$ is such that the moment matrix is the Gram matrix of this bilinear form and $\langle P_n, P_m\rangle_\rho=\delta_{n,m} H_n$. \begin{comment} \subsection{Discrete Riemann--Hilbert problem} \begin{theorem} The fundamental matrix $Y_n(z)$ is the unique matrix function fulfilling the following \begin{enumerate} \item Is holomorphic at $\mathbb{C}\setminus\Delta$. \item Has the following asymptotics at infinity \begin{align*} Y_n(z)&=\Big(I_2+O\Big(\frac{1}{z}\Big)\Big)\begin{pmatrix} z^n & 0\\ 0 & z^{-n} \end{pmatrix}, & |z|&\to\infty. \end{align*} \item The residues can be computed with the use of the following \emph{jump} condition \begin{align*} 2\pi\operatorname{i} \res{Y_n}{a_k}=\lim_{z\to a_k} Y_n\begin{pmatrix} 0 & w\\0 &0 \end{pmatrix}. 
\end{align*} \end{enumerate} \end{theorem} \begin{proof} First let us check that the fundamental matrix is indeed a solution: \begin{enumerate} \item The singularities of the fundamental matrix are those of the second kind functions \eqref{eq:second_discrete}. Thus, the matrix is meromorphic with simple poles at the nodes. \item It is easily checked from the asymptotics \eqref{eq:second_asymptotics} of the second kind functions. \item For the residues we have \begin{align*} 2\pi\operatorname{i} \res{Y_n}{a_k}&=\begin{pmatrix} 0 & P_n(a_k)w(a_k)\\[2pt] 0 & -\dfrac{P_{n-1}(a_k)w(a_k)}{H_{n-1}} \end{pmatrix}, \\ \lim_{z\to a_k}Y_n\begin{pmatrix} 0 & w\\0 &0 \end{pmatrix}&=\lim_{z\to a_k}\begin{pmatrix} 0 & P_n(z)w(z)\\[2pt] 0 & -\dfrac{P_{n-1}(z)w(z)}{H_{n-1}} \end{pmatrix}=\begin{pmatrix} 0 & P_n(a_k)w(a_k)\\[2pt] 0 & -\dfrac{P_{n-1}(a_k)w(a_k)}{H_{n-1}} \end{pmatrix} \end{align*} and the result follows immediately. \end{enumerate} To prove that there are no more solutions we need two preliminary lemmas. \begin{lemma} The function \begin{align*} Y_n(z)\begin{pmatrix} 1 &-\dfrac{w(a_k)}{z-a_k}\\[2pt] 0 &1 \end{pmatrix} \end{align*} is holomorphic at $a_k$. \end{lemma} \begin{proof} The fundamental matrix $Y_n(z)$ has simple poles at the nodes, so that we can write \begin{align*} Y_n(z)&=\frac{A_{n,k}}{z-a_k}+B_{n,k}+O(z-a_k), & z&\to a_k, \end{align*} for some constant matrices $A_{n,k}$ and $B_{n,k}$ with $A_{n,k}=\res{Y_n}{a_k}$. 
Consequently, \begin{align*} \lim_{z\to a_k} Y_n(z)\begin{pmatrix} 0 & w(z)\\ 0 & 0 \end{pmatrix}&= \lim_{z\to a_k} \bigg(\Big( \frac{A_{n,k}}{z-a_k}+B_{n,k}+O(z-a_k) \Big)\begin{pmatrix} 0 & w(z)\\ 0 & 0 \end{pmatrix}\bigg)\\&= \lim_{z\to a_k} \frac{1}{z-a_k}A_{n,k}\begin{pmatrix} 0 & w(a_k)\\ 0 & 0 \end{pmatrix}+B_{n,k}\begin{pmatrix} 0 & w(a_k)\\ 0 & 0 \end{pmatrix} \end{align*} From the jump condition iii) we get \begin{align*} A_{n,k}\begin{pmatrix} 0 & w(a_k)\\ 0 & 0 \end{pmatrix}&=0,& \res{Y_n}{a_k}=A_{n,k}=B_{n,k}\begin{pmatrix} 0 & w(a_k)\\ 0 & 0 \end{pmatrix}. \end{align*} Consequently, \begin{align*} Y_n(z)&=\frac{1}{z-a_k}B_{n,k}\begin{pmatrix} 0 & w(a_k)\\ 0 & 0 \end{pmatrix}+B_{n,k}+O(z-a_k), & z&\to a_k\\ &=B_{n,k}\begin{pmatrix} 1 & \dfrac{w(a_k)}{z-a_k}\\[2pt] 0 & 1 \end{pmatrix}+O(z-a_k), & z&\to a_k, \end{align*} which leads to \begin{align*} Y_n(z)\begin{pmatrix} 1 & -\dfrac{w(a_k)}{z-a_k}\\[2pt] 0 & 1 \end{pmatrix}&=B_{n,k}+O(1), & z&\to a_k, \end{align*} and the function $ Y_n(z)\begin{psmallmatrix} 1 & -\frac{w(a_k)}{z-a_k}\\ 0 & 1 \end{psmallmatrix}$ is holomorphic at $a_k$, as claimed. \end{proof} \begin{lemma} \begin{align*} \det Y_n =1 \end{align*} \end{lemma} \begin{proof} As \begin{align*} \det \begin{pmatrix} 1 & -\dfrac{w(a_k)}{z-a_k}\\[2pt] 0 & 1 \end{pmatrix}=1 \end{align*} we know that \begin{align*} \det Y_n =\det Y_n \begin{pmatrix} 1 & -\dfrac{w(a_k)}{z-a_k}\\[2pt] 0 & 1 \end{pmatrix}. \end{align*} Therefore, $\det Y_n$ is holomorphic at the nodes $\{a_k\}_{k\in\mathbb{N}_0}$ and, consequently, an entire function. Given the asymptotics \begin{align*} \det Y_n(z) &=1+O\Big(\frac{1}{z}\Big) , & z\to \infty, \end{align*} from the Liouville theorem we conclude the desired result. 
\end{proof} Given two solutions $(Y_n,\tilde Y_n)$ to the discrete Riemann--Hilbert problem, since $\det \tilde Y_n=1$ the matrix $\tilde Y_n$ is nonsingular, so that \begin{align*} Y_n(z)\big(\tilde Y_n(z)\big)^{-1}= Y_n(z)\begin{pmatrix} 1 & -\dfrac{w(a_k)}{z-a_k}\\[2pt] 0 & 1 \end{pmatrix}\bigg( \tilde Y_n(z)\begin{pmatrix} 1 & -\dfrac{w(a_k)}{z-a_k}\\[2pt] 0 & 1 \end{pmatrix} \bigg)^{-1} \end{align*} is a matrix of holomorphic functions at each node $a_k$, $k\in \mathbb{N}_0$. Thus, it is a matrix of entire functions. The asymptotics together with the Liouville theorem imply that $Y_n=\tilde Y_n$. \end{proof} \subsection{The case $\rho=\sum\limits_{a_k\in\Delta}\delta(z-a_k)v(z)\varphi^{2z}$} We take $w(z)=v(z)\varphi^{2z}$, where $v(z)$ does not depend on the variable $\varphi$. Hence, we have introduced a new parameter $\varphi$ and we want to explore the dependence on this variable. For that aim we introduce a new matrix \begin{align*} \Phi_n(z):=Y_n(z)\begin{pmatrix} \varphi^z & 0\\ 0 & \varphi^{-z} \end{pmatrix}. \end{align*} \begin{pro} The matrix $\Phi_n(z)$ is the unique matrix function fulfilling the following \begin{enumerate} \item Is holomorphic at $\mathbb{C}\setminus\Delta$. \item Has the following asymptotics at infinity \begin{align*} \Phi_n(z)&=\Big(I_2+O\Big(\frac{1}{z}\Big)\Big)\begin{pmatrix} z^n \varphi^{z}& 0\\ 0 & z^{-n}\varphi^{-z} \end{pmatrix}, & |z|&\to\infty. \end{align*} \item The residues can be computed with the use of the following \emph{jump} condition \begin{align*} 2\pi\operatorname{i} \res{\Phi_n}{a_k}&=\lim_{z\to a_k} \Phi_n\begin{pmatrix} 0 & v\\0 &0 \end{pmatrix}, & a_k&\in\Delta. \end{align*} \end{enumerate} \end{pro} \begin{proof} Let us see that $\Phi_n$ as given satisfies the three properties. \begin{enumerate} \item The functions $\varphi^{\pm z}=\Exp{\pm z\log\varphi}$ are entire functions in the complex $z$ plane. Thus, $\Phi_n$ is holomorphic where $Y_n$ is. \item It follows from the asymptotics of $Y_n$. 
\item \begin{align*} \res{\Phi_n}{a_k}&=\res{Y_n(z)}{a_k}\begin{pmatrix} \varphi^{a_k}& 0\\ 0 & \varphi^{-a_k} \end{pmatrix}=\lim_{z\to a_k} \left(Y_n(z)\begin{pmatrix} 0 &v(z)\varphi^z\\ 0 & 0 \end{pmatrix}\right)\begin{pmatrix} \varphi^{a_k}& 0\\ 0 & \varphi^{-a_k} \end{pmatrix}\\ &=\lim_{z\to a_k} \left(Y_n(z)\begin{pmatrix} 0 &v(z)\varphi^z\\ 0 & 0 \end{pmatrix}\begin{pmatrix} \varphi^z& 0\\ 0 & \varphi^{-z} \end{pmatrix}\right)\\&=\lim_{z\to a_k} \left(\Phi_n(z)\begin{pmatrix} \varphi^{-z}& 0\\ 0 & \varphi^{z} \end{pmatrix}\begin{pmatrix} 0 &v(z)\varphi^{2z}\\ 0 & 0 \end{pmatrix}\begin{pmatrix} \varphi^z& 0\\ 0 & \varphi^{-z} \end{pmatrix}\right)= \lim_{z\to a_k} \left( \Phi_n(z) \begin{pmatrix} 0 &v(z)\\ 0 & 0 \end{pmatrix}\right). \end{align*} \end{enumerate} Let us check now the uniqueness. For that purpose just notice that given another solution $\tilde \Phi_n$ fulfilling the three properties, then $\tilde Y_n=\tilde \Phi_n(z) \begin{psmallmatrix} \varphi^{-z} & 0\\ 0 & \varphi^z \end{psmallmatrix}$ is a solution of the discrete Riemann--Hilbert problem for $Y_n$, and we have proven that the solution is unique. \end{proof} \begin{pro} The right logarithmic derivative \begin{align}\label{eq:Mn} M_n=\frac{\partial \Phi_n}{\partial\varphi}\Phi_n^{-1} \end{align} is an entire matrix function. \end{pro} \begin{proof} At each node $a_k\in\Delta$, the matrix \begin{align*} \Phi_{n,k}(z):= \Phi_n(z) \begin{pmatrix} 1 & -\dfrac{v(z)}{z-a_k}\\[2pt] 0 &1 \end{pmatrix} \end{align*} is holomorphic in a neighborhood of the mentioned node. Hence, \begin{align*} \frac{\partial \Phi_{n,k}}{\partial\varphi}=\frac{\partial \Phi_n}{\partial\varphi}\begin{pmatrix} 1 & -\dfrac{v(z)}{z-a_k}\\[2pt] 0 &1 \end{pmatrix} \end{align*} is also holomorphic in a neighborhood of that node. Moreover, as $\det \Phi_{n,k}=\det \Phi_n=\det Y_n=1$, we deduce that $\Phi_{n,k},\Phi_n$ are nonsingular matrices. 
Hence \begin{align*} M_n=\frac{\partial \Phi_n}{\partial\varphi}\Phi_n^{-1}=\frac{\partial \Phi_{n,k}}{\partial\varphi}\Phi_{n,k}^{-1} \end{align*} and $M_n$ is holomorphic at each node $a_k\in\Delta$. \end{proof} \begin{pro} We have \begin{align*} M_n=\frac{1}{\varphi} \begin{pmatrix} z & 2H_n\\[2pt] -\dfrac{2}{H_{n-1}}&-z \end{pmatrix} \end{align*} \end{pro} \begin{proof} We have just proven that $M_n$ is a matrix of entire functions, so the behavior at infinity fixes, with the aid of the Liouville theorem, the explicit form of $M_n$. For that aim we introduce the matrix \begin{align*} S_n:=Y_n\begin{pmatrix} z^{-n} & 0\\ 0 & z^n \end{pmatrix} \end{align*} which, attending to the asymptotics of $Y_n$, has the following asymptotics \begin{align*} S_n&=I_2+\sum_{k=1}^\infty S_{n,k}z^{-k}, & z&\to\infty \end{align*} with $S_{n,k}\in\mathbb{C}^{2\times 2}$. In particular, from the definition of $Y_n$ and the asymptotics of the second kind functions we deduce that \begin{align*} S_{n,1}=\begin{pmatrix} p^1_n & -H_n\\[2pt] -\dfrac{1}{H_{n-1}}& q^1_{n-1} \end{pmatrix}. \end{align*} In terms of these matrices we find \begin{align*} M_n=\frac{\partial S_n}{\partial\varphi}+S_n\begin{pmatrix} \frac{z}{\varphi} & \\ 0 & - \frac{z}{\varphi} \end{pmatrix}S_n^{-1} \end{align*} and asymptotically we have \begin{align*} M_n &=\frac{z}{\varphi} \begin{pmatrix} 1 & \\ 0 & - 1 \end{pmatrix}+ \frac{1}{\varphi} \left[S_{n,1}, \begin{pmatrix} 1& \\ 0 & - 1 \end{pmatrix}\right]+O\left(\frac{1}{z}\right), & z&\to\infty. 
\end{align*} Thus, the entire matrix function \begin{align*} M_n (z)-\frac{z}{\varphi} \begin{pmatrix} 1 & \\ 0 & - 1 \end{pmatrix}- \frac{1}{\varphi} \left[S_{n,1}, \begin{pmatrix} 1& \\ 0 & - 1 \end{pmatrix}\right]\underset{z\to\infty}{\longrightarrow}0, \end{align*} and Liouville's theorem implies the result. \end{proof} \begin{pro} The discrete orthogonal polynomials are subject to \begin{align}\label{eq:EDOpolynomials} \varphi\frac{\partial P_n}{\partial \varphi}&=-2\frac{H_n}{H_{n-1}}P_{n-1},& \varphi \frac{\partial}{\partial\varphi}\left( \frac{P_{n-1}}{H_{n-1}} \right)&=2\frac{P_n-zP_{n-1}}{H_{n-1}}. \end{align} The second kind functions satisfy \begin{align*} \varphi\frac{\partial Q_n}{\partial \varphi}&=2z Q_n-2\frac{H_n}{H_{n-1}}Q_{n-1},& \varphi \frac{\partial}{\partial\varphi}\left( \frac{Q_{n-1}}{H_{n-1}} \right)&=2\frac{Q_n}{H_{n-1}}. \end{align*} \end{pro} \begin{proof} From the definition \eqref{eq:Mn} of $M_n$ we get \begin{multline*} \varphi \frac{\partial}{\partial \varphi }\begin{pmatrix} P_n& Q_n\\[2pt] -\dfrac{P_{n-1} }{H_{n-1}} & -\dfrac{Q_{n-1}}{H_{n-1}} \end{pmatrix}+\begin{pmatrix} P_n& Q_n\\[2pt] -\dfrac{P_{n-1}}{H_{n-1}} & -\dfrac{Q_{n-1}}{H_{n-1}} \end{pmatrix}\varphi \frac{\partial}{\partial \varphi }\begin{pmatrix} \varphi^z & 0\\ 0 &\varphi^{-z} \end{pmatrix}\begin{pmatrix} \varphi^z & 0\\ 0 &\varphi^{-z} \end{pmatrix}^{-1} \\ =\begin{pmatrix} z & 2H_n\\[2pt] -\dfrac{2}{H_{n-1}}&-z \end{pmatrix}\begin{pmatrix} P_n& Q_n\\[2pt] -\dfrac{P_{n-1}}{H_{n-1}}& -\dfrac{Q_{n-1}}{H_{n-1}} \end{pmatrix}. 
\end{multline*} But, recalling that \begin{align*} \varphi \frac{\partial}{\partial \varphi }\begin{pmatrix} \varphi^z & 0\\ 0 &\varphi^{-z} \end{pmatrix}\begin{pmatrix} \varphi^z & 0\\ 0 &\varphi^{-z} \end{pmatrix}^{-1}=\begin{pmatrix} z & 0\\0 & -z \end{pmatrix} \end{align*} we get \begin{align*} \varphi \frac{\partial}{\partial \varphi }\begin{pmatrix} P_n& Q_n\\[2pt] -\dfrac{P_{n-1}}{H_{n-1}} & -\dfrac{Q_{n-1}}{H_{n-1}} \end{pmatrix}+\left[\begin{pmatrix} P_n& Q_n\\[2pt] -\dfrac{P_{n-1}}{H_{n-1}} & -\dfrac{Q_{n-1}}{H_{n-1}} \end{pmatrix},\begin{pmatrix} z & 0\\0 & -z \end{pmatrix}\right] =\begin{pmatrix} 0 & 2H_n\\[2pt] -\dfrac{2}{H_{n-1}}& 0 \end{pmatrix}\begin{pmatrix} P_n & Q_n\\[2pt] -\dfrac{P_{n-1}}{H_{n-1}}& -\dfrac{Q_{n-1}}{H_{n-1}} \end{pmatrix}. \end{align*} and, consequently, \begin{align*} \varphi \frac{\partial}{\partial \varphi }\begin{pmatrix} P_n& Q_n\\[2pt] -\dfrac{P_{n-1}}{H_{n-1}} & -\dfrac{Q_{n-1}}{H_{n-1}} \end{pmatrix}-2z \begin{pmatrix} 0 & Q_n\\[2pt] \dfrac{P_{n-1}}{H_{n-1}} & 0 \end{pmatrix} =\begin{pmatrix} 0 & 2H_n\\[2pt] -\dfrac{2}{H_{n-1}}& 0 \end{pmatrix}\begin{pmatrix} P_n & Q_n\\[2pt] -\dfrac{P_{n-1}}{H_{n-1}}& -\dfrac{Q_{n-1}}{H_{n-1}} \end{pmatrix}, \end{align*} and the result follows by a component-wise inspection. \end{proof} \begin{pro} The functions $p^1_n(\varphi)$, see \eqref{eq:polynomials}, and $H_n(\varphi)$, fulfill the following nonlinear equations \begin{align*} \varphi \frac{\d p^1_n}{\d \varphi }&=-2\frac{H_n}{H_{n-1}}\\ \varphi \frac{\d H_{n-1}}{\d \varphi }&=2H_{n-1}\big(p^1_{n-1}-p_n^1\big) \end{align*} \end{pro} \begin{proof} Taking in \eqref{eq:EDOpolynomials} the coefficient in $z^{n-1}$ we get both equations. \end{proof} If we write \begin{align*} H_n=\Exp{a_n} \end{align*} we get \begin{align*} \left( \frac{\varphi}{2} \frac{\d }{\d \varphi }\right)^2a_n=\Exp{a_n-a_{n-1}}-\Exp{a_{n-1}-a_{n-2}}. \end{align*} that with the re-parametrization $\varphi=\Exp{\frac{s}{2}}$, i.e. 
$s=2\log \varphi$, is the Toda equation \begin{align*} \frac{\d^2 a_n}{\d s^2 }=\Exp{a_n-a_{n-1}}-\Exp{a_{n-1}-a_{n-2}}. \end{align*} \subsection{Shifts} We assume a Pearson equation for the weight $\theta(z+1)w(z+1)=\sigma(z)w(z)$ for two polynomials $\theta$ and $\sigma$. What happens with $F^{(\pm)}_n(z)=Y_n(z\pm 1)$ when the set of nodes is $\mathbb{Z}_+$? \begin{enumerate} \item $F^{(+)}$ is holomorphic at $\mathbb{C}\setminus\{-1,0,1,\dots\}$; it has an additional simple pole at $-1$. $F^{(-)}$ is holomorphic at $\mathbb{C}\setminus\{1,2,\dots\}$, so $0$ is no longer a pole. \item It has the same asymptotics at infinity as $Y_n$ \begin{align*} F^{(\pm)}_n(z)&=\Big(I_2+O\Big(\frac{1}{z}\Big)\Big)\begin{pmatrix} z^n & 0\\ 0 & z^{-n} \end{pmatrix}, & |z|&\to\infty. \end{align*} \item The residues can be computed with the use of the following \emph{jump} condition \begin{align*} 2\pi\operatorname{i} \res{F^{(\pm)}_n}{k}=2\pi\operatorname{i} \res{Y_n}{k\pm 1}=\lim_{z\to k\pm 1} Y_n\begin{pmatrix} 0 & w\\0 &0 \end{pmatrix}. \end{align*} \end{enumerate} \end{comment} Introducing $\chi(z):=\left(\begin{NiceMatrix} 1&z&z^2&\Cdots \end{NiceMatrix}\right)^\top$, the moment matrix is $G=\left\langle\rho, \chi\chi^\top\right\rangle$, and $\chi$ is an eigenvector of the \emph{shift matrix}, $\Lambda \chi=x\chi$, where \begin{align*} \Lambda:=\left(\begin{NiceMatrix}[columns-width = auto] 0 & 1 & 0 &\Cdots&\\ \Vdots& \Ddots&\Ddots &\Ddots&\\ &&&&\\ &&&& \end{NiceMatrix}\right). \end{align*} Hence $\Lambda G=G\Lambda^\top$, and the moment matrix is a Hankel matrix. 
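A quick finite-truncation check (our own sketch; the moment sequence $\rho_n=1/(n+1)$ is merely a convenient Hankel example, not a weight studied in this paper) of the eigenvector property $\Lambda\chi=x\chi$ and of the Hankel symmetry $\Lambda G=G\Lambda^\top$; a finite truncation reproduces these identities exactly away from the truncated border:

```python
# K x K truncation of the shift matrix Lambda and of a Hankel moment matrix.
from fractions import Fraction

K = 4
Lam = [[1 if j == i + 1 else 0 for j in range(K)] for i in range(K)]

z = Fraction(3, 2)                                  # arbitrary evaluation point
chi = [z**n for n in range(K)]
Lam_chi = [sum(Lam[i][j] * chi[j] for j in range(K)) for i in range(K)]

rho = [Fraction(1, n + 1) for n in range(2 * K)]    # toy Hankel moments
G = [[rho[i + j] for j in range(K)] for i in range(K)]
LG = [[sum(Lam[i][r] * G[r][j] for r in range(K)) for j in range(K)]
      for i in range(K)]
GLt = [[sum(G[i][r] * Lam[j][r] for r in range(K)) for j in range(K)]
       for i in range(K)]
```

The last row/column of the truncation is affected by the cut, so the comparisons below are restricted to the untouched $(K-1)\times(K-1)$ block.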
As the moment matrix is symmetric, its Borel--Gauss factorization is a Cholesky factorization \begin{align}\label{eq:Cholesky} G=S^{-1}HS^{-\top}, \end{align} where $S$ is a lower unitriangular matrix that can be written as \begin{align*} S=\left(\begin{NiceMatrix}[columns-width = auto] 1 & 0 &\Cdots &\\ S_{1,0} & 1&\Ddots&\\ S_{2,0} & S_{2,1} & \Ddots &\\ \Vdots & \Ddots& \Ddots& \end{NiceMatrix}\right), \end{align*} and $H=\operatorname{diag}(H_0,H_1,\dots)$ is a diagonal matrix, with $H_k\neq 0$, for $k\in\mathbb{N}_0$. The Cholesky factorization holds whenever the principal minors of the moment matrix, i.e., the Hankel determinants $\varDelta_k$, do not vanish. The components $P_n(z)$ of \begin{align}\label{eq:PS} P(z):=S\chi(z), \end{align} are the monic orthogonal polynomials of the functional $\rho$. \begin{pro}\label{pro:Hankel} We have the determinantal expressions \begin{align*} H_{k}&=\frac{\varDelta_{k+1}}{\varDelta_k},& p^1_k&=-\frac{\tilde \varDelta_k}{\varDelta_k}, \end{align*} with the Hankel determinants given by \begin{align*} \varDelta_k&:=\det\begin{pNiceMatrix} \rho_{0}&\Cdots & &\rho_{k-2}&\rho_{k-1}\\ \Vdots & &\Iddots& \Iddots&\Vdots\\ & & & &\\ \rho_{k-2}& & &&\rho_{2k-3}\\[8pt] \rho_{k-1}& \Cdots& &\rho_{2k-3}&\rho_{2k-2} \end{pNiceMatrix}, & \tilde \varDelta_k&:=\det\begin{pNiceMatrix} \rho_{0}&\Cdots& &\rho_{k-2}&\rho_k\\ \Vdots & &\Iddots& \rho_{k-1}&\Vdots\\ & &\Iddots&\\ \rho_{k-2}& & &&\rho_{2k-2}\\[4pt] \rho_{k-1}& \Cdots& &\rho_{2k-3}&\rho_{2k-1} \end{pNiceMatrix}. \end{align*} \end{pro} We introduce the lower Hessenberg semi-infinite matrix \begin{align}\label{eq:Jacobi} J=S\Lambda S^{-1} \end{align} which has the vector $P(z)$ as an eigenvector with eigenvalue $z$: $JP(z)=zP(z)$. The Hankel condition $\Lambda G=G\Lambda^\top$ and the Cholesky factorization give \begin{align}\label{eq:symmetry_J} J H=(JH)^\top =HJ^\top. \end{align} As the Hessenberg matrix $JH$ is symmetric, the Jacobi matrix $J$ is tridiagonal. 
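The Cholesky factorization \eqref{eq:Cholesky} can be probed numerically. The following sketch (our own illustration, with an arbitrary four-node weight) computes $G=S^{-1}HS^{-\top}$ by an exact $LDL^\top$ recursion and recovers the monic orthogonal polynomials $P=S\chi$:

```python
# Exact LDL^T factorization G = A H A^T with A = S^{-1} unit lower triangular,
# for the moment matrix of a toy four-node discrete weight.
from fractions import Fraction

nodes = [0, 1, 2, 3]
weights = [Fraction(1), Fraction(1, 2), Fraction(1, 6), Fraction(1, 24)]

K = 4
rho = [sum(w * a**n for a, w in zip(nodes, weights)) for n in range(2 * K)]
G = [[rho[i + j] for j in range(K)] for i in range(K)]

A = [[Fraction(0)] * K for _ in range(K)]   # A = S^{-1}, unit lower triangular
H = [Fraction(0)] * K                       # H = diag(H_0, ..., H_{K-1})
for i in range(K):
    for j in range(i + 1):
        s = G[i][j] - sum(A[i][r] * H[r] * A[j][r] for r in range(j))
        if i == j:
            A[i][i], H[i] = Fraction(1), s
        else:
            A[i][j] = s / H[j]

# S = A^{-1}; row n of S holds the coefficients of the monic polynomial P_n
S = [[Fraction(0)] * K for _ in range(K)]
for i in range(K):
    S[i][i] = Fraction(1)
    for j in range(i - 1, -1, -1):
        S[i][j] = -sum(S[i][r] * A[r][j] for r in range(j + 1, i + 1))

def P(n, x):
    return sum(S[n][m] * x**m for m in range(n + 1))

# Gram matrix of the P_n's with respect to the weight: should be diag(H)
gram = [[sum(w * P(n, a) * P(m, a) for a, w in zip(nodes, weights))
         for m in range(K)] for n in range(K)]
```

By construction $SGS^\top=H$, so the Gram matrix of the $P_n$'s is exactly $\operatorname{diag}(H_0,\dots)$, and $\varDelta_2=H_0H_1$ matches the determinantal expressions of Proposition \ref{pro:Hankel}.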
The Jacobi matrix $J$ given in \eqref{eq:Jacobi} reads \begin{align*} J=\left(\begin{NiceMatrix}[columns-width = 0.5cm] \beta_0 & 1& 0&\Cdots& \\ \gamma_1 &\beta_1 & 1 &\Ddots&\\ 0 &\gamma_2 &\beta_2 &1 &\\ \Vdots&\Ddots& \Ddots& \Ddots&\Ddots \end{NiceMatrix}\right). \end{align*} The eigenvalue equation $JP=zP$ is a three-term recursion relation $zP_n(z)=P_{n+1}(z)+\beta_n P_n(z)+\gamma_n P_{n-1}(z)$, which, with the initial conditions $P_{-1}=0$ and $P_0=1$, completely determines the sequence of orthogonal polynomials $\{P_n(z)\}_{n\in\mathbb{N}_0}$ in terms of the recursion coefficients $\beta_n,\gamma_n$. The recursion coefficients, in terms of the Hankel determinants, are given by \begin{align}\label{eq:equations0} \beta_n&=p_n^1-p_{n+1}^1=-\frac{\tilde \varDelta_n}{\varDelta_n}+\frac{\tilde \varDelta_{n+1}}{\varDelta_{n+1}},& \gamma_{n+1}&=\frac{H_{n+1}}{H_{n}}=\frac{\varDelta_{n+2}\varDelta_{n}}{\varDelta_{n+1}^2},& n\in\mathbb{N}_0. \end{align} For future use we introduce the following diagonal matrices $\gamma:=\operatorname{diag} (\gamma_1,\gamma_2,\dots)$ and $\beta:=\operatorname{diag}(\beta_0 ,\beta_{1},\dots)$ and $J_-:=\Lambda ^\top \gamma$ and $J_+:=\beta+\Lambda$, so that we have the splitting $J=\Lambda^\top\gamma+\beta+\Lambda=J_-+J_+$. In general, given any semi-infinite matrix $A$, we will write $A=A_-+A_+$, where $A_-$ is a strictly lower triangular matrix and $A_+$ an upper triangular matrix. Moreover, $A_0$ will denote the diagonal part of $A$. The lower Pascal matrix is defined by \begin{align}\label{eq:Pascal_matrix} B&=(B_{n,m}), & B_{n,m}&:= \begin{cases} \displaystyle \binom{n}{m}, & n\geq m,\\ 0, &n<m, \end{cases} \end{align} so that \begin{align}\label{eq:Pascal} \chi(z+1)=B\chi(z). \end{align} Moreover, \begin{align*} B^{-1}&=(\tilde B_{n,m}), & \tilde B_{n,m}&:= \begin{cases} (-1)^{n+m}\displaystyle \binom{n}{m}, & n\geq m,\\ 0, &n<m, \end{cases} \end{align*} and $\chi(z-1)=B^{-1}\chi(z)$. 
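The Pascal-matrix identities \eqref{eq:Pascal} and the form of $B^{-1}$ can be verified directly on a finite truncation (an illustrative check of ours; since $B$ is lower triangular, the truncation is exact):

```python
# Lower Pascal matrix B and its inverse on a K x K truncation; entrywise,
# (B chi(z))_n = sum_m C(n, m) z^m = (z + 1)^n, so chi(z+1) = B chi(z).
from math import comb
from fractions import Fraction

K = 6
B = [[comb(n, m) if n >= m else 0 for m in range(K)] for n in range(K)]
Binv = [[(-1) ** (n + m) * comb(n, m) if n >= m else 0 for m in range(K)]
        for n in range(K)]

def chi(x):
    return [x**n for n in range(K)]

z = Fraction(5, 3)                          # arbitrary evaluation point
B_chi = [sum(B[n][m] * chi(z)[m] for m in range(K)) for n in range(K)]
B_Binv = [[sum(B[n][r] * Binv[r][m] for r in range(K)) for m in range(K)]
          for n in range(K)]
```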
The lower Pascal matrix and its inverse are explicitly given by \begin{align*} B&=\left(\begin{NiceMatrix}[columns-width =auto] 1&0&\Cdots\\ 1&1&0&\Cdots\\ 1&2&1&0&\Cdots&\\ 1& 3 & 3&1 & 0&\Cdots\\ 1&4 & 6 & 4 & 1&0&\Cdots\\ 1& 5 & 10 &10 &5&1&0&\Cdots\\ \Vdots & & & & & &\Ddots&\Ddots \end{NiceMatrix}\right),& B^{-1}&=\left(\begin{NiceMatrix}[r] 1&0&\Cdots\\ -1&1&0&\Cdots\\ 1&-2&1&0&\Cdots&\\ -1& 3 & -3&1 & 0&\Cdots\\ 1&-4 & 6 & -4 & 1&0&\Cdots\\ -1& 5 & -10 &10 &-5&1&0&\Cdots\\ \Vdots & & & & & &\Ddots&\Ddots \end{NiceMatrix}\right). \end{align*} In terms of these we introduce the \emph{dressed Pascal matrices} $\Pi:=SBS^{-1}$ and $\Pi^{-1}:=SB^{-1}S^{-1}$, which are connection matrices; i.e., \begin{align}\label{eq:PascalP} P(z+1)&=\Pi P(z), & P(z-1)&=\Pi^{-1}P(z). \end{align} The lower Pascal matrix can be expressed in terms of its subdiagonal structure as follows \begin{align*} B^{\pm 1}&=I\pm\Lambda^\top D+\big(\Lambda^\top\big)^2D^{[2]}\pm\big(\Lambda^\top\big)^3D^{[3]}+\cdots, \end{align*} where $D=\operatorname{diag}(1,2,3,\dots)$ and $D^{[k]}=\frac{1}{k}\operatorname{diag}\big(k^{(k)}, (k+1)^{(k)},(k+2)^{(k)},\cdots\big)$, in terms of the falling factorials $ x^{(k)}=x(x-1)(x-2)\cdots (x-k+1)$. That is, \begin{align*} D^{[k]}_n&=\frac{(n+k)\cdots (n+1)}{k}, & k&\in\mathbb{N}, & n&\in\mathbb{N}_0. \end{align*} The lower unitriangular factor can also be written in terms of its subdiagonals $S=I+\Lambda^\top S^{[1]}+\big(\Lambda^\top\big)^2S^{[2]}+\cdots$ with $S^{[k]}=\operatorname{diag} \big(S^{[k]}_0, S^{[k]}_1,\dots\big)$. From \eqref{eq:PS} it is clear the following connection between these subdiagonal coefficients and the coefficients of the orthogonal polynomials given in \eqref{eq:polynomials}: \begin{align}\label{eq:Sp} S^{[k]}_n=p^k_{n+k}. 
\end{align} We will use the \emph{shift operators} $T_\pm$ acting over the diagonal matrices as follows \begin{align*} T_-\operatorname{diag}(a_0,a_1,\dots)&:=\operatorname{diag} (a_1, a_2,\dots),& T_+\operatorname{diag}(a_0,a_1,\dots)&:=\operatorname{diag}(0,a_0,a_1,\dots). \end{align*} These shift operators have the following important properties, for any diagonal matrix $A=\operatorname{diag}(A_0,A_1,\dots)$ \begin{align}\label{eq:ladder_lambda} \Lambda A&=(T_-A)\Lambda,& A \Lambda &=\Lambda (T_+A), & A \Lambda^\top &=\Lambda^\top(T_-A), &\Lambda^\top A &= (T_+A)\Lambda ^\top. \end{align} \begin{rem} Notice that the standard notation, see \cite{NSU}, for the differences of a sequence $\{f_n\}_{n\in\mathbb{N}_0}$, \begin{align*} \Delta f_n&:=f_{n+1}-f_n, & n&\in\mathbb{N}_0, \\ \nabla f_n&=f_n-f_{n-1}, &n&\in\mathbb{N}, \end{align*} and $\nabla f_0= f_0$, connects with the shift operators by means of \begin{align*} T_-&=I+\Delta , & T_+&=I-\nabla. \end{align*} \end{rem} In terms of these shift operators we find \begin{align}\label{eq:theDs} 2D^{[2]}&=(T_-D)D, & 3D^{[3]}&=(T_-^2D)(T_-D)D=2(T_-D^{[2]})D=2D^{[2]}(T_-^2 D). \end{align} \begin{pro}\label{pro:Sinv} The inverse matrix $S^{-1}$ of the matrix $S$ expands as follows \begin{align*} S^{-1}&=I+\Lambda^\top S^{[-1]}+\big(\Lambda^\top\big)^2 S^{[-2]}+\cdots. \end{align*} The subdiagonals $S^{[-k]}$ are given in terms of the subdiagonals of $S$. In particular, \begin{align*} S^{[-1]}&=-S^{[1]}, \\S^{[-2]}&=-S^{[2]}+(T_-S^{[1]})S^{[1]},\\ S^{[-3]}&=-S^{[3]}+(T_-S^{[2]})S^{[1]} +(T_-^2S^{[1]})S^{[2]} -(T_-^2 S^{[1]}) (T_-S^{[1]})S^{[1]}. \end{align*} \end{pro} \begin{rem} Corresponding expansions for the dressed Pascal matrices are \begin{align*} \Pi^{\pm 1}=I+\Lambda^\top \pi^{[\pm 1]}+(\Lambda^\top)^2 \pi^{[\pm 2]}+\cdots \end{align*} with $\pi^{[\pm n]}=\operatorname{diag}(\pi^{[\pm n]}_0,\pi^{[\pm n]}_1,\dots)$. 
\end{rem} \begin{pro}[The dressed Pascal matrix coefficients] We have \begin{gather}\label{eq:pis} \begin{aligned} \pi^{[\pm 1]}_n&=\pm (n+1), & \pi^{[\pm 2]}_n&=\frac{(n+2)(n+1)}{2}\pm p^1_{n+2}(n+1)\mp (n+2) p^{1}_{n+1}\\&&&=\frac{(n+2)(n+1)}{2}\mp (n+1)\beta_{n+1} \mp p^{1}_{n+1}, \end{aligned}\\\label{eq:piss} \begin{multlined}[t][.9\textwidth] \pi^{[\pm 3]}_n=\pm\frac{(n+3)(n+2)(n+1)}{3}+\frac{(n+2)(n+1)}{2}p^1_{n+3}- \frac{(n+3)(n+2)}{2}p^1_{n+1}\\\pm (n+1) p^2_{n+3}\mp (n+3)p^2_{n+2}\pm (n+3)p^1_{n+2}p^1_{n+1}\mp(n+2)p^1_{n+3}p^1_{n+1}.\end{multlined} \end{gather} Moreover, the following relations are fulfilled \begin{gather}\label{eq:pis2} \begin{aligned} \pi^{[1]}+\pi^{[-1]}&=0, &\pi^{[2]}+\pi^{[-2]}&=2D^{[2]}, &\pi^{[3]}+\pi^{[-3]}&=2((T_-^2S^{[1]})D^{[2]}-(T_-D^{[2]})S^{[1]}). \end{aligned} \end{gather} \end{pro} \subsection{Discrete orthogonal polynomials and Pearson equation} We are interested in measures with support on the homogeneous lattice $\mathbb{N}_0$ as follows: $\rho=\sum_{k=0}^\infty \delta(z-k) w(k)$, with moments given by \begin{align}\label{eq:moments} \rho_n=\sum_{k=0}^\infty k^n w(k), \end{align} and, in particular, with $0$-th moment given by \begin{align}\label{eq:first_moment} \rho_0=\sum_{k=0}^\infty w(k). \end{align} The weights we consider in this paper satisfy the following \emph{discrete Pearson equation} \begin{align}\label{eq:Pearson0} \nabla (\sigma w)&=\tau w, \end{align} that is, $\sigma(k) w(k)-\sigma(k-1) w(k-1)=\tau(k)w(k)$, for $k\in\{1,2,\dots\}$, with $\sigma(z),\tau(z)\in\mathbb{R}[z]$. If we write $\theta:=\tau-\sigma$, the previous Pearson equation reads \begin{align}\label{eq:Pearson} \theta(k+1)w(k+1)&=\sigma(k)w(k), & k\in\mathbb{N}_0. \end{align} \begin{theorem}[Hypergeometric symmetries] Let the weight $w$ be subject to a discrete Pearson equation of the type \eqref{eq:Pearson}, where the functions $\theta,\sigma$ are polynomials, with $\theta(0)=0$. 
Then, \begin{enumerate} \item The moment matrix fulfills \begin{align}\label{eq:Gram symmetry} \theta(\Lambda)G=B\sigma(\Lambda)GB^\top. \end{align} \item The Jacobi matrix satisfies \begin{align}\label{eq:Jacobi symmetry} \Pi^{-1} H\theta(J^\top)=\sigma(J)H\Pi^\top, \end{align} and the matrices $H\theta(J^\top)$ and $\sigma(J)H$ are symmetric. \end{enumerate} \end{theorem} If $N+1:=\deg\theta(z)$ and $M:=\deg\sigma(z)$, and the zeros of these polynomials are $\{-b_i+1\}_{i=1}^{N}$ and $\{-a_i\}_{i=1}^M$, we write $\theta(z)= z(z+b_1-1)\cdots(z+b_{N}-1)$ and $\sigma(z)= \eta (z+a_1)\cdots(z+a_M)$. According to \eqref{eq:first_moment} the $0$-th moment \begin{align*} \rho_0&=\sum_{k=0}^\infty w(k)=\sum_{k=0}^\infty \frac{(a_1)_k\cdots(a_M)_k}{(b_1)_k\cdots(b_{N})_k}\frac{\eta^k}{k!}=\tensor[_M]{F}{_{N}} (a_1,\dots,a_M;b_1,\dots,b_{N};\eta) = {\displaystyle \,{}_{M}F_{N}\left[{\begin{matrix}a_{1}&\cdots &a_{M}\\b_{1}&\cdots &b_{N}\end{matrix}};\eta\right]} \end{align*} is the generalized hypergeometric function, where we are using the two standard notations, see \cite{generalized_hypegeometric_functions}. Then, according to \eqref{eq:moments}, for $n\in\mathbb{N}$, the corresponding higher moments $\rho_n=\sum_{k=0}^\infty k^n w(k)$, are \begin{align*} \rho_n&=\vartheta_\eta^n\rho_0=\vartheta_\eta^n\Big({\displaystyle \,{}_{M}F_{N}\left[{\begin{matrix}a_{1}&\cdots &a_{M}\\b_{1}&\cdots &b_{N}\end{matrix}};\eta\right]}\Big), &\vartheta_\eta:=\eta\frac{\partial }{\partial \eta}. \end{align*} Given a function $f(\eta)$, we consider the Wronskian \begin{align*} \mathscr W_k(f)=\det\begin{pNiceMatrix}[columns-width = 0.5cm] f &\vartheta_\eta f& \vartheta_\eta^2f&\Cdots &\vartheta_\eta^kf\\ \vartheta_\eta f& \vartheta_\eta^2f&&\Iddots& \vartheta_\eta^{k+1}f\\ \vartheta_\eta^2 f&&&&\\ \Vdots& &\Iddots&&\\ \vartheta_\eta^kf&\vartheta_\eta^{k+1}f& &&\vartheta_\eta^{2k}f \end{pNiceMatrix}. 
\end{align*} Then, we have that the Hankel determinants $\varDelta_k=\det G^{[k]}$ determined by the truncations of the corresponding moment matrix are Wronskians of generalized hypergeometric functions, \begin{align}\label{eq:hankel_hyper1} \varDelta_k&=\tau_k, & \tau_k&:=\mathscr W_{k}\Big({\displaystyle \,{}_{M}F_{N}\left[{\begin{matrix}a_{1}&\cdots &a_{M}\\b_{1}&\cdots &b_{N}\end{matrix}};\eta\right]}\Big),\\ \tilde \varDelta_k &=\vartheta_\eta\tau_k.\label{eq:hankel_hyper2} \end{align} Moreover, using Proposition \ref{pro:Hankel} we get \begin{align}\label{eq:Wp_n} H_k&=\frac{\tau_{k+1}}{\tau_k}, & p^1_k&=-\vartheta_\eta\log \tau_k. \end{align} The functions $\tau_k$ are known in the literature on integrable systems as tau functions. \begin{theorem}[Laguerre--Freud structure matrix] Let us assume that the weight $w$ solves the discrete Pearson equation \eqref{eq:Pearson} with $\theta,\sigma$ polynomials such that $\theta(0)=0$, $\deg\theta(z)=N+1$, $ \deg\sigma(z)=M$. Then, the Laguerre--Freud structure matrix \begin{align}\label{eq:Psi} \Psi&:=\Pi^{-1}H\theta(J^\top)=\sigma(J)H\Pi^\top=\Pi^{-1}\theta(J)H=H\sigma(J^\top)\Pi^\top\\ &=\theta(J+I)\Pi^{-1} H=H\Pi^\top\sigma(J^\top-I),\label{eq:Psi2} \end{align} has only $N+M+2$ possibly nonzero diagonals ($N+1$ superdiagonals and $M$ subdiagonals) \begin{align*} \Psi=(\Lambda^\top)^M\psi^{(-M)}+\dots+\Lambda^\top \psi^{(-1)}+\psi^{(0)}+ \psi^{(1)}\Lambda+\dots+\psi^{(N+1)}\Lambda^{N+1}, \end{align*} for some diagonal matrices $\psi^{(k)}$. 
In particular, the lowest subdiagonal and highest superdiagonal are given by \begin{align}\label{eq:diagonals_Psi} \left\{ \begin{aligned} (\Lambda^\top)^M\psi^{(-M)}&=\eta(J_-)^MH,& \psi^{(-M)}=\eta H\prod_{k=0}^{M-1}T_-^k\gamma=\eta\operatorname{diag}\Big(H_0\prod_{k=1}^{M}\gamma_k, H_1\prod_{k=2}^{M+1}\gamma_k,\dots\Big),\\ \psi^{(N+1)} \Lambda^{N+1}&=H(J_-^\top)^{N+1},& \psi^{(N+1)}=H\prod_{k=0}^{N}T_-^k\gamma=\operatorname{diag}\Big(H_0\prod_{k=1}^{N+1}\gamma_k, H_1\prod_{k=2}^{N+2}\gamma_k,\dots\Big). \end{aligned} \right. \end{align} The vector $P(z)$ of orthogonal polynomials fulfills the following structure equations \begin{align}\label{eq:P_shift} \theta(z)P(z-1)&=\Psi H^{-1} P(z), & \sigma(z)P(z+1)&=\Psi^\top H^{-1} P(z). \end{align} \end{theorem} The compatibility of the recursion relation, i.e., of the eigenvalue equation for the Jacobi matrix, with the structure equations \eqref{eq:P_shift} leads to some interesting equations: \begin{pro} The following compatibility conditions for the Laguerre--Freud and Jacobi matrices hold \begin{subequations} \begin{align}\label{eq:compatibility_Jacobi_structure_a} [\Psi H^{-1},J]&=\Psi H^{-1}, \\ \label{eq:compatibility_Jacobi_structure_b} [J, \Psi^\top H^{-1}]&=\Psi^\top H^{-1}. \end{align} \end{subequations} \end{pro} \subsection{The Toda flows} Let us define the strictly lower triangular matrix \begin{align*} \Phi&:=(\vartheta_{\eta} S) S^{-1}. \end{align*} \begin{pro} \begin{enumerate} \item The semi-infinite vector $P$ fulfills \begin{align}\label{eq:MP} \vartheta_{\eta} P=\Phi P. \end{align} \item The Sato--Wilson equation holds \begin{align}\label{eq:Mscr} -\Phi H+\vartheta_\eta H-H \Phi^\top&=JH. \end{align} Consequently, $\Phi=-J_-$ and, for $n\in\mathbb{N}_0$, we have $\vartheta_\eta \log H_n=J_{n,n}$. \end{enumerate} \end{pro} Moreover, \begin{pro}[Toda] The following equations hold \begin{subequations} \begin{align}\label{eq:MS} \Phi =(\vartheta_{\eta} S)S^{-1}&=-\Lambda^\top\gamma,\\\label{eq:MS0} (\vartheta_\eta H) H^{-1}&=\beta. 
\end{align} \end{subequations} In particular, for $n,k-1\in\mathbb{N}$, we have \begin{subequations} \begin{gather}\label{eq:equations} \begin{aligned} \vartheta_\eta p^1_n&=-\gamma_n, & \vartheta_{\eta}p^k_{n+k}&=-\gamma_{n+k}p^{k-1}_{n+k-1}, \end{aligned}\\ \vartheta_\eta \log H_n=\beta_ n.\label{eq:equationsH} \end{gather} \end{subequations} The functions $q_n:=\log H_n$, $n\in\mathbb{N}$, satisfy the Toda equations \begin{align}\label{eq:Toda_equation} \vartheta_\eta^2q_n=\Exp{q_{n+1}-q_n}-\Exp{q_n-q_{n-1}}. \end{align} For $n\in\mathbb{N}$, we also have $\vartheta_\eta P_{n}(z)=-\gamma_n P_{n-1}(z)$. \end{pro} \begin{pro} The following Lax equation holds $ \vartheta_\eta J=[J_+,J]$. The recursion coefficients satisfy the following Toda system \begin{subequations}\label{eq:Toda_system} \begin{align}\label{eq:Toda_system_beta} \vartheta_\eta\beta_n&=\gamma_{n+1}-\gamma_n,\\\label{eq:Toda_system_gamma} \vartheta_\eta\log\gamma_n&=\beta_{n}-\beta_{n-1}, \end{align} \end{subequations} for $n\in\mathbb{N}_0$ and $\beta_{-1}=0$. Consequently, we get \begin{align}\label{eq:Toda_equation_gamma} \vartheta_\eta^2\log\gamma_n+2\gamma_n&=\gamma_{n+1}+\gamma_{n-1}. \end{align} \end{pro} For the compatibility of \eqref{eq:PascalP} and \eqref{eq:MP}, that is for the compatibility of the systems \begin{align*} \begin{cases} \begin{aligned} P(z+1)&=\Pi P(z),\\ \vartheta_\eta (P(z))&=\Phi P(z). \end{aligned} \end{cases} \end{align*} we obtain $\vartheta_\eta(\Pi )=[\Phi ,\Pi ]$. In the general case the dressed Pascal matrix $\Pi$ is a lower unitriangular semi-infinite matrix, that possibly has an infinite number of subdiagonals. However, for the case when the weight $w(z)=v(z)\eta^z$ satisfies the Pearson equation \eqref{eq:Pearson}, with $v$ independent of $\eta$, that is $\theta(k+1)v(k+1)\eta=\sigma(k)v(k)$, the situation improves as we have the banded semi-infinite matrix $\Psi$ that models the shift in the $z$ variable as in \eqref{eq:P_shift}. 
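For orientation, the Toda system \eqref{eq:Toda_system} can be checked against the classical (non-generalized) Charlier weight $w(k)=\eta^k/k!$, whose recursion coefficients are known in closed form, $\beta_n=n+\eta$, $\gamma_n=n\eta$ (a standard fact quoted here only for this sanity check; it is not derived in the text):

```python
# Closed-form check of the Toda system for the classical Charlier coefficients
# beta_n = n + eta, gamma_n = n * eta (an assumed, standard closed form).
from fractions import Fraction

eta = Fraction(7, 5)                        # arbitrary positive value

def beta(n):
    return n + eta

def gamma(n):
    return n * eta

# vartheta_eta = eta * d/d eta, evaluated on the closed forms by hand:
# on beta_n = n + eta it gives eta; on log gamma_n = log n + log eta it gives 1.
checks_beta = [(eta, gamma(n + 1) - gamma(n)) for n in range(6)]
checks_gamma = [(Fraction(1), beta(n) - beta(n - 1)) for n in range(1, 6)]
```

The same closed forms also satisfy \eqref{eq:Toda_equation_gamma}, since $\vartheta_\eta^2\log\gamma_n=0$ and $\gamma_{n+1}+\gamma_{n-1}=2\gamma_n$.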
From the previous discrete Pearson equation we see that $\sigma(z)=\eta\kappa(z)$, with $\kappa,\theta$ $\eta$-independent polynomials in $z$, so that \begin{align}\label{eq:Pearson_Toda} \theta(k+1)v(k+1)=\eta\kappa(k)v(k). \end{align} \begin{pro} Let us assume a weight $w$ satisfying the Pearson equation \eqref{eq:Pearson}. Then, the Laguerre--Freud structure matrix $\Psi$ given in \eqref{eq:Psi} satisfies \begin{subequations} \begin{align}\label{eq:eta_compatibility_Pearson_1a} \vartheta_\eta(\eta^{-1}\Psi^\top H^{-1} )&=[\Phi ,\eta^{-1}\Psi^\top H^{-1} ], \\ \vartheta_\eta(\Psi H^{-1} )&=[\Phi ,\Psi H^{-1} ].\label{eq:eta_compatibility_Pearson_1b} \end{align} \end{subequations} Relations \eqref{eq:eta_compatibility_Pearson_1a} and \eqref{eq:eta_compatibility_Pearson_1b} are \emph{gauge} equivalent. \end{pro} \section{Generalized Charlier weights}\label{S:Charlier} A particularly simple case is when $\sigma$ does not depend upon $z$; we name these extended Charlier weights. The generalized Charlier weight, the main example of these extended Charlier weights, corresponds to the choice $\sigma(z)=\eta$ and $\theta(z)= z(z+b)$. In fact, a weight that satisfies the Pearson equation \begin{align*} (k+1)(k+1+b)w(k+1)=\eta w(k) \end{align*} is proportional to the generalized Charlier weight \begin{align*} w(z)=\frac{\Gamma(b+1)\eta^z}{\Gamma(z+b+1)\Gamma(z+1)}=\frac{1}{(b+1)_z}\frac{\eta^z}{z!}. \end{align*} The finiteness of the moments requires $b>-1$ and $\eta>0$, see \cite{diego_paco}. The generalized Charlier $0$-th moment, as we have discussed, is the confluent hypergeometric limit function $\rho_0(\eta)={}_0F_1(;b+1;\eta)$. \begin{rem} The relation of this function with the Bessel functions $J_{\nu}$ and $I_{\nu}$ is well known: \begin{align*} \frac {({\tfrac {\eta}{2}})^{b}}{\Gamma (b+1)}\rho_{0}\Big(-\frac{\eta^2}{4}\Big)&= J_{b}(\eta),& \frac {({\tfrac {\eta}{2}})^{b}}{\Gamma (b+1)}\rho_{0}\Big(\frac{\eta^2}{4}\Big)&= I_{b}(\eta). 
\end{align*} Here $J_{\nu}$ is the $\nu$-th Bessel function and $I_{\nu}$ the $\nu$-th modified Bessel function; they are connected by $\Exp{\pm \operatorname{i} \frac{\nu\pi}{2}}I_{\nu}(x)= J_{\nu}(\Exp{\pm \operatorname{i} \frac{\nu\pi}{2}} x)$. Thus, in terms of these modified Bessel functions, we can write, see \cite{diego_paco}, \begin{align*} \rho_0(\eta)=\frac{\Gamma(b+1)}{\sqrt{\eta}^b}I_{b}(2\sqrt{\eta}). \end{align*} \end{rem} \begin{rem} The \emph{non-generalized} Charlier polynomials or Poisson--Charlier polynomials were discussed by Charlier in \cite{charlier}. \end{rem} \begin{pro} Let us consider an extended Charlier type weight $w$, i.e. $\theta(k+1)w(k+1)=\eta w(k)$ with $\theta(0)=0$. Then, the semi-infinite matrix $H\theta(J^\top)$ admits the Cholesky factorization \begin{align}\label{eq:Pascal_Chirstoffel} \theta(J)H= H\theta(J^\top)=\Theta^{-1} H \Theta^{-\top}=\Pi H\Pi^\top, \end{align} with the dressed Pascal semi-infinite matrix $\Pi=\Theta^{-1}$ being a lower unitriangular semi-infinite matrix with only its first $\deg\theta$ subdiagonals different from zero. \end{pro} \begin{proof} From \eqref{eq:Psi} we have $\Pi^{-1}H\theta(J^\top)=H\Pi^\top$; i.e. $H\theta(J^\top)=\Pi H\Pi^\top$ and given the uniqueness of the Cholesky factorization and its band structure we deduce the result. \end{proof} \begin{pro} For an extended Charlier type weight, with the choice $\sigma=\eta$ and $\theta(z)=z( z^{N-1}+\cdots+\theta_1)$, we have the following subdiagonal structure for the dressed Pascal matrix $\Pi=\sum_{k=0}^{N}(\Lambda^\top)^k\pi^{[k]}$ with the main diagonal and the first subdiagonal given by $\pi^{[0]}=I$ and $\pi^{[1]}=D=\operatorname{diag}(1,2,3,\dots)$, and the lowest subdiagonal \begin{align*} (\Lambda^\top)^N\pi^{[N]}&=(J_-)^{N},& \pi^{[N]}=\prod_{k=0}^{N-1}T_-^k\gamma=\operatorname{diag}\Big(\prod_{k=1}^{N}\gamma_k, \prod_{k=2}^{N+1}\gamma_k,\dots\Big)=\operatorname{diag}\Big(\frac{H_N}{H_0}, \frac{H_{N+1}}{H_1},\dots\Big). 
\end{align*} \end{pro} \begin{proof} The only new fact to prove is the explicit expression for the lowest subdiagonal, which obviously comes from the lowest diagonal of $\theta(J)$, that is, $J_-^N$. \end{proof} \begin{theorem}[The generalized Charlier Laguerre--Freud structure matrix]\label{teo:generalized Charlier} For a generalized Charlier weight, i.e. $\sigma=\eta$ and $\theta= z(z+b)$, the Laguerre--Freud structure matrix is \begin{align*} \Psi& = \left(\begin{NiceMatrix}[columns-width = 0.3cm] \eta H_0 & \eta H_0 &H_2 &0&0&\Cdots\\ 0 & \eta H_1 & 2 \eta H_1& H_3 &0&\Ddots\\ 0 & 0&\eta H_2&3\eta H_2& H_4&\Ddots \\ \Vdots &\Ddots &\Ddots & \Ddots &\Ddots&\Ddots \end{NiceMatrix}\right). \end{align*} \end{theorem} \begin{proof} For the structure matrix $\Psi$ we have that, see \eqref{eq:diagonals_Psi}, $\Psi=\psi^{(0)}+\psi^{(1)}\Lambda +\psi^{(2)}\Lambda^2$. Let us find these diagonal matrix coefficients. On the one hand, as \begin{align}\label{eq:Charlier_Psi_0} \Psi=\eta H\Pi^\top=\underset{\text{main diagonal}}{ \underbrace{\eta H}}+\underset{\text{first superdiagonal}}{ \underbrace{\eta H D\Lambda}}+ \underset{\text{second superdiagonal}}{ \underbrace{\eta H\pi^{[2]} \Lambda^2}}+\cdots, \end{align} we get from the main diagonal that $\psi^{(0)}=\eta H$, and from the first superdiagonal that $\psi^{(1)}=\eta HD$. 
On the other hand, we observe that \begin{align*} \Psi&=\Pi^{-1}H\Big((J^\top)^2+bJ^\top\Big)\\&= \begin{multlined}[t][.9\textwidth] \big(I-\Lambda^\top D+(\Lambda^\top)^2\pi^{[-2]}+\cdots\big)H\\ \times \big((\Lambda^\top)^2+ \Lambda^\top (\beta+{T_-}\beta+bI)+\gamma+T_+\gamma+\beta^2+b\beta+ (\beta+{T_-}\beta+bI)\gamma \Lambda+({T_-}\gamma )\gamma\Lambda^2\big) \end{multlined} \end{align*} so that \begin{multline}\label{eq:Charlier_Psi_1} \Psi=\cdots +\underset{\text{main diagonal}}{ \underbrace{H(\gamma+T_+\gamma+\beta^2+b\beta) -\Lambda^\top D H(\beta+{T_-}\beta+bI)\gamma \Lambda+(\Lambda^\top)^2\pi^{[-2]}H({T_-}\gamma )\gamma\Lambda^2}}\\+ \underset{\text{first superdiagonal}}{ \underbrace{H(\beta+{T_-}\beta+bI)\gamma \Lambda-\Lambda^\top DH({T_-}\gamma )\gamma\Lambda^2}}+\underset{\text{second superdiagonal}}{ \underbrace{ H({T_-}\gamma )\gamma\Lambda^2,}} \end{multline} and the result is proven. \end{proof}\enlargethispage{1cm} \begin{pro}[Compatibility] For a generalized Charlier weight the recursion coefficients fulfill \begin{align}\label{eq:charlier_compatibility} n \eta (\beta_n-\beta_{n-1}-1)&=(-\gamma_{n+1}+\gamma_{n-1})\gamma_n, \end{align} for $n\in\mathbb{N}$ and $\gamma_0=0$. Alternative forms of this equation are \begin{subequations} \begin{align}\label{eq:eta_compatibility_charlier_1} n\vartheta_\eta\Big(\frac{\eta}{\gamma_n}\Big)&=\gamma_{n+1}-\gamma_{n-1},\\ \label{eq:eta_compatibility_charlier_2} \vartheta_\eta\Big(\frac{\gamma_n\gamma_{n+1}}{\eta}\Big)&= (n+1)\gamma_{n}- n\gamma_{n+1}, \end{align} \end{subequations} for $n\in\mathbb{N}$ and $\gamma_0=0$. We also have \begin{align}\label{eq:a dos pasos} \beta_n-\beta_{n-2}-1&=\eta\big(n\gamma_{n}^{-1}-(n-1)\gamma_{n-1}^{-1}\big), & n\geq 2. 
\end{align} \end{pro} \begin{proof} Recalling $\gamma=({T_-}H)H^{-1}$ we write \begin{align*} \Psi H^{-1}&=\eta+\eta HD\Lambda H^{-1}+ H({T_-}\gamma)\gamma\Lambda^2 H^{-1}=\eta+\eta H({T_-}H)^{-1}D\Lambda +H({T_-}^2H)^{-1}({T_-}\gamma)\gamma\Lambda^2 \\&= \eta+\eta \gamma^{-1}D\Lambda +\Lambda^2\\ &= \left(\begin{NiceMatrix} \eta & \frac{\eta}{\gamma_1} &1 & 0 &\Cdots \\ 0 &\eta &\frac{2\eta}{\gamma_2} &1 &\Ddots \\ 0&0 &\eta &\frac{3\eta}{\gamma_3} &\Ddots\\ \Vdots& \Ddots&\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right). \end{align*} The compatibility equation \eqref{eq:compatibility_Jacobi_structure_a}, $[\Psi H^{-1},J]=\Psi H^{-1}$, reads \begin{gather*} \left(\begin{NiceMatrix} \eta & \frac{\eta}{\gamma_1} &1 & 0 &\Cdots \\ 0 &\eta &\frac{2\eta}{\gamma_2} &1 &\Ddots \\ \Vdots& \Ddots&\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right) =\begin{aligned}[t] &-\left(\begin{NiceMatrix} \beta_0 & 1& 0&\Cdots\\ \gamma_1 &\beta_1 & 1&\Ddots\\ 0 &\gamma_2 &\beta_2 & \Ddots\\ \Vdots&\Ddots& \Ddots &\Ddots \end{NiceMatrix} \right) \left(\begin{NiceMatrix} \eta & \frac{\eta}{\gamma_1} &1 & 0 &\Cdots \\ 0 &\eta &\frac{2\eta}{\gamma_2} &1 &\Ddots \\ \Vdots& \Ddots&\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right)\\&+ \left(\begin{NiceMatrix} \eta & \frac{\eta}{\gamma_1} &1 & 0 &\Cdots \\ 0 &\eta &\frac{2\eta}{\gamma_2} &1 &\Ddots \\ \Vdots& \Ddots&\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right)\left(\begin{NiceMatrix} \beta_0 & 1& 0&\Cdots\\ \gamma_1 &\beta_1 & 1&\Ddots\\ 0 &\gamma_2 &\beta_2 & \Ddots\\ \Vdots&\Ddots& \Ddots &\Ddots \end{NiceMatrix} \right) \end{aligned}\\ \hspace*{-1cm}\small= \left( \begin{NiceMatrix}[columns-width = 1.5cm] \eta & \cellcolor{Gray!8} \frac{\eta(\beta_1-\beta_0 )}{\gamma_1} +\gamma_2& \cellcolor{Gray!15}\eta(\frac{1}{\gamma_1} -\frac{2}{\gamma_2}) +\beta_2-\beta_0&0&\Cdots & \\[5pt] 0&\eta& \cellcolor{Gray!8}\frac{2\eta(\beta_2-\beta_1)}{\gamma_2}+\gamma_3-\gamma_1 & \cellcolor{Gray!15} \eta(\frac{2}{\gamma_2} -\frac{3}{\gamma_3})+\beta_3-\beta_1&0 & \Cdots 
\\[5pt] \Vdots&\Ddots&\Ddots &\Ddots &\Ddots &\Ddots \\[15pt] \end{NiceMatrix} \right)\\ \end{gather*} and, consequently, we get \eqref{eq:charlier_compatibility} on the first superdiagonal and from the second superdiagonal we get \eqref{eq:a dos pasos}. Let us look at the alternative expression \eqref{eq:eta_compatibility_charlier_1}. First, using the Toda system \eqref{eq:Toda_system_gamma} we see that \eqref{eq:eta_compatibility_charlier_1} is equivalent to \eqref{eq:charlier_compatibility}. Indeed, from \eqref{eq:Toda_system_gamma} we get \begin{align*} n\vartheta_\eta\Big(\frac{\eta}{\gamma_n}\Big)=\frac{n\eta}{\gamma_n}-\frac{n\eta\vartheta_{\eta} \gamma_n}{\gamma_n^2}=- \frac{n\eta(\beta_n-\beta_{n-1}-1)}{\gamma_n}, \end{align*} and the statement follows. An alternative proof for \eqref{eq:eta_compatibility_charlier_1} is obtained from the compatibility condition \eqref{eq:eta_compatibility_Pearson_1b}, i.e., \begin{align*} \vartheta_\eta\left(\begin{NiceMatrix} \eta & \frac{\eta}{\gamma_1} &1 & 0 &\Cdots \\ 0 &\eta &\frac{2\eta}{\gamma_2} &1 &\Ddots \\ 0&0 &\eta &\frac{3\eta}{\gamma_3} &\Ddots\\[5pt] \Vdots& \Ddots&\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right) &= \left(\begin{NiceMatrix}[columns-width = 0.5cm] \eta & \gamma_2& 0&\Cdots& \\ 0&\eta& \gamma_3-\gamma_1 &0 &\Cdots \\ 0&0& \eta & \gamma_4-\gamma_2 &\Ddots \\ \Vdots&\Ddots&\Ddots &\Ddots & \Ddots\\ \end{NiceMatrix}\right). \end{align*} Equation \eqref{eq:eta_compatibility_Pearson_1a} with \begin{align*} \Pi=\eta^{-1}\Psi^\top H^{-1}=\left(\begin{NiceMatrix}[columns-width = auto] 1 & 0 &0&\Cdots\\ 1 & 1& 0&\Ddots\\ \frac{\gamma_2\gamma_1}{\eta}&2&1&\Ddots\\ 0& \frac{\gamma_3\gamma_2}{\eta} &3 &\Ddots\\[5pt] \Vdots &\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right) \end{align*} gives \eqref{eq:eta_compatibility_charlier_2}. 
\end{proof} \begin{comment} \begin{pro} [ODE for $\gamma_n$] The recursion coefficient $\gamma_{n}$ for the generalized Charlier orthogonal polynomials fulfills the following third order ODE \begin{align}\label{eq:ODE_gamma} \frac{\d}{\d\eta}\Big( \eta\frac{\d^2\gamma_n}{\d\eta^2} +\Big(1-\frac{\eta}{\gamma_n}\frac{\d\gamma_n}{\d\eta}\Big)\frac{\d\gamma_n}{\d\eta}+2\frac{\gamma_n^2}{\eta}+n^2\frac{\eta}{\gamma_n}\Big)- 2 \frac{\gamma_{n}}{\eta} =0. \end{align} \end{pro} \begin{proof} From Equations \eqref{eq:Toda_equation_gamma} and \eqref{eq:eta_compatibility_charlier_1}, that for convenience, we write together here \begin{align*} \vartheta_\eta^2\log\gamma_n+2\gamma_n&=\gamma_{n+1}+\gamma_{n-1},& n\vartheta_\eta\Big(\frac{\eta}{\gamma_n}\Big)&=\gamma_{n+1}-\gamma_{n-1}, \end{align*} we deduce the relations \begin{align}\label{eq:ode_pre} \vartheta_\eta^2\log\gamma_n+2\gamma_n\pm n\vartheta_\eta\Big(\frac{\eta}{\gamma_n}\Big)&=2\gamma_{n\pm1}. \end{align} Observing that \begin{align*} \vartheta_{\eta}^2\log\gamma_n&=\vartheta_{\eta}\Big(\frac{\eta}{\gamma_n}\Big)\frac{\d\gamma_n}{\d\eta} +\frac{\eta^2}{\gamma_n}\frac{\d^2\gamma_n}{\d\eta^2},& \vartheta_{\eta}\Big(\frac{\eta}{\gamma_n}\Big)&=\frac{\eta}{\gamma_n}-\frac{\eta^2}{\gamma_n^2}\frac{\d\gamma_n}{\d\eta} \end{align*} from \eqref{eq:ode_pre} we obtain \begin{align}\label{eq:edo_intermedia} \frac{\eta^2}{\gamma_n}\frac{\d^2\gamma_n}{\d\eta^2} +\frac{\eta}{\gamma_n}\Big(1-\frac{\eta}{\gamma_n}\frac{\d\gamma_n}{\d\eta}\Big)\Big(\frac{\d\gamma_n}{\d\eta}\pm n\Big)+2\gamma_n=2\gamma_{n\pm 1}, \end{align} that after some cleaning reads \begin{align}\label{eq:edo_intermedia2} \eta\frac{\d^2\gamma_n}{\d\eta^2} +\Big(1-\frac{\eta}{\gamma_n}\frac{\d\gamma_n}{\d\eta}\Big)\Big(\frac{\d\gamma_n}{\d\eta}\pm n\Big)+2\frac{\gamma_n^2}{\eta}=2\frac{\gamma_n\gamma_{n\pm 1}}{\eta}. 
\end{align} Recalling \eqref{eq:eta_compatibility_charlier_2}, that we rewrite as \begin{align*} \pm\vartheta_\eta\Big(\frac{\gamma_n\gamma_{n\pm1}}{\eta}\Big)&= (n\pm 1)\gamma_{n}- n\gamma_{n\pm 1},\\ \end{align*} from Equation \eqref{eq:edo_intermedia2} we find \begin{align*} \mp\vartheta_\eta\Big( \eta\frac{\d^2\gamma_n}{\d\eta^2} +\Big(1-\frac{\eta}{\gamma_n}\frac{\d\gamma_n}{\d\eta}\Big)\Big(\frac{\d\gamma_n}{\d\eta}\pm n\Big)+2\frac{\gamma_n^2}{\eta}\Big)+2(n\pm 1)\gamma_{n}=2n\gamma_{n\pm 1}. \end{align*} Therefore, inserting Equation \eqref{eq:edo_intermedia} in the above relation we get \begin{multline*} \mp\vartheta_\eta\Big( \eta\frac{\d^2\gamma_n}{\d\eta^2} +\Big(1-\frac{\eta}{\gamma_n}\frac{\d\gamma_n}{\d\eta}\Big)\Big(\frac{\d\gamma_n}{\d\eta}\pm n\Big)+2\frac{\gamma_n^2}{\eta}\Big)+2(n\pm 1)\gamma_{n}\\ =n\Big(\frac{\eta^2}{\gamma_n}\frac{\d^2\gamma_n}{\d\eta^2} +\Big(\frac{\eta}{\gamma_n}-\frac{\eta^2}{\gamma_n^2}\frac{\d\gamma_n}{\d\eta}\Big)\Big(\frac{\d\gamma_n}{\d\eta}\pm n\Big)+2\gamma_n\Big), \end{multline*} that after some cleaning can be written as follows \begin{multline*} \mp\vartheta_\eta\Big( \eta\frac{\d^2\gamma_n}{\d\eta^2} +\Big(1-\frac{\eta}{\gamma_n}\frac{\d\gamma_n}{\d\eta}\Big)\frac{\d\gamma_n}{\d\eta}+2\frac{\gamma_n^2}{\eta}\Big)\pm 2 \gamma_{n}\mp n^2\frac{\eta}{\gamma_n}\Big(1-\frac{\eta}{\gamma_n}\frac{\d\gamma_n}{\d\eta}\Big)\\ =n\frac{\eta}{\gamma_n}\Big(\eta\frac{\d^2\gamma_n}{\d\eta^2} +\Big(1-\frac{\eta}{\gamma_n}\frac{\d\gamma_n}{\d\eta}\Big)\frac{\d\gamma_n}{\d\eta}\Big)-n\vartheta_\eta \Big(\frac{\eta}{\gamma_n}\frac{\d\gamma_n}{\d\eta}\Big)=0.\end{multline*} Hence, we find \eqref{eq:ODE_gamma}. 
\end{proof} \end{comment} \begin{theorem}[Third order ODE for generalized Charlier]\label{teo:third order} The recursion coefficient $\gamma_n$ of the generalized Charlier polynomials is subject to the following third order nonlinear ODE \begin{align}\label{eq:ode_gamma_2} \vartheta_{\eta}\Big(\frac{\gamma_n}{\eta}(\vartheta_\eta^2\log\gamma_n+2\gamma_n)+n^2\frac{\eta}{\gamma_n}\Big)=2\gamma_n. \end{align} \end{theorem} \begin{proof} On the one hand, Equation \eqref{eq:eta_compatibility_charlier_2} can be written as follows \begin{align*} \pm\vartheta_\eta\Big(\frac{\gamma_n\gamma_{n\pm1}}{\eta}\Big)&= (n\pm 1)\gamma_{n}- n\gamma_{n\pm 1}, \end{align*} and, consequently, we find \begin{align*} \vartheta_\eta\Big(\frac{\gamma_n\gamma_{n+1}+\gamma_{n-1}\gamma_n}{\eta}\Big)&= (n+ 1)\gamma_{n}- n\gamma_{n+ 1}- (n- 1)\gamma_{n}+ n\gamma_{n-1}=2\gamma_n-n(\gamma_{n+1}-\gamma_{n-1})\\&= 2\gamma_n-n^2\vartheta_\eta\Big(\frac{\eta}{\gamma_n}\Big), \end{align*} where \eqref{eq:eta_compatibility_charlier_1} has been used. On the other hand, \begin{align*} \vartheta_\eta\Big(\frac{\gamma_n\gamma_{n+1}+\gamma_{n-1}\gamma_n}{\eta}\Big)&=\vartheta_\eta\Big( \frac{\gamma_n}{\eta}(\gamma_{n+1}+\gamma_{n-1})\Big) =\vartheta_\eta\Big( \frac{\gamma_n}{\eta}(\vartheta_\eta^2\log\gamma_n+2\gamma_n)\Big), \end{align*} where the Toda equation \eqref{eq:Toda_equation_gamma} for the $\gamma$'s has been used. Hence, comparing the previous equations we get Equation \eqref{eq:ode_gamma_2}. 
\end{proof} \begin{rem} Using the relations \begin{align*} \vartheta_{\eta}^2\log\gamma_n&=\vartheta_{\eta}\Big(\frac{\eta}{\gamma_n}\Big)\frac{\d\gamma_n}{\d\eta} +\frac{\eta^2}{\gamma_n}\frac{\d^2\gamma_n}{\d\eta^2},& \vartheta_{\eta}\Big(\frac{\eta}{\gamma_n}\Big)&=\frac{\eta}{\gamma_n}-\frac{\eta^2}{\gamma_n^2}\frac{\d\gamma_n}{\d\eta} \end{align*} we write \eqref{eq:ode_gamma_2} as follows \begin{align*} \frac{\d}{\d\eta}\big(\eta \mathcal P'_{\text{III},n}(\gamma_n) \big)&=2 \frac{\gamma_{n}}{\eta}, & \mathcal P'_{\text{III},n}(\gamma_n)&:=\frac{\d^2\gamma_n}{\d\eta^2} -\frac{1}{\gamma_n}\Big(\frac{\d\gamma_n}{\d\eta}\Big)^2+\frac{1}{\eta}\frac{\d\gamma_n}{\d\eta}+\frac{2\gamma_n^2}{\eta^2}+\frac{n^2}{\gamma_n}. \end{align*} The notation here is motivated by Okamoto's alternative form $\text{P}_{\text{III}}'$ of the Painlevé III equation ($\mathcal P'_{\text{III},n}(u)=0$), see \cite{Okamoto}, listed as \href{https://dlmf.nist.gov/32.2}{32.2.9} in the Digital Library of Mathematical Functions (DLMF) at NIST, with the following choice of parameters given there: $\alpha=8$, $\gamma=\beta=0$ and $\delta=4n^2$. Okamoto's notation in \cite{Okamoto} is $\text{P}_{\text{III}'}$. In fact, the corresponding Painlevé equation is, after suitable rescaling of the dependent and independent variables, the $\text{P}_{\text{III}'}(D7)$ equation (9) with $\beta=0$ in \cite{Okamoto2}. See Theorem 1 (iv) in \cite{Okamoto2} and the comment immediately after. For the connection with $\text{P}_{\text{III}}$ see \cite{smet_vanassche,walter} and \cite{clarkson}. 
In particular, Remark 4.5 in \cite{clarkson} gives the functions $p,q$ of the Hamiltonian system $\mathcal H_{\text{III}'}$, see \cite{Okamoto}, in terms of $S_n=-p^1_n=\vartheta_{\eta}\tau_n$ \begin{align*} q&=\frac{\eta\frac{\d^2S_n}{\d\eta^2}-2(n+b)\frac{\d S_n}{\d\eta}+2n}{4 \frac{\d S_n}{\d\eta}\big(1-\frac{\d S_n}{\d\eta}\big)},& p&=\frac{\d S_n}{\d\eta} \end{align*} with parameters $\theta_0=n+b$ and $\theta_\infty=n-b$. In terms of $\gamma_n=\eta\frac{\d S_n}{\d\eta}$ we have \begin{align*} q&=\frac{\eta^2\frac{\d\gamma_n}{\d\eta}-(2n+2b+1)\eta\gamma_n+2n\eta^2}{4 \gamma_n\big(\eta^2-\gamma_n\big)},& p&=\frac{\gamma_n}{\eta}. \end{align*} From \cite[\S 2.4]{clarkson} we conclude that $q$ satisfies $P_{\text{III}'}$ with $\alpha=-4(n-b)$, $\beta=4(n+b+1)$, $\gamma =4$ and $\delta=-4$, in \href{https://dlmf.nist.gov/32.2}{32.2.9} at DLMF, or Equation (2.4) in \cite{clarkson} with $A=-2\theta_\infty=-2(n-b)$ and $B=2(\theta_0+1)=2(n+b+1)$, \begin{align*} \frac{\d^2q}{\d\eta^2}=\frac{1}{q}\Big(\frac{\d q}{\d\eta}\Big)^2-\frac{1}{\eta}\frac{\d q}{\d\eta}-(n-b)\frac{q^2}{\eta^2}+\frac{n+b+1}{\eta}+\frac{q^3}{\eta}-\frac{1}{q}. \end{align*} Notice also \cite{clarkson,Okamoto} that $p^1_n$ is a solution to the $\text{P}_{\text{III}'}$ $\sigma$-equation. After all these observations, one is led to conjecture that Equation \eqref{eq:ode_gamma_2} should have the Painlevé property, and is probably solved in terms of the $\text{P}_{\text{III}}$ transcendents. In terms of $p=\frac{\gamma_n}{\eta}=-\frac{\d p^1_n}{\d\eta}$, Equation \eqref{eq:ode_gamma_2} can be written as follows \begin{align*} \vartheta_{\eta}\Big(\vartheta_\eta^2 p-\frac{(\vartheta_\eta p)^2}{p}+2\eta p^2+\frac{n^2}{p}\Big)=2\eta p \end{align*} that expands to \begin{align*} p^2 \vartheta_\eta^3 p-2p(\vartheta_\eta p)(\vartheta_\eta^2 p)+(\vartheta_\eta p)^3+(4\eta p^3-n^2)\vartheta_{\eta}p+2\eta p^3(p-1)=0. 
\end{align*} In \cite{clarkson} it is shown that $\frac{p-1}{p}$ satisfies an instance of Painlevé V. \end{rem} \begin{pro}[Laguerre--Freud relations for the generalized Charlier case]\label{pro:Charlier Laguerre-Freud} For $n\in\mathbb{N}_0$, we find that the recursion coefficients satisfy the following Laguerre--Freud relations \begin{subequations} \begin{align}\label{eq:betgamma_charlier_1} \beta_{n+1}&=\frac{\eta (n+1)}{\gamma_{n+1}}-\beta_{n}+ n- b,\\ \label{eq:equation2} \gamma_{n+1}&=\eta-\gamma_n-\beta_n^2-b\beta_n+\frac{\gamma_{n-1}\gamma_n}{\eta}+\frac{\eta n^2}{\gamma_n},& \gamma_{-1}=\gamma_0=0. \end{align} \end{subequations} For $n\in\mathbb{N}$, the following expression for the coefficient of the subleading term, as a function of neighboring recursion coefficients, holds \begin{align}\label{eq:charlier_subleading} p^{1}_n=\frac{n(n+1)}{2}- n\beta_n - \frac{\gamma_n\gamma_{n+1}}{\eta}. \end{align} \end{pro} \begin{proof} From \eqref{eq:Charlier_Psi_0} and \eqref{eq:Charlier_Psi_1}, using \eqref{eq:ladder_lambda}, we get two different expressions for the first superdiagonal involving only the recursion coefficients, which we must equate, i.e. \begin{align}\label{eq:charlier_intermedioHD} \eta HD&=H(\beta+{T_-}\beta+bI)\gamma -T_+\big(DH({T_-}\gamma )\gamma\big) \end{align} so that \begin{align} \eta D\gamma^{-1}=\beta+{T_-}\beta+bI-(T_+D)\big(T_+{T_-}^2H\big)({T_-}H)^{-1} =\beta+{T_-}\beta+bI-T_+D,\label{eq:charlier_intermedioHD2} \end{align} where we used $\gamma=({T_-}H)H^{-1}$; componentwise this is \eqref{eq:betgamma_charlier_1}. Alternatively, notice that Equation \eqref{eq:a dos pasos}, after summing and dealing with a telescopic series on the RHS, gives \eqref{eq:betgamma_charlier_1}. 
From the main diagonal and the second superdiagonal, again using \eqref{eq:ladder_lambda}, we get the following two expressions \begin{align}\label{eq:charlier_second} \eta H&= H(\gamma+T_+\gamma+\beta^2+b\beta) - T_+(D H(\beta+{T_-}\beta+bI)\gamma) +T_+^2\big(\pi^{[-2]}H({T_-}\gamma )\gamma\big),\\\label{eq:charlier_main} \eta H\pi^{[2]}&=H({T_-}\gamma )\gamma. \end{align} From \eqref{eq:charlier_main} and \eqref{eq:pis} we obtain \eqref{eq:charlier_subleading}. Again, using $\gamma=({T_-}H)H^{-1}$ we get \begin{align*} \eta &= \gamma+T_+\gamma+\beta^2+b\beta - (T_+D) (T_+H)H^{-1}T_+(\beta+{T_-}\beta+bI)(T_+{T_-}H)T_+(H^{-1}) +T_+^2\big(\pi^{[-2]}{T_-}^2H\big)H^{-1} \\ &=\gamma+T_+\gamma+\beta^2+b\beta-(T_+D)T_+(\eta D\gamma^{-1}+T_+D)+T_+^2\pi^{[-2]}\\ &=\gamma+T_+\gamma+\beta^2+b\beta-(T_+D)(T_+^2D)-\eta(T_+D)^2T_+(\gamma^{-1})+T_+^2\pi^{[-2]}. \end{align*} Noticing that $2D^{[2]}=D{T_-}D$, we see that \eqref{eq:pis2} implies $\pi^{[-2]}=D({T_-}D)-\pi^{[2]}$, and \eqref{eq:charlier_main} gives $\pi^{[-2]}=D({T_-}D)-\eta^{-1}({T_-}^2H)H^{-1}$. Hence, $T_+^2\pi^{[-2]}=(T_+^2D)(T_+D)-\eta^{-1}HT_+^2(H^{-1})$ and we obtain \begin{align*} \eta&=\gamma+T_+\gamma+\beta^2+b\beta-(T_+D)(T_+^2D)-\eta(T_+D)^2T_+(\gamma^{-1})+(T_+^2D)(T_+D)-\eta^{-1}HT_+^2(H^{-1})\\&= \gamma+T_+\gamma+\beta^2+b\beta-\eta(T_+D)^2T_+(\gamma^{-1}) -\eta^{-1}HT_+^2(H^{-1}). \end{align*} We finally find \begin{align*} \gamma+T_+\gamma+\beta^2+b\beta-\eta=\eta(T_+D)^2T_+(\gamma^{-1}) +\eta^{-1}HT_+^2(H^{-1}), \end{align*} which componentwise gives \eqref{eq:equation2}. \end{proof} \begin{rem} Smet \& Van Assche found, see \cite[Theorem 2.1]{smet_vanassche}, for the generalized Charlier weight that the corresponding recursion coefficients satisfy (in the notation of this paper) \eqref{eq:betgamma_charlier_1} and, instead of \eqref{eq:equation2}, the following relation \begin{align}\label{eq:smet_vanassche} (\gamma_{n+1}-\eta)(\gamma_n-\eta)=\eta (\beta_n-n)(\beta_n-n+b). 
\end{align} Notice that \eqref{eq:smet_vanassche} is an equation of the form $\gamma_{n+1}=g(\gamma_n,\beta_n)$ which only involves a step backwards in $n$. In this sense, it is better than \eqref{eq:equation2}, which is of the form $\gamma_{n+1}=f(\gamma_n,\gamma_{n-1},\beta_n)$. However, \eqref{eq:charlier_compatibility} gives $\gamma_{n-1}=F(\gamma_{n+1},\gamma_n,\beta_{n},\beta_{n-1})$, but \eqref{eq:betgamma_charlier_1} gives $\beta_{n-1}=G(\beta_n,\gamma_n)$, so combining both we get $\gamma_{n-1}=F(\gamma_{n+1},\gamma_n,\beta_{n},G(\beta_n,\gamma_n))$ and we finally get a relation involving only $\gamma_{n+1},\gamma_{n}$ and $\beta_n$. \end{rem} \begin{pro} If \eqref{eq:charlier_compatibility} holds, then Equations \eqref{eq:betgamma_charlier_1} and \eqref{eq:equation2} are equivalent to \eqref{eq:betgamma_charlier_1} and \eqref{eq:smet_vanassche}. \end{pro} \begin{proof} We rewrite \eqref{eq:charlier_compatibility} and \eqref{eq:betgamma_charlier_1} as indicated \begin{align} \gamma_{n-1}&= \gamma_{n+1}+ \frac{n \eta (\beta_n-\beta_{n-1}-1)}{\gamma_n}, & n\geq 1,\\ \beta_{n-1}+1&=-\beta_{n}+\frac{\eta n}{\gamma_{n}}+ n- b. \end{align} Hence, for $n\geq 1$ and taking $\gamma_{-1}=\gamma_0=0$ we can write \eqref{eq:equation2} as follows \begin{align}\notag \gamma_{n+1}&=\eta-\gamma_n-\beta_n^2-b\beta_n+\frac{\gamma_n}{\eta}\Big( \gamma_{n+1}+ \frac{n \eta (\beta_n-\beta_{n-1}-1)}{\gamma_n} \Big)+\frac{\eta n^2}{\gamma_n}\\\notag &=\eta+ \frac{\gamma_n\gamma_{n+1}}{\eta}-\gamma_n+\frac{\eta n^2}{\gamma_n}-\beta_n^2-b\beta_n+n(\beta_n-\beta_{n-1}-1)\\\notag&=\eta+ \frac{\gamma_n\gamma_{n+1}}{\eta}-\gamma_n+\frac{\eta n^2}{\gamma_n}-\beta_n^2-b\beta_n+n\big(2\beta_n-\frac{\eta n}{\gamma_{n}}- n+ b\big)\\\label{eq:gamman+1n} &= \eta+\frac{\gamma_n\gamma_{n+1}}{\eta}-\gamma_n-\beta_n^2-b\beta_n+2n\beta_n- n^2+ bn, \end{align} which is \eqref{eq:smet_vanassche}. The inverse statement is easily proven to hold. 
If we assume \eqref{eq:smet_vanassche}, \eqref{eq:betgamma_charlier_1} and \eqref{eq:charlier_compatibility} to be true, we can go backwards in the chain of equalities leading to \eqref{eq:gamman+1n} and get the stated result. \end{proof} \begin{coro} The $\beta$'s are subject to the nonlinear recursion \begin{multline}\label{eq:nonrecursion_beta_charlier} \frac{\eta (n+1)}{\beta_{n+1}+\beta_{n}- n+ b}=\eta+\Big(\frac{ n-1}{\beta_{n-1}+\beta_{n-2}- n+2+ b}-1\Big)\frac{\eta n}{\beta_{n}+\beta_{n-1}- n+1+ b}-\beta_n^2-b\beta_n\\+n (\beta_{n}+\beta_{n-1}- n+1+ b). \end{multline} \end{coro} \begin{proof} Notice that we can write \eqref{eq:betgamma_charlier_1} as follows \begin{align}\label{eq:gamma->beta_charlier} \gamma_{n+1} &=\frac{\eta (n+1)}{\beta_{n+1}+\beta_{n}- n+ b}, \end{align} which we can introduce in \eqref{eq:equation2} to get \eqref{eq:nonrecursion_beta_charlier}. \end{proof} We now seek a second order ODE for $\gamma_n$. From the above results it easily follows (this was previously found in \cite{filipuk_vanassche0}) that: \begin{lemma} The recursion coefficients $\beta_n$ and $\gamma_n$ satisfy the following system of first order nonlinear ODEs \begin{subequations}\label{eq:Charlier_system_gamma_beta} \begin{align}\label{eq:Charlier_system_gamma_beta_1} \vartheta_{\eta}\beta_n&=\eta\frac{\eta+ (b-n)n+(2n-b-\beta_n)\beta_n-\gamma_n}{\eta-\gamma_n}-\gamma_n,\\ \label{eq:Charlier_system_gamma_beta_2} \vartheta_\eta \gamma_n&=(b-n+1+2\beta_n)\gamma_n-n\eta. \end{align} \end{subequations} \end{lemma} \begin{proof} Equation \eqref{eq:gamman+1n}, after some cleaning, reads \begin{align}\label{eq:smet_vanassche_2} \gamma_{n+1}&= \eta\frac{\eta+ (b-n)n+(2n-b-\beta_n)\beta_n-\gamma_n}{\eta-\gamma_n}, \end{align} which is \cite[Equation (16)]{filipuk_vanassche0}. On the one hand, from \eqref{eq:smet_vanassche_2} and the Toda equation \eqref{eq:Toda_system_beta} we find \eqref{eq:Charlier_system_gamma_beta_1}. That is Equation (18) in \cite{filipuk_vanassche0}. 
On the other hand, from \eqref{eq:betgamma_charlier_1} and the Toda equation \eqref{eq:Toda_system_gamma} we get \eqref{eq:Charlier_system_gamma_beta_2}. \end{proof} \begin{rem} Differential system \eqref{eq:Charlier_system_gamma_beta} was used in \cite{filipuk_vanassche0} to get a second order nonlinear ODE for $\beta_n$, and it was then shown, see \cite[Theorem 2.1]{filipuk_vanassche0}, that an auxiliary function $y$, see \cite[Equation (20)]{filipuk_vanassche0}, satisfies an instance of $\text{P}_\text{V}$ related to $\text{P}_\text{III}$. Notice that $y$ is a solution to a Riccati equation whose coefficients involve $\beta_n$. \end{rem} \begin{theorem}[Second order ODE for generalized Charlier]\label{teo:second order} The recursion coefficient $\gamma_n$ satisfies the second order nonlinear ODE \begin{multline}\label{eq:edo_Charlier_2} \Big(1-\frac{\gamma_n}{\eta}\Big)\Big(\vartheta_{\eta}\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big)+2\gamma_n\Big)+2(\gamma_n-\eta+(n-b)n)\\ =-\frac{1}{2}\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big)^2+ (n+1)\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big) + \frac{(-b+n-1)(-b+3n+1)}{2}. \end{multline} \end{theorem} \begin{proof} Observe that Equation \eqref{eq:Charlier_system_gamma_beta_2} leads to \begin{align}\label{eq:beta_n-gamma_n} \beta_n=\frac{1}{2}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}-b+n-1\Big), \end{align} so that \begin{align}\label{eq:diff-beta_n-gamma_n} \vartheta_{\eta}\beta_n=\frac{1}{2}\vartheta_{\eta}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}\Big). 
\end{align} From \eqref{eq:Charlier_system_gamma_beta_1} and \eqref{eq:diff-beta_n-gamma_n} we get \begin{align*} \Big(1-\frac{\gamma_n}{\eta}\Big)\Big(\frac{1}{2}\vartheta_{\eta}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}\Big)+\gamma_n\Big)+\gamma_n-\eta+(n-b)n= (2n-b)\beta_n-\beta_n^2, \end{align*} and, multiplying by $2$ and replacing $\beta_n$ by the expression provided in \eqref{eq:beta_n-gamma_n}, we get \eqref{eq:edo_Charlier_2}. To check it let us elaborate on the RHS of the equation \begin{align*} (2n-b)\beta_n-\beta_n^2&= \frac{2n-b}{2}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}-b+n-1\Big) -\frac{1}{4}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}-b+n-1\Big)^2\\&= \begin{multlined}[t][0.75\textwidth] \frac{2n-b}{2}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}-b+n-1\Big) -\frac{1}{4}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}\Big)^2\\-\frac{(b-n+1)^2}{4} +\frac{b-n+1}{2}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}\Big) \end{multlined}\\ &= \begin{multlined}[t][0.75\textwidth] -\frac{1}{4}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}\Big)^2+ \frac{n+1}{2}\Big( \vartheta_\eta \log \gamma_n+\frac{n\eta}{\gamma_n}\Big) + \frac{(-b+n-1)(-b+3n+1)}{4}, \end{multlined} \end{align*} and the statement is proven. \end{proof} \begin{comment} First computation, with errors; an identity comes out. The third order ODE \eqref{eq:ode_gamma_2} and the second order ODE \eqref{eq:edo_Charlier_3} for $\gamma_n$ imply a first order ODE for this recursion coefficient. \begin{pro} The recursion coefficient $\gamma_n$ satisfies the following first order nonlinear ODE \begin{multline*} \big(\vartheta_{\eta}\gamma_n\big)^2-\Big(n^2\frac{(\eta-\gamma_n)^2}{ 2\gamma_n}\Big(\frac{1}{\eta}+\frac{\eta}{\gamma_n^2}\Big)+2\eta\Big)\vartheta_{\eta}\gamma_n\\+ \big(2\eta-\gamma_n\big)\gamma_n+\frac{(\eta-\gamma_n)^2}{2\gamma_n}\Big(\big(2+\frac{n^2}{\eta}\big)\gamma_n+n^2\frac{\eta}{ \gamma_n} \Big)=0. 
\end{multline*} \end{pro} \begin{proof} From \eqref{eq:edo_Charlier_2} we have \begin{align*} \frac{\gamma_n}{\eta} (\vartheta_\eta^2\log \gamma_n+2\gamma_n)&= \begin{multlined}[t][0.75\textwidth] n\frac{\vartheta_{\eta}\gamma_n}{\gamma_n}-n+ \frac{\gamma_n}{\eta-\gamma_n}\bigg(-2 (\gamma_n-\eta+(n-b)n) -\frac{1}{2}\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big)^2\\+ (n+1)\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big) + (-b+n-1)(-b+3n+1)\bigg) \end{multlined} \end{align*} and \eqref{eq:ode_gamma_2} implies \begin{align}\label{eq:edo_Charlier_3} 2\gamma_n-n^2 \vartheta_\eta\big(\frac{\eta}{\gamma_n} \big)&= \begin{multlined}[t][0.75\textwidth] \vartheta_{\eta}\Bigg( n\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big)-\frac{n^2\eta}{\gamma_n}+ \frac{\gamma_n}{\eta-\gamma_n}\bigg(-2 (\gamma_n-\eta+(n-b)n) \\ -\frac{1}{2}\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big)^2+ (n+1) \Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big) \\+ (-b+n-1)(-b+3n+1)\bigg)\Bigg). 
\end{multlined} \end{align} Equation \eqref{eq:edo_Charlier_3}, once $\vartheta_{\eta}(\frac{\gamma_n}{\eta-\gamma_n})= -\frac{\eta}{\eta-\gamma_n}\frac{\gamma_n-\vartheta_{\eta}\gamma_n}{\eta-\gamma_n}$ is noticed, can be written as follows \begin{align*} \vartheta_{\eta}\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big) & = \begin{multlined}[t][0.75\textwidth]\frac{\eta-\gamma_n}{\gamma_n-\vartheta_{\eta}\gamma_n}\bigg( 2\gamma_n -2 \gamma_n\frac{\eta-\vartheta_{\eta}\gamma_n}{\eta-\gamma_n} \\+\frac{\eta}{\eta-\gamma_n}\frac{\gamma_n-\vartheta_{\eta}\gamma_n}{\eta-\gamma_n} \Big(-2 (\gamma_n-\eta+(n-b)n) -\frac{1}{2}\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big)^2\\+ (n+1) \Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big) + (-b+n-1)(-b+3n+1)\Big)\bigg)\\ \end{multlined}\\&= \begin{multlined}[t][0.75\textwidth] -2 \gamma_n+\frac{\eta}{\eta-\gamma_n}\bigg(-2 (\gamma_n-\eta+(n-b)n) -\frac{1}{2}\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big)^2\\+ (n+1) \Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big) + (-b+n-1)(-b+3n+1)\bigg). 
\end{multlined} \end{align*} Moreover, Equation \eqref{eq:edo_Charlier_2} gives \begin{multline*} \vartheta_{\eta}\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big) = -2\gamma_n+ \frac{\eta}{\eta-\gamma_n}\Big( -2(\gamma_n-\eta+(n-b)n) -\frac{1}{2}\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}\Big)^2\\+ (n+1)\Big( \frac{\vartheta_\eta \gamma_n}{\gamma_n}+\frac{n\eta}{\gamma_n}\Big) + (-b+n-1)(-b+3n+1)\Big) \end{multline*} and, consequently, we get \begin{align*} -2\gamma_n=\frac{\eta-\gamma_n}{\gamma_n-\vartheta_{\eta}\gamma_n}\Big( 2\gamma_n-n^2 \vartheta_\eta\big(\frac{\gamma_n}{\eta}\big)+n^2 \vartheta_\eta\big(\frac{\eta}{\gamma_n} \big)\Big) -2 \gamma_n\frac{\eta-\vartheta_{\eta}\gamma_n}{\gamma_n-\vartheta_{\eta}\gamma_n} \end{align*} i.e., \begin{align}\label{eq:charlier_1rst-order_ODE_Painleve} -2\gamma_n\big(\gamma_n-\vartheta_{\eta}\gamma_n\big)=(\eta-\gamma_n)\Big( 2\gamma_n-n^2 \vartheta_\eta\big(\frac{\gamma_n}{\eta}\big)+n^2 \vartheta_\eta\big(\frac{\eta}{\gamma_n} \big)\Big) -2 \gamma_n\big(\eta-\vartheta_{\eta}\gamma_n\big), \end{align} that is, \begin{align*} 2\gamma_n(\eta-\gamma_n)=(\eta-\gamma_n)\Big( 2\gamma_n-n^2 \vartheta_\eta\big(\frac{\gamma_n}{\eta}\big)+n^2 \vartheta_\eta\big(\frac{\eta}{\gamma_n} \big)\Big) \end{align*} and after some cleaning the result follows. \end{proof} From \eqref{eq:charlier_1rst-order_ODE_Painleve} we see that $\gamma_n$ is a solution of $F(\gamma_n',\gamma_n;\eta)=0$ where $F$ is a polynomial in the first two variables, with coefficients entire functions in $\eta$. According to a result of Painlevé \cite{painleve}, see also \cite{filipuk_halburd,filipuk_halburd2}, the solutions of this ODE only present movable singularities of algebraic type, poles and branch points. Thus, we conclude that $\gamma_n$ has the quasi-Painlevé property (also known as weak Painlevé property).
Although, from the previous discussion \cite{clarkson}, this is a pretty obvious fact since $\frac{\gamma_n-\eta}{\gamma_n}$ is a solution of an instance of $\text{P}_{\text{V}}$, and therefore all its movable singularities are poles. \end{comment} Some additional properties of this generalized Charlier case follow. \begin{pro} For the generalized Charlier case, $\sigma=\eta$ and $\theta=z(z+b)$, the following holds: \begin{enumerate} \item The dressed Pascal matrix is \begin{align}\label{eq:Pascal_Charlier} \Pi=I+\Lambda^\top D+\eta^{-1}\big(\Lambda^\top\big)^2({T_-}\gamma)\gamma= \left( \begin{NiceMatrix}[columns-width = auto] 1 & 0 &0 &\Cdots\\ 1 & 1 & 0 &\Ddots\\ \frac{\gamma_1\gamma_2 }{\eta}& 2&1&\Ddots\\ 0 &\frac{\gamma_2\gamma_3}{\eta} & 3 &\Ddots\\ \Vdots &\Ddots &\Ddots&\Ddots \end{NiceMatrix}\right). \end{align} Moreover, the Jacobi and dressed Pascal matrices are linked by \begin{align}\label{eq:Pascal_Jacobi_Charlier} J^2+bJ= \eta\Pi H\Pi^\top H^{-1}. \end{align} \item The corresponding orthogonal polynomials satisfy \begin{align*} \eta^{-1}&=\frac{P_{n+1}(0)P_n(-b)-P_{n+1}(-b)P_n(0)}{P_{n+2}(0)P_{n+1}(-b)-P_{n+2}(-b)P_{n+1}(0)},& \frac{ n+1}{\gamma_{n+1}}&= \frac{P_{n+2}(0)P_n(-b)-P_{n+2}(-b)P_n(0)}{P_{n+2}(0)P_{n+1}(-b)-P_{n+2}(-b)P_{n+1}(0)}. \end{align*} \end{enumerate} \end{pro} \begin{proof} \begin{enumerate} \setcounter{enumi}{1} \item We apply the ideas that lead to the Christoffel formula for a Christoffel perturbation. From \eqref{eq:P_shift} we get $\theta(z) P(z-1)=\Psi H^{-1} P(z)=\eta H\Pi^\top H^{-1} P(z)$, where the last equation holds due to the generalized Charlier restrictions on the weight. Therefore, \begin{align*} \theta(z)H^{-1}P(z-1)=\eta\Pi^\top H^{-1} P(z). \end{align*} As $\theta(0)=\theta(-b)=0$ we have \begin{align*}\hspace*{-1cm} \Pi^\top H^{-1} P(0)&=0,& \Pi^\top H^{-1} P(-b)&=0.
\end{align*} Both equations can be simplified to \begin{align*} ( \eta^{-1}\gamma_{n+2}\gamma_{n+1},n+1)=-(H_n^{-1}P_n(0),H_n^{-1}P_n(-b))\begin{pmatrix} H_{n+2}^{-1}P_{n+2}(0) &H_{n+2}^{-1}P_{n+2}(-b) \\[2pt]H_{n+1}^{-1}P_{n+1}(0) &H_{n+1}^{-1}P_{n+1}(-b) \end{pmatrix}^{-1}, \end{align*} from where the result follows. \end{enumerate} \end{proof} \section{Generalized Meixner weights}\label{S:Meixner} Another interesting case appears when one takes $\sigma=z+a$, which we call the extended Meixner type. The generalized Meixner, which is the main example of these extended Meixner weights, corresponds to the choice $\sigma(z)=\eta(z+a)$ and $\theta(z)=z(z+b)$. A weight that satisfies the Pearson equation \begin{align*} (k+1)(k+1+b)w(k+1)=\eta (k+a)w(k) \end{align*} is proportional to the generalized Meixner weight \begin{align*} w(z)=\frac{\Gamma(b+1)\Gamma(z+a)\eta^z}{\Gamma(a)\Gamma(z+b+1)\Gamma(z+1)}=\frac{(a)_z}{(b+1)_z}\frac{\eta^z}{z!}. \end{align*} The finiteness of the moments requires $a(b+1)>0$ and $\eta>0$ \cite{diego_paco}. In this case the moments are expressed in terms of $\rho_0={}_1F_1(a;b+1;\eta)=M(a,b+1,\eta)$, also known as the confluent hypergeometric function or Kummer function. \begin{rem} Meixner introduced the non-generalized version with $w=(a)_z\frac{\eta^z}{z!}$ in \cite{meixner}. \end{rem} \begin{rem} For an extended Meixner type weight, as $M=\deg\sigma=1$ we have that the Laguerre--Freud matrix $\Psi=\Pi^{-1}H\theta(J^\top)=\eta(J+aI)H\Pi^\top$ is a banded matrix with only one subdiagonal and $N$ superdiagonals. Now we do not have, as in the extended Charlier case, that $\Psi=\eta H\Pi^\top$, which implied that the dressed Pascal matrix $\Pi$ had only three nonzero subdiagonals. In the extended Meixner case the dressed Pascal matrix will possibly have an infinite number of nonzero subdiagonals. \end{rem} \begin{theorem}[The generalized Meixner Laguerre--Freud structure matrix]\label{teo:generalized Meixner} For a generalized Meixner weight; i.e.,
$\sigma=\eta(z+a)$ and $\theta=z(z+b)$, the structure matrix is { \begin{align}\label{eq:Laguerre-Freud-Meixner-structure} \hspace*{-.5cm}\Psi&=\left(\begin{NiceMatrix}[columns-width = .4cm ] \eta(\beta_0+a)H_0 & (\beta_0+\beta_1+b)H_1& H_2& 0 &\Cdots&\\ \eta H_1&\eta (\beta_1+a+1)H_1& (\beta_1+\beta_2+b-1)H_2&H_3&\Ddots&\\[5pt] 0&\eta H_2 &\eta (\beta_2+a+2)H_2& (\beta_2+\beta_3+b-2)H_3&H_4& \\[4pt] \Vdots&\Ddots&\eta H_3 &\eta(\beta_3+a+3)H_3& (\beta_3+\beta_4+b-3)H_4&\Ddots \\ & &\Ddots&\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right). \end{align} } \end{theorem} \begin{proof} The structure matrix has a diagonal structure, see \eqref{eq:diagonals_Psi}, $\Psi=\Lambda^\top\psi^{(-1)}+\psi^{(0)}+\psi^{(1)}\Lambda+ \psi^{(2)}\Lambda^2$. As $\Psi=\eta(J+aI)H\Pi^\top$ we find \begin{align}\label{eq:structure_Meixner_1} \begin{aligned} \Psi&=\eta(\Lambda^\top\gamma+\beta+aI+\Lambda)H(I+D\Lambda+\pi^{[2]}\Lambda^2+\pi^{[3]}\Lambda^3+\cdots)\\&= \begin{multlined}[t][.9\textwidth] \underset{\text{first subdiagonal}}{\underbrace{\eta\Lambda^\top\gamma H}}+\underset{\text{main diagonal}}{\underbrace{\eta(\Lambda^\top\gamma H D\Lambda+(\beta+a I) H)}}+\underset{\text{first superdiagonal}}{\underbrace{\eta(\Lambda^\top\gamma H \pi^{[2]}\Lambda^2+(\beta+a I) HD\Lambda+\Lambda H)}}\\+ \underset{\text{second superdiagonal}}{\underbrace{\eta(\Lambda^\top\gamma H \pi^{[3]}\Lambda^3+(\beta+a I) H\pi^{[2]}\Lambda^2+\Lambda HD\Lambda)}} + \cdots \end{multlined} \end{aligned} \end{align} we get $\psi^{(0)}=\eta T_+(\gamma H D)+\eta(\beta+a I) H$. 
Now, as \begin{align*} J^2+b J=(\Lambda^\top)^2 ({T_-}\gamma )\gamma+ \Lambda^\top (\beta+{T_-}\beta+bI)\gamma+\gamma+T_+\gamma+\beta^2+b\beta+(\beta+{T_-}\beta +bI)\Lambda+\Lambda^2, \end{align*} from the alternative expression $\Psi=\Pi^{-1}H\big((J^\top)^2+bJ^\top\big)$, we find {\small\begin{align}\label{eq:structure_Meixner2} \hspace*{-1.5cm}\begin{aligned} \Psi&= \begin{multlined}[t][\textwidth] (I-\Lambda^\top D+(\Lambda^\top)^2 \pi^{[-2]}-(\Lambda^\top)^3 \pi^{[-3]}+\cdots)H\\\times \big((\Lambda^\top)^2+ \Lambda^\top (\beta+{T_-}\beta+bI)+\gamma+T_+\gamma+\beta^2+b\beta+ (\beta+{T_-}\beta+bI)\gamma \Lambda+({T_-}\gamma )\gamma\Lambda^2\big) \end{multlined}\\&= \begin{multlined}[t][\textwidth] \underset{\text{second superdiagonal}}{\underbrace{H({T_-}\gamma )\gamma\Lambda^2}}\underset{\text{first superdiagonal}}{\underbrace{- \Lambda^\top D H({T_-}\gamma )\gamma\Lambda^2+H(\beta+{T_-}\beta+bI)\gamma \Lambda}}\\+\underset{\text{main diagonal}}{\underbrace{ H(\gamma+T_+\gamma+\beta^2+b\beta)-\Lambda^\top DH(\beta+{T_-}\beta+bI)\gamma \Lambda+(\Lambda^\top)^2 \pi^{[-2]}H({T_-}\gamma )\gamma\Lambda^2}}\\\hspace*{-1cm}+ \underset{\text{first subdiagonal}}{\underbrace{H\Lambda^\top (\beta+{T_-}\beta+bI)-\Lambda^\top DH(\gamma+T_+\gamma+\beta^2+b\beta)+(\Lambda^\top)^2 \pi^{[-2]}H(\beta+{T_-}\beta+bI)\gamma \Lambda-(\Lambda^\top)^3 \pi^{[-3]}H({T_-}\gamma )\gamma\Lambda^2.}} \end{multlined} \end{aligned} \end{align}} Thus, $\psi^{(1)}=- T_+(D H({T_-}\gamma )\gamma)+H(\beta+{T_-}\beta+bI)\gamma $. Finally, the Laguerre--Freud matrix has the form \begin{align*} \Psi&=\eta\Lambda^\top {T_-} H +\eta(\beta+a I+T_+D)H +\big( \beta+{T_-}\beta+bI- T_+D \big){T_-}H\Lambda+{T_-}^2H\Lambda^2. \end{align*} When expressed component wise we get the given form in \eqref{eq:Laguerre-Freud-Meixner-structure}. 
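The entries of \eqref{eq:Laguerre-Freud-Meixner-structure} are built entirely from the recursion coefficients, which can be generated directly from the weight. The following Python sketch (the helper names are ours, not from any library; exact rational arithmetic on a truncated support) computes $\beta_n,\gamma_n$ by the discrete Stieltjes procedure for the monic recurrence $P_{n+1}(z)=(z-\beta_n)P_n(z)-\gamma_nP_{n-1}(z)$, and checks the reduction $a=2$, $b=1$, for which $(a)_z/(b+1)_z=1$, the weight collapses to the Charlier weight $\eta^z/z!$, and $\beta_n=n+\eta$, $\gamma_n=n\eta$:

```python
from fractions import Fraction as F

def meixner_weight(a, b, eta, N):
    # w(0) = 1 and the Pearson step (k+1)(k+1+b) w(k+1) = eta (k+a) w(k), k = 0..N-1
    w = [F(1)]
    for k in range(N):
        w.append(w[-1] * eta * (a + k) / ((k + 1) * (b + k + 1)))
    return w

def stieltjes(w, nmax):
    # discrete Stieltjes procedure: monic three-term recurrence coefficients
    # for the measure supported on 0..len(w)-1 with masses w(k)
    x = [F(k) for k in range(len(w))]
    p_old = [F(0)] * len(w)                # values of P_{-1}
    p = [F(1)] * len(w)                    # values of P_0
    beta, gamma, h_old = [], [F(0)], None  # gamma_0 := 0 by convention
    for n in range(nmax + 1):
        h = sum(wk * pk * pk for wk, pk in zip(w, p))                       # H_n
        beta.append(sum(wk * xk * pk * pk for wk, xk, pk in zip(w, x, p)) / h)
        if n > 0:
            gamma.append(h / h_old)                                        # gamma_n = H_n / H_{n-1}
        # P_{n+1}(x) = (x - beta_n) P_n(x) - gamma_n P_{n-1}(x)
        p, p_old = [(xk - beta[n]) * pk - gamma[n] * qk
                    for xk, pk, qk in zip(x, p, p_old)], p
        h_old = h
    return beta, gamma

eta = F(1, 2)
beta, gamma = stieltjes(meixner_weight(F(2), F(1), eta, 80), 4)
err = max(max(abs(beta[n] - n - eta), abs(gamma[n] - n * eta)) for n in range(5))
```

The residual `err` is pure truncation error from cutting the support at $80$; enlarging the support shrinks it further.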
\end{proof} \begin{pro}[Compatibility] For the generalized Meixner case the recursion coefficients fulfill \begin{subequations} \begin{align}\label{eq:laguerre-freud-meixner-2} (\beta_n+\beta_{n+1}+b-n-\eta)\gamma_{n+1}-(\beta_{n-1}+\beta_n+b-n+1-\eta)\gamma_n&=\eta(\beta_n+a+n),\\\label{eq:laguerre-freud-meixner-3} \gamma_{n+2}-\gamma_n-2\eta +(\beta_n+\beta_{n+1}+b-n-\eta)(\beta_{n+1}-\beta_n-1)&=0. \end{align} \end{subequations} \end{pro} \begin{proof} The matrix \begin{align*} \hspace*{-1cm}\Psi H^{-1}&=\left(\begin{NiceMatrix}[columns-width = .3cm ] \eta(\beta_0+a)& \beta_0+\beta_1+b& 1& 0 &\Cdots&\\ \eta \gamma_1&\eta (\beta_1+a+1)& \beta_1+\beta_2+b-1&1&\Ddots&\\[5pt] 0&\eta \gamma_2 &\eta (\beta_2+a+2)& \beta_2+\beta_3+b-2&1& \\[4pt] \Vdots&\Ddots&\eta \gamma_3 &\eta(\beta_3+a+3)& \beta_3+\beta_4+b-3&\Ddots \\ & &\Ddots&\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right) \end{align*} satisfies the compatibility condition \eqref{eq:compatibility_Jacobi_structure_a}. As we have {\begin{align*} \hspace*{-1cm}\big[\Psi H^{-1}, J\big]&=\left(\small\begin{NiceMatrix}[small,columns-width = 5pt ] \cellcolor{Gray!10} (\beta_0+\beta_1+b-\eta)\gamma_1& {\cellcolor{Gray!25}\scriptsize\begin{matrix} \gamma_2-\eta\\+ (\beta_0+\beta_1+b-\eta)(\beta_1-\beta_0) \end{matrix}}& 1& 0 &\Cdots\\ \eta \gamma_1&{\cellcolor{Gray!10}\scriptsize\begin{matrix} (\beta_1+\beta_2+b-1-\eta)\gamma_2\\-(\beta_0+\beta_1+b-\eta)\gamma_1 \end{matrix}}& {\cellcolor{Gray!25}\scriptsize\begin{matrix} \gamma_3-\gamma_1-\eta\\+(\beta_1+\beta_2+b-1-\eta)(\beta_2-\beta_1) \end{matrix}}&1&\Ddots\\[5pt] 0&\eta \gamma_2 &{\cellcolor{Gray!10}\scriptsize\begin{matrix} (\beta_2+\beta_3+b-2-\eta)\gamma_3\\-(\beta_1+\beta_2+b-1-\eta)\gamma_2 \end{matrix}}& \cellcolor{Gray!25}{\scriptsize\begin{matrix} \gamma_4-\gamma_2-\eta\\ +(\beta_2+\beta_3+b-2-\eta)(\beta_3-\beta_2) \end{matrix}}& \\ \Vdots &\Ddots &\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right) \end{align*}} we get for the first row \begin{align*} \gamma_1&=
\frac{\eta(\beta_0+a)}{\beta_0+\beta_1+b-\eta}, & \gamma_2&= \beta_0+\beta_1+b- (\beta_0+\beta_1+b-\eta)(\beta_1-\beta_0)+\eta, \end{align*} while for the next rows we obtain \begin{align*} (\beta_n+\beta_{n+1}+b-n-\eta)\gamma_{n+1}-(\beta_{n-1}+\beta_n+b-n+1-\eta)\gamma_n&=\eta(\beta_n+a+n),\\ \gamma_{n+2}-\gamma_n-\eta +(\beta_n+\beta_{n+1}+b-n-\eta)(\beta_{n+1}-\beta_n)&=\beta_n+\beta_{n+1}+b-n. \end{align*} These are \eqref{eq:laguerre-freud-meixner-2} and \eqref{eq:laguerre-freud-meixner-3} (the latter in disguise; some rearranging is required). An alternative manner to get \eqref{eq:laguerre-freud-meixner-2} follows. The matrix $\Psi H^{-1}$ fulfills \eqref{eq:eta_compatibility_Pearson_1b}, where $\Phi=-J_-$, and we have $ \vartheta_{\eta}(\Psi H^{-1})=[\Psi H^{-1},J_-]$. The explicit expression of the above commutator is \begin{align*} \hspace*{-1cm}\big[\Psi H^{-1}, J_-\big]&=\left(\begin{NiceMatrix}[columns-width = 1cm] \cellcolor{Gray!15} (\beta_0+\beta_1+b)\gamma_1& \gamma_2& 0& 0&\Cdots&\\ \eta \gamma_1(\beta_1-\beta_0+1)&\cellcolor{Gray!15}\scriptsize\begin{matrix} (\beta_1+\beta_2+b-1)\gamma_2\\-(\beta_0+\beta_1+b)\gamma_1 \end{matrix}&\gamma_3-\gamma_1&0&\Ddots&\\[5pt] 0& \eta \gamma_2 (\beta_2-\beta_1+1)&{\cellcolor{Gray!15}\scriptsize\begin{matrix} (\beta_2+\beta_3+b-2)\gamma_3\\-(\beta_1+\beta_2+b-1)\gamma_2 \end{matrix}}& \gamma_4-\gamma_2& \\ \Vdots & &\Ddots&\Ddots&\Ddots \end{NiceMatrix}\right), \end{align*} and, consequently, we find the following equations \begin{align} \label{eq:1}\vartheta_\eta (\eta\gamma_n)&=\eta\gamma_n(\beta_n-\beta_{n-1}+1),\\ \label{eq:2}\vartheta_{\eta}(\beta_n+\beta_{n+1}+b-n)&=\gamma_{n+2}-\gamma_n,\\ \label{eq:3} \vartheta_\eta\big(\eta(\beta_n+a+n)\big) &=\gamma_{n+1}(\beta_n+\beta_{n+1}+b-n) -\gamma_n(\beta_{n-1}+\beta_n+b-(n-1)). \end{align} Notice that \eqref{eq:1} and \eqref{eq:2} follow from the Toda equations for the recursion coefficients \eqref{eq:Toda_system}.
The Toda equation \eqref{eq:Toda_system_beta} allows to write \eqref{eq:3} as follows \begin{align*} \eta(\beta_n+a+n)+\eta\gamma_{n+1}-\eta\gamma_n &=\gamma_{n+1}(\beta_n+\beta_{n+1}+b-n) -\gamma_n(\beta_{n-1}+\beta_n+b-(n-1)), \end{align*} and we get \eqref{eq:laguerre-freud-meixner-2}. \end{proof} \begin{pro}[Laguerre--Freud relations] For $n\in\mathbb{N}$, we find the following Laguerre--Freud equations \begin{align}\label{eq:Laguerre-Freud-Meixner1} \gamma_{n+1}&= -\gamma_{n}-(\beta_{n}+b-n)\beta_{n}+n(b-a-n+1) +\eta(\beta_{n}+a+n)+\eta^{-1}(\beta_{n-1}+\beta_{n}+b-n+1-\eta) \gamma_{n},\\\label{eq:meixner_laguerre_freud_larga} 0&=\begin{aligned}[t] &(n+3)(n+2)(n+1)(\beta_n-\beta_{n+2}+1)-\big(\eta^{-1}\gamma_{n+3} -n-3\big)\gamma_{n+2}\\&+(n+3)(n+2) \Big(\eta^{-1}(\beta_{n+1}+\beta_{n+2}+b-n-1-\eta) \gamma_{n+2}-(n+2)(\beta_{n+1}+a ) \Big) \\&-(n+2)(n+1)\Big(\eta^{-1}(\beta_{n+3}+\beta_{n+4}+b-n-3-\eta) \gamma_{n+4}-(n+4)(\beta_{n+3}+a )\Big)\\ & +(-\beta_{n+2}+a-b ) \big( \eta^{-1}(\beta_{n+2}+\beta_{n+3}+b-n-2-\eta)\gamma_{n+3} - (n+3) (\beta_{n+2}+a ) \big) \\ &+(\beta_{n+2}+\beta_{n+3}+b-n-3-\eta)\gamma_{n+3}-(n+3)(\gamma_{n+2}+\beta_{n+2}^2+b\beta_{n+2})\\&+(n+3)(n+2)(\beta_{n+1}+\beta_{n+2}+b). \end{aligned} \end{align} We also find \begin{align}\label{eq:Laguerre-Freud-Meixner3} p^{1}_{n+1}&=-\eta^{-1}(\beta_{n+1}+\beta_{n+2}+b-n-1-\eta) \gamma_{n+2}+\beta_{n+1}+a (n+2) +\frac{(n+2)(n+1)}{2}. 
\end{align} \end{pro} \begin{proof} Comparing \eqref{eq:structure_Meixner_1} with \eqref{eq:structure_Meixner2} and recalling \eqref{eq:ladder_lambda} we obtain \begin{gather}\label{eq:hyper_meixner_1}\eta {T_-}H= \begin{multlined}[t][.7\textwidth] (\beta+{T_-}\beta+bI)T_-H- DH(\gamma+T_+\gamma+\beta^2+b\beta)\\+T_+\big( \pi^{[-2]}(\beta+{T_-}\beta+bI)H\gamma\big) +T_+^2\big(\pi^{[-3]}H({T_-}\gamma )\gamma\big), \end{multlined}\\\label{eq:hyper_meixner_2} \eta(T_+(\gamma H D)+(\beta+a I) H)= \begin{multlined}[t][.5\textwidth]H(\gamma+T_+\gamma+\beta^2+b\beta)\\-T_+(DH(\beta+{T_-}\beta+bI)\gamma) +T_+^2 (\pi^{[-2]}H({T_-}\gamma )\gamma), \end{multlined}\\\label{eq:hyper_meixner_3} \eta\big(T_+(\gamma H \pi^{[2]})+(\beta+a I)H D+{T_-}H\big)=- T_+\big(D H({T_-}\gamma )\gamma\big)+H(\beta+{T_-}\beta+bI)\gamma ,\\\label{eq:hyper_meixner_4} \eta\big(T_+(\gamma H \pi^{[3]})+(\beta+a I) H\pi^{[2]}+{T_-} (HD)\big)=H({T_-}\gamma )\gamma. \end{gather} Let us study relations \eqref{eq:hyper_meixner_2} and \eqref{eq:hyper_meixner_3}. On the one hand, if we choose to solve \eqref{eq:hyper_meixner_3} for $\pi^{[2]}$, we get (recall that ${T_-}T_+=\text{id}$) \begin{align}\notag \pi^{[2]}&=-H^{-1}\gamma^{-1} {T_-}((\beta+a I) HD+{T_-}H)- \eta^{-1}D {T_-}\gamma + \eta^{-1}H^{-1}\gamma^{-1}{T_-}(H\gamma(\beta+{T_-}\beta+bI)) \\&= \eta^{-1}({T_-}\beta+T_-^2\beta+bI-D-\eta) {T_-}\gamma-({T_-}D)({T_-}\beta+aI),\label{eq:pi2} \end{align} so that \begin{align*} \pi^{[2]}_n&=\eta^{-1}(\beta_{n+1}+\beta_{n+2}+b-n-1-\eta) \gamma_{n+2}-(n+2)(\beta_{n+1}+a). \end{align*} Moreover, from $\beta=T_+S^{[1]}-S^{[1]}$ and $\pi^{[2]}=D^{[2]}+ (T_-S^{[1]}-S^{[1]})D- S^{[1]}$ we get \begin{align}\label{eq:S1_Meixner} S^{[1]}&=-\pi^{[2]}+D^{[2]}-DT_-\beta \end{align} that, component-wise, is \eqref{eq:Laguerre-Freud-Meixner3}.
Now, recalling \eqref{eq:pis2}, we write \eqref{eq:hyper_meixner_2} as follows \begin{gather}\label{eq:hyper_meixner_2_bis} \eta (\beta+a I+T_+D) =\gamma+T_+\gamma+\beta^2+b\beta-T_+(D(\beta+{T_-}\beta+bI)) +T_+^2 (-\pi^{[2]}+2D^{[2]}), \end{gather} that is, using \eqref{eq:theDs}, \begin{align*} \eta(T_-^2\beta+a I+T_- D) &=T_-^2\gamma+{T_-}\gamma+T_-^2\beta^2+bT_-^2\beta-(T_-D)({T_-}\beta+T_-^2\beta+bI-D) -\pi^{[2]}\\&= \begin{multlined}[t][0.75\textwidth] T_-^2\gamma+{T_-}\gamma+T_-^2\beta^2+bT_-^2\beta-(T_-D)({T_-}\beta+T_-^2\beta+bI-D)\\- \eta^{-1}({T_-}\beta+T_-^2\beta+bI-D-\eta) {T_-}\gamma+(T_-D)({T_-}\beta+a I) \end{multlined}\\&=\begin{multlined}[t][0.75\textwidth] T_-^2\gamma+{T_-}\gamma+T_-^2\beta^2+bT_-^2\beta-(T_-D)(T_-^2\beta+(b-a)I-D)\\- \eta^{-1}({T_-}\beta+T_-^2\beta+bI-D-\eta) {T_-}\gamma, \end{multlined} \end{align*} that component wise reads \begin{multline*} \eta(\beta_{n+2}+a +n+2) =\gamma_{n+3}+\gamma_{n+2}+\beta_{n+2}^2+b\beta_{n+2}-(n+2)(\beta_{n+2}+b-a-n-1) \\-\eta^{-1}(\beta_{n+1}+\beta_{n+2}+b-n-1-\eta) \gamma_{n+2}, \end{multline*} which is \eqref{eq:Laguerre-Freud-Meixner1}. Now, \eqref{eq:hyper_meixner_4} can be written as follows \begin{align}\label{eq:pi3} \begin{aligned} \pi^{[3]}&=-({T_-}\beta+a I) {T_-}\pi^{[2]}+\big(\eta^{-1}T_-^2\gamma -T_-^2D\big)T_-\gamma\\&= \begin{multlined}[t][\textwidth] - (T_-\beta+a I) \big( \eta^{-1}(T_-^2\beta+T_-^3\beta+bI-T_-D-\eta)T_-^2\gamma - (T_-^2D) (T_-^2\beta+a I) \big) +\big(\eta^{-1}T_-^2\gamma -T_-^2D\big)T_-\gamma, \end{multlined} \end{aligned} \end{align} where \eqref{eq:pi2} has been used. 
From \eqref{eq:pis2}, $\pi^{[3]}+\pi^{[-3]}=2(({T_-}^2S^{[1]})D^{[2]}-({T_-}D^{[2]})S^{[1]})$, and \eqref{eq:S1_Meixner} (noticing that $3D^{[3]}=2D^{[2]}(T_-^2D^{[2]}-T_-D^{[2]})$) we get \begin{align*} \pi^{[-3]}=&2D^{[2]}(-T_-^2\pi^{[2]}+T_-^2D^{[2]}-(T_-^2D)T_-^3\beta)- 2(T_-D^{[2]})(-\pi^{[2]}+D^{[2]}-DT_-\beta)- \pi^{[3]}\\ =&3D^{[3]}+2({T_-}D^{[2]})(\pi^{[2]}+DT_-\beta)-2D^{[2]}(T_-^2\pi^{[2]}+(T_-^2D)T_-^3\beta)- \pi^{[3]}\\ =&3D^{[3]}T_-(\beta-{T_-}^2\beta+I)+2({T_-}D^{[2]})\pi^{[2]}-2D^{[2]}T_-^2\pi^{[2]}- \pi^{[3]}\\ =&3D^{[3]}T_-(\beta-T_-^2\beta+I)-\big(\eta^{-1}T_-^2\gamma -T_-^2D\big)T_-\gamma\\&+2(T_-D^{[2]}) \Big(\eta^{-1}(T_-\beta+T_-^2\beta+bI-D-\eta) T_-\gamma-(T_-D)(T_-\beta+a I) \Big) \\&-2D^{[2]}\Big(\eta^{-1}(T_-^3\beta+T_-^4\beta+bI-T_-^2D-\eta) T_-^3\gamma-(T_-^3D)(T_-^3\beta+a I)\Big)\\ & +(T_-\beta+a I) \big( \eta^{-1}(T_-^2\beta+T_-^3\beta+bI-T_-D-\eta)T_-^2\gamma - (T_-^2D) (T_-^2\beta+a I) \big). \end{align*} We now write \eqref{eq:hyper_meixner_1} as follows \begin{align*} \pi^{[-3]}=& \begin{multlined}[t][\textwidth] -(T_-^2\beta+T_-^3\beta+bI-T_-^2D-\eta)T_-^2\gamma+ (T_-^2D)(T_-\gamma+T_-^2\beta^2+bT_-^2\beta)-({T_-}\beta+T_-^2\beta+bI) T_-(2D^{[2]}-\pi^{[2]}) \end{multlined}\\ =& \begin{multlined}[t][\textwidth]-(T_-^2\beta+T_-^3\beta+bI-T_-^2D-\eta)T_-^2\gamma+(T_-^2D)(T_-\gamma+T_-^2\beta^2+bT_-^2\beta)\\-({T_-}\beta+T_-^2\beta+bI) T_-\big(2D^{[2]}- \eta^{-1}(T_-\beta+T_-^2\beta+bI-D-\eta) T_-\gamma+({T_-}D)({T_-}\beta+a I) \big). 
\end{multlined} \end{align*} Thus, we get the following equation \begin{align*} 0=&3D^{[3]}(T_-\beta-T_-^3\beta+I)-\big(\eta^{-1}T_-^2\gamma -T_-^2D\big)T_-\gamma\\&+2(T_-D^{[2]}) \Big(\eta^{-1}(T_-\beta+T_-^2\beta+bI-D-\eta) T_-\gamma-(T_-D)(T_-\beta+a I) \Big) \\&-2D^{[2]}\Big(\eta^{-1}(T_-^3\beta+T_-^4\beta+bI-T_-^2D-\eta) T_-^3\gamma-(T_-^3D)(T_-^3\beta+a I)\Big)\\ & +(T_-\beta+a I) \big( \eta^{-1}(T_-^2\beta+T_-^3\beta+bI-T_-D-\eta)T_-^2\gamma - (T_-^2D) (T_-^2\beta+a I) \big) \\ &+(T_-^2\beta+T_-^3\beta+bI-T_-^2D-\eta)T_-^2\gamma-(T_-^2D)(T_-\gamma+T_-^2\beta^2+bT_-^2\beta)\\&+({T_-}\beta+T_-^2\beta+bI) T_-\big(2D^{[2]}- \eta^{-1}(T_-\beta+T_-^2\beta+bI-D-\eta) T_-\gamma+({T_-}D)({T_-}\beta+a I) \big). \end{align*} Hence, component-wise we get \eqref{eq:meixner_laguerre_freud_larga}. \end{proof} \begin{rem} Notice that the long and cumbersome expression in \eqref{eq:meixner_laguerre_freud_larga} is a constraint linking the recursion coefficients \begin{align*} (\beta_{n+4},\beta_{n+2},\beta_{n+1},\beta_n,\gamma_{n+4},\gamma_{n+3},\gamma_{n+2}). \end{align*} We just wanted to show that the method gives these equations. Indeed, \eqref{eq:Laguerre-Freud-Meixner1} is a better Laguerre--Freud equation for the $\beta$'s. However, notice that \eqref{eq:Laguerre-Freud-Meixner1} implies an expression of $\beta_{n+1}$ in terms of $(\beta_{n-1},\beta_n,\gamma_{n+1},\gamma_n)$ while \eqref{eq:laguerre-freud-meixner-2} involves an expression for $\beta_{n+1}$ in terms of $(\beta_{n-1},\beta_n,\gamma_n,\gamma_{n+1})$. In both cases we need to go two steps down, and we say that we have \emph{length two relations}. However, we can mix both Laguerre--Freud equations to find an \emph{improved} expression for $\beta_{n+1}$ involving only one step down, i.e. $(\beta_n,\gamma_n,\gamma_{n+1})$, a \emph{length one relation}. A similar reasoning holds for \eqref{eq:laguerre-freud-meixner-3}.
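The notion of length can be seen in action numerically: \eqref{eq:Laguerre-Freud-Meixner1} predicts $\gamma_{n+1}$ from the window $(\beta_{n-1},\beta_n,\gamma_n)$, two steps down in the $\beta$'s. A sketch (our own helper names; exact rationals on a truncated support, so the agreement is up to truncation only):

```python
from fractions import Fraction as F

def meixner_weight(a, b, eta, N):
    # Pearson step: (k+1)(k+1+b) w(k+1) = eta (k+a) w(k), with w(0) = 1
    w = [F(1)]
    for k in range(N):
        w.append(w[-1] * eta * (a + k) / ((k + 1) * (b + k + 1)))
    return w

def stieltjes(w, nmax):
    # discrete Stieltjes procedure for the monic three-term recurrence
    x = [F(k) for k in range(len(w))]
    p_old, p = [F(0)] * len(w), [F(1)] * len(w)
    beta, gamma, h_old = [], [F(0)], None
    for n in range(nmax + 1):
        h = sum(wk * pk * pk for wk, pk in zip(w, p))
        beta.append(sum(wk * xk * pk * pk for wk, xk, pk in zip(w, x, p)) / h)
        if n > 0:
            gamma.append(h / h_old)
        p, p_old = [(xk - beta[n]) * pk - gamma[n] * qk
                    for xk, pk, qk in zip(x, p, p_old)], p
        h_old = h
    return beta, gamma

a, b, eta = F(3, 2), F(1, 2), F(1, 2)
beta, gamma = stieltjes(meixner_weight(a, b, eta, 80), 6)

# gamma_{n+1} predicted by (Laguerre-Freud-Meixner1) from (beta_{n-1}, beta_n, gamma_n), n >= 1
pred = {n: -gamma[n] - (beta[n] + b - n) * beta[n] + n * (b - a - n + 1)
           + eta * (beta[n] + a + n)
           + (beta[n - 1] + beta[n] + b - n + 1 - eta) * gamma[n] / eta
        for n in range(1, 5)}
gap = max(abs(pred[n] - gamma[n + 1]) for n in range(1, 5))
```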
\end{rem} \begin{pro} The generalized Meixner recursion coefficients are subject to the following length one Laguerre--Freud equation \begin{multline}\label{eq:Meixner_Laguerre_Freud_step1} \beta_{n+1}= \frac{1}{\gamma_{n+1}}\Big(\eta \big(\gamma_{n}+(\beta_{n}+b-n)\beta_{n}-n(b-a-n+1) \big)+\eta(1-\eta)(\beta_{n}+a+n)\Big)-\beta_n-b+n+2\eta. \end{multline} \end{pro} \begin{proof} We write \eqref{eq:Laguerre-Freud-Meixner1} as follows \begin{multline*} (\beta_{n-1}+\beta_{n}+b-n+1-\eta) \gamma_{n}= \eta \big(\gamma_{n+1}+\gamma_{n}+(\beta_{n}+b-n)\beta_{n}-n(b-a-n+1) \big)-\eta^2(\beta_{n}+a+n) \end{multline*} that, when introduced in \eqref{eq:laguerre-freud-meixner-2}, delivers \begin{align*} (\beta_n+\beta_{n+1}+b-n-\eta)\gamma_{n+1}=&(\beta_{n-1}+\beta_n+b-n+1-\eta)\gamma_n+\eta(\beta_n+a+n)\\=& \begin{multlined}[t][0.65\textwidth]\eta \big(\gamma_{n+1}+\gamma_{n}+(\beta_{n}+b-n)\beta_{n}-n(b-a-n+1) \big) +\eta(1-\eta)(\beta_{n}+a+n), \end{multlined} \end{align*} and we obtain the Laguerre--Freud equation \eqref{eq:Meixner_Laguerre_Freud_step1}. Let us shift by $n\to n+1$ the relation \eqref{eq:Laguerre-Freud-Meixner1} \begin{multline*} \gamma_{n+2}= -\gamma_{n+1}-(\beta_{n+1}+b-n-1)\beta_{n+1}+(n+1)(b-a-n) \\+\eta(\beta_{n+1}+a+n+1)+\eta^{-1}(\beta_{n}+\beta_{n+1}+b-n-\eta) \gamma_{n+1}, \end{multline*} and introduce the expression for $\gamma_{n+2}$ into \eqref{eq:laguerre-freud-meixner-3}.
In doing so we obtain \begin{multline*} \gamma_n+2\eta -(\beta_n+\beta_{n+1}+b-n-\eta)(\beta_{n+1}-\beta_n-1)=-(\beta_{n+1}+b-n-1)\beta_{n+1}+(n+1)(b-a-n) \\+\eta(\beta_{n+1}+a+n+1)+\eta^{-1}(\beta_{n}+\beta_{n+1}+b-n-2\eta) \gamma_{n+1}, \end{multline*} but \eqref{eq:Meixner_Laguerre_Freud_step1} can be written \begin{multline*} \gamma_{n+1}(\beta_{n+1}+\beta_n+b-n-2\eta)= \eta \big(\gamma_{n}+(\beta_{n}+b-n)\beta_{n}-n(b-a-n+1) \big)+\eta(1-\eta)(\beta_{n}+a+n), \end{multline*} so that \begin{multline*} \gamma_n+2\eta -(\beta_n+\beta_{n+1}+b-n-\eta)(\beta_{n+1}-\beta_n-1)=-(\beta_{n+1}+b-n-1)\beta_{n+1}+(n+1)(b-a-n) \\+\eta(\beta_{n+1}+a+n+1)+\big(\gamma_{n}+(\beta_{n}+b-n)\beta_{n}-n(b-a-n+1) \big)\\+(1-\eta)(\beta_{n}+a+n). \end{multline*} \end{proof} Our \emph{best} Laguerre--Freud equations are \eqref{eq:Meixner_Laguerre_Freud_step1} and \eqref{eq:Laguerre-Freud-Meixner1}, having lengths one and two, respectively. For the reader's convenience we reproduce them again as a theorem. \begin{theorem}[Laguerre--Freud equations for generalized Meixner]\label{teo:Laguerre-Freud generalized Meixner} The generalized Meixner recursion coefficients satisfy the following Laguerre--Freud relations \begin{align*} \beta_{n+1}&=\begin{multlined}[t][0.75\textwidth] \frac{1}{\gamma_{n+1}}\Big(\eta \big(\gamma_{n}+(\beta_{n}+b-n)\beta_{n}-n(b-a-n+1) \big)+\eta(1-\eta)(\beta_{n}+a+n)\Big)-\beta_n-b+n+2\eta, \end{multlined}\\ \gamma_{n+1}&=\begin{multlined}[t][0.75\textwidth] -\gamma_{n}-(\beta_{n}+b-n)\beta_{n}+n(b-a-n+1) +\eta(\beta_{n}+a+n)+\eta^{-1}(\beta_{n-1}+\beta_{n}+b-n+1-\eta) \gamma_{n}. \end{multlined} \end{align*} \end{theorem} \begin{rem} In \cite{smet_vanassche} a system of Laguerre--Freud equations was presented.
There are two functions $u_n,v_n$ that are linked with $\beta_n,\gamma_n$ by \begin{align*} \gamma_n&=n\eta-(a-1)u_n,& \beta_n&=n+a-b-1+\eta-\frac{(a-1)v_n}{\eta}, \end{align*} and the nonlinear system, see equations (3.2) and (3.3) in \cite{smet_vanassche}, is \begin{align*} (u_n+v_n)(u_{n+1}+v_n)&=\frac{a-1}{\eta^2}v_n(v_n-\eta)\Big(v_n-\eta\frac{a-b-1}{a-1}\Big),\\ (u_n+v_n)(u_{n}+v_{n-1})&=\frac{u_n}{u_n-\frac{\eta n}{a-1}} (u_n+\eta)\Big(u_n+\eta\frac{a-b-1}{a-1}\Big). \end{align*} Observe that the first relation, (3.2) in \cite{smet_vanassche}, gives $\gamma_{n+1}$ as a rational function of $(\gamma_{n},\beta_n)$, a length one relation. The rational function is a cubic polynomial divided by a linear function of the recursion coefficients. Instead, our \eqref{eq:Laguerre-Freud-Meixner1} is a length two relation, but it is a quadratic polynomial in the recursion coefficients, not a rational function involving cubic polynomials as is (3.2). The second relation, (3.3) in \cite{smet_vanassche}, gives $\beta_{n+1}$ in terms of a rational function of $(\gamma_{n+1},\beta_n)$, a length one relation. Now the rational function is a cubic polynomial divided by a quadratic one. Relation \eqref{eq:Meixner_Laguerre_Freud_step1} is length one as well, but our rational function is a quadratic polynomial divided by a linear one.\end{rem} \begin{rem} A nice feature of the Smet--Van Assche system is that the particular value $a=1$ provides the explicit expression $\beta_n=n-b+\eta$ and $\gamma_n=n\eta$. Indeed, one can check using SageMath 9.0 that \eqref{eq:laguerre-freud-meixner-2}, \eqref{eq:laguerre-freud-meixner-3}, \eqref{eq:Laguerre-Freud-Meixner1} and \eqref{eq:meixner_laguerre_freud_larga} are satisfied.
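A computer algebra system is not strictly needed for that kind of verification; with exact rational arithmetic one can plug the candidate closed form, as a formal solution, directly into \eqref{eq:laguerre-freud-meixner-2}, \eqref{eq:laguerre-freud-meixner-3} and \eqref{eq:Laguerre-Freud-Meixner1}. A sketch (our own function names; all residuals vanish identically):

```python
from fractions import Fraction as F

def residuals(b, eta, nmax):
    """Residuals of the Laguerre--Freud relations on the candidate a = 1 closed form."""
    a = F(1)
    beta = lambda n: n - b + eta   # candidate beta_n = n - b + eta
    gamma = lambda n: n * eta      # candidate gamma_n = n eta
    out = []
    for n in range(1, nmax + 1):
        # (laguerre-freud-meixner-2)
        out.append((beta(n) + beta(n + 1) + b - n - eta) * gamma(n + 1)
                   - (beta(n - 1) + beta(n) + b - n + 1 - eta) * gamma(n)
                   - eta * (beta(n) + a + n))
        # (laguerre-freud-meixner-3)
        out.append(gamma(n + 2) - gamma(n) - 2 * eta
                   + (beta(n) + beta(n + 1) + b - n - eta) * (beta(n + 1) - beta(n) - 1))
        # (Laguerre-Freud-Meixner1)
        out.append(gamma(n + 1) + gamma(n) + (beta(n) + b - n) * beta(n)
                   - n * (b - a - n + 1) - eta * (beta(n) + a + n)
                   - (beta(n - 1) + beta(n) + b - n + 1 - eta) * gamma(n) / eta)
    return out

res = residuals(F(2, 3), F(5, 7), 8)
```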
For $a=1$, the weight is $ w(z)=\dfrac{\eta^z}{(b+1)_z}$ and the $0$-th moment is given by the Kummer function $\rho_0= M(1,b+1,\eta)=\frac{b}{\eta^b} \Exp{\eta} (\Gamma(b) - \Gamma(b,\eta))$, where $\Gamma(b,\eta)=\int_\eta^\infty t^{b-1}\Exp{-t}\d t$ is the incomplete Gamma function. According to \cite{filipuk_vanassche1}, it corresponds to the Charlier case on the shifted lattice $\mathbb{N}-b$. \end{rem} \section{Generalized Hahn of type I weights}\label{S:Hahn} The choice $\sigma(z)=\eta(z+a)(z+b)$ and $\theta(z)=z(z+c)$ leads to the following Pearson equation \begin{align*} (k+1)(k+1+c)w(k+1)=\eta(k+a)(k+b)w(k) \end{align*} whose solutions are proportional to $w(z)=\frac{(a)_z(b)_z}{(c+1)_z}\frac{\eta^z}{z!}$. According to \cite{diego,diego_paco} this is the generalized Hahn weight of type I. The $0$-th moment is $ \rho_0={}_2F_1\left[\!\!{\begin{array}{c}a,b \\c+1\end{array}};\eta\right]$, i.e., the Gauss hypergeometric function. For $\eta=1$, Hahn introduced these discrete orthogonal polynomials in \cite{Hahn}. The standard \emph{Hahn polynomials} considered in the literature take $a=\alpha+1$, $b=-N$ and $c=-N-1-\beta$, with $N$ a positive integer. \enlargethispage*{1cm} \begin{theorem}[The generalized Hahn Laguerre--Freud structure matrix]\label{teo:Hahn} For a generalized Hahn of type I weight; i.e., $\sigma=\eta(z+a)(z+b)$ and $\theta=z(z+c)$, we find \begin{align} \label{eq:Hahn1_p} p^1_{n+1}&=\begin{multlined}[t][.85\textwidth] (n+2)\beta_{n+2}+\beta_{n+1}+\frac{\eta-1}{\eta+1}\Big(\gamma_{n+3}+\gamma_{n+2}+\beta^2_{n+2}+\frac{(n+2)(n+1)}{2}\Big)\\+\frac{1}{\eta+1}\big(\eta ab+(\eta(a+b)-c)\beta_{n+2}+(\eta(a+b)+c)(n+2)\big), \end{multlined}\\ \label{eq:Hahn1_pi} \pi^{[2]}_n&=\begin{multlined}[t][.85\textwidth]\frac{1}{1+\eta}\big((n+2)(n+1)-\eta ab-(\eta(a+b)-c)\beta_{n+2}-(\eta(a+b)+c)(n+2)\big)\\+\frac{1-\eta}{1+\eta}(\gamma_{n+3}+\gamma_{n+2}+\beta^2_{n+2})-(n+2)(\beta_{n+2}+\beta_{n+1}).
\end{multlined} \end{align} The Laguerre--Freud structure matrix is { \begin{align}\label{eq:Laguerre-Freud-Hahn-structure} \hspace*{-.5cm} \Psi&=\left(\begin{NiceMatrix}[small,columns-width = 1cm ] \cellcolor{Gray!10} \eta(\gamma_1+(\beta_0+a)(\beta_0+b) )H_0& (\beta_0+\beta_1+c)H_1& H_2& 0 &\Cdots&\\ \eta (\beta_0+\beta_1+a+b)H_1&\cellcolor{Gray!10}\scriptsize\begin{matrix}\eta (\gamma_1+\gamma_2+(\beta_1+a)(\beta_1+b) \\+\beta_0+\beta_1+a+b)H_1 \end{matrix}& (\beta_1+\beta_2+c-1)H_2&H_3&\Ddots& \\ \eta H_2 & \eta (\beta_1+\beta_2+a+b+1)H_2&\cellcolor{Gray!10}\scriptsize\begin{matrix} \eta (\gamma_2+\gamma_3+(\beta_2+a)(\beta_2+b) \\+2(\beta_1+\beta_2+a+b)+\pi^{[2]}_0)H_2 \end{matrix}&(\beta_2+\beta_3+c-2)H_3&\Ddots& \\[5pt] 0&\eta H_3 &\eta (\beta_2+\beta_3+a+b+2)H_3&\cellcolor{Gray!10}\scriptsize\begin{matrix} \eta (\gamma_3+\gamma_4+(\beta_3+a)(\beta_3+b) \\+3(\beta_2+\beta_3+a+b)+\pi^{[2]}_1)H_3 \end{matrix}&\Ddots& \\ \Vdots &\Ddots & \Ddots&\Ddots&\Ddots& \end{NiceMatrix}\right). \end{align} } \end{theorem} \begin{proof} In this case, given the degrees of the polynomials $\sigma$ and $\theta$, the Laguerre--Freud matrix has the following diagonal structure $\Psi=(\Lambda^{\top})^2\psi^{(-2)}+\Lambda^{\top}\psi^{(-1)}+\psi^{(0)}+\psi^{(1)}\Lambda+\psi^{(2)}\Lambda^2$. That is, it has two subdiagonals, two superdiagonals and the main diagonal.
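Every band computed below ultimately rests on the Pearson equation for the weight. A quick numerical cross-check of that input is the summation-by-parts identity $\sum_{k\geq0}\sigma(k)f(k)w(k)=\sum_{k\geq0}\theta(k)f(k-1)w(k)$, valid for any polynomial $f$ since $\theta(0)=0$ and the sum telescopes. A sketch (our own variable names; exact rationals on a truncated support, with $0<\eta<1$ so the tail is negligible):

```python
from fractions import Fraction as F

a, b, c, eta = F(3, 2), F(1, 2), F(1, 4), F(1, 3)
sigma = lambda z: eta * (z + a) * (z + b)
theta = lambda z: z * (z + c)

# generalized Hahn type I weight from theta(k+1) w(k+1) = sigma(k) w(k), w(0) = 1
N = 300
w = [F(1)]
for k in range(N):
    w.append(w[-1] * sigma(k) / theta(k + 1))

f = lambda z: z * z + 3 * z + 1  # an arbitrary polynomial test function

lhs = sum(sigma(k) * f(k) * w[k] for k in range(N + 1))
rhs = sum(theta(k) * f(k - 1) * w[k] for k in range(N + 1))
# the truncated difference is exactly the boundary term sigma(N) f(N) w(N)
gap = abs(lhs - rhs)
```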
In the one hand, from $\Psi=\sigma(J)H\Pi^{\top}$, we get for the Laguerre-Freud structure matrix \begin{multline}\label{eq:Psi_Hahn_1} \Psi=\underset{\text{second subdiagonal}}{\underbrace{\eta\Lambda^{\top}\gamma\Lambda^{\top}\gamma H}}+\underset{\text{first subdiagonal}}{\underbrace{\eta(\Lambda^{\top}\gamma\Lambda^{\top}\gamma HD\Lambda+\Lambda^{\top}\gamma(\beta +b)H+(\beta +a)\Lambda^{\top}\gamma H)}}\\+\underset{\text{main diagonal}}{\underbrace{\eta\big(\Lambda^{\top}\gamma\Lambda H+\Lambda\Lambda^{\top}\gamma H+(\beta +a)(\beta +b)H+(\Lambda^{\top}\gamma(\beta +b)+(\beta +a)\Lambda^{\top}\gamma)HD\Lambda+\Lambda^{\top}\gamma\Lambda^{\top}\gamma H\pi^{[2]}\Lambda^2\big)}}\\+\begin{multlined}[t][.8\textwidth] \underset{\text{first superdiagonal}}{\underbrace{\eta \Big(\big((\beta +a)(\beta +b)+\Lambda^{\top}\gamma\Lambda+\Lambda\Lambda^{\top}\gamma\big)HD\Lambda+(\beta +a)\Lambda H+\Lambda(\beta +b)H}}\\\underset{\text{first superdiagonal}}{\underbrace{+\Lambda^{\top}(\beta +b)H\pi^{[2]}\Lambda^2+(\beta +a)\Lambda^{\top}H\pi^{[2]}\Lambda^2+\Lambda^{\top}\gamma\Lambda^{\top}\gamma H\pi^{[3]}\Lambda^3\Big) }} \end{multlined}\\+\begin{multlined}[t][.7\textwidth] \underset{\text{second superdiagonal}}{\underbrace{ \eta\Big(\Lambda^{\top}\gamma\Lambda^{\top}\gamma H\pi^{[4]}\Lambda^4+\big(\Lambda^{\top}\gamma(\beta +b)+(\beta +a)\Lambda^{\top}\gamma\big)H\pi^{[3]}\Lambda^3}}\\\underset{\text{second superdiagonal}}{\underbrace{+\big(\Lambda^{\top}\gamma\Lambda+\Lambda\Lambda^{\top}\gamma+(\beta +a)(\beta +b)\big)H\pi^{[2]}\Lambda^2+\big(\Lambda(\beta +b)+(\beta +a)\Lambda\big)HD\Lambda+\Lambda^2H\Big)}}. 
\end{multlined} \end{multline} In the other hand, from $\Psi=\Pi^{-1}H\theta(J^{\top})$ we deduce \begin{multline}\label{eq:Psi_Hahn_2} \Psi=\begin{multlined}[t][.8\textwidth] \underset{\text{second subdiagonal}}{\underbrace{H(\Lambda^\top)^2+\Lambda^\top DH\Lambda^\top(\beta+T_{-}\beta+c)+(\Lambda^\top)^2\pi^{[-2]}H(\gamma+T_{+}\gamma+\beta^2+c\beta)}}\\\underset{\text{second subdiagonal}}{\underbrace{-(\Lambda^\top)^3\pi^{[-3]}H(\beta+T_{-}\beta+c)\gamma\Lambda+(\Lambda^\top)^4\pi^{[-4]}H(T_{-}\gamma)\gamma\Lambda^2}} \end{multlined}\\+\underset{\text{first subdiagonal}}{\underbrace{H\Lambda^\top(\beta+T_{-}\beta+c)-\Lambda^\top DH(\gamma+T_{+}\gamma+\beta^2+c\beta)+(\Lambda^\top)^2\pi^{[-2]}H(\beta+T_{-}\beta+c) \gamma\Lambda-(\Lambda^\top)^3\pi^{[-3]}H(T_{-}\gamma)\gamma\Lambda^2}}\\ +\underset{\text{main diagonal}}{\underbrace{H(\gamma+T_{+}\gamma+\beta^2+c\beta)-\Lambda^\top DH(\beta+T_{-}\beta+c)\gamma\Lambda+(\Lambda^\top)^2\pi^{[-2]}H(T_{-}\gamma)\gamma\Lambda^2}}\\ +\underset{\text{first superdiagonal}}{\underbrace{H(\beta+T_{-}\beta+c)\gamma\Lambda-\Lambda^\top DH(T_{-}\gamma)\gamma\Lambda^2 }}+\underset{\text{second superdiagonal}}{\underbrace{H(T_{-}\gamma)\gamma\Lambda^2}} \end{multline} From \eqref{eq:Psi_Hahn_1} we get the first two subdiagonals of $\Psi$, namely \begin{align*} \psi^{(-2)}&=\eta H\gamma T_-\gamma, & \psi^{(-1)}&=\eta\gamma T_+\gamma T_+H+\gamma(\beta+b)H+(T_-\beta+a)\gamma H, \end{align*} and from \eqref{eq:Psi_Hahn_2} we get the first two superdiagonals of $\Psi$, namely \begin{align*} \psi^{(1)}&= (\beta+T_{-}\beta+c)\gamma-( T_+D)(T_+H)\gamma T_+\gamma, & \psi^{(2)}&=H(T_{-}\gamma)\gamma. 
\end{align*} We can obtain an expression for the main diagonal that does not depend on $\pi^{[2]}$ or $\pi^{[-2]}$ by equating the terms corresponding to the main diagonal in both previous expressions \eqref{eq:Psi_Hahn_1} and \eqref{eq:Psi_Hahn_2}, and we obtain \begin{multline*} (\eta-1)(T_{+}\gamma+\gamma+\beta^2)+\beta[\eta(a+b)-c]+\eta ab+(\eta+1)(T_{+}D)[T_{+}\beta+\beta]+(T_{+}D)[\eta(a+b)+c]\\=T_{+}^2(\pi^{[-2]}-\eta\pi^{[2]}). \end{multline*} Recalling \eqref{eq:pis}, i.e. $\pi^{[\pm 2]}_{n}=\frac{(n+2)(n+1)}{2}\mp(n+1)\beta_{n+1}\mp p_{n+1}^1$, and substituting in the previous expression we can clear $p^1_{n+1}$, and subsequently replace it in the expression of $\pi^{[2]}$ so that the main diagonal no longer depends on it. Reading element by element the first equality and clearing for $p^1_{n+1}$, we obtain \begin{multline*} p^1_{n+1}=(n+2)\beta_{n+2}+\beta_{n+1}+\frac{\eta-1}{\eta+1}\Big(\gamma_{n+3}+\gamma_{n+2}+\beta^2_{n+2}+\frac{(n+2)(n+1)}{2}\Big)\\+\frac{1}{\eta+1}\big(\eta ab+(\eta(a+b)-c)\beta_{n+2}+(\eta(a+b)+c)(n+2)\big), \end{multline*} and the entries of the diagonal matrix $\pi^{[2]}$ are \begin{multline*} \pi^{[2]}_n=\frac{1}{1+\eta}\big((n+2)(n+1)-\eta ab-(\eta(a+b)-c)\beta_{n+2}-(\eta(a+b)+c)(n+2)\big)\\+\frac{1-\eta}{1+\eta}(\gamma_{n+3}+\gamma_{n+2}+\beta^2_{n+2})-(n+2)(\beta_{n+2}+\beta_{n+1}). \end{multline*} Simplifying, we obtain the Laguerre--Freud matrix \begin{multline*} \Psi=\eta(\Lambda^{\top})^2T_{-}^2 H+ \eta\Lambda^{\top}\big(a+b+T_{+}D+\beta+T_{-}\beta\big)T_-H\\+\eta\big(T_{+}\gamma+\gamma+(\beta+a)(\beta+b) +T_{+}(D)(a+b+T_{+}\beta+\beta)+T_{+}^2\pi^{[2]}\big)H\\ + (c-T_+D+\beta+T_{-}\beta)T_-H\Lambda+T_{-}^2H\Lambda^2. 
\end{multline*} \end{proof} Let us analyze the compatibility $[\Psi H^{-1},J]=\Psi H^{-1}$, \begin{theorem}[Laguerre--Freud equations for generalized type I Hahn]\label{teo:compatibilityI_Hahn} The generalized Hahn of type I recursion coefficients satisfy the following Laguerre--Freud relations \begin{subequations} \begin{gather} \label{eq:Hahn_compatibility_1} \begin{multlined}[t][.9\textwidth] (\eta^2-1)\big((\beta_{n+1}+\beta_n)\gamma_{n+1}-(\beta_{n-1}+\beta_n)\gamma_n\big) \\+\eta \big(\beta_n(2\beta_n+a+b+c)+2(\gamma_{n+1}+\gamma_n)+n(a+b-c+n-1)+ab\big)\\ +(\eta+1)\big(\big(\eta(a+b)-c-(\eta+1)n\big)(\gamma_{n+1}-\gamma_n)+(\eta+1)\gamma_n\big)=0, \end{multlined}\\ \label{eq:Hahn_compatibility_2} \begin{multlined}[t][.9\textwidth] (\eta+1)((n-1)\beta_n+(n+1)\beta_{n+1})+(\eta-1)(\gamma_{n+2}-\gamma_n+\beta_{n+1}^2-\beta_n^2+n)\\+(\eta(a+b)-c)(\beta_{n+1}-\beta_n)+\eta(a+b)+c=0 . \end{multlined} \end{gather} \end{subequations} \end{theorem} \begin{proof} We analyze the compatibility $[\Psi H^{-1},J]=\Psi H^{-1}$ by diagonals. In both sides of the equation we find matrices whose only non-zero diagonals are the main diagonal, the first and second subdiagonals and the first and second superdiagonals. Equating the non-zero diagonals of both matrices, two identities for the second superdiagonal and subdiagonal are obtained. From the remaining diagonals we obtain the two Laguerre--Freud equations (we obtain the same equality from the first subdiagonal and from the first superdiagonal). Firstly, by simplifying we obtain that: \begin{multline*} \Psi H^{-1}= \eta(\Lambda^{\top})^2(T_{-}\gamma)\gamma+\eta\Lambda^{\top}\gamma(T_{+}(D)+(\beta+b)+T_{-}(\beta+a))\\+\frac{\eta}{\eta+1}\big(2(T_+\gamma+\gamma+\beta^2)+(a+b+c)\beta+ab+T_+D(a+b-c)+T_+DT_+^2D\big)\\+(\beta+T_{-}\beta+c-T_{+}( D))\Lambda+\Lambda^2. 
\end{multline*} From the main diagonal, clearing, we obtain: \begin{multline*} (1-\eta)\beta_{n-1}\gamma_n+(\eta-1)\beta_{n+1}\gamma_{n+1}+\beta_n(\frac{\eta}{\eta+1}(2\beta_n+a+b+c)+(\eta-1)(\gamma_{n+1}-\gamma_n))+\\(\frac{\eta}{\eta+1}(2(\gamma_{n+1}+\gamma_n)-nc+n(a+b+n-1)+ab)-(\eta+1)(n(\gamma_n-\gamma_{n+1})-\gamma_n)+(\eta(a+b)-c)(\gamma_{n+1}-\gamma_n))=0 \end{multline*} and we get Equation \eqref{eq:Hahn_compatibility_1}. From the first superdiagonal and from the first subdiagonal we get \begin{multline*} (\eta+1)(\beta_n+\beta_{n+1}+n(\beta_{n+1}-\beta_n))+(\eta-1)(\gamma_{n+2}-\gamma_n+\beta_{n+1}^2-\beta_n^2+n)+c(1+\beta_n-\beta_{n+1})\\+\eta(a+b)(1+\beta_{n+1}-\beta_n)=0 \end{multline*} that leads to \eqref{eq:Hahn_compatibility_2}. \end{proof} We now proceed with the compatibility $[\Psi H^{-1},J_{-}]=\vartheta_{\eta}(\Psi H^{-1})$; recall that $J_{-}:=\Lambda^{\top}\gamma$ and $\vartheta_{\eta}=\eta \frac{d}{d \eta}$. As we will see, we get no further equations than those already obtained in Theorem \ref{teo:compatibilityI_Hahn}. 
\begin{pro} The recursion coefficients for the generalized Hahn of type I orthogonal polynomials satisfy \begin{subequations} \begin{gather}\label{eq:Hahn_compatibiliyII_1} \vartheta_{\eta}(\beta_{n}+\beta_{n+1}+c-n)=\gamma_{n+2}-\gamma_{n},\\\label{eq:Hahn_compatibiliyII_2} \begin{multlined}[t][0.85\textwidth] \vartheta_{\eta}\Bigl(\frac{\eta}{\eta+1}(2(\gamma_{n+1}+\gamma_n+\beta_n^2)+c(\beta_n-n)+n(n-1)+(a+b)(\beta_n+n)+ab)\Bigr)=\\\gamma_{n+1}(\beta_{n}+\beta_{n+1}+c-n)-\gamma_{n}(\beta_{n-1}+\beta_{n}+c-(n-1)), \end{multlined}\\\label{eq:Hahn_compatibiliyII_3} \begin{multlined}[t][.9\textwidth] \vartheta_{\eta}(\eta\gamma_{n+1}(n+a+b+\beta_n+\beta_{n+1}))=\\\frac{\eta}{\eta+1}\gamma_{n+1}(2(\gamma_{n+2}-\gamma_n+\beta_{n+1}^2-\beta_n^2)+(a+b+c)(\beta_{n+1}-\beta_n)+2n+(a+b-c)) \end{multlined}\\\label{eq:Hahn_compatibiliyII_4} \begin{multlined}[t][.9\textwidth] \vartheta_{\eta}(\eta\gamma_{n+1}\gamma_{n+2})=\eta\gamma_{n+1}\gamma_{n+2}(\beta_{n+2}-\beta_n+1). \end{multlined} \end{gather} \end{subequations} \end{pro} \begin{proof} From the diagonals of $[\Psi H^{-1},J_{-}]=\vartheta_{\eta}(\Psi H^{-1})$ we get: \begin{enumerate} \item From the first superdiagonal we obtain \eqref{eq:Hahn_compatibiliyII_1}. \item From the main diagonal, clearing up, we get \eqref{eq:Hahn_compatibiliyII_2}. \item From the first subdiagonal we get, simplifying, \eqref{eq:Hahn_compatibiliyII_3}. \item Finally, from the second subdiagonal we get \eqref{eq:Hahn_compatibiliyII_4}. \end{enumerate} \end{proof} \begin{rem} We see that \eqref{eq:Hahn_compatibiliyII_1} follows from the Toda equation \eqref{eq:Toda_system_beta} and \eqref{eq:Hahn_compatibiliyII_4} follows from the Toda equation \eqref{eq:Toda_system_gamma}. Moreover, the two remaining equations (from the main diagonal and the first subdiagonal) are those of Theorem \ref{teo:compatibilityI_Hahn}. 
\end{rem} \begin{rem} Dominici in \cite[Theorem 4]{diego} found the following Laguerre--Freud equations \begin{align*} (1-\eta)\nabla(\gamma_{n+1}+\gamma_n)&=\eta v_n\nabla(\beta_n+n)-u_n\nabla(\beta_n-n),\\ \Delta\nabla(u_n-\eta v_n)\gamma_n&=u_n\nabla (\beta_n-n)+\nabla (\gamma_{n+1}+\gamma_n), \end{align*} with $u_n:=\beta_n+\beta_{n+1}-n+c+1$ and $v_n:=\beta_n+\beta_{n-1}+n-1+a+b$. Therefore, the first one, of type $\gamma_{n+1} =F_1 (n,\gamma_n,\gamma_{n-1},\beta_n,\beta_{n-1})$, is of length two, and the second, of the form $\beta_{n+1}= F_2 (n,\gamma_{n+1},\gamma_n,\gamma_{n-1},\beta_n,\beta_{n-1},\beta_{n-2})$, is of length three. \end{rem} \begin{rem} Filipuk and Van Assche in \cite[Equations (3.6) and (3.9)]{filipuk_vanassche2} introduce new non-local variables $(x_n,y_n)$, \begin{align*} \beta_n&=x_n+\frac{n+(n+a+b)\eta-c-1}{1-\eta},\\ \frac{1-\eta}{\eta}\gamma_n&=y_n+\sum_{k=0}^{n-1}x_k+\frac{n(n+a+b-c-2)}{1-\eta}. \end{align*} Then, in \cite[Theorem 3.1]{filipuk_vanassche2} Equations (3.13) and (3.14) for $(x_n,y_n)$ are found, of length $0$ and $1$ respectively, in the new variables. Recall that these new variables are non-local and involve all the previous recursion coefficients. In this respect, the meaning of length is not so clear. The nice feature in this case is that \cite[Equations (3.13) and (3.14)]{filipuk_vanassche2} is a discrete Painlevé equation which, combined with the Toda equations, leads to a differential system for the new variables $x_n$ and $y_n$ that after a suitable transformation can be reduced to the Painlevé VI $\sigma$-equation. Very recently, it has been shown in \cite{Dzhamay_Filipuk_Stokes} that this system is equivalent to $dP(D^{(1)}_4/D^{(1)}_4)$, known as the difference Painlevé V. 
\end{rem} \section*{Conclusions and outlook} In their studies of integrable systems and orthogonal polynomials, Adler and van Moerbeke have thoroughly used the Gauss--Borel factorization of the moment matrix, see \cite{adler_moerbeke_1,adler_moerbeke_2,adler_moerbeke_4}. This strategy has been extended and applied by us in different contexts: CMV orthogonal polynomials, matrix orthogonal polynomials, multiple orthogonal polynomials and multivariate orthogonal polynomials, see \cite{am,afm,nuestro0,nuestro1,nuestro2,ariznabarreta_manas0,ariznabarreta_manas2,ariznabarreta_manas_toledano}. For a general overview see \cite{intro}. Recently, see \cite{Manas_Fernandez-Irrisarri}, we extended those ideas to the discrete world. In particular, we applied that approach to the study of the consequences of the Pearson equation on the moment matrix and Jacobi matrices. For that description a new banded matrix is required, the Laguerre--Freud structure matrix, which encodes the Laguerre--Freud relations for the recurrence coefficients. We have also found that the contiguous relations fulfilled by the generalized hypergeometric functions determining the moments of the weight lead, for the squared norms of the orthogonal polynomials, to a discrete Toda hierarchy known as the Nijhoff--Capel equation, see \cite{nijhoff}. In \cite{Manas} we study the role of Christoffel and Geronimus transformations for the description of the mentioned contiguous relations, and the use of the Geronimus--Christoffel transformations to characterize the shifts in the spectral independent variable of the orthogonal polynomials. In this paper we have deepened that program, searching further into the discrete semi-classical cases and finding Laguerre--Freud relations for the recursion coefficients of three types of discrete orthogonal polynomials: the generalized Charlier, generalized Meixner and generalized Hahn of type I cases. 
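Schematically, and following the general overview in \cite{intro}, the strategy relies on the factorization of the moment matrix $G$ of the weight (assuming all its leading principal minors do not vanish) as \begin{displaymath} G=S^{-1}H\big(S^{-1}\big)^{\top}, \end{displaymath} with $S$ lower unitriangular and $H$ diagonal; the orthogonal polynomial sequence is then recovered as $P(x)=S\chi(x)$, where $\chi(x):=(1,x,x^2,\dots)^{\top}$, and the entries of $H$ are the squared norms of the orthogonal polynomials.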
For the future, we will study the generalized Hahn of type II polynomials, and extend these techniques to multiple discrete orthogonal polynomials \cite{Arvesu_Coussment_Coussment_VanAssche} and their relations with the transformations presented in \cite{bfm} and quadrilateral lattices \cite{quadrilateral1,quadrilateral2}. \section*{Acknowledgments} This work has its seed in several inspiring conversations with Diego Dominici during a research stay at Johannes Kepler University at Linz.
Auckland - Ooh La La Land: the top L.A. picks from Booking.com

La La Land hit New Zealand cinemas last month, showcasing powerhouse performances from Ryan Gosling and Emma Stone. Yesterday the movie broke records at the Golden Globes by winning all seven of its nominated categories. This triumph, along with eight Critics' Choice Awards, has rumours flying about Oscar glory. Why not take yourself into the heart of the film itself with a trip to its namesake, Los Angeles. Booking.com, the world leader in connecting travellers with awesome places to stay, has curated a selection of Hollywood hot spots to transport you into the world of La La Land. Stroll down Sunset Boulevard, hike to the Hollywood sign, stand with the stars and sample the joy of jazz music - truly immerse yourself in all things La La Land.

The Redbury @ Hollywood and Vine, Hollywood Walk of Fame
http://www.booking.com/hotel/us/the-redburry-hollywood-and-vine.en-gb.html
Designed with Hollywood decadence in mind, channel your inner movie star with a stay at The Redbury @ Hollywood and Vine. Think boutique suites, cocktails and good music. You'll feel a million dollars in your luxury room complete with balcony, flat-screen TV and breakfast-in-bed service. Just a short stroll away is the Hollywood Walk of Fame, where you can grace the same streets as your favourite celebrities.

Hotel Amarano Burbank, Warner Bros Studios
http://www.booking.com/hotel/us/graciela.en-gb.html
Warner Bros Studios is one of the most famous studios in Los Angeles. To feel like a true movie star, select the deluxe tour option and see what it takes to make a film come to life. You'll be walking in the footsteps of Ryan Gosling and Emma Stone themselves, as scenes from La La Land were filmed in the studios. Only a fifteen-minute walk away is the Hotel Amarano Burbank. 
Sample a variety of cocktails or take a dip in the hot tub before relaxing in your luxurious bedroom after a day touring the studios.

Hollywood Sky Loft, Catalina Bar & Grill
http://www.booking.com/hotel/us/hollywood-sky-loft.en-gb.html
Ryan Gosling plays La La Land's Sebastian, a struggling jazz musician. To experience Los Angeles at its best, make sure you take a trip to one of its many incredible jazz bars. The Catalina Bar & Grill is an intimate venue that draws jazz lovers from near and far. After letting the music go to your head, stroll back to the Hollywood Sky Loft and relax in your private loft. This has all the comforts of home, plus a few extras like a hot tub, outdoor pool, fitness centre and balcony.

Mama Shelter, Griffith Park
http://www.booking.com/hotel/us/mama-shelter-la.en-gb.html
From Mama Shelter, it's just a short cruise down Sunset Boulevard to Griffith Park, one of the country's largest urban parks. La La Land's main characters, Sebastian and Mia, perform what has been described as 'their pivotal dance number' here, so if you're a fan of the film this is the place to go. Mama Shelter offers quirky yet cosy accommodation in the heart of Hollywood and features a rooftop terrace with breathtaking views of the Hollywood Hills.

LEVEL Furnished Living Suites, Grand Central Market
http://www.booking.com/hotel/us/level-furnished-living.en-gb.html
The popularity of street food and market stalls has taken the nation by storm, and it is no different in Los Angeles. The Grand Central Market has been around for nearly 100 years and continues to serve the very best street food you can find in L.A. Even fictional characters have taken their turn here, with Mia and Sebastian enjoying a street-side nibble in the movie. 
A short walk away is the LEVEL Furnished Living Suites, offering self-catered accommodation with access to a fitness suite, basketball court and swimming pool so you can stay as fit as the stars themselves.

- End -

About Booking.com: Booking.com is the world leader in booking hotel and other accommodations online. It guarantees the best prices for any type of property, from small independents to five-star luxury. Guests can access the Booking.com website anytime, anywhere from their desktops, mobile phones and tablet devices, and they don't pay booking fees – ever. The Booking.com website is available in 43 languages, offers over 1M hotels and accommodations including more than 540,000 vacation rental properties and covers over 96,000 destinations in 227 countries and territories worldwide. It features over 108M reviews written by guests after their stay and attracts online visitors from both leisure and business markets around the globe. With over 20 years of experience and a team of over 13,000 dedicated employees in over 184 offices worldwide, Booking.com operates its own in-house customer service team, which is available 24/7 to assist guests in their native languages and ensure an exceptional customer experience. Established in 1996, Booking.com B.V. owns and operates Booking.com™, and is part of The Priceline Group (NASDAQ: PCLN). Follow us on Twitter, Google+ and Pinterest, like us on Facebook, or learn more at http://www.booking.com.

Jaime De Silva
PR Manager, APAC, Booking.com
jaime.desilva@booking.com

For more information on permissions to use photos and imagery, please contact bookingnz@thisismango.co.nz before using on your own platform. 
"""Minifier for Python code

The module exposes a single function : minify(src), where src is a string
with the original Python code.

The function returns a string with a minified version of the original :

- indentation is reduced to the minimum (1 space for each level)
- comments are removed, except on the first 2 lines
- lines starting with a string are removed (considered as doc strings),
  except if the next line doesn't start with the same indent, like in

# --------------------------------
def f():
    'function with docstring only'
    print('ok')
# --------------------------------
"""
import os
import token
import tokenize
import re
import io
from keyword import kwlist


def minify(src, preserve_lines=False):
    # tokenize expects method readline of file in binary mode
    file_obj = io.BytesIO(src.encode('utf-8'))
    token_generator = tokenize.tokenize(file_obj.readline)

    out = ''  # minified source
    line = 0
    last_type = None
    indent = 0  # current indentation level
    brackets = []  # stack for brackets

    # first token is script encoding
    encoding = next(token_generator).string
    file_obj = io.BytesIO(src.encode(encoding))
    token_generator = tokenize.tokenize(file_obj.readline)

    for item in token_generator:
        # update brackets stack if necessary
        if token.tok_name[item.type] == 'OP':
            if item.string in '([{':
                brackets.append(item.string)
            elif item.string in '}])':
                brackets.pop()
        sline = item.start[0]  # start line
        if sline == 0:  # encoding
            continue
        # update indentation level
        if item.type == tokenize.INDENT:
            indent += 1
        elif item.type == tokenize.DEDENT:
            indent -= 1
            continue
        if sline > line:  # first token in a line
            if not brackets and item.type == tokenize.STRING:
                if last_type in [tokenize.NEWLINE, tokenize.INDENT, None]:
                    # If not inside a bracket, replace a string starting a
                    # line by the empty string.
                    # It will be removed if the next line has the same
                    # indentation.
                    out += ' '*indent + "''"
                    if preserve_lines:
                        out += '\n'*item.string.count('\n')
                    continue
            out += ' '*indent  # start with current indentation
            if item.type not in [tokenize.INDENT, tokenize.COMMENT]:
                out += item.string
            elif item.type == tokenize.COMMENT and \
                    line <= 2 and item.line.startswith('#!'):
                # Ignore comments starting a line, except in one of the first
                # 2 lines, for interpreter path and/or encoding declaration
                out += item.string
        else:
            if item.type == tokenize.COMMENT:
                # ignore comments in a line
                continue
            if not brackets and item.type == tokenize.STRING and \
                    last_type in [tokenize.NEWLINE, tokenize.INDENT]:
                # If not inside a bracket, ignore string after newline or
                # indent
                out += "''"
                if preserve_lines:
                    out += '\n'*item.string.count('\n')
                continue
            if item.type in [tokenize.NAME, tokenize.NUMBER, tokenize.OP] and \
                    last_type in [tokenize.NAME, tokenize.NUMBER]:
                # insert a space when needed
                if item.type != tokenize.OP \
                        or item.string not in ',()[].=:{}+&' \
                        or (last_type == tokenize.NAME
                            and last_item.string in kwlist):
                    out += ' '
            elif item.type == tokenize.STRING and \
                    item.string[0] in 'rbu' and \
                    last_type in [tokenize.NAME, tokenize.NUMBER]:
                # for cases like "return b'x'"
                out += ' '
            elif item.type == tokenize.NAME \
                    and last_item.type == tokenize.OP \
                    and last_item.string == '.':
                # special case : from . import X
                out += ' '
            out += item.string
        line = item.end[0]
        last_item = item
        if item.type == tokenize.NL and last_type == tokenize.COMMENT:
            # NL after COMMENT is interpreted as NEWLINE
            last_type = tokenize.NEWLINE
        else:
            last_type = item.type

    # replace lines with only whitespace by empty lines
    out = re.sub(r'^\s+$', '', out, flags=re.M)

    if not preserve_lines:
        # remove empty line at the start of the script (doc string)
        out = re.sub(r"^''\n", '', out)
        # remove consecutive empty lines
        out = re.sub(r'\n( *\n)+', '\n', out)

        # remove lines with an empty string followed by a line that starts
        # with the same indent
        def repl(mo):
            if mo.groups()[0] == mo.groups()[1]:
                return '\n' + mo.groups()[1]
            return mo.string[mo.start(): mo.end()]

        out = re.sub(r"\n( *)''\n( *)", repl, out)

    return out
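The core pattern above — walking the token stream produced by `tokenize` and re-emitting the source while dropping unwanted tokens — can be shown in a much smaller, self-contained sketch. The helper `strip_comments` below is for illustration only and is not part of this module:

```python
import io
import tokenize


def strip_comments(src):
    # Re-emit the source from its token stream, skipping COMMENT tokens.
    # untokenize() pads with spaces to honour the remaining token positions.
    tokens = tokenize.generate_tokens(io.StringIO(src).readline)
    kept = [tok for tok in tokens if tok.type != tokenize.COMMENT]
    return tokenize.untokenize(kept)


if __name__ == '__main__':
    print(strip_comments("x = 1  # set x\ny = x + 1\n"))
```

Unlike minify, this sketch does not touch indentation, docstrings or inter-token whitespace; it only shows the token-filtering idea the minifier is built on.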
\section{Introduction} A basic question in the calculus of variations and real analysis is the following: Consider a linear differential operator $\A \colon C^{\infty}(\R^N,\R^d) \to C^{\infty}(\R^N,\R^l)$ of first order with constant coefficients, and a bounded sequence of functions $u_n \in L^1(\R^N,\R^d)$ which satisfy $\A u_n =0$ in the sense of distributions and are close to a bounded set in $L^{\infty}$, i.e. \begin{equation} \label{eq:almost_linfty} \lim_{n \to \infty} \int_{\{x \in \R^N \colon \V u_n(x) \V \geq L\}} \V u_n \V \, \dx=0 \end{equation} for some $L>0$. Does there exist a sequence of functions $v_n$ such that $\A v_n =0$, $\norm v_n \norm_{L^{\infty}} \leq CL $ and $(u_n - v_n) \to 0$ in measure (in $L^1$)? This question was answered first by \textsc{Zhang} in \cite{Zhang} for sequences of gradients ($u_n = \nabla w_n$), i.e. for the operator $\A= \curl$, which assigns to a function $u \colon \R^N \to \R^N$ the skew-symmetric $(N \times N)$-matrix with entries $\partial_i u_j - \partial_j u_i$. \textsc{Zhang's} proof, which builds on the works of \textsc{Liu} \cite{Liu} and \textsc{Acerbi-Fusco} \cite{Acerbi}, proceeds as follows. Denote by $M f$ the Hardy-Littlewood maximal function of $f \in L^1_{\loc}(\R^N,\R^d)$ and let $u_n = \nabla w_n$. The estimate \eqref{eq:almost_linfty} implies that the sets $X^n= \{ M( \nabla w_n)\geq L'\}$ have small measure for large $n$. One then uses (c.f. \cite{Acerbi}) that \begin{equation} \label{eq:AFestimate} \V w_n(x) - w_n(y) \V \leq C L' \V x -y \V,\quad x,y \in \R^N \back X^n, \end{equation} i.e. $w_n$ is Lipschitz continuous on $\R^N \back X^n$. The fact that Lipschitz continuous functions on closed subsets of $\R^N$ can be extended to Lipschitz continuous functions on $\R^N$ with the same Lipschitz constant \cite{KB} yields the result. 
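Let us point out why \eqref{eq:almost_linfty} makes the sets $X^n$ small: by the classical weak $(1,1)$ estimate for the Hardy-Littlewood maximal operator, \begin{displaymath} \V \{x \in \R^N \colon Mf(x) > t\} \V \leq \frac{C(N)}{t} \int_{\R^N} \V f \V \, \dx, \qquad f \in L^1(\R^N),\ t>0, \end{displaymath} and since $M\big(\nabla w_n \chi_{\{\V \nabla w_n \V < L\}}\big) \leq L$ pointwise, we obtain for $L'>L$ \begin{displaymath} \V X^n \V \leq \frac{C(N)}{L'-L} \int_{\{\V \nabla w_n \V \geq L\}} \V \nabla w_n \V \, \dx \longrightarrow 0 \quad \text{as } n \to \infty. \end{displaymath}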
In this paper, we show that the answer to the previously formulated question is also positive for sequences of differential forms and $\A=d$, the operator of exterior differentiation. Let us denote by $\Lambda^r$ the r-fold wedge product of the dual space $(\R^N)^{\ast}$ of $\R^N$ and by $d \colon C^{\infty}(\R^N,\Lambda^r) \to C^{\infty}(\R^N,\Lambda^{r+1})$ the exterior derivative w.r.t. the standard Euclidean geometry on $\R^N$. \begin{thm}[$L^{\infty}$-truncation of differential forms] \label{Thm:intro} Suppose that we have a sequence $u_n \in L^1(\R^N,\Lambda^r)$ with $d u_n=0$ (in the sense of distributions), and that there exists an $L>0$ such that \begin{displaymath} \int_{\{y \in \R^N \colon \V u_n(y) \V >L \}} \V u_n(y) \V \dy \longrightarrow 0 \quad \text{as } n \to \infty. \end{displaymath} There exists a constant $C_1=C_1(N,r)$ and a sequence $v_n \in L^{\infty}(\R^N,\Lambda^r)$ with $dv_n=0$ and \begin{enumerate} [i)] \item $\norm v_n \norm_{L^{\infty}(\R^N,\Lambda^r)} \leq C_1 L$; \item $\norm v_n - u_n \norm_{L^1(\R^N,\Lambda^r)} \to 0$ as $n \to \infty$; \item $\V \{ y \in \R^N \colon v_n(y) \neq u_n(y) \} \V \to 0$. \end{enumerate} \end{thm} An analogous version of Theorem \ref{Thm:intro} holds if $\R^N$ is replaced by the $N$-torus $T_N$ (c.f. Theorem \ref{main}) or by an open Lipschitz set $\Omega$ and functions $u$ with zero boundary data (c.f. Proposition \ref{main4}). Moreover, the result immediately extends to $\R^m$-valued forms by taking truncations coordinatewise (c.f. Proposition \ref{main5}). In particular, the result of Theorem \ref{Thm:intro} includes a positive answer to the question previously raised for the differential operator $\A =\divergence$ after suitable identifications of $\Lambda^{N-1}$ and $\Lambda^N$ with $\R^N$ and $\R$, respectively. One key ingredient in the proofs is a version of the Acerbi-Fusco estimate \eqref{eq:AFestimate} for simplices rather than pairs of points. 
For the estimate, let us consider $\omega \in C_c^1(\R^N,\Lambda^r)$ with $d\omega=0$ and let $D$ be a simplex with vertices $x_1,...,x_{r+1}$ and a normal vector $\nu^r \in \R^N \wedge... \wedge \R^N$ (c.f. Section \ref{SecStokes} for the precise definition). Assume that $M\omega (x_i) \leq L$ for $i=1,...,r+1$. Then \begin{equation} \label{Lipschitzversion} \left \V \int_D \omega ( \nu^r) \right \V \leq C(N) L \sup_{1 \leq i,j \leq r+1} \V x_i -x_j \V ^r. \end{equation} The second ingredient is a geometric version of the Whitney extension theorem, which may be of independent interest. Combining \eqref{Lipschitzversion} and the extension theorem, one easily obtains the assertion for smooth closed forms. The general case follows by a standard approximation argument. Truncation results like the result by \textsc{Zhang} or Theorem \ref{Thm:intro} have immediate applications in the calculus of variations. In particular, they provide characterisations of the $\A$-quasiconvex hulls of sets (c.f. Section \ref{Aqchulls}) and the set of Young-measures generated by sequences satisfying $\A u_n=0$. The classical result for sequences of gradients (i.e. sequences of functions $u_n$ satisfying $\curl u_n=0$) goes back to \textsc{Kinderlehrer} and \textsc{Pedregal} \cite{Kinderlehrer,Pedregal}. Here, we show the natural counterpart of their characterisation result, whenever the operator $\A$ admits the following $L^{\infty}$-truncation result: We say that $\A$ satisfies the property (ZL) if for all sequences $u_n \in L^1(T_N,\R^d) \cap \ker \A$, such that there exists an $L>0$ with \begin{displaymath} \int_{\{y \in T_N \colon \V u_n(y) \V >L \}} \V u_n(y) \V \dy \longrightarrow 0 \quad \text{as } n \to \infty, \end{displaymath} there exists a $C=C(\A)$ and a sequence $v_n \in L^1(T_N,\R^d) \cap \ker \A$ such that \begin{enumerate} [i)] \item $\norm v_n \norm_{L^{\infty}(T_N,\R^d)} \leq C L$; \item $\norm v_n - u_n \norm_{L^1(T_N,\R^d)} \to 0$ as $n \to \infty$. 
\end{enumerate} By \textsc{Zhang} \cite{Zhang}, the property (ZL) holds for $\A=\curl$ and a version of Theorem \ref{Thm:intro} shows this for $\A=d$ (Corollary \ref{main2}). Further examples are briefly discussed in Example \ref{ex:op}. For the characterisation of Young measures, recall that $\spt \nu$ denotes the support of a (signed) Radon measure $\nu \in \mathcal{M}(\R^d)$, and for $f \in C(\R^d)$ \begin{displaymath} \li \nu, f \re :=\int_{\R^d} f \,\textup{d}\nu. \end{displaymath} If the property (ZL) holds for some differential operator $\A$, then one is able to prove the following statement. \begin{thm}[Classification of $\A$-$\infty$-Young measures] Let $\A$ satisfy (ZL). A weak$*$ measurable map $\nu: T_N \to \M(\R^d)$ is an $\A$-$\infty$-Young measure if and only if $\nu_x \geq 0$ a.e. and there exists $K \subset \R^d$ compact and $u \in L^{\infty}(T_N,\R^d) \cap \ker \A$ with \begin{enumerate} [i)] \item $\spt \nu_x \subset K$ for a.e. $x \in T_N$; \item $ \li \nu_x, id \re = u$ for a.e. $x \in T_N$; \item $\li \nu_x, f \re \geq f(\li \nu_x, id \re)$ for a.e. $x \in T_N$ and all continuous and $\A$-quasiconvex $f:\R^d \to \R$. \end{enumerate} \end{thm} We close the introduction with a brief outline of the paper. In Section \ref{secaux}, we introduce some notation, recall some basic facts from multilinear algebra and the theory of differential forms and prove estimate \eqref{Lipschitzversion}. Section \ref{secWhitney} is devoted to the proof of the geometric Whitney extension theorem. In Section \ref{Secmain}, the proof of the truncation result (and its local and periodic variant) is given. Section \ref{secappl} discusses the applications to $\A$-quasiconvex hulls and $\A$-Young measures. Many of the arguments here follow the arguments in \cite{Kinderlehrer}. The necessary adaptations in our setting are discussed in an Appendix. 
\section{Preliminary results} \label{secaux} \subsection{Notation} We consider an open and bounded Lipschitz set $\Omega \subset \R^N$ and denote by $T_N$ the $N$-dimensional torus, which arises from identifying faces of $[0,1]^N$. We may identify functions $f \colon T_N \to \R^d$ with $\Z^N$-periodic functions $\tilde{f} \colon \R^N \to \R^d$, and vice versa. We write $B_\rho(x)$ to denote the ball with radius $\rho$ and centre $x$. Denote by $\mathcal{L}^N$ the Lebesgue measure and, for a set $X \subset \R^N$, \begin{displaymath} \V X \V := \mathcal{L}^N(X). \end{displaymath} For a measure $\mu$ on $\R^N$ and a $\mu$-measurable set $A \subset \R^N$ with $0<\mu(A)<\infty$ define the average integral of a $\mu$-measurable function $f$ via \begin{displaymath} \fint_A f \dmu = \frac{1}{\mu(A)} \int_A f \dmu. \end{displaymath} For $k\in \N$ write $[k] = \{1,...,k\}$. For a normed vector space $V$ we denote by $V^{\ast}$ the dual space of $V$. Define the space $\Lambda^r$ as the $r$-fold wedge product of $(\R^N)^{\ast}$, i.e. \begin{displaymath} \Lambda^r = \underbrace{(\R^N) ^{\ast} \wedge ... \wedge (\R^N)^{\ast}}_ {\substack{r\text{ copies}}} \end{displaymath} and similarly the space $\Lambda_r$ as the $r$-fold wedge product of $\R^N$. Then $\Lambda^r$ and $\Lambda_r$ are finite-dimensional vector spaces. For $\R^N$ denote by $\{e_i \}_{i \in [N] }$ the standard basis and by $\cdot$ the standard scalar product. For $(\R^N)^{\ast}$ denote by $\theta_1,...,\theta_N$ the corresponding dual basis, i.e. $\theta_i$ is the map $y \mapsto y \cdot e_i$. For $k \in I_r := \{ l \in [N]^r \colon l_1 < l_2 <...<l_r\}$ the vectors \begin{equation} \label{dxkr} e^{k,r} = e_{k_1} \wedge e_{k_2} \wedge... \wedge e_{k_r} \end{equation} form a basis of $\Lambda_r$. Denote by $\cdot^r$ the scalar product with respect to this basis, i.e. for $k,l \in I_r$ \begin{displaymath} e^{k,r} \cdot^r e^{l,r} = \left \{ \begin{array}{ll} 1 & k=l, \\ 0 & k\neq l.\end{array} \right. 
\end{displaymath} This also provides us with a suitable norm on $\Lambda_r$, which we denote by $\norm \cdot \norm_{\Lambda_r}$. Similarly, using the standard basis of $(\R^N)^{\ast}$, we define a basis $\theta^{k,r}$ and a norm $\norm \cdot \norm_{\Lambda^r}$. Also note that for $0 \leq s \leq r$ there exists (up to sign) a natural map $\Lambda^r \times \Lambda_s \mapsto \Lambda^{r-s}$, as $\Lambda^s$ is the dual space of $\Lambda_s$ and $\Lambda^r= \Lambda^s \wedge \Lambda^{r-s}$. In particular, in the special case $s=1$ for $h_1,...,h_r \in (\R^N)^{\ast}$ and $y \in \R^N$ \begin{equation} \label{s1} (h_1 \wedge ... \wedge h_r)(y) = \sum_{i=1}^r (-1)^{i-1} h_i(y) h_1 \wedge ... \wedge h_{i-1} \wedge h_{i+1} \wedge ... \wedge h_r. \end{equation} In the case $s=r$ and for $h_1,...,h_r \in (\R^{N})^{\ast}$ and $y_1,...,y_r \in \R^N$ \begin{equation}\label{sr} (h_1 \wedge ... \wedge h_r)(y_1 \wedge ... \wedge y_r) = \sum_{ \sigma \in S_r} \left( \sgn(\sigma) \prod_{i=1}^r h_i(y_{\sigma(i)})\right), \end{equation} where $S_r$ denotes the group of permutations of $\{1,...,r\}$. \eqref{sr} also gives us a representation of the map $\Lambda^r \times \Lambda_s \mapsto \Lambda^{r-s}$, as for $h \in \Lambda^r$, $x \in \Lambda_s$ we may consider the element of $\Lambda^{r-s}=(\Lambda_{r-s})^{\ast}$ defined by \begin{displaymath} z \longmapsto h ( x \wedge z), \quad z \in \Lambda_{r-s}. \end{displaymath} Let us briefly remark that this notation is slightly different from the usual notation for interior products. Moreover, note that the space $\Lambda^N$ is isomorphic to $\R$ via the map $I^N$ defined by \begin{displaymath} a ~ \theta_1 \wedge ... \wedge \theta_N \longmapsto a \in \R. \end{displaymath} \subsection{Differential forms} In the following, we will define all objects for an open set $\Omega \subset \R^N$, but these definitions are also valid for $\R^N$ and $T_N$ respectively. We call a map $f \in L^1_{\loc}(\Omega,\Lambda^r)$ an \textbf{$r$-differential form} on $\Omega$. 
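To fix ideas, in the simplest nontrivial case $r=2$ formula \eqref{s1} reads, for $h_1, h_2 \in (\R^N)^{\ast}$ and $y \in \R^N$, \begin{displaymath} (h_1 \wedge h_2)(y) = h_1(y)\, h_2 - h_2(y)\, h_1, \end{displaymath} while \eqref{sr} becomes, for $y_1, y_2 \in \R^N$, \begin{displaymath} (h_1 \wedge h_2)(y_1 \wedge y_2) = h_1(y_1)h_2(y_2) - h_1(y_2)h_2(y_1). \end{displaymath}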
We define the space \begin{displaymath} \Gamma = \bigcup_{r \in \N} C^{\infty}(\Omega,\Lambda^r). \end{displaymath} It is well-known (cf. \cite{Cartan,diffgeo}) that there exists a linear map $d\colon \Gamma \to \Gamma$, called the \textbf{exterior derivative}, with the following properties: \begin{enumerate}[i)] \item $d^2 = d \circ d = 0$, \item $d$ maps $C^{\infty}(\Omega,\Lambda^r)$ into $C^{\infty}(\Omega,\Lambda^{r+1})$, \item We have the \textbf{Leibniz rule}: If $\alpha \in C^{\infty}(\Omega,\Lambda^r)$ and $\beta \in C^{\infty}(\Omega,\Lambda^s)$, then \begin{equation} \label{Leibniz} d (\alpha \wedge \beta) = d \alpha \wedge \beta + (-1)^r \alpha \wedge d \beta, \end{equation} \item $d: C^{\infty}(\Omega,\Lambda^0) \to C^{\infty}(\Omega,\Lambda^1)$ is the gradient via the identification $\Lambda^0 =\R$, $\Lambda^1 = (\R^N)^{\ast} \cong \R^N$. \end{enumerate} This map $d$ has the following representation in terms of the standard coordinates (cf. \cite{diffgeo}). Let $\omega \in C^{\infty}(\Omega,\Lambda^r)$, which, for some $a_k \in C^{\infty}(\Omega,\R)$, can be written as \begin{displaymath} \omega(y) = \sum_{k \in I_r} a_k(y) \theta^{k,r}. \end{displaymath} Then \begin{equation} \label{coordinate} d \omega(y) = \sum_{k \in I_r} \sum_{l \in [N]} \partial_l a_k(y) \theta_l \wedge \theta^{k,r}. \end{equation} \begin{rem} For a fixed $r \in \{0,...,N-1\}$ we can identify $d \colon C^{\infty}(\Omega, \Lambda^r) \to C^{\infty}(\Omega, \Lambda^{r+1})$ with a differential operator $\A$. By definition, for $r=0$, $d$ can be identified with the gradient. For $r=1$, after a suitable identification of $\Lambda^2$ with $\R^{N \times N}_{skew}$, $d= \curl$, which is the differential operator mapping $u \in C^{\infty}(\Omega,\R^N)$ to $\curl u \in C^{\infty}(\Omega,\R^{N \times N}_{skew})$ defined by \begin{displaymath} (\curl u )_{lk} = \partial_l u_k - \partial_k u_l.
\end{displaymath} If $r=N-1$, after identifying $\Lambda^{N-1}$ with $\R^N$ and $\Lambda^N$ with $\R$, the differential operator $d$ becomes the divergence of a vector field, which is defined for $u \in C^{\infty}(\Omega,\R^N)$ by \begin{displaymath} \divergence u = \sum_{k=1}^N \partial_k u_k. \end{displaymath} \end{rem} \begin{lemma} \label{prodrule} We have the following product rules for $d$: \begin{enumerate} [i)] \item Let $\omega \in C^1(\Omega,\Lambda^1)$, $z \in \R^N= \Lambda_1$. Then \begin{equation} \label{prod1} d (\omega(\cdot)(\cdot-z)) = \nabla \omega(\cdot) (\cdot-z) + \omega(\cdot), \end{equation} where we define $\nabla \omega(\cdot) (\cdot-z) \in C(\Omega,\Lambda^1)$ as follows: \\If $\omega = \sum_{i=1}^N \omega_i \theta_i$ and $(y-z) = \sum_{i=1}^N (y-z)_i e_i$, then \begin{displaymath} \nabla \omega(y)(y-z) := \sum_{l=1}^N \sum_{i=1}^N \partial_l \omega_i(y) (y-z)_i \theta_l. \end{displaymath} \item There is a bounded linear map $D^{1,r} \in \Lin((\Lambda^r \times \R^N)\times \R^N, \Lambda^r)$ such that for $\omega \in C^1(\Omega,\Lambda^r)$, $z \in \R^N$ we have \begin{equation} d (\omega(\cdot)(\cdot-z)) = D^{1,r}( \nabla \omega,(\cdot-z)) + \omega(\cdot). \end{equation} \item There is a bounded linear map $D^{s,r} \in \Lin((\Lambda^r \times\R^N) \times \Lambda_s, \Lambda^{r-s+1})$ such that for $\omega \in C^1(\Omega, \Lambda^r)$, $z \in \R^N$, $z_2 \in \Lambda_{s-1}$ \begin{equation} \label{productrule} d ( \omega(\cdot) ((\cdot-z)\wedge z_2)) = D^{s,r} (\nabla \omega, (\cdot-z)\wedge z_2) + (-1)^{s-1}\omega(\cdot) (z_2).
\end{equation} \end{enumerate} \end{lemma} \begin{proof} i) follows from a direct calculation: if, as above, \begin{displaymath} \omega(y) = \sum_{i=1}^N \omega_i(y) \theta_i \quad \text{ and }(y-z) = \sum_{i=1}^N (y-z)_i e_i, \end{displaymath} then \begin{align*} d(\omega(y)(y-z))& = \sum_{l=1}^N \partial_l(\omega(y)(y-z)) \theta_l \\ &= \sum_{i,l=1}^N \partial_l \omega_i(y) (y-z)_i \theta_l+ \sum_{l=1}^N \omega_l(y) \theta_l, \end{align*} which is what we claimed. ii) then follows from i) using \eqref{s1}; likewise, iii) follows from ii). \end{proof} \begin{defi} For $\omega \in L^1_{\loc}(\Omega,\Lambda^r)$ and $u \in L^1_{\loc}(\Omega, \Lambda^{r+1})$ we say that $d \omega = u$ in the sense of distributions if for all $\phi \in C_c^{\infty}(\Omega, \Lambda^{N-r-1})$ we have \begin{displaymath} \int_{\Omega} d \phi \wedge \omega = (-1)^{N-r} \int_{\Omega } \phi \wedge u . \end{displaymath} \end{defi} Note that this definition is equivalent to the following formula: For all $\phi \in C_c^{\infty}(\Omega,\Lambda^{N-r-1})$ \begin{displaymath} (-1)^{r+1} \int_{\Omega} \omega \wedge d\phi= - \int_{\Omega} u \wedge \phi. \end{displaymath} \subsection{Stokes' theorem on simplices} \label{SecStokes} We want to establish a suitable notion of Stokes' theorem for differential forms on simplices. Let $1 \leq r \leq N$ and $x_1,...,x_{r+1} \in \R^N$. Define the simplex $\Sim(x_1,...,x_{r+1})$ as the convex hull of $x_1,...,x_{r+1}$. We call this simplex degenerate if its dimension is strictly less than $r$. For $i \in \{1,...,r+1\}$ consider $\Sim(x_1,...,x_{i-1},x_{i+1},...,x_{r+1}) =: \Sim^i(x_1,...,x_{r+1})$. This is an $(r-1)$-dimensional face of $\Sim(x_1,...,x_{r+1})$ and a subset of the boundary of the manifold $\Sim(x_1,...,x_{r+1})$, which, for simplicity, will be denoted by $\partial \Sim(x_1,...,x_{r+1})$.
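For instance, for $r=2$ the simplex $\Sim(x_1,x_2,x_3)$ is a (possibly degenerate) triangle with vertices $x_1,x_2,x_3$, and its $1$-dimensional faces are the segments \begin{displaymath} \Sim^1(x_1,x_2,x_3)= \Sim(x_2,x_3), \quad \Sim^2(x_1,x_2,x_3)= \Sim(x_1,x_3), \quad \Sim^3(x_1,x_2,x_3)= \Sim(x_1,x_2). \end{displaymath}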
Suppose first that we are given the simplex \begin{displaymath} \{\lambda \in [0,1]^r \ \colon \sum_{i=1}^r \lambda_i \leq 1\} \times \{0\}^{N-r} = \Sim(0,e_1,...,e_r) \subset \R^r \times \{0\}^{N-r} \subset \R^N. \end{displaymath} Then the classical version of Stokes' theorem on oriented manifolds states that for every differential form $\tilde{\omega} \in C^1(\R^r \times \{0\}^{N-r}, \R^r \wedge ... \wedge \R^r)$ we have \begin{equation} \label{Stokes1} \int_{\Sim(0,e_1,...,e_r)} d\tilde{\omega}(y) \dH^{r}(y) = \int_{\partial^{\ast} \Sim(0,e_1,...,e_r)} \tilde{\omega}(y) \wedge \nu(y) \dH^{r-1}(y). \end{equation} In \eqref{Stokes1}, $\nu(y)$ denotes the outer unit normal vector at $y\in \partial^{\ast} \Sim(0,e_1,...,e_r)$ and $\partial^{\ast}$ is the reduced boundary of the simplex, i.e. the set where this outer normal exists (the interior of all $(r-1)$-dimensional faces). In our case, we are given a differential form with the underlying space being $\R^N$ and not $\R^r$ (the tangent space of the manifold/simplex), hence we can modify \eqref{Stokes1} to get for $\omega \in C^1(\R^N,\Lambda^{r-1})$ \begin{align} \label{Stokes15} \int_{\Sim(0,e_1,...,e_r)}& d\omega(y) (e_1 \wedge ... \wedge e_r) \dH^r(y) \nonumber \\ &= \sum_{i=1}^r (-1)^i \int_{\Sim(0,e_1,...,e_{i-1},e_{i+1},...,e_r)} \omega(y) (e_1 \wedge ... \wedge e_{i-1} \wedge e_{i+1} \wedge ...\wedge e_r) \dH^{r-1}(y) \\ &+ \int_{\Sim(e_1,...,e_r)} 2^{-r/2}\omega(y)((e_2-e_1) \wedge (e_3-e_2) \wedge ... \wedge (e_{r}-e_{r-1})) \dH^{r-1}(y). \nonumber \end{align} For brevity, for $x_1,...,x_{r+1} \in \R^N$ we write \begin{displaymath} \nu^r(x_1,...,x_{r+1}) = ((x_2-x_1) \wedge (x_3 -x_2) \wedge ... \wedge (x_{r+1}-x_r)) \in \Lambda_r. \end{displaymath} The map $\nu^r$ has the following properties: \begin{enumerate} [i)] \item $\nu^r$ is alternating, i.e. for a permutation $\sigma \in S_{r+1}$: \begin{displaymath} \nu^r(y_1,...,y_{r+1}) = \sgn(\sigma) \nu^r(y_{\sigma(1)},...,y_{\sigma(r+1)}).
\end{displaymath} \item We have the relation \begin{displaymath} \norm \nu^r(y_1,...,y_{r+1}) \norm_{\Lambda_r} = r! \, \Haus ^r(\Sim(y_1,...,y_{r+1})) . \end{displaymath} \end{enumerate} A linear change of coordinates from $\Sim(0,e_1,...,e_r)$ to $\Sim(x_1,...,x_{r+1})$ leads from \eqref{Stokes15} to the following: For $\omega \in C^{\infty}(\R^N,\Lambda^{r-1})$ and $x_1,...,x_{r+1} \in \R^N$ \begin{align} \label{Stokes2} &\frac{1}{r} \fint_{\Sim(x_1,...,x_{r+1})} d \omega(y) (\nu^r(x_1,...,x_{r+1})) \dH ^r(y) \\ \nonumber ~&= \sum_{i=1}^{r+1} \frac{(-1)^i}{r-1} \fint_{\Sim^i(x_1,...,x_{r+1})} \omega(y)( \nu^{r-1} (x_1,...,x_{i-1},x_{i+1},...,x_{r+1})) \dH ^{r-1}(y). \end{align} \subsection{The maximal function} The Hardy-Littlewood maximal function for $u \in L^1_{\loc}(\R^N,\R^d)$ is defined by \begin{displaymath} Mu(x) = \sup_{R>0} \fint_{B_R(x)} \V u(y) \V \dy. \end{displaymath} Again, we can define the maximal function for functions on the torus using the identification with periodic functions. \begin{prop} [Properties of the maximal function] (cf. \cite{Stein}) \label{maximalfct} $M$ is sublinear, i.e. $ M(u+v)(y) \leq Mu(y) + Mv(y)$ for all $u,v \in L^1_{\loc}(\R^N,\R^d)$ and $y\in \R^N$. Moreover, $M: L^p(\R^N,\R^d) \to L^p(\R^N,\R)$ is bounded for $1 < p \leq \infty$ and bounded from $L^1$ to $L^{1,\infty}$. In particular, this means that for $1 \leq p < \infty$ \begin{displaymath} \left \V \{ Mu > \lambda \} \right \V \leq C_p \lambda ^{-p} \norm u \norm_{L^p(\R^N,\R^d)}^p. \end{displaymath} \end{prop} If $u \in L^p_{\loc}(\R^N,\R^d)$ is a $\Z^N$-periodic function, i.e. $u \in L^p(T_N,\R^d)$, then \begin{displaymath} \V \{M u >\lambda\} \cap [0,1]^N \V \leq C_p \lambda^{-p} \norm u \norm_{L^p([0,1]^N,\R^d)}^p. \end{displaymath} We now come to a key lemma for our main theorem.
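Let us note in passing that the weak-type bound at $p=1$ cannot be upgraded to boundedness on $L^1$: for $u = 1_{B_1(0)}$ and $\V x \V \geq 1$ we have $B_1(0) \subset B_{2 \V x \V}(x)$, and hence \begin{displaymath} Mu(x) \geq \fint_{B_{2 \V x \V}(x)} \V u(y) \V \dy = \frac{\V B_1(0) \V}{\V B_{2 \V x \V}(x) \V} = (2 \V x \V)^{-N}, \end{displaymath} so that $Mu \notin L^1(\R^N)$.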
\begin{lemma} \label{maximalf} There exists a constant $C = C(N,r)$ such that for all $\omega \in C^1(\R^N,\Lambda^{r})$ with $d \omega=0$, all $\lambda >0$ and all $x_1,...,x_{r+1} \in \{ M\omega \leq \lambda\}$ we have \begin{displaymath} \left \V \fint_{\Sim(x_1,...,x_{r+1})} \omega(y) (\nu^r(x_1,...,x_{r+1})) \dH^r(y) \right \V \leq C \lambda \max_{1\leq i,j \leq r+1} \V x_i -x_j \V ^r. \end{displaymath} \end{lemma} This lemma is, so to speak, our version of Lipschitz continuity. In particular, it has been proven (for example in \cite{Acerbi}) that for $u \in W^{1,1}_{\loc}(\R^N,\R^m)$ and for $y_1,y_2 \in\{M \nabla u \leq L \}$ \begin{displaymath} \left \V \int_0^1 \nabla u(ty_1+(1-t)y_2) \cdot (y_1 -y_2) \dt \right \V = \V u(y_1) -u(y_2) \V \leq C L \V y_1 -y_2 \V. \end{displaymath} Hence, one should view Lemma \ref{maximalf} as a generalisation of this result. \begin{proof} For simplicity write $\V \omega \V := \norm \omega \norm_{\Lambda^r}$. It suffices to show that there exists $z \in \R^N$ such that \begin{equation} \label{claim1} \sum_{i=1}^{r+1} \int_{\Sim(x_1,...,x_{i-1},z,x_{i+1},...)} \V \omega \V \dH^{r}(y) \leq C \lambda \max_{i,j \in [r+1]} \V x_i -x_j \V^r, \end{equation} using that \begin{align} \label{Stokesappl} \sum_{i=1}^{r+1} \fint_{\Sim(x_1,...,x_{i-1},z,x_{i+1},...)} \omega(y)( \nu^{r}(x_1,...,x_{i-1},z,x_{i+1},...)) \dH^{r}(y) \\ \nonumber= \fint_{\Sim(x_1,...,x_{r+1})} \omega(y) (\nu^{r}(x_1,...,x_{r+1})) \dH^{r}(y). \end{align} This identity \eqref{Stokesappl} can be verified by Stokes' theorem \eqref{Stokes2}: writing $\omega = d\beta$ (which is possible as $d\omega = 0$ and $\R^N$ is contractible) and applying \eqref{Stokes2} to every simplex appearing in \eqref{Stokesappl}, the boundary terms coming from faces with vertex $z$ cancel out on the left-hand side of \eqref{Stokesappl}.
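As a sanity check, consider the case $r=1$: a closed $1$-form on $\R^N$ can be written as $\omega = du$ for some $u \in C^2(\R^N)$, and \eqref{Stokesappl} then reduces, up to the normalisation conventions above, to the telescoping identity \begin{displaymath} (u(z) - u(x_1)) + (u(x_2) - u(z)) = u(x_2) - u(x_1), \end{displaymath} i.e. the line integrals of $\omega$ over the segments from $x_1$ to $z$ and from $z$ to $x_2$ add up to the line integral over the segment from $x_1$ to $x_2$.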
\begin{figure} \centering \begin{tikzpicture}[domain=0:6] \coordinate[label=left:$x_1$] (A) at (1,1); \coordinate[label=right:$x_2$] (B) at (4,4); \coordinate[label=left:$x_3$] (C) at (2,5.5); \coordinate[label=below:$z$] (D) at (3,2); \fill (A) circle (2pt); \fill (B) circle (2pt); \fill (C) circle (2pt); \fill (D) circle (2pt); \draw (A) -- (B); \draw (A) -- (C); \draw (B) -- (C); \draw[dashed] (A) -- (D); \draw[dashed] (B) -- (D); \draw[dashed] (C) -- (D); \end{tikzpicture} \caption{Illustration of \eqref{Stokesappl} for $r=2$. The integrals on the dashed $1$-dimensional faces cancel out in \eqref{Stokesappl} after applying Stokes' theorem.} \label{picture1} \end{figure} We now prove \eqref{claim1}. W.l.o.g. assume $R= \max_{i,j \in[r+1]} \V x_i-x_j \V = \V x_1-x_2 \V$. Note that there exists a dimensional constant $C_1$ such that \begin{displaymath} \V B_R(x_1) \cap B_R(x_2) \V \geq C_1 R^N. \end{displaymath} First, consider the vertices $x_1,...,x_r \in B_R(x_1)$. For $z \in B_R(x_1)$ define $E(z)$ to be the $r$-dimensional affine plane through $x_1,...,x_r$ and $z$. This is well-defined if $z$ is not contained in the $(r-1)$-dimensional affine plane $F$ through $x_1,...,x_r$. Note that for $z,\tilde{z} \notin F$ \begin{displaymath} z \in E(\tilde{z}) \Leftrightarrow \tilde{z} \in E(z). \end{displaymath} \noindent As $M \omega (x_1) \leq \lambda$, we know that \begin{displaymath} \int_{B_R(x_1)} \V \omega \V (z) \dz \leq \lambda b_N R^N, \end{displaymath} where $b_N$ is the volume of the $N$-dimensional unit ball $B_1(0)$. As $\Haus ^r(E(z) \cap B_R(x_1)) = b_r R^r$, it also follows that \begin{displaymath} \int_{B_R(x_1)} \int_ {E(z) \cap {B_R(x_1)}} \V \omega \V (y) \dH^{r}(y) \dz \leq \lambda b_N b_r R^{N+r}.
\end{displaymath} Using that $\Sim(x_1,...,x_r,z) \subset E(z) \cap B_R(x_1)$, we conclude that for $\mu >0$ \begin{equation} \label{measureest} \left \V \left \{z \in B_R(x_1) \colon \int_{\Sim(x_1,...,x_r,z)} \V \omega \V (y) \dH^{r}(y) \geq \mu \right \} \right \V \leq \frac{\lambda b_r b _N R^{N+r}} {\mu}. \end{equation} Choose now $\mu^{\ast}= 2(r+1) b_r b_N R^r\lambda C_1^{-1}$. Plugging this into \eqref{measureest}, we see that the measure of this set is smaller than $C_1 R^N(2 (r+1))^{-1}$. Repeating this procedure for all $(r-1)$-dimensional faces of $\Sim(x_1,...,x_{r+1})$, we get that for $i>1$ \begin{displaymath} \left \V \left \{z \in B_R(x_1) \colon \int_{\Sim(x_1,...,x_{i-1},z,x_{i+1},...)} \V \omega \V (y) \dH^{r}(y) \geq \mu^{\ast} \right \} \right \V \leq \frac{C_1 R^N}{2 (r+1)}, \end{displaymath} and for $i=1$ \begin{displaymath} \left \V \left \{z \in B_R(x_2) \colon \int_{\Sim(z,x_2,...,x_{r+1})} \V \omega \V (y) \dH^{r}(y) \geq \mu^{\ast} \right \} \right \V \leq \frac{C_1 R^N}{2 (r+1)}. \end{displaymath} As the union of these $r+1$ exceptional sets has measure at most $C_1 R^N/2 < \V B_R(x_1) \cap B_R(x_2) \V$, there exists $z \in B_R(x_1) \cap B_R(x_2)$ such that all the integrals in the sum of \eqref{claim1} are smaller than $\mu^{\ast} = 2(r+1) b _r b _N C_1^{-1} R^r\lambda$. This is what we wanted to prove. \end{proof} \section{A Whitney-type extension theorem} \label{secWhitney} First, let us recall the following Lipschitz extension theorem. \begin{thm}[Lipschitz extension theorem] Let $X \subset \R^N$ be a closed set and $u \in C(X,\R^d)$ such that for all $x,y \in X$ \begin{equation} \label{LipschitzonX} \V u(x) - u (y) \V \leq L \V x - y \V. \end{equation} Then there exists a function $v \in C(\R^N,\R^d)$ with $v \vert_X =u$ and such that $v$ is Lipschitz on $\R^N$ with Lipschitz constant at most $C(N)L$ (i.e. the Lipschitz constant does not depend on $X$). \end{thm} Of course, there are several ways to prove such a theorem, even with $C(N)=1$ \cite{KB}.
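For $d=1$, one such extension is given explicitly by the McShane formula \begin{displaymath} v(x) = \inf_{y \in X} \big( u(y) + L \V x - y \V \big), \quad x \in \R^N, \end{displaymath} which is $L$-Lipschitz on all of $\R^N$ and coincides with $u$ on $X$; the vector-valued case can then be handled componentwise, at the cost of a constant depending on $d$.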
However, \textsc{Whitney's} proof \cite{Whitney} exploits the geometry of $\R^N$ quite nicely. A similar geometric idea lies behind our proof for closed differential forms. First, let us define an analogue of \eqref{LipschitzonX}. Suppose that $X$ is a closed subset of $\R^N$ such that $X^C = \R^N \backslash X$ is bounded and $\V \partial X \V =0$. \\ Let $u \in C_c^{\infty}(\R^N,\Lambda^r)$ with $du =0$. Let $L>0$ be such that $\norm u \norm_{L^{\infty}(X)} \leq L$ and that for all $x_1,...,x_{r+1} \in X$ we have \begin{equation} \label{Lipschitzprop} \left \V \fint_{\Sim(x_1,...,x_{r+1})} u(y) (\nu^r(x_1,...,x_{r+1})) \dH^r(y) \right \V \leq L \max_{i,j} \V x_i -x_j \V ^r. \end{equation} \begin{lemma}[Whitney-type extension theorem] \label{extension} There exists a constant $C= C(N,r)$ such that for all $u \in C_c^{\infty}(\R^N,\Lambda^r)$ and $X$ meeting the requirements above there exists $v \in L^1_{\loc}(\R^N,\Lambda^r)$ with \begin{enumerate} [i)] \item $dv =0$ in the sense of distributions; \item $v (y) = u (y) $ for all $y \in X$; \item $ \norm v \norm_{L^{\infty}} \leq CL$. \end{enumerate} \end{lemma} \begin{rem} The constant $C$ does not depend on the choice of $u$ or $X$; it is only important that the pair $(u,X)$ satisfies \eqref{Lipschitzprop}. The assumption that $X^C$ is bounded can be dropped; the assumption $\V \partial X \V =0$ makes the proof much easier. \end{rem} \begin{rem} As one can see in the proof, the assumption $u\in C_c^{\infty}(\R^N,\Lambda^r)$ can be weakened to $u \in C_c^1(\R^N,\Lambda^r)$, as we only need the first derivative of $u$. However, it is important to remember that we cannot prove Lemma \ref{extension} under the even weaker assumption $u \in L^1_{\loc}$, as \eqref{Lipschitzprop} is then not well-defined. \end{rem} For the proof we follow the classical approach by Whitney with a few twists. First, we will define the extension in \eqref{vdef}. Then we prove that $v$ satisfies properties i)-iii).
ii) and iii) are quite easy to see from the definition of $v$; however, it is harder to verify that i) holds. On the one hand, we show that the strong derivative of $v$ exists almost everywhere, namely in $\R^N \back \partial X$, and that $dv=0$ almost everywhere. Here we need $ \V \partial X \V =0$. On the other hand, we then prove that the distributional derivative $dv$ is in fact also an $L^1$ function, yielding that $dv=0$ in the sense of distributions. We now start with the definition of the extension. Let us recall (cf. \cite{Stein}) that for $X \subset \R^N$ closed we can find a collection of pairwise disjoint open cubes $\{Q^{\ast}_i\}_{i \in \N}$ such that \begin{itemize} \item $Q^{\ast}_i$ are open dyadic cubes; \item $\cup_{i \in \N}~ \bar{Q}^{\ast}_i = X^C$; \item $\dist (Q^{\ast}_i,X) \leq l(Q^{\ast}_i) \leq 4 \dist(Q^{\ast}_i,X)$, where $l(Q^{\ast}_i)$ denotes the sidelength of the cube. \end{itemize} \begin{figure} \centering \begin{tikzpicture}[domain=0:8] \draw[thick, dashed](0,4) -- (8,8); \coordinate[label=above:$\partial X$] (A) at (4,6); \coordinate[label=above:$X$] (B) at (2,7); \coordinate[label=right:$X^C$] (C) at (6.5,1); \fill[color=gray, opacity=0.2] (0,4) -- (0,8) -- (8,8); \foreach \x in {2,4,6} \draw (\x,0) rectangle (\x+2,2); \draw (6,2) rectangle (8,4); \foreach \x in {0,1} \foreach \y in {0,1} \draw (\x,\y) rectangle (\x+1,\y+1); \foreach \x in {1,2} \draw (\x,2) rectangle (\x+1,3); \foreach \x in {3,4,5} \foreach \y in {2,3} \draw (\x,\y) rectangle (\x+1,\y+1); \foreach \x in {5,6,7} \draw (\x,4) rectangle (\x+1,5); \draw (7,5) rectangle (8,6); \foreach \x in {0,0.5} \foreach \y in {2,2.5} \draw (\x,\y) rectangle (\x+0.5,\y+0.5); \draw (0.5,3) rectangle (1,3.5); \draw (1,3) rectangle (1.5,3.5); \foreach \x in {1.5,2,2.5} \foreach \y in {3,3.5} \draw (\x,\y) rectangle (\x+0.5,\y+0.5); \draw (2.5,4) rectangle (3,4.5); \draw (3,4) rectangle (3.5,4.5); \foreach \x in {3.5,4,4.5} \foreach \y in {4,4.5} \draw (\x,\y) rectangle
(\x+0.5,\y+0.5); \draw (4.5,5) rectangle (5,5.5); \draw (5,5) rectangle (5.5,5.5); \foreach \x in {5.5,6,6.5} \foreach \y in {5,5.5} \draw (\x,\y) rectangle (\x+0.5,\y+0.5); \foreach \x in {6.5,7,7.5} \draw (\x,6) rectangle (\x+ 0.5,6.5); \draw (7.5,6.5) rectangle (8,7); \foreach \x in {0,0.25} \foreach \y in {3,3.25} \draw (\x,\y) rectangle (\x+0.25,\y+0.25); \foreach \x in {0.25,0.5,0.75} \draw (\x,3.5) rectangle (\x+0.25,3.75); \draw (0.75,3.75) rectangle (1,4); \foreach \x in {1,1.25} \foreach \y in {3.5,3.75} \draw (\x,\y) rectangle (\x+0.25,\y+0.25); \foreach \x in {1.25,1.5,1.75} \draw (\x,4) rectangle (\x+0.25,4.25); \draw (1.75,4.25) rectangle (2,4.5); \foreach \x in {2,2.25} \foreach \y in {4,4.25} \draw (\x,\y) rectangle (\x+0.25,\y+0.25); \foreach \x in {2.25,2.5,2.75} \draw (\x,4.5) rectangle (\x+0.25,4.75); \draw (2.75,4.75) rectangle (3,5); \foreach \x in {3,3.25} \foreach \y in {4.5,4.75} \draw (\x,\y) rectangle (\x+0.25,\y+0.25); \foreach \x in {3.25,3.5,3.75} \draw (\x,5) rectangle (\x+0.25,5.25); \draw (3.75,5.25) rectangle (4,5.5); \foreach \x in {4,4.25} \foreach \y in {5,5.25} \draw (\x,\y) rectangle (\x+0.25,\y+0.25); \foreach \x in {4.25,4.5,4.75} \draw (\x,5.5) rectangle (\x+0.25,5.75); \draw (4.75,5.75) rectangle (5,6); \foreach \x in {5,5.25} \foreach \y in {5.5,5.75} \draw (\x,\y) rectangle (\x+0.25,\y+0.25); \foreach \x in {5.25,5.5,5.75} \draw (\x,6) rectangle (\x+0.25,6.25); \draw (5.75,6.25) rectangle (6,6.5); \foreach \x in {6,6.25} \foreach \y in {6,6.25} \draw (\x,\y) rectangle (\x+0.25,\y+0.25); \foreach \x in {6.25,6.5,6.75} \draw (\x,6.5) rectangle (\x+0.25,6.75); \draw (6.75,6.75) rectangle (7,7); \foreach \x in {7,7.25} \foreach \y in {6.5,6.75} \draw (\x,\y) rectangle (\x+0.25,\y+0.25); \foreach \x in {7.25,7.5,7.75} \draw (\x,7) rectangle (\x+0.25,7.25); \draw (7.75,7.25) rectangle (8,7.5); \end{tikzpicture} \caption{A collection of cubes $Q_j^{\ast}$ near the boundary (up to a certain size).} \label{picture2} \end{figure} 
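To illustrate the decomposition in the simplest setting $N=1$, take $X^C = (0,1)$: one admissible choice of the cubes $Q^{\ast}_i$ consists of (the interiors of) the dyadic intervals \begin{displaymath} [\tfrac14,\tfrac12], \ [\tfrac12,\tfrac34], \quad \text{and} \quad [2^{-k-1},2^{-k}], \ [1-2^{-k},1-2^{-k-1}] \ \text{ for } k \geq 2, \end{displaymath} each of which has sidelength equal to its distance to $X = \R \backslash (0,1)$, and whose closures cover $(0,1)$.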
Choose $0 < \varepsilon < 1/4$ and define another collection of cubes by $Q_i= (1 + \varepsilon) Q_i^{\ast}$ (the cube with the same centre and sidelength $(1+\varepsilon) l(Q^{\ast}_i)$). Then \begin{itemize} \item $\cup_{i \in \N}~ Q_i = X^C$; \item For all $i \in \N$, the number of cubes $Q_j$ such that $Q_i \cap Q_j \neq \emptyset$ is bounded by a dimensional constant $C(N)$; \item In particular, every $x \in \R^N$ is contained in at most $C(N)$ cubes $Q_i$; \item The distance to the boundary is again comparable to the sidelength, i.e. \begin{displaymath} 1/2 \dist (Q_i,X) \leq l(Q_i) \leq 8 \dist(Q_i,X). \end{displaymath} \end{itemize} Note that if $X$ is $\Z^N$-periodic, then the collection $\{Q_i\}_{i \in \N}$ can also be chosen to be $\Z^N$-periodic (as we start from a collection of dyadic cubes). Now consider $\phi \in C_c^{\infty}((-1-\varepsilon,1+\varepsilon)^N,[0,\infty))$ with $\phi =1$ on $(-1,1)^N$. We can rescale and translate $\phi$ such that we obtain functions $\phi^{\ast}_j \in C_c^{\infty}(Q_j)$ with $\phi^{\ast}_j =1$ on $Q^{\ast}_j$. Define the partition of unity on $X^C$ by \begin{displaymath} \phi_j = \frac{\phi^{\ast}_j}{\sum_{i \in \N} \phi_i^{\ast}}. \end{displaymath} Note that $0 \leq \phi_j \leq 1$ and that there exists a constant $C>0$ such that for all $j \in \N$ \begin{displaymath} \V \nabla \phi_j \V \leq C/8~ l(Q_j)^{-1} \leq C \dist(Q_j,X)^{-1}. \end{displaymath} For each cube $Q_i$, we may find an $x \in X$ such that $\dist(Q_i,x) = \dist(Q_i,X)$. Denote this $x$ by $x_i$. For a multiindex $I=(i_1,...,i_{r+1}) \in \N^{r+1}$, define \begin{displaymath} G(x_{i_1},...,x_{i_{r+1}})= G(I) := \fint_{\Sim(x_{i_1},...,x_{i_{r+1}})} u(y) \dH^r(y). \end{displaymath} We now define the differential form $\alpha \in L^1 (\R^N,\Lambda^r)$ by \begin{equation} \label{alpha} \alpha(y) := \sum_{I \in \N^{r+1}} \phi_{i_1} d\phi_{i_2}\wedge ... \wedge d\phi_{i_{r+1}} \wedge (G(I)(\nu^r(x_{i_1},...,x_{i_{r+1}}))).
\end{equation} Note that in this setting $G(I)(\nu^r(...)) \in \R = \Lambda^0$. We claim that the function $v \in L^1_{\loc} (\R^N,\Lambda^r)$ given by \begin{equation} \label{vdef} v(y) := \left\{ \begin{array}{ll} u(y) & y \in X, \\ (-1)^{r} \alpha(y) & y \in X^C \end{array} \right. \end{equation} is the function satisfying all the properties of Lemma \ref{extension}. \begin{lemma} \label{aux1} The differential form $\alpha$ defined in \eqref{alpha} satisfies $\alpha \in L^1(X^C, \Lambda^r)$ and the sum in \eqref{alpha} converges pointwise and in $L^1$. \end{lemma} \begin{proof} Pointwise convergence is clear, as for fixed $y \in X^C$ only finitely many summands are nonzero in a neighbourhood of $y$ ($\phi_i$ is only nonzero in $Q_i$ and any point is only covered by at most $C(N)$ cubes). For $L^1$ convergence fix some $i_1 \in \N$. Note that there are at most $C(N)^r$ nonzero summands in $i_2,...,i_{r+1}$, as $Q_{i_1}$ only intersects with $C(N)$ other cubes. Furthermore, note that for all $i_l$ with $Q_{i_l} \cap Q_{i_1} \neq \emptyset$ \begin{displaymath} \norm d \phi_{i_l} (y) \norm_{\Lambda^1} \leq C \dist(y,X)^{-1} \leq C l(Q_{i_1})^{-1}. \end{displaymath} Moreover, we can bound $\nu^r$ by \begin{displaymath} \norm \nu^r(x_{i_1},...,x_{i_{r+1}}) \norm_{\Lambda_r} \leq \max_{a,b \in \{i_1,..,i_{r+1}\}} \V x_a -x_b \V^r \leq C l(Q_{i_1})^r. \end{displaymath} Hence, we can bound the $L^{\infty}$-norm of a nonzero summand of \eqref{alpha} by $C \norm u \norm_{L^{\infty}}$, as $\V G(I) \V \leq \norm u \norm_{L^{\infty}}$. As the support of the summand is contained in $Q_{i_1}$, we have that its $L^1$ norm is bounded by \begin{displaymath} C \norm u \norm_{L^{\infty}} \V Q_{i_1} \V. \end{displaymath} Recall that any point in $X^C$ is covered by at most $C(N)$ cubes, so that $\sum_{i \in \N} \V Q_i \V \leq C(N) \V X^C \V$.
Hence, the sum in \eqref{alpha} converges absolutely in $L^1$ and its $L^1$ norm is bounded by $C(N)^{r+1} C \norm u \norm_{L^{\infty}} \V X^C \V$. \end{proof} \begin{lemma} \label{aux2} The function $v$ is strongly differentiable almost everywhere and satisfies $dv(y) =0$ for all $y \in \R^N \back \partial X$. \end{lemma} \begin{proof} Note that $u \in C_c^{\infty}(\R^N,\Lambda^r)$ and hence $v$ is strongly differentiable in $X \back \partial X$. Furthermore, the sum in \eqref{alpha} is a finite sum in a neighbourhood of $y$ for all $y \in X^C$. As the summands are also $C^{\infty}$, the sum is $C^{\infty}$ in the interior of $X^C$. By assumption $du=0$, hence it remains to prove that $d \alpha (y) = 0$ for all $y \in X^C$. Note that in a neighbourhood of $y \in X^C$ again only finitely many summands are nonzero. Using that $d^2=0$ and the Leibniz rule, we get \begin{equation} \label{dalpha} d \alpha(y) = \sum_{I \in \N^{r+1}} d\phi_{i_1}(y) \wedge ... \wedge d \phi_{i_{r+1}}(y) (G(I)(\nu^r(x_{i_1},...,x_{i_{r+1}}))) . \end{equation} Observe that this sum does not converge in $L^1$ and hence this identity is only valid pointwise. Pick some $j \in \N$ such that $y \in Q_j$. As all $\phi_i$ sum up to $1$ on $X^C$, we have \begin{displaymath} d\phi_j(y) = - \sum_{i \in \N \back \{j\}} d\phi_i(y). \end{displaymath} Replace $d \phi_j$ in the sum in \eqref{dalpha} by $- \sum_{i \in \N \back \{j\}} d\phi_i(y)$. Recall that $\nu^r(x_1,...,x_{r+1})=0$ if $x_l=x_{l'}$ for some $l \neq l'$. Hence, \begin{align*} d \alpha(y) &= \sum_{I \in \N^{r+1}} d \phi_{i_1}(y) \wedge ... \wedge d\phi_{i_{r+1}}(y) \wedge (G(I)(\nu^r(x_{i_1},...,x_{i_{r+1}})))\\ ~ & =\sum_{I \in (\N \back \{j\})^{r+1}} d \phi_{i_1}(y) \wedge ... \wedge d \phi_{i_{r+1}}(y) \wedge(G(I)(\nu^r(x_{i_1},...,x_{i_{r+1}})))\\ ~&~~ - \sum_{l=1}^{r+1} \sum_{I \in (\N \back \{j\})^{r+1}} d \phi_{i_1}(y) \wedge ...
\wedge d \phi_{i_{r+1}}(y) \\ & \hspace{1.5cm}\wedge (G(x_{i_1},...,x_{i_{l-1}},x_j,x_{i_{l+1}},...)( \nu^r(x_{i_1},...,x_{i_{l-1}},x_j,x_{i_{l+1}},...))). \end{align*} We apply Stokes' theorem \eqref{Stokes2} to the $r$-form $u$ and the simplex with vertices $x_j, x_{i_1},...,x_{i_{r+1}}$, use that $d u=0$ and conclude that this term is $0$, i.e. \begin{align*} G(I)&(\nu^r(x_{i_1},...,x_{i_{r+1}})) - \sum_{l=1}^{r+1} G(x_{i_1},...,x_{i_{l-1}},x_j,x_{i_{l+1}},...)( \nu^r(x_{i_1},...,x_j,x_{i_{l+1}},...)) \\ ~& = - \frac{r-1}{r} \fint_{\Sim(x_j,x_{i_1},...,x_{i_{r+1}})} du(y)( \nu^{r+1}(x_j,x_{i_1},...,x_{i_{r+1}})) \dH^{r+1}(y) = 0. \end{align*} Hence, the pointwise derivative equals $0$ almost everywhere. \end{proof} It is important to note that the sum \eqref{alpha} in the definition of $\alpha$ converges in $L^1$, but in general does not converge in $W^{1,1}$, and thus we have no information on the behaviour at the boundary of $X^C$. However, it suffices to show that the distribution $dv$ for $v$ given by \eqref{vdef} is actually an $L^1$ function. If $d v \in L^1$, we can conclude with Lemma \ref{aux2} that $dv= 0$ in the sense of distributions. \begin{lemma} \label{aux3} The distributional exterior derivative of $v$ defined in \eqref{vdef} satisfies $dv \in L^1(\R^N,\Lambda^{r+1})$, i.e. there exists an $L^1$ function $h \in L^1(\R^N,\Lambda^{r+1})$ such that for all $\psi \in C_c^{\infty}(\R^N,\Lambda^{N-r-1})$ \begin{displaymath} (-1)^{r} \int_{X^C} \alpha \wedge d\psi + \int_{X} u \wedge d\psi = \int_{\R^N} h \wedge \psi. \end{displaymath} \end{lemma} \begin{proof} Consider \begin{displaymath} \int_{X^C} \alpha(y) \wedge d\psi(y) \dy. \end{displaymath} In view of the definition of $\alpha$, this expression is given by \begin{align*} \int_{\R^N} & \sum_{I \in \N^{r+1}} \phi_{i_1}d\phi_{i_2}\wedge ... \wedge d\phi_{i_{r+1}} \wedge (G(I)(\nu^r(x_{i_1},...,x_{i_{r+1}})))\wedge d\psi~\dy = (\ast).
\end{align*} We use the splitting $G(I) = (G(I) - u(\cdot)) +u(\cdot)$ and write ($\ast$) as \begin{equation} \label{splitting} \begin{aligned} (\ast) &= \int_{\R^N} \sum_{I \in \N^{r+1}} \phi_{i_1} d\phi_{i_2}\wedge ...\wedge d\phi_{i_{r+1}} \wedge \big((G(I) - u(\cdot))( \nu^r(x_{i_1},...,x_{i_{r+1}}))\big) \wedge d\psi\\ ~& ~+ \int_{\R^N} \sum_{I \in \N^{r+1}} \phi_{i_1} d \phi_{i_2}\wedge...\wedge d \phi_{i_{r+1}}\wedge \big( u(\cdot)(\nu^r(x_{i_1},...,x_{i_{r+1}}))\big)\wedge d\psi~ \\ ~& = \text {(I)} + \text {(II)}. \end{aligned} \end{equation} Note that (I) defines a distribution given by an $L^1$ function. Indeed, the sum \begin{displaymath} \sum_{I \in \N^{r+1}} \phi_{i_1} d\phi_{i_2}\wedge ...\wedge d\phi_{i_{r+1}} \wedge \big((G(I) - u(y))( \nu^r(x_{i_1},...,x_{i_{r+1}}))\big) \end{displaymath} converges in $W^{1,1}(\R^N,\Lambda^{r})$. To see this, one can repeat the proof of Lemma \ref{aux1} and use that there are additional factors in the estimate of the norms. For this, note that if $z \in Q_{i_1}$ \begin{displaymath} \norm G(I) - u(z) \norm_{\Lambda^r} \leq Cl(Q_{i_1})\norm \nabla u \norm_{L^{\infty}} \end{displaymath} and \begin{displaymath} \norm \nabla (G(I) -u(\cdot))(z) \norm \leq C \norm\nabla u \norm_{L^{\infty}}. \end{displaymath} One gets improved regularity and may integrate by parts to eliminate the derivative of $\psi$. Term (II) is not so easy to handle. We prove the following claims: \noindent \textbf{Claim 1:} Let $1 \leq s \leq r$ and $I' =(i_s,...,i_{r+1}) \in \N^{r-s+2}$. There exists $h_s \in L^1(\R^N,\Lambda^{r+1})$ such that \begin{equation} \label{indstep} \begin{aligned} \int_{X^C} & \sum_{I' \in \N^{r-s+2}} \phi_{i_s} d\phi_{i_{s+1}} \wedge...\wedge d \phi_{i_{r+1}} \wedge \big(u(\cdot)(\nu^{r-s+1}(x_{i_s},...,x_{i_{r+1}}))\big) \wedge d \psi \\ &= \int_{X^C} h_s \wedge \psi \\ & \quad - \int_{X^C} \sum_{I' \in \N^{r-s+1}} \phi_{i_{s+1}} d \phi_{i_{s+2}} \wedge ... \wedge d\phi_{i_{r+1}} \wedge \big(u(\cdot)(\nu^{r-s}(x_{i_{s+1}},...,x_{i_{r+1}}))\big) \wedge d \psi.
\end{aligned} \end{equation} Here we use the notation that $\nu^0(x_{i_{r+1}}) = 1 \in \Lambda_0 =\R$. \noindent \textbf{Claim 2:} There is $\tilde{h} \in L^1(T_N,\Lambda^{r+1})$ such that \begin{equation} \label{ind} \begin{aligned} \int_{\R^N} & \sum_{I' \in \N^{r+1}} \phi_{i_1} d\phi_{i_{2}} \wedge...\wedge d \phi_{i_{r+1}} \wedge (u(\cdot)(\nu^{r}(x_{i_1},...,x_{i_{r+1})}))) \wedge d \psi \\ &= \int_{X^C} \tilde{h} \wedge \psi + (-1)^r \int_{X^C} u \wedge d\psi.\\ \end{aligned} \end{equation} Note that Claim 2 follows from Claim 1 by an inductive argument. The domain of integration in \eqref{ind} can be replaced by $X^C$ as well, as all $\phi_{i_j}$ are supported in $X^C$. First, let us conclude the proof under the assumption that Claim 1 holds true. Using \eqref{splitting} and Claim 2 we see that there is an $h \in L^1(\R^N,\R^d)$ such that \begin{displaymath} \int_{X^C} \alpha \wedge d\psi = \int_{\R^N} h \wedge \psi + (-1)^{r}\int_{X^C} u \wedge d\psi. \end{displaymath} Recall that $du =0$ in the sense of distributions and therefore \begin{displaymath} -\int_{X^C} u\wedge d \psi = \int_{X} u\wedge d \psi. \end{displaymath} We conclude that there exists an $L^1$ function $h \in L^1(\R^N,\Lambda^{r+1})$ such that \begin{displaymath} \int_{X^C} \alpha \wedge d\psi + (-1)^{r} \int_{X} u \wedge d\psi = \int_{\R^N} h \wedge \psi. \end{displaymath} Thus, $dv$ is an $L^1$ function. It remains to prove Claim 1. Note that \begin{equation} \label{identity1} \nu^{r-s+1}(x_{i_s},...,x_{i_{r+1}}) = \sum_{j=s}^{r+1} \nu^{r-s+1}(x_{i_s},...,x_{i_{j-1}},y,x_{i_{j+1}},...,x_{i_{r+1}}). \end{equation} This can be verified using that the wedge product is alternating and explicitly writing the right-hand side of \eqref{identity1}. Using this identity, we may split the right-hand side of \eqref{indstep} (denoted by (III)), i.e. \begin{align*} \text{(III)} &= \sum_{j=s+1}^{r+1} \int_{\R^N} \sum_{I \in \N^{r-s+2}} \phi_{i_s} d\phi_{i_{s+1}}\wedge ... 
\wedge d\phi_{i_{r+1}}\\ & \hspace{2cm} \wedge \big(u(\cdot)( \nu^{r-s+1}(x_{i_s},...,x_{i_{j-1}},y,x_{i_{j+1}},...,x_{i_{r+1}}))\big) \wedge d\psi \\~& +\int_{\R^N} \sum_{I \in \N^{r-s+2}} \phi_{i_s} d\phi_{i_{s+1}}\wedge... \wedge d\phi_{i_{r+1}}\wedge\big( u(\cdot) (\nu^{r-s+1}(y,x_{i_{s+1}},...,x_{i_{r+1}}))\big)\wedge d\psi \\ ~&= \text {(IIIa)} + \text{(IIIb)}. \end{align*} Arguing as in Lemma \ref{aux1}, we see that the sum \begin{displaymath} \sum_{I \in \N^{r-s+2}} \phi_{i_s} d\phi_{i_{s+1}}\wedge ... \wedge d\phi_{i_{r+1}} \wedge \big(u(\cdot)( \nu^{r-s+1}(x_{i_s},...,x_{i_{j-1}},y,x_{i_{j+1}},...,x_{i_{r+1}}))\big) \end{displaymath} is in fact convergent in $L^1$. Moreover, the index $i_j$ only appears once in this sum. Recall that for $y \in X^C $ \begin{displaymath} \sum_{i_j \in \N} d\phi_{i_j}(y) =0. \end{displaymath} Thus, \begin{displaymath} \text{(IIIa)} = 0. \end{displaymath} For (IIIb) note that $\sum_{i_s \in \N} \phi_{i_s} = 1_{X^C}$ and, by a similar argument as for (IIIa), we can write \begin{align*} \text{(IIIb)} &= \int_{X^C} \sum_{I \in \N^{r-s+1}} d\phi_{i_{s+1}}\wedge...\wedge d\phi_{i_{r+1}}\wedge\big( u(\cdot)(\nu^{r-s+1}(y,x_{i_{s+1}},...,x_{i_{r+1}}))\big) \wedge d\psi. \end{align*} We can now integrate by parts to eliminate the exterior derivative in front of $\phi_{i_{s+1}}$. Applying Lemma \ref{prodrule}, using $d^2=0$, the Leibniz rule and the fact that $\phi_{i_j} \in C_c^{\infty}(\R^N,\R)$ \begin{align*} &(-1)^{r-s+1} \text{(IIIb)}\\ &= \int_{X^C} \sum_{I \in \N^{r-s+1}} \phi_{i_{s+1}}d\phi_{i_{s+2}}\wedge... \wedge d\phi_{i_{r+1}} \wedge d\big( u(\cdot)( \nu^{r-s+1}(y,x_{i_{s+1}},...,x_{i_{r+1}}))\big)\wedge d\psi \\ ~ & = \int_{X^C} \sum_{I \in \N^{r-s+1}} \phi_{i_{s+1}}d\phi_{i_{s+2}}\wedge... \wedge d\phi_{i_{r+1}} \\ & \hspace{4cm} \wedge D^{r-s+1,r}(\nabla u(\cdot),( \nu^{r-s+1}(y,x_{i_{s+1}},...,x_{i_{r+1}})))\wedge d\psi \\ ~& ~+~(-1)^{(r-s)} \int_{X^C } \sum_{I \in \N^{r-s+1}} \phi_{i_{s+1}}d\phi_{i_{s+2}}\wedge...
\wedge d\phi_{i_{r+1}} \\ &\hspace{4cm} \wedge (u(\cdot)( \nu^{r-s}(x_{i_{s+1}},...,x_{i_{r+1}})))\wedge d\psi \\ ~&= \text{(IIIc)} +\text{(IIId)}. \end{align*} Arguing similarly to Lemma \ref{aux1} and as for term (I), we can show that \begin{displaymath} \sum_{I \in \N^{r-s+1}} \phi_{i_{s+1}}d\phi_{i_{s+2}}\wedge... \wedge d\phi_{i_{r+1}} \wedge D^{r-s+1,r}(\nabla u(\cdot),( \nu^{r-s+1}(y,x_{i_{s+1}},...,x_{i_{r+1}}))) \in W^{1,1}(\R^N,\Lambda^r), \end{displaymath} and that this sum is convergent in $W^{1,1}$. Hence, we have shown that there exists $h_s \in L^1(\R^N,\Lambda^{r+1})$ such that \begin{equation} \label{induction} \begin{aligned} \text{(III)}=& \int_{\R^N} \sum_{I \in \N^{r-s+2}} \phi_{i_s} d \phi_{i_{s+1}}\wedge...\wedge d \phi_{i_{r+1}}\wedge ( u(\cdot)(\nu^{r-s+1}(x_{i_s},...,x_{i_{r+1}})))\wedge d\psi~ \\ &= \int_{\R^N} h_s \wedge \psi \\ &\hspace{0.5cm} -\int_{\R^N} \sum_{I \in \N^{r-s+1}} \phi_{i_{s+1}} d\phi_{i_{s+2}} \wedge ... \wedge d\phi_{i_{r+1}} \wedge (u(\cdot)(\nu^{r-s}(x_{i_{s+1}},...,x_{i_{r+1}})))\wedge d \psi. \end{aligned} \end{equation} Hence, Claim 1 holds, completing the proof of Lemma \ref{aux3}. \end{proof} This proves Lemma \ref{extension}. The property that \begin{displaymath} dv = 0 \quad \text{in the sense of distributions} \end{displaymath} follows from Lemma \ref{aux2} and Lemma \ref{aux3}. By definition, $v=u$ on $X$. Finally, we can bound the $L^{\infty}$-norm of $v$ by $CL$: in the sum defining $\alpha$, \begin{displaymath} \sum_{I \in \N^{r+1}} \phi_{i_1} d\phi_{i_2}\wedge ... \wedge d\phi_{i_{r+1}} \wedge (G(I)(\nu^r(x_{i_1},...,x_{i_{r+1}}))), \end{displaymath} every summand can be bounded by $CL$ due to \eqref{Lipschitzprop} and the estimate $\V d \phi_j \V \leq C \dist(Q_j,X)^{-1}$. Again, we get the $L^{\infty}$ bound, as only finitely many summands are nonzero for every $y \in X^C$. With slight modifications one is able to prove the following variants.
\begin{coro} \label{unbounded} Let $u \in C^{\infty}(\R^N,\Lambda^r)$ with $du=0$, let $L>0$, and let $X \subset \R^N$ be a nonempty closed set such that $\norm u \norm_{L^{\infty}(X)} \leq L$ and for all $x_1,...,x_{r+1} \in X$ we have \begin{displaymath} \left \V \fint_{\Sim(x_1,...,x_{r+1})} u(y)(\nu^r(x_1,...,x_{r+1})) \dy \right \V \leq L \max \V x_i -x_j \V ^r. \end{displaymath} Suppose further that $\V \partial X \V =0$. There exists a constant $C= C(N,r)$ such that for all $u \in C^{\infty}(\R^N,\Lambda^r)$ and $X$ meeting these requirements there exists $v \in L^1_{\loc}(\R^N,\Lambda^r)$ with \begin{enumerate} [i)] \item $dv =0$ in the sense of distributions; \item $v(y)=u(y)$ for all $y \in X$; \item $\norm v \norm_{L^{\infty}} \leq CL$. \end{enumerate} \end{coro} This statement is proven in the same way as Lemma \ref{extension}, but all the intermediate statements are only true locally (e.g. the $L^1$ bounds on $\alpha$ are replaced by bounds in $L^1_{\loc}(X^C,\Lambda^r)$). If we choose $u$ and $X$ to be $\Z^N$-periodic, we get a suitable statement for the torus. \begin{coro} \label{torus} Let $u \in C^{\infty}(T_N,\Lambda^r)$ with $du =0$, let $L>0$, and let $X \subset \R^N$ be a nonempty, closed, $\Z^N$-periodic set (which can be viewed as a subset of $T_N$) such that $\norm u \norm_{L^{\infty}(X)} \leq L$ and for all $x_1,...,x_{r+1} \in X$ we have \begin{displaymath} \left \V \fint_{\Sim(x_1,...,x_{r+1})} \tilde{u}(y)(\nu^r(x_1,...,x_{r+1})) \dy\right \V \leq L \max \V x_i -x_j \V ^r, \end{displaymath} where $\tilde{u}\in C^{\infty}(\R^N,\Lambda^r)$ is the $\Z^N$-periodic representative of $u$. Suppose further that $\V \partial X \V =0$. There exists a constant $C= C(N,r)$ such that for all $u \in C^{\infty}(T_N,\Lambda^r)$ and $X$ meeting these requirements there exists $v \in L^1(T_N,\Lambda^r)$ with \begin{enumerate} [i)] \item $dv =0$ in the sense of distributions; \item $v(y)=u(y)$ for all $y \in X \subset T_N$; \item $\norm v \norm_{L^{\infty}} \leq CL$.
\end{enumerate} \end{coro} As mentioned before, we can choose the cubes $Q_j$ to be rescaled dyadic cubes. As the set $X$ is periodic, the set of cubes (and hence also the partition of unity) and their projection points may also be chosen to be $\Z^N$-periodic. By definition, the extension will then also be $\Z^N$-periodic. \section{$L^{\infty}$-truncation} \label{Secmain} Now we prove the main result of this paper on the $L^{\infty}$-truncation of closed forms. \begin{thm}[$L^{\infty}$-truncation of differential forms] \label{main} There exist constants $C_1,C_2 >0$ such that for all $u \in L^1(T_N,\Lambda^r)$ with $du=0$ and all $L>0$ there exists $v \in L^{\infty}(T_N,\Lambda^r)$ with $dv=0$ and \begin{enumerate} [i)] \item $\norm v \norm_{L^{\infty}(T_N,\Lambda^r)} \leq C_1 L$; \item $\V \{y \in T_N \colon v(y) \neq u(y) \}\V \leq \frac{C_2}{L} \int_{\{y \in T_N\colon \V u(y) \V >L\}} \V u(y) \V \dy$; \item $\norm v - u \norm_{L^1(T_N,\Lambda^r)} \leq C_2 \int_{\{y \in T_N \colon \V u(y) \V >L\}} \V u(y) \V \dy$. \end{enumerate} \end{thm} Given the Whitney-type extension obtained in Corollary \ref{torus} and Lemma \ref{extension}, combined with Lemma \ref{maximalf}, the proof now roughly follows \textsc{Zhang}'s proof for Lipschitz truncation in \cite{Zhang}. First, we prove the statement in the case that $u$ is smooth, directly using our extension theorem for the set $X= \{M u \leq \lambda\}$ with a suitable $\lambda \in [2L,3L]$. After calculations similar to \cite{Zhang} we are able to show that this extension satisfies the properties of Theorem \ref{main}. Afterwards, we prove the statement for $u \in L^1(T_N,\Lambda^r)$ by a standard density argument. \begin{proof} First, suppose that $u \in C^{\infty}(T_N,\Lambda^r)$. For $\lambda>0$ define the set \begin{displaymath} X_{\lambda} = \{y \in T_N \colon Mu(y) \leq \lambda\}. \end{displaymath} Choose $2L \leq \lambda \leq 3L$ such that $\V \partial X_{\lambda} \V =0$.
Then, by Lemma \ref{maximalf} and Corollary \ref{torus}, there exists a $v \in L^1(T_N,\Lambda^r)$ with \begin{enumerate} \item $\{ y \in T_N \colon v(y) \neq u(y) \} \subset X_{\lambda}^C$. \item $\norm v \norm_{L^{\infty}} \leq CL$. \item $dv =0$ in the sense of distributions. \end{enumerate} We need to show that \begin{equation} \label{TS1}\norm v - u \norm_{L^1(T_N,\Lambda^r)} \leq C_2 \int_{\{y\colon \V u(y) \V >L\}} \V u(y) \V \dy \end{equation} and that \begin{equation} \label{TS2} \V \{y \in T_N \colon v(y) \neq u(y) \}\V \leq \frac{C_2}{L} \int_{\{y\colon \V u(y) \V >L\}} \V u(y) \V \dy. \end{equation} Indeed, \eqref{TS1} follows from \eqref{TS2}, as \begin{align*} \int_{T_N} \V v(y) -u(y) \V \dy &= \int_{X_{\lambda}^C} \V v(y) -u(y)\V \dy \\ ~& \leq \int_{\{Mu \geq \lambda\}} \V u(y) \V \dy + \int_{\{Mu \geq \lambda\}} \V v(y) \V \dy \\ ~& \leq \int_{\{\V u \V \geq \lambda\}} \V u(y) \V \dy + \lambda \V \{ Mu \geq \lambda\} \V + CL \V \{ Mu \geq \lambda\} \V \\ ~& \leq \int_{\{\V u \V \geq \lambda\}} \V u(y) \V \dy + 2CL \V \{ Mu \geq \lambda\} \V, \end{align*} where we split the integral of $\V u \V$ into the parts where $\V u \V \geq \lambda$ and where $\V u \V < \lambda$ and used $\lambda \leq 3L \leq CL$ (without loss of generality $C \geq 3$). Thus, it suffices to prove \eqref{TS2}. To this end, define the function $h \colon \Lambda^r \to \R$ by \begin{displaymath} h(z) = \left \{ \begin{array}{ll} 0 & \text{if } \V z \V <L, \\ \V z \V -L & \text{if } \V z \V \geq L. \end{array} \right. \end{displaymath} Let $y \in \{Mu > \mu\}$ for $\mu \in \R$. Then there exists an $R>0$ such that \begin{displaymath} \fint_{B_R(y)} \V u(z)\V \dz > \mu. \end{displaymath} Thus, \begin{align*} M (h (u))(y) & \geq \fint_{B_R(y)} \V h(u) (z) \V \dz \\ ~& = \frac{1}{\V B_R(y) \V} \int_{B_R(y) \cap \{\V u \V \geq L\}} (\V u(z) \V - L) \dz \\ ~& \geq \fint_{B_R(y)} \V u(z) \V \dz - \frac{1}{\V B_R(y) \V} \int_{B_R(y) \cap \{\V u \V \leq L\}} \V u(z) \V \dz \\ & \hspace{1cm}- \frac{1}{\V B_R(y) \V} \int_{B_R(y) \cap \{ \V u \V \geq L\}} L \dz \\ ~& \geq \mu -L. \end{align*} Thus, $\{y \in T_N \colon Mu > \mu\} \subset \{y \in T_N \colon M h(u) (y) > \mu - L\}$.
Using the weak-$L^1$ estimate for the maximal function (Proposition \ref{maximalfct}), we get \begin{equation} \label{estimatex} \begin{aligned} \V \{ y \in T_N \colon Mu(y) \geq \lambda\} \V & \leq \V \{ y \in T_N \colon M h(u) \geq \lambda-L\} \V\\ ~&\leq \frac{1}{\lambda -L} C \int_{T_N} \V h(u)(z) \V \dz\\ ~& \leq \frac{C}{L} \int_{T_N \cap \{\V u \V \geq L\}} \V u(z) \V \dz. \end{aligned} \end{equation} This is what we wanted to show. Note that the proof only uses $u \in C^{\infty}(T_N,\Lambda^r)$ to define $v$ and nowhere else, hence estimate \eqref{estimatex} is valid for all $u \in L^1(T_N,\Lambda^r)$. For general $u \in L^1(T_N,\Lambda^r)$, one may consider a sequence $u_n \in C^{\infty}(T_N,\Lambda^r)$ with $d u_n=0$ and $u_n \to u$ in $L^1$ and pointwise almost everywhere. This sequence can be easily constructed by convolving with standard mollifiers. Observe that for $\lambda >0$ \begin{align} \label{estimate1} \int_{\{\V u_n \V \geq 2\lambda\}} \V u_n \V \dy & \leq \int_{\{ \V u_n -u \V \geq \V u \V\} \cap \{ \V u_n \V \geq 2 \lambda\}} \V u_n \V \dy + \int_{\{\V u_n - u \V \leq \V u \V \} \cap \{ \V u_n \V \geq 2 \lambda\}} \V u_n \V \dy \\ \nonumber ~& \leq 2 \int_{\{\V u \V \geq \lambda\}} \V u \V \dy +2 \norm u_n - u \norm_{L^1}. \end{align} Furthermore, we use the subadditivity of the maximal function and see that for all $y \in T_N$ \begin{displaymath} M u_n (y) \leq M u (y) + M(u - u_n)(y). \end{displaymath} Thus, \begin{displaymath} \{y \in T_N \colon M u_n(y) \geq 2\lambda\} \subset \{y \in T_N \colon M u(y) \geq \lambda\} \cup \{ y \in T_N \colon M (u - u_n)(y) \geq \lambda\}. \end{displaymath} Using the weak-$L^1$ estimate for the maximal function (Proposition \ref{maximalfct}) we see that \begin{equation} \label{setestimate} \left \V \{ y \in T_N \colon M u(y) \leq \lambda\} \cap \{y \in T_N \colon M u_n(y) \geq 2 \lambda\} \right \V \longrightarrow 0 \quad \text{as } n \to \infty.
\end{equation} Choose some $\lambda \in (4L,6L)$ such that, for all $n \in \N$, $\V \partial \{y \in T_N \colon Mu_n(y) \geq 2\lambda\} \V =0$. Then extend as in the first part of the proof to get a sequence $v_n$ with $dv_n=0$ and \begin{enumerate} [a)] \item $\norm v_n \norm_{L^{\infty}(T_N,\Lambda^r)} \leq 2C_1 \lambda$; \item $\V \{y \in T_N \colon v_n(y) \neq u_n(y) \}\V \leq \frac{C_2}{2\lambda} \int_{\{y\colon \V u_n(y) \V >2 \lambda\}} \V u_n(y) \V \dy$; \item $\norm v_n - u_n \norm_{L^1(T_N,\Lambda^r)} \leq C_2 \int_{\{y\colon \V u_n(y) \V > 2 \lambda\}} \V u_n(y) \V \dy$. \end{enumerate} Letting $n \to \infty$, by a) this sequence converges, up to extraction of a subsequence, weakly$*$ to some $v \in L^{\infty}(T_N,\Lambda^r)$. The weak$*$-convergence implies $dv =0$. Moreover, by construction, the set $\{y \in T_N \colon v_n \neq u_n\}$ is contained in the set $\{y \in T_N \colon Mu_n(y) \geq 2 \lambda\}$. As $u_n \to u$ pointwise a.e. and in $L^1$, we get using \eqref{setestimate} that $v = u$ on the set $\{y \in T_N \colon M u(y) \leq \lambda\}$. (If $v_n$ converges to $u$ in measure on a set $A$ and $v_n$ converges weakly to some $v$, then $v=u$ on $A$.) Hence, $v$ defined as the weak$*$ limit of $v_n$ satisfies \begin{enumerate} [i)] \item $\norm v \norm_{L^{\infty}(T_N,\Lambda^r)} \leq 2 C_1 \lambda \leq 12 C_1 L $; \item using \eqref{estimatex} and $v=u$ on $\{y \in T_N \colon M u(y) \leq \lambda\}$ \begin{displaymath} \V \{ y \in T_N \colon u(y) \neq v(y) \} \V \leq \frac{C_2}{L} \int_{\{y \in T_N\colon \V u(y) \V >L\} } \V u(y) \V \dy; \end{displaymath} \item using the triangle inequality and $v_n - u_n \to 0$ in $L^1$, one obtains \begin{displaymath} \norm v - u \norm_{L^1(T_N,\Lambda^r)} \leq C_2 \int_{\{y \in T_N \colon \V u(y) \V >L\}} \V u(y) \V \dy. \end{displaymath} \end{enumerate} Hence, $v$ meets the requirements of Theorem \ref{main}.
\end{proof} \begin{coro}[$L^\infty$-truncation for sequences] \label{main2} Suppose that we have a sequence $u_n \in L^1(T_N,\Lambda^r)$ with $d u_n=0$, and that there exists $L>0$ such that \begin{displaymath} \int_{\{y \in T_N \colon \V u_n(y) \V >L \}} \V u_n(y) \V \dy \longrightarrow 0 \quad \text{as } n \to \infty. \end{displaymath} There exists a $C_1=C_1(N,r)$ and a sequence $v_n \in L^1(T_N,\Lambda^r)$ with $dv_n=0$ and \begin{enumerate} [a)] \item $\norm v_n \norm_{L^{\infty}(T_N,\Lambda^r)} \leq C_1 L$; \item $\norm v_n - u_n \norm_{L^1(T_N,\Lambda^r)} \to 0$ as $n \to \infty$; \item $\V \{ y \in T_N \colon v_n(y) \neq u_n(y) \} \V \to 0$. \end{enumerate} \end{coro} This directly follows by applying Theorem \ref{main}. The proof of Theorem \ref{main} also works if $L^1$ is replaced by $L^p$ for $1 \leq p< \infty$. Furthermore, we do not need to restrict ourselves to periodic functions: the statement is also valid for non-periodic functions on $\R^N$. \begin{prop} \label{main3} Let $1 \leq p < \infty$. There exist constants $C_1,C_2 >0$, such that, for all $u \in L^p(\R^N,\Lambda^r)$ with $du=0$ and all $L>0$, there exists $v \in L^p(\R^N,\Lambda^r)$ with $dv=0$ and \begin{enumerate} [i)] \item $\norm v \norm_{L^{\infty}(\R^N,\Lambda^r)} \leq C_1 L$; \item $\V \{y \in \R^N \colon v(y) \neq u(y) \}\V \leq \frac{C_2}{L^p} \int_{\{y \in \R^N \colon \V u(y) \V >L\}} \V u(y) \V^p \dy$; \item $\norm v - u \norm^p_{L^p(\R^N,\Lambda^r)} \leq C_2 \int_{\{y \in \R^N \colon \V u(y) \V >L\}} \V u(y) \V^p \dy$. \end{enumerate} \end{prop} As described, the proof is pretty much the same as for Theorem \ref{main}. We may also want to truncate closed forms supported on an open bounded subset $\Omega \subset \R^N$ (c.f. \cite{Sebastian,Solenoidal}). This is possible, but we may lose the property that they are supported in this subset. Let us, for simplicity, consider balls $\Omega=B_{\rho}(0)$ and, after rescaling, $\rho=1$. \begin{prop}\label{main4} Let $1 \leq p < \infty$.
There exist constants $C_1,C_2 >0$ such that, for all $u \in L^p( \R^N,\Lambda^r)$ with $du=0$ and $\spt(u) \subset B_1(0)$ and all $L>0$, there exists $v \in L^p(\R^N,\Lambda^r)$ with $dv=0$ and \begin{enumerate} [i)] \item $\norm v \norm_{L^{\infty}(\R^N,\Lambda^r)} \leq C_1 L$; \item $\V \{y \in \R^N \colon v(y) \neq u(y) \}\V \leq \frac{C_2}{L^p} \int_{\{y \in \R^N \colon \V u(y) \V >L\}} \V u(y) \V^p \dy$; \item $\norm v - u \norm^p_{L^p(\R^N,\Lambda^r)} \leq C_2 \int_{\{y \in \R^N \colon \V u(y) \V >L\}} \V u(y) \V^p \dy$; \item $\spt(v) \subset B_R(0)$, where $R$ only depends on the $L^p$-norm of $u$ and on $L$. \end{enumerate} \end{prop} Again, this proof is very similar to the proof of Theorem \ref{main}. Property iv) comes from the fact that if a function $u$ is supported in $B_1(0)$, then its maximal function $Mu(y)$ decays rapidly as $\V y \V \to \infty$. Let us mention that this result also holds for vector-valued differential forms, i.e. $u \in L^p(\R^N,\Lambda^r \times \R^m)$, where the exterior derivative is taken componentwise. \begin{prop}[Vector-valued forms on the torus] \label{main5} There exist constants $C_1,C_2 >0$ such that, for all $u \in L^1(T_N,\Lambda^r\times\R^m)$ with $du=0$ and all $L>0$, there exists $v \in L^1(T_N,\Lambda^r \times \R^m)$ with $dv=0$ and \begin{enumerate} [i)] \item $\norm v \norm_{L^{\infty}(T_N,\Lambda^r \times \R^m)} \leq C_1 L$; \item $\V \{y \in T_N \colon v(y) \neq u(y) \}\V \leq \frac{C_2}{L} \int_{\{y \in T_N\colon \V u(y) \V >L\}} \V u(y) \V \dy$; \item $\norm v - u \norm_{L^1(T_N,\Lambda^r \times \R^m)} \leq C_2 \int_{\{y \in T_N \colon \V u(y) \V >L\}} \V u(y) \V \dy$. \end{enumerate} \end{prop} This statement follows directly from the proof of Theorem \ref{main} by simply truncating every component of $u$. Likewise, statements similar to Corollary \ref{main2} and Propositions \ref{main3} and \ref{main4} follow for vector-valued differential forms.
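To put the estimates i)--iii) into perspective, note the following elementary observation (not needed in the sequel): without the differential constraint, the plain pointwise truncation already achieves these bounds. Indeed, define \begin{displaymath} w(y) = \left \{ \begin{array}{ll} u(y) & \text{if } \V u(y) \V \leq L, \\ L \frac{u(y)}{\V u(y) \V} & \text{if } \V u(y) \V > L. \end{array} \right. \end{displaymath} Then $\norm w \norm_{L^{\infty}} \leq L$, by Chebyshev's inequality $\V \{ w \neq u \} \V = \V \{ \V u \V > L\} \V \leq \frac{1}{L} \int_{\{\V u \V >L\}} \V u(y) \V \dy$, and $\norm w - u \norm_{L^1} = \int_{\{\V u \V > L\}} (\V u(y) \V - L) \dy$, so that i)--iii) hold with $C_1 = C_2 = 1$. However, $w$ does not satisfy $dw=0$ in general; the entire difficulty in the results above is to obtain comparable bounds while preserving the constraint.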
\section{Applications} \label{secappl} In the following, we consider a linear and homogeneous differential operator of first order, i.e. we are given $\A: C^{\infty}(\R^N,\R^d) \to C^{\infty}(\R^N,\R^l)$ of the form \begin{displaymath} \A u= \sum_{k=1}^N A _k \partial_k u, \end{displaymath} where $A_k : \R^d \to \R^l$ are linear maps. We call a continuous function $f: \R^d \to \R$ $\A$-quasiconvex if for all $\phi \in C^{\infty}(T_N,\R^d)$ with $\int_{T_N} \phi(y) \dy =0$ and $\A \phi =0$, and for all $x \in \R^d$, the following version of Jensen's inequality \begin{equation} \label{Aqcdef} f(x) \leq \int_{T_N} f(x + \phi(y)) \dy \end{equation} holds true. \textsc{Fonseca} and \textsc{M\"uller} showed in \cite{FM} that, if the constant rank condition below holds, then $\A$-quasiconvexity is a necessary and sufficient condition for weak$*$ lower-semicontinuity of the functional $I: L^{\infty}(\Omega, \R^d) \to [0, \infty)$ defined by \begin{displaymath} I(u) = \int_{\Omega} f(u(y)) \dy. \end{displaymath} \noindent Define the symbol $\Aop \colon \R^N \back \{0\} \to \Lin(\R^d,\R^l)$ of the operator $\A$ by \begin{displaymath} \Aop (\xi) = \sum_{k=1}^N \xi_k A_k. \end{displaymath} The operator $\A$ is said to satisfy the \textbf{constant rank property} (c.f. \cite{Murat}) if for some fixed $r \in \{0,...,d\}$ and all $\xi \in \Sphere^{N-1} = \{\xi \in \R^N \colon \V \xi \V=1\}$ \begin{displaymath} \dim (\ker \Aop(\xi)) =r. \end{displaymath} We call a homogeneous differential operator $\B:C^{\infty}(T_N,\R^m) \to C^{\infty}(T_N,\R^d)$, which is not necessarily of order one, the potential of $\A$ if \begin{displaymath} \psi \in C^{\infty}(T_N,\R^d) \cap \ker \A,~ \int_{T_N} \psi(y) \dy=0 ~\Longleftrightarrow \exists \phi \in C^{\infty}(T_N,\R^m) \text{ s.t. } \psi= \B \phi. \end{displaymath} Recently, \textsc{Rai\c{t}\u{a}} showed that $\A$ has such a potential if and only if $\A$ satisfies the constant rank property (\cite{Raita}).
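As a standard illustration of the constant rank property, consider $\A = \divergence$ acting on vector fields, i.e. $d=N$, $l=1$ and $A_k v = v_k$. Then \begin{displaymath} \Aop(\xi) v = \sum_{k=1}^N \xi_k v_k = \xi \cdot v, \end{displaymath} so that $\ker \Aop(\xi) = \xi^{\perp}$ and hence $\dim (\ker \Aop(\xi)) = N-1$ for every $\xi \in \Sphere^{N-1}$; in particular, $\divergence$ satisfies the constant rank property. Similarly, for $\A = \curl$, i.e. $(\A u)_{ij} = \partial_i u_j - \partial_j u_i$, the symbol is $(\Aop(\xi)v)_{ij} = \xi_i v_j - \xi_j v_i$, whose kernel is $\R \xi$, of dimension $1$ for every $\xi \in \Sphere^{N-1}$.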
In the following, we always assume that $\A$ satisfies the constant rank property and that $\B$ is the potential of $\A$. \begin{defi} We say that $\A$ satisfies the property (ZL) if it admits a statement of the type of Corollary \ref{main2}, i.e. for all sequences $u_n \in L^1(T_N,\R^d) \cap \ker \A$ such that there exists an $L>0$ with \begin{displaymath} \int_{\{y \in T_N \colon \V u_n(y) \V >L \}} \V u_n(y) \V \dy \longrightarrow 0 \quad \text{as } n \to \infty, \end{displaymath} there exists a $C=C(\A)$ and a sequence $v_n \in L^1(T_N,\R^d) \cap \ker \A$ such that \begin{enumerate} [i)] \item $\norm v_n \norm_{L^{\infty}(T_N,\R^d)} \leq C L$; \item $\norm v_n - u_n \norm_{L^1(T_N,\R^d)} \to 0$ as $n \to \infty$. \end{enumerate} \end{defi} Our goal now is to show that (ZL) implies further properties for the operator $\A$. We first look at a few examples. \begin{ex} \label{ex:op} \begin{enumerate} [a)] \item As shown by \textsc{Zhang} \cite{Zhang}, the operator $\A= \curl$ has the property (ZL). This is shown by using that its potential is the operator $\B= \nabla$. In fact, most of the applications here have been shown for $\B = \nabla$ relying on (ZL), but can be reformulated for $\A$ satisfying (ZL). \item Let $W^k= (\R^N \otimes ... \otimes \R^N)_{\sym} \subset (\R^N)^k$. We may identify $u \in C^{\infty}(T_N,W^k)$ with $\tilde{u} \in C^{\infty}(T_N,(\R^N)^k)$ and define the operator \begin{displaymath} \curl^{(k)} \colon C^{\infty}(T_N,W^k) \to C^{\infty}(T_N, (\R^N)^{k-1} \times \Lambda^2) \end{displaymath} as taking the $\curl$ on the last component of $\tilde{u}$, i.e. for $I \in [N]^{k-1}$ \begin{displaymath} (\curl^{(k)} u)_I = \frac{1}{2} \sum_{i,j =1}^{N} (\partial_i \tilde{u}_{I j} - \partial_j \tilde{u}_{I i})\, e_i \wedge e_j. \end{displaymath} Note that this operator has the potential $\nabla^k:C^{\infty}(\R^N,\R) \to C^{\infty}(\R^N,W^k)$ (c.f. \cite{Meyers}).
To the best of the author's knowledge, the proof of property (ZL) in this setting has not been written down explicitly anywhere, but combining the works \cite{Acerbi,Francos,Stein,Zhang} yields the result (c.f. \cite{Schiffer2}). \item In this work, it has been shown that the exterior derivative $d$ satisfies the property (ZL). The most prominent example is $\A=\divergence$. \item The result is also true if we consider matrix-valued functions instead (c.f. Proposition \ref{main5}). For example, (ZL) also holds if we consider $\divergence: C^{\infty}(\R^N,\R^{N \times M}) \to C^{\infty}(\R^N,\R^M)$, where \begin{displaymath} \divergence _i u(x) = \sum_{j=1}^N \partial_j u_{ji} (x). \end{displaymath} \item Likewise, let $\A_1\colon C^{\infty}(T_N,\R^{d_1}) \to C^{\infty}(T_N,\R^{l_1})$ and $\A_2 \colon C^{\infty}(T_N,\R^{d_2}) \to C^{\infty}(T_N,\R^{l_2})$ be two differential operators satisfying (ZL). Then also the operator \begin{displaymath} \A: C^{\infty}(T_N, \R^{d_1}\times \R^{d_2}) \to C^{\infty}(T_N,\R^{l_1}\times \R^{l_2}) \end{displaymath} defined componentwise for $u=(u_1,u_2)$ by \begin{displaymath} \A (u_1,u_2) = (\A_1 u_1, \A_2 u_2) \end{displaymath} satisfies the property (ZL). The truncation is again done separately in the two components. \end{enumerate} \end{ex} \begin{rem} This paper is concerned with $L^{\infty}$-truncation, i.e. we are in a situation with very low regularity. There has also been some progress when truncating divergence-free fields in $W^{1,\infty}$, using similar methods (c.f. \cite{Solenoidal} or \cite{Sebastian}). \end{rem} An overview of the results one is able to prove using property (ZL) can be found in the lecture notes \cite[Sec. 4]{Mueller_var} and in the book \cite[Sec. 4,7]{Rindlerbook}, where they are formulated for the case of ($\curl$)-quasiconvexity. \medskip \subsection{$\A$-quasiconvex hulls of compact sets} \label{Aqchulls} For $f \in C(\R^d,\R)$ we can define the $\A$-quasiconvex hull of $f$ by (c.f.
\cite{FM,BFL}) \begin{equation} \label{fcthull} \QA f(x) := \inf \left\{ \int_{T_N} f(x + \psi(y)) \dy \colon \psi \in C^{\infty}(T_N,\R^d) \cap \ker \A,~\int_{T_N} \psi =0 \right\}. \end{equation} $\QA f$ is the largest $\A$-quasiconvex function below $f$ \cite{FM}. In view of the separation theorem for convex sets in Banach spaces we define (c.f. \cite{CMO2,Sverak4,Sverak3}) the $\A$-quasiconvex hull of a set $K \subset \R^d$ by \begin{displaymath} K^{\A qc}_{\infty} := \left \{ x \in \R^d \colon ~ \forall f \colon \R^d \to \R \text{ } \A \text{-quasiconvex with } f_{\V K} \leq 0 \text{ we have } f(x) \leq 0 \right \}, \end{displaymath} and the $\A$-$p$-quasiconvex hull for $1 \leq p <\infty$ by \begin{align*} K^{\A qc}_p:= &\left \{ x \in \R^d \colon ~ \forall f \colon \R^d \to \R \text{ } \A \text{-quasiconvex with } f_{\V K} \leq 0\right. \text{ and } \\~ &\quad \left.\V f(v) \V \leq C(1+ \V v \V^p) \text{ we have } f(x) \leq 0 \right \}. \end{align*} The $\A$-$p$-quasiconvex hull for $1 \leq p <\infty$ can be alternatively defined via \begin{align*} K^{\A qc*}_p:= &\left \{ x \in \R^d \colon( \QA \dist^p(\cdot,K)) (x)=0 \right \}. \end{align*} If $K$ is compact, then $K^{\A qc}_p=K^{\A qc*}_p$. Moreover, the spaces $K^{\A qc}_{p}$ are nested, i.e. $K^{\A qc}_{q'} \subset K^{\A qc}_{q}$ if $q \leq q'$. In \cite{CMO2} it is shown that equality holds for $\A$ being the symmetric divergence of a matrix, $K$ compact and $1 < q,q'< \infty$. The proof can be adapted for different $\A$, but uses the Fourier transform and is not suitable for the cases $p=1$ and $p=\infty$. Here, the property (ZL) comes into play. For a compact set $K$ we define the set $K^{\A app}$ (c.f. \cite{Mueller_var}) as the set of all $x \in \R^d$ such that there exists a bounded sequence $u_n \in L^\infty(T_N,\R^d) \cap \ker \A$ with \begin{displaymath} \dist(x + u_n, K) \longrightarrow 0 \quad \text{in measure, as } n \to \infty.
\end{displaymath} \begin{thm} \label{ZLappl} Suppose that $K$ is compact and $\A$ is an operator satisfying (ZL). Then \begin{equation} \label{set1} K^{\A app} = K^{\A qc}_{\infty} = \left\{x \in \R^d \colon \QA (\dist(\cdot,K)) (x) = 0\right\}. \end{equation} \end{thm} \begin{proof} We first prove $K^{\A app} \subset K^{\A qc}_{\infty}$. Let $x \in K^{\A app}$ and take an arbitrary $\A$-quasiconvex function $f: \R^d \to [0,\infty)$ with $f_{\V K} =0$. We claim that then $f(x)=0$. Take a sequence $u_n$ from the definition of $K^{\A app}$. As $f$ is continuous and hence locally bounded, $f(x+u_n) \to 0$ in measure and $0 \leq f(x+ u_n) \leq C$. Quasiconvexity and dominated convergence yield \begin{displaymath} f(x) \leq \liminf_{n \to \infty} \int_{T_N} f(x + u_n(y)) \dy = 0. \end{displaymath} $K^{\A qc}_{\infty} \subset \left\{x \in \R^d \colon \QA (\dist(\cdot,K)) (x) = 0\right\}$ is clear by definition, as $\QA (\dist(\cdot,K))$ is an admissible separating function. The proof of the inclusion $\{ x \in \R^d \colon \QA (\dist(\cdot,K)) (x) = 0\} \subset K^{\A app}$ uses (ZL). If $\QA (\dist(\cdot,K))(x) =0$, then there exists a sequence $\phi_n \in C^{\infty}(T_N,\R^d) \cap \ker \A$ with $\int_{T_N} \phi_n =0 $ such that \begin{displaymath} 0= \QA (\dist(\cdot,K))(x) = \lim_{n \to \infty} \int_{T_N} \dist(x + \phi_n(y),K) \dy. \end{displaymath} As $K$ is compact, there exists $R>0$ such that $K \subset B(0,R)$. Moreover, as $\dist(v,K) \geq \V v \V - R$ for all $v \in \R^d$ and $v \mapsto \V v \V - R$ is convex, Jensen's inequality yields $0 = \QA (\dist(\cdot,K))(x) \geq \V x \V -R$, so also $x \in B(0,R)$. This implies that \begin{displaymath} \lim_{n \to \infty} \int_{T_N \cap \{\V \phi_n \V \geq 6R\}} \V \phi_n \V \dy =0, \end{displaymath} as $\dist(x + \phi_n(y),K) \geq \V \phi_n(y) \V - 2R \geq \tfrac{2}{3} \V \phi_n(y) \V$ whenever $\V \phi_n(y) \V \geq 6R$. We may apply (ZL) and find a sequence $\psi_n \in L^{\infty}(T_N,\R^d)\cap \ker \A$ such that \begin{displaymath} \norm \phi_n - \psi_n \norm_{L^1(T_N,\R^d)} \longrightarrow 0 \quad \text{as } n \to \infty \end{displaymath} and \begin{displaymath} \norm \psi_n \norm _{L^{\infty}(T_N,\R^d)} \leq C R. \end{displaymath} Since $\dist(\cdot,K)$ is $1$-Lipschitz, $\dist(x + \psi_n(\cdot),K) \leq \dist(x + \phi_n(\cdot),K) + \V \psi_n - \phi_n \V \to 0$ in $L^1$ and hence in measure. Hence, $x \in K^{\A app}$.
\end{proof} \begin{rem} \label{remLp} Theorem \ref{ZLappl} shows that for all $1 \leq p < \infty$ \begin{displaymath} K^{\A app} = K^{\A qc}_{\infty} = \left\{x \in \R^d \colon \QA (\dist(\cdot,K)^p) (x) = 0\right\} = K^{\A qc}_{p}. \end{displaymath} This follows directly, as all the sets $K_p^{\A qc}$ are nested and, conversely, all the hulls of the distance functions are admissible $f$ in the definition of $K^{\A qc}_{\infty}$. \end{rem} \begin{rem} Such a kind of theorem is not true for general unbounded closed sets $K$. As a counterexample one may consider $\A = \curl$ (i.e. usual quasiconvexity) and look at the set of conformal matrices $K= \{\lambda Q : \lambda \in \R^+, Q \in SO(n) \} \subset \R^{n \times n}$. If $n \geq 2$ is even (c.f. \cite{MSY}), there exists a quasiconvex function $F \colon \R^{n \times n} \to \R$ with $F(x) = 0 \Leftrightarrow x \in K$ and \begin{displaymath} 0 \leq F(A) \leq C(1+ \V A \V^{n/2}). \end{displaymath} On the other hand, let $n \geq 4$ be even and $F \colon \R^{n \times n} \to \R$ be a rank-one convex function with $F_{\V K} =0$ and for some $p < n/2$ \begin{displaymath} 0 \leq F(A) \leq C(1+ \V A \V^{p}). \end{displaymath} Then $F=0$ by \cite{Yan}. A reason for the nice behaviour of compact sets is that for such sets all distance functions are coercive, i.e. \begin{displaymath} \dist(v,K)^p \geq \V v \V^p - C, \end{displaymath} which is obviously not true for non-compact sets. Coercivity of a function is often needed for relaxation results (c.f. \cite{BFL}). \end{rem} \subsection{$\A$-$\infty$-Young measures} We consider $\M(\R^d)$, the set of signed Radon measures with finite mass. Note that this is the dual space of $C_0(\R^d)$ with the dual pairing \begin{displaymath} \li \mu ,f \re = \int_{\R^d} f(y)~ \textup{d}\mu(y).
\end{displaymath} For a measurable set $E \subset \R^N$ we call $\mu: E \to \M(\R^d)$ weak$*$ measurable if the map \begin{displaymath} x \longmapsto \li \mu(x),f \re \end{displaymath} is measurable for all $f \in C_c(\R^d)$. Later, we may consider the space $L^{\infty}_w (E,\M(\R^d))$, which is the space of all weak$*$ measurable maps $\mu$ such that $\spt \mu_x \subset B(0,R)$ for some $R>0$ and for a.e. $x \in E$. This space is equipped with the topology in which $\nu^n \weakstar \nu$ if and only if for all $f \in C_0(\R^d)$ \begin{displaymath} \li \nu^n_x , f \re \weakstar \li \nu_x , f \re \text{ in } L^{\infty}(E). \end{displaymath} \begin{rem} \label{metrizable} The topology of $L^{\infty}_w(E,\M(\R^d))$ is metrisable on bounded sets: Note that $\nu^n$ supported on $B(0,R)$ converges to $\nu$ if and only if for all $f \in C(\bar{B}(0,R))$ and all $g \in L^1(E)$ \begin{displaymath} \int_E \li \nu^n_x ,f \re g(x) \dx \longrightarrow \int_E \li \nu_x , f \re g(x) \dx. \end{displaymath} If $\nu^n$ is bounded, then this equation holds for all $f,g$ if and only if it holds for all $f,g$ in dense subsets of $C(\bar{B}(0,R))$ and $L^1(E)$. As these spaces are separable, we may consider a countable dense subset $(f_k,g_k)_{k \in \N}$ of $C(\bar{B}(0,R))\times L^1(E)$ and, for $k \in \N$, \begin{displaymath} d_k (\nu, \mu) = \left \V \int_E \li \nu_x - \mu_x ,f_k \re g_k(x) \dx \right \V, \end{displaymath} and then define the metric \begin{displaymath} d(\nu, \mu) = \sum_{k \in \N} 2^{-k} \frac{d_k(\nu,\mu)}{1+ d_k(\nu,\mu)}. \end{displaymath} \end{rem} Let us now recall the Fundamental Theorem of Young measures (c.f. \cite{Ball,compcomp}). \begin{prop} [Fundamental Theorem of Young measures] \label{FTOYM} Let $E \subset \R^N$ be a measurable set of finite measure and $u_j: E \to \R^d$ a sequence of measurable functions.
There exists a subsequence $u_{j_k}$ and a weak$*$ measurable map $\nu: E \to \M(\R^d)$ such that the following properties hold: \begin{enumerate} [i)] \item $\nu_x \geq 0$ and $\norm \nu_x \norm_{\M(\R^d)} = \int_{\R^d} 1 ~\textup{d} \nu_x \leq 1$; \item $\forall f \in C_0(\R^d)$ define $\bar{f}(x) = \li \nu_x,f \re$. Then $f(u_{j_k}) \weakstar \bar{f} \text{ in } L^{\infty}(E)$; \item If $K \subset \R^d$ is compact, then $\spt \nu_x \subset K$ if $\dist(u_{j_k},K) \to 0$ in measure; \item It holds \begin{equation} \label{1prime} \norm \nu_x \norm_{\M(\R^d)} = 1 \text{ for a.e. } x \in E \end{equation} if and only if \begin{displaymath} \lim_{M \to \infty} \sup_{k \in \N} \left \V \{ \V u_{j_k} \V \geq M\} \right \V =0; \end{displaymath} \item If \eqref{1prime} holds, then for all $A \subset E$ measurable and for all $f \in C(\R^d)$ such that $f(u_{j_k})$ is relatively weakly compact in $L^1(A)$, also \begin{displaymath} f(u_{j_k}) \weakto \bar{f} \text{ in } L^1(A); \end{displaymath} \item If \eqref{1prime} holds, then the converse implication in iii) holds as well. \end{enumerate} \end{prop} We call such a map $\nu: E \to \M(\R^d)$ the Young measure generated by the sequence $u_{j_k}$. One may show that every weak$*$ measurable map $E \to \M(\R^d)$ satisfying i) is generated by some sequence $u_{j_k}$. \begin{rem} If $u_k$ generates a Young measure $\nu$ and $v_k \to 0$ in measure (in particular, if $v_k \to 0$ in $L^1$), then the sequence $(u_k + v_k)$ still generates $\nu$. If $u:T_N \to \R^d$ is a measurable function, we may consider the oscillating sequence $u_n(x) := u(nx)$. This sequence generates the homogeneous (i.e. $\nu_x = \nu$ a.e.) Young measure $\nu$ defined by \begin{displaymath} \li \nu , f \re = \int_{T_N} f(u(y)) \dy. \end{displaymath} \end{rem} \begin{quest} What happens if we impose further conditions on the sequence $u_{j_k}$, for instance $\A u_{j_k} =0$?
\end{quest} For $1 \leq p < \infty$ we call a sequence $v_j \in L^p(\Omega,\R^d)$ $p$-equi-integrable if \begin{displaymath} \lim \limits_{\varepsilon \to 0} \sup_{j \in \N} \sup_{E \subset \Omega \colon \V E \V < \varepsilon} \int_E \V v_j(y) \V ^p \dy = 0. \end{displaymath} \begin{defi} Let $1 \leq p \leq \infty$. We call a map $\nu \colon \Omega \to \M(\R^d)$ an $\A$-$p$-Young measure if there exists a $p$-equi-integrable sequence $\{v_j\} \subset L^p(\Omega, \R^d)$ (for $p = \infty$ a bounded sequence) such that $v_j$ generates $\nu$ and satisfies $\A v_j =0$. \end{defi} For $1 \leq p < \infty$ the set of $\A$-$p$-Young measures was classified by \textsc{Fonseca} and \textsc{M\"uller} in \cite{FM} and for the special case $\A=\curl$ already in \cite{Pedregal}. \begin{prop} Let $1 \leq p < \infty$. $\nu: T_N \to \M(\R^d)$ is an $\A$-$p$-Young measure if and only if \begin{enumerate} [i)] \item $\exists v \in L^p(T_N,\R^d)$ such that $\A v =0$ and \begin{displaymath} v(x) = \li \nu_x , \id \re = \int_{\R^d} y ~\textup{d}\nu_x(y) \text{ for a.e. } x \in T_N; \end{displaymath} \item $\int_{T_N} \int_{\R^d} \V z \V ^p ~\textup{d} \nu_x(z) \dx < \infty$; \item for a.e. $x \in T_N$ and all continuous $g$ with $\V g(v) \V \leq C(1 + \V v \V^p)$ we have \begin{displaymath} \li \nu_x, g \re \geq \QA g(\li \nu_x, \id \re). \end{displaymath} \end{enumerate} \end{prop} Recently, there has also been progress for so-called generalized Young measures ($p=1$ is a special case), c.f. \cite{RindlerKris,KirchKris,RindlerDP}. Let us recall the result of \textsc{Kinderlehrer} and \textsc{Pedregal} for $W^{1,\infty}$-gradient Young measures (c.f. \cite{Kinderlehrer}). \begin{prop} A weak$*$ measurable map $\nu : \Omega \to \M (\R^{N \times m})$ is a $\curl$-$\infty$-Young measure if and only if $\nu_x \geq 0$ a.e. and there exists $K \subset \R^{N \times m}$ compact, $v \in L^{\infty}(\Omega,\R^{N \times m})$ such that \begin{enumerate} [a)] \item $\spt \nu_x \subset K$ for a.e.
$x \in \Omega$; \item $\li \nu_x, \id \re = v$ for a.e. $x \in \Omega$; \item for a.e. $x \in \Omega$ and all continuous $g \colon \R^{N \times m} \to \R$ we have \begin{displaymath} \li \nu_x, g \re \geq \mathcal{Q}_{\curl} g(\li \nu_x, \id \re). \end{displaymath} \end{enumerate} \end{prop} It is possible to state such a theorem in the general setting that $\A$ satisfies (ZL). The proofs in \cite{Kinderlehrer} mostly rely on this property, and the general case of operators satisfying (ZL) can be treated in the same fashion with few modifications. The details of the proofs can be found in the Appendix. Let us first state the classification theorem for so-called homogeneous $\A$-$\infty$-Young measures, i.e. $\A$-$\infty$-Young measures $\nu \colon T_N \to \M(\R^d)$ with the following properties \begin{enumerate} [i)] \item $\spt {\nu_x} \subset K$ for a.e. $x \in T_N$, where $K \subset \R^d$ is compact; \item $\nu$ is a homogeneous Young measure, i.e. there exists $\nu_0 \in \M(\R^d)$ such that $\nu_x= \nu_0$ for a.e. $x \in T_N$. \end{enumerate} Define the set $\M^{\A qc} (K)$ by (cf. \cite{Sverak2}) \begin{equation} \label{Maqc} \M^{\A qc}(K) =\left\{ \nu \in \M(\R^d) \colon \nu \geq 0,~\spt \nu \subset K,~\li \nu,f \re \geq f( \li \nu, \id \re)~\forall f \colon \R^d \to \R \text{ } \A \text{-qc}\right\}. \end{equation} Denote by $H_{\A}(K)$ the set of homogeneous $\A$-$\infty$-Young measures supported on $K$. We are now able to formulate the classification of these measures (cf. \cite[Theorem 5.1]{Kinderlehrer}). \begin{thm}[Characterisation of homogeneous $\A$-$\infty$-Young measures] \label{hommain} Let $\A$ satisfy the property (ZL) and $K$ be a compact set. Then \begin{displaymath} H_{\A}(K) = \M ^{\A qc} (K). \end{displaymath} \end{thm} Using this result, one may prove the characterisation of $\A$-$\infty$-Young measures (cf. \cite[Theorem 6.1]{Kinderlehrer}).
\begin{thm}[Characterisation of $\A$-$\infty$-Young measures]\label{mainYM} Suppose that $\A$ satisfies the property (ZL). A weak$*$ measurable map $\nu: T_N \to \M(\R^d)$ is an $\A$-$\infty$-Young measure if and only if $\nu_x \geq 0$ a.e. and there exists $K \subset \R^d$ compact and $u \in L^{\infty}(T_N,\R^d) \cap \ker \A$ with \begin{enumerate} [i)] \item $\spt \nu_x \subset K$ for a.e. $x \in T_N$; \item $\li \nu_x, \id \re = u$ for a.e. $x \in T_N$; \item $\li \nu_x, f \re \geq f(\li \nu_x, \id \re)$ for a.e. $x \in T_N$ and all continuous and $\A$-quasiconvex $f:\R^d \to \R$. \end{enumerate} \end{thm} As mentioned, the proofs in the case $\A =\curl$ can be found in \cite{Kinderlehrer,Mueller_var,Rindlerbook} and in the general case in the Appendix. Let us shortly describe the strategy of the proofs. For Theorem \ref{hommain} one may prove that $H_{\A}(K)$ is weakly compact, that averages of (non-homogeneous) $\A$-$\infty$-Young measures are in $H_{\A}(K)$, and that the set $H_{\A}^x(K)= \{ \nu \in H_{\A}(K) \colon \li \nu, \id \re =x\}$ is weak$*$ closed and convex. The characterisation theorem then follows by using the Hahn-Banach separation theorem and showing that any $\mu \in \M^{\A qc}(K)$ cannot be separated from $H_{\A}(K)$, i.e. for all $f \in C(K)$ and for all $\mu \in \M^{\A qc}(K)$ with $\li \mu, \id \re=0 $ \begin{displaymath} \li \nu, f \re \geq 0 \text{ for all } \nu \in H_{\A}^0(K) \Rightarrow \li \mu, f \re \geq 0. \end{displaymath} Theorem \ref{mainYM} can then be shown using Theorem \ref{hommain} and a localisation argument. \textbf{Acknowledgements:} The author would like to thank Stefan M\"uller for introducing him to the topic and for helpful discussions. The author has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the graduate school BIGS of the Hausdorff Center for Mathematics (GZ EXC 59 and 2047/1, Projekt-ID 390685813).
\begin{appendix} \section{Proofs of Theorem \ref{hommain} and Theorem \ref{mainYM}} We want to prove the characterisation Theorem \ref{hommain}. For this result we shall prove some auxiliary lemmas first. We mainly follow the proofs in the $\curl$-free setting in \cite{Kinderlehrer,Rindlerbook,Mueller_var}. \begin{lemma}[Properties of $H_{\A}(K)$] \label{prop1} \begin{enumerate} [i)] \item If $\nu \in H_{\A}(K)$ with $\li \nu, \id \re =0$, then there exists a sequence $u_j \in L^{\infty}(T_N,\R^d)$ such that $\A u_j =0$, $u_j$ generates $\nu$ and $\norm u_j \norm_{L^{\infty}(T_N,\R^d)} \leq C \sup_{z \in K} \V z \V = C \V K \V_{\infty}$. \item $H_{\A}(K)$ is weakly$*$ compact in $\M (\R^d)$. \end{enumerate} \end{lemma} \begin{proof} i) follows from the definition of $H_{\A}(K)$. The uniform bound on the $L^{\infty}$ norm of $u_j$ can be guaranteed by (ZL) and vi) in Theorem \ref{FTOYM}. For the weak$*$ compactness note that $H_{\A}(K)$ is contained in the weak$*$ compact set $\P(K)$ of probability measures on $K$. As the weak$*$ topology is metrisable on $\P(K)$, it suffices to show that $H_{\A}(K)$ is sequentially closed. Hence, we consider a sequence $\nu_k \subset H_{\A}(K)$ with $\nu_k \weakstar \nu$ and show that $\nu \in H_{\A}(K)$. Due to the definition of Young measures, we may find sequences $u_{j,k} \in L^{\infty}(T_N,\R^d) \cap \ker \A$ such that $u_{j,k}$ generates $\nu_k$ as $j \to \infty$. Recall that the topology of generating Young measures is metrisable on bounded sets of $L^{\infty}(T_N,\R^d)$ (cf. Remark \ref{metrizable}). We may find a diagonal subsequence $u_{j_k,k}$ which generates $\nu$. As we know that $\norm u_{j_k,k} \norm_{L^{\infty}} \leq C \V K \V_{\infty}$, it follows that $\nu \in H_{\A}(K)$, and hence $H_{\A}(K)$ is closed. \end{proof} \begin{lemma} \label{average} Let $\nu$ be an $\A$-$\infty$-Young measure generated by a bounded sequence $u_k \in L^{\infty}(T_N,\R^d) \cap \ker \A$.
Then the measure $\bar{\nu}$ defined via duality for all $f \in C_0(\R^d)$ by \begin{displaymath} \li \bar{\nu},f \re = \int_{T_N} \li \nu_x ,f \re \dx \end{displaymath} is in $H_{\A}(K)$. \end{lemma} \begin{proof} For $n \in \N$ define $u_k^n \in L^{\infty}(T_N,\R^d) \cap \ker \A$ by $u_k^n(x)=u_k(nx)$. Then for all $f \in C_0(\R^d)$ \begin{displaymath} f(u_k^n) \weakstar \int_{T_N} f(u_k(y)) \dy \text{ in } L^{\infty}(T_N) \quad \text{as } n \to \infty . \end{displaymath} Note that by Theorem \ref{FTOYM} ii) we also have \begin{displaymath} \int_{T_N} f(u_k(x)) \dx \longrightarrow \int_{T_N} \li \nu_x, f \re \dx \quad \text{ as } k \to\infty. \end{displaymath} Due to metrisability on bounded sets (Remark \ref{metrizable}), we can find a subsequence $u_k^{n(k)}$ in $L^{\infty}(T_N,\R^d)$ such that \begin{displaymath} f(u_k^{n(k)}) \weakstar \int_{T_N} \li \nu_x, f \re \dx \quad \text{as } k \to \infty. \end{displaymath} Thus, $\bar{\nu} \in H_{\A}(K)$. \end{proof} \begin{lemma} Define the set $H_{\A}^x(K) := \{ \nu \in H_{\A}(K) \colon \li \nu, \id \re = x\}$. Then $H_{\A}^x(K)$ is weak$*$ closed and convex. \end{lemma} \begin{proof} Weak$*$ closedness is clear by Lemma \ref{prop1}. It suffices to show that for any $\nu_1$, $\nu_2 \in H_{\A}^x(K)$ and $\lambda \in [0,1]$ it holds $\lambda \nu_1 + (1- \lambda) \nu_2 \in H_{\A}^x(K)$. Due to Lemma \ref{prop1} there exist bounded sequences $u_j$, $v_j \in L^{\infty}(T_N,\R^d) \cap \ker \A$ generating $\nu_1$ and $\nu_2$, respectively. Note that $u_j$ generates $\nu_1$ iff for all $f \in C_c(\R^d)$ \begin{displaymath} f(u_j) \weakstar \int_{\R^d} f~ \textup{d}\nu_1 \text{ as } j \to \infty \text{ in } L^{\infty}(T_N). \end{displaymath} Now consider the sequence $u_j^n(\cdot) = u_j(n\cdot)$ for $n \in \N$. We know that for all $f \in C_c(\R^d)$ \begin{displaymath} f(u_j^n) \weakstar \int_{T_N} f(u_j(y)) \dy \quad \text{ as } n \to \infty \text{ in } L^{\infty}(T_N).
\end{displaymath} As $\nu_1$ is a homogeneous measure, we also know that \begin{displaymath} f(u_j) \weakstar \int_{\R^d} f~ \textup{d}\nu_1 \quad \text{as } j \to \infty \text{ in } L^{\infty}(T_N). \end{displaymath} By a diagonalisation argument (which works since $L^{\infty}_w(T_N,\M(\R^d))$ is metrisable, Remark \ref{metrizable}), we may pick a subsequence $u_j^{n(j)}$ with the following properties: \begin{enumerate} [a)] \item $u_j^{n(j)}$ generates the homogeneous Young measure $\nu_1$; \item $n(j) \to \infty$ as $j \to \infty$. \end{enumerate} We now use that we have a potential operator $\B$ of degree $k$ (cf. \cite{Raita}). Hence, we may find $U_j \in W^{k,2}(T_N,\R^m)$ satisfying \begin{enumerate} [i)] \item $\B U_j = u_j$; \item $\norm U_j \norm _{W^{k,2}(T_N,\R^m)} \leq C \norm u_j \norm_{L^2(T_N,\R^d)} \leq C \norm u_j \norm_{L^{\infty}(T_N,\R^d)}$. \end{enumerate} By scaling, we get potentials $U_j^{n(j)} = n(j)^{-k} U_j(n(j)x)$ such that $\B U_j^{n(j)} = u_j^{n(j)}$. Consider smooth cut-off functions $\phi_n \in C_c^{\infty}((0,\lambda)\times(0,1)^{N-1}, [0,1])$ such that $\phi_n \to 1$ pointwise and in $L^1((0,\lambda)\times(0,1)^{N-1},\R)$, $\phi_n (x) =1 $ if $\dist(x,\partial ((0,\lambda)\times(0,1)^{N-1})) > \frac{1}{n}$, and for all $l \in \N$ \begin{displaymath} \norm \nabla^l \phi_n \norm_{L^{\infty}} \leq C_l n^l. \end{displaymath} We define $\tilde{U}_j^{n(j)} = \phi_{n(j)} \cdot U_j^{n(j)}$ and $\tilde{u}_j = \B \tilde{U}_j^{n(j)}$. Note that \begin{enumerate} [1)] \item $\A \tilde{u}_j = 0$, as it is an element of the image of $\B$; \item $\tilde{u}_j$ is compactly supported in $(0,\lambda)\times(0,1)^{N-1}$; \item $\norm \tilde{u}_j - u_j^{n(j)} \norm_{L^1} \to 0$ as $j \to \infty$. \end{enumerate} As $u_j^{n(j)}$ is in $L^{\infty}$ with uniform bound $C \V K \V_{\infty}$ (cf.
Lemma \ref{prop1}), property 3) implies \begin{equation} \label{Zhangbound1} \int_{\{y \in (0,\lambda)\times(0,1)^{N-1} \colon \V \tilde{u}_j(y) \V \geq C \V K \V_{\infty}\}} \V \tilde{u}_j(y) \V \dy \longrightarrow 0 \quad \text{as } j \to \infty. \end{equation} Starting from $v_j$, we may construct a similar sequence $\tilde{v}_j$ satisfying $\A \tilde{v}_j =0$, such that $\tilde{v}_j$ generates $\nu_2$, is compactly supported in $(\lambda,1)\times (0,1)^{N-1}$ and also satisfies an estimate of type \eqref{Zhangbound1}. Due to property 3), $\tilde{u}_j$ still generates the homogeneous Young measure $\nu_1$. Hence, \begin{displaymath} w_j := \left\{ \begin{array}{ll} \tilde{u}_j & \text{on } (0,\lambda) \times (0,1)^{N-1}, \\ \tilde{v}_j & \text{on } (\lambda,1) \times (0,1)^{N-1} \end{array} \right. \end{displaymath} satisfies $\A w_j =0$, $w_j$ is compactly supported on the unit cube, and due to \eqref{Zhangbound1}, \begin{displaymath} \int_{\{\V w_j \V \geq C \V K \V_{\infty}\}} \V w_j(y) \V \dy \longrightarrow 0 \quad \text{as } j \to \infty. \end{displaymath} Moreover, $w_j$ generates the Young measure $\nu$ with $\nu_x = \nu_1$ on $(0,\lambda) \times [0,1]^{N-1}$ and $\nu_x =\nu_2 $ on $(\lambda,1) \times [0,1]^{N-1}$. Again, consider the oscillating version $w_j^n(x) = w_j (nx)$, which generates the homogeneous Young measure given by $f \mapsto \int_{T_N} f(w_j(y)) \dy$. By the same argument as before, taking a suitable subsequence $w_j^{n(j)}$, this subsequence generates $\lambda \nu_1+ (1-\lambda) \nu_2$. Note that $w_j^{n(j)}$ is not uniformly bounded in $L^{\infty}$, but it is almost in $L^{\infty}$, i.e. \begin{displaymath} \int_{\{ y \in T_N \colon \V w_j^{n(j)} \V \geq C \V K \V_{\infty} \}} \V w_j^{n(j)} \V \dy \longrightarrow 0 \quad \text {as } j \to \infty. \end{displaymath} Hence, by property (ZL), there exists a sequence $\widetilde{w}_j$ uniformly bounded in $L^{\infty}(T_N,\R^d) \cap \ker \A$ and with $\norm \widetilde{w}_j - w_j^{n(j)} \norm_{L^1(T_N,\R^d)} \to 0$ as $j \to \infty$.
Thus, $\widetilde{w}_j$ still generates $\lambda \nu_1 + (1- \lambda) \nu_2$ and is bounded in $L^{\infty}$. We conclude that $\lambda \nu_1 + (1- \lambda) \nu_2 \in H_{\A}^x(K)$ and thus this set is convex. \end{proof} \begin{rem} \label{boundary} Using the construction we made in the proof (one can choose $\lambda=1$), we may also prove the following characterisation of $\A$-$\infty$-Young measures: $\nu$ is a homogeneous $\A$-$\infty$-Young measure if there exists a sequence $u_n \in L^{1}(\R^N,\R^d) \cap \ker \A$ with $\spt u_n \subset Q = [0,1]^N$ and, for some $L>0$, \begin{displaymath} \int_{\{\V u_n \V \geq L\}} \V u_n \V \dy \longrightarrow 0 \quad \text{as } n \to \infty. \end{displaymath} By convolution we may even assume that $u_n \in C_c^{\infty}((0,1)^N,\R^d) \cap \ker \A$. \end{rem} We are now ready to prove Theorem \ref{hommain}. \begin{proof}[Proof of Theorem \ref{hommain}:] We have that $H_{\A}(K) \subset \M^{\A qc}(K)$ due to the fundamental theorem of Young measures: $\nu \geq 0$ and $\spt \nu \subset K$ are clear by i) and iii) of Theorem \ref{FTOYM}. The corresponding inequality follows by $\A$-quasiconvexity, i.e. if $u_n \in L^{\infty}(T_N,\R^d) \cap \ker \A$ generates the Young measure $\nu$, then \begin{displaymath} \li \nu, f \re = \lim_{n \to \infty} \int_{T_N} f (u_n(y)) \dy \geq \liminf_{n \to \infty} f\left(\int_{T_N} u_n(y) \dy \right) = f(\li \nu, \id \re). \end{displaymath} To prove $\M^{\A qc}(K) \subset H_{\A}(K)$, w.l.o.g. consider a measure $\mu$ such that $\li \mu, \id \re=0$. We just proved that $H_{\A}^0(K)$ is weak$*$ closed and convex. Recall that the dual space of $C(K)$ is the space of signed Radon measures $\M(K)$, equipped with the weak$*$ topology (see e.g. \cite{Rudin}). Hence, by the Hahn-Banach separation theorem, it suffices to show that for all $f \in C(K)$ and all $\mu \in \M^{\A qc}(K)$ with $\li \mu,\id \re=0$ \begin{displaymath} \li \nu, f \re \geq 0 \text{ for all } \nu \in H_{\A}^0(K) \Rightarrow \li \mu, f \re \geq 0.
\end{displaymath} To this end, fix some $f \in C(K)$, consider a continuous extension to $C_0(\R^d)$ and let \begin{displaymath} f_k(x) = f(x) + k \dist^2(x,K). \end{displaymath} We claim that \begin{equation} \label{homogclaim} \lim_{k \to \infty} \QA f_k(0) \geq 0. \end{equation} If we show \eqref{homogclaim}, $\mu$ satisfies \begin{displaymath} \li \mu,f \re = \li \mu, f_k \re \geq \li \mu , \QA f_k \re \geq \QA f_k(0), \end{displaymath} finishing the proof. For the identity $\li \mu,f \re = \li \mu, f_k \re $ recall that $\mu $ is supported in $K$ and $\dist^2(x,K) = 0$ for $x \in K$. Hence, suppose that \eqref{homogclaim} is false. As $f_k$ is non-decreasing in $k$, so is $\QA f_k(0)$, and hence there exists $\delta >0$ such that \begin{displaymath} \QA f_k(0) \leq - 2 \delta, \quad k \in \N. \end{displaymath} Using the definition of the $\A$-quasiconvex hull \eqref{fcthull}, we get $u_k \in L^{\infty}(T_N,\R^d) \cap \ker \A$ with $\int_{T_N} u_k(y) \dy =0$ and \begin{equation} \label{contradict1} \int_{T_N} f_k (u_k(y)) \dy \leq -\delta. \end{equation} We may assume that $u_k \weakto 0$ in $L^2(T_N,\R^d)$ and also that $\dist(u_k,K) \to 0$ in $L^1(T_N)$. By property (ZL), there exists a sequence $v_k \in \ker \A$ bounded in $L^{\infty}(T_N,\R^d)$ with $\norm u_k -v_k \norm_{L^1} \to 0$. $v_k$ generates (up to taking subsequences) a Young measure $\nu$ with $\spt \nu_x \subset K$. Then for fixed $j \in \N$, using Lemma \ref{average} and that $\bar{\nu} \in H_{\A}(K) \subset \M^{\A qc}(K)$, \begin{displaymath} \liminf_{k \to \infty} \int_{T_N} f_j(u_k(y)) \dy \geq \liminf_{k \to \infty} \int_{T_N} f_j(v_k(y)) \dy = \int_{T_N} \int_{\R^d} f_j ~\textup{d} \nu_x \dx = \li \bar{\nu}, f \re \geq 0. \end{displaymath} Here we used that $f_j = f$ on $K \supset \spt \nu_x$ and that $\li \bar{\nu}, \id \re = 0$, so that $\li \bar{\nu}, f \re \geq 0$ by assumption. But this is a contradiction to \eqref{contradict1}, as $f_k \geq f_j$ if $k \geq j$. \end{proof} Using the result of Theorem \ref{hommain}, we are now able to prove Theorem \ref{mainYM}. \begin{proof}[Proof of Theorem \ref{mainYM}:] We first establish the necessity of i)--iii).
If $\nu$ is an $\A$-$\infty$-Young measure, then there exists $u_n \in L^{\infty}(T_N,\R^d) \cap \ker \A$ that generates $\nu$. As $u_n$ is bounded in $L^{\infty}$, say $\norm u_n \norm_{L^{\infty}} \leq R$, we have $\spt \nu_x \subset \bar{B}(0,R)$. Moreover, $u_n \weakstar \li \nu_x, \id \re=: u$, hence the function $u$ satisfies $\A u=0$. iii) follows from the weak$*$ lower semicontinuity result from \cite{FM}. Note that for $\A$-quasiconvex $f \in C(\R^d)$ the values $f(u_n)$ only depend on the restriction $f: \bar{B}(0,R) \to \R$, hence we may consider a dense countable subset of $C(\bar{B}(0,R))$. Then for a.e. $x \in T_N$ and all $f$ in this countable subset we may use the Lebesgue point theorem and see that \begin{align*} \li \nu_x , f \re &= \lim_{r \to 0} \fint_{B_r(x)} \li \nu_y, f \re \dy \\ & = \lim_{r \to 0} \lim_{n \to \infty} \fint_{B_r(x)} f(u_n(y)) \dy \\ &\geq \lim_{r \to 0} \fint_{B_r(x)} f(u(y)) \dy \\ &= f( \li \nu_x, \id \re). \end{align*} For sufficiency consider such a $\nu \geq 0$ satisfying i) - iii). Suppose first that $\li \nu_x, \id \re =0$ for a.e. $x \in T_N$. We divide $T_N$ into $n^N$ equisized subcubes $Q^n_i = q_i +[0,1/n)^N$, $i \in [n^N]$, where $q_i \in 1/n \Z^N$. Define $\nu^n_i$ by duality as \begin{equation} \label{nuin} \li \nu_i^n, f \re = \fint_{Q^n_i} \li \nu_x, f \re \dx \end{equation} and $\nu^n: T_N \to \mathcal{M}(\R^d)$ by \begin{displaymath} (\nu^n)_x = \nu^n _i \text{ if } x \in Q^n_i. \end{displaymath} Note that $\nu^n$ is bounded and for all $f \in C_c(\R^d)$ \begin{displaymath} \li \nu^n , f \re \longrightarrow \li \nu, f \re \text{ in } L^1(T_N). \end{displaymath} Due to boundedness we also know that \begin{displaymath} \nu^n \weakstar \nu \text{ in } L^{\infty}_w(T_N,\M(\R^d)). \end{displaymath} We only need to show that $\nu^n$ is an $\A$-$\infty$-Young measure, as then we may take a diagonal sequence to get the Young measure $\nu$ (recall that the topology was metrisable on bounded sets, Remark \ref{metrizable}). Fix some $n \in \N$.
Note that for any $i \in [n^N]$, by Theorem \ref{hommain}, the measure $\nu^n_i$ is a homogeneous $\A$-$\infty$-Young measure. Indeed, \begin{enumerate} [i)] \item $\nu_i^n \geq 0$, \item $\spt(\nu_i^n) \subset K$, as for every $x \in Q^n_i$ we have $\spt \nu_x \subset K$, \item for every $\A$-quasiconvex $f\in C(\R^d)$ we have \begin{align*} \li \nu_i^n , f\re = \fint_{Q^n_i} \li \nu_x,f \re \dx \geq \fint_{Q^n_i} f(\li \nu_x, \id \re) \dx = f(0) = f(\li \nu_i^n, \id \re). \end{align*} \end{enumerate} Hence, by Remark \ref{boundary} there exists a sequence $u^k_i$ generating $\nu^n_i$ with $u^k_i \in C_c^{\infty}(Q,\R^d) \cap \ker \A$ and \begin{displaymath} \int_{\{ \V u^k_i \V \geq C R \}} \V u^k_i \V \dy \leq 1/k. \end{displaymath} By rescaling, we may find $\tilde{u}^k_i \in C_c^{\infty}(Q^n_i,\R^d) \cap \ker \A$ with $\int_{\{\V \tilde{u}^k_i\V \geq C R\}} \V \tilde{u}^k_i \V \dy \leq n^{-N}/k$. Thus, defining $v_k = \tilde{u}^k_i$ on $Q^n_i$, we obtain $v_k \in L^1(T_N,\R^d) \cap \ker \A$ with \begin{displaymath} \int_{\{\V v_k \V \geq CR\}} \V v_k \V \dy \leq 1/k. \end{displaymath} $v_k$ generates $\nu^n$ by construction. By (ZL), we may find a bounded sequence $\tilde{v}_k \in L^{\infty}(T_N,\R^d) \cap \ker \A$ with $\norm \tilde{v}_k - v_k \norm_{L^1(T_N,\R^d)} \to 0$, which therefore still generates $\nu^n$. Thus, $\nu^n$ is an $\A$-$\infty$-Young measure, and then so is $\nu$. For $\li \nu_x, \id \re = u$ consider the translated measure $\tilde{\nu}$ defined by \begin{displaymath} \li \tilde{\nu}_x, f \re = \int_{\R^d} f(y - u(x)) ~\textup {d} \nu_x(y). \end{displaymath} Note that $\li \tilde{\nu}_x, \id \re = 0$ and $\tilde{\nu}$ still satisfies i)-iii). Then we may find a bounded sequence $w_k \in L^{\infty}(T_N,\R^d) \cap \ker \A$ generating $\tilde{\nu}$. The functions $w_k + u \in L^{\infty}(T_N,\R^d) \cap \ker \A$ then generate $\nu$. \end{proof} \end{appendix} \bibliographystyle{alpha}
Q: Slow to type login password When I initially start Ubuntu I have to hold down each key for about 1 second to type in the password. Just touching the keys as normal has no effect. This behavior does not persist if I log out and log in again. I believe this behavior first occurred after installing and tinkering with powertop. Any ideas where to start looking to rectify this?
Many people still hold the mistaken belief that fasted training (exercising on an empty stomach) is ineffective and harmful. I've seen many "experts" claim that it doesn't increase fat burning and that it actually makes your body burn muscle instead. New research explains why fasted training improves fat burning and how eating before a workout can blunt long-term improvements.
The Rebel Sweetheart.: Sign-Up | Coachlicious Giveaway. Mom Powered Media is currently taking sign-ups for the Coachlicious Giveaway! The event will run from August 1-22, and at stake is a Coach Madison Op Art Sateen Handbag. Event details, posting guidelines, and sign-up form can be found here.
Deskaheh was the native name of Levi General (Grand River, Ontario, 1873 – Tuscarora Reservation, New York, 1925). He was a Cayuga leader, raised in a traditional family of the shao-hyowa (Great Sky) clan. In 1923 he went to the headquarters of the League of Nations in Geneva, travelling on a passport of the Iroquois Confederacy, and there presented the memorial The Red Man's Appeal for Justice, bringing with him the Two Row Wampum, the oldest treaty signed with Europeans. They received support from Persia, Ireland, Estonia and Panama, but in March 1924 all of these came under British pressure and the initiative was halted. On 7 October 1924 the Mounted Police dissolved the parliament of the Six Nations, seizing documents and wampums and calling new elections. He then went into exile in the USA, where he died.
#import <UIKit/UIKit.h>

@class cityModel;

@interface cityFirstTableViewCell : UITableViewCell
@property (nonatomic, strong) cityModel *model;
@property (strong, nonatomic) IBOutlet UIButton *ClassifyButton;
@end
Tile flooring is one of the more popular choices of today. You can see a lot of modern houses as well as condominium units and apartments with tile flooring. People seem to like it because of its versatility and attractiveness. The earliest samples of tile flooring can be traced back to the late Middle Ages (and even a little earlier than that), remnants of which can be found in Ancient Greece and Rome. At that time, mosaic tile flooring, which was composed of colored clay, glass or shells and was arranged to produce unique montage patterns, was very popular. In fact, twelfth century French churches had mosaic tile flooring in yellow, black or green hues. Today, tile flooring continues to be in heavy use in Europe in various forms. In castles and ancient edifices, stone tile flooring can be seen. This kind of flooring is composed of thin slabs of stone of various shapes and sizes and, using some kind of grout, such are arranged in the same way that tiles of today are arranged. While tile flooring can be plain, it can also be very ornate. An example of ornate tile flooring is marble tile flooring. Moneyed people usually prefer this for their homes, especially since it is expensive and projects lavishness. Marble tile flooring is also frequently used in banks, embassies and other public places. Because the look of marble tile can truly give a little zing to a regular house in the suburbs, let's say, people who can't afford the real thing even resort to buying linoleum that has the marble design. Another kind of tile flooring is one that uses porcelain. Often used in bathrooms or shower rooms, porcelain tile flooring can be cleaned easily; all you need is a good liquid or powder bathroom cleanser. Porcelain tile flooring is also used by some for their patios. Contact your local handyman if you wish to have tile flooring installed in your home.
If you want to try your hand at doing it, though, always bear in mind that you must carefully measure the area that you want to tile. Get all exact measurements. Don't hesitate to get expert advice on the best kind of tile to use and the most suitable grout to secure the tile in place. Even amid the presence of other types of flooring like wood, vinyl, linoleum, ceramic, marble and rubber, tile flooring will still be a favorite for most structures. A vacuum cleaner is one of the most effective cleaning tools. A wide variety of vacuum cleaners with different features is available, so if you are going to buy a vacuum cleaner, make sure that you make a wise choice and choose the one that suits your needs best. To help yourself, make things clearer: have a look at the different types of vacuum cleaners available, what features you are looking for, what type of flooring you have at home, and how big the area is that you are going to clean with the help of your vacuum cleaner. Upright vacuum cleaners are heavier than canister vacuum cleaners. An upright vacuum cleaner might be more difficult to maneuver and not too handy for cleaning small gaps and spots. If weight is a problem, then a canister vacuum is a good choice. The main advantage of upright vacuum cleaners is suction power: the motor is closer to the vacuum head than in a canister. Canister vacuum cleaners are easier to use because the motor unit is usually smaller and you can vacuum a huge area with the head without even moving the motor. Most canister vacuums come with attachments that are stored inside them for quick retrieval and put-away. Both types of vacuum cleaners offer features that make cleaning better, more convenient and safer. How powerful a vacuum cleaner's suction is can depend on many factors, but the wattage of the motor is an excellent indicator of its power. The greater the wattage a vacuum has, the more powerful it is.
A good figure for a cylinder cleaner is around 1400 W, and 1300 W for an upright. Bagless operation is the latest feature of vacuum cleaners. Until recently, all vacuum cleaners collected dirt in a bag, but this changed with the bagless Dyson vacuum cleaners. The main disadvantage of vacuum cleaners that use bags is loss of suction as the bag fills up. If you buy a vacuum cleaner with standard filtration, make sure you look for one with several filtration levels. S-class and HEPA filters greatly reduce the number and size of particles that are emitted back into the air.
export default { // font scaling override - RN default is on allowTextFontScaling: true, };
# roi (Region of Interest)

Source: https://answers.opencv.org/question/905/roi-region-of-interest/?answer=910

**Question:** I want to apply histogram equalization to only the center part of my image. I use histogram equalization for illumination images, but I wanted to use it only on the center of the image because of time limitations. In addition to this, when I use equalizeHist it sometimes causes noise. Here is my code:

    Mat img1=imread("imageLight100.bmp");
    Mat img2=img1.clone();
    Rect roi = Rect(100,100,200,200);
    Mat roiImg;

    roiImg = img1(roi);
    cvtColor(roiImg, roiImg, CV_RGB2GRAY);
    equalizeHist(roiImg,roiImg);
    imshow("roi",roiImg);

    cvtColor(img2, img2, CV_RGB2GRAY);
    img2.copyTo(roiImg);
    imshow("roiImage",img2);
    waitKey();

**Answer:** basarkir, can you please rephrase your question? It is hard to understand.

Anyway, when you take a ROI of the image (as you did in your example), no copy is performed. There is still a single image buffer for both img1 and roiImg; they are just pointing to different parts of it. So when you change the content of roiImg, the content of img1 is changed as well.

1) If you are working with gray images, it makes sense to convert them all prior to making any operations on them (clone and such).

2) If you are working with gray images, then read them as gray images when you use imread. Its default is to read the image as a 3-channel image, but you may force it to read it as a 1-channel image, or you may read the image as is.

    Mat color = imread("imageLight100.bmp",1); // read the image as 3-channel image

3) imread() returns the image in BGR format, not RGB, so the proper flag for cvtColor is CV_BGR2GRAY.

4) I don't see why you copied img2 to roiImg. This will just erase all content of roiImg and replace it with the content of img2 (including width and height).

To put this all together, if you want to apply histogram equalization to part of the image, it should be written this way:

    Mat img = imread("imageLight100.bmp",0); // read the image as gray image
    Mat roiImg = img(Rect(100,100,200,200)); // take ROI from image
    equalizeHist(roiImg,roiImg); // perform equalization of ROI
    imshow("roiImage",roiImg); // show only equalized ROI
    imshow("image",img); // show whole image while part of it was equalized
    waitKey();

**Reply (original poster):** Thank you so much, Michael, for your answer. I found a solution. Here is my working code:

    void roi(Mat& img,int x_point,int y_point,int width ,int height)
    {
        Rect roi = Rect(x_point,y_point,width,height);
        Mat roiImg;
        roiImg = img(roi);
        equalizeHist(roiImg,roiImg);
        roiImg.copyTo(img(Rect(x_point, y_point, width, height)));
    }

I use histogram equalization for illumination images, but I wanted to use it only on the center of the image because of time limitations. My roi function's execution time is 0.2 ms, while running equalizeHist on the whole image takes 1.0 ms. In addition to this, when I use equalizeHist on the whole image, it sometimes causes noise. You gave me some useful information about images. I have always used cvtColor, but now I am going to use imread with the (1, 0, -1) flags instead.

**Comment:** You are welcome. Also note that the last line of your code is not needed. It just copies the memory buffer onto itself. (2012-07-30 07:09:48 -0600)
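The buffer-sharing behaviour the answer describes is not specific to OpenCV: any slice-view over a shared buffer works the same way. As a minimal OpenCV-free sketch in plain Python (the tiny 4×4 byte "image" here is purely hypothetical), a `memoryview` slice plays the role of the ROI:

```python
# Pure-Python analogue of cv::Mat ROI semantics (assumption: no OpenCV here;
# a bytearray stands in for the image buffer, a memoryview slice for the ROI).
img = bytearray(range(16))    # a tiny 4x4 "image", one byte per pixel
roi = memoryview(img)[5:7]    # "ROI": a view into the same buffer, no copy made
roi[0] = 255                  # writing through the view...
print(img[5])                 # ...changes the original "image": prints 255
```

Just as `equalizeHist(roiImg, roiImg)` writes straight into `img` in the answer's snippet, writing through the view mutates `img` directly — which is why the final `copyTo` in the asker's `roi()` function is redundant.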
A GM G22U é uma locomotiva projetada pela EMD para ser a sucessora da EMD GM G12 com a substituição do motor 567 pelo 645 em 1966. Possui motor 12-645E aspirado que lhe fornece potência bruta de 1650Hp e 1500Hp (1118Kw) para tração. As 129 unidades que rodam no Brasil foram fabricadas pela Material y Construcciones S.A. (Macosa) da Espanha sob licença da Electro-Motive Diesel durante os primeiros anos da década de 1970. Naquela época o Brasil vivia o "milagre brasileiro", período de grandes investimentos em infra-estrutura. A RFFSA pode então fazer uma grande encomenda de locomotivas para bitola métrica, começando pelas G22U e G22CU, para os estados de Paraná, Santa Catarina e Rio Grande do Sul e depois de G26CU para o Rio Grande do Sul. A intenção era de se testar as novas maquinas nas linhas do sul e, após a consolidação operacional, distribuir unidades entre as divisões regionais da ferrovia, o que não aconteceu. Essas locomotivas tiveram importante papel na região sul do pais, tendo sido por mais de 30 anos usadas na linha Curitiba-Paranaguá, substituindo nos anos 1970 as GE U12B e GM GL8 e GM G12. Foram utilizadas normalmente em triplex, quadras, quinas e até mesmo senas nos tempos da RFFSA, e em tração distribuída já no período concessionado a ALL, sendo que, no ano de 2012 algumas unidades foram dotadas do sistema LOCOTROL, o que permite tração distribuída controlada remotamente via rádio. Também atendem até os dias presentes o Ramal de Rio Branco do Sul, antiga Estrada de Ferro Norte do Paraná. Nessa linha atualmente operam acopladas a um conjunto M1, isto é, uma locomotiva baixada, que é reformada, recebendo truques Flexcoil-B, motores de tração D31 e operam como unidade de tração alimentada pela potência extra que a G22 é capaz de gerar, pois seu gerador é o mesmo da G22CU, que possui 6 eixos. Duas unidades G22U são conectadas a uma M1, que geralmente fica no meio das unidades alimentadoras. 
Each locomotive is oriented with its cab facing outward, forming a large inseparable set in the style <X - Y - X >, where X are the G22U units and Y the ballasted M1 unit; the arrows represent the direction of the driving cabs. Such units are called M1, or SLUG. They also run on the São Francisco branch, where the tight curves do not allow larger machines to operate without major track rebuilding. In Rio Grande do Sul they served during the RFFSA period, when 10 units were assigned to the Viação Ferrea Rio Grande do Sul. With privatization all units were brought together, and these machines were no longer kept separate. Shortly before privatization, RFFSA sent one unit to the Estrada de Ferro Dona Thereza Cristhina, along with some G12s, to dieselize that network, which until then ran steam locomotives. The unit sent was 4409, which still operates for FTC today. Over more than thirty years of operation they underwent small modifications such as the addition of a rear walkway, enlargement of the fuel tank, adaptation as M1 generator units, cutting of the side "skirts" for better access to piping and electrical cables, blanking of number boards and classification lights, replacement of pilots and radiator grilles, removal of the Macosa builder's plate and of the raised number plate on the cab side (a non-original accessory added by RFFSA), construction of a rear walkway (similar to that of the G22CU), adaptation to burn biodiesel, and installation of electronic braking / LOCOTROL. ALL is currently also operating the G22U on part of the lines of the former Estrada de Ferro Sorocabana, in São Paulo, where these locomotives had never run in the state-owned era. There they now haul the steel-coil and cement trains running between the São Paulo capital and the South, in many cases replacing the ex-FEPASA GE U20C. 
They are known as the "Beetle" of the railway, since they operate on practically every line with robustness and reliability. After 1997 the maintenance of these machines deteriorated, largely due to the operating policies of the concessionaires that managed them, leading to modifications that greatly altered their appearance and their mechanical and electrical parts. Even so, they remain robust, facing great adversity daily given their poor maintenance, very poor track and crews with little training in handling them. In Brazil: acquired between 1971 and 1973 by RFFSA, which was satisfied with the results of the GM G12 on metre gauge. They were bought to be distributed to the 11th and 13th Divisions of RFFSA, in the Paraná-Santa Catarina and Rio Grande do Sul regionals, and were allocated as follows: 119 to SR-5 (Curitiba); 10 to SR-6 (Porto Alegre). One of them (4409) was later transferred from SR-5 to the Ferrovia Teresa Cristina during its dieselization. Withdrawn locomotives: 4302, 4317, 4323, 4324, 4327, 4336, 4337, 4347, 4353, 4367, 4393, 4411, 4428. Locomotive converted into Slug M-2: 4323. Locomotive transferred to the Ferrovia Teresa Cristina: 4409. Locomotives with fuel tank enlarged to: 4900 L: 4316, 4340, 4385, 4391, 4395, 4396, 4400, 4404; 5200 L: 4305, 4329, 4338, 4377, 4386, 4387, 4392; 6000 L: 4381. Locomotives that ran on biodiesel (they were tested and have since returned to regular diesel): 4380, 4386. Locomotives with rear walkway: 4334, 4309, 4409, 4332, 4310, 4389. Locomotives adapted for M-1 (Slug) service: 4304, 4305, 4321, 4329, 4331, 4332, 4333, 4338, 4341, 4349, 4352, 4356, 4359, 4363, 4368, 4377, 4380, 4381, 4383, 4385, 4386, 4387, 4391, 4392, 4395, 4396, 4397, 4399, 4403, 4404, 4405, 4412, 4413, 4420, 4423, 4426. General Motors B-B locomotives Diesel-electric locomotives GM G22U locomotives
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,785
{"url":"https:\/\/www.gradesaver.com\/textbooks\/science\/physics\/university-physics-with-modern-physics-14th-edition\/chapter-8-momentum-impulse-and-collision-problems-exercises-page-268\/8-71","text":"## University Physics with Modern Physics (14th Edition)\n\nWe can find the time $t$ to fall to the floor. $y = \\frac{1}{2}at^2$ $t = \\sqrt{\\frac{2y}{g}} = \\sqrt{\\frac{(2)(2.20~m)}{9.80~m\/s^2}}$ $t = 0.670~s$ We can find the horizontal speed of the combined object when it leaves the table. $m_2v_2= m_1v_1$ $v_2 = \\frac{m_1v_1}{m_2}$ $v_2 = \\frac{(0.500~kg)(24.0~m\/s)}{8.50~kg}$ $v_2 = 1.41~m\/s$ We can use the time and the horizontal speed to find the horizontal distance $x$ $x = v_2~t = (1.41~m\/s)(0.670~s)$ $x = 0.945~m$ The combined object travels a horizontal distance of 0.945 meters.","date":"2019-11-12 04:20:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5762706398963928, \"perplexity\": 125.0855494473873}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-47\/segments\/1573496664567.4\/warc\/CC-MAIN-20191112024224-20191112052224-00085.warc.gz\"}"}
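The chain of calculations in the physics record above can be reproduced numerically; a minimal sketch (variable names mirror the problem's symbols and are not part of the original record):

```python
import math

y = 2.20              # fall distance to the floor, m
g = 9.80              # gravitational acceleration, m/s^2
m1, v1 = 0.500, 24.0  # mass (kg) and speed (m/s) of the incoming object
m2 = 8.50             # mass of the combined object after the collision, kg

# Fall time from y = (1/2) g t^2
t = math.sqrt(2 * y / g)

# Horizontal speed from momentum conservation: m1 * v1 = m2 * v2
v2 = m1 * v1 / m2

# Horizontal distance travelled while falling
x = v2 * t
print(round(t, 3), round(v2, 2), round(x, 3))  # prints: 0.67 1.41 0.946
```

Carrying unrounded intermediates gives x ≈ 0.946 m; the record's 0.945 m comes from rounding t and v2 before multiplying.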
null
null
Paulina Buziak-Śmiatacz (born 16 December 1986) – Polish track-and-field athlete, race walker. Career Four-time national champion in the 20 km walk (2012, 2016, 2017, 2018) and Polish indoor champion in the 3000 m walk (2009, 2012, 2014, 2016); finished 10th at the European U23 Championships (20 km walk, Debrecen 2007). In 2012 she competed at the Olympic Games in London, finishing 45th in the 20 km walk. In 2013 she finished 29th at the World Championships. In 2016 she again competed at the Olympic Games, in Rio de Janeiro, finishing 28th in the 20 km walk. She came up through, and competes for, OTG Sokół Mielec. Personal bests 20 km walk – 1:29:41 (2014); in 2012 she set a Polish record of 1:29:44 over this distance. 50 km walk – 4:41:02 (2019). References External links Polish race walkers Polish track-and-field Olympians Polish Olympians (London 2012) OTG Sokół Mielec athletes Born in 1986 Polish Olympians (Rio de Janeiro 2016) Athletes at the 2016 Summer Olympics
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,337
Eight Players Earn All-WIAC Honors; Loen Earns Coaching Honor MADISON, Wis. (Blugolds.com) – Eight members of the UW-Eau Claire men's hockey team and head coach Matt Loen have received All-Wisconsin Intercollegiate Athletic Conference (WIAC) recognition, it was announced today by Assistant Commissioner of Media Relations Matt Stanek. 2014-15 All-WIAC Team | Official WIAC Release Forward Ross Andersen (Sr.-River Falls, WI) earned First Team All-WIAC honors for the first time in his career after leading the conference in goals (20), short-handed points (6), short-handed goals (5) and game-winning goals (5). Andersen also ranked second in points (32) and 12th in assists (12). Andersen scored at least one point in 18 of 27 games this season including a stretch from January 31 to February 14 where he scored a goal in five straight games. Andersen was named WIAC Men's Hockey Athlete of the Week a league-best three times in 2014-15. This season Andersen tied the school record of five game-winning goals by Alex Hicks in 1989-90 and five short-handed goals also set by Hicks in 1991-92. Andersen finishes his career tied for second in career game-winning goals (10) and fifth in short-handed goals (5). Andersen ranks 13th for career goals (49), 19th in points (88) and tied for 21st in games played (106). Defenseman Jack Callahan (Sr.-Red Bank, NJ/Christian Brothers Academy) received first team All-WIAC recognition ranking second in the conference in defenseman scoring (26) and assists (21). Callahan also ranked fourth among WIAC statistical leaders in short-handed points (2) and seventh in power play points (9). Callahan tallied a season-high four assists in an 11-2 victory over Lawrence on December 13, 2014. Callahan received first team All-WIAC honors in 2013 while earning honorable mention accolades in 2014. Callahan has led the team in assists each of the last two seasons. 
In three seasons as a Blugold Callahan is tied for 18th in career assists (53) tying Dennis Ryan who played from 1979-83. Goalie Jay Deo (So.-Langley, BC) earned first team All-WIAC honors for the first time in his career. Deo ranked fourth among WIAC goalies in goals-against average (2.75), save percentage (.914) and win percentage (.679). Deo was one of four sophomores receiving first team recognition. In the Eau Claire record book Deo is tied for fifth in single-season saves percentage (.914) and tenth for goals-against average (2.75). In his two seasons as a Blugold Deo ranks first in career goals-against average (1.92), fourth in saves percentage (.934), tied for eighth in wins (17) and 19th in saves (666). Defenseman Jeff Pauluk (Jr.-Bloomington, MN/Jefferson) received first team accolades tying for seventh in the conference rankings in defenseman scoring (13 points) and short-handed points (1). Pauluk set career highs in 2014-15 in goals (3), points (13) and plus/minus (+15). Pauluk scored a season-best three points, one goal and two assists, in an 11-2 victory over Lawrence on December 13, 2014. Forward Patrick Moore (So.-Grand Rapids, MN) is the first of three Blugolds to earn honorable mention All-WIAC honors. Moore ranked second among WIAC statistical leaders in assists (21) and short-handed points (4), third in short-handed goals (2) and fourth in points (29). Moore set single-season career high marks in points (29), assists (21), goals (8) and plus/minus (+15). Moore scored four points, one goal and three assists, in a 5-3 win at UW-Superior on February 6, 2015. Moore was named WIAC Men's Hockey Athlete of the Week honors for January 19-25, 2015. Forward Ethan Nauman (Jr.-Mosinee, WI) received honorable mention recognition leading the WIAC in points (33), goals (20) and power play goals (7). Nauman set single-season career marks in points (33), goals (20), assists (13) and plus/minus (+13). 
Nauman scored four points, two goals and two assists, in an 11-2 victory over Lawrence on December 13, 2014. Nauman also scored three points in three other games this season. Forward Brandon Wahlin (So.-White Bear Lake, MN) earned honorable mention honors in his first season as a Blugold leading the league in power play points (16) and power play goals (7) while ranking second in points (32) and assists (21). Wahlin tied for second on the team in plus/minus (+16). Wahlin tied a school record with three power play goals in an 11-2 victory over Lawrence on December 13, 2014. Wahlin scored 5 points in that game, one short of tying the school record. Forward T.J. Stuntz (Fr.-Austin, TX/Round Rock) was named to the WIAC All-Sportsmanship Team. This team recognizes individuals who displayed exemplary sportsmanship throughout the season. Stuntz tied for eighth in game-winning goals (2) and tied for ninth in freshman scoring (6 points). Stuntz scored two of his three goals of the season in a 4-2 victory over UW-River Falls on February 28, 2015. Head Coach Matt Loen earned WIAC Coach of the Year honors for the fourth time in the past six years (2010, 2011, 2013, 2015). The Blugolds ranked first in penalty kill percentage (.891) and special teams net (+21) while ranking second in scoring offense (3.85 goals/game), power play conversion (.243) and scoring by periods (104 goals). Only Steve Freeman of UW-River Falls has won more WIAC Coach of the Year awards than Loen. The 2014-15 season marks the third-straight year that the Blugolds have won 18 or more games, the longest such streak of its kind in school history. In addition, the Blugolds have now posted five-straight winning seasons, which is also a school record. -KCM-
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,598
Provincial funding announcement for Indigenous education celebrated by U of M president Manitoba government announces new program funding and increase to operating grants January 7, 2016 — Premier Selinger today announced his government will continue to invest in post-secondary education in Manitoba with particular attention to Indigenous education. The province committed to support Indigenous programming at universities and colleges, including support for the National Centre for Truth and Reconciliation at the University of Manitoba. In addition, the province will increase operating grants to universities by 2.5 per cent next year, providing much-needed funding for Manitoba's universities. "Today's investment is great news for post-secondary institutions in Manitoba, particularly the University of Manitoba," said David Barnard, president and vice-chancellor. "I am delighted Premier Selinger continues to share our vision for and commitment to Indigenous education and support for the next generation of Indigenous leaders in our province." The province's announcement will support the work already underway at the University of Manitoba with regard to increasing access for Indigenous students and supporting campus indigenization as recommended by the Truth and Reconciliation Commission. Barnard adds, "Today's announcement is much welcomed by the university as we focus on responsibly managing our resources to enhance our academic and research programs for the benefit of our students and all Manitobans." In addition to his position as president of the University of Manitoba, Barnard is also chair of the Council of Presidents of Universities in Manitoba (COPUM). Speaking on behalf of COPUM at today's announcement, Barnard said: "We view the provincial funding support as a positive move forward for all post-secondary institutions in Manitoba, but it will still require all universities to continue being very prudent in their spending." 
For more information, please contact Chris Rutkowski, media communications, at: 204-474-9514 or Chris [dot] Rutkowski [at] umanitoba [dot] ca
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,682
In a recent judgment of the Competition Commission of India [CCI], delivered by the bench headed by Chairperson Mr. Devender Kumar Sikri in the case of Vinod Kumar Gupta vs. WhatsApp Inc. [decided on June 1, 2017], it was held that even though 'WhatsApp' appeared to be dominant in the relevant market, the allegations of predatory pricing had no substance and WhatsApp had not contravened any of the provisions of Section 4 of the Competition Act. Vinod Kumar Gupta, the Informant, had filed Information under Section 19(1)(a) of the Competition Act, 2002 (the 'Act') against WhatsApp Inc., alleging contravention of the provisions of Section 4 of the Act. It was stated that WhatsApp was acquired by Facebook Inc. ('Facebook'), the largest social networking site, on 19th February, 2014 for approximately US$19.3 billion. The Informant submitted that the "relevant product market" in this scenario would be 'free messaging Apps available for smartphones' and the relevant geographical market was 'Global'. As per the Informant, by removing subscription fees, WhatsApp has enlarged its consumer base substantially from 450 million to over 1 billion, and it provides the services by sourcing funds from its parent company, i.e. 'Facebook'. The Commission also noted WhatsApp's submission regarding its user safeguards: all types of 'WhatsApp' messages (including chats, group chats, images, videos, voice messages and files) and 'WhatsApp' calls are protected by end-to-end encryption, so that third parties and 'WhatsApp' cannot read them, and a message can only be decrypted by its recipient. Further, as stated by WhatsApp, nothing a user shares on 'WhatsApp', including his/her messages, photos, and account information, is shared with 'Facebook' or any other apps of the 'Facebook family of companies' for any third party to see, and nothing a user posts on those apps will be shared by 'WhatsApp' for any third party to see.
{ "redpajama_set_name": "RedPajamaC4" }
346
The Vigilante: Fighting Hero of the West Action Crime 285 min 6.7 1947 Columbia's 33rd serial (made between "Jack Armstrong" and "The Sea Hound") was based on the character that first appeared in "Action Comics" No. 42. Ralph Byrd Ramsay Ames George Offerman, Jr. Robert Barron Tiny Brauer True story of the undersized Depression-era racehorse whose victories lifted not only the spirits of the team behind it but also those of their nation. Set in 1890, this is the story of a Pony Express courier who travels to Arabia to compete with his horse, Hidalgo, in a dangerous race for a massive contest prize, in an adventure that sends the pair around the world... A newly-developed microchip designed by Zorin Industries for the British Government that can survive the electromagnetic radiation caused by a nuclear explosion has landed in the hands of the KGB. James Bond must find out how and why. His suspicions soon lead him to big industry leader Max Zorin. Marnie is a thief, a liar, and a cheat. When her new boss, Mark Rutland, catches on to her routine kleptomania, she finds herself being blackmailed. Shattered illusions are hard to repair -- especially for a good-hearted zebra named Stripes who's spent his life on a Kentucky farm amidst the sorely mistaken notion that he's a debonair thoroughbred. Once he faces the fact that his stark stripes mark him as different, he decides he'll race anyway. And with help from the young girl who raised him, he just might end up in the winner's circle. A Muslim ambassador exiled from his homeland, Ahmad ibn Fadlan finds himself in the company of Vikings. While the behavior of the Norsemen initially offends ibn Fadlan, the more cultured outsider grows to respect the tough, if uncouth, warriors. During their travels together, ibn Fadlan and the Vikings get word of an evil presence closing in, and they must fight the frightening and formidable force, which was previously thought to exist only in legend. 
The 13th Warrior A Finnish superhero, a masked vigilante Rendel seeks for revenge and fights against VALA, the huge criminal organization. Rendel After a failed swindle, two con-men end up with a map to El Dorado, the fabled "city of gold," and an unintended trip to the New World. Much to their surprise, the map does lead the pair to the mythical city, where the startled inhabitants promptly begin to worship them as gods. The only question is, do they take the worshipful natives for all they're worth, or is there a bit more to El Dorado than riches? The Road to El Dorado The best-selling videogame, Hitman, roars to life with both barrels blazing in this hardcore action-thriller starring Timothy Olyphant. A genetically engineered assassin with deadly aim, known only as "Agent 47" eliminates strategic targets for a top-secret organization. But when he's double-crossed, the hunter becomes the prey as 47 finds himself in a life-or-death game of international intrigue. A young man returns to his hometown after years of being gone to get revenge on a gang that ruined his life. A 14th century Crusader returns with his comrade to a homeland devastated by the Black Plague. The Church commands the two knights to transport a witch to a remote abbey, where monks will perform a ritual in hopes of ending the pestilence. The Texas Rangers chase down a gang of outlaws led by Butch Cavendish, but the gang ambushes the Rangers, seemingly killing them all. One survivor is found, however, by an American Indian named Tonto, who nurses him back to health. The Ranger, donning a mask and riding a white stallion named Silver, teams up with Tonto to bring the unscrupulous gang and others of that ilk to justice. Chelios faces a Chinese mobster who has stolen his nearly indestructible heart and replaced it with a battery-powered ticker that requires regular jolts of electricity to keep working. 
Follows a young man named Albert and his horse, Joey, and how their bond is broken when Joey is sold to the cavalry and sent to the trenches of World War One. Despite being too young to enlist, Albert heads to France to save his friend. Wounded to the brink of death and suffering from amnesia, Jason Bourne is rescued at sea by a fisherman. With nothing to go on but a Swiss bank account number, he starts to reconstruct his life, but finds that many people he encounters want him dead. However, Bourne realizes that he has the combat and mental skills of a world-class spy—but who does he work for?
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,656
\section{Introduction} \label{section:intro} Intergroup discrimination is a feature of most modern societies~\cite{tajfel1970}. People typically seek other people who are similar to themselves~\cite{fiske}, which leads these similar individuals to form \textit{groups}. These groups, labeled as in-groups, give rise to an individual's social identity~\cite{tajfel1974}. The formation of such an in-group causes the simultaneous creation of one or more out-groups, which comprise all agents that do not belong to the in-group. Since this division into groups most often implies a competitive relationship between the groups, the mere existence of a classification is sufficient for the introduction of intergroup bias~\cite{tajfel1970}. This intergroup bias may be exhibited in the form of in-group favoritism or out-group prejudice~\cite{messick}. Out-group prejudice, or simply \textit{prejudice}, is a phenomenon where a group is perceived as a homogeneous entity~\cite{MASUDA20128}, and a cognitive error is directed towards every one of its members simply because of their membership~\cite{allport}. Prejudice may manifest itself through discriminatory behavior towards members of the out-group by certain individuals~\cite{tajfel1970}, which may have a basis in both the individual's personal experience~\cite{fareri} and its tendency to conform to its in-group~\cite{messick}. Experimental studies have shown that certain properties of groups may be correlated with the effects of prejudice in society, such as their sizes~\cite{Martinez-Vaquero}~\cite{giles}, and even their sociopolitical ordering based on mainstream culture~\cite{dasgupta_implicit_2004}. Since the formation of groups and the subsequent birth of prejudice is inevitable in any society, understanding prejudice, its manifestation in social interactions, and its effects on individual and collective prosperity become paramount, especially in consideration of current socio-political contexts. 
Past experiments~\cite{tajfel1970} have concluded that prejudice brings about a disparity in the performance of groups. However, we observe that a question on the effects of prejudice remains unaddressed: does this disparity arise only from the prejudiced agents' explicit suppression (negative) of the out-group, or does its presence in agents also cause promotion (positive) in their in-group? We address this question through a complex model of prejudiced multi-group societies in which we can test the in-group and out-group effects of prejudice independently. This is achieved by instantiating prejudiced groups in society independent of groups that are subject to this prejudice. The interactions in our model are governed by the Continuous Prisoner's Dilemma (CPD)~\cite{verhoeff}. These interaction types have recently been shown to be successful in modeling and understanding other social phenomena such as ethical behavior~\cite{ijcai2020-24} and egocentric bias~\cite{nkishore2019} in individuals. Our agent society contains two social structures, groups and factions. Groups represent subsets of agents in the society that share similar key characteristics. Factions are subsets of agents within the same group that have the ability to influence the actions of the constituent faction members. The level of cooperation of an agent in a CPD interaction is governed by its opinion of the other agent. An agent forms its opinion based on its past experience and the opinions of its faction members. The level of influence that an agent's faction has on its opinion is controlled by an attribute named \textit{faction alignment}. Recent work on understanding the mechanisms of prejudice~\cite{Whitaker2018} models its origins and the societal conditions that lead to its perpetuation and mitigation. Their work assumes that an agent has the same prejudice value against all other out-groups in the society which may not be true. 
Also, agents in their model take individual actions based on global reputations of other agents, which may not accurately represent the behavior of individuals, especially in prejudiced societies where agents may act in lockstep with only their in-group~\cite{messick}. To better understand the effects of prejudice on prosperity, we introduce an agent type called the \textit{prejudiced agent}. We model prejudice as an individual characteristic of this agent, where the level of prejudice that any agent can hold against a group lies in the range [0, 1]. Each such agent can have different prejudice values against different groups in the society. This prejudice value serves as a discount on these agents' cooperation levels in interactions with the out-group, and it changes dynamically based on the payoffs the agent receives from these interactions. The change in its prejudice value further propagates a change in its faction alignment based on the difference between its own prejudice value and its faction's prejudice value against a particular group. We generate our results after running experiments with varying configurations of our agent society. Result \textbf{R2} shows that even when modeling prejudice as having an explicit effect only on inter-group interactions, there is still an emergence of implicit in-group promotion. We conclude that the existence of out-group suppression and in-group promotion cannot be isolated, although, in our model, the former has a more significant impact on the creation of disparity within society than the latter. We demonstrate through result \textbf{R1} that groups with a higher concentration of prejudiced agents exhibit higher prosperity than their less prejudiced counterparts. Our modeling of imbalanced societies also demonstrates a correlation between skew in sizes and skew in prosperity levels of competing groups, which is further amplified by the existence of prejudice in the majority group. 
Furthermore, result \textbf{R4} addresses the effects of prejudice in skewed societies, and \textbf{R5} summarises the impact of prejudice on the prosperity of a society as a whole. We also present results for societies containing agents whose prejudice may be directed towards the in-group, rather than the out-group. These types of agents may arise in socially disadvantaged groups~\cite{spicer}. We model these agents as \textit{renegades}, who discriminate against members of their own group on the basis of prejudice. Existing research on renegade agents~\cite{dasgupta_implicit_2004} hypothesizes that these agents may have lower levels of prosperity within society, and may even reduce the prosperity levels of others within their group. Our result \textbf{R3} validates these hypotheses and further presents findings on trends in such agents' prejudice levels over time. \section{System Model} \label{section:fw} We model a system to represent an agent society in which the constituent agents participate in iterated interactions. The set of all agents in the society is denoted by $\mathbb{S}$. At each model step, any two agents are picked at random from $\mathbb{S}$ and paired together to undergo an interaction. \subsection{Agent Experience and Opinion} Each agent has a notion of memory ingrained in it. The memory stores the agent's most recent experiences with every other agent it has interacted with so far. This is achieved using a matrix $\mathbb{E}$, termed \textit{agent experience}. The size of $\mathbb{E}$ is of the order $\mathcal{O}(n \cdot \omega)$, where $n$ is the total number of agents in the society and $\omega$ represents the maximum number of recent interactions that an agent can recall with another agent. \textit{Agent experience} $(\mathbb{E})$ is utilised by an agent to form an \textit{opinion} about the other agent in a particular interaction~\cite{nkishore2019}. 
The interactions in the model are based on Continuous Prisoner's Dilemma (CPD), which is elaborated upon in Section \ref{subsec:cpd}. We now consider two randomly picked agents, $\mathcal{A}{}_0$ and $\mathcal{A}{}_1$, in an arbitrary interaction. We use $C_{\mathcal{A}{}_0}(\mathcal{A}{}_1,t)$ to denote $\mathcal{A}{}_0$'s \textit{cooperation level} with $\mathcal{A}{}_1$ at interaction $t$ between them. The \textit{cooperation level} is a value that lies in the range of 0 to 1, as seen in standard CPD settings. The \textit{opinion} of $\mathcal{A}{}_0$ about $\mathcal{A}{}_1$ at interaction \textit{t} between the two is denoted by $\eta_{\mathcal{A}{}_0}(\mathcal{A}{}_1, t)$. This value is computed as, \begin{equation} \label{eq : 1} \eta_{\mathcal{A}{}_0}(\mathcal{A}{}_1,t) = \frac{\sum_{i=1}^{\omega} C_{\mathcal{A}{}_0}(\mathcal{A}{}_1, t-i)}{\omega} \end{equation} We also define social structures: \textit{groups} and \textit{factions} in our model. Each agent belongs to a particular \textit{group} in the system and a \textit{faction} within the \textit{group}. A \textit{faction} is a small subset of agents within the \textit{group} that represents a circle of close-knit agents. These are discussed in detail in Section \ref{subsec:socialstructures}. If an agent belongs to a \textit{faction}, the final cooperation level $C_{\mathcal{A}{}_0}(\mathcal{A}{}_1,t)$ is an aggregation of $\mathcal{A}{}_0$'s own opinion of $\mathcal{A}{}_1$ ($\eta_{\mathcal{A}{}_0}(\mathcal{A}{}_1, t)$), formed from past experience, and its faction's opinion of $\mathcal{A}{}_1$. \subsection{Continuous Prisoner's Dilemma} \label{subsec:cpd} In a standard Prisoner's Dilemma (PD) game setting, the agents have a choice to either defect or cooperate with the other agent, which in turn affects their payoffs. Such extreme behaviors of pure cooperation or defection, however, cannot model complex real-world scenarios. 
This can be achieved by using an alternate game setting: the Continuous Prisoner's Dilemma (CPD)~\cite{verhoeff}, which allows interacting agents to choose a level of cooperation in the range of 0 to 1, where 0 indicates complete defection and 1 indicates complete cooperation. Consider an agent $\mathcal{A}{}_0$ interacting with another agent $\mathcal{A}{}_1$. Their cooperation levels are denoted by $c_0$ and $c_1$ respectively. $\mathcal{A}{}_0$'s payoff, $r_{\mathcal{A}{}_0}(c_0,c_1)$, is now calculated as a linear interpolation of the discrete game payoffs \begin{equation} \label{eq : 2} r_{\mathcal{A}{}_0}(c_0,c_1) = {c_0}{c_1}C + {c_0}\bar{c_1}S + \bar{{c_0}}{c_1}T + \bar{{c_0}}\bar{c_1}D \end{equation} where $\bar{c_0} = (1-c_0)$, $\bar{c_1} = (1-c_1)$, and $C, T, D, S$ are the discrete payoffs in standard PD as shown below. \input{tables/cpd} The conditions to be followed while choosing the values of these variables are: $2C > T+S$ and $T > C > D > S$. Following the work done in~\cite{Axelrod1390}~\cite{nkishore2019}, we choose the set of values to be: $(C = 3, T = 5, D = 1, S = 0)$. \subsection{Social Structures} \label{subsec:socialstructures} The fundamental social structure in our model is a \textit{group}, which represents a subset of agents in the society who share similar key values, and group membership is a part of an agent's social identity. All agents in $\mathbb{S}$ belong to a particular \textit{group}. The \textit{groups} in a society are mutually exclusive and exhaustive, i.e., $G_1 \cup G_2 \cup ... \cup G_g = \mathbb{S}$ and $G_i \cap G_j = \emptyset$ for all $i \neq j$, where \textit{g} is the total number of groups in society. Additionally, every agent in a group also belongs to a particular \textit{faction} within it. 
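To make the interpolation concrete, \eqref{eq : 2} with the chosen payoff constants can be sketched as follows. This is a minimal illustration; the function name and layout are ours, not the authors' implementation.

```python
# Discrete PD payoffs chosen in the text: C=3, T=5, D=1, S=0.
# These satisfy the stated conditions 2C > T+S and T > C > D > S.
C, T, D, S = 3.0, 5.0, 1.0, 0.0

def cpd_payoff(c0, c1):
    """Payoff r(c0, c1) to the agent cooperating at level c0
    against an opponent cooperating at level c1 (Eq. 2)."""
    return (c0 * c1 * C + c0 * (1 - c1) * S
            + (1 - c0) * c1 * T + (1 - c0) * (1 - c1) * D)

# The four corners of the unit square recover the discrete game:
# full mutual cooperation -> C, full mutual defection -> D,
# lone defector -> T (temptation), lone cooperator -> S (sucker).
```

Intermediate cooperation levels interpolate linearly between these corners; for example, two agents each cooperating at level 0.5 both receive a payoff of 2.25.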
Since agents in a group may have a direct relationship with only a subset of their group~\cite{allport}, we create \textit{factions} to model these subsets, where opinions of agents are influenced by others in their faction. The set of factions, \{$\mathcal{F}_1$, $\mathcal{F}_2$, ..., $\mathcal{F}_k$\}, is mutually exclusive and exhaustive, where \textit{k} represents the total number of factions within a \textit{group}. Consider an agent $\mathcal{A}{}_0$ interacting with another agent $\mathcal{A}{}_1$. While calculating its level of cooperation with $\mathcal{A}{}_1$, $\mathcal{A}{}_0$ takes into consideration not only its own \textit{opinion} (as computed in \eqref{eq : 1}) of $\mathcal{A}{}_1$ but also an aggregate of the opinions of its faction members. Let $\mathcal{F}$ represent the faction to which an agent $\mathcal{A}{}_0$ belongs. The faction's \textit{aggregated opinion} of $\mathcal{A}{}_1$ at interaction $t$, denoted by $\mathcal{O}_\mathcal{F}(\mathcal{A}{}_1, t)$, is thus computed as, \begin{equation} \label{eq : 3} \mathcal{O}_\mathcal{F}(\mathcal{A}{}_1, t) = \frac{\sum_{\forall \mathcal{A} \in \mathcal{F}} \eta_\mathcal{A}(\mathcal{A}{}_1,t)}{|\mathcal{F}|} \end{equation} The level of influence that an agent's faction can have over its \textit{opinion} is modeled by a parameter $f$ called the \textit{faction alignment}. It denotes the weight that $\mathcal{A}{}_0$ gives to its faction's opinion. Its value lies between 0 and 1, with $f=0$ representing that $\mathcal{F}$ has no influence on $\mathcal{A}{}_0$'s opinion of $\mathcal{A}{}_1$, and $f=1$ representing that $\mathcal{A}{}_0$ has complete faith in $\mathcal{F}$ and decides its level of cooperation entirely based on its faction's opinion. \subsection{Prejudiced Agent} \label{subsec:prej} In this section, we introduce the prejudiced agent type. 
For an agent $\mathcal{A}$, prejudice is modeled as a vector named the \textit{prejudice vector}: [$p_1$, $p_2$, ..., $p_g$], where \textit{g} is the total number of \textit{groups} in the system and $p_i$ is a value between 0 and 1 representing the level of prejudice that $\mathcal{A}$ has against the members of \textit{group} $G_i$. The value of $p_i$ is updated after every interaction $\mathcal{A}$ has with an agent belonging to $G_i$, where the outcome of such interactions could either reinforce or quell $\mathcal{A}$'s prejudice. Based on its prejudice value and the prejudice values of the members of its faction, a prejudiced agent can also update its \textit{faction alignment} $f$. \textit{Faction alignment} updates are modeled to allow a prejudiced agent to dynamically change the effect of its faction's influence over it. A prejudiced agent's faction would have more influence over it if the agent's prejudice value is close to the average prejudice value of the rest of its faction members, since this implies that the agent and its faction share a similar worldview. These updates are discussed further in detail in Section \ref{subsec:interactions}. Since an unprejudiced agent has no notion of prejudice in it, these updates are not required for such agents. The $p_i$'s of a prejudiced agent's \textit{prejudice vector} are instantiated at random using a Gaussian distribution defined by $\mu = 0.5$ and $\sigma = 0.2$. The use of a Gaussian distribution to instantiate the prejudice vector is in line with previous work on other socio-psychological factors~\cite{nkishore2019}~\cite{Lieder}. We now consider a prejudiced agent $\mathcal{A}{}_0$ belonging to \textit{group} $G_0$ having its interaction $t$ with another agent $\mathcal{A}{}_1$ of a different \textit{group} $G_1$. 
The prejudice attribute of $\mathcal{A}{}_0$ influences its \textit{opinion} (as computed in \eqref{eq : 1}) about $\mathcal{A}{}_1$, and this new \textit{prejudiced opinion}, $\Psi_{\mathcal{A}{}_0}(\mathcal{A}{}_1,t)$, is thus computed as, \begin{equation} \label{eq : 4} \Psi_{\mathcal{A}{}_0}(\mathcal{A}{}_1,t) = (1 - p_1) \cdot \eta_{\mathcal{A}{}_0}(\mathcal{A}{}_1,t) \end{equation} In a faction comprising prejudiced agents, the collective opinion is shaped by the prejudice of the agents contributing to its formation~\cite{brewer1998intergroup}. Therefore, the aggregated opinion is no longer computed using \eqref{eq : 3}. Instead, we use \eqref{eq : 4} to calculate the new \textit{prejudiced aggregated opinion}, $\mathcal{P}_\mathcal{F}(\mathcal{A}{}_1, t)$, as, \begin{equation} \label{eq : 5} \mathcal{P}_\mathcal{F}(\mathcal{A}{}_1, t) = \frac{\sum_{\forall \mathcal{A} \in \mathcal{F}} \Psi_\mathcal{A}(\mathcal{A}{}_1,t)}{|\mathcal{F}|} \end{equation} \subsection{Agent Interactions} \label{subsec:interactions} The \textit{cooperation level} computed by an agent undergoing a CPD interaction changes depending on whether the agent is prejudiced or unprejudiced. We describe how this value is computed for both agent types. \subsubsection{Unprejudiced Agent Interaction} \label{subsec:unprej} The \textit{cooperation level} for an unprejudiced agent is computed by combining its \textit{opinion}, as derived from \eqref{eq : 1}, and its faction's \textit{aggregated opinion}, as derived from \eqref{eq : 3}. Consider an unprejudiced agent $\mathcal{A}{}_0$ interacting with another agent $\mathcal{A}{}_1$. Let the \textit{faction alignment} (defined earlier in Section \ref{subsec:socialstructures}) of $\mathcal{A}{}_0$ be $f$. $f$ governs how much influence $\mathcal{A}{}_0$'s faction's \textit{aggregated opinion} has on its \textit{cooperation level}. 
The \textit{cooperation level} for $\mathcal{A}{}_0$ at interaction \textit{t} with $\mathcal{A}{}_1$ is then computed as, \begin{equation} \label{eq : 6} \mathcal{C}_{\mathcal{A}{}_0}(\mathcal{A}{}_1, t) = f \cdot \mathcal{O}_\mathcal{F}(\mathcal{A}{}_1, t) + (1-f) \cdot \eta_{\mathcal{A}{}_0}(\mathcal{A}{}_1,t) \end{equation} The payoff obtained by $\mathcal{A}{}_0$ can be calculated by substituting its \textit{cooperation level} in \eqref{eq : 2}, and this payoff can then be accumulated over every interaction to serve as the agent's measure of \textit{prosperity}. \subsubsection{Prejudiced Agent Interaction} The \textit{cooperation level} for a prejudiced agent is computed in a manner similar to that of the unprejudiced agent. Consider a prejudiced agent $\mathcal{A}{}_0$ belonging to \textit{group} $G_0$ interacting with another agent $\mathcal{A}{}_1$ belonging to \textit{group} $G_1$. The prejudice value of $\mathcal{A}{}_0$ against members of $G_1$ is denoted by $p_1$. Let the \textit{faction alignment} of $\mathcal{A}{}_0$ be $f$. The \textit{cooperation level} for $\mathcal{A}{}_0$ at interaction \textit{t} with $\mathcal{A}{}_1$ is then computed using \eqref{eq : 4} and \eqref{eq : 5} as, \begin{equation} \label{eq : 7} \mathcal{C}_{\mathcal{A}{}_0}(\mathcal{A}{}_1, t) = f \cdot \mathcal{P}_\mathcal{F}(\mathcal{A}{}_1, t) + (1-f) \cdot \Psi_{\mathcal{A}{}_0}(\mathcal{A}{}_1,t) \end{equation} \noindent This is again substituted in \eqref{eq : 2} to calculate the payoff received, which is then accumulated to obtain the agent's measure of \textit{prosperity}. The major distinction between prejudiced and unprejudiced interactions comes after the calculation of the \textit{cooperation level}. As discussed briefly in Section \ref{subsec:prej}, a prejudiced agent now updates both its prejudice value $p_i$ and its \textit{faction alignment} $f$ based on the payoff received from the CPD interaction. 
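The opinion aggregation and the two cooperation-level rules of \eqref{eq : 1}, \eqref{eq : 6}, and \eqref{eq : 7} can be sketched as follows. This is an illustrative reading of the equations; the function names and data layout are our assumptions, not the authors' implementation.

```python
# Illustrative sketch of Eqs. (1), (6) and (7); names and data
# layout are assumptions, not the authors' implementation.

def opinion(history, omega):
    """Eq. (1): mean of the last omega cooperation levels observed
    from the target agent."""
    return sum(history[-omega:]) / omega

def cooperation_unprejudiced(f, faction_opinion, own_opinion):
    """Eq. (6): blend the faction's aggregated opinion with the
    agent's own opinion, weighted by faction alignment f."""
    return f * faction_opinion + (1 - f) * own_opinion

def cooperation_prejudiced(f, member_opinion_prejudice_pairs, own_opinion, own_p):
    """Eq. (7): as Eq. (6), but each opinion is first discounted by
    the holder's prejudice p against the target's group (Eqs. 4-5).
    member_opinion_prejudice_pairs is a list of (opinion, p) pairs."""
    discounted = [(1 - p) * o for o, p in member_opinion_prejudice_pairs]
    faction_prejudiced = sum(discounted) / len(discounted)
    return f * faction_prejudiced + (1 - f) * (1 - own_p) * own_opinion
```

With $f=0.5$, an own opinion of 0.8, and a faction opinion of 0.6, an unprejudiced agent cooperates at level 0.7; if every holder's prejudice were $p=0.5$, each term would be halved and the same agent would cooperate at 0.35.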
If the payoff received by $\mathcal{A}{}_0$ goes above a threshold value $\alpha_1$, its prejudice value $p_i$ is incremented by a value $\theta$, as its prejudice is reinforced by the rewarding payoff. Similarly, if it goes below a threshold value $\alpha_2$, its prejudice value is decremented by the same $\theta$, as in this case the agent acted upon its prejudice but received a lower payoff, which weakens its belief in its prejudice. \begin{equation} \label{eq : 8} p_i := p_i \begin{cases} + \theta & \text{if \textit{Payoff} $> \alpha_1 $ } \\ - \theta & \text{if \textit{Payoff} $< \alpha_2 $ } \end{cases} \end{equation} \noindent Varying fractions of the maximum payoff (= 5) that an agent could obtain from a standard CPD interaction (obtained when one agent fully cooperates and the other fully defects) were tested as values for the thresholds, $\alpha_1$ and $\alpha_2$, while keeping $\alpha_1 > \alpha_2$. The final values for $\alpha_1, \alpha_2 \text{ and } \theta$ are given in Table \ref{table:model}. After updating its prejudice, $\mathcal{A}{}_0$ now updates its \textit{faction alignment} $f$. This is done by first computing a new quantity \textit{$Prej_F$}, termed the \textit{faction prejudice}. $Prej_F$ is computed as the average of the $p_i$'s of all the members of the faction. In this particular interaction, the value of $Prej_F$ is $\frac{\sum_{\forall \mathcal{A} \in \mathcal{F}}p_1}{|\mathcal{F}|}$. If the difference between the agent's own prejudice and the \textit{faction prejudice} is small, specifically lower than a threshold $\beta_1$, the agent reinforces its faith in its faction by increasing its \textit{faction alignment} $f$ by a value $\tau$. Similarly, if the difference is greater than a threshold $\beta_2$, it decreases its \textit{faction alignment} by the same value $\tau$. 
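The two update rules just described can be sketched as follows. The threshold and step values here are placeholders, not the paper's calibrated values (those appear in Table \ref{table:model}), and clipping the updated quantities to [0, 1] is our reading of the stated ranges.

```python
# Sketch of the prejudice and faction-alignment updates.
# ALPHA_1, ALPHA_2, THETA, TAU are placeholder values, not the
# paper's calibrated ones (those are listed in its parameter table).
ALPHA_1, ALPHA_2 = 3.0, 1.5   # payoff thresholds, ALPHA_1 > ALPHA_2
THETA = 0.01                  # prejudice step size
TAU = 0.01                    # faction-alignment step size

def clip01(x):
    # Keep values inside the stated [0, 1] range (our assumption).
    return max(0.0, min(1.0, x))

def update_prejudice(p, payoff):
    """A rewarding payoff (> ALPHA_1) reinforces prejudice;
    a poor payoff (< ALPHA_2) weakens it."""
    if payoff > ALPHA_1:
        p += THETA
    elif payoff < ALPHA_2:
        p -= THETA
    return clip01(p)

def update_alignment(f, p, faction_prejudices, beta1, beta2):
    """Trust the faction more when its average prejudice is close
    to the agent's own (gap < beta1), less when it is far (gap > beta2)."""
    prej_f = sum(faction_prejudices) / len(faction_prejudices)
    gap = abs(prej_f - p)
    if gap < beta1:
        f += TAU
    elif gap > beta2:
        f -= TAU
    return clip01(f)
```

In the full model, $\beta_1$ and $\beta_2$ are not constants but are rescaled from the agent's current $(1-f)$ before every interaction, so they would be recomputed before each call to `update_alignment`.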
In this particular interaction, the update is shown below as, \begin{equation} \label{eq : 9} f := f \begin{cases} + \tau & \text{if $|Prej_F - p_1| < \beta_1$ } \\ - \tau & \text{if $|Prej_F - p_1| > \beta_2$} \end{cases} \end{equation} \input{tables/table_1} As the value of an agent's prejudice updates over interactions, the thresholds $\beta_1$ and $\beta_2$ cannot be kept constant. They are computed dynamically using an agent's current \textit{faction alignment} $f$. By tracking the value of $|Prej_F - p_i|$ in multiple simulations, we find it to be in the range [0.01, 0.2]. This range is then further split to determine the ranges for $\beta_1$ and $\beta_2$, where $\beta_1 < \beta_2$. These thresholds for an agent $\mathcal{A}$ are then recalculated before every interaction by linearly scaling $\mathcal{A}$'s current value of $(1-f)$ to the predetermined range of each threshold. The value of $\tau$ and the ranges for $\beta_1$ and $\beta_2$ after multiple runs of the simulation are shown in Table \ref{table:model}. \input{tables/table_2} \section{Experiments and Results} Each instantiation of the model containing agents belonging to \textit{groups} and \textit{factions} is termed a \textit{society}. Every \textit{society} contains 1000 agents, which may either be prejudiced or unprejudiced, as described in Section \ref{section:fw}. Societies have exactly 50 \textit{factions}, each containing 20 agents of the same type. Each society may contain two or more \textit{groups}, and all agents and the corresponding factions they belong to are distributed uniformly across these groups unless specified otherwise. Prejudice orientations are taken to be a group trait; therefore, every prejudiced agent in a group is prejudiced towards the same set of out-groups. Experiments are run for $10^5$ iterations, where every iteration is a CPD interaction (Section \ref{subsec:cpd}) between two agents chosen randomly from the agent pool. 
Each experiment is repeated 10 times, and the results generated are the average over these runs. Experiments are run with the values of model parameters as specified in Table \ref{table:model}. In our experiments, we calculate the cumulative payoff (labeled \textit{prosperity}) of each agent and use it to generate results on average \textit{prosperity} values of different groups/agent types. We also track prejudice and payoff values of our agent body after every iteration, and present results on the change of these quantities over time. \subsection{Effect of Prejudice on \textit{Prosperity}} \label{subsec:3.1} \subsubsection{Concentration of Prejudice} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{fig_1} \caption{Payoff over time for different concentrations of prejudiced agents} \label{fig:4.1.1} \end{figure} \label{subsub:3.1.1} We analyze the effect of the fraction of agents that are prejudiced in a group $G$, denoted by $\epsilon_G$, on the relative performance of that group over time. $\epsilon_G$ can be interpreted as a measure of the level of prejudice of $G$. We initialize a model with two groups, $G_1$ and $G_2$, where all agents in $G_2$ are unprejudiced ($\epsilon_{G_2} = 0$), and we test for 5 fractions of prejudiced agents in $G_1$, $\epsilon_{G_1} \in \{0.2, 0.4, 0.6, 0.8, 1.0\}$. The ratio of the average payoff of agents in $G_1$ to that of agents in $G_2$ at every model step is plotted in Fig \ref{fig:4.1.1} for all 5 values of $\epsilon_{G_1}$. As the plots indicate, the payoff ratio is directly correlated with the value of $\epsilon_{G_1}$. 
This brings us to our first result: \begin{center} \fbox{% \parbox{0.8\columnwidth}{% \textbf{R1.} \emph{The relative average prosperity of a group within a society increases as the group gets more prejudiced.} }% } \end{center} This result is in agreement with existing studies on prejudiced behavior, and also provides an incentive for why prejudice may exist in societies in the first place. \subsubsection{In-Group Effect vs. Out-Group Effect} \begin{figure}[h] \centering \begin{minipage}[t]{.45\textwidth} \centering \vspace{-12.7em} \include{tables/table_3} \captionof{table}{Prejudice orientation for results in Fig \ref{fig:bias}} \label{table:tab1} \end{minipage} \qquad \begin{minipage}[t]{.45\textwidth} \centering \includegraphics[width=0.85\textwidth]{fig_2} \captionof{figure}{Comparing in-group and out-group effects of prejudice ($m$ is the slope)} \label{fig:bias} \end{minipage}% \end{figure} Does the disparity presented in Fig \ref{fig:4.1.1} arise only from the prejudiced agents' explicit (negative) suppression of the out-group, or does its presence in agents also bring about implicit (positive) promotion in their in-group? In order to address this question, we run an experiment as follows: we initialize our society with 10 groups, $\{G_1, ..., G_{10}\}$, where every group has 100 agents. Groups $G_1, ..., G_5$ are identical, containing all unprejudiced agents, whereas groups $G_6, ..., G_{10}$ have all prejudiced agents. Prejudice orientations of these agents are initialized as in Fig \ref{table:tab1}, where `Groups Prejudiced against' lists all the groups that the given group is prejudiced against. Therefore, $G_{10}$ is prejudiced against 5 groups, and conversely, $G_1$ has 5 groups prejudiced against it. We use this initialization of our prejudice orientation for a specific reason: this orientation creates a trend for decreasing out-group effects $(G_1 > ... > G_5)$ and increasing in-group effects $(G_6 < ... 
< G_{10})$ of prejudice simultaneously within the same society but in independent sets of groups. Fig. \ref{fig:bias} presents the results for each of these effects, and we observe a positive slope for both, indicating that they exist in tandem. However, we note that the increase in \textit{prosperity} of unprejudiced groups due to decreasing prejudice against them is steeper $(m=1.7)$ than the increase in \textit{prosperity} of the prejudiced groups due to their increasing level of prejudice $(m=0.7)$. This brings us to the following: \begin{center} \fbox{% \parbox{0.8\columnwidth}{% \textbf{R2.} \emph{In-group promotion due to the existence of prejudice emerges implicitly in societies even when prejudice is modeled explicitly as an out-group phenomenon, although the severity of this effect on the creation of disparity between groups is lower.} }% } \end{center} \subsection{Majorities and Minorities} \label{subsec:3.2} \begin{figure}[h] \centering \begin{minipage}[c]{.45\textwidth} \centering \includegraphics[width=\textwidth]{fig_3} \captionof{figure}{\textit{Prosperity} ratios for multiple configurations of skewed societies} \label{fig:4.1.3} \end{minipage}\qquad \begin{minipage}[c]{.45\textwidth} \centering \vspace{1.5em} \includegraphics[width=0.80\textwidth]{fig_4} \captionof{figure}{\textit{Prosperity} variations in a skewed society} \label{fig:4.1.4} \end{minipage} \end{figure} We test the effect that the relative size of a group has on its relative \textit{prosperity}, and analyze the impact of the existence of majorities and minorities in society. For this, we conduct a series of experiments for societies having two groups $G_1$ and $G_2$, and study changes in the ratio of the average \textit{prosperity} of these groups with changes in skew between their sizes. 
We emulate societies with four prejudice configurations of our groups: Both $G_1$ and $G_2$ prejudiced; only $G_1$ prejudiced; only $G_2$ prejudiced; and both $G_1$ and $G_2$ unprejudiced. For each of these configurations, we run simulations for 5 different splits of agents between these groups: 500-500; 600-400; 700-300; 800-200; 900-100. The values of the ratio of the average \textit{prosperity} of groups $G_1$ and $G_2$ from these experiments are plotted in Fig \ref{fig:4.1.3}. The figure shows that while skews in size have no impact on \textit{prosperity} when neither group is prejudiced (orange), we can observe a significant increase in \textit{prosperity} of the agents in the majority group over those in the minority group in the configurations where the majority is prejudiced (red, blue). A relative increase in prosperity of the majority can also be observed in the case when only the minority is prejudiced (green), although this is much more subtle. However, even with a 9:1 skew in group size in favor of the unprejudiced group, the prejudiced group still maintains a higher prosperity level. This brings us to the following result: \begin{center} \fbox{% \parbox{0.8\columnwidth}{% \textbf{R3.} \emph{Skew in group size in prejudiced societies results in a relative increase in prosperity of the majority. This skew creates a disparity in favor of the majority group when it is prejudiced, but cannot overcome the existing disparity from the prejudice of the minority group when it is unprejudiced.} }% } \end{center} We further test this result in multi-group societies: we configure our model with 4 groups $G_1, G_2, G_3, G_4$, with every group prejudiced against every other group, having agents distributed across them in the ratio $1:2:3:4$. Fig \ref{fig:4.1.4} shows the average \textit{prosperity} of each of these groups, and we observe that the \textit{prosperity} increases with an increase in group size. 
Hence our model and the derived result R3 extend to the known idea of majorities in a society dominating minorities, in the case where those majorities are prejudiced against the minorities. \subsection{Presence of Renegades} \label{subsec:3.3} \begin{figure}[h] \centering \begin{minipage}[t]{.45\textwidth} \centering \includegraphics[width=0.85\textwidth]{fig_5} \captionof{figure}{Average \textit{Prosperity} levels in a multi-group society with groups containing \textit{renegades}. Here $G_4$ and $G_5$ have \textit{renegades}.} \label{fig:4.2.2} \end{minipage}\qquad \begin{minipage}[t]{.45\textwidth} \centering \includegraphics[width=0.85\textwidth]{fig_6} \captionof{figure}{Average prejudice levels in a two-group society with \textit{renegades}} \label{fig:4.2.1} \end{minipage} \end{figure} In societies, alongside prejudice, there can also exist an inherent and collective perception of some groups being superior to others. Theories suggest that in the perceived inferior groups in such societies, there may exist certain individuals who exercise out-group favoritism more than in-group favoritism~\cite{dasgupta_implicit_2004}. These individuals have been termed \textit{renegades}~\cite{tajfel1974}, and their impact on societies has not been addressed in existing approaches to modeling prejudice, leaving questions surrounding such agents largely unanswered. In our framework, we model such agents as modified prejudiced agents whose prejudice is directed towards their own in-group rather than any other out-group. We conduct experiments to analyze the effect of the presence of renegade agents on the overall performance of the group they belong to, and also study how their prejudice levels and those of agents around them evolve over time. We run initial experiments on societies having two groups, $G_1$ and $G_2$, with each group being prejudiced against the other. 
While all agents in $G_1$ are prejudiced against $G_2$, some fraction of agents in $G_2$ are initialized as \textit{renegades}. When creating $G_2$ with only 10\% \textit{renegades}, we observe a 6.5\% decrease in the overall \textit{prosperity} of $G_2$ relative to $G_1$, and this number increases to 12.7\% when the population of renegades increases to 20\%. Moving to the multi-group case, we initialize the model with 5 groups $G_1, ..., G_5$, with 10\% of the prejudiced agents in $G_4$ and $G_5$ being \textit{renegades}. Fig \ref{fig:4.2.2} shows the average payoffs of only the prejudiced agents in each group, along with that of all the renegades $(R)$ in our model. We observe that while the renegades themselves have the poorest performance of all agents, their presence even degrades the performance of the other prejudiced agents in $G_4$ and $G_5$. We also track how the average prejudice values change over time in the two-group experiment when 10\% of agents in $G_2$ are renegades. As shown in Fig \ref{fig:4.2.1}, while there is no significant evolution in the prejudice levels of $G_1$, agents in $G_2$ show a decreasing prejudice level over time, and this trend mirrors the prejudice level of the renegades themselves. These findings bring us to the following result: \begin{center} \fbox{% \parbox{0.8\columnwidth}{% \textbf{R4.} \emph{Renegades experience lower prosperity levels in society and they also bring about a reduction in prosperity for their fellow non-renegade group members. 
Moreover, their existence implies conflicting prejudice orientations within a group, resulting in a decrease of both out-group prejudice in prejudiced agents and in-group prejudice in renegades.} }% } \end{center} \subsection{Impact of Prejudice Levels Across \textit{Societies}} \label{subsec:3.4} \begin{figure}[h] \centering \begin{minipage}[t]{.45\textwidth} \centering \includegraphics[width=0.85\textwidth]{fig_7} \captionof{figure}{\textit{Prosperity} ratio of prejudiced society $\mathbb{S}_1$ with unprejudiced society $\mathbb{S}_2$ for different concentrations of prejudiced agents in $\mathbb{S}_1$} \label{fig:4.3.1} \end{minipage}\qquad \begin{minipage}[t]{.45\textwidth} \centering \vspace{-13.7em} \include{tables/table_4} \vspace{4.85em} \captionof{table}{Average \textit{prosperity} values in societies with different number of groups containing prejudiced agents} \label{table:tab2} \end{minipage} \end{figure} As in Section \ref{subsec:3.1}, we analyze the effect of the level of prejudice, but instead of comparing different groups within a society, we test this impact across different \textit{societies}. We run experiments comparing two societies, $\mathbb{S}_1$ and $\mathbb{S}_2$, each having 1000 agents. $\mathbb{S}_2$ is initialized with all agents as unprejudiced, whereas $\mathbb{S}_1$ is initialized with 5 different fractions of agents as prejudiced across 5 experiments: 0.2, 0.4, 0.6, 0.8, 1.0. Fig \ref{fig:4.3.1} plots the ratio of the average \textit{prosperity} of $\mathbb{S}_2$ to that of $\mathbb{S}_1$ for all 5 of these experiments. We observe that the average payoff of $\mathbb{S}_1$ decreases relative to that of $\mathbb{S}_2$ as the percentage of prejudiced agents in $\mathbb{S}_1$ increases. 
This leads us to our final result: \begin{center} \fbox{% \parbox{0.8\columnwidth}{% \textbf{R5.} \emph{As the number of prejudiced agents in a society increases, the average prosperity of that society as a whole decreases.} }% } \end{center} We test this result for societies having more than two groups: we initialize a society with 5 groups, $G_1, G_2, G_3, G_4, G_5$, where each group has 200 agents. We start with the case where all groups are unprejudiced, and then run the model 5 more times, with one more group becoming prejudiced in each subsequent run. Table \ref{table:tab2} lists the average payoff value achieved by agents in each of these societies. We observe approximately a 20\% drop in \textit{prosperity} between a society with no prejudiced agents and a society with all agents prejudiced. These findings reinforce our result R5, and are consistent with the broad understanding of prejudice as a detriment to society. \section{Discussions} Experimental studies on prejudice, its origin, and its impact on people and groups exist dating back as far as 1970 \cite{tajfel1970}. Tajfel's experiments, which were conducted on schoolboys aged 14 and 15, involved dividing the subjects into two groups based on flimsy and unimportant criteria, and then asking each subject to reward (in the form of money) some randomly chosen members that were explicitly labeled as belonging either to `their group' or to the `other group'. The results showed that even though, rationally, the best choices for individual subjects were to maximize the overall joint profit of all the boys combined, they instead made choices that favored the profits of their in-group over the out-group. The experiments noted that the subjects continued to make choices that were driven by an objective of maximizing the disparity between the profits of the groups, even if this disparity came at the cost of their own group getting lower profits than the theoretical maximum. 
Our results \textbf{R1} and \textbf{R5} corroborate the findings of these experiments, and we further demonstrate that this behavior exists in prejudiced societies even when the number of groups exceeds two. There also exist studies that do not regard prejudice as an abstract intergroup entity, but instead measure the impact of prejudicial discrimination, particularly in the context of interracial interactions \cite{crosby1980, Dovidio2002}. However, the implicit attitude of the prejudiced individuals to maximize the prosperity of their in-group remains a common denominator across all these studies. The fact that in-group favoritism (as demonstrated in result \textbf{R2}) and an objective of maximizing in-group profits is an emergent behavior of the agents in our model speaks to the efficacy of our modeling methodology. Studies with a similar experimental setup to Tajfel's original work were published in 2001, with the objective of studying, amongst other things, the in-group satisfaction and discriminatory attitudes in individuals when there existed a significant imbalance in the sizes of the groups \cite{Leonardelli2001}. This study reached a similar conclusion to Tajfel's work, as the subjects continued to make discriminatory choices to benefit their in-group over the out-group, even when they were members of the minority group. This result is also in line with our findings, as we show that prejudiced minorities achieve higher average prosperity levels when paired up with a non-prejudiced majority out-group (Fig. \ref{fig:4.1.3}). \cite{Leonardelli2001}, as well as other works, however, makes no measurements to study a potential correlation between a group's size and its performance. This is not necessarily surprising, since in most cases it is very difficult and sometimes even highly impractical to design and carry out experiments that test in isolation every such targeted hypothesis. 
This is where computational modeling, like the one presented in this work, is able to stand out over traditional experimentation, as it becomes possible to tailor experiments that test such hypotheses. We test the reasonably well-accepted wisdom of majorities being more socially prosperous than minorities through the experiments presented in Section \ref{subsec:3.2}. As shown through result \textbf{R3}, we find that the relative size of a group does play a role in its average prosperity levels in both two-group and multi-group societies. However, this remains true only when the groups in question are prejudiced and possess discriminatory attitudes, thus further highlighting the importance of isolating and identifying prejudice when studying any intergroup social phenomenon. Although already well theorized by then \cite{Tajfel1971}, renegade agents, or agents that have a prejudicial attitude towards members of their in-group rather than their out-group, were first experimentally observed in 1980 when some individuals in a study by \cite{crosby1980} made decisions in certain social situations that benefited their racial out-group rather than their in-group. Existing studies and experiments on individuals possessing attitudes of out-group favoritism and self-stereotyping indicate a causal relationship between such attitudes and a negative impact on intellectual performance \cite{spicer}. These agents have also been modeled computationally in the past in the context of studying the role they play in bringing about polarization in societies \cite{FLACHE2018}. Experiments with these agent types have shown that even in low concentration (20 percent), they can significantly influence the members of their in-group to have a more positive outlook toward members of the out-group and reduce the overall out-group hatred. 
Similar results on the impact of renegades on opinions are obtained from our model as well, as shown by the decrease in prejudice levels of the non-renegade members belonging to the renegades' in-group (Fig \ref{fig:4.2.1}). Existing models, however, exclusively study the role these agent types play in changing the opinions of other agents, and fail to capture and predict their potential impact on the performance (or prosperity) of their in-group in social interactions. \cite{dasgupta_implicit_2004} hypothesizes that the existence of in-group prejudice may negatively impact the performance of both the self-stereotyping individuals and their in-group. Our model allows us to design experiments that can isolate and measure the impact of renegade agent types which lets us not only confirm, but also further refine existing hypotheses. In our simulations, these agent types experience the lowest prosperity levels in society, and can lead to decreased prosperity for the rest of their in-group compared to the non-self-stereotyping out-group(s) in both the two-group and multi-group settings (Section \ref{subsec:3.3}). The group division methodology deployed by \cite{tajfel1970} for his experimental studies has since been widely accepted in the research community and is referred to as the minimal group paradigm \cite{Tajfel1971}. What these experiments highlight is that prejudicial attitudes can often emerge implicitly as a side-effect of group division itself, and may not necessarily stem only from deep-rooted issues such as race or gender. In experimental studies of any intergroup phenomenon, it is nearly impossible to isolate the effect of the phenomenon under study from the implicit prejudices that may arise between the groups. Therefore, in computational modeling, in order to accurately emulate real-world outcomes, it becomes critical to account for this implicit prejudice. 
This is where the model presented by us in this work can be deployed by researchers in conjunction with their own for the phenomenon under study, and thus either incorporate or remove the impact of implicit prejudices. \section{Conclusion} Prejudice is crucial in understanding multi-group dynamics as it provides insights into the kind of inter-group relations prevalent in a social setting. Our framework introduces an agent type, the prejudiced agent, to analyze the relationship between prejudice and prosperity by creating societies containing groups and factions. Previous studies on modeling out-group prejudice focus more on the evolution and propagation of prejudice in society~\cite{Whitaker2018}. In this work, we present a model that not only tracks the changes in prejudice levels of individuals but also the effect it can have on an agent's individual and group prosperity. We show that even when modeling prejudice as out-group suppression, its existence results in the implicit emergence of in-group promotion, making these concepts impossible to separate from each other. Our results suggest that while prejudiced agents themselves accumulate relatively higher prosperity levels, their presence reduces the prosperity of the society as a whole. We also show that prejudiced groups in a society maintain higher levels of prosperity even when they are a minority in the society. The model proposed in this work can serve as a strong foundation for conducting meaningful simulation-based studies on prevalent social issues. By appropriate instantiation of prejudiced agents, factions and groups, one can analyze the effects of prejudice in interactions between people of different races, nationalities, etc. Much research has already been conducted on studying and preventing social issues such as racism, which have continued to plague our society even in this modern era. 
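The mechanism summarized in this conclusion, out-group suppression giving rise to an emergent in-group advantage, can be illustrated with a deliberately minimal toy simulation. The interaction rules, payoff values, and group sizes below are our own illustrative assumptions and are not the actual model used in this work:

```python
import random

random.seed(0)

class Agent:
    def __init__(self, group, prejudiced):
        self.group = group
        self.prejudiced = prejudiced
        self.prosperity = 0.0

def interact(a, b):
    # Baseline payoff for a cooperative pairwise interaction (assumed value).
    pay_a, pay_b = 1.0, 1.0
    # A prejudiced agent withholds part of an out-group partner's share:
    # prejudice is modeled purely as out-group suppression.
    if a.prejudiced and a.group != b.group:
        pay_b *= 0.5
    if b.prejudiced and b.group != a.group:
        pay_a *= 0.5
    a.prosperity += pay_a
    b.prosperity += pay_b

# Two-group society: group 0 is prejudiced, group 1 is not.
agents = [Agent(0, True) for _ in range(50)] + [Agent(1, False) for _ in range(50)]
for _ in range(10000):
    a, b = random.sample(agents, 2)
    interact(a, b)

mean = lambda g: sum(x.prosperity for x in agents if x.group == g) / 50
print("prejudiced group mean:", mean(0))
print("non-prejudiced group mean:", mean(1))
```

Under these toy rules the suppressing group necessarily ends up relatively better off on average, mirroring the qualitative in-group-advantage result stated above; the absolute numbers carry no meaning.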
However, when it comes to the testing and applicability of such theoretical solutions, there is a clear lacuna due to the difficulty inherent in conducting social experiments. We believe that our model can form a basis by which these theories can be easily simulated and tested. Existing experiments on intergroup phenomena follow Tajfel's minimal group paradigm, which assumes group alignment to be a one-dimensional trait. This means that existing studies do not account for two agents that may be part of the same group for an aspect (e.g. politics) and at the same time belong to different groups in other respects (e.g. race). When such real-world studies are conducted, this model can be correspondingly enhanced. This model may be enriched with economic aspects for the purpose of studying other social phenomena of interest, such as the distribution of wealth in a prejudiced society. An agent in a prejudiced environment would undergo preferential interactions in the sense that it would act upon its prejudice to decide its partners for economic transactions. Such an economic model could then be used to conduct extensive experiments to learn about the effects of prejudice on the social status of a person by taking the wealth accumulated by an agent as a proxy for its social status. \bibliographystyle{unsrt}
If you need to close a street, alley or sidewalk, please see the form below. A dumpster permit is needed if the dumpster will be sitting on a sidewalk or a street. Please see the form below. A home that has been vacant for 6 months or more is eligible for a garbage exemption. Please see the form below. Handicap parking is available to residents of Chillicothe. Please see the form below.
Q: What are the rules of differentiation?

A: Rules for differentiation:
- General rule for differentiation: $\frac{d}{d x}\left[x^{n}\right]=n x^{n-1}, \text{ where } n \in \mathbb{R} \text{ and } n \neq 0$
- The derivative of a constant is equal to zero: $\frac{d}{d x}[k]=0$
- The derivative of a constant multiplied by a function is equal to the constant multiplied by the derivative of the function: $\frac{d}{d x}[k \cdot f(x)]=k \frac{d}{d x}[f(x)]$
- The derivative of a sum is equal to the sum of the derivatives: $\frac{d}{d x}[f(x)+g(x)]=\frac{d}{d x}[f(x)]+\frac{d}{d x}[g(x)]$
- The derivative of a difference is equal to the difference of the derivatives: $\frac{d}{d x}[f(x)-g(x)]=\frac{d}{d x}[f(x)]-\frac{d}{d x}[g(x)]$
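As a quick sanity check, the power, constant, constant-multiple, and sum rules above can be verified numerically with a central finite difference (a sketch of ours, not part of the original answer):

```python
def deriv(f, x, h=1e-6):
    # Central finite-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# Power rule: d/dx[x^3] = 3x^2, so the derivative at x = 2 is 12.
assert abs(deriv(lambda x: x**3, 2.0) - 12.0) < 1e-4

# Constant rule: d/dx[7] = 0.
assert abs(deriv(lambda x: 7.0, 1.5)) < 1e-6

# Constant-multiple and sum rules: d/dx[5x^2 + x^3] = 10x + 3x^2.
g = lambda x: 5 * x**2 + x**3
assert abs(deriv(g, 2.0) - (10 * 2.0 + 3 * 2.0**2)) < 1e-4

print("all rules verified numerically")
```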
Španović (Шпанович) is a surname. Notable bearers: Ivana Španović, a Serbian track-and-field athlete; Ilija Španović, a National Hero of Yugoslavia; Tomica Španović, a National Hero of Yugoslavia.
Q: How to add "Open terminal here" to Nautilus' context menu? I am working on Ubuntu 12.04 64bit. I want to add "Open terminal here" to Nautilus context or right-click menu but it tries to download 32bit version from Internet. A: Just use: sudo apt-get install nautilus-extension-gnome-terminal and Logout/Login or reboot. A: I have just installed Ubuntu 14.04 Desktop edition today 07-18-2014, and all I had to do to get the command line option in Nautilus was the following in a terminal: sudo apt-get install nautilus-open-terminal nautilus -q A: nautilus-open-terminal and nautilus-actions packages are available in Universe repository of Ubuntu 14.04. So run the below commands to enable universe repository and also to install above mentioned packages. sudo add-apt-repository universe sudo apt-get update sudo apt-get install nautilus-open-terminal sudo apt-get install nautilus-actions Finally run nautilus -q command to quit nautilus.Now you can be able to see Open in terminal option on right-clicking. A: You have to install the nautilus-open-terminal package from the universe repositories for Ubuntu versions up to Ubuntu 15.04: sudo apt-get install nautilus-open-terminal If you want to install it with apturl, use this URL: apt://nautilus-open-terminal Then: nautilus -q In order to restart Nautilus In Ubuntu 15.10, the functionality is already included in nautilus! A: You'll need to install nautilus-admin (make sure to install the additional files) to have the right click option and others as well, since nautilus-open-terminal is no longer maintained. 
A: If you are using Ubuntu 18.04 or newer: sudo apt install nautilus-admin A: Here is my script to open terminal in the current directory, I built my own after the open-terminal plugin stopped working for me #!/bin/bash ################################## # A nautilus script to open gnome-terminal in the current directory # place in ~/.gnome2/nautilus-scripts ################################## # Remove file:// from CURRENT_URI gnome-terminal --working-directory=`echo "$NAUTILUS_SCRIPT_CURRENT_URI" | cut -c 8-` PS: Here is some bonus info Assigning a shortcut to the script * *Add executable script to ~/.gnome2/nautilus-scripts *Wait some time - nautilus regenerates accels file *Edit file ~/.gnome2/accels/nautilus *Find line similar to this one: ; (gtk_accel_path "<Actions>/ScriptsGroup/script_file:\\s\\s\\shome\\sgautam\\s.gnome2\\snautilus-scripts\\sopen-terminal" "") * *Remove comment (semicolon) and specify shortcut like this: (gtk_accel_path "<Actions>/ScriptsGroup/script_file:\\s\\s\\shome\\sgautam\\s.gnome2\\snautilus-scripts\\sopen-terminal" "<Primary><Shift>t") * *Save file. *Logout - login. A: I used @Gautam's solution until I found it will not work (I mean a script itself) if path contains non-ascii characters because it's URL encoded. Here is my little fix which is working at least for me. So, the script should look like this: #!/usr/bin/gnome-terminal According to gnome-terminal docs, when you execute this: cd path/to/dir gnome-terminal gnome-terminal will use path/to/dir as working directory, which explains why that script works. A: Do sudo apt-get update and try again. Or cd /tmp wget http://mirrors.kernel.org/ubuntu/pool/universe/n/nautilus-open-terminal/nautilus-open-terminal_0.20-1_amd64.deb sudo dpkg -i nautilus*deb sudo apt-get install -f A: * *Find the .bashrc file in Home. *Open it with any text editor. *Add a line at the end: cd $PWD *Save it. 
*Close all instances of Nautilus *Now, when you open Nautilus you will get to see the "Open in terminal" option in the right-click menu and it loads the current directory path when clicked. A: For recent versions of Ubuntu (eg. 18)... create/save this script in: ~/.local/share/nautilus/scripts/ Note: you need to also chg the permission of this new file to allow execution It will add a Scripts right click context menu item (with the name given e.g. 'open-in-terminal.sh') for any file or directory you click in nautilus. #!/bin/bash # # When a directory is selected, go there. Otherwise go to current # directory. If more than one directory is selected, show error. if [ -n "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS" ]; then set $NAUTILUS_SCRIPT_SELECTED_FILE_PATHS if [ $# -eq 1 ]; then destination="$1" # Go to file's directory if it's a file if [ ! -d "$destination" ]; then destination="`dirname "$destination"`" fi else zenity --error --title="ERROR! Open terminal here" \ --text="Plz only select one directory." exit 1 fi else destination="`echo "$NAUTILUS_SCRIPT_CURRENT_URI" | sed 's/^file:\/\///'`" fi # It's only possible to go to local directories if [ -n "`echo "$destination" | grep '^[a-zA-Z0-9]\+:'`" ]; then zenity --error --title="ERROR! Open terminal here" \ --text="Sorry, only local directories can be used." exit 1 fi gnome-terminal --working-directory="$destination" Based on this source: https://help.ubuntu.com/community/NautilusScriptsHowto/SampleScripts#Open_terminal_here A: This link provides the best working solution for adding the feature "Open terminal here" as context command menu for a folder. http://www.n00bsonubuntu.net/content/add-open-terminal-here-to-file-menu-ubuntu-14-04/
This article by Sabaa Notta was originally published on The MasterCard Foundation website and is republished with permission. Somewhere along Banana Road in Accra, the bustling capital of Ghana, sits the office of the Meltwater Entrepreneurial School of Technology (MEST). Its purpose is to provide opportunity and demonstrate that a new generation of successful software entrepreneurs can originate from Africa. This is also where the first launch of the 2017/18 Data Hackathon for Financial Inclusion (DataHack4FI) took place in November 2017. DataHack4FI is a competition for programmers, data scientists, enthusiasts, and fintechs to build solutions to the real problems of poor people using data as a tool. More specifically, how to understand data, make it usable, grow with it, and use it in an effective manner. The competition seeks to advance collaboration among young, innovative groundbreakers, and financial sector players. The Ghana launch kicked off the 2017/2018 season of DataHack4FI, which will also take place in five other countries: Kenya, Rwanda, Tanzania, Uganda, and Zambia. The final competition will be held in Rwanda in May 2018, where finalists will have a chance to compete for US$25,000 in funding. DataHack4FI was created by insight2impact (i2i). Started in 2015, i2i is a collaboration between Cenfri and Finmark Trust in partnership with the Mastercard Foundation and the Bill & Melinda Gates Foundation. The aim is to address the gap that exists between decisions made in the financial inclusion space and available data. There are many places where those decisions are made and data is needed. According to InterMedia's Financial Inclusion Insights, Ghana is home to more than 27 commercial banks, 60 non-bank financial institutions, 138 rural and community banks, 503 licenced microfinance institutions and hundreds of licenced individuals and susu collectors. 
Moreover, the Bank of Ghana signed the Maya Declaration in 2012 and joined the Better Than Cash Alliance in 2014 to commit to enabling digital financial services in the country. MEST is a rigorous 12-month training programme for entrepreneurs offering opportunities in seed funding and incubation resources. As i2i's country partner in Ghana, MEST offers their Entrepreneurs-in-Training (EITs) a chance to also partake in the DataHack4FI. Many of the young people present at the Ghana launch do not have the financial means to take their ideas to scale or to afford business development services. DataHack4FI provides them an environment to partner with others, share resources, gain mentorship, and find a platform to showcase their ideas. Among the individuals participating was Heather Mavunga, an Entrepreneur-in-Training at MEST with interests in blockchain technology and cryptocurrencies. Through MEST and the DataHack4FI network, Heather hopes to learn how to manage tech products and services and how to acquire customers as a startup. Also among those present was Adam Muhammed Muhideen, CEO and CTO of TroTroTractor, a powerful platform that connects farmers and tractor operators. This agribusiness allows tractor owners to monitor movements and the work of their equipment. Adam is a part of the MEST community and started TroTroTractor from proceeds of winning a previous competition. Akin-Awokoya Emmanuel, Co-Founder of InvestXD, discussed how investing money for young adults is a challenge, with limited understanding of investments and a lack of options available to people in his situation. This is what led to the creation of InvestXD, an online tool that makes it easy for individuals in Ghana to learn about investment, discover investment options with the best rate, invest, and then monitor the performance of their portfolio. 
Through continued support by the government and initiatives such as DataHack4FI, the development community can collectively provide an opportunity to listen to the voice of the youth, understand what they are doing to address financial inclusion challenges, and catalyze partnerships that best utilise the strengths of various stakeholders in the ecosystem. The next DataHack4FI launch will take place in Rwanda in January 2018. For more information on how to take part, and how to follow the trajectories of those competing, visit the DataHack4FI website.
The Bay of Algiers (baie d'Alger) is the bay around which Algiers, the capital of Algeria, extends. Admiral Ernest Mouchez, a former director of the Paris Observatory, declared that only the bay of Rio de Janeiro could be compared with the beauty of the panorama of the Bay of Algiers. Geography The Bay of Algiers forms an indentation carved into the shoreline between Cape Matifou (town of Bordj El Bahri) to the east and Cape Caxine to the west. It has the shape of an almost perfect semicircle. The surface area between the line of about joining the two capes and the peripheral contour of is . Its geographical configuration is almost insular: it is composed of a flat central area and a coastal strip surrounded by a chain of mountains and hills.
Q: Concave/Convex $e^{-x}-e^{-2x}$ My main goal is to see here if my discussion on concave/convex is good enough. Problem Find the extreme points (max and/or min) for $$f(x)=e^{-x}-e^{-2x}.$$ Also discuss concavity/convexity. Attempt Extreme points Solve $f'(x) = 0$: $-e^{-x} + 2e^{-2x} = 0.$ Multiply by $-e^{2x}$ and get $e^x - 2 = 0 \quad \Rightarrow \quad e^x = 2 \quad \Rightarrow \quad x = \ln(2).$ Then: $f(\ln(2)) = e^{-\ln 2} - e^{-2\ln(2)} = e^{\ln(2^{-1})} - e^{\ln(2^{-2})} = e^{\ln(\frac{1}{2})} - e^{\ln(\frac{1}{4})} = \frac{1}{2} - \frac{1}{4} = \frac{1}{4}.$ There is an extreme point at $\left(\ln(2), \frac{1}{4}\right).$ Second derivative test: $f''(x) = e^{-x} - 4e^{-2x}.$ $f''(\ln(2)) = e^{-\ln(2)} - 4e^{-2\ln 2} = e^{\ln(2^{-1})} - 4e^{\ln(2^{-2})} = e^{\ln(\frac{1}{2})} - 4e^{\ln(\frac{1}{4})}$ $= \frac{1}{2} - 4\left(\frac{1}{4}\right) = \frac{1}{2} - 1 = -\frac{1}{2}.$ In $x_0=\ln(2)$ we have $f'(x_0)=0$ and $f''(x_0)<0.$ Thus, $\left(\ln(2), \frac{1}{4}\right)$ is a maximum point. Concave/Convex $f''(\ln(4)) = 0$. And $f''$ is negative to the left of $\ln(4)$, and positive to the right, so $f$ is concave for $x<\ln(4)$ and convex for $x>\ln(4)$. A: My main goal is to see here if my discussion on concave/convex is good enough. Yes, your discussion is very good and very clear. A: $f''(\ln(4))=0$, and $f''$ is negative to the left of $\ln(4)$ and positive to the right.
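The attempt above can also be double-checked numerically; the finite-difference sketch below confirms the stationary point at $\ln(2)$ with $f(\ln 2)=\frac{1}{4}$ and $f''(\ln 2)=-\frac{1}{2}$, and the inflection at $\ln(4)$:

```python
import math

f = lambda x: math.exp(-x) - math.exp(-2 * x)

def d1(x, h=1e-5):
    # Central difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(x, h=1e-4):
    # Second-order central difference approximation of f''(x).
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x0 = math.log(2)   # claimed maximum
xi = math.log(4)   # claimed inflection point

assert abs(f(x0) - 0.25) < 1e-12        # f(ln 2) = 1/4
assert abs(d1(x0)) < 1e-6               # f'(ln 2) = 0
assert abs(d2(x0) + 0.5) < 1e-4         # f''(ln 2) = -1/2, so a maximum
assert abs(d2(xi)) < 1e-4               # f''(ln 4) = 0
assert d2(xi - 0.5) < 0 < d2(xi + 0.5)  # concave left of ln 4, convex right
print("extremum and inflection confirmed")
```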
Loud Flash: British punk on paper News | 08.09.10 | by Fact Loud Flash: The Art Of Punk On Paper is a new exhibition opening at the Haunch of Venison gallery in London on September 24. The exhibition, which runs until October 30, is focussed on iconic poster art from the punk era. Curator Toby Mott contends that the poster was of special importance to punk, given TV and radio's general resistance to its snotty, anti-establishment charms. Though the now famous work of Jamie Reid and Linder Sterling (for the Sex Pistols and Buzzcocks respectively) is amply represented in the show, there are also a considerable number of pieces by anonymous artists. Particularly interesting is the illustration of how both the National Front and Rock Against Racism sought to adopt punk's stark visual language to attract youths to support their vigorously opposed political causes. Mott's own personal collection forms the backbone of Loud Flash, and the exhibition also includes a carefully selected array of memorabilia from the Queen's Silver Jubilee, which coincided with the explosion of punk in '77. "I began this collection as a teenager in the 1970s," says Mott. "I loved punk music and the attitude that went with it, but I was equally taken with the subversive way the bands promoted themselves – Jamie Reid's famous Sex Pistols poster of the Queen with a safety pin through her nose being a stand-out example. "But even then it was apparent to me that what was going on was much more than a musical movement. This exhibition seeks to capture punk's cataclysmic collision with the cultural, social and political values of the time and show the enduring legacy it left in its wake." There will also be a 128-page publication to accompany the exhibition, featuring essays by Mott, Simon Ford and Susanna Greeves among others.
{"url":"https:\/\/www.groundai.com\/project\/azimuthal-asymmetries-in-qcd-hard-scattering-infrared-safe-but-divergent\/1","text":"1 Introduction\n\nZU-TH 04\/17\n\nAzimuthal asymmetries in QCD hard scattering:\n\n[0.2cm] infrared safe but divergent\n\nStefano Catani Massimiliano Grazzini and Hayk Sargsyan\n\nINFN, Sezione di Firenze and Dipartimento di Fisica e Astronomia,\n\nUniversit\u00e0 di Firenze, I-50019 Sesto Fiorentino, Florence, Italy\n\nPhysik-Institut, Universit\u00e4t Z\u00fcrich, CH-8057 Z\u00fcrich, Switzerland\n\nAbstract\n\nWe consider high-mass systems of two or more particles that are produced by QCD hard scattering in hadronic collisions. We examine the azimuthal correlations between the system and one of its particles. We point out that the perturbative QCD computation of such azimuthal correlations and asymmetries can lead to divergent results at fixed perturbative orders. The fixed-order divergences affect basic (and infrared safe) quantities such as the total cross section at fixed (and arbitrary) values of the azimuthal-correlation angle . Examples of processes with fixed-order divergences are heavy-quark pair production, associated production of vector bosons and jets, dijet and diboson production. A noticeable exception is the production of high-mass lepton pairs through the Drell\u2013Yan mechanism of quark-antiquark annihilation. However, even in the Drell\u2013Yan process, fixed-order divergences arise in the computation of QED radiative corrections. We specify general conditions that produce the divergences by discussing their physical origin in fixed-order computations. We show lowest-order illustrative results for asymmetries (with ) in top-quark pair production and associated production of a vector boson and a jet at the LHC. The divergences are removed by a proper all-order resummation procedure of the perturbative contributions. Resummation leads to azimuthal asymmetries that are finite and computable. 
We present first quantitative results of such a resummed computation for the asymmetry in top-quark pair production at the LHC.\n\nMarch 2017\n\n## 1 Introduction\n\nAngular distributions of final-state particles produced by high-energy collisions are known to be relevant observables for the understanding of the underlying dynamics of their production mechanism. In this paper we deal with azimuthal-angle distributions and related asymmetries.\n\nIn the case of collisions that are produced by spin polarized particles, azimuthal angles with respect to the spin direction can be specified and measured: the analysis of the ensuing azimuthal correlations is a much studied topic and the subject of intense ongoing investigations (see, e.g., Ref.\u00a0[1] and references therein).\n\nAzimuthal correlations in spin unpolarized collisions are less studied. A relevant exception is the lepton azimuthal distribution [2] of high invariant mass lepton pairs that are produced in hadron collisions through the Drell\u2013Yan (DY) mechanism of quark-antiquark annihilation. Angular distributions for the DY process are a well known topic that has been deeply studied at both the theoretical and experimental levels (see, e.g., Refs.\u00a0[3, 4, 5] and references therein).\n\nIn the case of spin unpolarized collisions, much attention to azimuthal asymmetries has been devoted in recent years within the context of a QCD framework based on the introduction of non-perturbative transverse-momentum dependent (TMD) distributions of quarks and gluons inside the colliding unpolarized hadrons. In particular, the TMD distribution of linearly-polarized gluons [6] inside unpolarized hadrons plays a distinctive role since it leads to specific and modulations of the dependence on the azimuthal-correlation angle . 
TMD distribution studies of such modulations have been performed, for instance, for the production process of heavy quark-antiquark () pairs [7] and for the associated production of a virtual photon and a jet (\u00a0jet) [8]. Many more related studies can be found in the list of references of Ref.\u00a0[8].\n\nIn this paper we consider spin unpolarized collisions and, specifically, we focus our discussion on hadron\u2013hadron collisions, although many features that we discuss are generalizable to (and, hence, valid for) lepton\u2013hadron and lepton\u2013lepton collisions. We consider the production of a system of two or more particles with a high value of the total invariant mass, and we examine the azimuthal correlations between the momenta of the system and of one of its particles. Roughly speaking, the relevant azimuthal angle is related to the difference between the azimuthal angles of the transverse momenta of the system and of the particle. Considering high values of the invariant mass of the system (additional kinematical cuts on the momentum of the produced particle can be applied or required), we select the hard-scattering regime of the production process. In this regime, azimuthal correlations are infrared and collinear safe observables [9], and they can be studied and computed by applying the standard QCD framework of factorization in terms of perturbative partonic cross sections and parton distribution functions (PDFs) of the colliding hadrons. Starting from the results in Refs.\u00a0[10, 11] and, especially, from those in Ref.\u00a0[12] and the analysis on the role of soft wide-angle radiation that is performed therein, we develop and present some general (process-independent) considerations on azimuthal correlations and asymmetries.\n\nWe point out with some generality that, in spite of the infrared and collinear safety of azimuthal-correlation observables, their computation at fixed perturbative orders leads to divergences. 
The divergences appear also in basic quantities such as the total cross section at fixed (and arbitrary) values of the azimuthal-correlation angle . Example of processes with fixed order (f.o.) divergences are heavy-quark pair () production, associated production of vector () or Higgs bosons and jets (e.g., \u2018\u2009jet\u2019 with ), dijet and diboson production. We support our theoretical discussion by performing the numerical calculation of modulations (for some values of ) at the lowest perturbative order for some specific processes. The numerical results are found to be consistent with the divergent behaviour if in the case of top-quark pair () production and if in the case of associated \u2018\u2009jet\u2019 production. A noticeable exception to the appearance of QCD divergences is the case of the DY process. However, even in the case of the DY process, we predict the presence of f.o. divergences in the perturbative computation of QED radiative corrections. Moreover, analogous divergences arise in the case of lepton\u2013lepton collisions from the computation of radiative corrections in a pure QED context (e.g., in the QED inclusive-production process , or even in the simpler process ). In summary, processes that at the leading order (LO) are produced with an exactly vanishing value, , of the transverse momentum of the system tend to develop sooner or later (in the computation of QCD and QED radiative corrections at subsequent perturbative orders) f.o. divergences.\n\nA possible reaction to this state of affairs is to advocate non-perturbative strong-interactions dynamics and related non-perturbative QCD effects that can cure the f.o. divergences. This cure would spoil the common wisdom according to which non-perturbative contributions to infrared and collinear safe observables are (expected to be) power suppressed, namely, suppressed by some inverse power of the relevant hard-scattering scale (as set by the high invariant mass of the system ). 
Moreover, non-perturbative strong-interactions dynamics cannot be advocated in the case of f.o. divergences in a pure QED context.

We pursue a more conventional viewpoint within perturbation theory. We examine the origin of the singularities for azimuthal-correlation observables order by order in perturbation theory, and we proceed to resum the singular terms to all orders [11, 12].

The singularities originate from the kinematical region where qT → 0. We identify two sources of singularities. One source [10, 11] is initial-state collinear radiation in the case of systems F that can be produced with vanishing qT by gluon-initiated partonic collisions. The other source [12] is soft wide-angle radiation in the case of systems F that contain colour-charged particles (or electrically-charged particles for QED radiative corrections). Collinear radiation is, per se, responsible for singularities in cos(nφ) (and sin(nφ)) harmonics with n = 2 and n = 4 only [11]. Soft radiation is responsible for singularities that, in principle, can affect harmonics with any (both even and odd) values of n.

At high perturbative orders collinear and soft radiation are mixed up, and the singularities are enhanced by logarithmic contributions. We discuss in general terms how the all-order resummation of the logarithmic contributions leads to (qT-integrated) azimuthal asymmetries that are finite and computable. Such a procedure is effective in both QCD and QED contexts. In the QCD case this does not imply that non-perturbative effects have a negligible quantitative role in the region of very low values of qT, but these effects give small contributions to qT-integrated azimuthal-correlation observables. Within perturbative QCD, the most advanced theoretical treatment of singular azimuthal correlations that is available at present regards the process of heavy-quark pair production [12].
We use the process of top-quark pair production in proton–proton collisions at the LHC as a prototype to present a first illustration of the resummation of singular azimuthal asymmetries at the quantitative level.

A correspondence can be established [10, 11, 13] between the TMD distribution of linearly-polarized gluons and singular azimuthal correlations that are due to initial-state collinear radiation in gluon-initiated partonic subprocesses. Therefore, both the framework used in Refs. [7, 8] and our perturbative QCD treatment lead to corresponding cos(2φ) and cos(4φ) azimuthal modulations. In this respect, the TMD distribution of linearly-polarized gluons can be regarded as the extension of specific features of QCD collinear dynamics from perturbative to non-perturbative transverse-momentum scales. Much discussion of the present paper is related to soft radiation effects that occur in processes with colour-charged final-state particles. Soft radiation produces azimuthal asymmetries (which are singular at f.o. and finite after resummation) for both quark (or antiquark) and gluon initiated subprocesses, and for harmonics with arbitrary even values of n [12] and also with odd values of n. These soft wide-angle radiation effects are unrelated to the TMD distribution of linearly-polarized gluons.

The paper is organized as follows. In Sect. 2 we start our discussion on azimuthal asymmetries and we specify the conditions that lead to divergences in f.o. computations. In Sect. 3 we discuss azimuthal asymmetries in two examples of hadron collider processes, i.e. the production of lepton pairs through the DY mechanism and the production of a top-quark pair, by contrasting the different behaviour of the corresponding azimuthal harmonics. In Sect. 4 we start our analysis of the small-qT limit: in Sect. 4.1 we focus on the small-qT behaviour at f.o. and in Sect. 4.2 we recall the transverse-momentum resummation procedure for the case of azimuthally-averaged cross sections.
In Sect. 5.1 we discuss the origin of singular azimuthal correlations and present illustrative lowest-order results for the 'V jet' production process. In Sect. 5.2 we outline the resummation procedure of singular terms in the case of azimuthally-correlated cross sections and we contrast the small-qT behaviour expected for the resummed cross section with the known behaviour of the azimuthally-averaged transverse-momentum cross section. In Sect. 5.3 we focus on the cos(2φ) harmonic for top-quark pair production and we present first quantitative results of a resummed calculation at next-to-leading logarithmic accuracy. After the matching with the complete next-to-leading order (NLO) result, the resummed computation offers an effective 'lowest-order' prediction for this harmonic. We also comment on the possible role of non-singular terms. In Sect. 6 we summarize our results.

## 2 Azimuthal correlations and asymmetries in fixed-order perturbation theory

Our discussion on azimuthal correlations has a high generality. To simplify the illustration of the key points, we consider the simplest class of processes, in which the produced high-mass system in the final state is formed by only two 'particles' in a generalized sense (point-like particles and/or jets).

We consider the inclusive hard-scattering hadroproduction process

 h1(P1) + h2(P2) → F({p3, p4}) + X ,    (1)

in which the collision of the two hadrons h1 and h2 with momenta P1 and P2 produces the triggered final state F, and X denotes the accompanying final-state radiation. The observed final state F is a generic system that is formed by two 'particles' with four-momenta p3 and p4, respectively. The two particles can be point-like particles or hadronic jets, which are reconstructed by a suitable (infrared and collinear safe) jet algorithm.
As for the case of point-like particles, the most topical process is the production of a high-mass lepton pair through the DY mechanism of quark–antiquark annihilation. We consider many other cases such as, for instance, the production of a photon pair, of a pair of top quark and antiquark or of a pair of vector bosons, in addition to dijet production and associated production processes such as vector boson plus jet. The invariant masses of the two particles play little role in the context of our discussion (and they do not affect any of its conceptual aspects). The system F has total invariant mass M, transverse momentum qT and rapidity y (transverse momenta and rapidities are defined in the centre-of-mass frame of the colliding hadrons). We require that M is large (M ≫ Λ_QCD, Λ_QCD being the QCD scale), so that the process in Eq. (1) can be treated within the customary perturbative QCD framework. We use √s to denote the centre-of-mass energy of the colliding hadrons, which are treated in the massless approximation (P1² = P2² = 0).

The dynamics of the production process in Eq. (1) can be described in terms of five kinematical variables: the total mass M, transverse momentum qT and rapidity y of the system F and two independent angular variables that specify the kinematics of the two particles with respect to the total momentum of F. These two angular variables are a polar-angle variable and an azimuthal-angle variable. To be definite and to avoid the use of 'exotic' variables, we refer to a widely-used set of angular variables and we use the polar angle θ and the azimuthal angle φ (of the particle with momentum p3) in the Collins–Soper (CS) rest frame [2] of the system F. (Since we are dealing with a rest frame of F, the two particles are exactly, by definition, back-to-back in that frame; in particular, their relative azimuthal separation is equal to π.)

The variable φ is the relevant variable for our discussion of azimuthal correlations.
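As a concrete cross-check of these definitions, the system-level variables M, qT and y can be computed directly from the two four-momenta. The short Python sketch below does this for invented illustrative momenta (the numerical values are not taken from any process discussed here):

```python
import math

def system_variables(p3, p4):
    """Invariant mass M, transverse momentum qT and rapidity y of the
    system F, given the four-momenta (E, px, py, pz) of its two particles."""
    E, px, py, pz = (a + b for a, b in zip(p3, p4))
    M = math.sqrt(E * E - px * px - py * py - pz * pz)  # total invariant mass
    qT = math.hypot(px, py)                             # transverse momentum of F
    y = 0.5 * math.log((E + pz) / (E - pz))             # rapidity of F
    return M, qT, y

# toy momenta (illustrative values only): p3 massless, p4 massive
p3 = (50.0, 30.0, 10.0, math.sqrt(50.0**2 - 30.0**2 - 10.0**2))
p4 = (60.0, -25.0, -5.0, 20.0)
M, qT, y = system_variables(p3, p4)
```

Transverse momenta and rapidity are defined in the hadronic centre-of-mass frame, as in the text; the CS-frame angles require, in addition, a choice of axes in the rest frame of F and are not sketched here.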
We remark that φ specifies the azimuth of one of the two particles in the system F with respect to the total momentum of the system. In particular, we also remark that we are not considering the relative azimuthal separation ϕ3 − ϕ4 between the two particles (ϕi, with i = 3, 4, is the azimuthal angle of the corresponding transverse-momentum vector in the centre-of-mass frame of the colliding hadrons). However, we can anticipate (we postpone comments on this point) that our main findings are not specific to the CS frame, and they are equally valid for other azimuthal variables with respect to the system F (for instance, we can consider the azimuthal angle in a different rest frame of F or, simply, the azimuthal difference ϕ3 − ϕ(qT), where ϕ(qT) is the azimuthal angle of the transverse momentum qT in the centre-of-mass frame of the colliding hadrons).

Using the kinematical variables in the CS frame, we can consider azimuthal distributions for the process in Eq. (1). The most elementary azimuthal-dependent observable is the azimuthal cross section at fixed invariant mass,

 dσ / (dM² dφ) ,    (2)

and we can also consider less inclusive observables such as, for instance, the qT-dependent azimuthal cross section in Eq. (3) and the multidifferential (five-fold) cross section in Eq. (4):

 dσ / (dM² dqT² dφ) ,    (3)

 dσ / (dM² dy dqT² dcosθ dφ) .    (4)

All these quantities are related (from the less inclusive to the more inclusive case) through integration over kinematical variables (for instance, the azimuthal cross section in Eq. (2) is obtained by integrating Eq. (3) over qT²) and, in particular, the azimuthal integration of Eq. (2) gives the total cross section (at fixed invariant mass) of the process:

 dσ/dM² = ∫₀^{2π} dφ  dσ/(dM² dφ) .    (5)

Obviously, we can also consider differential cross sections that are integrated over a certain range of values of the invariant mass.

Since all the cross sections that we have just mentioned are infrared and collinear safe quantities [9], they can be computed perturbatively within the customary QCD factorization framework (see Ref. [14] and references therein). The only non-perturbative (strictly speaking) input is the set of PDFs of the colliding hadrons. The PDFs are convoluted with corresponding partonic differential cross sections that can be evaluated as a power series expansion in the QCD running coupling αS.

This perturbative QCD framework is applicable at any finite (and arbitrary) fixed perturbative order. Despite this statement, the first main observation that we want to make is that the f.o. perturbative calculation of the azimuthal distributions can lead to divergent (and, hence, unphysical and useless) results. More specifically, the f.o. calculation of the azimuthal cross section of Eq. (2) gives the following results:

 dσ/(dM² dφ) :  finite (DY) ;  divergent at some f.o. (heavy-quark pairs, 'V jet', dijets, dibosons, …) .    (6)

We mean that the perturbative computation of dσ/(dM² dφ) for the DY process gives a finite result order-by-order in QCD perturbation theory, while in most of the other cases (some of them are listed in the right-hand side of Eq. (6)) the computation gives a divergent (meaningless) result for any value of the azimuthal angle φ starting from some perturbative order. We note that the integration over φ of dσ/(dM² dφ) gives the total cross section (see Eq. (5)), which is known to be finite at any f.o.. Therefore, the divergent behaviour that is highlighted in Eq. (6) originates from the azimuthal-correlation (for a generic multidifferential cross section with azimuthal dependence, such as those in Eqs. (3) and (4), the azimuthal average and the corresponding correlation component are defined analogously to Eq. (7))
component, dσ_corr/(dM² dφ), of the azimuthal cross section:

 dσ_corr/(dM² dφ) ≡ dσ/(dM² dφ) − ⟨ dσ/(dM² dφ) ⟩_av. = dσ/(dM² dφ) − (1/2π) dσ/dM² ,    (7)

where the notation ⟨ … ⟩_av. denotes the azimuthal average and dσ/dM² is the total cross section in Eq. (5).

To understand the origin of the divergent behaviour in Eq. (6), we first comment about kinematics. If the system F has vanishing transverse momentum (qT = 0), the rest frame of F is obtained by simply applying a longitudinal boost to the centre-of-mass frame of the colliding hadrons. Any additional rotation in the transverse plane of the collision leaves the system F at rest and makes the particle azimuthal angle φ ambiguously defined. Owing to the azimuthal symmetry of the collision process, if qT = 0 there is no preferred direction to define φ. In other words, considering the azimuthal angle φ (as defined in the CS frame or any other rest frame of F) and performing the limit qT → 0, we have

 cos φ = cos( ϕ3 − ϕ(qT) ) + O(qT/M) ,    (8)

where ϕ3 and ϕ(qT) are the azimuthal angles of the corresponding transverse-momentum vectors in the centre-of-mass frame of the colliding hadrons. If qT = 0, ϕ(qT) is not defined and, consequently, φ is not (unambiguously) defined. At the strictly formal level, to define the azimuthal correlation we have to exclude the phase space point at qT = 0. If we want to consider integrated azimuthal correlations (such as, for instance, the cross section in Eq. (7)), we have to introduce a minimum value qcut of qT and eventually perform the limit qcut → 0. The divergences that are highlighted in Eq. (6) do not appear by considering azimuthal correlations at fixed and finite values of qT (e.g., the azimuthal-dependent cross sections in Eqs. (3) and (4)).
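The defining property of the correlation component in Eq. (7), namely that it integrates to zero over φ, can be checked numerically. The sketch below uses a toy azimuthal distribution (its modulation coefficients are invented for illustration):

```python
import math

def correlation_component(dsigma, nphi=10_000):
    """Subtract the azimuthal average (as in Eq. (7)) and return the phi
    integral of the full distribution and of its correlation component."""
    dphi = 2.0 * math.pi / nphi
    phis = [(i + 0.5) * dphi for i in range(nphi)]
    total = sum(dsigma(p) for p in phis) * dphi           # analogue of Eq. (5)
    avg = total / (2.0 * math.pi)                         # azimuthal average
    corr_int = sum(dsigma(p) - avg for p in phis) * dphi  # integral of Eq. (7)
    return total, corr_int

# toy distribution with cos(phi) and cos(2 phi) modulations (invented values)
toy = lambda p: 1.0 + 0.3 * math.cos(p) + 0.1 * math.cos(2.0 * p)
total, corr_int = correlation_component(toy)
# total ≈ 2π (the modulations integrate to zero); corr_int ≈ 0
```

The correlation component thus carries all of the azimuthal dependence but none of the total rate, which is why the divergences of Eq. (6) can coexist with a finite total cross section.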
These divergences are related to the limit qcut → 0 after integration of the azimuthal correlations over qT.

Our discussion on the limit qT → 0 reconciles the appearance of divergences in Eq. (6) with the perturbative criterion of infrared and collinear safety at the formal level: no divergence occurs provided qT ≠ 0, namely, provided the azimuthal-correlation observable is well (unambiguously) defined. However, a single phase space point at qT = 0 is physically harmless. Therefore, the integrated azimuthal correlations (i.e., their limiting behaviour as qcut → 0) are finite and measurable quantities. Moreover, they are also finite and measurable if qcut is non-vanishing or, equivalently, if qT is non-vanishing and fixed at an arbitrarily small value. The divergence in Eq. (6) implies that the qT-dependent azimuthal correlations become singular at small values of qT, and that this singular behaviour is not integrable over qT in the limit qT → 0. This unphysical behaviour of f.o. perturbative QCD at small values of qT and the divergence of the integrated azimuthal correlations certainly require some deeper understanding in order to have a QCD theory of physically measurable azimuthal correlations. As a possible shortcut, one can still use f.o. perturbative QCD but avoid the region of small values of qT. In this case one still needs some understanding of the phenomenon in order to assess the extent of the dangerous small-qT region where the f.o. predictions are 'unphysical' or, anyhow, not reliable at the quantitative level.

We note that the kinematical relation in Eq. (8) implies that the azimuthal angle φ in the CS frame has no privileged role in the context of our discussion of azimuthal correlations in the small-qT region. The main features of the small-qT behaviour of azimuthal correlations that are discussed in this paper are basically unaffected by using azimuthal angles as defined in other rest frames of the system F, or by using other related definitions of azimuthal angles.
For instance, one can simply replace the CS frame angle φ with the relative azimuthal angle ϕ3 − ϕ(qT) in the centre-of-mass frame of the colliding hadrons. Alternatively, one can use the difference ϕ3 − ϕ4 between the azimuthal angles (in the centre-of-mass frame of the colliding hadrons) of the transverse-momentum vectors of the two particles. All these definitions of the relevant (for our purposes) azimuthal-angle variable turn out to be equivalent (because of Eq. (8)) in the limit qT → 0 (or at very small values of qT). In the following we continue to refer mainly to the azimuthal angle φ in the CS frame, although all the basic features that we discuss are unchanged by considering the other definitions of the azimuthal angle.

We also note that the discussion that we have presented so far about azimuthal correlations can be generalized in a straightforward way to the case in which the system F is formed by more than two particles. In the multiparticle case, we can simply examine azimuthal correlations that are defined by using the azimuthal angles of the various particles in F in a specified rest frame of F (such as the CS frame), by simply using the various relative azimuthal angles in the centre-of-mass frame of the colliding hadrons, or by using some other related (i.e., equivalent in the limit qT → 0) azimuthal variables. Since the multiparticle case does not present any additional conceptual issues with respect to the two-particle case, in the following (for the sake of technical simplicity) we continue to mostly consider the case of systems F that are formed by two particles.

The f.o. perturbative divergences of the azimuthal correlations originate from QCD radiative corrections due to the inclusive emission in the final state of partons with low transverse momentum (soft and collinear partons). The origin is discussed in more detail in the following Sections of the paper (see, in particular, Sect. 5.1).
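The non-integrable small-qT behaviour responsible for these divergences can be mimicked with a toy model. Purely for illustration, assume a correlation term whose qT² spectrum behaves like 1/qT² down to the cut (this power-like form is an assumed toy shape for the sketch, not the full QCD result). Its integral above qT = rcut·M then grows by a fixed amount for every decade of rcut, so the qcut → 0 limit does not exist:

```python
import math

def toy_correlation_integral(rcut, n=4000):
    """Midpoint-rule evaluation of the toy integral ∫_{rcut^2}^{1} dq2 / q2
    (units M = 1), i.e. a toy correlation spectrum dC/dq2 = 1/q2 integrated
    over qT > rcut * M.  Evaluated in t = ln(q2), where the integrand is flat."""
    t_lo, t_hi = math.log(rcut * rcut), 0.0
    dt = (t_hi - t_lo) / n
    total = 0.0
    for i in range(n):
        q2 = math.exp(t_lo + (i + 0.5) * dt)
        total += (1.0 / q2) * q2 * dt  # integrand 1/q2 times Jacobian dq2 = q2 dt
    return total

vals = [toy_correlation_integral(r) for r in (1e-1, 1e-2, 1e-3)]
# each tenfold decrease of rcut adds the same constant amount, 2*ln(10) ≈ 4.61
```

A f.o. calculation of a singular harmonic behaves in this logarithmic way as the cut is lowered, which is exactly the pattern discussed above for the rcut dependence of integrated azimuthal correlations.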
Since the azimuthal correlations behave differently in different processes (as stated in Eq. (6)), we anticipate here the conditions that produce the divergent behaviour.

Azimuthal correlations can have f.o. divergences if the final-state system F can be produced by the partonic subprocess c1 c2 → F (see also Eq. (15) and accompanying comments) where

 •  at least one of the initial-state colliding partons c1 and c2 is a gluon ;    (9)

 •  at least one of the final-state particles with momenta p3 and p4 carries colour charge .    (10)

The conditions in (9) and (10) follow from a generalization of the results in Refs. [11, 12]. The divergences arise from the computation of the QCD radiative corrections to the partonic process c1 c2 → F (they do not arise in the computation of that partonic subprocess itself!). Specifically, the f.o. divergences originate from collinear-parton radiation [11] in the case of the condition (9) and from soft-parton radiation [12] in the case of the condition (10). We remark that one of the conditions in (9) and (10) is sufficient to produce the f.o. divergences. In particular, the condition (10) necessarily produces f.o. divergences, while the condition (9) produces divergences with some 'exceptions' (words of caution) in a few specific cases (see below). Having discussed the source of f.o. divergences in general terms, we can comment on some specific processes.

The production of heavy-quark pairs such as, for instance, top-quark pairs can occur through the partonic subprocesses of quark–antiquark annihilation and gluon fusion. One of these subprocesses has initial-state gluons and, moreover, both the heavy quark and the heavy antiquark carry QCD colour charge. Therefore, both conditions (9) and (10) are fulfilled and the corresponding azimuthal correlations have f.o. divergences (as stated in Eq. (6)).
The same reasoning and conclusions apply to dijet and 'V jet' production (for instance, in 'V jet' production the final-state jet originates at the lowest order from a parton that carries QCD colour charge). In the case of diphoton production the final-state photons do not carry QCD colour charge. However, diphoton production at the next-to-next-to-leading order (NNLO) can occur through a gluon-fusion subprocess (the interaction is mediated by a quark loop): therefore, due to the condition (9), the azimuthal correlations for diphoton production diverge starting from some subsequent perturbative order. Diboson production processes behave similarly to diphoton production, and they also lead to azimuthal correlations that have f.o. divergences.

We consider the DY process, in which the high-mass lepton pair originates from the decay of a vector boson. The final-state leptons have no QCD colour charge and, therefore, the condition (10) is not fulfilled. Partonic subprocesses with a single initial-state gluon are forbidden by colour conservation, and the gluon-fusion subprocess is forbidden by the spin 1 of the vector boson. The DY lepton pair can be produced through quark–antiquark annihilation, but this partonic subprocess has no initial-state gluons. Therefore, also the condition (9) is not fulfilled and the azimuthal correlations for the DY process have no f.o. divergences in the computation of QCD radiative corrections (as is well known and recalled in Eq. (6)). However, we remark that f.o. divergences do appear in the computation of QED (or, generally, electroweak) radiative corrections to azimuthal correlations for the DY process (this important point is discussed in more detail in Sect. 3).

We can comment on the production of a system of colourless particles, such as a photon pair or a vector-boson pair, in the specific case (or, better, within the approximation) in which those particles originate from the decay of a spin-0 boson, such as the Standard Model (SM) Higgs boson. In this case the gluon-fusion production mechanism (followed by the decay) is allowed.
However, due to the spin-0 nature of the boson, the decay dynamically decouples from the production mechanism: the angular distribution of the final-state particles with momenta p3 and p4 is dynamically flat (it has no dynamical dependence on the decay angles, since it can only depend on the Lorentz invariant (p3 + p4)², namely, on the invariant mass of the produced pair of particles). As a consequence, although the condition (9) is fulfilled, there are no azimuthal correlations in this specific case and, hence, there are no accompanying f.o. divergences. The absence of azimuthal correlations follows from the requirement that the two final-state particles are due to the decay of a spin-0 boson. As a matter of principle, at the conceptual level we note that, considering such final-state systems, the various subprocesses that contribute to the production mechanism (subprocesses with and without an intermediate spin-0 boson, and corresponding interferences) are not physically distinguishable (this is, strictly speaking, correct although the applied kinematical cuts can sizeably affect the relative size of the various contributing subprocesses). As we have previously discussed, the production of such systems without the intermediate decay of a spin-0 boson leads to azimuthal correlations with f.o. perturbative divergences. Therefore, if the condition (9) is fulfilled, we can conclude that sooner or later (in the computation of subsequent perturbative orders) QCD radiative corrections produce non-vanishing azimuthal correlations and ensuing f.o. divergences.

For the purpose of studying azimuthal correlations and presenting corresponding quantitative results, we find it convenient to introduce harmonic components of azimuthal-dependent cross sections. In particular, we define the n-th harmonic:

 dσn/dM² ≡ ∫₀^{2π} dφ cos(nφ) ∫₀^{+∞} dqT² [ dσ/(dM² dqT² dφ) ] Θ(qT − qcut) ,   qcut = rcut M ,    (11)

where n is a positive integer.
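As an illustration of the estimator in Eq. (11), the following Python sketch builds a toy event sample (weights, transverse momenta and azimuths are all invented, with a cos(2φ) modulation injected by hand) and evaluates the n-th harmonic as a cos(nφ)-weighted sum over the events that pass the qT > qcut cut:

```python
import math
import random

def nth_harmonic(events, n, qcut):
    """Eq. (11)-style estimator: sum of w * cos(n*phi) over the events
    that pass the qT > qcut cut; events is a list of (w, qT, phi)."""
    return sum(w * math.cos(n * phi) for w, qT, phi in events if qT > qcut)

# toy event sample with an injected cos(2*phi) modulation
rng = random.Random(7)
events = []
for _ in range(200_000):
    qT = rng.uniform(0.0, 50.0)
    # accept/reject a phi distributed as 1 + 0.5*cos(2*phi)
    while True:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, 1.5) < 1.0 + 0.5 * math.cos(2.0 * phi):
            break
    events.append((1.0, qT, phi))

h2 = nth_harmonic(events, 2, qcut=1.0)
h3 = nth_harmonic(events, 3, qcut=1.0)
# h2 picks up the injected modulation; h3 is compatible with zero
```

In a real f.o. computation the events and weights come from the Monte Carlo integration of the partonic cross sections; only the cos(nφ) weighting and the qT cut are as in Eq. (11).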
Note that, in view of our previous discussion on the origin of the divergences that are mentioned in Eq. (6), we have introduced a minimum value qcut of qT. We usually consider values of qcut that are proportional to the invariant mass M of the system F (qcut = rcut M, with rcut a fixed parameter), although fixed values of qcut can also be used. One can also consider n-th harmonics of more differential cross sections (e.g., differential with respect to y and cosθ as in Eq. (4)). The n-th harmonic in Eq. (11) is defined by using the weight function cos(nφ); n-th harmonics with respect to the sine function can also be considered by simply replacing cos(nφ) with sin(nφ) in the integrand of Eq. (11). In particular, the knowledge of the harmonics with respect to both cos(nφ) and sin(nφ) for all integer values of n is equivalent to the complete knowledge of the azimuthal-correlation cross section in Eq. (7). Note that the n-th harmonic in Eq. (11) is not a positive definite quantity. Although the physical azimuthal cross section is positive definite, the weight function cos(nφ) (or sin(nφ)) has no definite sign.

The n-th harmonic with weight cos(nφ) (or sin(nφ)) gives a direct measurement of the cos(nφ) (or sin(nφ)) asymmetry of the azimuthal-dependent cross section. The QCD computation of the n-th harmonic gives a finite result provided qcut is not vanishing. On the basis of Eq. (6), in the limit qcut → 0 some harmonics (asymmetries) can be divergent if computed at some f.o. in QCD perturbation theory. We cannot draw general conclusions on harmonics with odd values of n, but from the results in Refs. [11, 12] we know that harmonics with even values of n can have f.o. divergences. Specifically, if the condition (9) is fulfilled the harmonics with n = 2 and n = 4 have f.o. divergences [11], and if the condition (10) is fulfilled all the n-even (n = 2, 4, 6 and so forth) asymmetries have f.o. divergences [12]. In Sect. 3 we explicitly show quantitative results on f.o. computations of harmonics for the DY and top-quark pair production processes. In Sect. 5.1 we also show f.o.
results for 'V jet' production and we further comment on n-odd harmonics.

As mentioned in the Introduction and discussed in Sect. 5.2, the f.o. divergences of the azimuthal asymmetries can be cured by a proper all-order perturbative resummation of QCD radiative corrections. The resummed computation leads to azimuthal correlations and asymmetries that are finite, as is the case for the corresponding physical (and measurable) quantities.

## 3 Examples

The lepton angular distribution of the DY process is a much-studied subject both experimentally and theoretically. Recent measurements of the lepton angular distribution for Z boson production at LHC energies are presented in Refs. [3, 4], together with corresponding QCD predictions from Monte Carlo event generators and f.o. calculations. A very recent phenomenological study of the DY lepton angular distribution is performed in Ref. [5]. As for previous literature on the subject, we mainly refer the reader to the list of references in Refs. [3, 4, 5].

The QCD structure of the DY lepton angular distribution is well known, and it is a consequence of the spin-1 nature of the vector bosons involved in the DY mechanism. To be definite, we consider the DY process at the Born level with respect to electroweak (EW) interactions. Within this Born-level framework, the DY differential cross section is expressed in terms of a leptonic tensor (for the EW leptonic decay of the vector boson) and a hadronic tensor (for the hadroproduction of the vector boson). The QCD production dynamics is embodied in the hadronic tensor and the two tensors are coupled through the spin (helicity) correlations of the vector boson.
Owing to quantum interferences, we are dealing with 9 helicity components of the five-fold differential cross section in Eq. (4) (the hadronic tensor is a helicity polarization matrix, due to the 3 polarization states of the vector boson), which, therefore, can be expressed [15] as a linear combination of 9 spherical harmonics or, better, harmonic polynomials (the leptonic tensor has a polynomial dependence of rank 2 on the lepton momenta) of the lepton angles θ and φ (we specifically use the CS frame). For the illustrative purposes of our discussion of azimuthal correlations, it is sufficient to consider the four-fold differential cross section, which is obtained by integrating Eq. (4) over cosθ. We write

 2π dσ_DY/(dM² dy dqT² dφ) = dσ_DY/(dM² dy dqT²) + [dσ]3 cos φ + [dσ]2 cos(2φ) + [dσ]7 sin φ + [dσ]5 sin(2φ) ,    (12)

where dσ_DY/(dM² dy dqT²) is the DY differential cross section integrated over φ, and the shorthand notation [dσ]i denotes differential cross section components that only depend on M, y and qT. The azimuthal dependence of Eq. (12) is entirely given by the harmonics that are explicitly written in the right-hand side of Eq. (12). The subscript i in [dσ]i is defined according to a customary notation in the literature [16, 15, 3, 4], such that [dσ]i is proportional to the corresponding function Ai, where the functions Ai are known as 'DY angular coefficients' and the proportionality coefficients are fixed normalization factors. A point that we would like to remark is that the QCD dependence of the azimuthal correlations of the DY cross section involves only four harmonics. The five-fold cross section in Eq. (4) for the DY process is expressed in terms of 9 harmonic polynomials of θ and φ: however, their dependence on φ is given only in terms of the four harmonics that also appear in Eq. (12).

At LO in QCD perturbation theory, the DY cross section is due to the partonic subprocess of quark–antiquark annihilation into the vector boson, which is of O(αS⁰). At this order in αS, there is no azimuthal dependence.
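The structure of Eq. (12) can be illustrated numerically: with the orthogonality of the trigonometric weights over a 2π period, each azimuthal component can be projected out of a toy distribution (the coefficient values below are invented for illustration):

```python
import math

def project(dsigma, weight, nphi=20_000):
    """Return (1/pi) * ∫ dphi weight(phi) * dsigma(phi); for a decomposition
    of the Eq. (12) type this isolates one modulation coefficient."""
    dphi = 2.0 * math.pi / nphi
    return sum(weight((i + 0.5) * dphi) * dsigma((i + 0.5) * dphi)
               for i in range(nphi)) * dphi / math.pi

# toy azimuthal distribution with the four Eq. (12)-type harmonics
coeffs = {"cos1": 0.12, "cos2": -0.05, "sin1": 0.02, "sin2": 0.01}
dsigma = lambda p: (1.0 + coeffs["cos1"] * math.cos(p)
                    + coeffs["cos2"] * math.cos(2 * p)
                    + coeffs["sin1"] * math.sin(p)
                    + coeffs["sin2"] * math.sin(2 * p))

rec_cos2 = project(dsigma, lambda p: math.cos(2 * p))  # recovers coeffs["cos2"]
rec_sin1 = project(dsigma, lambda p: math.sin(p))      # recovers coeffs["sin1"]
```

Because ∫ cos(nφ) cos(mφ) dφ = π δ_nm over a full period (and likewise for the sines), each projection returns exactly one coefficient of the toy decomposition.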
Azimuthal correlations start to appear at the NLO (throughout the paper we use the labels LO, NLO, NNLO and so forth according to the perturbative order at which the corresponding result, or partonic subprocess, contributes to the total cross section of that specific process; in the case of azimuthal correlations, the NLO result can effectively represent a QCD prediction at some lower perturbative order) through the tree-level partonic processes at O(αS), namely quark–antiquark annihilation with the emission of an additional gluon and its crossing-related channels. At this order only the cross section components [dσ]3 and [dσ]2 in Eq. (12) are non-vanishing. The azimuthal harmonics [dσ]7 and [dσ]5 receive non-vanishing contributions only starting from NNLO processes. More precisely, these non-vanishing contributions [17] are entirely due to one-loop absorptive (and time-reversal odd) corrections to the O(αS) partonic subprocesses and their crossing-related channels. Azimuthal correlations up to NNLO were first computed in Ref. [15]. Azimuthal-correlation results at still higher orders can be obtained, in principle, by exploiting recent progress on the computation of the NNLO corrections to 'V jet' production [18, 19].

We recall that two of the cross section components in Eq. (12) are parity conserving, while the other two are parity violating; the parity-violating components vanish in the limit of vanishing axial coupling of the vector boson to either quarks or leptons. Therefore, if the vector boson is a virtual photon, only the parity-conserving harmonics contribute in Eq. (12), while in the case of the five-fold differential cross section of Eq. (4) there is an additional non-vanishing azimuthal contribution.

All the quantitative results that are presented in this paper refer to proton–proton collisions at the LHC. In the QCD calculations we use the MSTW2008 set [20] of PDFs at NLO.

To present some illustrative results for the DY process we consider on-shell Z boson production and its leptonic decay into an electron–positron pair.
Our QCD calculation is performed by using the numerical Monte Carlo program DYNNLO [21]. The EW parameters are specified in terms of the Fermi constant GF and of the masses of the Z and W bosons. We use equal values of the factorization (μF) and renormalization (μR) scales.

We specifically consider the numerical calculation of various cos(nφ) asymmetries (see Eq. (11)), where φ is the azimuthal angle of the electron in the CS frame. We evaluate the asymmetries at their non-trivial lowest order in f.o. perturbation theory and, therefore, we compute all the NLO tree-level partonic processes whose final state is 'lepton pair + parton'. Our numerical results are presented in Fig. 1-left.

The n-th harmonics are integrated over the lepton polar angle θ and over the rapidity and transverse momentum of the lepton pair, and they are computed as a function of rcut, where qcut = rcut M is the minimum value of qT. The results in Fig. 1 are obtained for very small values of rcut. As we have already recalled, the azimuthal asymmetries for the DY process have no f.o. divergences. Indeed, the results that are presented in Fig. 1-left show that the corresponding asymmetries are basically independent of rcut for very small values of rcut and finite (the results in Fig. 1-left practically coincide with the numerical evaluation at vanishing rcut). The result for one of the computed asymmetries (blue line) in Fig. 1-left is consistent with a vanishing value, in agreement with the general expression in Eq. (12) (the very small deviations that are observed in Fig. 1-left give an idea of the numerical uncertainties in our calculation of the various asymmetries). The result for a second asymmetry (black line) gives a non-vanishing value (it corresponds to the integral of one of the differential cross section components in Eq. (12)).
A non-vanishing value is obtained also for (red line), and it corresponds to the computation of the cross section component in Eq. (12). Note that the result reported in Fig. 1-left is rescaled by a factor of 0.1. Therefore, by inspection of Fig. 1-left we can see that the asymmetry is approximately a factor of 5 larger than the asymmetry.

In Fig. 1-left we also report the result for the harmonic (dashed line) of ‘ parton’ production. The harmonic corresponds to the total (azimuthally-integrated) cross section, and it receives contributions from both real and virtual emission subprocesses. Real and virtual terms are separately divergent and their divergences cancel in the total contribution. In Fig. 1-left we report the result of the harmonic computed exactly as specified in Eq. (11), namely, by applying a non-vanishing lower limit on . Therefore, our computation selects only the real-emission term due to the ‘ parton’ subprocesses. As is well known (see also Sect. 4.1), this real-emission term diverges in the limit and the divergent behaviour is proportional to : in Fig. 1-left we clearly see the increasing behaviour of the harmonic as . In the actual computation of the total cross section, the result in Fig. 1-left has to be combined with the NLO real-emission term at still smaller values of and with the LO and NLO virtual terms, thus leading to a finite total result. For the sake of completeness, we report the value (including the numerical error of the Monte Carlo integration) of the NLO total cross section that we obtain by using the same parameters as used in the results of Fig. 1-left: it is pb.
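The extraction of the n-th harmonics in Eq. (11), i.e. selecting events with qT above a cut and projecting onto cos(n*phi), can be sketched with a toy Monte Carlo. The event model below (flat qT spectrum, an invented cos(2*phi) modulation that grows at low qT, all numerical values) is purely an illustrative assumption and is not the DYNNLO calculation used in the text:

```python
import math
import random

def toy_events(n_events, seed=0):
    """Generate toy (qT, phi, weight) events with a cos(2*phi) modulation
    whose amplitude grows at low qT -- a crude stand-in for a parton-level
    Monte Carlo sample (purely illustrative)."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_events):
        qt = rng.uniform(0.01, 50.0)        # GeV, flat spectrum for simplicity
        a2 = 2.5 / (5.0 + qt)               # invented modulation amplitude
        # accept/reject a phi with density proportional to 1 + a2*cos(2*phi)
        while True:
            phi = rng.uniform(0.0, 2.0 * math.pi)
            if rng.uniform(0.0, 1.0 + abs(a2)) <= 1.0 + a2 * math.cos(2 * phi):
                break
        events.append((qt, phi, 1.0))       # unit weights
    return events

def harmonic(events, n, qt_cut):
    """Eq.-(11)-like projection: sum of w*cos(n*phi) over events with qT > qt_cut."""
    return sum(w * math.cos(n * phi) for qt, phi, w in events if qt > qt_cut)

events = toy_events(20000)
a2_signal = harmonic(events, 2, qt_cut=1.0)  # survives the phi integration
a1_noise = harmonic(events, 1, qt_cut=1.0)   # averages to ~0 for this toy density
```

With these invented inputs the n = 2 projection returns a clearly non-vanishing value while the n = 1 projection is compatible with zero, mimicking the pattern of vanishing and non-vanishing asymmetries discussed above.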
We note that is roughly 100 times larger than the value of the asymmetry in Fig. 1-left.

The DY process is quite ‘special’ with respect to azimuthal correlations: the azimuthal correlations are finite order-by-order in QCD perturbation theory and their general QCD structure involves only 4 azimuthal harmonics (as shown in Eq. (12)). For most of the other hard-scattering processes (with few possible exceptions, such as production, as discussed in Sect. 2) azimuthal correlations behave differently: usually they have an azimuthal dependence that involves an infinite set of harmonics (all values of ) and in many cases f.o. QCD computations lead to divergences.

We consider the production of heavy-quark pairs and we treat the heavy quark and antiquark as on-shell particles. Our discussion equally applies to any heavy quark, but we specifically consider top quarks since the on-shell treatment is more suitable in this case. In Fig. 1-right we present results for production that are obtained analogously to the DY results presented in Fig. 1-left.

Our QCD computation of production is performed by using the numerical Monte Carlo program of Ref. [22], which includes QCD radiative corrections at NLO (it uses the NLO scattering amplitudes of the MCFM program [23]) and part of the NNLO contributions. We set and we use the value GeV for the mass of the top quark. We consider the azimuthal angle of the top quark in the CS frame, and in Fig. 1-right we present the numerical results of various asymmetries (see Eq. (11)) after their integration over the invariant mass of the pair. As in the case of the results in Fig. 1-left, the results are integrated over the polar angle of the top quark and over the rapidity and transverse momentum () of the pair. The azimuthal asymmetries are evaluated (as a function of ) at their non-trivial lowest order (i.e., ) in f.o.
perturbation theory and, therefore, we compute all the NLO tree-level partonic processes whose final state is ‘ parton’.

Before commenting on the results in Fig. 1-right, we recall some of the results of Ref. [12], where azimuthal correlations for production are computed in analytic form in the small- limit. At the result of Ref. [12] shows that the -dependent azimuthal-correlation cross section behaves proportionally to in the limit and, hence, it is not integrable over down to the region where . The behaviour is proportional to a non-polynomial function of that, therefore, leads to divergent harmonics for even values of . The spectra of harmonics with odd values of have instead a less singular behaviour at and they are integrable over in the limit .

The numerical results in Fig. 1-right are consistent with the convergent or divergent behaviour predicted in Ref. [12]. The harmonic with (black line) is non-vanishing and basically independent of for very small values of . The sign of the harmonic is negative (the harmonic of would be positive, analogously to the corresponding harmonic of the electron in the DY case of Fig. 1-left), and its absolute size (note that it is rescaled by a factor of 0.1 in Fig. 1-right) is roughly a factor of two smaller than the size of the harmonic for production (Fig. 1-left). The harmonics with (red line), (blue line) and (magenta line) in Fig. 1-right have instead an increasing (in absolute value) size for small and decreasing values of : this behaviour is consistent with a dependence, as expected from the analytical results in Ref. [12]. The results for and 6 in Fig. 1-right have no straightforward quantitative implications for physical azimuthal asymmetries, since they refer to small values of (and they eventually diverge in the limit ). Nonetheless, we observe that the absolute magnitude of the -even asymmetries decreases as increases.
As in the case of Fig. 1-left for the DY process, in Fig. 1-right we also present the result of the harmonic (dashed line) for the real-emission process ‘ parton’. Analogously to the DY process, the ‘ parton’ contribution to the harmonic diverges in the limit , and its dominant behaviour at small is proportional to . At small values of the shape of the -dependence of the result is thus steeper than that of the results for the harmonics with and 6. After combining real and virtual contributions, the harmonic gives the total cross section. The value (including the numerical error of the Monte Carlo integration) of the NLO total cross section that we find is pb. We note that is roughly 200 times larger than the absolute value of the asymmetry () in Fig. 1-right.

As we briefly anticipated in Sect. 2, we expect (and predict) the appearance of f.o. perturbative divergences in the computation of QED radiative corrections to azimuthal correlations for the DY process. To explain this point explicitly, we consider some analogies between the DY and production processes. We have illustrated the divergences of the -even asymmetries for production. Part of these divergences arise from the computation of the partonic subprocess and, specifically, they arise from the kinematical region where the radiated final-state gluon is soft and at wide angle with respect to the direction of the initial-state and . From this kinematical region the spectrum receives an azimuthal-correlation contribution that behaves as and that depends on the QCD colour charges of the final-state and . Specifically, this contribution is proportional to the Fourier transform of the function in Eq. (36) of Ref. [12]. The first-order QED radiative corrections to the DY process involve analogous partonic processes, such as , with a soft and wide-angle photon that is radiated in the final state.
These photon radiative corrections produce f.o. QED divergences in the azimuthal asymmetries for the DY process. The DY analogue of the function in Eq. (36) of Ref. [12] is simply obtained by replacing the QCD colour charges of and with the QED electric charges of the DY final-state leptons (by simple inspection of Eq. (36) in Ref. [12], such a replacement leads to a non-vanishing result).

More generally, we remark that the expression in Eq. (12) for the DY process is valid only at the Born level with respect to the EW interactions (although the expression is valid at arbitrary orders in QCD perturbation theory). Including QED radiative corrections, we conclude that the azimuthal-correlation cross section for the DY process receives contributions from and harmonics with arbitrary values of (not only as in Eq. (12)). Moreover, the lowest-order perturbative QED computation of the contributions with -even already leads to a singular (not integrable) behaviour in the limit and to ensuing divergent azimuthal asymmetries upon integration over . The divergences are switched on by QED radiative corrections and can receive additional contributions from powers of in the context of mixed QED-QCD radiative corrections. Obviously, similar divergences arise also in a pure QED context such as, for instance, in the QED computation of the process . Eventually, all these divergences originate from the QED analogue of the condition (10): the QED divergences arise if ‘at least one of the final-state particles with momenta and carries non-vanishing electric charge’.

We note that lepton pairs with high invariant mass can also be produced through the partonic subprocess , in which the initial-state colliding photons arise from the photon PDF of the colliding hadrons.
Strictly speaking, lepton pairs that are produced in this way are not physically distinguishable (the reasoning is somewhat analogous to that in Sect. 2 about diboson systems, which can be produced with or without an intermediate Higgs boson: the high-mass lepton pair can be produced with or without an intermediate vector boson) from those that are produced by the DY mechanism of quark–antiquark annihilation. Since the ratio between the photon PDF and the quark (or antiquark) PDF is formally of ( is the fine structure constant), the subprocess can be regarded as a QED radiative correction in the context of the DY process. We remark that radiative corrections to also produce divergences in the computation of the lepton azimuthal correlations. These divergences (see Sect. 5.1) originate from the abelian (photonic) analogue of the condition (9): the divergences arise if ‘at least one of the initial-state colliding partons and is a photon’.

To cure the f.o. perturbative divergences of azimuthal correlations one may advocate non-perturbative strong-interaction dynamics and related non-perturbative QCD effects, which can be sizeable in the small- region. However, in the case of -integrated azimuthal correlations (see Eq. (7)) the non-perturbative QCD dynamics should cancel divergent terms proportional to some powers of , and this would imply that non-perturbative QCD effects scale logarithmically with (i.e., these effects would not be suppressed by some power of in the hard-scattering regime ), thus spoiling not only the finiteness but also the infrared safety of the azimuthal correlations. Moreover, such a non-perturbative cure of the divergent behaviour cannot be effective in the case of QED radiative corrections and, in particular, it cannot be conceptually at work in a pure QED context, since QED is well behaved in the infrared (small-) region. In the following Sections we show that the problem of f.o.
divergences in azimuthal correlations has a satisfactory solution entirely within the context of perturbation theory. Namely, the resummation of perturbative corrections to all orders leads to ( integrated) azimuthal asymmetries that are finite and computable. Such a solution is effective in both QCD and QED contexts, although in the QCD case this does not imply that the non-perturbative effects have a negligible quantitative role in the small- region.

## 4 The small-qT region

The f.o. divergences of azimuthal correlations arise from the small- region and, eventually, from the behaviour at the phase space point where . From a physical viewpoint, however, a single (isolated) phase space point is harmless and, in particular, non-vanishing azimuthal correlations (which are defined with respect to the direction of ) cannot be measured if . These physical requirements imply that the dependence of azimuthal-correlation observables has to be sufficiently smooth in the small- region and, in particular, in the limit .

We now specify these smoothness requirements more formally. Since physical measurements cannot be performed at a single phase space point, we split the integration region into the two intervals (the lower- bin) and (the higher- region). We then consider a generic azimuthal-correlation observable (e.g., multidifferential cross sections as in Eqs. (3) and (4), or -th harmonics as in Eq. (11)) and we denote by and the ‘cross sections’ that are obtained by integration of the observable over the lower- bin and the higher- region, respectively. The smoothness requirements are

 \lim_{q_{\mathrm{cut}} \to 0} \sigma_l(q_{\mathrm{cut}}; \phi) = 0 ,  (13)

 \lim_{q_{\mathrm{cut}} \to 0} \sigma_h(q_{\mathrm{cut}}; \phi) = \sigma_{\mathrm{tot}}(\phi) ,  (14)

where denotes the ‘total’ cross section (i.e., the result obtained by integrating the observable over the entire region of ). These requirements specify azimuthal correlations that are physically well behaved.
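The two requirements can be checked numerically on a toy observable. In the sketch below the qT spectrum and the damped cos(2*phi) modulation are invented purely for illustration; the point is only that the lower-bin integral vanishes and the higher-region integral approaches the total as the bin boundary is lowered:

```python
import math

def d_sigma(qt, phi):
    """Toy azimuthal-correlation observable: a smooth qT spectrum times a
    cos(2*phi) modulation that is damped as qT -> 0, so the observable is
    physically well behaved in the sense of Eqs. (13)-(14). All functional
    forms are invented for illustration."""
    spectrum = qt * math.exp(-qt)                        # finite, vanishes at qT = 0
    modulation = math.cos(2.0 * phi) * qt / (1.0 + qt)   # damped as qT -> 0
    return spectrum * modulation

def integrate(qt_lo, qt_hi, phi, steps=4000):
    """Composite midpoint rule in qT at fixed phi."""
    h = (qt_hi - qt_lo) / steps
    return sum(d_sigma(qt_lo + (i + 0.5) * h, phi) * h for i in range(steps))

phi = 0.3
qcut = 0.01
sigma_tot = integrate(0.0, 40.0, phi)   # 'total' cross section at this phi
sigma_l = integrate(0.0, qcut, phi)     # lower-qT bin: -> 0 as qcut -> 0 (Eq. (13))
sigma_h = integrate(qcut, 40.0, phi)    # higher-qT region: -> sigma_tot (Eq. (14))
```

Lowering qcut further shrinks sigma_l towards zero and drives sigma_h towards sigma_tot, which is exactly the smooth behaviour that a fixed-order computation of a divergent azimuthal correlation fails to reproduce.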
In particular, Eq. (13) implies that non-vanishing azimuthal correlations cannot be physically observed if , and Eq. (14) implies that the total azimuthal correlation is physically observable. Note, however, that Eqs. (13) and (14) do not specify how exactly behaves in the limit . Nonetheless, Eqs. (13) and (14) imply that the sole phase space point at (where azimuthal-correlation angles are not defined) has no relevant physical role (the lower- bin has an analogous harmless role if is sufficiently small).

The f.o. divergences of azimuthal-correlation observables are due to the fact that does not have a (‘sufficiently’) smooth dependence on at small values of if this quantity is computed order-by-order in perturbation theory. In particular, diverges in the limit and, consequently, is not computable, whereas is definitely not computable (divergent) even if has a finite value.

As we have observed in Sect. 3, the customary azimuthally-integrated (or azimuthally-averaged) cross section (i.e., the harmonic with in Eq. (11) or Fig. 1) also does not fulfil the requirements in Eqs. (13) and (14) (both and can separately be divergent in the limit ). Therefore, the f.o. calculation of the azimuthally-integrated cross section can have difficulties (as is well known) in describing the detailed shape in the small- region. In contrast to azimuthal-correlation observables, however, both and are computable for finite values of . In particular, the total cross section is always finite and computable (for arbitrary non-vanishing values of ) within f.o. perturbation theory.

In the following sections we discuss the small- behaviour of cross sections in f.o.
perturbation theory and after all-order resummation.

### 4.1 Perturbative expansion with azimuthal-correlation terms

Among all the hadroproduction processes of the type in Eq. (1), we consider those that at the LO in perturbative QCD are produced by the following partonic subprocesses:

 c_1 + c_2 \to F(\{p_3, p_4\}) ,  (15)

where and are the initial-state massless colliding partons of the two hadrons and . In the case in which one or both particles (with momenta ) in is a jet, the notation in Eq. (15) means that the jet is replaced by a corresponding QCD parton. Note that the LO process in Eq. (15) is an ‘elastic’ production process, in the sense that is not accompanied by any additional final-state radiation (as usual, the final-state collinear remnants of the colliding hadrons are not denoted in the partonic process). This specification is not trivial, since it excludes some processes from our ensuing considerations. Some processes are excluded because of quantum-number conservation. For instance, if includes two top quarks (not a pair) no LO process as in Eq. (15) is permitted by flavour conservation. Some other processes are excluded because of their customary perturbative QCD treatment. For instance, if contains a hadron, its QCD treatment requires the introduction of a corresponding fragmentation function and, consequently, is necessarily produced with accompanying fragmentation products in the final state. Therefore, in the following we exclude the cases in which includes one or two hadrons (specifically, our ensuing discussion does not apply to azimuthal correlations of systems that contain hadrons with momenta or , whereas it applies to infrared and collinear safe jets, which physically contain hadrons).
All the processes that are explicitly listed in Eq. (6) (including the DY process) have LO partonic processes of the type in Eq. (15).

The small- behaviour of azimuthal correlations for the processes that we have just specified is partly related to the behaviour of differential cross sections that are integrated over the entire azimuthal-angle region (or, equivalently, that are azimuthally averaged). The behaviour in the azimuthally-integrated (averaged) case is well known (starting from some of the ‘classical’ studies for the DY process [24, 25, 26, 27, 28]), and in our presentation we contrast it with the cases of divergent azimuthal correlations. To highlight the differences between the two cases in general terms, we use a shorthand notation, which can be applied to final-state systems with two or more particles (a more refined kinematical notation can be found in Refs. [11, 12]). A generic multidifferential cross section (e.g., Eq. (3), Eq. (4) or related observables) with dependence on and on the azimuthal-correlation angle is simply denoted by , where is the transverse momentum of . Additional kinematical variables that are possibly not integrated (such as rapidities and polar angles) are not explicitly denoted. The dependence on a generic (as discussed in Sect. 2) azimuthal-correlation angle with respect to is denoted through the dependence on the direction of the two-dimensional vector . In practice, such dependence will occur through functions of scalar quantities such as, for instance, . In particular, independent of means absence of azimuthal correlations, and the azimuthally-integrated (averaged) cross section corresponds to the azimuthal integration (average) with respect to the direction of .

Owing to transverse-momentum conservation in the process of Eq. (15), at the LO level is produced with vanishing .
Its corresponding LO cross section is proportional to ,

 \frac{d\sigma_{\mathrm{LO}}}{dM^2 \, d^2{\bf q_T}} \propto \delta^{(2)}({\bf q_T}) ,  (16)

and the proportionality factor is simply the LO total ( integrated) cross section. In the presence of the LO sharp behaviour of Eq. (16), NLO QCD radiative corrections are dynamically enhanced in the small- region. The dynamical enhancement is due to QCD radiation of low transverse-momentum partons (soft gluons or QCD partons that are collinear to the initial-state colliding partons), and it has the following general form:

 \frac{d\sigma_{\mathrm{NLO}}}{dM^2 \, d^2{\bf q_T}} \propto \delta^{(2)}({\bf q_T}) + \alpha_S \left\{ \left( a_2 \left[ \frac{1}{q_T^2} \ln\frac{M^2}{q_T^2} \right]_+ + a_1 \left[ \frac{1}{q_T^2} \right]_+ + a_0 \, \delta^{(2)}({\bf q_T}) + \frac{a_{\mathrm{corr}}(\hat{\bf q}_T)}{q_T^2} \right) + \dots \right\} ,  (17)

where the dots on the right-hand side denote contributions that are less enhanced (‘non-singular’) in the limit . The ‘coefficients’ () and in Eq. (17) depend on the process and they are independent of (we mean they are independent of the magnitude of the vector ). The important point (see below) is that does depend on the direction of , whereas () do not depend on it. All these coefficients () can depend on the other kinematical variables (e.g., rapidities and polar angles). The QCD running coupling in Eq. (17) and in all the subsequent formulae is evaluated at a scale of the order of (e.g., we can simply assume ).

The terms that we have explicitly written on the right-hand side of Eq. (17) scale as (modulo logs) in the limit (i.e., they scale as under the replacement ). The non-singular terms (which are simply denoted by the dots) represent subdominant (‘power-correction’) contributions in the limit . These are, for instance, terms of the type or, more generally, terms that are relatively suppressed by some powers (modulo logs) of . Independently of their specific form and of their azimuthal dependence, these non-singular terms have an integrable and smooth behaviour in the limit .
Because of these features, the non-singular terms and the ensuing azimuthal-correlation effects are well behaved in the small- region. Note, however, that the non-singular terms produce azimuthal effects whose actual size (and behaviour) depends on the specific definition of the azimuthal-correlation angle (see Eq. (8) and related comments in Sect. 2).

The expression in Eq. (17) is the master formula that we use for our discussion of the small- behaviour of azimuthal correlations at NLO and higher orders. Considering the ‘singular’ terms (the NLO terms that scale as , modulo logs, in the limit ), the azimuthal dependence is entirely embodied in . More precisely, the azimuthal-correlation dependence has been separated (as in Eq. (7)) and embodied in , which therefore gives a vanishing result after azimuthal integration. Using the notation in Eq. (7) we have

 \langle a_{\mathrm{corr}}(\hat{\bf q}_T) \rangle_{\mathrm{av.}} = 0 .  (18)

This azimuthal-correlation term is absent for the DY process (i.e., in this case), and in all the other processes it has no effect if one considers -dependent but azimuthally-integrated observables. The presence of such a term produces a behaviour that definitely differs from the behaviour studied in the ‘classical’ literature [24, 25, 26, 27, 28] on the small- region and on QCD transverse-momentum resummation.

The other (‘classical’) terms in Eq. (17) are proportional to the coefficients (), and the symbol denotes the (‘singular’) ‘plus’-distribution, which is defined by its action onto any smooth function of .
The definition is

 \int_0^{+\infty} d^2{\bf q_T} \, f({\bf q_T}) \, [g({\bf q_T})]_+ \equiv \int_0^{+\infty} d^2{\bf q_T} \, \left[ f({\bf q_T}) - f(0) \, \theta(\mu_0 - q_T) \right] g({\bf q_T}) ,  (19)

where () in Eq. (17), and is a scale of the order of (at the formal level, varying the value of changes the plus-distributions and the coefficient , so that the right-hand side of Eq. (17) is unchanged). The plus-distribution in Eq. (19) is equivalently defined through a limit procedure as follows:

 [g({\bf q_T})]_+ = \lim_{q_0 \to 0} \left[ \theta(q_T - q_0) \, g({\bf q_T}) - \delta^{(2)}({\bf q_T}) \int d^2{\bf k_T} \, g({\bf k_T}) \, \theta(\mu_0 - k_T) \, \theta(k_T - q_0) \right] .  (20)

Note that the point at is at the border of the phase space but, at the strictly formal level, it is inside the phase space (at variance with the case of azimuthal correlations, in which it is formally outside the phase space). This is essential to make sense of the LO differential cross section in Eq. (16) and of the corresponding LO total cross section. This is also essential (see Eqs. (19) and (20)) to transform the non-integrable behaviour of into an integrable plus-distribution over the small- region. The plus-distribution involves two terms: a term that is simply proportional to the function and a contact term (the term proportional to in Eq. (19) or, equivalently, the term proportional to in Eq. (20)). At the conceptual level, these two terms arise from a combination of real and virtual radiative corrections, which are separately infrared divergent.
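The regularizing effect of the subtraction in Eq. (19) can be mimicked with a toy one-dimensional integral. Here the angular measure d^2 qT = qT dqT dphi has already reduced the 1/qT^2 kernel to 1/qT, and the test function f and the scale mu_0 are invented for illustration:

```python
import math

MU0 = 1.0  # counter-term scale mu_0 (arbitrary toy value)

def f(qt):
    """Smooth test function, standing in for the smooth f(qT) of Eq. (19)."""
    return math.exp(-qt * qt)

def integral(q0, subtract, qt_max=10.0, steps=100000):
    """Midpoint integral of f(qT)/qT from q0 to qt_max. With subtract=True
    the plus-prescription counter-term f(0)*theta(MU0 - qT) is removed,
    mimicking the right-hand side of Eq. (19); with subtract=False one gets
    the bare real-emission-like integral, divergent as q0 -> 0."""
    h = (qt_max - q0) / steps
    total = 0.0
    for i in range(steps):
        qt = q0 + (i + 0.5) * h
        sub = f(0.0) if (subtract and qt < MU0) else 0.0
        total += (f(qt) - sub) / qt * h
    return total

# Subtracted integrals approach a finite limit as the cutoff q0 -> 0 ...
plus_vals = [integral(q0, subtract=True) for q0 in (1e-2, 1e-3, 1e-4)]
# ... while the unsubtracted ones grow like ln(1/q0).
bare_vals = [integral(q0, subtract=False) for q0 in (1e-2, 1e-3)]
```

Pushing the cutoff down by successive decades changes the subtracted result by a negligible amount, whereas each decade adds roughly ln(10) to the unsubtracted integral, which is the logarithmic real-emission divergence that the virtual contact term cancels.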
Real (soft and collinear) emission corrections to the LO process in Eq. (15) produce the non-integrable terms , while the corresponding virtual radiative corrections lead to the contact terms that eventually regularize the divergence at .

The azimuthal-correlation term in Eq. (17) is instead proportional to the divergent (not integrable) function
This out of date ordinance has been challenged in other cities including Dallas and Austin, where the ordinance was overturned. Not without rules and provisions of course but the law was changed so that restaurants and their patrons could decide…the free market, rather than the City government. Of course not everyone wants to eat with a dog next to them, but not every restaurant would allow pups on the patio either. More to come on this issue for sure! I don't understand how the city has this ordinance yet there are dog-friendly restaurants like Barnaby's and Becks Prime. So are they breaking the law? I realllllly don't like or understand why this is happening! There needs to be a place where it's allowed and it's the human's choice whether they want to eat there or not because the pets are welcome! The city cries about not having funds for such programs…Then they shut it down?? How is it that some big headed person who wants to show power can do this? When people like you and I go to support these kind of very important functions to help this cause. I am sure that whoever decided to shut this function down is donating money…right???
"""pylint should detect yield and return mix inside generators"""
__revision__ = None

def somegen():
    """this is a bad generator"""
    if True:
        return 1
    else:
        yield 2

def moregen():
    """this is another bad generator"""
    if True:
        yield 1
    else:
        return 2
Scoliacma rubrata is a species of moth that was described by Tepp. in 1882. Scoliacma rubrata belongs to the genus Scoliacma and the tiger moth family. No subspecies are listed.

Sources

Tiger moths
rubrata
The district of Kheda (Gujarati: ) is one of 26 districts of the state of Gujarat in India. The city of Nadiad is the capital of the district. The last census, in 2011, recorded a total population of 2,299,885 people.

History

From pre-Christian times until 1298 the area, like the whole region, was ruled by various Buddhist and Hindu rulers. The first civilization was the Indus Valley culture. The first state known by name was the Maurya Empire; the last non-Muslim dynasty were the Solanki. After centuries of military conflict with Muslim conquerors and rulers in northern India, the area was occupied by Muslim soldiers in 1298. Thereafter, until 1753, various Muslim dynasties ruled (the Delhi Sultanate, the Gujarat Sultanate and the Mughals). In 1753 the area became part of the Hindu Maratha Empire. Between 1803 and 1817 all regions of today's district came under British rule. The district became part of the Northern Division of the British administrative region Bombay Presidency. For the short period between 1830 and 1833 the area belonged to the district of Ahmedabad. With Indian independence in 1947 and the reorganization of the country, it became part of the new state of Bombay in 1950. In 1960 this Indian state was divided and the area went to the newly created state of Gujarat.

Population

Population development

As everywhere in India, the population of Kheda district has been growing strongly for decades. The increase in the years 2001–2011 was almost 14 percent (13.62%). In these ten years the population grew by around 275,000 people. The exact figures are shown in the following table:

Major towns

The most populous town in the district is Nadiad, with almost 220,000 inhabitants. Other significant towns with populations of more than 20,000 are Kapadvanj, Chaklasi, Balasinor, Mehmedabad, Kheda, Dakor and Kathlal. The urban population accounts for only 22.77 percent of the total population.

External links

Map of Kheda district
Kheda district: economy, nature and sights
Brief overview of the district
2001 census results for Kheda
Statistical handbook of Kheda district

References

District in Gujarat
High Speed Two (HS2) is the new high speed railway being built as the backbone of the national rail network. It will link London and Birmingham to Manchester, the East Midlands and Leeds. It is the first main line railway to be built in the UK since 1899. HS2 grew rapidly throughout 2015, as it progressed through the House of Commons legislative process and began to prepare for construction. As part of a broader business transformation programme, HS2 needed to deliver information systems capabilities that would maximise organisational effectiveness. HS2 was looking to achieve a step-change in its ability to design, deliver and operate high-quality IT solutions that were better aligned with the needs of the organisation and its users. Rainmaker led a ten-week user research project to investigate how HS2 used technology and to provide the data to support decisions around organisational culture, ways of working and the design and delivery of services. A design research methodology was employed which focused on user needs and on building empathy between users (HS2 staff and partners) and designers (those with responsibility for creating and maintaining the IT systems). Following a series of interviews and cultural probes with over 100 users from across the organisation, HS2 gained a wide understanding of high-level user needs, pain points and personas. The engagements helped HS2 understand how and why people interacted with technology, and they informed strategic decisions about technology design and procurement. Rainmaker committed to skills transfer, delivering the whole project in close collaboration with the HS2 team and producing a toolkit for conducting effective user research in future. The outcomes: a detailed understanding of user needs; a user research toolkit, published on GitHub, for the benefit of the wider user research community; and technology at the heart of the organisation, encapsulated in a new digital strategy which embraced a new culture based on improved ways of working.
/*global angular */
(function () {
    'use strict';

    function DogPageController($scope, $stateParams, bbData, bbWindow, dogId) {
        var self = this;

        self.tiles = [
            {
                id: 'DogCurrentHomeTile',
                view_name: 'currenthome',
                collapsed: false,
                collapsed_small: false
            },
            {
                id: 'DogPreviousHomesTile',
                view_name: 'previoushomes',
                collapsed: false,
                collapsed_small: false
            },
            {
                id: 'DogNotesTile',
                view_name: 'notes',
                collapsed: false,
                collapsed_small: false
            },
            {
                id: 'DogBehaviorTrainingTile',
                view_name: 'behaviortraining',
                collapsed: false,
                collapsed_small: false
            }
        ];

        self.layout = {
            one_column_layout: [
                'DogCurrentHomeTile',
                'DogPreviousHomesTile',
                'DogNotesTile',
                'DogBehaviorTrainingTile'
            ],
            two_column_layout: [
                [
                    'DogCurrentHomeTile',
                    'DogPreviousHomesTile'
                ],
                [
                    'DogNotesTile',
                    'DogBehaviorTrainingTile'
                ]
            ]
        };

        bbData.load({
            data: 'api/dogs/' + encodeURIComponent(dogId)
        }).then(function (result) {
            self.dog = result.data;
            bbWindow.setWindowTitle(self.dog.name);
        });
    }

    DogPageController.$inject = [
        '$scope',
        '$stateParams',
        'bbData',
        'bbWindow',
        'dogId'
    ];

    angular.module('barkbaud')
        .controller('DogPageController', DogPageController);
}());
Next message: Randy Leedy: "Re: JR & MO vs. the World <grin>"
Previous message: Jonathan Robie: "Re: aspect of present tense"
Maybe in reply to: KULIKOVSKY, Andrew: "aspect of present tense"
Next in thread: Don Wilkins: "Re: aspect of present tense"

> Just had a sudden thought about verbal aspect and the present indicative.
> willing) - although I'd like to be a Greek Prof one day!
> "I was" and "I will be".
> Remember Einstein - time is relative.

going to the temple."? What would that be in Koine, "ERXOMAI hIERW."? Does this case also imply the events of the recent past and near future?
\section{Introduction} Reduced-order modeling has been a cornerstone of modern computational methods. The effectiveness of the reduced-order models relies both on their design, so they can capture the complexity of the underlying process, and also on the information they rely on. This fundamental information can have the form of i) governing equations, which typically carry assumptions or simplifications of their own, and ii) data, which is a more reliable but expensive source. The present work involves the development of criteria for the most effective selection of data---or associated experiments to generate this data---in order to perform data-driven reduced-order modeling. The literature related to data-driven reduced-order modeling is vast and spans a great number of engineering and scientific fields ranging from fluid mechanics \cite{brunton2019a, Karniadakis2021, Ghattas2021,Fernex2020, Ma2021} to structural mechanics \cite{Kerschen2006, Jain2021, Moore2019, Bertalan2019}. In the majority of these works the assumption is plentiful data. While for many applications this is indeed the case, there are several important problems where plentiful data is not available, either because the associated physical or numerical experiments are too expensive or because of the nature of the problem, e.g., extreme events that occur rarely \cite{Sapsis2021}. The scientific field that aims to tackle this issue is active learning \cite{Sacks1989, Chaloner95, Gramacy2009}. A critical issue in active learning is the choice of acquisition function, i.e., the criterion used to select which sample to query next in an optimal manner. Acquisition functions come in various shapes and forms \cite{Chaloner95, Shahriari2016}, but many popular criteria suffer from severe limitations, including high computational cost, intractability in high dimensions, and inability to discriminate between active and idle input variables \cite{sapsis20}. 
Recently, a new class of active learning criteria was introduced, designed to take into account the effect of the output on the selection of samples \cite{sapsis20, Blanchard_SIAM21}. This new class of output-weighted criteria has led to significantly improved performance in a variety of problems involving uncertainty quantification, Bayesian optimization \cite{blanchard_jcp21, Blanc_turbu}, and decision making \cite{Blanchard2021a, Yang2021}. However, there is no sound theoretical understanding of its favorable properties. In this work we rigorously show the optimal properties of output-weighted acquisition functions when it comes to the problem of selecting samples for data-driven reduced-order modeling in the Bayesian context. Specifically, we employ Gaussian process regression (GPR) as the building block to develop Bayesian surrogates or stochastic reduced-order models. Within the GPR framework, we derive asymptotically optimal acquisition functions, where optimality is defined in the sense of fastest convergence of the probability density function (pdf) describing the quantity of interest. We demonstrate the derived optimal acquisition functions in a mechanical oscillator subjected to high-dimensional stochastic forcing resulting in non-Gaussian heavy tails, as well as the reduced-order modeling of a beam under axial and transverse stochastic loads that result in buckling and bending. \section{Problem setup } Our aim is to build a data-driven reduced-order model that will capture the behavior of an output quantity $y \in \mathbb{R}$ (assumed scalar for simplicity) with respect to an input variable $x \in \mathbb{R}^n$. The input can represent, for example, initial conditions or parameters governing the evolution of a dynamical system, while the output is any quantity of interest that depends on the input variable. We also assume that the input variable has a prescribed probability distribution function, $p_x({x})$. 
We are given a set of input datapoints $X={\left\{x_i\right\}}_{i=1}^N$ and corresponding output values \begin{displaymath} Y=[y(x_1),...,y(x_N)]^{\mathsf{T}}, \end{displaymath} and the objective is to identify a criterion that allows us to select the next input point $x_{N+1}\triangleq h$ that will supplement---together with the corresponding output $y(x_{N+1})$---the dataset $\mathcal{D}=\{X,Y\}$. There is a plethora of criteria in the active-learning field (see, e.g., \cite{Chaloner95} for a review), the majority of which are designed to select the next input $h$ that either minimizes the uncertainty of the reduced-order model, i.e., the posterior variance $\sigma^2_y(x|\mathcal{D})$, or maximizes the information content between input and output variables. However, neither of these methods gives appropriate attention to increasing the accuracy of the resulting model in the regions of the input space where it is needed the most: inputs that result in large deviations of the output variable $y$ from its expected value. To accomplish this goal we will focus on accelerating the convergence of the model output pdf, $p_y(s|\mathcal{D})$. In the context of reduced-order modeling this is the appropriate quantity to consider as it encodes the probabilistic information about the large deviations of $y$, while the input variables have been marginalized. Therefore, while criteria based on the posterior variance aim to minimize the error of the model for all or certain input variables, a criterion based on the output pdf explicitly targets regions of the input space that contribute to the output pdf. Moreover, to emphasize regions associated with large deviations of the output variable $y$, we will define convergence in terms of the \textit{logarithm} of $p_y(s|\mathcal{D})$, using the following function to measure the discrepancy between two pdfs $p_1$ and $p_2$: \begin{equation} \mathbb{D}(p_1,p_2)=\int_{S_y}\left|\log p_1({s})-\log p_2({s})\right|\mathrm{d}{s}. 
\end{equation} In the above, $S_y$ is a finite domain of $y$ (the output variable), over which we aim to build the reduced-order model. Note that this is different from the Kullback--Leibler divergence, which does not give the same emphasis to low probability events (associated with large deviations of $y$). Moreover, it can be easily shown that the above function defines a metric. In what follows, we first provide a quick review of GPR and its basic properties, and subsequently formulate the optimality condition that our sampling criteria will satisfy. \subsection{Review of Gaussian process regression} We use GPR to build a surrogate for the unknown function $y(x):\mathbb{R}^n\rightarrow \mathbb{R}$. The idea is to utilize a Gaussian prior on the function $y$, i.e., \begin{equation} y_0\sim \text{GP}(m(x),k(x,x')), \end{equation} with known mean $m(x):\mathbb{R}^n\rightarrow \mathbb{R}$ and covariance function $k(x,x'):\mathbb{R}^{n\times n}\rightarrow \mathbb{R}$ assumed to be positive-definite. We typically set the mean function $m$ to be identically zero. As for the covariance kernel $k$, we often use the Gaussian covariance: \begin{equation} k(x,x')=\sigma_k^2\exp\left( -\frac{\left\Vert x-x' \right\Vert^2}{2\lambda^2} \right). \end{equation} Conditioning the Gaussian process on the available datapoints, we obtain the predictive process \begin{align} y_*\sim \text{GP}(\bar y(x),\bar k(x,x')), \end{align} where the predictive mean, $\bar y(x)$ and predictive covariance $\bar k(x,x')$ are given in terms of the datapoints: \begin{align}\begin{split} \bar y(x)&=k(x,X)K(X,X)^{-1}Y \\ \bar k(x,x')&=k(x,x')-k(x,X)K(X,X)^{-1}k(X,x'), \end{split} \label{gpr00} \end{align} where $k(x,X)=[k(x,x_1),...,k(x,x_N)]\in \mathbb{R}^N$ and $K(X,X)=\left\{ k(x_i,x_j) \right\}_{i,j=1}^N \in \mathbb{R}^{N\times N}$. 
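As a concrete illustration, the predictive equations \eqref{gpr00} can be implemented in a few lines. This is a minimal sketch, not code from the paper; the kernel parameters and the jitter term are illustrative choices:

```python
import numpy as np

def rbf_kernel(A, B, sigma_k=1.0, lam=0.2):
    """Squared-exponential kernel k(x, x') = sigma_k^2 exp(-||x - x'||^2 / (2 lam^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_k**2 * np.exp(-d2 / (2.0 * lam**2))

def gpr_posterior(X, Y, Xq, noise=1e-8):
    """Predictive mean and pointwise variance of a zero-mean GP conditioned on (X, Y),
    i.e., the two formulas labelled (gpr00) above, with a small jitter for stability."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    mean = Ks @ np.linalg.solve(K, Y)
    cov = rbf_kernel(Xq, Xq) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.clip(np.diag(cov), 0.0, None)
```

At the training inputs the predictive mean interpolates the data and the predictive variance collapses to (numerical) zero, consistent with Proposition \ref{optthm1}.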
The native space for GPR schemes is the reproducing kernel Hilbert space corresponding to the kernel $k$, defined as follows: \begin{definition} A Hilbert space $\mathcal{H}_k$ of functions $y:\mathbb{R}^n\rightarrow\mathbb{R}$, with inner product $\left\langle \cdot,\cdot \right\rangle_{\mathcal H_k}$, is called the reproducing kernel Hilbert space (RKHS) corresponding to a symmetric, positive-definite kernel $k$ if \begin{enumerate} \item for all $x\in \mathbb{R}^n$, $k(x,x')$, as a function of its second argument, $x'$, belongs to $\mathcal H_k$; and \item for all $x\in \mathbb{R}^n$ and $y \in \mathcal H_k$, $\left\langle y,k(x,\cdot) \right\rangle_{\mathcal{H}_k}=y(x)$. \end{enumerate} \end{definition} \noindent For special choices of kernels $k$, the RKHS can be characterized through its spectrum. Specifically, we have the following theorem for the characterization of the RKHS in the case of Gaussian kernels: \begin{theorem}[Wendland, 2004, Theorem 10.12, \cite{Wendland2004}] Let $k(x,x')=\sigma_k^2\exp( -{\left\Vert x-x' \right\Vert^2}/{2\lambda^2} )$ be the squared-exponential kernel. The corresponding RKHS $\mathcal H_k$ can be written as\begin{align} \mathcal H_k=\left\{ y\in L_2(\mathbb{R}^n)\cap C(\mathbb{R}^n):\left\Vert y \right\Vert^2_{\mathcal H_k}=\frac{1}{c_0}\int|\mathcal{F}[y](\omega)|^2\exp(\lambda ^2\left\Vert \omega \right\Vert^2/2) \, \mathrm{d}\omega < \infty\right\}, \end{align} where $c_0$ is a constant that depends on $n$ and $\lambda$, and $\mathcal F$ is the Fourier transform. \end{theorem} \noindent The above result shows that for any $y \in \mathcal H_k$, the magnitude of its Fourier transform $|\mathcal{F}[y](\omega)|$ decays exponentially fast as $|\omega|\rightarrow \infty$, and the speed of decay increases with $\lambda$. Analogous results exist for the case of Mat{\'e}rn kernels \cite{Wendland2004}. A fundamental property of GPR schemes is the fact that one can obtain a priori estimates for the accuracy of the surrogate model. 
In particular, for the case of GPR schemes, we have the following error estimate: \begin{proposition}[Stuart \& Teckentrup, 2018, Proposition 3.5, \cite{Stuart2016}]\label{optthm1} Suppose that $\bar y(x)$ and $\bar k(x,x')$ are given by the GPR scheme \eqref{gpr00}. Then \begin{align} \sup_{\left\Vert y \right\Vert_{\mathcal H_k}=1}|y(x)-\bar y(x)|=\bar k(x,x)^{\frac{1}{2}}\triangleq \bar \sigma (x), \end{align} where the supremum occurs when the functions $y(\cdot)$ and $\bar k(\cdot,x)$ are linearly dependent. \end{proposition} \noindent Based on this result which involves the accuracy of the surrogate map, we will derive the corresponding results for the pdf of the output variable, $p_y(s|\mathcal{D})$. \subsection{Optimality condition for data selection} We formulate an active-sampling criterion that aims directly for the convergence of the output pdf, $p_y(s|\mathcal{D})$. Let $h \in \mathbb{R}^n$ be the new candidate input point. As output, we employ the approximation by the surrogate model, $\bar y(h)$. In this way, we have the augmented dataset $\mathcal{D'}=\{[X,h],[Y,\bar y(h)]\}$, which results in the same predictive mean, $\bar y(x)$, and a new predictive covariance $\bar k'(x,x';h)$. Ideally, we would want the new input to be chosen by minimizing the distance \begin{equation} \mathbb{D}(p_y,p_{\bar y'})=\int_{S_y}\left|\log p_y({s})-\log p_{\bar y'}({s}|\mathcal{D},h)\right| \mathrm{d}{s}. \end{equation} However, this is not possible as $p_y$ is a priori unknown. To this end, we will use as a selection criterion the supremum of the above distance over the unit sphere of the functional space $y \in \mathcal{H}_k$ (we fix the norm of the unknown function without loss of generality). 
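As an aside, the metric $\mathbb{D}$ is straightforward to evaluate numerically on a grid. A small sketch with illustrative Gaussian densities (not an example from the paper):

```python
import numpy as np

def log_pdf_distance(p1, p2, s):
    """D(p1, p2) = int_{S_y} |log p1(s) - log p2(s)| ds, via the trapezoidal rule."""
    f = np.abs(np.log(p1) - np.log(p2))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)))

# Two zero-mean Gaussians that differ only in scale: because the integrand
# involves log densities, the mismatch in the tails dominates the metric.
s = np.linspace(-6.0, 6.0, 2001)
gauss = lambda x, mu, sig: np.exp(-(x - mu)**2 / (2.0 * sig**2)) / (sig * np.sqrt(2.0 * np.pi))
d = log_pdf_distance(gauss(s, 0.0, 1.0), gauss(s, 0.0, 1.2), s)
```

Unlike the Kullback--Leibler divergence, this measure is symmetric and gives low-probability (large-deviation) regions the same weight as the mode.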
Therefore the selection criterion takes the form of minimizing the acquisition function \begin{equation} Q(h|\mathcal{D})\triangleq\sup_{ \left\Vert y \right\Vert_{\mathcal{H}_k}=1}\mathbb{D}(p_y,p_{\bar y'})=\sup_{ \left\Vert y \right\Vert_{\mathcal{H}_k}=1}\int_{S_y}\left|\log p_y({s})-\log p_{\bar y'}({s}|\mathcal{D},h)\right|\mathrm{d}{s}. \label{criterionq} \end{equation} While the criterion is targeting the output pdf, it is not easily computable, especially in high dimensions. The rest of the paper aims to derive a computable version appropriate for high-dimensional input spaces. \section{Asymptotically optimal criterion for data selection} Our efforts focus on obtaining a computable version of the criterion \eqref{criterionq}. We plan to achieve this by assuming small variance $\bar \sigma^2(x)=\bar k(x,x)$. We first recall an asymptotic result that connects the error for a map and the error between the induced pdfs defined by the corresponding maps. \begin{theorem}[Mohamad \& Sapsis, 2018, Theorem 2, \cite{mohamad2018}]\label{approxthm1} Let $\hat y(x)$ and $y(x):\mathbb{R}^n\rightarrow\mathbb{R}$ be two continuous functions with difference $\Delta y({x})=\hat y(x)-y(x)$, which is assumed to be small. Let also $p_x({x})$ be the probability density function of the random vector $x \in \mathbb{R}^n$. The difference between the induced pdfs for $\hat y$ and $y$ has the following asymptotic behavior: \begin{equation*} p_{y}({s})-p_{\hat y}({s}) =-\frac{\mathrm{d}}{\mathrm{d}{s}}\int\displaylimits_{{\hat y}({x})={s}} p_x({x})\Delta y({x})\,\mathrm{d}{x}+\mathcal{O}(|\Delta y|^{2}). 
\end{equation*} \end{theorem} \noindent Building on this result, we have the main theorem that characterizes the supremum in the RKHS between the pdf induced by the surrogate approximation and the pdf induced by the exact map: \begin{theorem}\label{thm_main} Let $y(x) \in \mathcal{H}_k$ be an arbitrary function and its GPR approximation with kernel $k$ given by ${\bar y(x)}$ with corresponding variance $\bar \sigma^2(x)$. Then the following property holds for small $\bar \sigma$: \begin{align}\label{res1} & \sup_{ \left\Vert y \right\Vert_{\mathcal{H}_k}=1}\int_{S_y}\left|\log p_{\bar y}({s})-\log p_{y}({s})\right|\mathrm{d}{s} = \int\displaylimits_{\bar y^{-1}(S_y)}\frac{p_x({x})|p'_{ y}({\bar y}({x}))|} {p_{ y}({\bar y}({x}))^2}\bar \sigma({x})\,\mathrm{d}{x}+\mathcal{O}(\bar \sigma^2). \end{align} \end{theorem} \noindent \textbf{Proof: } Utilizing Theorem \ref{approxthm1}, we have for the case where ${\bar y}$ is close to $y$ (i.e., $\Delta y/ y= (\hat y-y)/y \ll1$) \begin{align*}\log p_{\bar y}({s})-\log p_{y}({s})&=\frac{p_{\bar y}({s})-p_{y}({s})}{p_{y}({s})}+\mathcal{O}(|\Delta p_{y}|^2)\\ &=-\frac{\frac{\mathrm{d}}{\mathrm{d}{s}}\int\displaylimits_{{\bar y}({x})={s}} p_x({x})\Delta y({x})\,\mathrm{d}{x}}{p_{y}({s})}+\mathcal{O}(|\Delta y|^2). 
\end{align*} Expressing the right-hand side as a volume integral with a delta function and using properties of generalized derivatives, we have \begin{align*} \frac{\frac{\mathrm{d}}{\mathrm{d}{s}}\int\displaylimits_{{\bar y}({x})={s}} p_x({x})\Delta y({x})\,\mathrm{d}{x}}{p_{y}({s})}&=\frac{\frac{\mathrm{d}}{\mathrm{d}{s}}\int\displaylimits_{} p_x({x})\Delta y({x})\delta ({s-{\bar y}({x})})\,\mathrm{d}{x}}{p_{y}({s})}\\ &=\int\displaylimits_{}\frac{p_x({x})\Delta y({x})\delta '({s-{\bar y}({x})})}{p_{y}({s})}\,\mathrm{d}{x}\\ &=\int\displaylimits_{} \frac{p_x({x})\Delta y({x})p'_{y}({s})\delta ({s-{\bar y}({x})})}{p^2_{y}({s})}\,\mathrm{d}{x}\\ &=p'_{y}({s})\int\displaylimits_{} \frac{p_x({x})\Delta y({x})\delta ({s-{\bar y}({x})})}{p^2_{y}({s})}\,\mathrm{d}{x}. \end{align*} We note that $p_x({x})\delta ({s-{\bar y}({x})})/p^2_{y}({s}) \ge 0$. By employing Proposition \ref{optthm1} we obtain the tight bound \begin{align*} \int \frac{p_x({x})\Delta y({x})\delta ({s-{\bar y}({x})})}{p^2_{y}({s})} \, \mathrm{d}{x}\leq \int\displaylimits_{} \frac{p_x({x})\bar \sigma({x})\delta ({s-{\bar y}({x})})}{p^2_{y}({s})}\,\mathrm{d}{x},\end{align*} where equality holds for the case where the functions $y(\cdot)$ and $\bar k(\cdot,x)$ are linearly dependent \cite{Stuart2016}. Combining the above, we have \begin{align*} \left|\log p_{\bar y}({s})-\log p_{y}({s})\right|&\leq |p'_{y}({s})|\int\displaylimits_{} \frac{p_x({x})\bar \sigma({x})\delta ({s-{\bar y}({x})})}{p^2_{y}({s})}\,\mathrm{d}{x}+\mathcal{O}(\bar \sigma^2). 
\end{align*} Integrating over ${s}$, we have the final result: \begin{align*}\sup_{\left\Vert y \right\Vert_{\mathcal{H}_k}=1}\int_{S_y}\left|\log p_{\bar y}({s})-\log p_{y}({s})\right| \mathrm{d}s &= \int_{S_y} |p'_{y}({s})|\int\displaylimits_{} \frac{p_x({x})\bar \sigma({x})\delta ({s-{\bar y}({x})})}{p^2_{y}({s})}\,\mathrm{d}{x}\,\mathrm{d}{s}+\mathcal{O}(\bar \sigma^2) \\ & = \int\displaylimits_{\bar y^{-1}(S_y)} \frac{p_x({x})|p'_{y}({{\bar y}({x})})|}{p^2_{y}({\bar y}({x}))}\bar \sigma({x})\,\mathrm{d}{x}+\mathcal{O}(\bar \sigma^2).\end{align*} This completes the proof. $\blacksquare$ Next, we utilize this asymptotic form to reformulate the data selection criterion \eqref{criterionq}. Specifically, we apply the above theorem for the augmented dataset $\mathcal{D}'$ and obtain the asymptotic reformulation of the selection criterion: \begin{align} \sup_{ \left\Vert y \right\Vert_{\mathcal{H}_k}=1}\int_{S_y} \left|\log p_y({s})-\log p_{\bar y'}({s}|\mathcal{D},h)\right|\mathrm{d}{s} & = \int\displaylimits_{\bar y^{-1}(S_y)}\frac{p_x({x})|p'_{ y}({\bar y}({x}))|} {p_{ y}({\bar y}({x}))^2}\bar \sigma({x;h})\,\mathrm{d}{x}+\mathcal{O}(\sigma^2), \end{align} where $\bar \sigma({x;h})=\bar k(x,x;h)$ is the predictive variance based on the augmented dataset $\mathcal{D}'$. We note that the right-hand side involves the pdf $p_y$, which is unknown but can always be approximated by $p_{\bar y}$ with negligible error, given the assumptions of Theorem \ref{thm_main}. This gives us the final asymptotic approximation: \begin{align} Q(h|\mathcal{D})=\sup_{ \left\Vert y \right\Vert_{\mathcal{H}_k}=1}\int_{S_y}\left|\log p_y({s})-\log p_{\bar y'}({s}|\mathcal{D},h)\right|\mathrm{d}{s} & \simeq \int\displaylimits_{\bar y^{-1}(S_y)}\frac{p_x({x})|p'_{\bar y}({\bar y}({x}))|} {p_{\bar y}({\bar y}({x}))^2}\bar \sigma({x;h}) \,\mathrm{d}x. 
\end{align} It is worth emphasizing the term in the denominator, which promotes sampling of regions associated with low probability, i.e., large deviations of the output of the reduced-order model. We can simplify the right-hand side further by using the Cauchy--Schwarz inequality (with weight $\frac{p_x({x})}{p_{\bar y}({\bar y}({x}))}$) to obtain the less conservative upper bound: \begin{equation}\label{res2} Q(h|\mathcal{D})\simeq\int\displaylimits_{\bar y^{-1}(S_y)} \frac{p_x({x})|p'_{\bar y}({{\bar y}({x})})|}{p^2_{\bar y}({\bar y}({x}))}\bar \sigma({x;h})\,\mathrm{d}{x}\le c \left[\int\displaylimits_{\bar y^{-1}(S_y)}\frac{p_x({x})}{p_{\bar y}({\bar y}({x}))}\bar \sigma^{2}({x};h)\, \mathrm{d}x\right]^\frac{1}{2}, \end{equation} where \begin{equation} c=\left[\int\displaylimits_{\bar y^{-1}(S_y)} \frac{p_x({x})p^{\prime 2}_{\bar y}({{\bar y}({x})})}{p^3_{\bar y}({\bar y}({x}))}\,\mathrm{d}{x}\right]^\frac{1}{2} \end{equation} is a constant that depends only on $p_x(x)$, $p_{\bar y}(y)$, and $\bar y(x)$, i.e., not on $h$. This form, also referred to as the output-weighted (or likelihood-weighted) criterion, is appropriate for computations involving even high-dimensional input spaces, since it allows for the analytical computation of $\bar \sigma^{2}({x};h)$ in terms of simpler integrals \cite{Blanchard_SIAM21}. It has been studied numerically in recent papers \cite{sapsis20, Blanchard_SIAM21,blanchard_jcp21}, showing significantly favorable convergence properties compared with existing active-learning criteria. \subsection{The case of extreme-event quantiles} For a wide range of applications, the focus is on characterizing the non-exceedance probability of a certain level, $P_y(s_*)=P[y\leq s_*]$, rather than characterizing the full pdf $p_y(s)$ (the case of the exceedance probability can be addressed in a similar fashion). 
For this type of problem, the appropriate selection criterion is minimizing the acquisition function\begin{equation} R(h|\mathcal{D},s_*)\triangleq\sup_{ \left\Vert y \right\Vert_{\mathcal{H}_k}=1}| P_{\bar y'}({s_*}|\mathcal{D},h)- P_{y}({s_*})|. \end{equation}For this case, we have the following asymptotic result for small $\bar \sigma$: \begin{theorem} Let $y(x) \in \mathcal{H}_k$ be an arbitrary function and its GPR approximation with kernel $k$ given by ${\bar y(x)}$ with variance $\bar \sigma^2(x)$. Then, for any given $s_*$, the following property holds for small $\bar \sigma$: \begin{align*} & \sup_{ \left\Vert y \right\Vert_{\mathcal{H}_k}=1}| P_{\bar y}({s_*})- P_{y}({s_*})| = \int\displaylimits_{\bar { y}({x})={s_*}}\bar \sigma({x})p_x({x})\,\mathrm{d}x+\mathcal{O}(\bar \sigma^2). \end{align*} \end{theorem} \noindent \textbf{Proof: } The starting point is Theorem \ref{approxthm1}. We integrate from $-\infty$ to $s_*$ to obtain the cumulative distribution function on the left-hand side, when ${\bar y}$ is close to $y$ (i.e., $\Delta y/ y= (\hat y-y)/y \ll1$): \begin{align*}P_{\bar y}({s})-P_{y}({s})=-\int\displaylimits_{{\bar y}({x})={s}} p_x({x})\Delta y({x})\,\mathrm{d}{x}+\mathcal{O}(|\Delta y|^2). \end{align*} We then employ Proposition \ref{optthm1} and bound the difference $\Delta y$ on the right-hand side. This completes the proof. $\blacksquare$ Applying the above theorem to the augmented dataset $\mathcal D'$, we obtain the asymptotic form for the optimal selection criterion involving quantiles: \begin{align} & R(h|\mathcal{D},s_*)\triangleq\sup_{ \left\Vert y \right\Vert_{\mathcal{H}_k}=1}| P_{\bar y'}({s_*}|\mathcal{D},h)- P_{y}({s_*})|= \int\displaylimits_{\bar { y}({x})={s_*}}\bar \sigma({x;h})p_x({x})\,\mathrm{d}x+\mathcal{O}(\bar \sigma^2). \end{align} We observe that for this case, the optimal data selection criterion takes a different form, focusing primarily on reducing the error around the contour $\bar y(x)=s_*$. 
This is not surprising given that the extreme-event quantile is essentially equivalent to a classification problem, so what is most important is to have low error around the a priori unknown contour of interest. \subsection{Convergence of spatial approximation error} Given an active-learning criterion, it is important to characterize the convergence of the spatial error of the GPR approximation. This can be obtained with the help of Proposition \ref{optthm1} and standard results of measurable functions. Here we give a proof for the convergence properties using the sampling criterion appearing on the right-hand side in \eqref{res2}. Specifically, we have the following result: \begin{theorem} Suppose that ${\bar y_N}$ and $\bar \sigma_N$ are given by a GPR with kernel $k$, using $N$ samples. Moreover, assume that sampling is performed using the optimal criterion so that \begin{displaymath} \lim_{N\rightarrow\infty} \int \frac{p_x({x})} {p_{\bar y_N}({\bar y_N}({x}))}{\bar \sigma_{{N}}^2}({x})\,\mathrm{d}{x}=0. \end{displaymath} Then we have convergence in measure, i.e., \begin{equation*} \lim_{N\rightarrow \infty}P \left[ x:\sup_{ \left\Vert y \right\Vert_{\mathcal{H}_k}=1}({\bar y_N}({x})-{y}({x}))^2 \le q \frac{p_{\bar y_N} ({\bar y_N}( x))}{p_x({x})} \right]=1, \end{equation*} for every $q>0$. \end{theorem} \textbf{Proof:} From standard results of measurable functions \cite{vulikh}, for every sequence of functions $\phi_k(x)\in L^p$ ($1\le p \le\infty$) for which $\Vert \phi_k(x) \Vert_p \rightarrow 0$, we have convergence in measure: \begin{align} \lim_{k\rightarrow \infty} P[x:|\phi_k(x)|\ge q]=0, \end{align} for every $q>0$. Applying this result to the sequence $p_x({x})\bar \sigma_{{N}}^2({x})/ p_{\bar y_N}({\bar y_N}({x}))$, we obtain \begin{align*}\lim_{N\rightarrow \infty} P \left[x:\frac{p_x({x})} {p_{\bar y_N}({\bar y_N}({x}))}{\bar \sigma_{{N}}^2}({x})\le q \right]=1. 
\end{align*} We then utilize Proposition \ref{optthm1}, which immediately leads to the desired result. $\blacksquare$ This result provides a description of the spatial convergence properties for the approximation error of the reduced-order model. Specifically, it shows that the convergence is accelerated in regions of the input space that are i) most probable according to the pdf $p_x(x)$, and ii) associated with small probability of the output pdf $p_{\bar y_N} ({\bar y_N}( x))$, i.e., large deviations of the output $y$. This type of convergence guarantees a balanced distribution of resources between the most probable inputs and those that result in large deviations for the output, resulting in an effective way of sampling towards data-driven reduced-order modeling. \section{Numerical illustration} We demonstrate the optimal sampling criteria in a mechanical oscillator subject to stochastic forcing and in the reduced-order modeling of a beam under axial and transverse stochastic loads. We consider the optimal sampling criteria appearing on each side of the inequality in \eqref{res2}. The left-hand side will be referred to as the ``B'' criterion, and the right-hand side as ``IVR-LW'' as it is strictly equivalent to the eponymous criterion introduced in \cite{sapsis20, Blanchard_SIAM21}. We also consider two criteria commonly used in the literature which do not account for the importance of the output relative to the input; namely, uncertainty sampling (US) and input-weighted integrated variance reduction (IVR-IW), whose definitions can be found in \cite{sapsis20, Blanchard_SIAM21}. For each example below, we run 100 Bayesian experiments, each differing in the choice of the $n+1$ points making up the initial dataset. Observations are assumed to be corrupted by Gaussian noise with zero mean and unknown (i.e., to be learned) variance $\sigma_n^2$. 
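To illustrate how an output-weighted score might be computed in practice, the following sketch estimates the IVR-LW integral from \eqref{res2} (up to the constant $c$) by Monte Carlo over samples of $p_x$. All helper names are hypothetical, and a kernel density estimate stands in for the surrogate output pdf; in \cite{sapsis20, Blanchard_SIAM21} the integrals are instead obtained analytically:

```python
import numpy as np

def rbf(A, B, s2=1.0, lam=0.2):
    """Squared-exponential kernel (illustrative hyperparameters)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return s2 * np.exp(-d2 / (2.0 * lam**2))

def post_mean_var(X, Y, Xq, noise=1e-6):
    """GPR predictive mean and pointwise variance at query points Xq."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    mean = Ks @ np.linalg.solve(K, Y)
    var = np.diag(rbf(Xq, Xq)) - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.clip(var, 0.0, None)

def kde_1d(samples, query, bw=0.2):
    """Gaussian kernel density estimate of the surrogate output pdf."""
    z = (np.asarray(query)[:, None] - np.asarray(samples)[None, :]) / bw
    return np.exp(-0.5 * z**2).mean(axis=1) / (bw * np.sqrt(2.0 * np.pi))

def ivr_lw_score(h, X, Y, x_mc):
    """Monte Carlo estimate of E_{x ~ p_x}[ sigma^2(x; h) / p_ybar(ybar(x)) ]:
    the candidate h is added with the pseudo-observation ybar(h), and the
    remaining output-weighted posterior variance is averaged over x_mc."""
    ybar_h, _ = post_mean_var(X, Y, h[None, :])
    Xp, Yp = np.vstack([X, h[None, :]]), np.append(Y, ybar_h)
    ybar_mc, _ = post_mean_var(X, Y, x_mc)   # surrogate outputs (unchanged by h)
    _, var_mc = post_mean_var(Xp, Yp, x_mc)  # posterior variance after adding h
    p_y = kde_1d(ybar_mc, ybar_mc)           # KDE of the surrogate output pdf
    return float(np.mean(var_mc / np.maximum(p_y, 1e-12)))
```

The next sample is the minimizer of this score over candidate points $h$; a candidate in an unexplored region reduces the weighted residual variance far more than one near the existing data.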
Performance at each iteration is evaluated using the median of $\mathbb{D}(p_y,p_{\bar{y}_N})$ across the 100 randomized experiments. Note that we employ this particular quantity to measure performance so we can better emphasize the accuracy in the tails of the resulting probability distributions. \subsection{Forced nonlinear oscillator exhibiting extreme events} We begin with the stochastic oscillator of Mohamad and Sapsis \cite{mohamad2018}, \begin{equation} \ddot{u} + \delta \dot{u} + F(u) = \xi(t), \quad t \in [0,T], \label{eq:43} \end{equation} where $u(t) \in \mathbb{R}$ is the state variable, $F$ a nonlinear restoring force, and $\xi(t)$ a stationary stochastic process which we parametrize using a Karhunen--Lo{\`e}ve expansion with $n$ modes:\begin{displaymath} \xi(t)\simeq x \Phi(t),\ \end{displaymath} where $\{\Lambda, \Phi(t)\}$ contains the first $n$ eigenpairs of the correlation matrix, and $x \in \mathbb{R}^n$ is a vector of random coefficients having mean zero and diagonal covariance matrix $\Lambda$. The quantity of interest is taken to be the mean value of $u(t)$ over the interval $[0,T]$. (Further details about system parameters can be found in \cite{mohamad2018, Blanchard_SIAM21}). We consider the case $n=2$ as it allows visual comparison of the decisions made by the sampling criteria as more points are being acquired. Even with $n=2$, the output pdf has heavy tails (see figure \ref{fig:1}), a consequence of the strong nonlinearity in \eqref{eq:43}. For $\sigma_n^2 = 10^{-3}$, figure \ref{fig:1} shows that the derived optimal criteria accelerate convergence of the output pdf quite dramatically. The error for B remains close to, but always slightly below, that for IVR-LW, which is consistent with the mathematical derivation laid out in the previous section. 
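The Karhunen--Lo{\`e}ve parametrization of the forcing used above can be sketched numerically. This is an illustrative discretization with made-up parameters, not those of the cited studies:

```python
import numpy as np

def kl_modes(times, corr, n_modes):
    """Discrete Karhunen-Loeve expansion: eigendecompose the correlation
    matrix C_ij = corr(t_i - t_j) and keep the n_modes largest eigenpairs."""
    C = corr(times[:, None] - times[None, :])
    lam, phi = np.linalg.eigh(C)            # eigh returns ascending eigenvalues
    idx = np.argsort(lam)[::-1][:n_modes]
    return lam[idx], phi[:, idx]

# One realization xi(t) = x . Phi(t), with x ~ N(0, diag(Lambda)).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
corr = lambda dt: np.exp(-dt**2 / (2.0 * 0.1**2))   # illustrative correlation
lam, phi = kl_modes(t, corr, n_modes=2)
xi = phi @ (np.sqrt(lam) * rng.normal(size=2))
```

Truncating at $n$ modes reduces the infinite-dimensional stochastic forcing to an $n$-dimensional random vector $x$, which is the input space over which the sampling criteria operate.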
\begin{figure}[!ht] \centering \includegraphics[width=5.9in, clip=true, trim=15 10 10 10]{fig_1} \caption{For the stochastic oscillator \eqref{eq:43} with $n=2$, contour plot of the output $y$ (left) and its pdf (center), and performance of several sampling criteria for $\sigma_n^2 = 10^{-3}$ (right). The error bands indicate one half of the median absolute deviation.} \label{fig:1} \end{figure} To explain the success of the proposed optimal criteria, we investigate the decisions made by US, B, and IVR-LW in the case where observations are noiseless and $\sigma_n^2$ is set to zero in the GPR model (i.e., it is not learned from data). Consistent with \cite{sapsis20, Blanchard_SIAM21}, figure \ref{fig:2} shows that US attempts to reduce uncertainty somewhat evenly across the space as it has no mechanism to discriminate between relevant and irrelevant regions. With IVR-IW, the algorithm does not explore beyond the center region where $p_x$ is large, and consequently the interesting regions are not visited. On the other hand, both B and IVR-LW decide to focus on a diagonal band, with the former being even more surgical and localized than the latter. Specifically, IVR-LW focuses on input regions with important probability as well as those input regions associated with large outputs. In this way the resulting surrogates predict the output statistics much better than with US. 
\begin{figure}[!t] \centering \subfloat[][US]{\includegraphics[width=2.2in]{us0060}} \qquad \subfloat[][IVR-IW]{\includegraphics[width=2.2in]{ivr_iw0060}} \subfloat[][B]{\includegraphics[width=2.2in]{b0060}} \qquad \subfloat[][IVR-LW]{\includegraphics[width=2.2in]{ivr_lw0060}} \caption{For the stochastic oscillator \eqref{eq:43} with $n=2$ and $\sigma_n^2=0$, progression of the sampling algorithm for several criteria after 60 iterations; the contours denote the posterior mean of the GPR model, the open squares the initial dataset, and the filled circles the optimized samples.} \label{fig:2} \end{figure} \subsection{Buckling of a beam under stochastic axial and transverse excitation} Next, we consider the case of a beam of length $l$ subject to both axial and transverse stochastic loads. The linearized equation of motion takes the form \begin{align} \frac{\partial^2w}{\partial t^2}+2\zeta\omega_0\frac{\partial w}{\partial t}+\omega_0^2\frac{\partial ^4w}{\partial x^4}+P(t)\frac{\partial ^2w}{\partial x^2}=R(x,t), \end{align} where $P(t)$ is a random axial load and $R(x,t)$ is a random distributed transverse load \cite{nayfeh_mook}. We assume that both functions have zero-mean Gaussian statistics and prescribed spectra. We also assume that the beam has pinned boundary conditions at both ends. In this case and assuming weak damping, the solution can be expressed as \begin{align} w(x,t)=\sum_{j=1}^{\infty}f_j(t)\sin(j\pi x/l), \end{align} which results in a set of modal equations: \begin{align} \ddot f_j+2\zeta\omega_0 \dot f_j + \omega_j^2 [1-P(t)/c_j] f_j=R_j(t), \qquad j=1,2,... \end{align} where \begin{align} \omega_j^2=\omega_0^2 (j\pi/l)^4, \quad c_j=(j\pi/l)^4, \quad \text{and} \quad R_j(t)=\frac{2}{l}\int_0^l \sin (j\pi x/l)R(x,t)\,\mathrm{d}x. 
\end{align} We define as quantity of interest the maximum absolute displacement at $x=l/4$ over a prescribed time interval $[0,T]$: \begin{align} y=\max_{t\in [0,T]} \left|w(l/4,t)\right|= \max_{t\in [0,T]} \left|\sum_{j=1}^{J}\sin(j\pi/4)f_j(t)\right|. \end{align} We use $\sigma_\xi^2 \exp[-t^2/(2\ell_{\xi}^2)]$ for the correlation function of $P(t)$ and each $R_j(t)$, which are expanded using Karhunen--Lo{\`e}ve expansions with $n_\textit{KL}$ modes. The search space, therefore, has dimension $n= J(n_\textit{KL}+1)$. We use parameters $\sigma_\xi=20$, $\ell_{\xi}=0.1$, and $T=5$. For simplicity, we consider a modal truncation of $J=1$ and $n_\textit{KL}=1$, leading to a two-dimensional search space. For the parameters considered, figure \ref{fig:3} shows that the pdf of the output has a heavy right tail, to which the B and IVR-LW criteria converge more quickly than US and IVR-IW. The reason is that the extreme displacement values are found in low-probability areas of the search space (i.e., small $p_x$), which US and IVR-IW have no mechanism to discover. \begin{figure}[!ht] \centering \includegraphics[width=5.9in, clip=true, trim=12 10 10 10]{fig_3} \caption{For the beam under random load with $n=2$ ($J=1$, $n_\textit{KL}=1$), contour plot of the output $y$ (left) and its pdf (center), and performance of several sampling criteria for $\sigma_n^2 = 10^{-3}$ (right). The error bands indicate one half of the median absolute deviation.} \label{fig:3} \end{figure} \section{Conclusions} We have derived optimal acquisition functions for active-learning schemes utilized for reduced-order modeling based on Gaussian process regression. The key feature of these optimal acquisition functions is a mechanism that targets the output pdf of the surrogate model. The derivation begins by selecting each sample so that the distance between the exact output pdf and the approximated output pdf obtained from the reduced-order model is minimized. 
Given that the exact pdf is a priori unknown, a supremum of this distance over the native Hilbert space for the Gaussian process regression is considered. The resulting selection criterion, although optimal, is generally computationally intractable. We show that this difficulty can be overcome by deriving successive asymptotic upper bounds, resulting in a sampling criterion which can be evaluated analytically along with its gradients. In addition to the data-selection criterion, we derive the corresponding bound for the approximation error of the resulting reduced-order model. This result shows that our approach enables the reduced-order model to find the optimal trade-off between the most likely and the most interesting (i.e., large-deviation) outputs for the process at hand, thereby greatly enhancing the effectiveness of the algorithm. Numerical results confirm the derived analytical findings. Note that in this work, for the sake of simplicity, we have chosen to represent the reduced-order model in the form of a function from the parameter space to the output space. The present framework can be adapted to represent the reduced-order model in the form of a low-dimensional dynamical system, focusing, e.g., on capturing a specific mechanism of the full dynamics. We leave this topic as a possible direction for future work. \subsubsection*{Acknowledgments} The authors acknowledge support from the Air Force Office of Scientific Research (MURI\ Grant No. FA9550-21-1-0058), the Defense Advanced Research Projects Agency (Grant No. HR00112110002), and the Office of Naval Research (Grant No. N00014-21-1-2357). \bibliographystyle{abbrv}
\section{Introduction} \label{sec:Introduction} Owing to its superior spectral efficiency, the {non-orthogonal multiple access (NOMA)} technique has emerged as a promising candidate for future wireless cellular networks. Unlike traditional {orthogonal multiple access (OMA)}, NOMA enables the {base stations (BSs)} to concurrently serve multiple users using the same resource block (RB); see \cite{ding2017survey} and the references therein. In NOMA, the BS superimposes multiple layers of messages at different power levels and the user decodes its intended message using the successive interference cancellation (SIC) technique. In particular, the users with weaker channel qualities are assigned higher powers so that their intra-cell interference is smaller. For such power allocation, the user first successively decodes and cancels, using SIC, the interference from the layers assigned to the users with weaker channels and then decodes its intended message. {A key design aspect for NOMA is the pairing of users for non-orthogonal transmission. For example, consider a user with good channel quality, say UE$_1$, and a user with poor channel quality, say UE$_2$. In OMA, the associated BS ends up allocating most of the time slots to UE$_2$ in order to ensure its quality of service (QoS) requirement, which leads to fewer transmission opportunities for UE$_1$ and consequently a smaller cell throughput. On the other hand, NOMA allows the BS to concurrently serve both UE$_1$ and UE$_2$ using the same spectral resource, which provides uninterrupted channel access to UE$_1$ while meeting the required QoS constraints of UE$_2$. Although UE$_1$'s {\em per-slot} throughput may suffer (compared to OMA) because of the need to decode both messages (corresponding to UE$_1$ and UE$_2$), the overall transmission rate may still improve because of more transmission opportunities compared to OMA. 
As a result, NOMA can achieve a higher cell ergodic capacity than OMA \cite{Ding_NOMA_2014}. In this regard, it is beneficial to pair users with distinct link qualities. The readers can refer to \cite{ding2016impact} for more details on the effects of user pairing on NOMA performance.} \subsection{Prior Art} \label{subsec:PriorArt} The set of users scheduled for the non-orthogonal transmission in NOMA is often termed the {\em user cluster}. To form a user cluster, one needs to first rank users based on their channel gains (both large-scale path losses and small-scale fading effects) and perceived inter-cell interference, and then place users with distinct link qualities in the same cluster \cite{ding2016impact}. However, since the prior works on NOMA are mostly focused on {single-cell} analysis, the users are ranked solely based on their distances from the BS \cite{choi2016power,liu2016cooperative} or based on their link qualities \cite{Ding_NOMA_2014,ding2015cooperative,timotheou2015fairness,zhu2017optimal}. While such user ranking is meaningful in the context of {single-cell} analysis, it ignores the impact of the {inter-cell} interference, which is crucial for accurate performance analysis of NOMA \cite{ali2017non}. {Recently, stochastic geometry has emerged as a popular choice for the analysis of large-scale cellular networks \cite{haenggi2012stochastic,AndrewsGD16,blaszczyszyn_haenggi_keeler_mukherjee_2018}. However, incorporating sophisticated user ranking jointly based on the link qualities and inter-cell interference is challenging.} This is because of the correlation in the corresponding desired and inter-cell interference powers received by the users within the same cell. Therefore, most of the existing works in this direction ignore these correlations and instead rank the users in the order of their mean desired signal powers (i.e., link distances) so that the $i$-th closest user becomes the $i$-th strongest user. 
Within this general direction, the authors of \cite{ali2019downlink,ali2019downlinkLetter,salehi2018meta,salehi2018accuracy,tabassum2017modeling} analyzed $N$-ranked NOMA in cellular networks assuming that the BSs follow a Poisson point process (PPP). In \cite{tabassum2017modeling}, the uplink success probability is derived assuming that the users follow a Poisson cluster process (PCP). In \cite{ali2019downlink}, the downlink success probability is derived while forming the user cluster within the indisk of the Poisson-Voronoi (PV) cell. However, this may underestimate the NOMA performance gains because users within the indisk of a PV cell will usually experience similar channel conditions and hence lack the channel gain imbalance that results in the NOMA gains (see \cite{ding2016impact}). {By ranking the users based on their link distances, the authors of \cite{salehi2018meta,ali2019downlinkLetter} derive the moments of the {\em meta distribution} of the downlink signal-to-interference ratio ($\mathtt{SIR}$) \cite{Martin2016Meta}.} However, \cite{salehi2018meta} ignores the joint decoding of the subset of layers associated with SIC. {The authors of \cite{ali2019downlinkLetter,salehi2018meta,salehi2018accuracy,tabassum2017modeling} use the distribution of the typical link distance (in the network) to derive the order statistics of link distances of clustered users. As implied already, this ignores the correlation in the user locations placed in a PV cell, which are a function of the BS point process.} A key unintended consequence of this approach is that it does not necessarily confine the user cluster to a PV cell, which is a significant approximation of the underlying setup (see Fig. \ref{fig:illustration_PriorNOMAModel}, Middle and Left). The spectral efficiency of the $K$-tier heterogeneous cellular networks is analyzed in \cite{liu2017non} wherein the small cells serve their users using two-user NOMA with distance-based ranking. 
On similar lines, \cite{zhang2017downlink} derives the outage probability for two-user downlink NOMA cellular networks, modeled as a PPP, by ranking the users based on the channel gains normalized by their received inter-cell interference powers. {The normalized gains are assumed to be independent and identically distributed (i.i.d.) and follow the distribution that is observed by a typical user in the network. This indeed ignores the correlation in the link distances as well as the inter-cell interference powers associated with the users placed in the same PV cell.} A more reasonable way of accurately ranking the users is to form a user cluster by selecting users from distinct regions of the PV cell. One way of constructing these regions is based on the ratio of mean powers received from the serving and dominant interfering BSs. In particular, a PV cell can be divided into the cell center (CC) region, wherein the ratio is above a threshold $\tau$, and the cell edge (CE) region, wherein the ratio is below $\tau$. A similar approach of classifying users as CC and CE is also used in 3GPP studies to analyze the performance of schemes such as soft frequency reuse (SFR) \cite{dominique2010self}. Inspired by this, we characterize the CC and CE users based on their path-losses from the serving and dominant interfering BSs to pair them for the two-user NOMA system. While the proposed approach can be directly extended to the $N$-user NOMA using $N-1$ region partitioning thresholds, we focus on the two-user NOMA for the ease of exposition. {The proposed user pairing technique helps to construct the user cluster with distinct link qualities, which is essential for NOMA performance \cite{ding2016impact}. 
This is because of two key reasons: 1) the order statistics of received powers at different users in the cell are dominated by their corresponding path-losses \cite{wildemeersch2014successive}, and 2) the dominant interfering BS contributes most of the interference power in the PPP setting \cite{Vishnu2017UAV}.} Besides user ranking, {resource allocation (RA)} is an integral part of the NOMA design. On these lines, the prior works have investigated RA for the downlink NOMA system using a variety of performance metrics, such as user fairness \cite{timotheou2015fairness,Choi2016Fairness,zhu2017optimal} and weighted sum-rate maximization \cite{zhu2017optimal,sun2017optimal,Parida_NOMA}. {Further, the RA problems for maximizing the sum-rate \cite{zhu2017optimal,ali2019downlink,Wang2016PA} or the energy efficiency \cite{zhu2017optimal,Zhang2017EE,fang2016energy} subject to a minimum transmission rate constraint have also been investigated. } {Besides, \cite{Chen_NOMA_OMA_RA} has demonstrated that the achievable {sum-rate} in NOMA is always higher than that of OMA under minimum transmission rate constraints.} {These RA formulations are meaningful for the full-buffer {non-real time (NRT)} services, such as file downloading, wherein transmission rates usually determine the QoS. 
However, they are not suitable for delay-sensitive applications with {real time (RT)} traffic, such as video streaming and augmented reality, which are becoming even more critical in the context of newly emerging applications of wireless communication.} {For such applications, it is essential that the RA formulations explicitly include the delay constraints.} In this context, the effective capacity ($\mathtt{EC}$), defined in \cite{wu2003EffectiveCapacity} as the maximum achievable arrival rate that satisfies a delay QoS constraint, is analyzed for NOMA in \cite{yu2018link,Xiao2019LowLatency,Choi2017EffectiveCapacityDelayQoS} for the downlink case and in \cite{Qiao2012TransStrategy,Sebastian} for the uplink case. However, the existing works on the delay analysis of NOMA are relatively sparse. {Besides, the works mentioned above on RA with the focus on the cell sum-rate ($\mathtt{CSR}$) (i.e., \cite{Chen_NOMA_OMA_RA,timotheou2015fairness,Choi2016Fairness,zhu2017optimal,Wang2016PA, Zhang2017EE,fang2016energy}) and $\mathtt{EC}$ (i.e., \cite{yu2018link,Xiao2019LowLatency,Choi2017EffectiveCapacityDelayQoS, Qiao2012TransStrategy,Sebastian}) are limited to the single-cell setting, and hence {they} ignore the impact of inter-cell interference.} \subsection{Contributions} \label{subsec:Contribution} {The primary objective of this paper is to enable the accurate downlink NOMA analysis of cellular networks from the perspective of stochastic geometry. {As we discussed in Section \ref{subsec:PriorArt}, the existing stochastic geometry-based NOMA analyses do not capture the correlation between signal qualities of the paired users that is induced by the fact that these users are located in the same PV cell. In addition, these analyses also do not explicitly pair users with distinct signal qualities, which is crucial for harnessing performance gains using NOMA. 
The user pairing technique presented in this paper overcomes the above limitations while enabling tractable system-level analysis of downlink NOMA using stochastic geometry. } Our approach focuses on the performance of the {\em typical cell}, which departs significantly from the standard approach of analyzing the performance of the {\em typical user} in a cellular network that is selected independently of the BS locations \cite{Praful_TypicalCell,Praful_TypicalCell_MetaDis}. The key contributions of our analysis are briefly summarized below. \begin{enumerate} \item {This paper presents a novel 3GPP-inspired user pairing technique for NOMA to accurately select the CC and CE users with distinct link qualities. } \item We derive approximate yet accurate moments of the meta distributions for the CC and CE users belonging to the typical cell under both NOMA and OMA systems. Besides, we also provide tight beta distribution approximations of the meta distributions. \item Next, we derive approximate distributions of the mean transmission rates and the upper bounds on the distributions of the mean packet delays of the CC and CE users for both NOMA and OMA systems under the random scheduling scheme. \item Finally, we present RA formulations for NRT and RT services under both NOMA and OMA systems. For the NRT service, the objective is to maximize $\mathtt{CSR}$ such that minimum transmission rates of the CC and CE users are satisfied. For the RT service, the aim is to maximize {sum $\mathtt{EC}$ ($\mathtt{SEC}$)} such that the minimum $\mathtt{EC}$s of CC and CE services are achieved. We present an efficient approach to obtain the near-optimal RAs for these formulations. 
\item Our numerical results demonstrate that: a) NOMA is beneficial to both CC and CE users as compared to OMA except when OMA schedules the CC users for most of the time, b) the proposed near-optimal RA under NRT services provides a higher $\mathtt{CSR}$ in NOMA as compared to that of OMA, and c) the proposed near-optimal RA under RT services provides an improved $\mathtt{SEC}$ in NOMA as compared to that of OMA at a higher user density. \end{enumerate} } \section{System Model} \label{sec:SystemModel} \subsection{Network Modeling} We model the locations of BSs using the homogeneous PPP $\Phi$ with density $\lambda$. {This paper considers the strongest mean power-based BS association policy. Hence, the coverage region of the BS at $\mathbf{x}\in\Phi$ becomes the PV cell $V_\mathbf{x}$, which is given by} $V_\mathbf{x}=\{\mathbf{y}\in\mathbb{R}^2: \|\mathbf{y}-\mathbf{x}\|\leq \|\mathbf{y}-\mathbf{x}'\|,\forall \mathbf{x}'\in\Phi\}.$ Let $V_{\mathbf{x} c}$ and $V_{\mathbf{x} e}$ be the CC and CE regions, respectively, of the PV cell $V_\mathbf{x}$ corresponding to the BS at $\mathbf{x}\in\Phi$, which are defined as \begin{equation} \begin{split} V_{\mathbf{x} c}&=\{\mathbf{y}\in V_\mathbf{x} : \|\mathbf{y}-\mathbf{x}\|\leq \min_{\mathbf{x}'\in\Phi_\mathbf{x}}\tau\|\mathbf{y}-\mathbf{x}'\|\} \\\text{ and } V_{\mathbf{x} e}&=\{\mathbf{y}\in V_\mathbf{x} : \|\mathbf{y}-\mathbf{x}\|> \min_{\mathbf{x}'\in\Phi_\mathbf{x}}\tau\|\mathbf{y}-\mathbf{x}'\|\}, \end{split} \label{eq:CC_CE_Regions} \end{equation} where $\Phi_\mathbf{x}=\Phi\setminus\{\mathbf{x}\}$ and $\tau\in(0,1)$ is the boundary threshold. Fig. \ref{fig:illustration_PriorNOMAModel} (Left) depicts the CC and CE regions for $\tau=0.7$. 
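In practice, the classification in \eqref{eq:CC_CE_Regions} reduces to comparing each user's serving-link distance against $\tau$ times its distance to the nearest interfering BS. The following snippet is a minimal numerical sketch of this rule written for this discussion (the window-based BS layout only approximates a homogeneous PPP, and all function names are ours):

```python
import numpy as np

def classify_cc_ce(bs, users, tau):
    """Return a boolean array: True where the user is cell-center (CC).

    A user is CC when its distance to the serving (nearest) BS is at most
    tau times its distance to the nearest interfering (second-nearest) BS.
    """
    # pairwise user-to-BS distances; after sorting, column 0 is the serving
    # BS and column 1 is the dominant interferer
    d = np.linalg.norm(users[:, None, :] - bs[None, :, :], axis=2)
    d.sort(axis=1)
    return d[:, 0] <= tau * d[:, 1]

rng = np.random.default_rng(0)
# roughly unit-density BSs on a 10x10 window, plus uniformly dropped users
bs = rng.uniform(0.0, 10.0, size=(rng.poisson(100), 2))
users = rng.uniform(0.0, 10.0, size=(1000, 2))
is_cc = classify_cc_ce(bs, users, tau=0.7)
```

For a typical user, the fraction classified as CC should be close to $\tau^2$ (cf. Lemma \ref{lemma:DistanceDistributions}), up to window boundary effects.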
Now, similar to \cite{Priyo2019FPR}, extending the application of the Type I user point process \cite{Haenggi2017}, we define the point processes of the locations of the CC and CE users as \begin{equation} \begin{split} \Psi_{cc}&=\{U(V_{\mathbf{x} c};N_{\mathbf{x} c}): \mathbf{x}\in \Phi \}\\ \text{and}~\Psi_{ce}&=\{U(V_{\mathbf{x} e};N_{\mathbf{x} e}): \mathbf{x}\in \Phi \}, \end{split} \label{eq:CC_CE_pps} \end{equation} respectively, where $U(A;N)$ denotes $N$ points chosen independently and uniformly at random from the set $A$. {Here, $N_{\mathbf{x} c}$ and $N_{\mathbf{x} e}$ are the numbers of CC users in $V_{\mathbf{x} c}$ and CE users in $V_{\mathbf{x} e}$, respectively, where $\nu$ represents the user density. We assume that $N_{\mathbf{x} c}$ and $N_{\mathbf{x} e}$ follow the zero-truncated Poisson distributions with means $\nu|V_{\mathbf{x} c}|$ and $\nu|V_{\mathbf{x} e}|$, respectively.} We refer to $N_{\mathbf{x} c}$ and $N_{\mathbf{x} e}$ as the {\em CC} and {\em CE loads} of the BS at $\mathbf{x}$. {Table I summarizes the system design variables and performance metrics considered in this paper. } Since, by Slivnyak\textquotesingle s theorem, conditioning on a point is the same as adding a point to the PPP, we consider that the nucleus of the {\em typical cell} of the point process $\Phi\cup\{o\}$ is located at the origin $o$. Thus, the typical cell becomes $ V_o=\{\mathbf{y}\in\mathbb{R}^2\mid \|\mathbf{y}-\mathbf{x}\|>\|\mathbf{y}\|\,\forall \mathbf{x}\in\Phi\}$. For this setting, the locations of the {\em typical CC user} and the {\em typical CE user} of $\Psi_{cc}$ and $\Psi_{ce}$ can be modeled using uniformly distributed points in $V_{oc}$ and $V_{oe}$, respectively. Thus, $\Phi$ becomes the point process of the interfering BSs that is observed by the typical CC and CE users. 
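The per-cell loads above are zero-truncated Poisson variables with prescribed means. As a short sketch of how such a load might be sampled (our own construction, not code from the paper): the underlying Poisson rate $\lambda_0$ solving $m=\lambda_0/(1-e^{-\lambda_0})$ can be found by fixed-point iteration, after which zero draws are rejected.

```python
import numpy as np

def ztp_rate(mean, iters=100):
    """Underlying Poisson rate lam such that the zero-truncated mean
    lam / (1 - exp(-lam)) equals `mean` (requires mean > 1)."""
    lam = mean
    for _ in range(iters):
        lam = mean * (1.0 - np.exp(-lam))  # fixed point of m = lam/(1 - e^-lam)
    return lam

def sample_ztp(mean, size, rng):
    """Draw `size` zero-truncated Poisson loads with the given mean."""
    lam = ztp_rate(mean)
    out = np.empty(size, dtype=np.int64)
    for i in range(size):
        n = rng.poisson(lam)
        while n == 0:                      # condition on N >= 1
            n = rng.poisson(lam)
        out[i] = n
    return out

rng = np.random.default_rng(1)
# e.g. a hypothetical mean CC load nu * |V_{xc}| = 5
loads = sample_ztp(mean=5.0, size=10000, rng=rng)
```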
\begin{table*} \centering \caption{Summary of the system design variables and performance metrics considered in this paper.} \label{table:Syatem_Variable} \hspace{-5mm}{\small \begin{tabular}{ |c |c||c|c| } \hline \multicolumn{2}{|c||}{\textbf{System design variables }} & \multicolumn{2}{c|}{\textbf{Performance metrics}}\\ \hline BS point process & $\Phi$ & Meta distribution for CC users & $\bar{F}_c(\cdot,\cdot)$ \\ \hline BS and user densities & $\lambda$ and $\nu$ & Meta distribution for CE users & $\bar{F}_e(\cdot,\cdot)$ \\ \hline CC and CE users point processes & $\Psi_{cc}$ and $\Psi_{ce}$ & $b$-th moments of meta distributions & $M_b^c$ and $M_b^e$\\ \hline PV cell associated with BS at $\mathbf{x}$ & $V_\mathbf{x}$ & Transmission rates of CC and CE users & $R_c$ and $R_e$\\ \hline CC and CE regions of PV cell $V_\mathbf{x}$ & $V_{\mathbf{x}c}$ and $V_{\mathbf{x}e}$ & Service rates of CC and CE users & $\mu_c$ and $\mu_e$ \\ \hline Number of CC and CE users & $N_{\mathbf{x}c}$ and $N_{\mathbf{x}e}$ & $\mathtt{CDF}$ of CC user's transmission rate & $\mathcal{R}_c(\cdot,\cdot)$ \\ \hline Service link distance & $R_o$ & $\mathtt{CDF}$ of CE user's transmission rate & $\mathcal{R}_e(\cdot,\cdot)$\\ \hline Dominant interfering link distance & $R_d$ & $\mathtt{CDF}$ of upper bound on CC user's delay & $\mathcal{D}_c(\cdot,\cdot)$ \\ \hline Boundary threshold for regions & $\tau$ & $\mathtt{CDF}$ of upper bound on CE user's delay & $\mathcal{D}_e(\cdot,\cdot)$ \\ \hline Transmission layers & $\mathtt{L_c}$ and $\mathtt{L_e}$ & Cell sum rate & $\mathtt{CSR_{NOMA}}$\\ \hline $\mathtt{SIR}$ thresholds & $\beta_c$ and $\beta_e$ & Effective capacity of CC service & $\mathtt{EC}^c_{\mathtt{NOMA}}$ \\ \hline Power allocated to $\mathtt{L_c}$ and $\mathtt{L_e}$ & $\theta $ and $(1-\theta)$ & Effective capacity of CE service & $\mathtt{EC}^e_{\mathtt{NOMA}}$ \\ \hline \end{tabular}} \end{table*} Let $R_o=\|\mathbf{y}\|$ be the {\em service link distance}, i.e., the distance between the user at $\mathbf{y}\in V_o$ and its serving BS at $o$. 
Let $R_d=\|\mathbf{x}_d-\mathbf{y}\|$ be the distance from the user at $\mathbf{y}\in V_o$ to its dominant interfering BS at $\mathbf{x}_d\in\Phi$ where $\mathbf{x}_d=\operatorname{arg~max}_{\mathbf{x}\in\Phi}\|\mathbf{x}-\mathbf{y}\|^{-\alpha}$ and $\alpha$ is the path-loss exponent. Therefore, the definitions, given in \eqref{eq:CC_CE_Regions} and \eqref{eq:CC_CE_pps}, implicitly classify the CC and CE users based on their distances (i.e., path-losses) from their serving and dominant interfering BSs such that the CC user at $\mathbf{y}\in V_{oc}$ has $R_o\leq \tau R_d$ and the CE user at $\mathbf{y}\in V_{oe}$ has $R_o> \tau R_d$. \begin{figure*} \centering \hspace{-1.5cm} \includegraphics[trim=.85cm 1cm .85cm .5cm, width=.35\textwidth]{Illustration_ProposedCriteria.pdf}\hspace{-.5cm} \includegraphics[trim=.85cm 1cm .85cm .5cm, width=.35\textwidth]{Illustration_Comp.pdf}\hspace{-.1cm} \includegraphics[trim=.85cm 1cm .85cm .5cm, width=.35\textwidth]{DistanceBasedRanking.pdf}\vspace{-.2cm} \caption{Left: typical realization of $\Psi_{cc}$ and $\Psi_{ce}$ for $\tau=0.7$, $\lambda=1$, and $\nu=20$. Middle: an illustration of the user cluster from \cite{ali2019downlinkLetter,salehi2018meta,salehi2018accuracy} for $N=6$ (and the fact that the PV cell not necessarily confines the user cluster). Right: the distributions of the ordered link distances modeled in \cite{ali2019downlinkLetter,salehi2018meta,salehi2018accuracy} for $N=6$ and $\lambda=1$. The dot, cross, plus, and star markers correspond to BSs, CC users, CE users, and user cluster, respectively. The green and white colors in the left figure show the CC and CE regions.}\vspace{-3mm} \label{fig:illustration_PriorNOMAModel} \end{figure*} Fig. \ref{fig:illustration_PriorNOMAModel} (Left) illustrates a typical realization of $\Psi_{cc}$ and $\Psi_{ce}$. 
From this figure, it is clear that \eqref{eq:CC_CE_Regions} accurately preserves the CC and CE regions wherein the $\mathtt{SIR}$ is expected to be higher and lower, respectively. As a comparison, Fig. \ref{fig:illustration_PriorNOMAModel} (Middle) illustrates a realization of a user cluster that results from the distance-based ranking scheme {that is employed in} \cite{ali2019downlinkLetter,salehi2018meta,salehi2018accuracy}. As is clearly evident from the figure, the user cluster is not confined to the PV cell, which is an unintended consequence of ignoring correlation in the user locations. This can also be verified by comparing the distributions of the ordered distances used in \cite{ali2019downlinkLetter,salehi2018meta,salehi2018accuracy} with those obtained from the simulations. This comparison is given in Fig. \ref{fig:illustration_PriorNOMAModel} (Right) wherein $\tilde{R}_n$ is the link distance of the $n$-th closest user from the BS. It should be noted here that {the user clustering described above is different} from the {\em geographical} clustering of users and BSs that have been recently modeled using PCPs, e.g., see \cite{Mehrnaz2018,Praful_UserClustering}. While the PCPs are useful in capturing the spatial coupling in the locations of the users and BSs, they do not necessarily partition users into different groups based on their QoS as done in the above scheme. Therefore, the proposed technique to form user clusters can apply to more general studies that may require partitioning the users based on their QoS experiences, such as soft frequency reuse \cite{dominique2010self}. Now, we discuss the downlink NOMA transmission for a randomly selected pair of the CC and CE users in the typical cell in the following subsection. \subsection{Downlink NOMA Transmission based on the Proposed User Pairing} Each BS is assumed to transmit a signal superimposed with two layers corresponding to the messages for the CC and CE users forming a user cluster. 
Henceforth, the layers intended for the CC and CE users are referred to as the $\mathtt{L}_c$ and $\mathtt{L}_e$ layers, respectively. The $\mathtt{L}_c$ and $\mathtt{L}_e$ layers are encoded at power levels of $\theta P$ and $(1-\theta)P$, respectively, where $P$ is the transmission power per RB and $\theta \in(0,1)$. Without loss of generality, we assume $P=1$ (since we ignore thermal noise). Usually, NOMA allocates more power to the weaker user so that it receives smaller intra-cell interference power compared to the desired signal power. Thus, the CC user first decodes the $\mathtt{L}_e$ layer while treating the power assigned to the $\mathtt{L}_c$ layer as interference. After successfully decoding the $\mathtt{L}_e$ layer, the CC user cancels its signal using SIC from the received signal and then decodes the $\mathtt{L}_c$ layer. The $\mathtt{SIR}$s of the typical CC user at $\mathbf{y}\in V_{oc}$ for decoding the $\mathtt{L}_c$ and $\mathtt{L}_e$ layers are \begin{align} \mathtt{SIR}_{e}&=\frac{h_{o}R_o^{-\alpha}(1-\theta)}{\theta h_{o}R_o^{-\alpha}+I_{\Phi}}~ \text{and~} \mathtt{SIR}_{c}=\frac{h_{o}R_o^{-\alpha}\theta}{I_{\Phi}}, \label{eq:SIR} \end{align} respectively, where $I_{\Phi}=\sum_{\mathbf{x}\in\Phi} h_{\mathbf{x}}\|\mathbf{x}-\mathbf{y}\|^{-\alpha} $ represents the aggregate inter-cell interference resulting from the set of BSs in $\Phi$ and $h_{\mathbf{x}}\sim \exp(1)$ are i.i.d. fading channel gains. Besides, the CE user decodes {the} $\mathtt{L}_e$ layer while treating the power assigned to the $\mathtt{L}_c$ layer as interference. Thus, the effective $\mathtt{SIR}$ of the typical CE user at $\mathbf{y}\in V_{oe}$ is also $\mathtt{SIR}_{e}$ {as} given in \eqref{eq:SIR}. Note that we assumed a full-buffer system in \eqref{eq:SIR}, which is a common assumption in the stochastic geometry literature and is quite reasonable for NRT services for which the objective is to maximize the {sum-rate}. 
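As a quick illustration of \eqref{eq:SIR} and the SIC decoding order, the following helper (our own, with arbitrary example values) computes the two SIRs at a CC user from its serving gain and distance and a list of interferer (gain, distance) pairs:

```python
def noma_sirs(h_serving, r_o, interferers, alpha, theta):
    """SIRs at the CC user for the L_c and L_e layers (illustrative helper).

    The L_e layer is decoded first, treating the theta fraction of the
    serving power as intra-cell interference; after SIC cancels L_e, the
    L_c layer sees only the inter-cell interference I_Phi.
    """
    s = h_serving * r_o ** (-alpha)                        # serving received power
    i_phi = sum(h * d ** (-alpha) for h, d in interferers)  # inter-cell interference
    sir_e = s * (1.0 - theta) / (theta * s + i_phi)
    sir_c = s * theta / i_phi
    return sir_c, sir_e

# Example: one interferer at twice the serving distance, alpha = 4, theta = 0.25
sir_c, sir_e = noma_sirs(1.0, 1.0, [(1.0, 2.0)], alpha=4.0, theta=0.25)
```

Raising $\theta$ trades the two layers off against each other: $\mathtt{SIR}_c$ increases while $\mathtt{SIR}_e$ decreases, consistent with allocating more power to the weaker user's layer.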
We will further argue in Subsection \ref{subsec:SchedulinThroughputDelay} that this simple setting also provides useful bounds on the delay-centric metrics. \subsection{Meta Distribution for the Downlink NOMA System} \label{subsec:MetaDistribution} The success probabilities for the CC and CE users are defined as the probabilities that the typical CC and CE users can decode their intended messages. {These success probabilities provide the mean performance of the typical CC and CE users in the network. However, they do not give any information on the disparity in the link performance of the CC and CE users spread across the network.} For that purpose, the distribution of the conditional success probability (conditioned on the BS locations w.r.t. the CC/CE user location) can be useful. The distribution of the conditional success probability is referred to as the {\em meta distribution} \cite{Martin2016Meta}. The meta distribution for the CC/CE user can be used to answer questions like ``what percentage of the CC/CE users can establish their links with the transmission reliability above a predefined threshold for a given $\mathtt{SIR}$ threshold?''. Building on the definition of the meta distribution in \cite{Martin2016Meta}, we define the meta distributions for the CC and CE users under NOMA as below. \begin{definition} The meta distribution of the typical CC user's success probability is defined as \begin{equation} \bar{F}_{\text{c}}(\beta_c,\beta_e;x)=\mathbb{P}[p_c(\beta_c,\beta_e\mid\mathbf{y}, \Phi)>x], \end{equation} and the meta distribution of the typical CE user's success probability is defined as \begin{equation} \bar{F}_{\text{e}}(\beta_e;x)=\mathbb{P}[p_e(\beta_e\mid\mathbf{y},\Phi)>x], \end{equation} where $x\in[0,1]$, and $\beta_c$ and $\beta_e$ are the $\mathtt{SIR}$ thresholds corresponding to the $\mathtt{L}_c$ and $\mathtt{L}_e$ layers, respectively. 
$p_c(\beta_c,\beta_e\mid\mathbf{y},\Phi)=\mathbb{P}[\mathtt{SIR}_{c}\geq \beta_c,\mathtt{SIR}_{e}\geq \beta_e\mid\mathbf{y}, \Phi]$ and $p_e(\beta_e\mid \mathbf{y},\Phi)=\mathbb{P}[\mathtt{SIR}_{e}\geq \beta_e\mid \mathbf{y},\Phi]$ are the success probabilities of the typical CC and CE users conditioned on their locations at $\mathbf{y}$ and the point process $\Phi$ of the interfering BSs, respectively. \end{definition} \subsection{Traffic Modeling, Scheduling, and Performance Metrics} \label{subsec:SchedulinThroughputDelay} For the above setting, we will perform a comprehensive load-aware performance analysis of the CC/CE users under the NOMA system. For this, we consider {\em random scheduling} wherein each BS randomly selects a pair of CC and CE users within its PV cell for the NOMA transmission in a given time slot. For the NRT services, our objective is to maximize the sum-rate. Since the network is assumed to be static, the load-aware transmission rate of the CC/CE user at $\mathbf{y}$ depends on its {\em scheduling probability} and {\em successful transmission probability}, both conditioned on $\Phi$. {As already implied above, each CC/CE user within a given cell is equally likely to be scheduled in each time slot. Besides, the $\mathtt{SIR}$s experienced by the CC/CE user at $\mathbf{y}$ are i.i.d.~across the time slots for given $\Phi$.} Each BS transmits a signal superimposed with the $\mathtt{L}_c$ and $\mathtt{L}_e$ layers, which are encoded at rates $\mathtt{B}\log_2(1+\beta_c)$ and $\mathtt{B}\log_2(1+\beta_e)$ bits/sec, respectively, where $\mathtt{B}$ is the channel bandwidth. Without loss of generality, here onwards we assume $\mathtt{B}=1$. Therefore, according to Shannon's capacity law, the user requires $\mathtt{SIR}_c$ and $\mathtt{SIR}_e$ above thresholds $\beta_c$ and $\beta_e$, respectively, to successfully decode these layers. 
Hence, for given $\Phi$, the achievable transmission rates of the CC and CE users located at $\mathbf{y}\in V_o$ under random scheduling respectively become \begin{equation} \begin{split} R_c(\mathbf{y},\Phi)&=\frac{p_c(\beta_c,\beta_e\mid\mathbf{y},\Phi)}{N_{oc}} \log_2(1+\beta_c)\\ \text{and~}R_e(\mathbf{y},\Phi)&=\frac{p_e(\beta_e\mid\mathbf{y},\Phi)}{N_{oe}} \log_2(1+\beta_e). \end{split}\label{eq:RateCondPhi_CC_CE} \end{equation} \indent The characterization of the typical CC/CE user's success probability under the full-buffer system is also useful in obtaining an upper bound on its delay performance for more general traffic patterns. This can subsequently be used to derive lower bounds on the $\mathtt{SEC}$ for the given RT service. {The exact delay analysis for cellular networks is known to be challenging because of the coupled queues at different BSs. Generally, the performance of coupled queues is studied by employing meaningful {\em modifications} to the system \cite{Rao_1988}; see \cite{bonald2004wireless,Haenggi_Stability,Zhong_SpatoiTemporal} for a small subset of relevant works in the context of cellular networks.} {For a modified system, the general approach includes the full-buffer assumption for the interfering links. As a result, the queue associated with the link of interest operates independently of the status of the queues associated with the interfering links. Note that such a modified system is consistent with our system set-up that is assumed from the very beginning.} Since this effectively underestimates the success probability, it provides an upper bound on the packet transmission delay. In the same spirit, this paper presents an upper bound on the distribution of the conditional mean delay of the typical link, which is tighter for higher load scenarios. {One can, of course, determine the mean delay of the typical link by adopting more sophisticated analyses, such as the mean cell approach \cite{Bartllomiej2016}. 
However, the contributions of this paper revolve around accurate downlink NOMA analysis using a new user pairing scheme, because of which these other approaches are out of the scope of the current paper. } {We assume that the typical CC/CE user has a dedicated queue of infinite length which is placed at its serving BS. The packet arrival process of the CC/CE service is assumed to follow the Bernoulli distribution with {a} mean of $\varrho_c$/$\varrho_e$ packets per slot. The packet sizes of the CC and CE services are considered to be equal to $\mathtt{TB}\log_2(1+\beta_c)$ and $\mathtt{TB}\log_2(1+\beta_e)$ bits, respectively, where $\mathtt{T}$ is the slot duration. } Note that the generalized values of $\mathtt{SIR}$ thresholds $\beta_c$ and $\beta_e$ facilitate the choice of selecting CC and CE services with different packet sizes. {Thus, the successful packet transmission rates (in packets per slot) for the typical CC and CE users at $\mathbf{y}$ given $\Phi$ respectively become } \begin{equation} \begin{split} \mu_{c}(\mathbf{y},\Phi) &=\frac{p_c(\beta_c,\beta_e\mid\mathbf{y},\Phi)}{N_{oc}} \\ \text{ and~} \mu_{e}(\mathbf{y},\Phi) &= \frac{p_e(\beta_e\mid\mathbf{y},\Phi)}{N_{oe}}. \end{split}\label{eq:DelayCondPhi_CC_CE} \end{equation} \indent Besides, we also analyze the load-aware performances of the CC and CE users under the above discussed services for the OMA system. {For the OMA system, we consider that each BS schedules one of its associated CC (CE) users that is chosen uniformly at random in a given time slot if $r\leq \eta$ ($r>\eta$), where $r\sim U([0,1])$ is generated independently across the time slots.} The parameter $\eta$ allows one to control the frequency of scheduling of the CC and CE users in order to meet their QoS requirements. The CC/CE load of the typical cell, and hence the scheduling probability of the typical CC/CE user, depends on $\Phi$ (see \eqref{eq:CC_CE_pps}). 
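The $\eta$-based OMA scheduling rule above is straightforward to simulate. The sketch below (our own, with arbitrary example loads) estimates the per-slot scheduling probability of one tagged CC user, which should approach $\eta/N_{oc}$:

```python
import random

def schedule_slot(n_cc, n_ce, eta, rng):
    """One OMA slot: serve a uniformly chosen CC user w.p. eta, else a CE user."""
    if rng.random() <= eta:
        return ('cc', rng.randrange(n_cc))
    return ('ce', rng.randrange(n_ce))

rng = random.Random(4)
n_cc, n_ce, eta, slots = 4, 6, 0.3, 200000     # hypothetical cell loads
hits = sum(schedule_slot(n_cc, n_ce, eta, rng) == ('cc', 0) for _ in range(slots))
p_hat = hits / slots    # tagged CC user's scheduling probability, about eta / n_cc
```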
Thus, from \eqref{eq:RateCondPhi_CC_CE}-\eqref{eq:DelayCondPhi_CC_CE}, it is quite evident that the exact analysis requires the joint statistical characterization of the success probability and scheduling probability. However, such joint characterization is challenging as the distribution (even the moments) of the area of the PV cell $V_o$ conditioned on $\mathbf{y}\in V_o$ is difficult to obtain. Hence, similar to \cite{Zhong_SpatoiTemporal,Priyo2019FPR}, we adopt the following reasonable assumption in our analysis. \begin{assumption} \label{assumption:Independence_PVCellArea_SuccessProb} We assume that the CC/CE load (or, the scheduling probability) and the successful transmission probability observed by the typical CC/CE user are independent. \end{assumption} The numerical results presented in Section \ref{sec:NumericalResults} will demonstrate the accuracy of this assumption for the analysis of the metrics discussed above. \section{Meta Distribution Analysis for the CC and CE users} \label{sec:MetaDistributionsAnalysis} The main goal of this Section is to present the downlink meta distribution analysis for the NOMA and OMA systems. As stated already in Section \ref{subsec:Contribution}, we characterize the performance of the {\em typical cell} which departs significantly from the standard stochastic geometry approach of analyzing the performance of the typical user. The key intermediate step in the meta distribution analysis is the joint characterization of the service link distance $R_o=\|\mathbf{y}\|$, where $\mathbf{y}\sim U(V_o)$, and the point process $\Phi$ of the interfering BSs. Hence, to enable the analysis of the meta distributions of the CC and CE users in the typical cell, we require the joint characterizations of $R_o$ and $\Phi$ under the conditions of $R_o\leq R_d\tau$ and $R_o>R_d\tau$. For this, we first determine the marginal and joint probability density functions ($\mathtt{pdf}$s) of $R_o$ and $R_d$ for the CC and CE users. 
However, given the complexity of the analysis of the r.v. $R_o$ \cite{PraPriHar}, it is reasonable to expect that the exact joint characterization of $R_o$ and $R_d$ is equally, if not more, challenging. The marginal distribution of $R_o$ is generally approximated using the contact distribution {with its density adjusted by} a correction factor (c.f.) to maintain tractability \cite{Haenggi2017}. Thus, we approximate the joint $\mathtt{pdf}$ of $R_o$ and $R_d$ using the joint $\mathtt{pdf}$ of the distances to the two nearest points of a PPP as \begin{align} f_{R_o,R_d}(r_o,r_d)&=(2\pi\rho\lambda)^2r_or_d\exp(-\pi\rho\lambda r_d^2), \label{eq:pdf_RoRd} \end{align} for $r_d\geq r_o\geq 0$, where $\rho=\frac{9}{7}$ is the c.f.~(see \cite{Praful_TypicalCell} for more details). \begin{lemma} \label{lemma:DistanceDistributions} The probabilities that a user uniformly distributed in the typical cell is the CC user and the CE user are equal to $\tau^2$ and $1-\tau^2$, respectively. The cumulative distribution function ($\mathtt{CDF}$) of $R_o$ and the $\mathtt{CDF}$ of $R_d$ conditioned on $R_o$ for the CC user are given by \begin{equation} F^{\text{c}}_{R_o}(r_o)=1-\exp\left(-\pi\rho\lambda {r_o^2}/{\tau^2}\right),\label{eq:CDFRo_CC1} \end{equation} for $r_o>0$, and \begin{equation} F^{\text{c}}_{R_d\mid R_o}\left(r_d\mid r_o\right)=1-\exp\left(-\pi\rho\lambda \left(r_d^2-{r_o^2}/{\tau^2}\right)\right),\label{eq:CDFRo_CC2} \end{equation} for $r_d>\frac{r_o}{\tau}$, respectively. The joint $\mathtt{pdf}$ of $R_o$ and $R_d$ for the CC user is given by \begin{equation} f^{\text{c}}_{R_o,R_d}(r_o,r_d)=\frac{(2\pi\rho\lambda)^2}{\tau^2}r_or_d\exp\left(-\pi\rho\lambda r_d^2\right), \label{eq:pdf_RoRd_CC} \end{equation} for $r_d>\frac{r_o}{\tau}$ and $r_o>0$.
The $\mathtt{CDF}$ of $R_o$ and the $\mathtt{CDF}$ of $R_d$ conditioned on $R_o$ for the CE user are respectively given by \begin{equation} F^{\text{e}}_{R_o}(r_o)=1-\frac{1-\tau^2\exp\left(-\pi\rho\lambda{r_o^2}(\tau^{-2}-1)\right)}{(1-\tau^2)\exp\left(\pi\rho\lambda{r_o^2}\right)},\label{eq:CDFRo_CE} \end{equation} for $r_o> 0$, and $F^{\text{e}}_{R_d\mid R_o}\left(r_d\mid r_o\right)$ \begin{align} =\begin{cases}\frac{1-\exp(-\pi\rho\lambda(r_d^2-r_o^2))}{1-\exp(-\pi\rho\lambda r_o^2(\tau^{-2}-1))},~ &\text{for}~\frac{r_o}{\tau}> r_d\geq r_o,\\ 1, &\text{for}~r_d\geq \frac{r_o}{\tau}. \end{cases} \label{eq:CDFRdCondRo_CE} \end{align} The joint $\mathtt{pdf}$ of $R_o$ and $R_d$ for the CE user is given by \begin{equation} f^{\text{e}}_{R_o,R_d}(r_o,r_d)=\frac{(2\pi\rho\lambda)^2}{1-\tau^2}r_or_d\exp\left(-\pi\rho\lambda r_d^2\right), \label{eq:pdf_RoRd_CE} \end{equation} for $\frac{r_o}{\tau}\geq r_d>r_o$ and $r_o>0$. \end{lemma} \begin{proof} Please refer to Appendix A. \end{proof} \subsection{Meta Distribution for CC and CE users} \label{sec:MetaDistributions_NOMA_OMA} {The CC user needs to decode both the $\mathtt{L}_c$ and $\mathtt{L}_e$ layers for the successful reception of its own message. Thus, the successful transmission event for the CC user becomes} \begin{align} {\mathcal{E}}_c&=\{\mathtt{SIR}_{c}>\beta_c\}\cap\{\mathtt{SIR}_{e}>\beta_e\}\nonumber\\ &=\left\{h_o>R_o^\alpha I_\Phi\chi_c\right\}, \label{eq:SuccessEevent_CC} \end{align} where $\chi_c=\max\left\{\frac{\beta_c}{\theta},\frac{\beta_e}{1-\theta(1+\beta_e)}\right\}$. On the other hand, the CE user decodes its message while treating the signal intended for the CC user as interference. Thus, the successful transmission event for the CE user is given by \begin{align} {\mathcal{E}}_e&=\{\mathtt{SIR}_{e}>\beta_e\}=\left\{h_{o}>R_o^\alpha I_{\Phi}\chi_e\right\}, \label{eq:SuccessEevent_CE} \end{align} where $\chi_e=\frac{\beta_e}{1-\theta(1+\beta_e)}$.
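Since $\chi_c=\max\{\beta_c/\theta,\chi_e\}$ and $\chi_e\geq\beta_e$ (the denominator $1-\theta(1+\beta_e)$ lies in $(0,1]$ for $\theta<(1+\beta_e)^{-1}$), the success events are nested, i.e., ${\mathcal{E}}_c\subseteq{\mathcal{E}}_e$: whenever the CC user succeeds, the $\mathtt{L}_e$ layer is decodable as well. A minimal numerical sketch of this ordering (the sampled values of $\theta$, $\beta_c$, and $\beta_e$ below are illustrative, not taken from the paper):

```python
import numpy as np

def chi_e(theta, beta_e):
    # Effective SIR threshold of the L_e layer; valid for theta < 1/(1+beta_e)
    return beta_e / (1.0 - theta * (1.0 + beta_e))

def chi_c(theta, beta_c, beta_e):
    # The CC user must decode both layers, so its threshold is the larger of the two
    return max(beta_c / theta, chi_e(theta, beta_e))

beta_c, beta_e = 10 ** 0.0, 10 ** -0.3    # illustrative: (0, -3) dB
for theta in np.linspace(0.05, 0.6, 12):   # keep theta below 1/(1+beta_e) ~ 0.666
    assert chi_c(theta, beta_c, beta_e) >= chi_e(theta, beta_e) >= beta_e
```

The assertions confirm the threshold ordering $\chi_c\geq\chi_e\geq\beta_e$ over the whole admissible range of $\theta$.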
Since it is difficult to derive the meta distribution directly \cite{Martin2016Meta}, we first derive its moments in the following theorem and then use them to approximate the meta distribution. \begin{theorem} \label{thm:MetaDisMoment} The $b$-th moments of the meta distributions of conditional success probability for the typical CC and CE users under NOMA respectively are \begin{equation} \begin{split} M_b^{\text{c}}(\chi_c)&=\frac{\rho^2}{\tau^2}\int\limits_0^{\tau^2} \frac{\left(\rho + v{\mathcal{Z}}_b\left(\chi_c,v\right)\right)^{-2}}{(1+\chi_cv^{\frac{1}{\delta}})^b}{\rm d}v,\\ \text{and}~M_b^{\text{e}}(\chi_e)&=\frac{\rho^2}{1-\tau^2}\int\limits_{\tau^2}^1 \frac{\left(\rho + v{\mathcal{Z}}_b\left(\chi_e,v\right)\right)^{-2}}{(1+\chi_e v^\frac{1}{\delta})^b}{\rm d}v,\label{eq:MetaDisMoment_CC_CE} \end{split} \end{equation} where~ ${\mathcal{Z}}_b(\chi,a)=\chi^\delta\int_{\chi^{-\delta} a^{-1}}^\infty \left[1-({1+t^{-\frac{1}{\delta}}})^{-b}\right]{\rm d}t$ and \\$\delta=\frac{2}{\alpha}$. \end{theorem} \begin{proof} Please refer to Appendix B. \end{proof} In OMA, each BS serves its associated users using orthogonal RBs, which means that there is no intra-cell interference. Thus, OMA provides better success probabilities for the CC and CE users compared to NOMA. However, the orthogonal RB allocation reduces the transmission instances for the CC and CE users, which in turn affects their transmission rates negatively. The successful transmission events for the CC and CE users under OMA are respectively given by \begin{align} \tilde{{\mathcal{E}}}_c&=\{h_o>R_o^\alpha \beta_c I_\Phi\} \text{ and } \tilde{{\mathcal{E}}}_e=\{h_o>R_o^\alpha \beta_e I_\Phi\}. \label{eq:SuccessEevent_CC_CE_OMA} \end{align} The following corollary presents the $b$-th moments of the meta distributions for the OMA case.
\begin{cor} \label{cor:OrthogonalTransmission} The $b$-th moments of the meta distributions of conditional success probability for the typical CC and CE users under OMA respectively are \begin{align} \begin{split} \tilde{M}_b^{\text{c}}(\beta_c)&=\frac{\rho^2}{\tau^2}\int\limits_0^{\tau^2} \frac{\left(\rho + v{\mathcal{Z}}_b\left(\beta_c,v\right)\right)^{-2}}{(1+\beta_cv^{\frac{1}{\delta}})^b}{\rm d}v,\\ \text{and}~\tilde{M}_b^{\text{e}}(\beta_e)&=\frac{\rho^2}{1-\tau^2}\int\limits_{\tau^2}^1 \frac{\left(\rho + v{\mathcal{Z}}_b\left(\beta_e,v\right)\right)^{-2}}{(1+\beta_e v^{\frac{1}{\delta}})^b}{\rm d}v, \end{split}\label{eq:MetaDisMoment_CC_CE_OMA} \end{align} where~ ${\mathcal{Z}}_b(\beta,a)=\beta^\delta\int_{\beta^{-\delta} a^{-1}}^\infty \left[1-({1+t^{-\frac{1}{\delta}}})^{-b}\right]{\rm d}t$. \end{cor} \begin{proof} Using the definitions in \eqref{eq:SuccessEevent_CC_CE_OMA} and following the steps in Appendix B, we obtain \eqref{eq:MetaDisMoment_CC_CE_OMA}. \end{proof} Fig. \ref{fig:Moments_NOMA_OMA_BetaApp} verifies that the means and variances of the meta distributions of the CC and CE users under NOMA (Left) and OMA (Middle) derived in Theorem \ref{thm:MetaDisMoment} and Corollary \ref{cor:OrthogonalTransmission}, respectively, closely match the simulation results. The moments for the CE user monotonically decrease with $\theta$ as the interference from the $\mathtt{L}_c$ layer increases with $\theta$. However, the performance trend of the moments for the CC user w.r.t. $\theta$ is different. {This is because $\theta$ affects the probabilities of decoding the $\mathtt{L}_c$ and $\mathtt{L}_e$ layers at the CC user differently.} While increasing $\theta$ makes it difficult to decode the $\mathtt{L}_e$ layer, it makes it easier to decode the $\mathtt{L}_c$ layer.
As a result, the impact of $\mathtt{L}_c$ layer decoding is dominant for $\theta\leq\hat\theta$ ($=0.5$ for $(\beta_c,\beta_e)=(0,-3)$~dB) and the impact of $\mathtt{L}_e$ layer decoding is dominant for $\theta>\hat\theta$, where $\hat\theta$ will be defined in Section \ref{sec:CSEoptimization}. \begin{figure*} \centering \hspace{-.35cm} \includegraphics[width=.33\textwidth]{Moments_MetaDistributionNOMA.pdf} \includegraphics[width=.33\textwidth]{Moments_MetaDistributionOMA.pdf} \includegraphics[width=.33\textwidth]{BetaApproximation.pdf} \caption{Moments for the CC and CE users under NOMA (Left) and OMA (Middle), and the beta approximations (Right) for $\tau=0.7$, $\alpha=4$, $\lambda=1$, and $(\beta_c,\beta_e)=(0,-3)$ dB. The solid and dashed curves correspond to the analytical results and the markers correspond to the simulation results.} \label{fig:Moments_NOMA_OMA_BetaApp} \end{figure*} \subsection{Meta Distribution and its Approximation} \label{subsec:BetaApproximation} Using $M_{b}^{c}$ and $M_{b}^{e}$ derived in Theorem \ref{thm:MetaDisMoment} and the Gil-Pelaez inversion theorem \cite{Gil1951}, the meta distributions for the typical CC and CE users under NOMA can be obtained. However, the evaluation of meta distributions using the Gil-Pelaez inversion is computationally complex. Thus, similar to \cite{Martin2016Meta}, we approximate the meta distributions of the CC and CE users with beta distributions by matching the moments as below \begin{equation} \begin{split} \bar{F}_\text{c}(\chi_c;x)&\approx 1-I(x;\kappa_{1c},\kappa_{2c})\\ \text{ and } \bar{F}_\text{e}(\chi_e;x)&\approx 1-I(x;\kappa_{1e},\kappa_{2e}), \end{split} \label{eq:BetaApp} \end{equation} respectively, where $I(x;a,b)$ is the regularized incomplete beta function and $$\kappa_{1s}= \frac{M_1^{s}\kappa_{2s}}{1-M_1^{s}} \text{~~and~~} \kappa_{2s}=\frac{(M_1^{s}-M_2^{s})(1-M_1^{s})}{M_2^{s}-(M_1^{s})^2}$$ for $s\in\{c,e\}$.
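The moments in Theorem \ref{thm:MetaDisMoment} and the moment matching in \eqref{eq:BetaApp} are straightforward to evaluate by direct quadrature. The sketch below (function names and parameter values are our own, chosen for illustration) computes $M_1^c$ and $M_2^c$ for the CC user and then the matched beta parameters; by construction, the mean of the fitted beta distribution equals $M_1^c$:

```python
import numpy as np
from scipy.integrate import quad

delta = 0.5          # delta = 2/alpha for alpha = 4
rho_cf = 9.0 / 7.0   # correction factor rho
tau, chi = 0.7, 1.0  # illustrative values (chi_c at 0 dB)

def Zb(b, chi, a):
    # Z_b(chi, a) = chi^delta * int_{chi^{-delta}/a}^infty [1 - (1 + t^{-1/delta})^{-b}] dt
    f = lambda t: 1.0 - (1.0 + t ** (-1.0 / delta)) ** (-b)
    val, _ = quad(f, chi ** (-delta) / a, np.inf)
    return chi ** delta * val

def M_cc(b):
    # b-th moment of the conditional success probability of the CC user
    g = lambda v: (rho_cf + v * Zb(b, chi, v)) ** -2 / (1.0 + chi * v ** (1.0 / delta)) ** b
    val, _ = quad(g, 0.0, tau ** 2)
    return rho_cf ** 2 / tau ** 2 * val

M1, M2 = M_cc(1), M_cc(2)
# Beta-approximation parameters by moment matching
kappa_2c = (M1 - M2) * (1.0 - M1) / (M2 - M1 ** 2)
kappa_1c = M1 * kappa_2c / (1.0 - M1)
```

A useful built-in consistency check is $M_2^c\leq M_1^c$ and $M_2^c\geq (M_1^c)^2$, which any valid pair of moments of a $[0,1]$-valued random variable must satisfy.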
Similarly, the beta approximations for the meta distributions under OMA can be obtained using the moments given in Corollary \ref{cor:OrthogonalTransmission}. We denote the parameters of the beta approximation for the CC (CE) user under OMA by $\tilde{\kappa}_{1c}$ and $\tilde{\kappa}_{2c}$ ($\tilde{\kappa}_{1e}$ and $\tilde{\kappa}_{2e}$). Fig. \ref{fig:Moments_NOMA_OMA_BetaApp} (Right) shows that the beta distributions closely approximate the meta distributions of the CC and CE users for both NOMA and OMA. The proposed beta approximations can be used for the system-level analysis without having to perform the computationally complex Gil-Pelaez inversion. {The figure shows that the percentage of the CE users meeting the link reliability (i.e., conditional success probability) drops with increasing $\theta$ whereas the percentage of CC users achieving the link reliability increases with $\theta$.} \section{Throughput and Delay Analysis for the CC and CE Users} \label{sec:ThroughputDelayAnalysis} In this section, we characterize the conditional transmission rate (given in \eqref{eq:RateCondPhi_CC_CE}) and the conditional mean delay (given in \eqref{eq:DelayCondPhi_CC_CE}) of the CC/CE users under NRT and RT services, respectively. For this, we first derive the distributions of the CC and CE loads in the following subsection. \subsection{Distributions of the CC and CE Loads} \label{subsec:AreaDistributions} The CC and CE loads of the typical cell $V_o$, i.e., $N_{oc}$ and $N_{oe}$, depend on the areas of the CC and CE regions, i.e., $|V_{oc}|$ and $|V_{oe}|$. {It is challenging to derive the area distributions of a random set directly.
Thus, we first derive the exact first two moments of these areas in the following lemma and then use them to approximate the area distributions of the CC and CE regions.} \begin{lemma} \label{lemma:Area_CC_CE_Region} For a given $\tau$, the mean areas of the CC and CE regions are {\small \begin{align} {\mathbb{E}}[|{V}_{oc}|]&={\tau^2}{\lambda^{-1}}\text{~and~}{\mathbb{E}}[|{V}_{oe}|]={(1-\tau^2)}{\lambda^{-1}},\label{eq:Mean_CCCERegion} \end{align} } respectively, and the second moments of the areas of the CC and CE regions are {\small \begin{equation} \begin{split} {\mathbb{E}}[|V_{oc}|^2]=4\pi&\int_0^\pi\int_0^\infty\int_0^\infty\exp\left(-\lambda U_3\right)r_1{\rm d}r_1r_2{\rm d}r_2{\rm d}u\\ \text{and}~{\mathbb{E}}[|V_{oe}|^2]&=4\pi\int_0^\pi\int_0^\infty {\mathcal{F}}(r_2,u)r_2{\rm d}r_2{\rm d}u, \end{split}\label{eq:SecondMoment_CCCERegion} \end{equation}} respectively, where {\small\begin{align*} {\mathcal{F}}&(r_2,u)=\int_{\mathtt{D}{(r_2,u)}}\hspace{-.5cm}\left(\exp(-\lambda U_1)\mathbbm{1}_{r_1\leq r_2}+\exp(-\lambda U_2)\mathbbm{1}_{r_2<r_1}\right)r_1{\rm d}r_1\\ &+\int_{\mathbb{R}\setminus\mathtt{D}{(r_2,u)}}\hspace{-.5cm}([\exp(-\lambda {U}_o)-\exp(-\lambda( {U}_1+ {U}_2- {U}_3))]+\\ &~~~~[\exp(-\lambda( {U}_2- {U}_3))-1][\exp(-\lambda {U}_1)-\exp(-\lambda {U}_3)])r_1{\rm d}r_1, \end{align*}} {\small $\mathtt{D}(r_2,u)=\{r_1\in\mathbb{R}: d\leq \tau^{-1}|r_1-r_2|\}$,\\ $d=(r_1^2+r_2^2-2r_1r_2\cos(u))^\frac{1}{2}$, $u_o=u$,\\ $U_o=U(r_1,r_2,u_o)$, $U_3=U(r_1\tau^{-1},r_2\tau^{-1},u_3)$,\\ $U_1=U(r_1\tau^{-1},r_2,u_1)$ if $r_1\tau^{-1}<d+r_2$ otherwise $U_1=\pi r_1^2\tau^{-2}$, \\ $U_2=U(r_1,r_2\tau^{-1},u_2)$ if $r_2\tau^{-1}<d+r_1$ otherwise $U_2=\pi r_2^2\tau^{-2}$,\\ $u_1=\arccos\left((\tau^{-1}-\tau)\frac{r_1}{2r_2}+\tau\cos(u)\right)$,\\ $u_2=\arccos\left((\tau^{-1}-\tau)\frac{r_2}{2r_1}+\tau\cos(u)\right)$, \\ $u_3=\arccos\left((1-\tau^2)\frac{r_1^2+r_2^2}{2r_1r_2}+\tau^2\cos(u)\right)$,\\ $w(r_1,r_2,u)=\arccos\left(d^{-1}{(r_1-r_2\cos(u))}\right)$, and \\
$U(r_1,r_2,u)=r_1^2\left(\pi-w(r_1,r_2,u) + \frac{\sin\left(2w(r_1,r_2,u)\right)}{2}\right)$\\ \hspace*{17mm}$+~r_2^2\left(\pi-w(r_2,r_1,u) + \frac{\sin\left(2w(r_2,r_1,u)\right)}{2}\right)$.} \end{lemma} \begin{proof} Please refer to Appendix C. \end{proof} Now, we approximate the distributions of the CC and CE areas. {The area of the PV cell closely follows the gamma distribution \cite{tanemura2003statistical}, and the sum of two independent gamma random variables with a common scale parameter is also gamma distributed. Therefore, the gamma distribution is a natural choice for these approximations, as the correlation between the CC and CE areas is not too high.} Thus, the $\mathtt{pdf}$s of the CC and CE areas are, respectively, approximated as \begin{equation} \begin{split} f_{|V_{oc}|}(a)&=\frac{\gamma_{1c}^{\gamma_{2c}}}{\Gamma(\gamma_{2c})}a^{\gamma_{2c}-1}\exp(-\gamma_{1c} a)\\ \text{and } f_{|V_{oe}|}(a)&=\frac{\gamma_{1e}^{\gamma_{2e}}}{\Gamma(\gamma_{2e})}a^{\gamma_{2e}-1}\exp(-\gamma_{1e} a), \end{split} \label{eq:PDF_CCCEarea} \end{equation} {where for~} $s\in\{c,e\}$ {~we have~}$$\gamma_{2s}=\gamma_{1s}\mathbb{E}[|V_{os}|] \text{~~and~~}\gamma_{1s}=\frac{\mathbb{E}[|V_{os}|]}{\mathbb{E}[|V_{os}|^2]-\mathbb{E}[|V_{os}|]^2}.$$ Fig. \ref{fig:AreaDistribution} provides the visual verification of the gamma approximations given in \eqref{eq:PDF_CCCEarea}. Now, using \eqref{eq:PDF_CCCEarea}, we obtain the distributions of the CC and CE loads in the following lemma. \begin{figure}[h] \centering\vspace{-7mm} \includegraphics[width=.48\textwidth]{GammaApproximation_CC_CE_Area.pdf}\vspace{-3mm} \caption{Gamma approximation of the area distributions of the CC and CE regions.
The solid and dashed curves correspond to the gamma approximations and the markers correspond to the simulation results.}\vspace{-4mm} \label{fig:AreaDistribution} \end{figure} \begin{lemma} \label{lemma:CC_CE_Loads} The probability mass function ($\mathtt{pmf}$) of the number of CC users, i.e., $N_{oc}$, is $\mathbb{P}[N_{oc}=n]=$ \begin{equation} \frac{\nu^n\gamma_{1c}^{\gamma_{2c}}}{n!\Gamma(\gamma_{2c})}\int_0^\infty a^{n+\gamma_{2c}-1}\frac{\exp(-(\nu+\gamma_{1c}) a)}{1-\exp(-\nu a)}{\rm d}a, \label{eq:PMF_Noc} \end{equation} where $\gamma_{1c}$ and $\gamma_{2c}$ are given in \eqref{eq:PDF_CCCEarea}. The $\mathtt{pmf}$ of the number of CE users, i.e., $N_{oe}$, is $\mathbb{P}[N_{oe}=n]=$ \begin{equation} \frac{\nu^n\gamma_{1e}^{\gamma_{2e}}}{n!\Gamma(\gamma_{2e})}\int_0^\infty a^{n+\gamma_{2e}-1}\frac{\exp(-(\nu+\gamma_{1e}) a)}{1-\exp(-\nu a)}{\rm d}a, \label{eq:PMF_Noe} \end{equation} where $\gamma_{1e}$ and $\gamma_{2e}$ are given in \eqref{eq:PDF_CCCEarea}. \end{lemma} \begin{proof} For given $|V_{oc}|$, $N_{oc}$ follows a zero-truncated Poisson distribution with parameter $\nu|V_{oc}|$. Thus, we have \begin{align*} \mathbb{P}[N_{oc}=n]&=\mathbb{E}_{|V_{oc}|}\left[\mathbb{P}\left[N_{oc}=n\mid|V_{oc}|\right]\right]~\text{for}~n>0. \end{align*} {Now, taking the expectation over the $\mathtt{pdf}$ of $|V_{oc}|$ given in \eqref{eq:PDF_CCCEarea} provides the $\mathtt{pmf}$ of $N_{oc}$ as given in \eqref{eq:PMF_Noc}.} Similarly, the $\mathtt{pmf}$ of $N_{oe}$ given in \eqref{eq:PMF_Noe} follows using the $\mathtt{pdf}$ of $|V_{oe}|$ given in \eqref{eq:PDF_CCCEarea}.
\end{proof} \subsection{Transmission Rates of the CC and CE Users} \label{subsec:TransmissionRate} In this subsection, we derive the distributions of the conditional transmission rates of the CC and CE users under random scheduling which are, respectively, defined as \begin{equation} \begin{split} {\mathcal{R}}_c(\mathtt{r_{c}};\chi_c)&=\mathbb{P}[R_c(\mathbf{y},\Phi)\leq \mathtt{r_{c}}]\\ \text{~and~}{\mathcal{R}}_e(\mathtt{r_{e}};\chi_e)&=\mathbb{P}[R_e(\mathbf{y},\Phi)\leq \mathtt{r_{e}}], \end{split}\label{eq:RateOutage_CC_CE} \end{equation} where $R_c(\mathbf{y},\Phi)$ and $R_e(\mathbf{y},\Phi)$ are given in \eqref{eq:RateCondPhi_CC_CE}. {Now, using the meta distributions and the $\mathtt{pmf}$s of the cell loads, the means and the distributions of the conditional transmission rates of the CC and CE users are derived in the following theorem.} \begin{theorem} \label{thm:TransmissionRate_NOMA} The mean transmission rates of the typical CC and CE users under NOMA are { \begin{align} \begin{split} \bar{R}_{c}(\chi_c) &=\xi_{c} \log_2\left(1+\beta_c\right)M_{1}^{c}(\chi_c)\\ \text{and~} \bar{R}_{e}(\chi_e) &= \xi_{e}\log_2\left(1+\beta_e\right)M_{1}^{e}(\chi_e), \end{split} \label{eq:MeanTransmissionRate_CC_CE} \end{align}} respectively, where $M_1^{c}(\chi_c)$ and $M_1^{e}(\chi_e)$ are given in \eqref{eq:MetaDisMoment_CC_CE}, $s\in\{c,e\}$, $\xi_s=\sum_{n=1}^{\infty}\frac{1}{n}\mathbb{P}[N_{os}=n]$, and $\mathbb{P}[N_{os}=n]$ is given in Lemma \ref{lemma:CC_CE_Loads}. 
The $\mathtt{CDF}$ of the conditional transmission rates of the typical CC and CE users under NOMA are, respectively, { \begin{align} &{\mathcal{R}}_c(\mathtt{r_{c}};\chi_c)=\nonumber\\ &\mathbb{E}_{N_{oc}}\left[I\left(\min\left(\frac{\mathtt{r_{c}} N_{oc}}{\log_2(1+\beta_c)},1\right);\kappa_{1c},\kappa_{2c}\right)\right],\label{eq:RateCDF_UpperBound_CC}\\ &\text{and }{\mathcal{R}}_e(\mathtt{r_{e}};\chi_e)=\nonumber\\ &\mathbb{E}_{N_{oe}}\left[I\left(\min\left(\frac{\mathtt{r_{e}} N_{oe}}{\log_2(1+\beta_e)},1\right);\kappa_{1e},\kappa_{2e}\right)\right], \label{eq:RateCDF_UpperBound_CE} \end{align}} where $s\in\{c,e\}$, $\kappa_{1s}$ and $\kappa_{2s}$ are given in \eqref{eq:BetaApp}, and $\mathbb{P}[N_{os}=n]$ is given in Lemma \ref{lemma:CC_CE_Loads}. \end{theorem} \begin{proof} Please refer to Appendix D. \end{proof} For OMA, we consider that each BS serves its associated CC and CE users for $\eta$ and $1-\eta$ fractions of time, respectively. Note that for $\eta=\frac{|V_{oc}|}{|V_o|}$, the above OMA scheduling scheme will be almost equivalent to the random scheduling wherein the typical BS randomly schedules one of its associated users in a given time slot. \begin{cor} \label{cor:TransmissionRate_OMA} The mean transmission rates of the typical CC and CE users under OMA are { \begin{equation} \begin{split} \tilde{{R}}_{c}(\beta_c)& =\eta\xi_{c} \log_2\left(1+\beta_c\right)\tilde{M}_{1}^{c}(\beta_c)\\ \text{and~} \tilde{{R}}_{e}(\beta_e)& = (1-\eta)\xi_{e}\log_2\left(1+\beta_e\right)\tilde{M}_{1}^{e}(\beta_e), \end{split} \label{eq:MeanTransmissionRate_CC_CE_OMA} \end{equation}} respectively, where $\tilde{M}_1^{c}(\beta_c)$ and $\tilde{M}_1^{e}(\beta_e)$ are given in \eqref{eq:MetaDisMoment_CC_CE_OMA}, $s\in\{c,e\}$, $\xi_s=\sum_{n=1}^{\infty}\frac{1}{n}\mathbb{P}[N_{os}=n]$, and $\mathbb{P}[N_{os}=n]$ is given in Lemma \ref{lemma:CC_CE_Loads}.
The $\mathtt{CDF}$ of the conditional transmission rates of the typical CC and CE users under OMA respectively are { \begin{align} &\tilde{{\mathcal{R}}}_c(\mathtt{r_{c}};\beta_c)=\nonumber\\ &\mathbb{E}_{N_{oc}}\left[I\left(\min\left( \frac{\mathtt{r_{c}}N_{oc}\eta ^{-1}}{\log_2(1+\beta_c)},1\right);\tilde{\kappa}_{1c},\tilde{\kappa}_{2c}\right)\right],\\ &\text{and }\tilde{{\mathcal{R}}}_e(\mathtt{r_{e}};\beta_e)=\nonumber\\ &\mathbb{E}_{N_{oe}}\left[I\left(\min\left( \frac{\mathtt{r_{e}}N_{oe}(1-\eta)^{-1}}{\log_2(1+\beta_e)},1\right);\tilde{\kappa}_{1e},\tilde{\kappa}_{2e}\right)\right], \end{align}} where $s\in\{c,e\}$, $\tilde{\kappa}_{1s}$ and $\tilde{\kappa}_{2s}$ are given in Section \ref{subsec:BetaApproximation}, and $\mathbb{P}[N_{os}=n]$ is given in Lemma \ref{lemma:CC_CE_Loads}. \end{cor} {\begin{proof} In the OMA case, the transmission rates of the CC and CE users, for given $\mathbf{y}$ and $\Phi$, are $\frac{\eta}{N_{oc}}\log_2(1+\beta_c)\mathbb{E}\left[\mathbbm{1}_{\tilde{{\mathcal{E}}_c}}(\mathtt{SIR}_{c})\mid\mathbf{y},\Phi\right]$ and $\frac{1-\eta}{N_{oe}}\log_2(1+\beta_e)\mathbb{E}\left[\mathbbm{1}_{\tilde{{\mathcal{E}}_e}}(\mathtt{SIR}_{e})\mid\mathbf{y},\Phi\right]$, respectively. Hence, further following the steps in Appendix D, we complete the proof. \end{proof}} \subsection{Delay Analysis of the CC and CE Users} \label{subsec:PacketDelay} This subsection analyzes the delay performance of the typical CC/CE user for the given RT service under the NOMA setup described in Section \ref{subsec:SchedulinThroughputDelay}. Because of the assumption of saturated queues at the interfering BSs, the meta distributions derived in Section \ref{sec:MetaDistributions_NOMA_OMA} can be directly used to analyze the upper bound of the delay performance of the typical CC and CE users (see Section \ref{subsec:SchedulinThroughputDelay}).
That said, the conditional packet transmission rate of the typical CC/CE user is the product of its scheduling probability and success probability as stated in \eqref{eq:DelayCondPhi_CC_CE}. {It may be noted that the successful transmission events across the time slots are independent for given $\mathbf{y}$ and $\Phi$. Hence, the service times of packets of the typical CC/CE user at $\mathbf{y}$ given $\Phi$ are i.i.d. and follow a geometric distribution with parameter $\mu_c(\mathbf{y},\Phi)$/$\mu_e(\mathbf{y},\Phi)$.} Besides, packets arrive in each time slot as per a Bernoulli process with mean $\varrho_c$/$\varrho_e$. Thus, the queue of the typical CC/CE user can be modeled as a Geo/Geo/1 queue. {The upper bounds on the conditional mean delays of the typical CC and CE users become \cite{atencia2004discrete}} { \begin{align} \begin{split} D_c(\mathbf{y},\Phi)&=\frac{1-\varrho_c}{\mu_c(\mathbf{y},\Phi)-\varrho_c}\mathbbm{1}_{\mu_c(\mathbf{y},\Phi)>\varrho_c}\\ \text{ and } D_e(\mathbf{y},\Phi)&=\frac{1-\varrho_e}{\mu_e(\mathbf{y},\Phi)-\varrho_e}\mathbbm{1}_{\mu_e(\mathbf{y},\Phi)>\varrho_e}, \end{split}\label{eq:MeanConditionalDelay_CC_CE} \end{align}} respectively. Let $\mathtt{t_{c}}$ and $\mathtt{t_e}$ be the mean delay thresholds of the CC and CE users, respectively.
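The Geo/Geo/1 mean sojourn time underlying \eqref{eq:MeanConditionalDelay_CC_CE} can be checked against a direct slot-by-slot simulation of a late-arrival discrete-time queue, in which a packet cannot be served in its arrival slot. A sketch (the arrival and service probabilities below are illustrative, not from the paper):

```python
import random

def geo_geo_1_mean_delay(arrival_p, service_p, n_slots=200_000, seed=7):
    """Late-arrival Geo/Geo/1 queue; returns the empirical mean sojourn time in slots."""
    random.seed(seed)
    queue, delays = [], []
    for t in range(n_slots):
        # Service attempt first: the head-of-line packet departs w.p. service_p
        if queue and random.random() < service_p:
            delays.append(t - queue.pop(0))
        # Then a new packet may arrive and join the queue (served from the next slot on)
        if random.random() < arrival_p:
            queue.append(t)
    return sum(delays) / len(delays)

d = geo_geo_1_mean_delay(0.2, 0.5)   # closed form: (1 - 0.2)/(0.5 - 0.2) = 2.67 slots
```

With arrival probability $\varrho=0.2$ and service probability $\mu=0.5$, the empirical mean settles near $(1-\varrho)/(\mu-\varrho)\approx 2.67$ slots, matching the closed form.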
\begin{theorem} \label{thm:CDF_CondMeanDelay} The complementary $\mathtt{CDF}$s ($\mathtt{CCDF}$s) of the conditional mean delays of the typical CC and CE users under NOMA with random scheduling are upper bounded respectively by \begin{align} &{\mathcal{D}}_c(\mathtt{t_{c}};\chi_c)=\nonumber\\ & \mathbb{E}_{N_{oc}}\left[I\left(\min\left(N_{oc}\left(\frac{1-\varrho_c}{\mathtt{t_{c}}}+\varrho_c\right),1\right);\kappa_{1c},\kappa_{2c}\right)\right],\label{eq:DelayCDF_UpperBound_CC}\\ &\text{and }{\mathcal{D}}_e(\mathtt{t_{e}};\chi_e)=\nonumber\\ &\mathbb{E}_{N_{oe}}\left[I\left(\min\left(N_{oe}\left(\frac{1-\varrho_e}{\mathtt{t_{e}}}+\varrho_e\right),1\right);\kappa_{1e},\kappa_{2e}\right)\right], \label{eq:DelayCDF_UpperBound_CE} \end{align} where $s\in\{c,e\}$, $\kappa_{1s}$ and $\kappa_{2s}$ are given in \eqref{eq:BetaApp}, and $\mathbb{P}[N_{os}=n]$ is given in Lemma \ref{lemma:CC_CE_Loads}. \end{theorem} \begin{proof} Using Assumption 1 along with \eqref{eq:DelayCondPhi_CC_CE} and \eqref{eq:MeanConditionalDelay_CC_CE}, the $\mathtt{CDF}$ of $D_c(\mathbf{y},\Phi)$ becomes $ \mathbb{P}[D_c(\mathbf{y},\Phi)< \mathtt{t_{c}}]$ { \begin{align*} &=\mathbb{P}\left[\mu_c(\mathbf{y},\Phi)>\frac{1-\varrho_c}{\mathtt{t_c}}+\varrho_c,\mu_c(\mathbf{y},\Phi)>\varrho_c\right],\\ &= \mathbb{E}_{N_{oc}}\left[\mathbb{P}\left(p_c(\beta_c,\beta_e\mid\mathbf{y},\Phi)> N_{oc}\left(\frac{1-\varrho_c}{\mathtt{t_{c}}}+\varrho_c\right)\mid N_{oc}\right)\right]. \end{align*}} {Further, using the meta distribution for the system model discussed in Section \ref{subsec:SchedulinThroughputDelay}, the $\mathtt{CCDF}$ of the upper bounded mean delay of the CC user $D_c(\mathbf{y},\Phi)$ is given by \eqref{eq:DelayCDF_UpperBound_CC}.
Similarly, the $\mathtt{CCDF}$ of the upper bounded mean delay of the CE user $D_e(\mathbf{y},\Phi)$ is obtained as given in \eqref{eq:DelayCDF_UpperBound_CE}.} \end{proof} For OMA, the successful packet transmission rates of the typical CC and CE users at $\mathbf{y}$ conditioned on $\Phi$ become { \begin{align} \begin{split} \tilde{\mu}_{c}(\mathbf{y},\Phi) &=\frac{\eta\mathbb{E}\left[\mathbbm{1}_{\tilde{{\mathcal{E}}}_c}(\mathtt{SIR}_{c})\mid\mathbf{y},\Phi\right]}{N_{oc}}\\ \text{ and } \tilde{\mu}_{e}(\mathbf{y},\Phi) &= \frac{(1-\eta)\mathbb{E}\left[\mathbbm{1}_{\tilde{{\mathcal{E}}}_e}(\mathtt{SIR}_{e})\mid\mathbf{y},\Phi\right]}{N_{oe}}. \end{split} \label{eq:DelayCondPhi_CC_CE_OMA} \end{align}} The following corollary presents the upper bounds on the $\mathtt{CCDF}$s of the mean delays for the OMA case. \begin{cor} \label{cor:CDF_CondMeanDelay} The $\mathtt{CCDF}$s of the mean delays of the typical CC and CE users under OMA with random scheduling are upper bounded respectively by \begin{align} &\tilde{{\mathcal{D}}}_c(\mathtt{t_{c}};\beta_c)=\nonumber\\ & \mathbb{E}_{N_{oc}}\left[I\left(\min\left( \frac{N_{oc}}{\eta}\left(\frac{1-\varrho_c}{\mathtt{t_{c}}}+\varrho_c\right),1\right);\tilde{\kappa}_{1c},\tilde{\kappa}_{2c}\right)\right],\label{eq:DelayCDF_UpperBound_CC_OMA}\\ &\text{and }\tilde{{\mathcal{D}}}_e(\mathtt{t_{e}};\beta_e)=\nonumber\\ & \mathbb{E}_{N_{oe}}\left[I\left(\min\left(\frac{N_{oe}}{1-\eta}\left(\frac{1-\varrho_e}{\mathtt{t_{e}}}+\varrho_e\right),1\right);\tilde{\kappa}_{1e},\tilde{\kappa}_{2e}\right)\right],\label{eq:DelayCDF_UpperBound_CE_OMA} \end{align} where $s\in\{c,e\}$, $\tilde{\kappa}_{1s}$ and $\tilde{\kappa}_{2s}$ are given in Section \ref{subsec:BetaApproximation}, and $\mathbb{P}[N_{os}=n]$ is given in Lemma \ref{lemma:CC_CE_Loads}.
\end{cor} { \begin{proof} Using the successful packet transmission rates $\tilde{\mu}_{c}(\mathbf{y},\Phi)$ and $\tilde{\mu}_{e}(\mathbf{y},\Phi)$ given in \eqref{eq:DelayCondPhi_CC_CE_OMA} and further following the proof of Theorem \ref{thm:CDF_CondMeanDelay}, we obtain the upper bounds on the $\mathtt{CCDF}$s of the mean delays of the CC and CE users under OMA as given in \eqref{eq:DelayCDF_UpperBound_CC_OMA} and \eqref{eq:DelayCDF_UpperBound_CE_OMA}, respectively. \end{proof}} \section{Resource Allocation and Performance Comparison} \label{sec:CSEoptimization} {In this section, we focus on the RA for maximizing the network performance under both NRT and RT services while meeting the QoS constraints of the CC and CE users. For NRT services, we focus on the maximization of $\mathtt{CSR}$ such that the minimum transmission rates of the CC and CE users are ensured. However, for RT services, we consider the maximization of $\mathtt{SEC}$ such that CC and CE services with minimum arrival rates can be supported, and their corresponding packet transmission delays are also bounded. First, we develop an efficient method to obtain a near-optimal RA for the NRT services in the following subsection. } {\subsection{RA under NRT services} \label{subsec:RA_NRT} Using the success probabilities of the CC and CE users, the $\mathtt{CSR}$s for the NOMA and OMA systems can be, respectively, obtained as \begin{align} \mathtt{CSR}_\mathtt{NOMA}=&\mathtt{B}\log_2(1+\beta_c)M_1^c(\chi_c)\nonumber\\ &+\mathtt{B}\log_2(1+\beta_e)M_1^e(\chi_e),\\ \text{and}~ \mathtt{CSR}_\mathtt{OMA}=&\eta\mathtt{B}\log_2(1+\beta_c)\tilde{M}_1^c(\beta_c)\nonumber\\ &+(1-\eta)\mathtt{B}\log_2(1+\beta_e)\tilde{M}_1^e(\beta_e). \end{align} The RA formulation for the NRT services is as follows.} {$\bullet$ $\mathcal{P}_1$- $\mathtt{CSR}$ maximization subject to the minimum mean rates of the CC and CE users.
\begin{align*} \text{NOMA:}\ \hspace{-0mm}\begin{split} \max~&\mathtt{CSR}_\mathtt{NOMA}\\ \text{s.t.}~&0<\theta<1\\ &\bar R_c(\chi_c)\geq \mathtt{R_c}\\ &\bar R_e(\chi_e)\geq \mathtt{R_e}, \end{split} \text{~and~OMA:}\ \hspace{-3mm}\begin{split} \max~&\mathtt{CSR}_\mathtt{OMA}\\ \text{s.t.}~&0<\eta<1\\ &\tilde{R}_c(\beta_c)\geq \mathtt{R_c}\\ &\tilde{R}_e(\beta_e)\geq \mathtt{R_e}. \end{split} \end{align*}} {\subsubsection{ Near-optimal RA for ${\mathcal{P}}_1$-NOMA} {It is difficult to obtain an exact optimal RA to ${\mathcal{P}}_1$-NOMA as it does not fall within the standard convex-optimization framework. Therefore, we present an efficient method to obtain a near-optimal solution based on the insights obtained from the NOMA analysis.} It is natural to allocate the remaining power to the CC user after achieving the minimum transmission rate of the CE user in order to maximize the $\mathtt{CSR}$. Therefore, we consider maximizing the mean transmission rate of the CC user under the constraints of the minimum transmission rates of the CC and CE users. One can easily see that $\theta\leq\theta_{\mathtt{NC}}=(1+\beta_e)^{-1}$ is the necessary condition for $\mathtt{SIR}_e\geq\beta_e$. From \eqref{eq:SuccessEevent_CC}, we note that the success probability of the CC user increases as $\chi_c$ decreases. The success probability (thus, the mean transmission rate) of the CC user is an increasing function of $\theta$ for $0<\theta\leq\hat\theta$ and a decreasing function for $\hat\theta<\theta\leq \theta_{\mathtt{NC}}$, where \begin{equation} \hat\theta=\underset{{0<\theta\leq \theta_{\mathtt{NC}}}}{\mathrm{argmin}}~\chi_c=\left\{\theta:\frac{\beta_c}{\theta}=\frac{\beta_e}{1-\theta(1+\beta_e)}\right\}. \label{eq:theta_critical} \end{equation} From the above, we know that there are two solutions (if they exist), denoted by $\theta_{lc}$ and $\theta_{uc}$, to $\bar{R}_c(\chi_c)=\mathtt{R_c}$ such that $\theta_{lc}\leq \hat\theta$ and $\theta_{uc}\geq\hat\theta$.
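Solving \eqref{eq:theta_critical} by equating the two branches of the $\max$ in $\chi_c$ gives the closed form $\hat\theta=\beta_c/\big(\beta_e+\beta_c(1+\beta_e)\big)$. A small sketch (parameter values chosen to match the earlier figure discussion) confirming that this recovers $\hat\theta\approx 0.5$ for $(\beta_c,\beta_e)=(0,-3)$ dB and that $\chi_c$ attains its minimum there:

```python
def chi_c(theta, beta_c, beta_e):
    # chi_c = max(beta_c/theta, beta_e/(1 - theta(1+beta_e)))
    return max(beta_c / theta, beta_e / (1.0 - theta * (1.0 + beta_e)))

def theta_hat(beta_c, beta_e):
    # Equate the two branches of the max in chi_c and solve for theta
    return beta_c / (beta_e + beta_c * (1.0 + beta_e))

bc, be = 10 ** 0.0, 10 ** -0.3   # (beta_c, beta_e) = (0, -3) dB
th = theta_hat(bc, be)           # approximately 0.5, as noted in the text
assert chi_c(th, bc, be) <= min(chi_c(th - 0.05, bc, be), chi_c(th + 0.05, bc, be))
```

Below $\hat\theta$ the $\mathtt{L}_c$ branch $\beta_c/\theta$ dominates, and above it the $\mathtt{L}_e$ branch dominates, consistent with the monotonicity argument above.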
Besides, the transmission rate of the CE user is a non-increasing function of $\theta$. Let $\theta_e$ be the solution of $\bar{R}_e(\chi_e)=\mathtt{R_e}$. Hence, from the above discussion, the optimal allocation becomes $\theta^*=\min\{\theta_e,\hat\theta\}$ if $\theta_{lc}$ and $\theta_{e}$ exist such that $\theta_e\geq\theta_{lc}$. } {\subsubsection{ Optimal RA for ${\mathcal{P}}_1$-OMA} For $\beta_c>\beta_e$ and $\tilde{M}_1^c(\beta_c)>\tilde{M}_1^e(\beta_e)$, one can easily infer that the $\mathtt{CSR}_\mathtt{OMA}$ and the mean transmission rate of the CC user are monotonically increasing functions of $\eta$. In contrast, the mean transmission rate of the CE user is a monotonically decreasing function of $\eta$. Therefore, it is straightforward to choose the optimal solution $\eta^*=\eta_e$ for OMA if $\eta_c\leq\eta_e$ where $\eta_c$ and $\eta_e$ are the solutions of $\tilde{R}_c(\beta_c)=\mathtt{R_c}$ and $\tilde{R}_e(\beta_e)=\mathtt{R_e}$, respectively. Note that there is no feasible solution if $\eta_c>\eta_e$.} {\subsection{RA under RT services} \label{subsec:RA_RT} To evaluate the $\mathtt{EC}$, we define the delay QoS constraint for the CC/CE user as the requirement that the mean delay outage probability stays below $\mathtt{O}_c$/$\mathtt{O}_e$ for the given mean delay threshold $\mathtt{t_c}$/$\mathtt{t_e}$. Therefore, using the upper bound of the mean delay outage and the fact that the mean delay outage is a monotonically increasing function of the arrival rate, the lower bounds of $\mathtt{EC}$s for the CC and CE services under NOMA and OMA become \begin{align} \mathtt{EC}_\mathtt{NOMA}^s&=\{\varrho_s\in\mathbb{R}_+:{\mathcal{D}}_s(\mathtt{t_s},\chi_s)=\mathtt{O}_s\}\\ \text{ and } \mathtt{EC}_\mathtt{OMA}^s&=\{\varrho_s\in\mathbb{R}_+:\tilde{{\mathcal{D}}}_s(\mathtt{t_s},\beta_s)=\mathtt{O}_s\}, \end{align} respectively, where $s=\{c,e\}$.
The RAs for RT services are formulated as below.} $\bullet$ $\mathcal{P}_2$- $\mathtt{SEC}$ maximization subject to the minimum $\mathtt{EC}$s for the CC and CE users. \\ \begin{align*} \text{NOMA:} \begin{split} \max~&\mathtt{EC}_\mathtt{NOMA}^c+\mathtt{EC}_\mathtt{NOMA}^e\\ \text{s.t.}~&0<\theta<1\\ &\mathtt{EC}_\mathtt{NOMA}^c\geq \bar{\varrho}_c\\ &\mathtt{EC}_\mathtt{NOMA}^e\geq \bar{\varrho}_e, \end{split} \text{~and~OMA:} \hspace{-3mm}\begin{split} \max~&\mathtt{EC}_\mathtt{OMA}^c+\mathtt{EC}_\mathtt{OMA}^e\\ \text{s.t.}~&0<\eta<1\\ &\mathtt{EC}_\mathtt{OMA}^c\geq \bar{\varrho}_c\\ &\mathtt{EC}_\mathtt{OMA}^e\geq \bar{\varrho}_e. \end{split} \end{align*} The $\mathtt{EC}$ constraints of the RA formulation ${\mathcal{P}}_2$ can facilitate RT services for the CC and CE users requiring minimum packet arrival rates $\bar{\varrho}_c$ and $\bar{\varrho}_e$, respectively. At the same time, this formulation also ensures that the outage probability of the mean delay for the CC (CE) user for the given delay threshold $\mathtt{t_c}$ ($\mathtt{t_e}$) is below $\mathtt{O}_c$ ($\mathtt{O}_e$). The local mean packet delay (i.e., the mean number of slots required for the successful delivery of a packet) is equal to the first inverse moment of the conditional success probability \cite{Haenggi2013local}. It is therefore natural that the $\mathtt{EC}$ is higher for a lower inverse moment of the conditional success probability. In fact, it will be evident in Section \ref{sec:NumericalResults} that the outages of delay and transmission rate follow similar trends w.r.t. the power allocation $\theta$ under NOMA or the time allocation $\eta$ under OMA. Hence, we can infer that the $\mathtt{EC}$s of the CC and CE users also behave similarly to their transmission rates. Therefore, the maximization of $\mathtt{SEC}$ is similar to the maximization of $\mathtt{CSR}$.
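Because the mean delay outage is monotonically increasing in the arrival rate, each $\mathtt{EC}$ above can be recovered by a simple bisection on $\varrho_s$. The following Python sketch illustrates this with a hypothetical monotone outage curve standing in for ${\mathcal{D}}_s(\mathtt{t_s},\chi_s)$ (the actual outage expression comes from the delay analysis, not from this toy function):

```python
import math

def ec_by_bisection(delay_outage, target_O, lo=0.0, hi=1.0, tol=1e-9):
    """Largest arrival rate rho with delay_outage(rho) <= target_O, assuming
    delay_outage is monotonically increasing in rho (as argued in the text)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if delay_outage(mid) <= target_O:
            lo = mid          # still feasible, move the lower bracket up
        else:
            hi = mid          # infeasible, move the upper bracket down
        if hi - lo < tol:
            break
    return lo

# Hypothetical monotone outage curve (stand-in only; not the paper's D_s).
toy_outage = lambda rho: 1.0 - math.exp(-5.0 * rho)
ec = ec_by_bisection(toy_outage, 0.2)   # root of 1 - exp(-5*rho) = 0.2
```

For this toy curve the root is $-\ln(0.8)/5$, which the bisection matches to the requested tolerance.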
{\subsubsection{Near-optimal RA for ${\mathcal{P}}_2$-NOMA} Since ${\mathcal{P}}_2$-NOMA is a non-convex optimization problem, we adopt the approach developed in Subsection \ref{subsec:RA_NRT} to obtain its near-optimal solution. As already implied above, the $\mathtt{EC}$ of the CC users is an increasing function of $\theta$ for $0<\theta\leq\hat\theta$ and a decreasing function for $\hat\theta<\theta\leq\theta_{\mathtt{NC}}$. However, the $\mathtt{EC}$ for the CE users is a decreasing function of $\theta$. Let $\tilde\theta_{lc}$ and $\tilde{\theta}_{uc}$ be the two solutions of $\mathtt{EC}_\mathtt{NOMA}^c=\bar\varrho_c$ such that $\tilde\theta_{lc}\leq \hat\theta$ and $\tilde\theta_{uc}\geq\hat\theta$, and let $\tilde\theta_e$ be the solution of $\mathtt{EC}_\mathtt{NOMA}^e=\bar\varrho_e$. Hence, the optimal power allocation becomes $\tilde{\theta}^*=\min(\tilde\theta_e,\hat{\theta})$ if $\tilde\theta_{lc}$ and $\tilde\theta_e$ exist such that $\tilde\theta_e\geq\tilde\theta_{lc}$.} \subsubsection{Optimal RA for ${\mathcal{P}}_2$-OMA} Note that $\mathtt{EC}_\mathtt{OMA}^c$ increases with $\eta$ as the scheduling probability for the CC users increases with $\eta$. However, $\mathtt{EC}_\mathtt{OMA}^e$ follows exactly the opposite trend. Besides, allocating more transmission time to the CC users obviously results in a higher $\mathtt{SEC}$ because $\tilde{M}_1^c(\beta_c)>\tilde{M}_1^e(\beta_e)$. Therefore, we can obtain the optimal solution $\tilde{\eta}^*=\tilde{\eta}_e$ for OMA if $\tilde\eta_c\leq \tilde\eta_e$ where $\tilde\eta_c$ and $\tilde\eta_e$ are the solutions of $\mathtt{EC}_\mathtt{OMA}^c=\bar{\varrho}_c$ and $\mathtt{EC}_\mathtt{OMA}^e=\bar{\varrho}_e$, respectively. \begin{figure*} \centering \includegraphics[width=.45\textwidth]{CSE_MeanRate_NOMA_v1.pdf} \includegraphics[width=.45\textwidth]{CSE_MeanRate_OMA_v1.pdf} \caption{$\mathtt{CSR}$ and mean transmission rates of CC and CE users under NOMA (Left) and OMA (Right).
The solid and dashed curves correspond to the analytical results, and the markers correspond to the simulation results.} \label{fig:MeanThrouput_CSE} \end{figure*} {\subsection{Performance gain of NOMA} {Due to the intra-cell interference, the successful transmission probabilities of both the CC and CE users drop in the NOMA system compared to those in the OMA system.} However, concurrent transmissions increase their scheduling probabilities in NOMA as compared to those of the OMA system. Since $\tilde{M}_1^c(\beta_c)={M}_1^c(\beta_c)$ and $\tilde{M}_1^e(\beta_e)={M}_1^e(\beta_e)$, the performance gains in the transmission rates of the CC and CE users are, respectively, given by \begin{equation} \mathtt{g}_c=\frac{M_1^c(\chi_c)}{\eta M_1^c(\beta_c)} ~\text{and}~ \mathtt{g}_e=\frac{M_1^e(\chi_e)}{(1-\eta) M_1^e(\beta_e)}. \label{eq:NOMA_Gain} \end{equation} Thus, the transmission gain is an increasing function of $\theta$ in the interval $(0,\hat\theta]$ for the CC users, whereas it is a decreasing function of $\theta$ for the CE users. This implies that there is a performance trade-off between the transmission gains for the CC and CE users. For a given $\eta$, we have $\mathtt{g}_c>1$ for a set $\Theta_c(\eta)=\{\theta\in[0,\hat\theta]: \frac{M_1^c(\chi_c)}{ M_1^c(\beta_c)} >\eta\}$ and $\mathtt{g}_e>1$ for a set $\Theta_e(\eta)=\{\theta\in[0,\hat\theta]: \frac{M_1^e(\chi_e)}{ M_1^e(\beta_e)} >1-\eta\}$. Therefore, NOMA is beneficial for both CC and CE users compared to OMA only if $\Theta_c(\eta)\cap\Theta_e(\eta)\neq\emptyset$. Otherwise, at least one of these two types of users will underperform in NOMA compared to OMA. It is difficult to analytically show that the intersection of $\Theta_c(\eta)$ and $\Theta_e(\eta)$ is non-empty for a given $\eta$.
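The gain expressions in \eqref{eq:NOMA_Gain} are simple to evaluate pointwise once the first moments are available. The Python sketch below checks the joint-benefit condition for a single $(\theta,\eta)$ pair; the moment values are purely hypothetical placeholders for $M_1^c$ and $M_1^e$, not outputs of the analysis:

```python
def noma_gains(M1c_chi, M1e_chi, M1c_beta, M1e_beta, eta):
    """Transmission-rate gains of NOMA over OMA per \\eqref{eq:NOMA_Gain}:
    g_c = M_1^c(chi_c)/(eta*M_1^c(beta_c)), g_e analogously with 1-eta."""
    g_c = M1c_chi / (eta * M1c_beta)
    g_e = M1e_chi / ((1.0 - eta) * M1e_beta)
    return g_c, g_e

# Hypothetical moments: NOMA success probabilities are somewhat lower
# (intra-cell interference), but each user is scheduled in every slot.
g_c, g_e = noma_gains(0.70, 0.55, 0.85, 0.75, eta=0.5)
both_benefit = (g_c > 1.0) and (g_e > 1.0)   # theta in Theta_c ∩ Theta_e
```

Sweeping such a check over a $\theta$ grid is how the non-emptiness of $\Theta_c(\eta)\cap\Theta_e(\eta)$ can be verified numerically when an analytical argument is out of reach.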
That said, it will be evident in Section \ref{sec:NumericalResults} that this condition does not hold only for higher values of $\eta$.} \section{Numerical Results and Discussion} \label{sec:NumericalResults} In this section, we first verify the accuracy of the analytical results by comparing them with the simulation results obtained through Monte Carlo simulations. Next, we discuss the performance trends of the achievable $\mathtt{CSR}$, and the transmission rates and mean delays of the CC and CE users. Further, we also compare the performances of these metrics for the NOMA and OMA systems. For this, we consider $\lambda=1$, $\nu=5$, $\alpha=4$, $\tau=0.7$, $(\beta_c,\beta_e)=(3,-3)$ in dB, $(\mathtt{r_c},\mathtt{r_e})=(0.1,0.05)$, $(\varrho_c,\varrho_e)=(0.05,0.05)$, and $(\mathtt{t_c},\mathtt{t_e})=(20,30)$, unless mentioned otherwise. Fig. \ref{fig:MeanThrouput_CSE} depicts that the mean transmission rates of the CC and CE users closely match the simulation results for both the NOMA and OMA systems. The curves correspond to the analytical results, whereas the markers correspond to the simulation results. The figure provides a visual verification of the trends of the transmission rate functions discussed in Section \ref{sec:CSEoptimization}. It can be seen that the $\mathtt{CSR}$ and the transmission rates are zero for $\theta>\theta_{\mathtt{NC}}\approx0.66$, which verifies the necessary condition for NOMA operation. In addition, we can see that the transmission rate of the CE user monotonically decreases with $\theta\leq\theta_{\mathtt{NC}}$, whereas the transmission rate of the CC user monotonically increases with $\theta$ for $0<\theta\leq \hat\theta$ and monotonically decreases for $\hat\theta<\theta\leq\theta_{\mathtt{NC}}$, where $\hat\theta=0.5$ (see \eqref{eq:theta_critical}).
\begin{figure}[h] \centering\vspace{-3mm} \includegraphics[width=.45\textwidth]{RateOutage_NOMA_OMA.pdf} \includegraphics[width=.45\textwidth]{MeandelayOutage_NOMA_OMA_v1.pdf}\vspace{-.2cm} \caption{The outage probabilities of transmission rates (Left) and mean delays (Right) of the CC and CE users. The solid and dashed curves correspond to the analytical results and the markers correspond to the simulation results.}\vspace{-2mm}\label{fig:Rate_Delay_Outage} \end{figure} \begin{figure*} \centering \hspace{-.3cm}\includegraphics[width=.35\textwidth]{RateRegion_v3.pdf} \hspace{-.7cm} \includegraphics[width=.35\textwidth]{CSE_vs_UserDensity_v1.pdf} \hspace{-.7cm} \includegraphics[width=.35\textwidth]{SEC_vs_UserDensity_P2_v1.pdf}\vspace{-.25cm} \caption{Rate region of the CC and CE users in NOMA and OMA for $\beta=[\beta_c,\beta_e]=[3, 0]$ in dB (Left). Maximum $\mathtt{CSR}$ under rate constraints where $\mathtt{R}=[\mathtt{R_c}, \mathtt{R_e}]$ (Middle) and maximum $\mathtt{SEC}$ under minimum $\mathtt{EC}$ constraints (Right) for $(\mathtt{O_c},\mathtt{O_e})=(0.2,0.2)$ and $(\bar{\varrho}_c,\bar{\varrho}_e)=(0.05,0.05)$ where $\mathtt{T}=[\mathtt{t_c}, \mathtt{t_e}]$. The solid and dashed lines correspond to the NOMA and OMA, respectively. } \label{fig:RateRegion_CSEoptimization}\vspace{-3mm} \end{figure*} Fig. \ref{fig:Rate_Delay_Outage} verifies that the transmission rate outage probabilities and upper bounds of the mean delay outage probabilities of the CC and CE users closely match with the simulation results. The outage probabilities follow the trends opposite to the mean transmission rates. The outage probabilities of the CC user degrade with {the} increase of $\tau$ because of two reasons: 1) increase in the mean number of CC users (i.e., degraded scheduling probability) and 2) decrease in the success probability of CC users. However, a similar direct trend is not visible for the CE user with respect to $\tau$. 
{This is because increasing $\tau$ results in the decrease of both the mean number of CE users (i.e., improved scheduling instances) and the success probability of the CE users. } { Fig. \ref{fig:RateRegion_CSEoptimization} (Left) illustrates the transmission rate region for the CC and CE users under the NOMA and OMA systems. For OMA, the rate region is the linear combination of the maximum transmission rates of the CC and CE users, where $\eta=1$ and $\eta=0$ correspond to their maximum transmission rates, respectively. The figure depicts that the performance gains of both the CC and CE users given in \eqref{eq:NOMA_Gain} are above unity for $\eta\leq 0.72$, which implies that the intersection of $\Theta_c(\eta)$ and $\Theta_e(\eta)$ is non-empty for $\eta\in[0,0.72]$. Further, it can be clearly observed that the NOMA performance gains are independent of $\nu$, whereas the transmission rate regions scale down with the increase of $\nu$. This is because increasing $\nu$ lowers the scheduling probabilities of these users, which, as a result, affects their transmission rates but not their relative performance gains. Besides, the optimal power allocation with respect to the CC users is $\hat\theta$ for the NOMA system. At this point, the CE users receive a non-zero transmission rate as they are allocated the remaining power $1-\hat\theta$. Hence, unlike OMA, the CE users can receive a non-zero transmission rate when the CC users' transmission rate is maximum under the NOMA system. } {Fig. \ref{fig:RateRegion_CSEoptimization} (Middle) shows the achievable $\mathtt{CSR}$ vs. $\nu$ under the minimum transmission rate constraints and Fig.~\ref{fig:RateRegion_CSEoptimization} (Right) shows the achievable $\mathtt{SEC}$ vs. $\nu$ under the minimum $\mathtt{EC}$ constraints. The circular markers correspond to the proposed solutions and the curves correspond to the exact optimal solutions, which are obtained through a brute-force search.
We observe that the maximum achievable $\mathtt{CSR}$ and $\mathtt{SEC}$ drop with the increase of $\nu$ for both NOMA and OMA systems.} This is because the mean transmission rates of both CC and CE users drop with the increase of $\nu$ (i.e., the increase of the number of users sharing the same RB). This restricts the feasible range of $\theta$ and thus the achievable $\mathtt{CSR}$ and $\mathtt{SEC}$. The middle figure depicts that NOMA provides a better $\mathtt{CSR}$ than OMA. The $\mathtt{CSR}$ drops at a higher rate when the minimum required transmission rates are higher, as it causes the power consumption {to increase aggressively with $\nu$}. {Fig.~\ref{fig:RateRegion_CSEoptimization} (Right)} depicts that NOMA provides better $\mathtt{SEC}$ for lower $\mathtt{SIR}$ thresholds at higher values of $\nu$ as compared to OMA. This is because {their scheduling probabilities predominantly determine the packet service rates of CC and CE users (thus the $\mathtt{SEC}$) for the lower values of $\mathtt{SIR}$ thresholds.} \section{Conclusion} This paper has provided a comprehensive analysis of downlink two-user NOMA enabled cellular networks for both RT and NRT services. In particular, a new 3GPP-inspired user ranking technique has been proposed wherein the CC and CE users are paired for the non-orthogonal transmission. To the best of our knowledge, this is the first stochastic geometry-based approach to analyze downlink NOMA using a 3GPP-inspired user ranking scheme that depends upon both the link qualities from the serving and dominant interfering BSs. Unlike the ranking techniques used in the literature, the proposed technique ranks users accurately with distinct link qualities{,} which is vital to obtain performance gains in NOMA. For the proposed user ranking, we first derive the moments and approximate distributions of the meta distributions for the CC and CE users.
Next, we obtain the distributions of their transmission rates and mean delays under random scheduling, which are then used to characterize $\mathtt{CSR}$ for NRT service and $\mathtt{SEC}$ for RT service, respectively. Using these results, we investigate two RA techniques with the objectives of maximizing $\mathtt{CSR}$ and $\mathtt{SEC}$. The numerical results demonstrated that NOMA{,} along with the proposed user ranking technique{,} results in a significantly higher $\mathtt{CSR}$ and improved transmission rate region for the CC and CE users as compared to OMA. Besides, our results showed that NOMA provides improved $\mathtt{SEC}$ as compared to OMA for higher user densities. The natural extension of this work is to analyze the $N$-user downlink NOMA in cellular networks. Besides, one could also employ the proposed way of user partitioning to analyze the transmission schemes relying on the partition of users based on their perceived link qualities, such as SFR. \section*{Appendix} \subsection{Proof of Lemma \ref{lemma:DistanceDistributions}} \label{app:DistanceDistributions} Using \eqref{eq:pdf_RoRd} and the definitions of $\Psi_{cc}$ and $\Psi_{ce}$ given in \eqref{eq:CC_CE_pps}, it is easy to see that a uniformly distributed user in the typical cell $V_o$ is the typical CC user if $R_o\leq \tau R_d$ and the typical CE user if $R_o>\tau R_d$. Therefore, the probability that the typical user is the CC user becomes $\mathbb{P}\left[R_o\leq R_d\tau\right]=$ \begin{align*} &(2\pi\rho\lambda)^2\int\limits_0^\infty\int\limits_0^{ r_d\tau}r_or_d\exp(-\pi\lambda\rho r_d^2){\rm d}r_o{\rm d}r_d=\tau^2, \end{align*} and thus the probability that the typical user is the CE user becomes $1-\tau^2$. Using Eq.
\eqref{eq:pdf_RoRd} and $\mathbb{P}[R_o\leq R_d\tau]=\tau^2$, we obtain $\mathtt{CDF}$ of $R_o$ for the CC user as $F^{\text{c}}_{R_o}(r_o)=$ \begin{align} \mathbb{P}[R_o\leq r_o|R_o<R_d\tau]&=\frac{(2\pi\rho\lambda)^2}{\tau^2}\int\limits_0^{r_o}\int\limits_{\frac{u}{\tau}}^\infty u v\exp(-\pi\rho\lambda v^2){\rm d}v{\rm d}u\nonumber\\ &=1-\exp\left(-\pi\rho\lambda{r_o^2}/{\tau^2}\right), \label{eq:CDF_Ro_proof} \end{align} for $r_o\geq 0$. Using $R_o\leq R_d\tau$, we obtain the $\mathtt{CDF}$ of $R_d$ of CC user as \begin{align} F^{\text{c}}_{R_d\mid R_o}\left(r_d\mid r_o\right)&=\mathbb{P}[R_d\leq r_d\mid R_d>r_o/\tau]\nonumber\\ &=1-\exp\left(-\pi\rho\lambda\left(r_d^2-{r_o^2}/{\tau^2}\right)\right), \label{eq:CDF_Rd_given_Ro_proof} \end{align} $\text{for}~r_d\geq \frac{r_o}{\tau}$. Therefore, the joint $\mathtt{pdf}$ of $R_o$ and $R_d$ for the CC user given in \eqref{eq:pdf_RoRd_CC} directly follows using the Bayes' theorem along with \eqref{eq:CDF_Ro_proof} and \eqref{eq:CDF_Rd_given_Ro_proof}. Similarly, using \eqref{eq:pdf_RoRd} and $\mathbb{P}[R_o> R_d\tau]=1-\tau^2$, we obtain the $\mathtt{CDF}$ of $R_o$ for the CE user as \begin{align} F^{\text{e}}_{R_o}(r_o)&=\mathbb{P}[R_o\leq r_o\mid R_o>R_d\tau]\nonumber\\ &=\frac{(2\pi\rho\lambda)^2}{1-\tau^2}\int_0^{r_o}\int_{u}^{\frac{u}{\tau}} uv\exp(-\pi\rho\lambda v^2){\rm d}v{\rm d}u\nonumber\\ &=1-\frac{1-\tau^2\exp\left(-\pi\rho\lambda{r_o^2}(\tau^{-2}-1)\right)}{(1-\tau^2)\exp\left(\pi\rho\lambda{r_o^2}\right)} \label{eq:CDF_Ro_proof_1} \end{align} for $r_o>0$. 
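Both results above can be spot-checked by Monte Carlo sampling from the joint density displayed in the first integral, under which $\pi\rho\lambda R_d^2$ is Gamma$(2,1)$-distributed and $R_o\mid R_d$ has density $2r_o/r_d^2$ on $(0,r_d)$ (both facts follow directly from that density). A Python sketch with the illustrative choice $\rho\lambda=1$:

```python
import math, random

random.seed(7)
a = math.pi * 1.0 * 1.0        # pi*rho*lambda with rho = lambda = 1 (illustrative)
tau, n = 0.7, 200_000

cc, cc_below = 0, 0
r_test = 0.3                    # point at which the CC CDF of R_o is checked
for _ in range(n):
    # pi*rho*lambda*R_d^2 ~ Gamma(2,1): sum of two unit exponentials.
    x = -math.log(random.random()) - math.log(random.random())
    r_d = math.sqrt(x / a)
    r_o = r_d * math.sqrt(random.random())   # conditional density 2*r_o/r_d^2
    if r_o <= tau * r_d:                      # user falls in the CC region
        cc += 1
        if r_o <= r_test:
            cc_below += 1

p_cc = cc / n                    # should be close to tau^2
F_hat = cc_below / cc            # empirical CDF of R_o given the CC event
F_exact = 1.0 - math.exp(-a * r_test**2 / tau**2)   # \eqref{eq:CDF_Ro_proof}
```

With $2\times10^5$ samples, both estimates agree with $\tau^2$ and the closed-form $\mathtt{CDF}$ to within Monte Carlo error.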
Now, the conditional $\mathtt{CDF}$ of $R_d$ given $R_o$ for the CE user can be determined as \begin{align} F^{\text{e}}_{R_d\mid R_o}(r_d\mid r_o)&=\mathbb{P}\left[R_d\leq r_d\mid R_d<\frac{r_o}{\tau}\right]\nonumber\\ &=\frac{\mathbb{P}[R_d\leq r_d]}{\mathbb{P}[R_d<\frac{r_o}{\tau}]}\nonumber\\ &=\frac{1-\exp\left(-\pi\rho\lambda(r_d^2-r_o^2)\right)}{1-\exp\left(-\pi\rho\lambda r_o^2(\tau^{-2}-1)\right)}, \label{eq:CDF_Rd_given_Ro_proof_1} \end{align} for $r_o<r_d<\frac{r_o}{\tau}$ and $F^{\text{e}}_{R_d\mid R_o}(r_d\mid r_o)=1$ for $r_d\geq\frac{r_o}{\tau}$. Finally, the joint $\mathtt{pdf}$ of $R_o$ and $R_d$ for the CE user given in Eq. \eqref{eq:pdf_RoRd_CE} directly follows using the Bayes' theorem and Equations \eqref{eq:CDF_Ro_proof_1} and \eqref{eq:CDF_Rd_given_Ro_proof_1}. \subsection{Proof of Theorem \ref{thm:MetaDisMoment}} \label{app:MetaDisMoment} The success probability of the CC user at $\mathbf{y}\in V_o$ conditioned on $\|\mathbf{y}\|=R_o$ and $\Phi$ is \begin{align*} p_c(\chi_c\mid\mathbf{y}, \Phi)&=\mathbb{P}\left({\mathcal{E}}_c\mid\mathbf{y},\Phi\right)\stackrel{(a)}{=}\prod\limits_{\mathbf{x}\in\Phi}\frac{1}{1+R_o^\alpha \chi_c \|\mathbf{x}-\mathbf{y}\|^{-\alpha}}, \end{align*} where step (a) follows from the independence of the fading gains. Let $\mathbf{x}_d=\arg\min_{\mathbf{x}\in\Phi}\|\mathbf{x}-\mathbf{y}\|$ and $\tilde{\Phi}=\Phi\setminus\{\mathbf{x}_d\}$. Recall $R_d=\|\mathbf{x}_d-\mathbf{y}\|$.
The $b$-th moment of $p_c(\chi_c|\mathbf{y},\Phi)$ becomes $M^{c}_b(\chi_c)$ \begin{align*} &={\mathbb{E}}_{R_o,\Phi}\left[\prod\limits_{\mathbf{x}\in{\Phi}}\frac{1}{(1+R_o^\alpha \chi_c \|\mathbf{x}-\mathbf{y}\|^{-\alpha})^b}\right],\\ &={\mathbb{E}}_{R_o,R_d}\left[(1+\chi_c({R_o}/{R_d})^{\alpha})^{-b}\,\mathbb{E}_{\tilde{\Phi}}\left[\prod\limits_{\mathbf{x}\in\tilde{\Phi}}\frac{1}{(1+R_o^\alpha \chi_c \|\mathbf{x}-\mathbf{y}\|^{-\alpha})^b}\mid \mathbf{x}_d\right]\right],\\ &\stackrel{(a)}{=}\mathbb{E}_{R_o,R_d}\left[\frac{\exp\left(-\lambda\int_{{\mathcal{B}}^c_{\mathbf{y}}(R_d)}\left[1-(1+R_o^\alpha \chi_c \|\mathbf{x}-\mathbf{y}\|^{-\alpha})^{-b}\right]{\rm d}\mathbf{x}\right)}{(1+\chi_c({R_o}/{R_d})^{\alpha})^b}\right],\\ &=\mathbb{E}_{R_o,R_d}\left[\frac{\exp\left(-\pi\lambda R_o^2\tilde{{\mathcal{Z}}}_b\left(\chi_c,{R_o}/{R_d}\right)\right)}{(1+\chi_c({R_o}/{R_d})^{\alpha})^b}\right], \end{align*} where ${\mathcal{B}}_{\mathbf{y}}^c(r)={\mathbb{R}}^2\setminus{\mathcal{B}}_{\mathbf{y}}(r)$, $\tilde{{\mathcal{Z}}}_b\left(\chi_c,a\right)=\chi_c^\delta\int_{\chi_c^{-\delta} a^{-2}}^\infty [1-(1+t^{-\frac{1}{\delta}})^{-b}]{\rm d}t,$ and step (a) follows by approximating $\tilde{\Phi}$ with the homogeneous PPP with density $\lambda$ outside of the disk ${\mathcal{B}}_\mathbf{y}\left(R_d\right)$ and the probability generating functional of PPP.
Now, using the joint $\mathtt{pdf}$ of $R_o$ and $R_d$ for the CC user given in \eqref{eq:pdf_RoRd_CC}, we get \begin{align*} M_b^{c}(\chi_c)=\frac{(2\pi\rho\lambda)^2}{\tau^2}&\int_0^\infty r_d\exp(-\pi\rho\lambda r_d^2)\times\\ \int_0^{\tau r_d}&\frac{\exp\left(-\pi\lambda r_o^2\tilde{\mathcal{Z}}_b\left(\chi_c,{r_o}/{r_d}\right)\right)}{(1+\chi_c({r_o}/{r_d})^{\alpha})^b}r_o{\rm d}r_o{\rm d}r_d,\\ \stackrel{(a)}{=}\frac{2(\pi\rho\lambda)^2}{\tau^2}&\int_0^{\tau^2}\frac{1}{(1+\chi_cv^{\frac{1}{\delta}})^b}\times\\ \int_0^\infty r_d^3&\exp\left(-\pi\lambda r_d^2\left[\rho + v\tilde{\mathcal{Z}}_b\left(\chi_c,v^\frac{1}{2}\right)\right]\right){\rm d}r_d{\rm d}v, \end{align*} where step (a) follows using the substitution $(r_o/r_d)^\alpha=v^\frac{1}{\delta}$ and the exchange of the integral orders. Further, solving the inner integral gives $M_b^c(\chi_c)$ as in \eqref{eq:MetaDisMoment_CC_CE} such that ${\mathcal{Z}}_b(\chi_c,v)=\tilde{{\mathcal{Z}}}_b(\chi_c,v^{\frac{1}{2}})$. Following similar steps and using the joint $\mathtt{pdf}$ of $R_o$ and $R_d$ given in \eqref{eq:pdf_RoRd_CE}, the $b$-th moment of the conditional success probability for the CE user can be obtained as \begin{align*} M_b^{e}(\chi_e)=\frac{(2\pi\rho\lambda)^2}{1-\tau^2}&\int_0^\infty r_d\exp(-\pi\rho\lambda r_d^2)\times\\ \int_{\tau r_d}^{r_d} &\frac{\exp\left(-\pi\lambda r_o^2\tilde{\mathcal{Z}}_b\left(\chi_e,{r_o}/{r_d}\right)\right)}{(1+\chi_e({r_o}/{r_d})^{\alpha})^b}r_o{\rm d}r_o{\rm d}r_d,\\ =\frac{2(\pi\rho\lambda)^2}{1-\tau^2}&\int_{\tau^2}^1 \frac{1}{(1+\chi_ev^{\frac{1}{\delta}})^b}\times\\ \int_0^\infty r_d^3 &\exp\left(-\pi\lambda r_d^2\left[\rho + v\tilde{\mathcal{Z}}_b\left(\chi_e,v^\frac{1}{2}\right)\right]\right){\rm d}r_d{\rm d}v. \end{align*} Finally, solving the inner integral gives $M_b^e(\chi_e)$ as in \eqref{eq:MetaDisMoment_CC_CE} where ${\mathcal{Z}}_b(\chi_e,v)=\tilde{{\mathcal{Z}}}_b(\chi_e,v^{\frac{1}{2}})$.
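The kernel $\tilde{\mathcal{Z}}_b$ has no elementary antiderivative for general $b$, but it is straightforward to evaluate numerically. The Python sketch below uses a midpoint rule after the substitution $t=L+s/(1-s)$; for $\alpha=4$ and $b=1$ the integrand reduces to $1/(1+t^2)$, so the closed form $\chi^{1/2}\left(\pi/2-\arctan L\right)$ is available as a check (the parameter values are illustrative):

```python
import math

def Z_tilde(b, chi, a, alpha=4.0, n=200_000):
    """Numerically evaluate Z~_b(chi, a) = chi^delta * int_L^inf
    [1 - (1 + t^(-1/delta))^(-b)] dt, with L = chi^(-delta)*a^(-2) and
    delta = 2/alpha, via a midpoint rule on the map t = L + s/(1-s)."""
    delta = 2.0 / alpha
    L = chi**(-delta) * a**(-2.0)
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        t = L + s / (1.0 - s)
        total += (1.0 - (1.0 + t**(-1.0 / delta))**(-b)) / (1.0 - s)**2
    return chi**delta * total * h

# Closed-form check for alpha = 4, b = 1: integrand is 1/(1+t^2).
chi, a = 1.5, 0.6
L = chi**(-0.5) * a**(-2.0)
exact = math.sqrt(chi) * (math.pi / 2.0 - math.atan(L))
approx = Z_tilde(1, chi, a)
```

The same routine can then be fed into an outer quadrature over $v$ to evaluate the moments $M_b^c$ and $M_b^e$.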
\subsection{Proof of Lemma \ref{lemma:Area_CC_CE_Region}} \label{app:CCCERegion} The $n$-th moment of the area of a random set $A\subset\mathbb{R}^2$ can be obtained as \cite{robbins1944} \begin{figure*} \centering \hspace{-7mm}\includegraphics[width=.27\textwidth]{Case1a.pdf} \includegraphics[width=.33\textwidth]{A_o_nonempty.pdf} \includegraphics[width=.33\textwidth]{A_12_nonempty.pdf}\vspace{-.25cm}\\ ~~~~Case 1~~~~~~~~~~~~~~~~~~~~~~~~~~Case 2a~~~~~~~~~~~~~~~~~~~~~~~~~~Case 2b~~~~~~~~~ \caption{Illustration of the cases when $\{\mathbf{x}_1,\mathbf{x}_2\}\in V_{oe}$ given $\{\mathbf{x}_1,\mathbf{x}_2\}\in V_{o}$ (i.e., $\Phi({\mathcal{C}}_o)=0$). The blue diamonds represent the locations $\{\mathbf{x}_1,\mathbf{x}_2\}$, whereas the red dots represent the locations of the serving and dominant BSs.} \label{fig:Cond_X1X2inVoe} \end{figure*} \begin{equation} \mathbb{E}[|A|^n]=\int_{\mathbb{R}^2}\dots\int_{\mathbb{R}^2}\mathbb{P}[\mathbf{x}_1,\dots, \mathbf{x}_n\in A]{\rm d}\mathbf{x}_1\dots{\rm d}\mathbf{x}_n. \label{eq:Moment_Random_Set} \end{equation} Let ${\mathcal{B}}_{\mathbf{x}}$ and $\tilde{{\mathcal{B}}}_{\mathbf{x}}$ be the disks of radii $r$ and ${r}{\tau^{-1}}$, both centered at $\mathbf{x}\equiv(r,\theta)$, and let ${\mathcal{A}}_{\mathbf{x}}$ be the annulus formed by the two disks $\tilde{{\mathcal{B}}}_{\mathbf{x}}$ and ${\mathcal{B}}_{\mathbf{x}}$. By definition, the point $\mathbf{x}$ belongs to $V_{oc}$ only if $\Phi({{\mathcal{B}}}_{\mathbf{x}})=0$ (i.e., $\mathbf{x}\in V_o$) and $\Phi({\mathcal{A}}_{\mathbf{x}})=0$.
Thus, we have \begin{align*} {\mathbb{P}}[\mathbf{x}\in V_{oc}]&={\mathbb{P}}[\mathbf{x}\in V_{oc}\mid \mathbf{x}\in V_{o}]{\mathbb{P}}[\mathbf{x}\in V_{o}]\\&\stackrel{(a)}{=}\exp\left(-\lambda|{\mathcal{A}}_\mathbf{x}|\right)\exp(-\lambda|{\mathcal{B}}_\mathbf{x}|)\\ &=\exp(-\lambda|\tilde{{\mathcal{B}}}_\mathbf{x}|), \label{eq:Prob_YinVoc}\numberthis \end{align*} where step (a) follows from the independence property and the {\em void probability} of the PPP. However, the point $\mathbf{x}$ belongs to $V_{oe}$ only if $\Phi\left({\mathcal{B}}_{\mathbf{x}}\right)=0$ and $\Phi\left({\mathcal{A}}_{\mathbf{x}}\right)\neq 0$. Thus, we have \begin{align*} {\mathbb{P}}[\mathbf{x}\in V_{oe}]&={\mathbb{P}}[\mathbf{x}\in V_{oe}\mid \mathbf{x}\in V_{o}]{\mathbb{P}}[\mathbf{x}\in V_{o}]\\ &\stackrel{(a)}{=}\left[1-\exp\left(-\lambda|{\mathcal{A}}_\mathbf{x}|\right)\right]\exp(-\lambda|{\mathcal{B}}_\mathbf{x}|)\\ &=\exp(-\lambda|{\mathcal{B}}_\mathbf{x}|)-\exp(-\lambda|\tilde{{\mathcal{B}}}_\mathbf{x}|), \label{eq:Prob_YinVoe}\numberthis \end{align*} where step (a) follows from the independence property and the {\em void probability} of the PPP. Thus, using \eqref{eq:Moment_Random_Set}, \eqref{eq:Prob_YinVoc} and \eqref{eq:Prob_YinVoe}, we get the mean areas of the CC and CE regions as in \eqref{eq:Mean_CCCERegion}. Similarly, the probability of $\{\mathbf{x}_1,\mathbf{x}_2\}\in V_{oc}$ can be directly determined as $ {\mathbb{P}}[\mathbf{x}_1,\mathbf{x}_2\in V_{oc}] = \exp(-\lambda|{\mathcal{C}}_3|)$, where ${\mathcal{C}}_3=\tilde{{\mathcal{B}}}_{\mathbf{x}_1}\cup \tilde{{\mathcal{B}}}_{\mathbf{x}_2}$. Thus, using this and \eqref{eq:Moment_Random_Set}, we can easily obtain the second moment of the area of the CC region as in \eqref{eq:SecondMoment_CCCERegion}. Now, we require the probability of $\{\mathbf{x}_1,\mathbf{x}_2\}\in V_{oe}$ for the evaluation of the second moment of the area of the CE region.
This requires the careful consideration of the intersection of various sets of two disks. Let $d=\|\mathbf{x}_1-\mathbf{x}_2\|=(r_1^2+r_2^2-2r_1r_2\cos(\theta_1-\theta_2))^{\frac{1}{2}}$. Fig. \ref{fig:Cond_X1X2inVoe} shows two cases wherein Case 1 occurs if $d\leq {\tau^{-1}}|r_1-r_2|$, otherwise Case 2 occurs. Now, we derive ${\mathbb{P}}[\mathbf{x}_1,\mathbf{x}_2\in V_{oe}]$ for these cases in the following. \newline{\em Case 1:} In this case, $\{\mathbf{x}_1,\mathbf{x}_2\}\in V_{oe}$ if $\Phi\left({\mathcal{C}}_o\right)=0$ (i.e. $\{\mathbf{x}_1,\mathbf{x}_2\}\in V_o$) and if either $\Phi\left({\mathcal{C}}_2\setminus{\mathcal{C}}_o\right)\neq 0\text{~for~}r_2\leq r_1\text{~or~}\Phi\left({\mathcal{C}}_1\setminus{\mathcal{C}}_o\right)\neq 0 \text{~for~}r_1<r_2,$ where ${\mathcal{C}}_o={\mathcal{B}}_{\mathbf{x}_1}\cup{\mathcal{B}}_{\mathbf{x}_2}$, ${\mathcal{C}}_1=\tilde{{\mathcal{B}}}_{\mathbf{x}_1}\cup{\mathcal{B}}_{\mathbf{x}_2}$ and ${\mathcal{C}}_2={\mathcal{B}}_{\mathbf{x}_1}\cup\tilde{{\mathcal{B}}}_{\mathbf{x}_2}$. Fig. \ref{fig:Cond_X1X2inVoe} (Left) depicts the second condition of this case for $r_2\leq r_1$. Thus, we get \begin{align} {\mathbb{P}}[\mathbf{x}_1,\mathbf{x}_2\in V_{o}] &= \exp(-\lambda|{\mathcal{C}}_o|),\label{eq:Prob_Y1Y2inVoe1}\\ {\mathbb{P}}[\mathbf{x}_1,\mathbf{x}_2\in V_{oe}\mid \mathbf{x}_1,\mathbf{x}_2\in V_{o}]&= [1-\exp(-\lambda(|{\mathcal{C}}_1|-|{\mathcal{C}}_o|))]\mathbbm{1}_{r_1\leq r_2}\nonumber\\ &+ [1-\exp(-\lambda(|{\mathcal{C}}_2|-|{\mathcal{C}}_o|))]\mathbbm{1}_{r_2< r_1}. \label{eq:Prob_Y1Y2inVoe2_Case1} \end{align} Therefore, using \eqref{eq:Prob_Y1Y2inVoe1} and \eqref{eq:Prob_Y1Y2inVoe2_Case1} and the independence property of the PPP, we obtain ${\mathbb{P}}[\mathbf{x}_1,\mathbf{x}_2\in V_{oe}]=$ \begin{align} &[\exp(-\lambda|{\mathcal{C}}_o|)-\exp(-\lambda|{\mathcal{C}}_1|)]\mathbbm{1}_{r_1\leq r_2}\nonumber\\ &+ [\exp(-\lambda|{\mathcal{C}}_o|)-\exp(-\lambda|{\mathcal{C}}_2|)]\mathbbm{1}_{r_2< r_1}.\label{eq:Prob_Y1Y2inVoe_Case1} \end{align} {\em Case 2:} In this case, $\{\mathbf{x}_1,\mathbf{x}_2\}\in V_{oe}$ if $\Phi\left({\mathcal{C}}_o\right)=0$ and if one of the following conditions is met: 2a) $\Phi\left({\mathcal{A}}_o\right)\neq 0$, or 2b) $\Phi\left({\mathcal{C}}_3\setminus{\mathcal{C}}_1\right)\neq 0$ and $\Phi\left({\mathcal{C}}_3\setminus{\mathcal{C}}_2\right)\neq 0$, where ${\mathcal{A}}_o={\mathcal{A}}_{\mathbf{x}_1}\cap{\mathcal{A}}_{\mathbf{x}_2}$. The above two cases are depicted in Fig. \ref{fig:Cond_X1X2inVoe} (Middle and Right). Thus, ${\mathbb{P}}[\mathbf{x}_1,\mathbf{x}_2\in V_{oe}\mid \mathbf{x}_1,\mathbf{x}_2\in V_{o}]=$\vspace{-.2cm} \begin{align} &[1-\exp(-\lambda|{\mathcal{A}}_{o}|)] +\exp(-\lambda|{\mathcal{A}}_{o}|)\times\nonumber\\ &[1-\exp(-\lambda|{\mathcal{C}}_3\setminus{\mathcal{C}}_1|)][1-\exp(-\lambda|{\mathcal{C}}_3\setminus{\mathcal{C}}_2|)].\label{eq:Prob_Y1Y2inVoe2_Case2} \end{align} We have $|{\mathcal{A}}_o|+|{\mathcal{C}}_o|=|{\mathcal{C}}_1|+|{\mathcal{C}}_2|-|{\mathcal{C}}_3|$. Therefore, using \eqref{eq:Prob_Y1Y2inVoe1} and \eqref{eq:Prob_Y1Y2inVoe2_Case2}, we obtain ${\mathbb{P}}[\mathbf{x}_1,\mathbf{x}_2\in V_{oe}]=$ \begin{align} &[\exp(-\lambda(|{\mathcal{C}}_2|-|{\mathcal{C}}_3|))-1][\exp(-\lambda|{\mathcal{C}}_1|)-\exp(-\lambda|{\mathcal{C}}_3|)]\nonumber\\ +&[\exp(-\lambda|{\mathcal{C}}_o|)-\exp(-\lambda(|{\mathcal{C}}_1|+|{\mathcal{C}}_2|-|{\mathcal{C}}_3|))].\label{eq:Prob_Y1Y2inVoe_Case2} \end{align} Now, we derive the areas of ${\mathcal{C}}_o$, ${\mathcal{C}}_1$, ${\mathcal{C}}_2$, and ${\mathcal{C}}_3$.
Let $U(z_1,z_2,u)$ be the area of the union of two circles of radii $z_1$ and $z_2$ with the angular separation of $u$ between their centers w.r.t.~their intersection points. Thus, $U(z_1,z_2,u)$ can be easily obtained as given in Lemma \ref{lemma:Area_CC_CE_Region}. Without loss of generality, we set $\theta_2=0$ and $0\leq \theta_1<\pi$. The two disks in ${\mathcal{C}}_o$, ${\mathcal{C}}_1$, ${\mathcal{C}}_2$ and ${\mathcal{C}}_3$ intersect (if they intersect at all) at angles {\small $u_o=\theta_1$, \begin{align*} u_1&=\arccos\left((\tau^{-1}-\tau)\frac{r_1}{2r_2}+\tau\cos(\theta_1)\right),\\ u_2&=\arccos\left((\tau^{-1}-\tau)\frac{r_2}{2r_1}+\tau\cos(\theta_1)\right) \\\text{and~} u_3&=\arccos\left((1-\tau^2)\frac{r_1^2+r_2^2}{2r_1r_2}+\tau^2\cos(\theta_1)\right), \end{align*}} respectively. The evaluation of $|{\mathcal{C}}_3|$ is required only for Case 2 wherein $\tilde{{\mathcal{B}}}_{\mathbf{x}_1}$ and $\tilde{{\mathcal{B}}}_{\mathbf{x}_2}$ always intersect. Thus, by definition, we have $|{\mathcal{C}}_o|=U(r_1,r_2,u_o)$ and $|{\mathcal{C}}_3|=U\left({r_1}{\tau^{-1}},{r_2}{\tau^{-1}},u_3\right)$. However, $\tilde{{\mathcal{B}}}_{\mathbf{x}_1}$ and ${{\mathcal{B}}}_{\mathbf{x}_2}$ (${{\mathcal{B}}}_{\mathbf{x}_1}$ and $\tilde{{\mathcal{B}}}_{\mathbf{x}_2}$) intersect only if $\frac{r_1}{\tau}<d+r_2$ ($\frac{r_2}{\tau}<d+r_1$), otherwise ${\mathcal{B}}_{\mathbf{x}_2}\subset\tilde{{\mathcal{B}}}_{\mathbf{x}_1}$ (${\mathcal{B}}_{\mathbf{x}_1}\subset\tilde{{\mathcal{B}}}_{\mathbf{x}_2}$). Thus, we get { \begin{align*} |{\mathcal{C}}_1| = \begin{dcases} U\left({r_1}{\tau^{-1}},r_2,u_1\right),\hspace{-.2cm} &\text{if~} {r_1}{\tau^{-1}}<d+r_2,\\ \pi{r_1^2}{\tau^{-2}}, &\text{otherwise}, \end{dcases} \end{align*} \text{and} \begin{align*} |{\mathcal{C}}_2| = \begin{dcases} U\left(r_1,{r_2}{\tau^{-1}},u_2\right),\hspace{-.2cm}&\text{if~} {r_2}{\tau^{-1}}<d+r_1,\\ \pi{r_2^2}{\tau^{-2}}, &\text{otherwise}.
\end{dcases} \end{align*}} Finally, substituting the above expressions in \eqref{eq:Prob_Y1Y2inVoe_Case1} and \eqref{eq:Prob_Y1Y2inVoe_Case2}, and then integrating \eqref{eq:Prob_Y1Y2inVoe_Case1} over the domain $\{d\leq \tau^{-1}|r_1-r_2|\}$ (i.e., Case 1) and \eqref{eq:Prob_Y1Y2inVoe_Case2} over the domain $\{d>\tau^{-1}|r_1-r_2|\}$ (i.e., Case 2), we get the second moment of the area of the CE region as in \eqref{eq:SecondMoment_CCCERegion}. \subsection{Proof of Theorem \ref{thm:TransmissionRate_NOMA}} \label{app:TransmissionRate_NOMA} We first derive the mean transmission rate of the typical CC user at $\mathbf{y}\sim U(V_o)$ as $\bar{R}_c(\chi_c)$ \begin{align*} &=\mathbb{E}_{\mathbf{y},\Phi}\left[\frac{1}{N_{oc}}\log_2(1+\beta_c)\mathbb{E}\left[\mathbbm{1}_{{\mathcal{E}}_c}(\mathtt{SIR}_{c},\mathtt{SIR}_{e})\mid\mathbf{y},\Phi\right]\right],\\ &\stackrel{(a)}{=}\log_2(1+\beta_c)\mathbb{E}_{|V_{oc}|}\left[\sum_{n=1}^{\infty}\frac{1}{n}\mathbb{P}(N_{oc}=n\mid |V_{oc}|)\right]\times\\ &~~~~~~~~~~~~~~~~~~~~\mathbb{E}_{\mathbf{y},\Phi}\left[p_c(\beta_c,\beta_e \mid \mathbf{y},\Phi)\right],\\ &=\log_2(1+\beta_c)\sum_{n=1}^\infty\frac{1}{n}\mathbb{P}(N_{oc}=n)\mathbb{E}_{\mathbf{y},\Phi}\left[p_c(\beta_c,\beta_e \mid \mathbf{y},\Phi)\right], \end{align*} where step (a) follows using Assumption \ref{assumption:Independence_PVCellArea_SuccessProb}. From the definition of the meta distribution, we have $\mathbb{E}_{\mathbf{y},\Phi}[p_c(\beta_c,\beta_e \mid \mathbf{y},\Phi)]=M_{1}^{c}(\chi_c,1)$ for $q_c=q_e=1$. Hence, using the distribution of the CC load given in \eqref{eq:PMF_Noc}, we obtain $\bar{R}_c(\chi_c)$ as in \eqref{eq:MeanTransmissionRate_CC_CE}. Similarly, using the distribution of the CE load given in \eqref{eq:PMF_Noe}, we obtain the mean transmission rate $\bar{R}_e(\chi_e)$ of the typical CE user as in \eqref{eq:MeanTransmissionRate_CC_CE}.
Now, we obtain the distribution of the conditional transmission rate of the typical CC user as \begin{align*} &{\mathcal{R}}_c(\mathtt{r_{c}};\chi_c)=\mathbb{P}\left[p_c(\beta_c,\beta_e\mid\mathbf{y},\Phi)\leq \mathtt{r_{c}} N_{oc}\log_2(1+\beta_c)^{-1}\right],\\ &\stackrel{(a)}{=}\mathbb{E}_{N_{oc}}\left[\mathbb{P}\left[p_c(\beta_c,\beta_e\mid\mathbf{y},\Phi)\leq \mathtt{r_{c}} N_{oc}\log_2(1+\beta_c)^{-1}\mid N_{oc}\right]\right],\\ &\stackrel{(b)}{=}\mathbb{E}_{N_{oc}}\left[I\left(\min\left(\mathtt{r_{c}} N_{oc}\log_2(1+\beta_c)^{-1},1\right);\kappa_{1c},\kappa_{2c}\right)\right], \end{align*} where step (a) follows using Assumption \ref{assumption:Independence_PVCellArea_SuccessProb}, and step (b) follows using the beta approximation of the meta distribution of the success probability (see \eqref{eq:BetaApp}). Similarly, we obtain the distribution of the conditional transmission rate of the typical CE user as in \eqref{eq:RateCDF_UpperBound_CE}.
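The beta-approximated rate distribution above is straightforward to evaluate numerically. The sketch below (ours, illustrative only) computes $\mathbb{E}_{N_{oc}}\left[I\left(\min\left(\mathtt{r_{c}} N_{oc}\log_2(1+\beta_c)^{-1},1\right);\kappa_{1c},\kappa_{2c}\right)\right]$, assuming $I(\cdot;\kappa_1,\kappa_2)$ denotes the regularized incomplete beta function of the beta approximation; the load pmf and the shape parameters passed in are placeholders, to be taken in practice from \eqref{eq:PMF_Noc} and the moment-matched meta distribution.

```python
import math

def reg_inc_beta(x, a, b, steps=2000):
    """Regularized incomplete beta I_x(a, b), via trapezoidal integration
    of the Beta(a, b) density (adequate for shape parameters a, b >= 1)."""
    x = min(max(x, 0.0), 1.0)
    if x == 0.0:
        return 0.0
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    def pdf(t):
        if t <= 0.0:
            return 0.0 if a > 1 else math.exp(log_norm)
        if t >= 1.0:
            return 0.0 if b > 1 else math.exp(log_norm)
        return math.exp(log_norm + (a - 1)*math.log(t) + (b - 1)*math.log(1 - t))
    h = x / steps
    total = 0.5 * (pdf(0.0) + pdf(x)) + sum(pdf(k * h) for k in range(1, steps))
    return min(total * h, 1.0)

def rate_cdf(rate, beta_c, k1, k2, load_pmf):
    """E_N[ I(min(rate*N / log2(1+beta_c), 1); k1, k2) ].  `load_pmf` maps a
    load value n to P(N = n); a placeholder for the paper's load pmf."""
    c = math.log2(1.0 + beta_c)
    return sum(p * reg_inc_beta(min(rate * n / c, 1.0), k1, k2)
               for n, p in load_pmf.items())
```

The truncation of the load support is left to the caller, since the paper's load distributions have infinite support.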
\section{Introduction}\label{Introduction} {\it Extended affine Weyl groups} are the Weyl groups of {\it extended affine root systems}, which are higher nullity generalizations of affine and finite root systems. In 1985, K. Saito [S] introduced axiomatically the notion of an extended affine root system; he was interested in possible applications to the study of singularities. Extended affine root systems also arise as the root systems of a class of infinite dimensional Lie algebras called {\it extended affine Lie algebras}. A systematic study of extended affine Lie algebras and their root systems is given in [AABGP]; in particular, a set of axioms, different from those given by Saito \cite{S}, is extracted from the algebras for the corresponding root systems. In [A2], the relation between the axioms of [S] and [AABGP] for extended affine root systems is clarified. Let $R$ be an extended affine root system of nullity $\nu$ (see Definition \ref{ears}) and ${\mathcal V}$ be the real span of $R$, which by definition is equipped with a positive semi-definite bilinear form $I$. We consider $R$ as a subset of a hyperbolic extension $\tilde{\mathcal V}$ of ${\mathcal V}$, where $\tilde{\mathcal V}$ is equipped with a non-degenerate extension $\tilde{I}$ of $I$ (see Section \ref{hyperbolic}). Then the extended affine Weyl group ${\mathcal W}$ of $R$ is by definition the subgroup of the orthogonal group of $\tilde{\mathcal V}$ generated by reflections based on the set of non-isotropic roots $R^{\times}$ of $R$. This work is the first output of a three-step project on the presentations of extended affine Weyl groups and its application to the study of extended affine Lie algebras. In the first step, we study finite presentations for extended affine Weyl groups, where in this work we restrict ourselves to the simply laced cases.
In the second step, the results of the current work will be applied to investigate the existence of the so-called {\it presentation by conjugation} for the simply laced extended affine Weyl groups (see \cite{Kr} and \cite{A3}). Finally, in the third step, we will apply the results of the second step to investigate the validity of certain classical results for the class of simply laced extended affine Lie algebras. Little is known about the presentations of extended affine Weyl groups. In fact, if $\nu>2$, there is no known finite presentation for this class, and for $\nu=2$ there is only one known finite presentation, called the {\it generalized Coxeter presentation} (see \cite{ST}). \begin{comment} We are mostly interested in finding a particular finite presentation for ${\mathcal W}$ and also for an intrinsic subgroup $\mathcal H$ of ${\mathcal W}$, called the {\it Heisenberg-like subgroup} of ${\mathcal W}$ (see Section \ref{heisenberg-sim}). We now briefly recall the nature of the known presentations for ${\mathcal W}$. It is known in the literature that a finite or affine root system has a so-called {\it presentation by conjugation}, namely it is the group defined by generators $\hat{w}_\alpha$, $\alpha\in R^{\times}$, and the relations $\hat{w}_\alpha^2=1$ and $\hat{w}_\alpha\hat{w}_\beta\hat{w}_\alpha=\hat{w}_{w_\alpha(\beta)}$, $\alpha,\beta\in R^{\times}$, where $w_\alpha$ is the reflection based on $\alpha$. This is a rather simple presentation; however, it is a finite presentation only if the root system is finite. In [K], the author shows that this presentation holds for all extended affine Weyl groups of simply laced type of rank $>1$. In [A3,4], the author extends these results to all reduced extended affine Weyl groups that either have nullity $\leq 2$ or have the property that the commutator subgroup of $\mathcal H$ and its center coincide. Another well-known presentation for a finite or affine Weyl group is the so-called Coxeter presentation [H].
It is a finite presentation which one can read from the Dynkin diagram of the root system. Using the notion of elliptic Dynkin diagrams associated to {\it elliptic} extended affine root systems (extended affine root systems of nullity $2$) by K. Saito [S], a presentation called the {\it generalized Coxeter presentation} is given by him and T. Takebayashi [ST] for elliptic extended affine Weyl groups. Similar to the finite case, the generators and relations can be read from the diagrams. \end{comment} We give a finite presentation for simply laced extended affine Weyl groups which is nullity-free if $\hbox{rank}>1$; for $\hbox{rank}=1$ it is given for nullities $\leq 3$ (see Theorem \ref{presen-4-sim}). Our presentation depends heavily on the classification, up to similarity, of the semilattices (see Definition \ref{deflattice}) appearing in the structure of extended affine root systems (see (\ref{AABGP})). Since this classification is known for types $A_\ell$ ($\ell\geq 2$), $D_\ell$, $E_\ell$ for arbitrary nullity and for type $A_1$ for nullity $\leq 3$ (see [AABGP, Chapter II]), our presentation is explicit for the mentioned types and nullities. The paper is arranged as follows. In Section \ref{hyperbolic}, we obtain several results regarding the structure of certain reflection groups. The results of this section are similar to but more general than those in \cite{MS} and \cite{A4}, and are applicable to a wide range of reflection groups including extended affine Weyl groups. \begin{comment} In Section \ref{hyperbolic}, we study certain reflection subgroups of the orthogonal group of a real finite dimensional vector space equipped with a non-degenerate bilinear form. They include extended affine Weyl groups and so the results of this section provide the ground for the further study of extended affine Weyl groups in the subsequent sections and also for an ongoing work on reduced, non-simply laced extended affine Weyl groups.
The transformations (\ref{xi}), which are generalizations of those given in [MS] and [A1,3,4], are the most important tool in our presentation. \end{comment} In Section \ref{heisenberg-sim}, we introduce a notion of {\it supporting class} for the semilattices involved in the structure of extended affine root systems. This notion plays a crucial role in our work. Also we study an intrinsic subgroup $\mathcal H$ of an extended affine Weyl group ${\mathcal W}$ which we call a {\it Heisenberg-like group}. The center of ${\mathcal H}$ is fully analyzed in terms of the supporting class of the root system. This is a basic achievement which distinguishes the results of this section from those in \cite{A4} and \cite{MS} (see Corollary \ref{centers-1}). In particular it, together with other results, provides a unique expression of Weyl group elements in terms of the elements of a finite Weyl group, certain well-known linear transformations belonging to the corresponding Heisenberg-like group and central elements (see Proposition \ref{w-form}). We encourage the reader to compare this with the similar results in \cite{MS} and \cite{A4}. \begin{comment} It is in fact the subgroup generated by elements of the form $w_{\alpha+\sigma}w_\alpha$, $\sigma$ isotropic and $\alpha,\alpha+\sigma\in R^{\times}$. \end{comment} The main results are given in Sections \ref{presentation-sim} and \ref{presentation-W-sim}, where we give our explicit presentations for the extended affine Weyl group ${\mathcal W}$ and the Heisenberg-like group $\mathcal H$. \begin{comment}In Section \ref{comparison}, we compare the elements of our presentation, for type $A_1$, with those of [ST]. \end{comment} The presentation is obtained as follows. First, by analyzing the semilattice involved in the structure of $R$, we obtain a finite presentation for $\mathcal H$.
The generators and relations depend on the nullity, the supporting class of the involved semilattice and the Cartan matrix of the corresponding finite type. Next, using the fact that ${\mathcal W}=\dot{{\mathcal W}}\ltimes\mathcal H$, where $\dot{{\mathcal W}}$ is a finite Weyl group of the same type as $R$, we obtain our presentation for ${\mathcal W}$. So this presentation consists of three parts: a presentation for $\mathcal H$, the Coxeter presentation for the finite Weyl group $\dot{{\mathcal W}}$ and the relations imposed by the semidirect product (see Theorems \ref{presen-1-sim} and \ref{presen-4-sim}). For a systematic study of extended affine Lie algebras and their root systems we refer the reader to [AABGP]. For the study of extended affine Weyl groups we refer the reader to [S], [MS], [A1,2,3,4], [ST] and [T]. \vspace{5mm} \section{\bf REFLECTION GROUPS}\label{hyperbolic} \setcounter{equation}{0} Let ${\mathcal V}$ be a finite dimensional real vector space equipped with a non-trivial symmetric bilinear form $I=(\cdot,\cdot)$ of nullity $\nu$. An element $\alpha$ of ${\mathcal V}$ is called {\it nonisotropic} ({\it isotropic}) if $(\alpha,\alpha)\not=0$ ($(\alpha,\alpha)=0$). We denote the set of nonisotropic elements of a subset $A$ by $A^\times$. If $\alpha$ is nonisotropic, we set $\alpha^{\vee}=2\alpha/(\alpha,\alpha)$. Let ${\mathcal V}^{0}$ be the radical of the form and $\dot{\mathcal V}$ be a fixed complement of ${\mathcal V}^{0}$ in ${\mathcal V}$ of dimension $\ell$. Throughout this section we fix a basis $\{\alpha_1,\ldots,\alpha_\ell\}$ of $\dot{\mathcal V}$, a basis $\{\sigma_1,\ldots,\sigma_\nu\}$ of ${\mathcal V}^0$ and a basis $\{\lambda_1,\ldots,\lambda_\nu\}$ of $({\mathcal V}^0)^*$. We enlarge the space ${\mathcal V}$ to an $(\ell+2\nu)$-dimensional vector space as follows.
Set \begin{equation}\label{hyp} \tilde{\mathcal V}:={\mathcal V}\oplus ({\mathcal V}^{0})^*, \end{equation} where $({\mathcal V}^{0})^*$ is the dual space of ${\mathcal V}^{0}$. Now we extend the form $(\cdot,\cdot)$ on ${\mathcal V}$ to a non-degenerate form, denoted again by $(\cdot,\cdot)$, on $\tilde{\mathcal V}$ as follows: \begin{equation}\label{hyp1} \begin{array}{ll} \bullet& (.,.)_{|_{{\mathcal V}\times{\mathcal V}}}:=(.,.),\vspace{2mm}\\ \bullet& (\dot{\mathcal V},({\mathcal V}^{0})^*)=(({\mathcal V}^{0})^*,({\mathcal V}^{0})^*):=0,\vspace{2mm}\\ \bullet& (\sigma_r,\lambda_s):=\delta_{r,s},\quad 1\leq r,s\leq\nu. \end{array} \end{equation} The pair $(\tilde{\mathcal V},I)$ is called a {\it hyperbolic extension} of $({\mathcal V},I)$. Let $\hbox{O}(\tilde{\mathcal V},I)$ be the orthogonal subgroup of $GL(\tilde{\mathcal V})$, with respect to $I=(\cdot,\cdot)$. We also set $$\hbox{FO}(\tilde{\mathcal V},I)=\{w\in \hbox{O}(\tilde{\mathcal V},I)\mid w(\delta)=\delta\hbox{ for all }\delta\in{\mathcal V}^0\}. $$ For $\alpha\in{\mathcal V}^\times$, the element $w_\alpha\in \hbox{FO}(\tilde{\mathcal V},I)$ defined by $$w_\alpha(u)=u-(u,\alpha^{\vee})\alpha,\qquad (u\in\tilde{\mathcal V}),$$ is called the {\it reflection based on} $\alpha$. It is easy to check that \begin{equation}\label{normal} ww_{\alpha}w^{-1}=w_{w(\alpha)}\qquad (w\in \hbox{O}(\tilde{\mathcal V},I)). \end{equation} For a subset $A$ of ${\mathcal V}$, the group \begin{equation}\label{reflection} {\mathcal W}_A=\langle w_\alpha\mid\alpha\in A^\times\rangle \end{equation} is called the {\it reflection group in }$\hbox{FO}(\tilde{\mathcal V},I)$ {\it based on} $A$. For $\alpha\in{\mathcal V}$ and $\sigma\in{\mathcal V}^{0}$, we define $T^{\sigma}_{\alpha}\in\hbox{End}(\tilde{\mathcal V})$ by \begin{equation}\label{xi} T^{\sigma}_{\alpha}(u):=u-(\sigma,u)\alpha+(\alpha,u)\sigma -\frac{(\alpha,\alpha)}{2}(\sigma,u)\sigma\quad(u\in\tilde{\mathcal V}).
\end{equation} The basic properties of the linear maps $T_\alpha^\sigma$ are listed in the following lemma. The terms of the form $[x,y]$ appearing in the lemma denote the commutator $x^{-1}y^{-1}xy$ of two elements $x,y$ in a group. \begin{lem}\label{new-4} Let $\alpha,\beta,\gamma\in{\mathcal V}$, $\sigma,\delta,\tau\in{\mathcal V}^0$ and $w\in\hbox{FO}(\tilde{\mathcal V},I)$. Then (i) $T^{r\sigma}_{\alpha}=T^{\sigma}_{r\alpha}$, $r\in{\mathbb R}$, (ii) $T_{\alpha}^{\sigma+\delta}=T_{\alpha}^{\sigma}T_{\alpha}^{\delta}T_{\delta}^{(\alpha,\alpha)\frac{\sigma}{2}}$, (iii) $T^\sigma_{\alpha+\beta}=T^\sigma_\alpha T^{\sigma}_\beta$, (iv) $[T^{\sigma}_\alpha,T^{\delta}_\beta]=T_\sigma^{(\alpha,\beta)\delta}$, (v) $[T_{\alpha}^{\sigma}T_{\beta}^{\delta},T_{\gamma}^{\tau}]=[T_{\alpha}^{\sigma},T_{\gamma}^{\tau}] [T_{\beta}^{\delta},T_{\gamma}^{\tau}]$, (vi) $wT_{\alpha}^{\sigma}w^{-1}=T_{w(\alpha)}^{\sigma}$, (vii) $ T^\sigma_{\alpha^{\vee}}=w_{\alpha+\sigma}w_{\alpha}$, $\alpha\in{\mathcal V}^{\times},$ (viii) $(T^\delta_\sigma)^{-1}=T^\sigma_\delta$. \end{lem} \noindent{\bf Proof. } The proof of each statement follows from the definition of the maps $T^\sigma_\alpha$ by straightforward computations.\hfill$\Box$\vspace{5mm} Part (vii) of Lemma \ref{new-4} has in fact been our motivation for defining the maps $T_\alpha^\sigma$. If we denote by $Z(G)$ the center of a group $G$, then we have from parts (iii), (vi) and (vii) of Lemma \ref{new-4} that for $\alpha\in{\mathcal V}$, $\sigma,\delta\in{\mathcal V}^0$, \begin{equation}\label{neww} T_\alpha^\sigma\in\hbox{FO}(\tilde{\mathcal V},I)\quad\hbox{and}\quad T^\sigma_\delta\in Z\big(\hbox{FO}(\tilde{\mathcal V},I)\big). \end{equation} \begin{lem}\label{formul-T} Let $\alpha\in{\mathcal V}$, $n_r\in{\mathbb R}$ and $1\leq r\leq\nu$.
Then $$T_{\alpha}^{\Sigma_{r=1}^{\nu}n_{r}\sigma_{r}}=\prod_{r=1}^{\nu}T_{\alpha}^{n_{r}\sigma_{r}} \prod_{1\leq r<s\leq \nu}T^{n_{r}n_{s}\frac{(\alpha,\alpha)}{2}\sigma_{r}}_{\sigma_s}.$$ \end{lem} \noindent{\bf Proof. } For $ 1\leq r\leq\nu$, set $\delta_{r}:=\sum_{i=r}^{\nu}n_{i}\sigma_{i}$. Using Lemma \ref{new-4}~(ii)-(iii) and (\ref{neww}), we have \begin{eqnarray*} \qquad\qquad\qquad T_{\alpha}^{\Sigma_{r=1}^{\nu}n_{r}\sigma_{r}}&=&T_{\alpha}^{n_{1}\sigma_{1}+\delta_{2}} =T_{\alpha}^{n_{1}\sigma_{1}}T_{\alpha}^{\delta_{2}} T_{\delta_{2}}^{n_1\frac{(\alpha,\alpha)}{2}\sigma_{1}}\\ &=&T_{\alpha}^{n_{1}\sigma_{1}}T_{\alpha}^{n_2\sigma_{2}}T_{\alpha}^{\delta_{3}} T_{\delta_{3}}^{n_{2}\frac{(\alpha,\alpha)}{2}\sigma_{2}}T_{\delta_{2}}^{n_{1}\frac{(\alpha,\alpha)}{2}\sigma_{1}}\\ &\vdots&\\ &=&\prod_{r=1}^{\nu}T_{\alpha}^{n_{r}\sigma_{r}}\prod_{r=1}^{\nu-1} T_{\delta_{r+1}}^{n_{r}\frac{(\alpha,\alpha)}{2}\sigma_{r}}\\ &=&\prod_{r=1}^{\nu}T_{\alpha}^{n_{r}\sigma_{r}}\prod_{r=1}^{\nu-1} \prod_{s=r+1}^{\nu}T_{\sigma_{s}}^{n_rn_s\frac{(\alpha,\alpha)}{2}\sigma_{r}}\\ \qquad\qquad\qquad&=&\prod_{r=1}^{\nu}T_{\alpha}^{n_{r}\sigma_{r}}\prod_{1\leq r<s\leq \nu}T^{n_rn_s\frac{(\alpha,\alpha)}{2}\sigma_{r}}_{\sigma_s}.\hbox{\qquad\qquad\qquad\hfill$\Box$\vspace{5mm}} \end{eqnarray*} For a subset $A$ of ${\mathcal V}$, consider the subgroup \begin{equation}\label{hb} {\mathcal H}(A):=\langle w_{\alpha+\sigma}w_\alpha\mid \alpha\in A^\times, \;\sigma\in{\mathcal V}^{0},\;\alpha+\sigma\in A\rangle \end{equation} of ${\mathcal W}_A$. We note that ${\mathcal H}(A)\leqslant {\mathcal H}({\mathcal V})=\langle w_{\alpha+\sigma}w_\alpha\mid \alpha\in {\mathcal V}^\times, \;\sigma\in{\mathcal V}^{0}\rangle$. We recall that a group $H$ is called {\it two-step nilpotent} if the commutator $[H,H]$ is contained in the center $Z(H)$ of $H$.
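The identities of Lemma \ref{new-4} lend themselves to a direct numerical sanity check. The sketch below (ours, illustrative only, not part of the original text) verifies parts (vii) and (iv) in the smallest interesting hyperbolic extension, with $\ell=1$ and $\nu=2$, using the Gram pairings of (\ref{hyp1}); all names are ours.

```python
from fractions import Fraction as F

# Hyperbolic extension with ell = 1, nu = 2: ordered basis
# (alpha_1, sigma_1, sigma_2, lambda_1, lambda_2); Gram pairings per (hyp1):
# (alpha_1, alpha_1) = 2, (sigma_r, lambda_s) = delta_{rs}, all others 0.
def form(u, v):
    return 2*u[0]*v[0] + u[1]*v[3] + u[3]*v[1] + u[2]*v[4] + u[4]*v[2]

def refl(a, u):                       # w_a(u) = u - (u, a^vee) a
    c = F(2*form(u, a), form(a, a))
    return tuple(F(x) - c*y for x, y in zip(u, a))

def T(a, s, u):                       # the linear map (xi)
    ca, cs = form(s, u), form(a, u)
    return tuple(F(x) - ca*y + cs*z - F(form(a, a), 2)*ca*z
                 for x, y, z in zip(u, a, s))

alpha  = (1, 0, 0, 0, 0)              # alpha^vee = alpha since (alpha,alpha) = 2
sigma1 = (0, 1, 0, 0, 0)
sigma2 = (0, 0, 1, 0, 0)
a_plus = (1, 1, 0, 0, 0)              # alpha + sigma_1
two_s2 = (0, 0, 2, 0, 0)              # 2*sigma_2

basis = [tuple(int(i == j) for i in range(5)) for j in range(5)]
for u in basis + [(2, -3, 5, 7, -1)]:
    # Lemma new-4(vii): T^{sigma}_{alpha^vee} = w_{alpha+sigma} w_alpha
    assert T(alpha, sigma1, u) == refl(a_plus, refl(alpha, u))
    # Lemma new-4(iv), rewritten as x y = y x [x,y] with x = T^{sigma_1}_alpha,
    # y = T^{sigma_2}_alpha and [x,y] = T_{sigma_1}^{(alpha,alpha) sigma_2}
    lhs = T(alpha, sigma1, T(alpha, sigma2, u))
    rhs = T(alpha, sigma2, T(alpha, sigma1, T(sigma1, two_s2, u)))
    assert lhs == rhs
```

Note that the commutator $T_{\sigma_1}^{2\sigma_2}$ acts nontrivially only on $\lambda_2$, in accordance with (\ref{neww}).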
We also recall that in a two-step nilpotent group $H$, the commutator is bi-multiplicative, that is, \begin{equation}\label{bicomm} [\prod_{i=1}^{n}x_{i},\prod_{j=1}^{m}y_{j}]=\prod_{i=1}^{n}\prod_{j=1}^{m} [x_{i},y_{j}] \end{equation} for all $x_{i},y_{j}\in H.$ \begin{lem}\label{xi-4} Let $A$ be a subset of ${\mathcal V}$. Then (i) ${\mathcal H}(A)$ is a two-step nilpotent group. (ii) If $X$ is a generating subset of ${\mathcal H}(A)$, then $[X,X]$ generates the commutator $[{\mathcal H}(A),{\mathcal H}(A)]$ of ${\mathcal H}(A)$. \end{lem} \noindent{\bf Proof. } (i) If $w_{\alpha+\sigma}w_\alpha$ and $w_{\beta+\delta}w_\beta$ are two generators of ${\mathcal H}(A)$, $\alpha,\beta\in A^\times$, $\alpha+\sigma,\beta+\delta\in A$, then by Lemma \ref{new-4}(vii) and (iv), we have $$ [w_{\alpha+\sigma}w_\alpha,w_{\beta+\delta}w_\beta]=[T_{\alpha^{\vee}}^\sigma,T_{\beta^{\vee}}^{\delta}]=T_\sigma^{(\alpha^{\vee},\beta^{\vee})\delta}. $$ Now the result follows from (\ref{neww}). (ii) is an immediate consequence of (i) and (\ref{bicomm}).\hfill$\Box$\vspace{5mm} \begin{lem}\label{xi-5} ${\mathcal H}({\mathcal V})=\langle T_{\alpha_i}^{n_{i,r}\sigma_r}, T_{\sigma_r}^{m_{r,s}\sigma_s}\mid 1\leq i\leq\ell;\; 1\leq r,s\leq\nu;\; n_{i,r}, m_{r,s}\in{\mathbb R}\rangle.$ \end{lem} \noindent{\bf Proof. } From parts (iii) and (vii) of Lemma \ref{new-4} and Lemma \ref{formul-T} we see that ${\mathcal H}({\mathcal V})$ is a subset of the right hand side. Conversely, it follows from Lemma \ref{new-4}(vii) and (iv) that the right hand side is a subset of ${\mathcal H}({\mathcal V})$. \hfill$\Box$\vspace{5mm} \begin{lem}\label{new-xi-6} Let $h=\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}T_{\alpha_i}^{n_{i,r}\sigma_r}\prod_{1\leq r<s\leq\nu} T_{\sigma_r}^{m_{r,s}\sigma_s}$, where $n_{i,r}, m_{r,s}\in{\mathbb R}$.
If $\beta_j=\sum_{i=1}^{\ell}n_{i,j}\alpha_i$, $1\leq j\leq\nu$, then $$\begin{tabular}{c} $h(\lambda_j)=\lambda_j- \beta_j- \frac{(\beta_j,\beta_j)}{2}\sigma_j-\sum_{ 1\leq r\leq j-1}m_{r,j}\sigma_r+\sum_{ j+1\leq s\leq\nu}m_{j,s}\sigma_s.$ \end{tabular}$$ \end{lem} \noindent{\bf Proof. } Let $1\leq j\leq\nu$ and $\dot{\a}\in\dot{\mathcal V}$. Then from (\ref{xi}) and (\ref{hyp1}), it follows that $$T_{\dot{\a}}^{\sigma_j}(\lambda_j)=\lambda_j- \dot{\a}-\frac{(\dot{\a},\dot{\a})}{2}\sigma_j\quad\hbox{and}\quad T_{\sigma_r}^{\sigma_s}(\lambda_j)=\lambda_j-\delta_{s,j}\sigma_r+\delta_{r,j}\sigma_s,\quad 1\leq r,s\leq\nu,$$ and so using (\ref{neww}) and Lemma \ref{new-4}(ii)-(iii), we have \begin{eqnarray*} h(\lambda_j)&=&\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}T_{\alpha_i}^{n_{i,r}\sigma_r}\prod_{1\leq r<s\leq\nu} T_{\sigma_r}^{m_{r,s}\sigma_s}(\lambda_j)\\ &=&\prod_{r=1}^{\nu} \prod_{i=1}^{\ell}T_{\alpha_i}^{n_{i,r}\sigma_r}\prod_{j+1\leq s\leq\nu} T_{\sigma_j}^{m_{j,s}\sigma_s}\prod_{ 1\leq r\leq j-1} T_{\sigma_r}^{m_{r,j}\sigma_j}\hspace{-5mm}\prod_{\{1\leq r<s\leq\nu\mid r,s\neq j\}} \hspace{-5mm}T_{\sigma_r}^{m_{r,s}\sigma_s}(\lambda_j)\\ &=&\prod_{r=1}^{\nu} \prod_{i=1}^{\ell}T_{\alpha_i}^{n_{i,r}\sigma_r} T_{\sigma_j}^{\sum_{j+1\leq s\leq\nu}m_{j,s}\sigma_s} T_{\sum_{ 1\leq r\leq j-1}m_{r,j}\sigma_r}^{\sigma_j}(\lambda_j)\\ &=&\prod_{r=1}^{\nu} \prod_{i=1}^{\ell}T_{\alpha_i}^{n_{i,r}\sigma_r}T_{\sigma_j}^{\sum_{j+1\leq s\leq\nu}m_{j,s}\sigma_s}(\lambda_j-\sum_{ 1\leq r\leq j-1}m_{r,j}\sigma_r)\\ &=&\prod_{r=1}^{\nu} \prod_{i=1}^{\ell}T_{\alpha_i}^{n_{i,r}\sigma_r}(\lambda_j- \sum_{ 1\leq r\leq j-1}m_{r,j}\sigma_r+\sum_{j+1\leq s\leq\nu}m_{j,s}\sigma_s)\\ &=&\prod_{r=1}^{\nu} \prod_{i=1}^{\ell}T_{\alpha_i}^{n_{i,r}\sigma_r}(\lambda_j)- \sum_{ 1\leq r\leq j-1}m_{r,j}\sigma_r+\sum_{j+1\leq s\leq\nu}m_{j,s}\sigma_s\\ \end{eqnarray*} \begin{eqnarray*} &=&\prod_{i=1}^{\ell}T_{\alpha_i}^{n_{i,j}\sigma_j}(\lambda_j)- \sum_{ 1\leq r\leq j-1}m_{r,j}\sigma_r+\sum_{j+1\leq
s\leq\nu}m_{j,s}\sigma_s\\ &=&T_{\beta_j}^{\sigma_j}(\lambda_j)-\sum_{ 1\leq r\leq j-1}m_{r,j}\sigma_r+ \sum_{j+1\leq s\leq\nu}m_{j,s}\sigma_s\\ \qquad\qquad&=&\lambda_j- \beta_j- \frac{(\beta_j,\beta_j)}{2}\sigma_j-\sum_{ 1\leq r\leq j-1}m_{r,j}\sigma_r +\sum_{j+1\leq s\leq\nu}m_{j,s}\sigma_s.\hbox{\qquad\qquad\hfill$\Box$\vspace{5mm}} \end{eqnarray*} \begin{lem}\label{xi-6} Each element $h\in {\mathcal H}({\mathcal V})$ has a unique expression in the form \begin{equation}\label{exp} h=h(n_{i,r},m_{r,s}):= \prod_{r=1}^{\nu}\prod_{i=1}^{\ell}T_{\alpha_i}^{n_{i,r}\sigma_r}\prod_{1\leq r<s\leq\nu} T_{\sigma_r}^{m_{r,s}\sigma_s}\quad(n_{i,r}, m_{r,s}\in{\mathbb R}). \end{equation} \end{lem} \noindent{\bf Proof. } Let $h\in {\mathcal H}({\mathcal V})$. From (\ref{neww}), Lemma \ref{xi-5} and Lemma \ref{new-4}(iv), it follows that $h$ has an expression in the form (\ref{exp}). Now let $h(n'_{i,r},m'_{r,s})$ be another expression of $h$ in the form (\ref{exp}). Then by acting these two expressions of $h$ on the $\lambda_j$'s, $1\leq j\leq\nu$, we get from Lemma \ref{new-xi-6} that $n_{i,r}=n'_{i,r}$ and $m_{r,s}=m'_{r,s}$ for all $1\leq i\leq \ell$ and $1\leq r<s\leq\nu$.\hfill$\Box$\vspace{5mm} \begin{lem}\label{xi-7} (i) $Z\big({\mathcal H}({\mathcal V})\big)=\langle T_{\sigma_r}^{m_{r,s}\sigma_s}\mid 1\leq r<s\leq\nu,\;m_{r,s}\in{\mathbb R}\rangle.$ (ii) For any fixed nonzero real numbers $m_{r,s}$, $1\leq r<s\leq\nu$, the group $\langle T_{\sigma_r}^{m_{r,s}\sigma_s}\mid 1\leq r<s\leq\nu\rangle$ is free abelian of rank $\frac{\nu(\nu-1)}{2}$. (iii) ${\mathcal H}({\mathcal V})$ is a torsion-free group. \end{lem} \noindent{\bf Proof. } (i) By (\ref{neww}), Lemmas \ref{new-4}(vii) and \ref{xi-5}, it is clear that the right hand side in the statement is a subset of the left hand side. To show the reverse inclusion, let $h\in Z({\mathcal H}({\mathcal V}))$. Consider an expression $h(n_{i,r},m_{r,s})$ of $h$ in the form (\ref{exp}).
We must show that $n_{i,r}=0$ for all $1\leq i\leq\ell$ and $1\leq r\leq\nu$. Since ${\mathcal H}({\mathcal V})$ is a two-step nilpotent group, we have from (\ref{bicomm}) and Lemmas \ref{xi-5} and \ref{new-4}(iv) that for all $1\leq j\leq\ell$ and $1\leq s\leq \nu$, \begin{eqnarray*} 1=[h,T_{\alpha_j}^{\sigma_s}]&=&\prod_{r=1}^{\nu}\prod_{i=1}^{\ell} [T_{\alpha_i}^{n_{i,r}\sigma_r},T_{\alpha_j}^{\sigma_s}]\\ &=& \prod_{r=1}^{\nu}\prod_{i=1}^{\ell}T_{\sigma_r}^{n_{i,r}(\alpha_{i},\alpha_{j})\sigma_s}= \prod_{r=1}^{\nu}T_{\sigma_r}^{(\sum_{i=1}^{\ell}n_{i,r}\alpha_{i},\alpha_{j})\sigma_s}. \end{eqnarray*} Therefore by Lemma \ref{xi-6}, $\sum_{i=1}^{\ell}n_{i,r}(\alpha_{i},\alpha_{j})=0$ for all $1\leq j\leq\ell$ and $1\leq r\leq\nu$. But $\dot{\mathcal V}=\sum_{i=1}^{\ell}{\mathbb R}\alpha_{i}$ and the form restricted to $\dot{\mathcal V}$ is non-degenerate, so $n_{i,r}=0$ for all $1\leq i\leq\ell$, $1\leq r\leq\nu$. (ii) We show that $\{T^{m_{r,s}\sigma_s}_{\sigma_r}\mid 1\leq r<s\leq\nu\}$ is a free basis for the group under consideration. Let $\prod_{1\leq r<s\leq\nu}T^{n_{r,s}m_{r,s}\sigma_s}_{\sigma_r}=1$, $n_{r,s}\in{\mathbb Z}$ for $1\leq r<s\leq\nu$. Then by Lemma \ref{xi-6}, $n_{r,s}m_{r,s}=0$ for all $r,s$. The result now follows as the $m_{r,s}$'s are nonzero. (iii) Let $h\in {\mathcal H}({\mathcal V})$ and assume $h^n=1$ for some $n\in\mathbb N$. Let $h=h(n_{i,r},m_{r,s})$ (see (\ref{exp})). Since ${\mathcal H}({\mathcal V})$ is two-step nilpotent, we have \begin{eqnarray*} 1=h^n=\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}T_{\alpha_i}^{nn_{i,r}\sigma_r}c\prod_{1\leq r<s\leq\nu} T_{\sigma_r}^{nm_{r,s}\sigma_s}, \end{eqnarray*} where $c$ is a central element. By part (i), $1=h^n=h(nn_{i,r},m'_{r,s})$ for some real numbers $m'_{r,s}$. By Lemma \ref{xi-6}, $n_{i,r}=0$ for all $i,r$. Thus $h$ is central and so $c=1$.
Now it follows again from Lemma \ref{xi-6} that $m_{r,s}=0$ for all $r,s$.\hfill$\Box$\vspace{5mm} \begin{cor}\label{xi-9} For any subset $A$ of ${\mathcal V}$, ${\mathcal H}(A)$ is a torsion-free group. \end{cor} Recall that we have fixed a complement $\dot{\mathcal V}$ of ${\mathcal V}^0$ in ${\mathcal V}$. Now for a subset $A$ of ${\mathcal V}$, we set $$\dot{A}:=\{\alpha\in\dot{\mathcal V}\mid \alpha+\sigma\in A\hbox{ for some }\sigma\in{\mathcal V}^0\}.$$ \begin{pro}\label{xi-10} Let $A$ be a subset of ${\mathcal V}$ such that $w_\alpha(A)\subseteq A$ for all $\alpha\in A^\times$ and ${\mathcal H}(A)=\langle w_{\alpha+\sigma}w_\alpha\mid\alpha\in \dot{A}^\times,\sigma\in{\mathcal V}^0,\alpha+\sigma\in A\rangle$. Then ${\mathcal W}_A={\mathcal W}_{\dot{A}}\ltimes {\mathcal H}(A).$ \end{pro} \noindent{\bf Proof. } Let $\alpha\in{A}$, $\sigma\in{\mathcal V}^0$, ${\alpha}+\sigma\in A$ and $w\in{\mathcal W}_A$. By assumption $w(\alpha)$ and $w(\alpha+\sigma)=w(\alpha)+\sigma$ are elements of $A$. Thus, ${\mathcal H}(A)$ is a normal subgroup of ${\mathcal W}_A$, by (\ref{normal}). Now we show that ${\mathcal H}(A)\cap {\mathcal W}_{\dot{A}}=\{1\}$. Since ${\mathcal H}(A)\subseteq {\mathcal H}({\mathcal V})$ and ${\mathcal W}_{\dot{A}}\subseteq{\mathcal W}_{\dot{\mathcal V}}$, it is enough to show that ${\mathcal H}({\mathcal V})\cap {\mathcal W}_{\dot{\mathcal V}}=\{1\}$. Let $h\in{\mathcal W}_{\dot{{\mathcal V}}}\cap {\mathcal H}({\mathcal V})$ and consider an expression of $h$ in the form (\ref{exp}). Since $h\in{\mathcal W}_{\dot{{\mathcal V}}}$, we have from (\ref{hyp1}) that $h(\lambda_j)=\lambda_j$, $1\leq j\leq\nu$, and so it follows from Lemma \ref{new-xi-6} that $h=1$. To complete the proof, we must show ${\mathcal W}_A={\mathcal W}_{ \dot{A}} {\mathcal H}(A)$. Let $\dot{\a}\in \dot{A}^\times$ and $\sigma\in{\mathcal V}^{0}$ be such that $\dot{\a}+\sigma\in A$.
By assumption, $w_{\dot{\a}+\sigma}w_{\dot{\a}}\in {\mathcal H}(A)\leqslant{\mathcal W}_A$ and $w_{\dot{\a}+\sigma}\in{\mathcal W}_A$, therefore $w_{\dot{\a}}=w_{\dot{\a}+\sigma}w_{\dot{\a}+\sigma}w_{\dot{\a}}\in{\mathcal W}_A$ and so ${\mathcal W}_{ \dot{A}}\subseteq{\mathcal W}_A$. This shows that ${\mathcal W}_{\dot{A}} {\mathcal H}(A)\subseteq{\mathcal W}_A$. To see the reverse inclusion, let $w_{\alpha}$, $\alpha\in A^\times$, be a generator of ${\mathcal W}_A$. Then $\alpha=\dot{\a}+\sigma$ where $\dot{\a}\in \dot{A}^{\times}$ and $\sigma\in{\mathcal V}^{0}$. Since $w_{\dot{\a}}\in{\mathcal W}_{\dot{A}}$ and since by assumption $w_{\dot{\a}}w_{\dot{\a}+\sigma}\in {\mathcal H}(A)$, we have $w_\alpha=w_{\dot{\a}+\sigma}=w_{\dot{\a}}(w_{\dot{\a}}w_{\dot{\a} +\sigma}) \in {\mathcal W}_{\dot{A}}{\mathcal H}(A)$. This completes the proof.\hfill$\Box$\vspace{5mm} \vspace{5mm} \section{\bf EXTENDED AFFINE WEYL GROUPS }\label{heisenberg-sim} \setcounter{equation}{0} In this section, we study Weyl groups of simply laced extended affine root systems. We are mostly interested in finding a particular finite set of generators for such a Weyl group and its center (see Proposition \ref{propo-impor}). Since the semilattice involved in the structure of an extended affine root system plays a crucial role in our study, we start this section with recalling the definition of a semilattice from \cite[II.\S 1]{AABGP} and introducing a notion of {\it supporting class} for semilattices. For the theory of extended affine root systems the reader is referred to [AABGP]. In particular, we will use the notation and concepts introduced there without further explanations. \begin{DEF}\label{deflattice} A {\it semi-lattice} is a subset $S$ of a finite dimensional real vector space ${\mathcal V}^{0}$ such that $0\in S$, $S\pm2S\subseteq S$, $S$ spans ${\mathcal V}^{0}$ and $S$ is discrete in ${\mathcal V}^{0}$. The {\it rank }of $S$ is defined to be the dimension $\nu$ of ${\mathcal V}^{0}$. 
Note that the replacement of $S\pm2S\subseteq S$ by $S\pm S\subseteq S$ in the definition gives one of the equivalent definitions for a {\it lattice} in ${\mathcal V}^{0}$. Semilattices $S$ and $S'$ in ${\mathcal V}^{0}$ are said to be similar if there exists $\psi\in GL({\mathcal V}^{0})$ such that $\psi(S)=S'+\sigma'$ for some $\sigma'\in S'$. \end{DEF} Let $S$ be a semilattice in ${\mathcal V}^0$ of rank $\nu$. \begin{comment} As it will be used frequently in our work as an index set, we set $$J_\nu=\{1,\ldots,\nu\}. $$ \end{comment} The ${\mathbb Z}$-span $\Lambda$ of $S$ is a lattice in ${\mathcal V}^0$, a free abelian group of rank $\nu$ which has an ${\mathbb R}$-basis of ${\mathcal V}^0$ as its ${\mathbb Z}$-basis. By \cite[II.1.11]{AABGP}, $S$ contains a subset $B=\{\sigma_1,\ldots,\sigma_\nu\}$ which forms a basis for $\Lambda$. We call such a set $B$ a {\it basis} for $S$. Then $$\Lambda=\langle S\rangle=\sum_{r=1}^\nu{\mathbb Z}\sigma_r\;\;\hbox{ with } \;\;\sigma_r\in S\;\;\hbox{ for all }\;\;r. $$ Consider $\tilde{\Lam}:=\Lambda/2\Lambda$ as a ${\mathbb Z}_2$-vector space with ordered basis $\tilde{B}$, the image of $B$ in $\tilde{\Lam}$. For $\sigma\in\Lambda$ and $1\leq r\leq\nu$, let $\sigma(r)\in\{0,1\}$ be the unique integer such that $\tilde{\sg}=\sum_{r=1}^{\nu}\sigma(r)\tilde{\sg}_r$. Then we set $$\mbox{supp}_B(\sigma):=\{1\leq r\leq\nu\mid \sigma(r)=1\}.$$ Thus $\sigma=\sum_{r\in\hbox{supp}_{B}(\sigma)}\sigma_r$ ($\hbox{mod}\; 2\Lambda$). By [AABGP, II.1.6], $S$ can be written in the form $$ S=\bigcup_{j=0}^{m}(\delta_j+2\Lambda), $$ where $\delta_0=0$ and the $\delta_j$'s are distinct coset representatives of $2\Lambda$ in $\Lambda$. The integer $m$ is called the {\it index} of $S$ and is denoted by $\hbox{ind}(S)$. The collection \begin{equation}\label{support} \hbox{supp}_B(S):=\big\{\hbox{supp}_B(\delta_j)\mid 0\leq j\leq m\big \} \end{equation} is called the {\it supporting class of} $S$, with respect to $B$.
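As a concrete illustration of these definitions (ours, not part of the original text), the passage from coordinates with respect to $B$ to $\hbox{supp}_B$ and to the supporting class can be sketched as follows; the example reproduces the rank-2, index-2 entry of Table \ref{tab-1}, and all function names are ours.

```python
def supp(sigma):
    """supp_B of a lattice element given by its integer coordinates w.r.t.
    the basis B = {sigma_1, ..., sigma_nu}: the set of (1-based) indices r
    with sigma(r) = 1 in Lambda / 2*Lambda."""
    return frozenset(r + 1 for r, c in enumerate(sigma) if c % 2 == 1)

def supporting_class(coset_reps):
    """Supporting class of S = union of (delta_j + 2*Lambda), given a list
    of coset representatives delta_j (the first being 0)."""
    return {supp(d) for d in coset_reps}

# Rank-2 semilattice of index 2: S = 2L u (sigma_1 + 2L) u (sigma_2 + 2L)
reps = [(0, 0), (1, 0), (0, 1)]
cls = supporting_class(reps)
assert cls == {frozenset(), frozenset({1}), frozenset({2})}
assert len(cls) - 1 == 2      # ind(S) = m = (number of cosets) - 1
```

Since $\hbox{supp}_B$ only depends on coordinates modulo $2$, any representative of a coset of $2\Lambda$ yields the same support, e.g. $(3,2,1)\mapsto\{1,3\}$ for $\nu=3$.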
Since $\delta_j=\sum_{r\in \hbox{supp}_B(\delta_j)}\sigma_r$ ($\hbox{mod}\; 2\Lambda$), the supporting class determines $S$ uniquely. Therefore, we may write \begin{equation}\label{unique-1} S=\biguplus_{J\in\hbox{supp}_B(S)}(\tau_{_J}+2\Lambda)\quad\hbox{where}\quad \tau_{_J}:=\sum_{r\in J}\sigma_r. \end{equation} (By convention, $\tau_{_{\emptyset}}:=\sum_{r\in \emptyset}\sigma_r=0$.) By [A5, Proposition 1.12], if $\nu\leq 3$, then the index determines the semilattice in $\Lambda$ uniquely, up to similarity. So by \cite[Table II.4.5]{AABGP}, up to similarity, the semilattices of rank $\leq 3$ in $\Lambda$ are listed in the following table, according to their supporting classes: \pagebreak \begin{tab}\label{tab-1} The supporting classes of semilattices, up to similarity, for $\nu\leq 3$. \end{tab} $\begin{tabular}{c|c|c} $\nu$ &$\hbox{index}$ & $\hbox{supp}_B(S)$ \\ \hline 0 & 0 & $\{\emptyset\}$ \\ \hline 1 & 1 & $\{\emptyset,\{1\}\}$ \\ \hline 2 & 2 & $\{\emptyset,\{1\},\{2\}\}$\\ & 3 & $\{\emptyset,\{1\},\{2\},\{1,2\}\}$\\ \hline 3 & 3 & $\{\emptyset,\{1\},\{2\},\{3\}\}$\\ & 4 & $\{\emptyset,\{1\},\{2\},\{3\},\{2,3\}\}$\\ & 5 & $\{\emptyset,\{1\},\{2\},\{3\},\{1,3\},\{2,3\}\}$\\ & 6 & $\{\emptyset,\{1\},\{2\},\{3\},\{1,2\},\{1,3\},\{2,3\}\}$\\ & 7 & $\{\emptyset,\{1\},\{2\},\{3\},\{1,2\},\{1,3\},\{2,3\},\{1,2,3\}\}$\\ \hline \end{tabular}$ \vspace{.5cm} Next we recall the definition of an extended affine root system.
\begin{DEF}\label{ears} {\rm A subset $R$ of a non-trivial finite-dimensional real vector space ${\mathcal V}$, equipped with a positive semi-definite symmetric bilinear form $(\cdot,\cdot)$, is called an {\it extended affine root system} if $R$ satisfies the following 8 axioms: \begin{itemize} \item R1) $ 0\in R$, \item R2) $ -R=R$, \item R3) $R$ spans ${\mathcal V}$, \item R4) $\alpha \in R^{\times} \Longrightarrow 2\alpha \notin R$, \item R5) $R$ is discrete in ${\mathcal V}$, \item R6) For $\alpha \in R^{\times}$ and $\beta\in R$, there exist non-negative integers $d,u$ such that, for $n\in{\mathbb Z}$, $\beta+n\alpha\in R$ if and only if $-d\leq n\leq u$; moreover, $(\beta,\alpha^{\vee})=d-u$, \item R7) If $R=R_{1}\cup R_{2}$, where $(R_{1},R_{2})=0$, then either $R_{1}=\emptyset$ or $R_{2}=\emptyset$, \item R8) For any $\sigma \in R^{0}$, there exists $\alpha\in R^{\times}$ such that $\alpha+\sigma\in R$. \end{itemize} The dimension $\nu$ of the radical ${\mathcal V}^0$ of the form is called the {\it nullity} of $R$, and the dimension $\ell$ of $\overline{\v}:={\mathcal V}/{\mathcal V}^0$ is called the {\it rank} of $R$. Sometimes we call $R$ a $\nu$-extended affine root system.} Corresponding to the integers $\ell$ and $\nu$, we set $$J_\ell=\{1,\ldots,\ell\}\quad\hbox{and}\quad J_\nu=\{1,\ldots,\nu\}. $$ \end{DEF} Let $R$ be a $\nu$-extended affine root system. It follows that the form induced on $\overline{\v}$ is positive definite and that $\bar{R}$, the image of $R$ in $\overline{\v}$, is an irreducible finite root system (including zero) in $\overline{\v}$ ([AABGP, II.2.9]). The {\it type} of $R$ is defined to be the type of the finite root system $\bar{R}$.
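As an illustration of axiom R6, consider the nullity-one example ${\mathcal V}={\mathbb R}\alpha\oplus{\mathbb R}\sigma$, where $(\alpha,\alpha)=2$ and $\sigma$ spans the radical, with $R={\mathbb Z}\sigma\cup(\pm\alpha+{\mathbb Z}\sigma)$. For $\beta=\alpha+\sigma$ we have $\beta+n\alpha=(1+n)\alpha+\sigma\in R$ exactly when $1+n\in\{-1,0,1\}$, i.e., for $-2\leq n\leq 0$; thus $d=2$, $u=0$ and $(\beta,\alpha^{\vee})=(\alpha+\sigma,\alpha)=2=d-u$, as R6 requires.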
In this work {\it we always assume that $R$ is an extended affine root system of simply laced type}, that is, it has one of the types $X_\ell=A_\ell$, $D_\ell$, $E_6$, $E_7$ or $E_8$. According to [AABGP, II.2.37], we may fix a complement $\dot{\mathcal V}$ of ${\mathcal V}^0$ in ${\mathcal V}$ such that \begin{equation}\label{rd} \dot{R}:=\{\dot{\a}\in\dot{\mathcal V}\mid\dot{\a}+\sigma\in R\hbox{ for some }\sigma\in{\mathcal V}^0\} \end{equation} is a finite root system in $\dot{\mathcal V}$, isometrically isomorphic to $\bar{R}$, and that $R$ is of the form \begin{equation}\label{AABGP} R=R(X_\ell,S)=(S+S)\cup(\dot{R}+S) \end{equation} where $S$ is a semilattice in ${\mathcal V}^0$. Here $X_\ell$ denotes the type of $\dot{R}$. It is known that if $\ell>1$, then $S$ is a lattice in ${\mathcal V}^{0}$. Throughout our work, we fix two sets $$ \Pi=\{\alpha_1,\ldots,\alpha_{\ell}\}\quad\hbox{and}\quad B=\{\sigma_1,\ldots,\sigma_\nu\}, $$ where $\Pi$ is a fundamental system for $\dot{R}$ with $(\alpha_i,\alpha_i)=2$ for all $i\in J_\ell$, and $B$ is a basis for $S$. In particular, $S$ has the expression as in (\ref{unique-1}). Since $B$ is fixed, we write $\hbox{supp}(S)$ instead of $\hbox{supp}_B(S)$. From $S\pm 2S\subseteq S$, it follows that ${\mathbb Z}\sigma_r\subseteq S$ for $r\in J_\nu$, and so we have from (\ref{AABGP}) that \begin{equation}\label{roots-sim} \alpha_i+{\mathbb Z}\sigma_r\subseteq R,\qquad (i,r)\in J_\ell\times J_\nu. \end{equation} As in Section \ref{hyperbolic}, let $\tilde{\mathcal V}=\dot{\mathcal V}\oplus{\mathcal V}^0\oplus({\mathcal V}^0)^*$, where $\dot{\mathcal V}$ is the real span of $\dot{R}$. With respect to the basis $B$, we extend the form $(\cdot,\cdot)$ on ${\mathcal V}$ to a non-degenerate form, denoted again by $(\cdot,\cdot)$, on $\tilde{\mathcal V}$ by (\ref{hyp1}).
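Returning to (\ref{AABGP}), consider for example $\ell=1$, $\nu=2$ and $S=2\Lambda\cup(\sigma_1+2\Lambda)\cup(\sigma_2+2\Lambda)$, the index-$2$ semilattice of Table \ref{tab-1}. Then $S+S=\Lambda$ (for instance $\sigma_1+\sigma_2\in S+S$, so every coset of $2\Lambda$ occurs), and (\ref{AABGP}) reads $$R(A_1,S)=\Lambda\cup(\pm\alpha_1+S);$$ thus the isotropic roots fill the whole lattice $\Lambda$ even though $S$ itself is not a lattice.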
\begin{comment} From now on we fix this hyperbolic extension $(\tilde{\mathcal V},I)$ of $({\mathcal V},I)$, and we call it a {\it hyperbolic extension determined by} $R$. \end{comment} We recall from (\ref{reflection}) and (\ref{hb}) that \begin{equation}\label{reform0-sim} {\mathcal W}_R=\langle w_{\alpha}\mid\alpha\in R^{\times}\rangle \end{equation} is a subgroup of $\hbox{FO}(\tilde{\mathcal V},I)$ and $$\mathcal H(R)=\langle w_{\alpha+\sigma}w_\alpha\mid\alpha\in R^{\times},\sigma\in{\mathcal V}^0,\alpha+\sigma\in R\rangle$$ is a subgroup of ${\mathcal W}_R$. \begin{DEF}\label{EAWG} The groups ${\mathcal W}_R$ and $\mathcal H(R)$ are called the {\it extended affine Weyl group} and the {\it Heisenberg-like group} of $R$, respectively. Since $\dot{R}\subseteq R$, we may identify the finite Weyl group of $\dot{R}$ with the subgroup $\dot{\w}=\langle w_\alpha\mid\alpha\in\dot{R}^\times\rangle$ of ${\mathcal W}_R$. When there is no confusion we simply write ${\mathcal W}$ and ${\mathcal H}$ instead of ${\mathcal W}_R$ and $\mathcal H(R)$, respectively. \end{DEF} \begin{lem}\label{reform2-sim} ${\mathcal H}=\langle T^{\sigma}_{\dot{\a}}\mid\dot{\a}\in\dot{R},\;\sigma\in S\rangle.$ \end{lem} \noindent{\bf Proof. } For $\dot{\a}\in\dot{R}$ and $\sigma\in S$, we have from Lemma \ref{new-4}(vii) and (\ref{AABGP}) that $T^{\sigma}_{\dot{\a}}=w_{\dot{\a}+\sigma}w_{\dot{\a}}\in\mathcal H$. Conversely, if $\alpha\in R^{\times}$, $\sigma\in{\mathcal V}^0$ and $\alpha+\sigma\in R$, then $\alpha=\dot{\a}+\tau$, where $\dot{\a}\in\dot{R}$ and, by (\ref{AABGP}), $\tau,\sigma,\tau+\sigma\in S$. Then $w_{\alpha+\sigma}w_\alpha=w_{\dot{\a}+\tau+\sigma}w_{\dot{\a}}w_{\dot{\a}}w_{\dot{\a}+\tau}= T_{\dot{\a}}^{\tau+\sigma}T_{-\dot{\a}}^{\tau}.$ \hfill$\Box$\vspace{5mm} We next want to find certain finite sets of generators for both ${\mathcal W}$ and $\mathcal H$, and for their centers.
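For orientation, consider the smallest case $\nu=1$, where $S={\mathbb Z}\sigma_1$ is necessarily a lattice. In this case there are no pairs $r<s$ in $J_\nu$, and it will follow from (\ref{comute}), (\ref{comu-sim}) and Lemma \ref{nice-lemma} below that the generators $T^{\sigma}_{\dot{\a}}$ of Lemma \ref{reform2-sim} pairwise commute; thus $\mathcal H$ is abelian, in line with the classical picture of an affine Weyl group as a finite Weyl group extended by a group of translations. The genuinely Heisenberg-like (two-step nilpotent, non-abelian) behaviour of $\mathcal H$ appears only for $\nu\geq2$ (see Proposition \ref{propo-impor}(v)).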
For $r,s\in J_\nu$, we set \begin{equation}\label{def-crs} c_{r,s}:=T_{\sigma_{r}}^{\sigma_{s}}\quad\hbox{and}\quad C:=\langle c_{r,s}\mid 1\leq r<s\leq\nu\rangle. \end{equation} Then by (\ref{neww}) and Lemma \ref{new-4}(viii), for all $r,s\in J_\nu$ we have \begin{equation}\label{comute} c_{r,r}=c_{s,r}c_{r,s}=1\quad\hbox{and}\quad C\leq Z\big(\hbox{FO}(\tilde{\mathcal V},I)\big). \end{equation} Moreover, from (\ref{def-crs}) and Lemma \ref{xi-7}(ii) it follows that \begin{equation}\label{free-abelian} \mbox{$C$ is a free abelian group of rank $\nu(\nu-1)/2.$} \end{equation} Also, for $(i,r)\in J_\ell\times J_\nu$, we set \begin{equation}\label{def-tir} t_{i,r}:=T_{\alpha_{i}}^ {\sigma_{r}}. \end{equation} From (\ref{roots-sim}) and parts (iv) and (vii) of Lemma \ref{new-4}, one easily sees that for all $r,s\in J_\nu$ and $i,j\in J_\ell$, \begin{equation}\label{comu-sim} t_{i,r}=w_{\alpha_i+\sigma_r}w_{\alpha_i}\in {\mathcal H}\quad\hbox{and}\quad [t_{i,r},t_{j,s}]=c_{r,s}^{(\alpha_{i},\alpha_{j})}\in {\mathcal H}. \end{equation} \begin{lem}\label{wtw-sim} $w_{\alpha_{i}}t_{j,r}w_{\alpha_{i}}= t_{j,r}t_{i,r}^{-(\alpha_i,\alpha_j)}$, for $i,j\in J_\ell$, $r\in J_\nu$. \end{lem} \noindent{\bf Proof. } Let $i,j\in J_\ell$, $r\in J_\nu$ and $\alpha=w_{\alpha_i}(\alpha_j)=\alpha_j-(\alpha_j,\alpha_i)\alpha_i$. We have from Lemma \ref{new-4}(vi) that $ w_{\alpha_{i}}t_{j,r}w_{\alpha_{i}}=T_{\alpha}^{\sigma_{r}}= t_{j,r} t_{i,r}^{-(\alpha_i,\alpha_j)}.$ \hfill$\Box$\vspace{5mm} In order to describe the centers $Z(\mathcal H)$ and $Z({\mathcal W})$, we use the notion of the supporting class of $S$ (with respect to $B$), assigning a subgroup of $C$ to $S$ as follows.
We set \begin{equation}\label{def-F(S)} F(S):=\langle z_{_J}\mid J\subseteq J_\nu\rangle\leq C, \end{equation} where \begin{equation}\label{def-zJ} z_{_J}:=\left\{\begin{array}{ll} \prod_{\{r,s\in J\mid r<s\}} c_{_{r,s}} & \hbox{if }J\in\hbox{supp}(S)\vspace{3mm}\\ c_{_{r,s}}^{2} & \hbox{if $J=\{r,s\}\not\in\hbox{supp}(S)$,}\vspace{3mm}\\ 1, & \hbox{otherwise}. \end{array}\right. \end{equation} (Here we interpret a product over an empty index set as $1$.) Our goal is to prove that $Z({\mathcal W})=Z(\mathcal H)=F(S)$. Note that if $\{r,s\}\in\hbox{supp}(S)$, then $z_{_{\{r,s\}}}=c_{r,s}$. In particular, if $S$ is a lattice then the second condition in the definition of $z_{_J}$ is superfluous, and so $F(S)=\langle z_{_{\{r,s\}}}\mid 1\leq r<s\leq\nu\rangle=C$. Also, from the way $z_{_{\{r,s\}}}$ is defined and (\ref{comu-sim}), we note that \begin{equation}\label{commutator-sim} [t_{i,r},t_{j,s}]=c_{r,s}^{(\alpha_i,\alpha_j)}\in \langle z_{_{\{r,s\}}}\rangle,\quad i,j\in J_\ell,\quad r,s\in J_\nu. \end{equation} \begin{lem}\label{nice-lemma} Let $\alpha=\sum_{i=1}^{\ell}m_{i}\alpha_{i}\in\dot{R}$ and $\sigma=\sum_{r=1}^{\nu}n_{r}\sigma_{r}\in \Lambda$. Then $$T_{\alpha}^{\sigma}= \prod_{r=1}^{\nu} \prod_{i=1}^{\ell} t_{i,r}^{m_in_r} \prod_{1\leq r<s\leq\nu}c_{s, r}^{n_{r}n_{s}}.$$ \end{lem} \noindent{\bf Proof. } Using Lemma \ref{formul-T} and Lemma \ref{new-4}(iii), we have \begin{eqnarray*} T_{\alpha}^{\sigma}&=& \prod_{r=1}^{\nu}T_{\alpha}^{n_{r}\sigma_{r}} \prod_{1\leq r<s\leq\nu}(T_{\sigma_s}^{\sigma_r})^{n_{s}n_{r}}= \prod_{r=1}^{\nu} \prod_{i=1}^{\ell} t_{i,r}^{m_in_r} \prod_{1\leq r<s\leq\nu}c_{s, r}^{n_{r}n_{s}}. \end{eqnarray*} \hfill$\Box$\vspace{5mm} For a subset $J=\{i_1,\ldots, i_n\}$ of $J_\nu$ with $ i_1<i_2<\cdots <i_n$ and a group $G$, we make the convention $$\prod_{i\in J} a_i=a_{i_1}a_{i_2}\cdots a_{i_n}\qquad\qquad(a_i\in G).
$$ \begin{pro}\label{gen-H} ${\mathcal H}=\langle t_{i,r},z_{_J}\mid (i,r)\in J_\ell\times J_\nu,\;J\subseteq J_\nu\rangle$. \end{pro} \noindent{\bf Proof. } Let $T$ be the group on the right-hand side of the equality. We proceed in the following two steps. (1) $T\subseteq {\mathcal H}$. By (\ref{comu-sim}), it is enough to show that $z_{_J}\in {\mathcal H}$ for any subset $J\subseteq J_\nu$. First, let $J\in\hbox{supp}(S)$. Then by the definition of $z_{_J}$, we have $z_{_J}=\prod_{\{r,s\in J\mid r<s\}}c_{r,s}$. Now it follows from Lemma \ref{formul-T} that \begin{eqnarray*} T_{\alpha_{i}}^{\tau_{_J}}= T_{\alpha_{i}}^{\sum_{r\in J}\sigma_{r}} &=&\prod_{\{r,s\in J\mid r<s\}}T^{\sigma_r}_{\sigma_s} \prod_{r\in J}T_{\alpha_{i}}^{\sigma_{r}}=\prod_{\{r,s\in J\mid r<s\}}c_{s,r} \prod_{r\in J}T_{\alpha_{i}}^{\sigma_{r}}=z_{_J}^{-1}\prod_{r\in J}t_{i,r}. \end{eqnarray*} But since $\alpha_{i}+\tau_{_J}\in R$ (by (\ref{unique-1}) and (\ref{AABGP})), it follows from Lemma \ref{reform2-sim} and (\ref{comu-sim}) that $$z_{_J}=(T_{\alpha_{i}}^{\tau_{_J}})^{-1}\prod_{r\in J}t_{i,r}\in {\mathcal H}.$$ Finally, suppose $J=\{r,s\}\not\in\hbox{supp}(S)$ where $1 \leq r <s\leq\nu$. Then from the definition of $z_{_J}$ and (\ref{comu-sim}) we have $$z_{_J}=c_{r,s}^{2}=[t_{i,r},t_{i,s}]\in{\mathcal H}.$$ This completes the proof of step (1). (2) ${\mathcal H}\subseteq T$.
We have from Lemmas \ref{reform2-sim}, \ref{new-4}(ii) and (\ref{unique-1}) that \begin{eqnarray*} {\mathcal H}&=& \langle T_{\alpha}^{\sigma}\mid \alpha\in\dot{R},\; \sigma\in S\rangle\\&=&\langle T_{\alpha}^{\sigma}\mid \alpha\in\dot{R},\; \sigma\in\cup_{J\in\hbox{supp}(S)}(\tau_{_J} +2\Lambda)\rangle\\ &=&\langle T_{\alpha}^{\sigma}\mid \alpha\in\dot{R},\; \sigma\in\tau_{_J} +2\Lambda,\;J\in\hbox{supp}(S)\rangle\\ &=&\langle T_{\alpha}^{\tau_{_J}+\sigma}\mid\alpha\in\dot{R},\; \sigma\in2\Lambda,\;J\in\hbox{supp}(S)\rangle\\ &=&\langle T_{\alpha}^{\tau_{_J}}T_{\alpha}^{\sigma}T^{\tau_{_J}}_{\sigma}\mid\alpha\in\dot{R},\; \sigma\in2\Lambda,\;J\in\hbox{supp}(S)\rangle. \end{eqnarray*} We get from Lemma \ref{reform2-sim} and the facts that $2\Lambda\subseteq S$ and $\tau_{_J}\in S$ for $J\in\hbox{supp}(S)$, that $${\mathcal H}=\langle T_{\alpha}^{\sigma},~~ T_{\alpha}^{\tau_{_J}},\;T^{\tau_{_J}}_{\sigma}\mid\alpha\in\dot{R}, \;\sigma\in2\Lambda,\;J\in\hbox{supp}(S)\rangle.$$ Now we show that each generator of ${\mathcal H}$ of the form $T_{\alpha}^{\sigma}$, $T_{\alpha}^{\tau_{_J}}$, $T^{\tau_{_J}}_{\sigma}$ belongs to $T$. Let $\alpha=\sum_{i=1}^{\ell}m_{i}\alpha_{i}\in\dot{R}$, $\sigma=\sum_{s=1}^{\nu}2n_s\sigma_{s}\in 2\Lambda$ with $n_s\in{\mathbb Z}$, and $J\in\hbox{supp}(S)$. Then it follows from Lemma \ref{formul-T}, Lemma \ref{nice-lemma}, the definition of $z_{_J}$ and the fact that $c_{r,s}^2\in\langle z_{_{\{r,s\}}}\rangle\subseteq T$, for all $1 \leq r,s\leq\nu$, that \begin{eqnarray*} T_{\tau_{_J}}^{\sigma}&=&T_{\sum_{r\in J}\sigma_r}^{\sum_{s=1}^{\nu}2n_s\sigma_{s}}=\prod_{r\in J}\prod_{s=1}^{\nu}(c_{r,s}^{2})^{n_s}\in T \end{eqnarray*} and \begin{eqnarray*} T_{\alpha}^{\sigma}&=&\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}(t_{i,r})^ {2m_{i}n_{r}}\prod_{1\leq r<s\leq\nu} (c_{s,r}^{2})^{2n_{r}n_{s}}\in T.
\end{eqnarray*} Finally, \begin{eqnarray*} T_{\alpha}^{\tau_{_J}}&=&T_{\alpha}^{\sum_{r\in J}\sigma_r}=\prod_{r\in J}\prod_{i=1}^{\ell}t_{i,r}^ {m_{i}}\prod_{\{r,s\in J\mid r<s\}}c_{s,r}= \prod_{r\in J}\prod_{i=1}^{\ell}t_{i,r}^ {m_{i}}z^{-1}_{_{J}}\in T. \end{eqnarray*} From steps (1)-(2), the result follows.\hfill$\Box$\vspace{5mm} \begin{cor}\label{latice} If $\ell\geq2$ or $S$ is a lattice, then $$\mathcal H=\langle t_{i,r},\;c_{r,s}\mid i\in J_\ell,\;r\in J_\nu,\;1\leq r<s\leq\nu\rangle.$$ \end{cor} \noindent{\bf Proof. } This is an immediate consequence of Proposition \ref{gen-H} and the fact that $F(S)=C$.\hfill$\Box$\vspace{5mm} The remaining results of this section are new only for type $A_1$. In fact, for types different from $A_1$, one can find essentially the same results in \cite{MS} and \cite{A4}. However, for completeness we provide short proofs, which are now easy consequences of our results in Section \ref{hyperbolic}. \begin{pro}\label{propo-impor} (i) ${\mathcal W}=\dot{\w}\ltimes {\mathcal H}$. (ii) If $\ell=1$, then ${\mathcal W}=\langle w_{\alpha_1}, t_{1,r},z_{_J}\mid r\in J_\nu,\;J\subseteq J_\nu\rangle.$ (iii) If $\ell>1$, then ${\mathcal W}=\langle w_{\alpha_i}, t_{i,r},c_{r,s}\mid i\in J_\ell,\;r\in J_\nu,\;1\leq r<s\leq\nu\rangle.$ (iv) ${\mathcal H}$ is a torsion-free group. (v) ${\mathcal H}$ is a two-step nilpotent group. (vi) $Z({\mathcal W})=Z({\mathcal H})=F(S).$ (vii) $F(S)$ is a free abelian group of rank ${\nu(\nu-1)}/{2}$. \end{pro} \noindent{\bf Proof. } (i) is an immediate consequence of Proposition \ref{xi-10} and Lemma \ref{reform2-sim}. From (i), Corollary \ref{latice}, Proposition \ref{gen-H} and the fact that $\dot{\mathcal W}$ is generated by $w_{\alpha_1},\ldots,w_{\alpha_\ell}$, it follows that (ii) and (iii) hold. (iv) and (v) follow from Lemma \ref{xi-4} and Corollary \ref{xi-9}. From (\ref{def-F(S)}) and Proposition \ref{gen-H} we have $F(S) \subseteq Z({\mathcal W})$.
So to prove (vi) it is enough to show that $Z({\mathcal W})\subseteq Z({\mathcal H})\subseteq F(S)$. Let $w\in Z({\mathcal W})$. By (i), $w=\dot{w}h$ for some $\dot{w}\in \dot{{\mathcal W}}$ and $h\in {\mathcal H}$. Since $t_{i,r}\in {\mathcal H}$ for all $(i,r)\in J_\ell\times J_\nu$, we have from Lemma \ref{new-4} that $$1=wt_{i,r}^{-1}w^{-1}t_{i,r}=wT_{-\alpha_{i}}^{\sigma_{r}}w^{-1}T_{\alpha_{i}}^{\sigma_{r}}= T_{-w(\alpha_{i})}^{\sigma_{r}}T_{\alpha_{i}}^{\sigma_{r}}=T_{\alpha_{i}-w(\alpha_{i})}^{\sigma_{r}}.$$ This gives $\alpha_i-w(\alpha_i)\in{\mathcal V}^0$. Since $w=\dot{w} h$ and $h(\alpha_i)=\alpha_i$ ($\hbox{mod}\;{\mathcal V}^0$), it follows that $\dot{w}(\alpha_i)=\alpha_i$ for all $i\in J_\ell$. Thus $\dot{w}=1$ and so $w=h\in {\mathcal H}\cap Z({\mathcal W}).$ This gives $Z({\mathcal W})\subseteq Z({\mathcal H})$. Next let $h\in Z({\mathcal H})$. From Proposition \ref{gen-H} and (\ref{commutator-sim}), it follows that $$h=z\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}t_{i,r}^{m_{i,r}},\qquad(z\in F(S),\;m_{i,r}\in{\mathbb Z}).$$ Then by (v), (\ref{bicomm}) and (\ref{comu-sim}) we have that \begin{eqnarray*} 1=[h,t_{j,s}]=\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}[t_{i,r}^{m_{i,r}},t_{j,s}] &=&\prod_{r=1}^{\nu}\prod_{i=1}^{\ell} c_{r,s}^{m_{i,r}(\alpha_{i},\alpha_{j})}=\prod_{r=1}^{\nu} c_{r,s}^{(\sum_{i=1}^{\ell}m_{i,r}\alpha_{i},\alpha_{j})}. \end{eqnarray*} Therefore from (\ref{free-abelian}), it follows that $(\sum_{i=1}^{\ell}m_{i,r}\alpha_{i},\;\alpha_{j})=0$ for all $j\in J_\ell$. Since the form on ${\mathcal V}$ restricted to $\dot{\mathcal V}=\sum_{i=1}^{\ell}{\mathbb R}\alpha_{i}$ is positive definite, we get $m_{i,r}=0$ for all $i\in J_\ell$ and $r\in J_\nu$. Then $h=z\in F(S)$ and so (vi) holds. For (vii), by (\ref{commutator-sim}) and the fact that $c_{r,s}^{2}=[t_{1,r},t_{1,s}]$, we see that $F(S)$ is squeezed between the two groups $\langle c_{r,s}^2\mid1\leq r<s\leq\nu\rangle$ and $C$.
Since $C$ is free abelian on the generators $c_{r,s}$, $1\leq r<s\leq\nu$, (vii) follows. \hfill$\Box$\vspace{5mm} The following important type-dependent result explicitly gives the center $Z({\mathcal W})=Z({\mathcal H})$ in terms of the generators $c_{r,s}$. \begin{cor}\label{centers-1} (i) If $X_\ell=A_1$, then $$Z({\mathcal H})=\langle c_{r,s}^2,\; z_{_J}\mid 1\leq r< s\leq\nu,\;J\in\hbox{supp}(S)\rangle.$$ In particular, if $S$ is a lattice, then $Z({\mathcal H})=\langle c_{r,s}\mid1\leq r<s\leq\nu\rangle.$ (ii) If $X_\ell=A_\ell(\ell\geq2), \;D_\ell$ or $E_\ell$, then $Z({\mathcal H})=\langle c_{r,s}\mid1\leq r<s\leq\nu\rangle.$ \end{cor} \noindent{\bf Proof. } Both (i) and (ii) follow immediately from (\ref{def-zJ}) and Proposition \ref{propo-impor}.\hfill$\Box$\vspace{5mm} \begin{pro}\label{w-form} (i) If $X_\ell=A_1$, then each element $w$ of ${\mathcal W}$ has a unique expression of the form \begin{equation}\label{form=1} w=w(n,m_{r},z):=w_{\alpha_1}^n\prod_{r=1}^{\nu}t_{1,r}^{m_{r}}z\quad(n\in\{0,1\},\;m_{r}\in{\mathbb Z},\;z\in F(S)). \end{equation} (ii) If $X_\ell=A_\ell(\ell\geq2), \;D_\ell$ or $E_\ell$, then each element $w$ of ${\mathcal W}$ has a unique expression of the form \begin{equation}\label{form>1} w=w(\dot w,m_{i,r},m_{r,s} ):=\dot w\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}t_{i,r}^{m_{i,r}} \prod_{1\leq r<s\leq\nu}c_{r,s}^{m_{r,s}}, \end{equation} where $\dot w\in\dot{\mathcal W},$ and $m_{i,r},\;m_{r,s}\in{\mathbb Z}$. \end{pro} \noindent{\bf Proof. } (i) First, we can express each element $w\in{\mathcal W}$ in terms of the generators given in Proposition \ref{propo-impor}(ii). Next, we can reorder the generators in any such expression using (\ref{commutator-sim}), Proposition \ref{propo-impor}(vi), Corollary \ref{centers-1}(i), Lemma \ref{wtw-sim} and the fact that $w_{\alpha_1}^2=1$. Now to complete the proof it is enough to show that the expression of $w$ in the form (\ref{form=1}) is unique.
Let $w(n',m_{r}',z')$ be another expression of $w$ in the form (\ref{form=1}). Then from Proposition \ref{propo-impor}(i) and Lemma \ref{xi-6}, it follows that $n=n'$, $m_{r}=m_{r}'$ for all $r\in J_\nu$, and $z=z'$. (ii) Let $w\in{\mathcal W}$. By parts (iii) and (vi) of Proposition \ref{propo-impor}, Corollary \ref{centers-1}(ii), Lemma \ref{wtw-sim} and (\ref{commutator-sim}), $w$ can be written in the form (\ref{form>1}). Let $w(\dot{w}',m_{i,r}',m_{r,s}')$ be another expression of $w$ in the form (\ref{form>1}). Then from Proposition \ref{propo-impor}(i) and Lemma \ref{xi-6}, it follows that $\dot w=\dot w'$, $m_{i,r}=m_{i,r}'$ for all $(i,r)\in J_\ell\times J_\nu$, and $m_{r,s}=m_{r,s}'$ for all $1\leq r<s\leq\nu$. \hfill$\Box$\vspace{5mm} \vspace{5mm} \section{\bf A PRESENTATION FOR THE HEISENBERG-LIKE GROUP}\label{presentation-sim} \setcounter{equation}{0} We keep all the notation of Section \ref{heisenberg-sim}. In particular, $R$ is a simply laced extended affine root system and $\mathcal H$ is its Heisenberg-like group. We recall from Proposition \ref{propo-impor} that $F(S)=Z(\mathcal H).$ For $1\leq r<s\leq\nu$, we define \begin{eqnarray}\label{min-2-sim} n(r,s)=\min\{n\in\mathbb N: c_{r,s}^n\in F(S)\}. \end{eqnarray} \begin{rem}\label{rem1} It is important to notice that we may consider $C=\langle c_{r,s}:1\leq r<s\leq\nu\rangle$ as an abstract free abelian group (see Lemma \ref{xi-7}(ii)) and $F(S)$ as a subgroup whose definition depends only on the semilattice $S$. It follows that the integers $n(r,s)$ are uniquely determined by $S$ (and so by $R$). \end{rem} We note from (\ref{def-zJ}) that for $1\leq r<s\leq\nu$, depending on whether or not $\{r,s\}$ is in $\hbox{supp}(S)$, we have $z_{_{\{r,s\}}}=c_{r,s}$ or $z_{_{\{r,s\}}}=c_{r,s}^2$. Thus we have the following lemma.
\begin{lem}\label{nrs-1-sim} (i) $n(r,s)\in\{1,2\}$, $1\leq r<s\leq \nu.$ (ii) If $\{r,s\}\in\hbox{supp}(S)$ for all $1\leq r<s\leq\nu$, then $n(r,s)=1$ for all such $r,s$ and $$F(S)=\langle c_{r,s}~|~1\leq r<s\leq\nu\rangle.$$ In particular, this holds if $S$ is a lattice. (iii) If $\hbox{supp}(S) \subseteq\big\{\emptyset\big\}\cup\big\{\{r,s\}\mid 1\leq r\leq s\leq\nu\big\}$, then $$F(S)=\langle c_{r,s}^{n(r,s)}~|~1\leq r<s\leq\nu\rangle,$$ where \begin{eqnarray}\label{formula-1-sim} n(r,s)=\left\{\begin{array}{ll} 1, &\hbox{if}\; \{r,s\}\in\hbox{supp}(S),\vspace{2mm} \\ 2, &\hbox{if}\; \{r,s\}\not\in\hbox{supp}(S).\\ \end{array}\right. \end{eqnarray} \end{lem} Let $A=(a_{i,j})_{1\leq i,j\leq\ell}$ be the Cartan matrix of type $X_\ell$, that is, $$a_{i,j}=(\alpha_{i},\alpha_{j}),\quad i,j\in J_\ell.$$ \begin{lem}\label{div-1-sim} $n(r,s)\mid a_{i,j}$ for $i,j\in J_\ell$ and $1\leq r<s\leq\nu$. \end{lem} \noindent{\bf Proof. } By (\ref{commutator-sim}), $c_{r,s}^{a_{i,j}}=c_{r,s}^{(\alpha_i,\alpha_j)}\in F(S)$ for all $r,s\in J_\nu$ and $i,j\in J_\ell$. Now the result follows from the way $n(r,s)$ is defined. \hfill$\Box$\vspace{5mm} By Lemma \ref{div-1-sim}, we have $a_{i,j}n(r,s)^{-1}\in{\mathbb Z}$, $i,j\in J_\ell$, $1\leq r<s\leq\nu$. So from (\ref{comu-sim}) we have \begin{equation}\label{comm-3-sim} \begin{array}{c} [t_{i,r},t_{j,s}]=(c_{r,s}^{n(r,s)})^{a_{i,j}n(r,s)^{-1}}. \end{array} \end{equation} Note that the integer $n(r,s)$ plays a role only in type $A_1$, since in the other types $n(r,s)=1$. \begin{thm}\label{presen-1-sim} Let $R=R(X_\ell, S)$ be a simply laced extended affine root system of type $X_\ell$ and nullity $\nu$ with Heisenberg-like group $\mathcal H$. Let $A=(a_{i,j})$ be the Cartan matrix of type $X_\ell$, let $n(r,s)$ be the unique integers defined by (\ref{min-2-sim}), and let $m=\hbox{ind}(S)$.
If \begin{equation}\label{000} F(S) \hbox{ is generated by the elements }c_{r,s}^{n(r,s)},\;\; 1\leq r<s\leq\nu,\end{equation} then ${\mathcal H}$ is isomorphic to the group $\widehat{\mathcal H}$ defined by generators \begin{equation}\label{absh-sim} \left\{ \begin{array}{ll} y_{i,r}& 1\leq i\leq\ell,\;\;1\leq r\leq\nu,\\ z_{r,s}&1\leq r<s\leq\nu, \end{array}\right. \end{equation} and relations \begin{equation}\label{relations-sim} {\mathcal R}_{\widehat{\mathcal H}}:=\left\{\begin{array}{ll} [z_{r,s},z_{r',s'}],\vspace{2mm}\\ [y_{i,r},z_{r',s'}],\;[y_{i,r},y_{j,r}],\vspace{2mm}\\ [y_{i,r},y_{j,s}]=z_{r,s}^{a_{i,j}n(r,s)^{-1}},\;\;r<s. \end{array}\right. \end{equation} If $\ell>1$, or if $\ell=1$ and $\nu\leq 3$, then condition (\ref{000}) is automatically satisfied. Moreover, if $\ell>1$ then $n(r,s)=1$ for all $r,s$, while if $\ell=1$ and $\nu\leq 3$ the $n(r,s)$'s are given by the following table: \begin{comment} \begin{itemize} \item[(i)] if $\ell>1$, then $n(r,s)=1$ for all $r,s$, \item[(ii)] if $\ell=1$ and $\nu\leq 3$, then the assumption (\ref{000}) is automatically satisfied. Furthermore, $n(r,s)$'s are given by the following table: \end{itemize} \end{comment} \begin{equation}\label{tab-100} \begin{tabular}{c|c|c|c|c} $\nu$&$m$ & $n(1,2)$ & $n(1,3)$ & $n(2,3)$ \\ \hline 0 & 0 & - & - & - \\ \hline 1 & 1 & - & - & - \\ \hline 2 & 2 & 2 & - & - \\ & 3 & 1 & - & -\\ \hline 3 & 3 & 2 & 2 & 2 \\ & 4 & 2 & 2 &1 \\ & 5 & 2 & 1 & 1 \\ & 6 & 1 & 1 & 1\\ & 7 & 1 & 1 & 1\\ \hline \end{tabular} \end{equation} \end{thm} \vspace{2mm} \noindent{\bf Proof.
} By Proposition \ref{gen-H}, Proposition \ref{propo-impor}(vi), (\ref{def-F(S)}) and assumption (\ref{000}), we have $${\mathcal H}=\langle t_{i,r}\mid (i,r)\in J_\ell\times J_\nu\rangle F(S)=\langle t_{i,r},c_{r,s}^{n(r,s)}\mid 1\leq i\leq\ell,\;1\leq r<s\leq\nu\rangle.$$ From (\ref{comm-3-sim}) and the fact that $c_{r,s}^{n(r,s)}\in Z({\mathcal H})$, for $1\leq r<s\leq\nu$, it is clear that there exists a unique epimorphism $\varphi:\widehat{\mathcal H}\longrightarrow {\mathcal H}$ such that $\varphi(y_{i,r})=t_{i,r}$ and $\varphi(z_{r,s})= c_{r,s}^{n(r,s)}.$ We now prove that $\varphi$ is a monomorphism. Let $\hat{h}\in\widehat{\mathcal H}$ with $\varphi(\hat{h})=1$. By (\ref{relations-sim}), $\hat{h}$ can be written as \begin{equation*} \hat{h}=\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}y_{i,r}^{m_{i,r}}\prod_{1\leq r<s\leq\nu} z_{r,s}^{n_{r,s}}\qquad(m_{i,r}, n_{r,s}\in{\mathbb Z}). \end{equation*} Then $$1=\varphi(\hat{h})=\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}t_{i,r}^{m_{i,r}}\prod_{1\leq r<s\leq\nu} c_{r,s}^{n(r,s)n_{r,s}}.$$ Now it follows from Proposition \ref{w-form} and (\ref{free-abelian}) that $m_{i,r}=n_{r,s}=0$ for all $i,r,s$, and so $\hat{h}=1$. Next let $\ell>1$. By [AABGP, Proposition II.4.2], the semilattice $S$ involved in the structure of $R$ is a lattice, and so by Lemma \ref{nrs-1-sim}(ii), $n(r,s)=1$ for all $r,s$. Finally, let $\ell=1$. According to [AABGP, Proposition II.4.2], any extended affine root system of type $A_1$ and nullity $\leq 3$ is isomorphic to an extended affine root system of the form $R=R(A_1,S)$ where $\hbox{supp}(S)$ is given in Table \ref{tab-1}.
The result now follows immediately from Table \ref{tab-1} and Lemma \ref{nrs-1-sim}(ii)-(iii). \hfill$\Box$\vspace{5mm} \begin{comment} From Theorem \ref{presen-1-sim} and Lemma \ref{nrs-1-sim}, we have the following result. \begin{cor}\label{presen-2-sim} If $S$ is a lattice, then ${\mathcal H}$ is isomorphic to the group $\widehat{\mathcal H}$ defined by generators (\ref{absh-sim}) and relations (\ref{relations-sim}), where $n(r,s)=1$ for all $1\leq r<s\leq\nu$. \end{cor} We recall from Section \ref{Semilattices} that any semilattice $S$ in $\Lambda=\sum_{i=1}^{\nu}{\mathbb Z}\sigma_i$ of index $m$ has the form $S=S(J^0,\ldots, J^m)$ where $\{J^0,\ldots, J^m\}$ is the supporting set of $S$ with respect to the basis $B=\{\sigma_1,\ldots,\sigma_\nu\}$ of $S$. By [A5, Proposition 1.12], if $\nu\leq 3$, then the index determines uniquely, up to similarity, the semilattices in $\Lambda$. Thus, for $\nu\leq 3$ and up to similarity, the semilattices in $\Lambda$ are those listed in Table \ref{tab-1} of Appendix. The results of this section are summarized in the following theorem. \begin{thm}\label{mainthm1-sim} Let $\mathcal H$ be the Heisenberg-like group of a simply laced extended affine root system $R$ of type $X_\ell$, index $m$ and nullity $\nu$.
Then (i) If $\ell>1$, then $\mathcal H$ is isomorphic to the group $\widehat{\mathcal H}$ is defined by generators \begin{equation*} \left\{ \begin{array}{ll} y_{i,r},& 1\leq i\leq\ell,\;\;1\leq r\leq\nu,\vspace{1mm}\\ z_{r,s},&1\leq r,s\leq\nu \end{array}\right. \end{equation*} and the relations \begin{equation*} {\mathcal R}_{\widehat{\mathcal H}}:=\left\{\begin{array}{ll} [z_{r,s},z_{r',s'}],\vspace{2mm}\\ [y_{i,r},z_{r',s'}],\;\;[y_{i,r},y_{j,r}],\vspace{2mm}\\ [y_{i,r},y_{j,s}]=z_{r,s}^{a_{i,j}},\;\; r<s,\\ \end{array}\right. \end{equation*} where $A=(a_{i,j})_{1\leq i,j\leq\ell}$ is the Cartan matrix of type $X_\ell$ . (ii) if $\ell=1$ and $\nu\leq3$, then $\mathcal H$ is isomorphic to the group $\widehat{\mathcal H}$ is defined by generators \begin{equation*} \left\{ \begin{array}{ll} y_{r},& 1\leq r\leq\nu,\vspace{1mm}\\ z_{r,s},&1\leq r,s\leq\nu \end{array}\right. \end{equation*} and the relations \begin{equation*} {\mathcal R}_{\widehat{\mathcal H}}:=\left\{\begin{array}{ll} [z_{r,s},z_{r',s'}],\vspace{2mm}\\ [y_{r},z_{r',s'}],\;\;[y_{r},y_{r}],\vspace{2mm}\\ [y_{r},y_{s}]=z_{r,s}^{\delta(\nu,m,r,s)},\;\; r<s,\\ \end{array}\right. \end{equation*} where $\delta(\nu,m,r,s)$'s are given by Table \ref{tab-3}. \begin{tab}\label{tab-3} The numbers $\delta(\nu,m,r,s)$'s, for semilattice $S$ of rank $\nu\leq 3$ and index $m$. \end{tab} \begin{equation}\label{tab-1} \begin{tabular}{c|c|c|c|c} $\nu$&$m$ & $\delta(\nu,m,1,2)$ & $\delta(\nu,m,1,3)$ & $\delta(\nu,m,2,3)$ \\ \hline 0 & 0 & - & - & - \\ \hline 1 & 1 & - & - & - \\ \hline 2 & 2 & 1 & - & - \\ & 3 & 2 & - & -\\ \hline 3 & 3 & 1 & 1 & 1 \\ & 4 & 1 & 1 &2 \\ & 5 & 1 & 2 & 2 \\ & 6 & 2 & 2 & 2 \\ & 7 & 2 & 2 & 2\\ \hline \end{tabular} \end{equation} \end{thm} \noindent{\bf Proof. } (i) By [AABGP, Proposition II.4.2] the involved semilattice $S$ in the structure of $R$ is a lattices and so from Corollary \ref{presen-2-sim}, it follows that (i) holds. 
(ii) By Theorem \ref{presen-1-sim} and the fact $a_{1,1}=2$, it is enough to show that \begin{equation}\label{final-sim} F(S)=\langle c^{n(r,s)}_{r,s}\mid 1\leq r<s\leq\nu\rangle\quad\hbox{and}\quad \delta(\nu,m,r,s)=2n(r,s)^{-1} \end{equation} According to [AABGP, Proposition II.4.2], any extended affine root system of type $A_1$ and nullity $\leq 3$ is isomorphic to an extended affine root system of the form $R=R(A_1,S)$ where $S$ is one of the semilattices given in Table \ref{tab-1}. Considering the supporting sets of semilattices in this table we see immediately from Lemma \ref{nrs-1-sim}(ii)-(iii) and Table \ref{tab-3} the equalities (\ref{final-sim}) hold. \hfill$\Box$\vspace{5mm} \end{comment} \vspace{5mm} \section{\bf A PRESENTATION FOR EXTENDED AFFINE WEYL GROUPS}\label{presentation-W-sim} \setcounter{equation}{0} We keep the same notation as in the previous sections. As before, $R=R(X_\ell,S)$ is a simply laced extended affine root system of nullity $\nu$, ${\mathcal W}$ is its extended affine Weyl group and $\mathcal H$ is its Heisenberg-like group. Using the Coxeter presentation for the finite Weyl group $\dot{{\mathcal W}}$, Theorem \ref{presen-1-sim} and the semidirect product $\dot{{\mathcal W}}\ltimes {\mathcal H}$, we obtain a finite presentation for ${\mathcal W}$. Let $A=(a_{i,j})_{1\leq i,j\leq\ell}$ be the Cartan matrix of type $X_\ell$. We recall from \cite[Proposition 3.13]{Ka} that $\dot{\mathcal W}$ is a Coxeter group with generators $w_{\alpha_1},\ldots,w_{\alpha_\ell}$ and relations \begin{equation}\label{Coxe-rela} w_{\alpha_i}^2\quad\hbox{and}\quad (w_{\alpha_i}w_{\alpha_j})^{a_{i,j}^2+2}\quad (i\neq j). \end{equation} \begin{thm}\label{presen-4-sim} Let $R=R(X_\ell, S)$ be a simply laced extended affine root system of type $X_\ell$ and nullity $\nu$ with extended affine Weyl group ${\mathcal W}$. Let $A=(a_{i,j})$ be the Cartan matrix of type $X_\ell$ and let $n(r,s)$ be the unique integers defined by (\ref{min-2-sim}).
If \begin{equation}\label{000-w} F(S) \hbox{ is generated by the elements }c_{r,s}^{n(r,s)},\;\; 1\leq r<s\leq\nu,\end{equation} then ${\mathcal W}$ is isomorphic to the group $\widehat{{\mathcal W}}$ defined by generators \begin{equation*} \left\{\begin{array}{ll} x_i,&1\leq i\leq\ell, \\ y_{i,r},& 1\leq i\leq\ell,\; 1\leq r\leq\nu,\\ z_{r,s},&1\leq r<s\leq\nu\\ \end{array}\right. \end{equation*} and relations \begin{equation*} {\mathcal R}_{\widehat{{\mathcal W}}}:=\left\{\begin{array}{l} x_i^2,\;(x_ix_j)^{a_{i,j}^2+2},\quad (i\neq j),\vspace{2mm}\\ x_{i}y_{j,r}x_i=y_{j,r}y_{i,r}^{-a_{i,j}},\vspace{2mm}\\ [z_{r,s},z_{r',s'}],\;[y_{i,r},z_{r',s'}],\;[y_{i,r},y_{j,r}],\vspace{2mm}\\ [y_{i,r},y_{j,s}]=z_{r,s}^{a_{i,j}n(r,s)^{-1}},\;1\leq r<s\leq\nu. \end{array}\right. \end{equation*} Moreover, if $\ell>1$ then $n(r,s)=1$ for all $r,s$ (in particular, assumption (\ref{000-w}) holds). Furthermore, if $\ell=1$ and $\nu\leq 3$, then assumption (\ref{000-w}) is automatically satisfied and the relations $\mathcal R_{\widehat{{\mathcal W}}}$ reduce to \begin{equation*} {\mathcal R}_{\widehat{{\mathcal W}}}:=\left\{\begin{array}{l} x^2,\;\;xy_{r}x=y_{r}^{-1},\vspace{2mm}\\ [z_{r,s},z_{r',s'}],\;[y_{r},z_{r',s'}],\vspace{2mm}\\ [y_{r},y_{s}]=z_{r,s}^{2n(r,s)^{-1}},\;1\leq r<s\leq\nu, \end{array}\right. \end{equation*} where the $n(r,s)$'s are given explicitly by (\ref{tab-100}) (depending on $m=\hbox{ind}(S)$). \end{thm} \noindent{\bf Proof. } From parts (ii), (iii) and (vi) of Proposition \ref{propo-impor}, (\ref{def-F(S)}) and assumption (\ref{000-w}), it follows that \begin{equation} \begin{array}{ll} {\mathcal W}=\langle w_{\alpha_i},\; t_{i,r}\mid (i,r)\in J_\ell\times J_\nu\rangle F(S)\vspace{2mm}\\ \hspace{4mm}=\langle w_{\alpha_i},\; t_{i,r},\;c_{r,s}^{n(r,s)}\mid 1\leq i\leq\ell,\;1\leq r<s\leq\nu\rangle.
\\ \end{array} \end{equation} By (\ref{comm-3-sim}), Lemma \ref{wtw-sim}, (\ref{Coxe-rela}) and the fact that $c_{r,s}^{n(r,s)}\in Z({\mathcal H})$, for $1\leq r<s\leq\nu$, the assignment $x_i\longmapsto w_{\alpha_i}$, $y_{i,r}\longmapsto t_{i,r}$ and $z_{r,s}\longmapsto c^{n(r,s)}_{r,s}$ induces a unique epimorphism $\psi$ from $\widehat{{\mathcal W}}$ onto ${\mathcal W}$. Also by (\ref{Coxe-rela}), the restriction of $\psi$ to $\widehat{\dot{\mathcal W}}:=\langle x_i\mid 1\leq i\leq\ell\rangle$ induces the isomorphism \begin{equation}\label{finite-case} \widehat{\dot{\mathcal W}}\cong^{^{\hspace{-2mm}\psi}}\dot{{\mathcal W}}. \end{equation} We now show that $\psi$ is injective. Let $\psi(\hat{w})=1$, for some $\hat{w}\in\widehat{{\mathcal W}}$. From the defining relations for $\widehat{{\mathcal W}}$, it is easy to see that $\hat{w}$ can be written in the form \begin{equation*} \hat{w}=\hat{\dot{w}}\prod_{r=1}^{\nu}\prod_{i=1}^{\ell}y_{i,r}^{m_{i,r}} \prod_{1\leq r<s\leq\nu}z_{r,s}^{n_{r,s}} \quad(\hat{\dot{w}}\in\widehat{\dot{{\mathcal W}}},\hspace{1mm} n_{r,s}, m_{i,r}\in{\mathbb Z}). \end{equation*} Then $$ 1=\psi(\hat{w})=\psi(\hat{\dot{w}})\prod_{r=1}^{\nu} \prod_{i=1}^{\ell}t_{i,r}^{m_{i,r}} \prod_{1\leq r<s\leq\nu}c_{r,s}^{n(r,s)n_{r,s}}.$$ Therefore from (\ref{finite-case}) and Proposition \ref{w-form}, it follows that $m_{i,r}=0$, $n_{r,s}=0$ for all $i,r,s$ and $\hat{\dot w}=1$. Thus $\hat{w}=1$ and so ${\mathcal W}\cong\widehat{{\mathcal W}}$. Now an argument similar to the last paragraph of the proof of Theorem \ref{presen-1-sim} completes the proof. \hfill$\Box$\vspace{5mm} \begin{comment} From Theorem \ref{presen-4-sim} and Lemma \ref{nrs-1-sim}, we have \begin{cor}\label{presen-5-sim} If $S$ is a lattice, then ${\mathcal W}$ is isomorphic to the group $\widehat{{\mathcal W}}$ defined by generators (\ref{gen-W}) and relations (\ref{relations-W}), where $n(r,s)=1$ for all $1\leq r<s\leq\nu$.
\end{cor} \begin{thm}\label{presen-6-sim} Let ${\mathcal W}$ be the extended affine Weyl group of a simply laced extended affine root system $R$ of type $X_\ell$, index $m$ and nullity $\nu$. Let $A=(a_{i,j})_{1\leq i,j\leq\ell}$ be the Cartan matrix of type $X_\ell$. Then (i) If $\ell>1$, then ${\mathcal W}$ is isomorphic to the group $\widehat{{\mathcal W}}$ defined by generators \begin{equation*} \left\{\begin{array}{ll} x_i,&1\leq i\leq\ell, \\ y_{i,r},& 1\leq i\leq\ell,\; 1\leq r\leq\nu,\\ z_{r,s},&1\leq r<s\leq\nu\\ \end{array}\right. \end{equation*} and the relations \begin{equation*} {\mathcal R}_{\widehat{{\mathcal W}}}:=\left\{\begin{array}{l} x_i^2,\;(x_ix_j)^{a_{i,j}^2+2},\quad (i\neq j)\vspace{2mm}\\ x_{i}y_{j,r}x_i=y_{j,r}y_{i,r}^{-a_{i,j}},\vspace{2mm}\\ [z_{r,s},z_{r',s'}],\;[y_{i,r},z_{r',s'}],\;[y_{i,r},y_{j,r}],\vspace{2mm}\\ [y_{i,r},y_{j,s}]=z_{r,s}^{a_{i,j}},\;1\leq r<s\leq\nu, \end{array}\right. \end{equation*} (ii) if $\ell=1$ and $\nu\leq3$, then ${\mathcal W}$ is isomorphic to the group $\widehat{{\mathcal W}}$ defined by generators \begin{equation*} \left\{\begin{array}{ll} x,\;y_{r},& 1\leq r\leq\nu,\\ z_{r,s},&1\leq r<s\leq\nu\\ \end{array}\right. \end{equation*} and the relations \begin{equation*} {\mathcal R}_{\widehat{{\mathcal W}}}:=\left\{\begin{array}{l} x^2,\;\;xy_{r}x=y_{r}^{-1},\vspace{2mm}\\ [z_{r,s},z_{r',s'}],\;[y_{r},z_{r',s'}],\vspace{2mm}\\ [y_{r},y_{s}]=z_{r,s}^{\delta(\nu,m,r,s)},\;1\leq r<s\leq\nu, \end{array}\right. \end{equation*} where the $\delta(\nu,m,r,s)$'s are given by Table \ref{tab-3}. \end{thm} \noindent{\bf Proof. } By Theorem \ref{presen-4-sim}, we must show that \begin{equation}\label{fina} F(S)=\langle c^{n(r,s)}_{r,s}\mid 1\leq r<s\leq\nu\rangle \end{equation} and \begin{equation}\label{fina-1} n(r,s)=\left\{\begin{array}{ll} 1, & \hbox{if $\ell>1$,} \vspace{2mm}\\ \delta(\nu,m,r,s)/2, & \hbox{if $\ell=1$, $\nu\leq 3$,}\\ \end{array}\right.
\end{equation} where the numbers $\delta(\nu,m,r,s)$'s are given by Table \ref{tab-3}, if $\ell=1$ and $\nu\leq3$. First, let $\ell>1$. Then by [AABGP, Proposition II.4.2] the semilattice $S$ involved in the structure of $R$ is a lattice and so by Lemma \ref{nrs-1-sim}(ii), the equalities (\ref{fina}) and (\ref{fina-1}) hold. Next assume $\ell=1$ and $\nu\leq 3$. According to [AABGP, Proposition II.4.2], any extended affine root system of type $A_1$ and nullity $\leq 3$ is isomorphic to an extended affine root system of the form $R=R(A_1,S)$ where $S$ is one of the semilattices given in Table \ref{tab-1}. Considering the supporting sets of semilattices in this table we see immediately from Lemma \ref{nrs-1-sim}(ii)-(iii) and Table \ref{tab-3} that the equalities (\ref{fina}) and (\ref{fina-1}) hold. \hfill$\Box$\vspace{5mm} \end{comment} We close this section with the following remark. \begin{rem}\label{rem2} In Section \ref{heisenberg-sim}, we fixed a hyperbolic extension $(\tilde{\mathcal V},I)$ of $({\mathcal V},I)$, determined by the extended affine root system $R$, and then we defined the extended affine Weyl group ${\mathcal W}$ as a subgroup of $\hbox{O}(\tilde{\mathcal V},I)$. However, Remark \ref{rem1} and Theorem \ref{presen-4-sim} show that the definition of ${\mathcal W}$ is independent of the choice of this particular hyperbolic extension. \end{rem} \begin{comment} \vspace{5mm} \section{\bf Comparison with generalized Coxeter presentation} \label{comparison}\setcounter{equation}{0} According to Theorem \ref{presen-4-sim}, we have obtained a finite presentation for $A_1$-type extended affine Weyl groups with explicit relations if $\nu\leq 3$. As mentioned in the introduction, our result is the first finite presentation one obtains for $\nu>2$. In this section we compare the elements of our presentation, for type $A_1$ and nullity $2$, with those of the ``generalized Coxeter presentation'' given by [ST].
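For the computations in the remainder of this section it is convenient to record the commutator convention in force (this is the convention implicit in all of the calculations below): \begin{equation*} [g,h]:=g^{-1}h^{-1}gh, \qquad\hbox{so that, e.g.,}\qquad [y_{1},y_{2}]=y_{1}^{-1}y_{2}^{-1}y_{1}y_{2}. \end{equation*}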
To do so, we give a direct proof that the two presented groups are isomorphic. This in particular compares the generators and relations of the two presentations. Up to isomorphism there are two extended affine root systems of type $A_1$ ([AABGP, Proposition II.4.2]) which correspond to the semilattices $$ (2{\mathbb Z}\sigma_1\oplus2{\mathbb Z}\sigma_2)\cup(\sigma_1+2{\mathbb Z}\sigma_1\oplus2{\mathbb Z}\sigma_2)\cup (\sigma_2+2{\mathbb Z}\sigma_1\oplus2{\mathbb Z}\sigma_2)\quad\hbox{and}\quad {\mathbb Z}\sigma_1\oplus{\mathbb Z}\sigma_2.$$ In terms of the index $m=2,3$ of these semilattices we denote the corresponding extended affine root systems by $R(2)$ and $R(3)$ respectively. The {\it generalized Coxeter presentation}, given by [ST], for the extended affine Weyl group of $R(m)$, $m=2,3$, is as follows: $$ \tilde{{\mathcal W}}(2)=(a_0,a_1,a_1^*\mid {\mathcal R}(2))\quad\hbox{and}\quad \tilde{{\mathcal W}}(3)=(a_0,a_0^*,a_1,a_1^*\mid {\mathcal R}(3)), $$ where \begin{eqnarray}\label{index3} {\mathcal R}(2)=\left\{\begin{array}{l} a_0^2=a_1^2=(a_1^*)^2=1,\vspace{2mm}\\ (a_1a_1^*a_0)^2=(a_1^*a_0a_1)^2=(a_0a_1a_1^*)^2, \end{array}\right. \end{eqnarray} and \begin{eqnarray}\label{index4} {\mathcal R}(3)=\left\{\begin{array}{l} a_0^2=(a_0^*)^2=a_1^2=(a_1^*)^2=1,\vspace{2mm}\\ a_0a_0^*a_1a_1^*=a_0^*a_1a_1^*a_0=a_1a_1^*a_0a_0^*=a_1^*a_0a_0^*a_1. \end{array}\right. \end{eqnarray} Set \begin{eqnarray}\label{set-1} x:=a_1,\qquad y_{1}:=a_1a_0,\qquad y_{2}:=a_1^*a_1\quad\hbox{and}\quad z_m:=\left\{\begin{array}{ll} (a_0a_1^*a_1)^2, & \hbox{if}\; m=2\vspace{2mm} \\ a_0^*a_0a_1^*a_1, & \hbox{if}\; m=3.\\ \end{array}\right. \end{eqnarray} Since $a_1^2=1$, one can check easily that for $m=2,3$, \begin{eqnarray}\label{as-gen} \tilde{{\mathcal W}}(m)=\langle x,\;y_{1},\;y_{2},\; z_m\rangle. \end{eqnarray} \begin{lem}\label{relations-as} (i) $x^2=(xy_{1})^2=(xy_{2})^2=1$. (ii) $[x,z_m]=[y_{1},z_m]=[y_{2},z_m]=1$. (iii) $[y_{1},y_{2}]=z_m^{m-1}$. \end{lem} \noindent{\bf Proof.
} From (\ref{index3}) and (\ref{index4}), we have $x^2=a_1^2=1$, $$xy_{1}x=a_1a_1a_0a_1=a_0a_1=(a_1a_0)^{-1}=y_{1}^{-1}$$ and $$xy_{2}x=a_1a_1^*a_1a_1=a_1a_1^*=(a_1^*a_1)^{-1}=y_{2}^{-1}.$$ Thus (i) holds. Now we want to show that (ii) and (iii) hold. First let $m=3$. Then from (\ref{index4}) we have \begin{eqnarray*} [x,z_3]&=&[a_1,\;\;a_0^*a_0a_1^*a_1]\\ &=&a_1(a_0^*a_0a_1^*a_1)^{-1}a_1a_0^*a_0a_1^*a_1\\ &=&a_1(a_1a_1^*a_0a_0^*)a_1a_0^*a_0a_1^*a_1\\ &=&a_1^*a_0a_0^*a_1a_0^*a_0a_1^*a_1\\ &=&a_1^*a_0a_0^*a_1(a_1^*a_0a_0^*a_1)^{-1}=1, \end{eqnarray*} \begin{eqnarray*} \qquad\qquad\qquad[y_{1},z_3]&=&[a_1a_0,\;\;a_0^*a_0a_1^*a_1]\\ &=&a_0\big(a_1(a_0^*a_0a_1^*a_1)^{-1}a_1\big)a_0a_0^*a_0a_1^*a_1\\ &=&a_0[a_1,\;\;a_0^*a_0a_1^*a_1](a_0^*a_0a_1^*a_1)^{-1}a_0a_0^*a_0a_1^*a_1\\ &=&a_0[x,\;\;z_3](a_0^*a_0a_1^*a_1)^{-1}a_0a_0^*a_0a_1^*a_1\\ &=&a_0(a_0^*a_0a_1^*a_1)^{-1}a_0a_0^*a_0a_1^*a_1\\ &=&a_0(a_1a_1^*a_0a_0^*)a_0a_0^*a_0a_1^*a_1\\ &=&a_0a_1a_1^*a_0a_0^*a_0(a_0^*a_0a_1^*a_1)\\ &=&a_0a_1a_1^*a_0a_0^*a_0(a_1a_1^*a_0a_0^*)^{-1}\\ &=&a_0a_1a_1^*a_0a_0^*a_0(a_0^*a_1a_1^*a_0)^{-1}\\ &=&a_0a_1a_1^*a_0a_0^*a_0a_0a_1^*a_1a_0^*\\ &=&a_0(a_1a_1^*a_0a_0^*)a_1^*a_1a_0^*\\ &=&a_0(a_0a_0^*a_1a_1^*)a_1^*a_1a_0^*=1, \end{eqnarray*} \begin{eqnarray*} [y_{2},z_3]&=&[a_1^*a_1,\;\;a_0^*a_0a_1^*a_1]\\ &=&a_1a_1^*(a_0^*a_0a_1^*a_1)^{-1}a_1^*a_1a_0^*a_0a_1^*a_1\\ &=&a_1a_1^*(a_1a_1^*a_0a_0^*)(a_1^*a_1a_0^*a_0)a_1^*a_1\\ &=&a_1a_1^*(a_1a_1^*a_0a_0^*)(a_0a_0^*a_1a_1^*)^{-1}a_1^*a_1\\ &=&a_1a_1^*(a_1a_1^*a_0a_0^*)(a_1a_1^*a_0a_0^*)^{-1}a_1^*a_1\\ &=&1, \end{eqnarray*} and \begin{eqnarray*} [y_{1},y_{2}]&=&[a_1a_0,\;\;a_1^*a_1]\\ &=&a_0a_1a_1a_1^*a_1a_0a_1^*a_1\\ &=&a_0a_1^*a_1a_0a_1^*a_1\\ &=&(a_0a_1^*a_1a_0^*)(a_0^*a_0a_1^*a_1)\\ &=&(a_0^*a_0a_1^*a_1)^{2}=z_3^2. \end{eqnarray*} Finally let $m=2$. 
We have from (\ref{index3}) that \begin{eqnarray*} [x,z_2]&=&[a_1,\;\;(a_0a_1^*a_1)^2]\\ &=&a_1(a_0a_1^*a_1)^{-2}a_1(a_0a_1^*a_1)^2\\ &=&a_1(a_1a_1^*a_0a_1a_1^*a_0)a_1a_0a_1^*a_1a_0a_1^*a_1\\ &=&a_1^*a_0a_1a_1^*a_0a_1a_0a_1^*a_1a_0a_1^*a_1\\ &=&(a_1^*a_0a_1)^2(a_0a_1^*a_1)^2\\ &=&(a_1^*a_0a_1)^2(a_1a_1^*a_0)^{-2}\\ &=&(a_1^*a_0a_1)^2(a_1^*a_0a_1)^{-2}=1, \end{eqnarray*} \begin{eqnarray*} [y_{1},z_2]&=&[a_1a_0,\;\;(a_0a_1^*a_1)^2]\\ &=&a_0[a_1,\;(a_0a_1^*a_1)^2](a_0a_1^*a_1)^{-2}a_0(a_0a_1^*a_1)^2\\ &=&a_0(a_0a_1^*a_1)^{-2}a_0(a_0a_1^*a_1)^2\\ &=&a_0a_1a_1^*a_0a_1a_1^*a_0a_0a_0a_1^*a_1a_0a_1^*a_1\\ &=&a_0a_1a_1^*a_0a_1a_1^*a_0a_1^*a_1a_0a_1^*a_1\\ &=&(a_0a_1a_1^*)^2(a_0a_1^*a_1)^2\\ &=&(a_0a_1a_1^*)^2(a_1a_1^*a_0)^{-2}\\ &=&(a_0a_1a_1^*)^2(a_0a_1a_1^*)^{-2}=1, \end{eqnarray*} \begin{eqnarray*} [y_{2},z_2]&=&[a_1^*a_1,\;\;(a_0a_1^*a_1)^2]\\ &=&a_1a_1^*((a_0a_1^*a_1)^2)^{-1}a_1^*a_1(a_0a_1^*a_1)^2\\ &=&a_1a_1^*(a_1a_1^*a_0)^2a_1^*a_1(a_0a_1^*a_1)^2\\ &=&a_1a_1^*(a_0a_1a_1^*)^2a_1^*a_1(a_0a_1^*a_1)^2\\ &=&a_1a_1^*a_0a_1a_1^*a_0a_1a_1(a_0a_1^*a_1)^2 \end{eqnarray*} \begin{eqnarray*} &=&(a_1a_1^*a_0)^2(a_0a_1^*a_1)^2=1, \end{eqnarray*} and \begin{eqnarray*} [y_{1},y_{2}]&=&[a_1a_0,\;\;a_1^*a_1]\\ &=&a_0a_1a_1a_1^*a_1a_0a_1^*a_1\\ &=&a_0a_1^*a_1a_0a_1^*a_1\\ &=&(a_0a_1^*a_1)^2=z_2. \end{eqnarray*} \hfill$\Box$\vspace{5mm} Let $\hat{{\mathcal W}}(m)$ be the group defined by generators $x,y_1,y_2,z_m$ and relations as in Lemma \ref{relations-as}(i)-(iii). One can see from Section \ref{presentation-W-sim} that $\hat{{\mathcal W}}(m)$ is in fact the presentation we defined for the Weyl group of $R(m)$. Next we set $$b_0=xy_1,\quad b_1:=x,\quad \;b_1^*=y_2x, $$ and if $m=3$ we also set $$b_0^*:=xy_1y_2z_m^{-1}.$$ It is clear that \begin{eqnarray}\label{s-gen} \widehat{{\mathcal W}}(m)=\left\{\begin{array}{ll} \langle b_0,\;b_1,\;b_1^*\rangle & \hbox{if}\; m=2\vspace{2mm} \\ \langle b_0,\;b_0^*,\;b_1,\;b_1^*\rangle & \hbox{if}\; m=3.\\ \end{array}\right. 
\end{eqnarray} \begin{lem}\label{relations-saito} (i) $b_0^2=b_1^2=(b_1^*)^2=1$. (ii) If $m=2$, $(b_1b_1^*b_0)^2=(b_1^*b_0b_1)^2=(b_0b_1b_1^*)^2$. (iii) If $m=3$, $(b_0^*)^2=1$ and $b_0b_0^*b_1b_1^*=b_0^*b_1b_1^*b_0 =b_1b_1^*b_0b_0^*=b_1^*b_0b_0^*b_1$. \end{lem} \noindent{\bf Proof. } From the defining relations for $\hat{{\mathcal W}}(m)$ we have: (i) $b_1^2=x^2=1$, $b_0^2=xy_1xy_1=y_1^{-1}y_1=1$ and $(b_1^*)^2=y_2xy_2x=y_2y_2^{-1}=1.$ (ii) \begin{eqnarray*} \qquad(b_1b_1^*b_0)^2&=&(xy_2xxy_1)^2=xy_2y_1xy_2y_1=(xy_2x)(xy_1x)y_2y_1 =y_2^{-1}y_1^{-1}y_2y_1\\ &=&[y_2,\;y_1]=[y_2^{-1},\;y_1^{-1}]=y_2y_1xy_2y_1x=(y_2xxy_1x)^2\\ &=&(b_1^*b_0b_1)^2\\ &=&[y_2^{-1},\;y_1^{-1}]=[y_1,\;y_2^{-1}]=y_1^{-1}y_2y_1y_2^{-1}=xy_1xy_2y_1xy_2x\\ &=&xy_1xy_2xxy_1xy_2x=(xy_1xy_2x)^2\\ &=&(b_0b_1b_1^*)^2.\qquad\qquad \end{eqnarray*} (iii) \begin{eqnarray*} \qquad\qquad(b_0^*)^2&=&xy_1y_2z_m^{-1}xy_1y_2z_m^{-1}\\ &=&(xy_1x)(xy_2x)y_1y_2z_m^{-2}\\ &=&y_1^{-1}y_2^{-1}y_1y_2z_m^{-2}\\ &=&[y_1,\;y_2]z_m^{-2}=1, \end{eqnarray*} and \begin{eqnarray*} b_0b_0^*b_1b_1^*&=&xy_1xy_1y_2z_m^{-1}xy_2x\\ &=&xy_1xy_1y_2xy_2xz_m^{-1}=y_1y_1^{-1}y_2y_2^{-1}z_m^{-1}=z_m^{-1}\\ &=&y_1^{-1}y_2^{-1}y_2y_1z_m^{-1}=xy_1y_2z_m^{-1}xy_2xxy_1\\ &=&b_0^*b_1b_1^*b_0\\ &=&z_m^{-1}=y_2^{-1}y_1^{-1}y_1y_2z_m^{-1}=xy_2xxy_1xy_1y_2z_m^{-1}\\ &=&b_1b_1^*b_0b_0^*=z_m^{-1}=y_2y_1y_1^{-1}y_2^{-1}z_m^{-1}\\ &=&y_2xxy_1xy_1y_2xz_m^{-1}=y_2xxy_1xy_1y_2z_m^{-1}x\\ &=&b_1^*b_0b_0^*b_1. \end{eqnarray*} \hfill$\Box$\vspace{5mm} The following proposition clarifies the relations between the elements of our presentation, for type $A_1$, and those given by [ST]. \begin{pro}\label{saito-thm} $\widehat{{\mathcal W}}(m)\cong\tilde{{\mathcal W}}(m)$. \end{pro} \noindent{\bf Proof.
} By (\ref{s-gen}) and Lemma \ref{relations-saito} there exists an epimorphism $\varphi$ of $\tilde{{\mathcal W}}(m)$ onto $\widehat{{\mathcal W}}(m)$ so that $a_0\longmapsto b_0$, $a_1\longmapsto b_1$ and $a_1^*\longmapsto b_1^*$, and if $m=3$, $a_0^*\longmapsto b_0^*$. Also by (\ref{as-gen}) and Lemma \ref{relations-as}, there exists an epimorphism $\psi$ from $\widehat{{\mathcal W}}(m)$ onto $\tilde{{\mathcal W}}(m)$ so that $x\longmapsto a_1$, $y_{1}\longmapsto a_1a_0$, $y_{2}\longmapsto a_1^*a_1$ and $$z_m\longmapsto \left\{\begin{array}{ll} (a_0a_1^*a_1)^2, & \hbox{if}\; m=2\vspace{2mm} \\ a_0^*a_0a_1^*a_1, & \hbox{if}\; m=3.\\ \end{array}\right.$$ We are done if we show that $\varphi\psi=1$. It is enough to show that $\varphi\psi(x)=x$, $\varphi\psi(y_1)=y_1$, $\varphi\psi(y_2)=y_2$ and $\varphi\psi(z_m)=z_m$. But from the definitions of $\psi$ and $\varphi$ we have $$\begin{array}{l} \varphi\psi(x)=\varphi(a_1)=b_1=x,\vspace{2mm}\\ \varphi\psi(y_1)=\varphi(a_1a_0)=b_1b_0=xxy_1=y_1,\vspace{2mm}\\ \varphi\psi(y_2)=\varphi(a_1^*a_1)=b_1^*b_1=y_2xx=y_2, \end{array} $$ and $$\begin{array}{c} \varphi\psi(z_2)=\varphi((a_0a_1^*a_1)^2) =(b_0b_1^*b_1)^2=(xy_1y_2)^2=y_1^{-1}y_2^{-1}y_1y_2=z_2,\vspace{2mm}\\ \varphi\psi(z_3)=\varphi(a_0^*a_0a_1^*a_1)= b_0^*b_0b_1^*b_1=xy_1y_2z_m^{-1}xy_1y_2= y_1^{-1}y_2^{-1}y_1y_2z_m^{-1}=z_3. \end{array}$$ \hfill$\Box$\vspace{5mm} \begin{cor}\label{saito-thm1} $Z\big(\tilde{{\mathcal W}}(m)\big)=\left\{\begin{array}{ll} \langle (a_1a_1^*a_0)^2 \rangle & \hbox{if}\;\; m=2\vspace{2mm}\\ \langle a_1a_1^* a_0a_0^*\rangle & \hbox{if}\;\; m=3.\\ \end{array}\right.$ \end{cor} \noindent{\bf Proof. } Use the fact that $Z\big(\widehat{{\mathcal W}}(m)\big)=\langle z_m\rangle$ and the isomorphism $\psi$ in the proof of Proposition \ref{saito-thm}. \hfill$\Box$\vspace{5mm} \end{comment}
"""Ping the SoftLayer Message Queue service."""
# :license: MIT, see LICENSE for more details.
import SoftLayer
from SoftLayer.CLI import environment
from SoftLayer.CLI import exceptions

import click


@click.command()
@click.option('--datacenter', help="Datacenter, E.G.: dal05")
@click.option('--network', type=click.Choice(['public', 'private']),
              help="Network type")
@environment.pass_env
def cli(env, datacenter, network):
    """Ping the SoftLayer Message Queue service."""
    manager = SoftLayer.MessagingManager(env.client)
    okay = manager.ping(datacenter=datacenter, network=network)
    if okay:
        # A bare return value would be discarded by click; write to the
        # environment's output stream instead.
        env.fout('OK')
    else:
        # The exception must be raised, not merely constructed, to abort.
        raise exceptions.CLIAbort('Ping failed')
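One subtlety in the failure branch of a command like this: an exception object such as `CLIAbort('Ping failed')` only aborts the command if it is actually raised; merely constructing it has no effect. A stdlib-only sketch of that pattern (the `CLIAbort` class and `ping_result` helper below are local stand-ins for illustration, not the real SoftLayer imports):

```python
class CLIAbort(Exception):
    """Stand-in for SoftLayer.CLI.exceptions.CLIAbort."""


def ping_result(okay):
    # Merely evaluating CLIAbort('Ping failed') here would do nothing;
    # the exception must be raised for the caller to see the failure.
    if okay:
        return 'OK'
    raise CLIAbort('Ping failed')
```

A CLI framework then translates the uncaught exception into an error message and a non-zero exit status, while the success path simply reports 'OK'.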
Georgetown Mourns the Passing of an Inspirational Alumni Leader
Arthur Calcagnini, C'54 (1933-2019)

Alumni Reminisce, Celebrate 50 Years of First-Gen Support, Success
Alumni of Georgetown's Community Scholars Program gathered at a 50th-anniversary celebration Oct. 5 to reminisce and talk about the importance of continuing support for first-generation and underrepresented students.

Bradley Cooper Says Directorial Debut Mirrored Goal to Get Into Georgetown

Georgetown Professor Writes Dramatic Memoir from an Italian Villa
Much like the Italian laborers who built Georgetown University's Villa le Balze estate atop a cliff in Tuscany, Professor of English John Glavin (C'64, Parent'95) devoted years and abundant patience...

Black Student Alliance Celebrates 50-Year Legacy of Community and Support
Georgetown's Black Student Alliance marks history and milestones on its 50th anniversary.

Street Law Student Alum Patrick Campbell (C'92): Standing on Others' Shoulders
Patrick Campbell's (C'92) experience being taught by Georgetown Law students as a D.C. high school student in the Street Law program paved the way for his future success as a lawyer.

Black Student Alliance Celebrates its 50th Anniversary
Campus organization launched to provide support and sense of community for African-American students continues its legacy fifty years later
By Chelsea Burwell
Photo by Ndeye Ndiaye (C'18).
The 1960s...

In Memoriam: Professor Emeritus John G. "Jack" Murphy Jr. (L'61)
Alumnus and Professor Emeritus John G. "Jack" Murphy Jr. (L'61), who joined the Georgetown Law faculty in 1965, has passed away.
Glen Campbell Collection is a compilation album by Glen Campbell released in 2004 as a double CD and consisting of hits and album tracks recorded in the sixties, seventies and nineties. It was also released on digital media by EMI Gold. Some tracks were remastered in 2001, 2002 and 2003.

Track listing

Disc 1:
"By The Time I Get To Phoenix" (Jimmy Webb)
"Help Me Make It Through the Night" (Kris Kristofferson)
"Bridge Over Troubled Water" (Paul Simon)
"It's Only Make Believe" (Conway Twitty, Jack Nance)
"Unconditional Love" (Lowery, Sharp, Dubois)
"All I Have To Do Is Dream" (with Bobbie Gentry) (Boudleaux Bryant)
"Gentle On My Mind" (John Hartford)
"If Not For You" (Bob Dylan)
"Elusive Butterfly" (Lind)
"Galveston" (Jimmy Webb)
"(I Never Promised You A) Rose Garden" (South)
"Mr. Tambourine Man" (instrumental) (Bob Dylan)
"For The Good Times" (with Ernie Ford) (Kris Kristofferson)
"God Only Knows" (Brian Wilson, Tony Asher)
"King Of The Road" (instrumental) (Roger Miller)
"Little Green Apples" (with Bobbie Gentry) (Russell)
"You'll Never Walk Alone" (Richard Rodgers, Oscar Hammerstein)

Disc 2:
"Your Cheatin' Heart" (Hank Williams)
"Honey Come Back" (Jimmy Webb)
"For Once In My Life" (Murden, Miller)
"There Goes My Everything" (with Ernie Ford) (Frazier)
"Tomorrow Never Comes" (Ernest Tubb, Bond)
"California" (Micheal Smotherman)
"Crying" (Roy Orbison, Joe Melson)
"Rhinestone Cowboy" (Larry Weiss)
"Living In A House Full Of Love" (Sutton, Sherrill)
"Country Boy (You Got Your Feet In LA)" (Lambert, Potter)
"Cold Cold Heart" (Hank Williams)
"He Ain't Heavy, He's My Brother" (Bob Russell, Bobby Scott)
"Homeward Bound" (Paul Simon)
"I Believe" (Drake, Shirl, Stillman, Graham)
"Reason To Believe" (Tim Hardin)
"Marie" (Randy Newman)
"William Tell Overture" (Gioachino Rossini)
BSN Programs Arkansas

Health employers in Arkansas and across the country have always felt the pressure to hire more registered nurses to meet population demands. But since the Affordable Care Act rolled out in 2013, extending health insurance coverage to millions of Americans, the need for more nurses has almost doubled. A surge in Arkansas' population further complicates an already taxing situation. The growing and aging population requires greater care. In a state that is home to almost 3 million people, 600,000 (nearly 21 percent) are over the age of 60 and suffer some sort of chronic health condition. Vast improvements in the economy are also affecting the workforce, as many nurses who would otherwise have stayed on past retirement age are leaving the workforce. Employers have to think outside the box and offer attractive compensation packages to recruit more nurses, including hiring nurses with an associate degree as a way to boost their numbers. The Magnet designation from the American Nurses Credentialing Center requires hospitals to hire more nurses who have a bachelor's degree, but for employers confronted with a widening pool of vacancies, associate degree nurses are the next best option. Bold steps to relieve the nursing shortage in the state – in the long and short term – will stabilize the workforce, and the practice of hiring bachelor's degree nurses will revive. Several studies link higher nurse education with an improvement in patient outcomes. For this reason, the Institute of Medicine's Future of Nursing Report recommends the bachelor's degree in nursing as the minimum educational preparation for entry-level nursing. Steps to implement this recommendation are already underway as many colleges across the United States replace the ADN curriculum with the BSN. While it is employers and nursing associations making the call for better-educated nurses, RNs can also expect to reap innumerable benefits from pursuing the degree at the onset.
BSN graduates secure employment faster and earn higher compensation than their ADN counterparts. More than 78% of advertised hospital vacancies for registered nurses require a bachelor's degree. The BSN curriculum supplies advanced training in the areas of communication, leadership, public health, critical thinking, and decision-making, which empowers BSN-prepared nurses to secure positions in administration and management. The BSN RN can also branch out into new areas of interest or enroll in a graduate nursing program for advanced practice in one of the specialty areas – which leads to an even more rewarding career. The long-term benefits of the bachelor's degree are inarguable. However, there's a distinct short-term benefit that relates to patient care. The aging population is facing a multitude of complex conditions. Some patients suffer from multiple chronic conditions that require the knowledge and competencies of the BSN-prepared nurse. Nursing is a physically demanding profession, but the privilege of working with a healthcare team to nurse a patient back to health using the superior clinical skills obtained in the bachelor's program is priceless. Following is a list of the various types of BSN programs available in Arkansas. LPN to BSN: Licensed practical nurses are still relevant, but there has been a major shift in their place in healthcare. The expansion in the roles of nurses as a part of the healthcare team, stricter requirements for nursing staff in hospitals, and an increase in the need for direct care providers in long-term care have a direct impact on LPN employment opportunities. Today, more LPNs are employed in long-term care settings. Those who wish to remain in the fast-paced hospital environment will need to upgrade their education and skills. The LPN to BSN program allows the nurse to complete the requirements for the bachelor's degree in a much faster time as prior education and experience allow for advanced placement. 
RN to BSN: The rise in RN-to-BSN programs is a direct response to the call for better-educated nurses. Studies link better patient outcomes to the education of nurses. RNs who have a bachelor's degree or higher have better patient outcomes and lower morbidity rates according to one study. Hospitals seeking the Magnet designation are also influencing nurses' decision to return to school. The program is designed in a flexible online format, granting nurses the ability to balance work-life-education commitments. Full-time students can complete the program in 12 to 18 months, depending on former education. Traditional BSN: Any individual, with or without a prior education or experience in healthcare, can enroll in the traditional BSN program and complete the requirements in 3 to 4 years. The prospective student must have a high school diploma and complete the general education and prerequisite courses with a grade C or higher before enrolling in the nursing department to commence the core nursing courses. Graduates receive a bachelor's degree with a major in nursing that they can use to apply for the NCLEX for licensure as a registered nurse. Fast-track BSN: An individual who already holds a bachelor's degree can skip the general education requirements and commence the core nursing courses from the onset. This will allow the student to complete the program in 12 to 24 months with the transfer of courses. In some cases, the student may need to complete science and other prerequisite courses first, which will extend the time needed to complete the program. To practice as a registered nurse, graduates must apply to the Board of Nursing to take the NCLEX for licensure as a registered nurse. The traditional BSN program is designed for students with no prior nursing experience. The program produces graduates who have a broad knowledge base in preparation for practice.
Through extensive studies in the areas of liberal education, core competencies, professional values, and role development students are prepared to collaborate with other members of the healthcare team in the best interest of the patient. They can also function as a designer, manager and coordinator of care and have the foundational competencies to assume leadership positions in healthcare or pursue graduate study. The content of the curriculum is as important as where you enroll to receive the training. To secure the Board of Nursing's approval to sit the NCLEX, you must attend a program that is approved by the board. It is also essential to choose a program that is accredited by the Accreditation Commission for Education in Nursing (ACEN), especially if you plan to pursue graduate study. Common enrollment criteria include: Registration as a student at the specific college before applying to the Department of Nursing. A completed application submitted before the specified deadline. A passing score on the HESI A2 admission exam. A grade C or above on the prerequisite courses. Official transcripts of all college-level work completed. Provide letters of reference from professional and academic sources. Liability insurance, criminal background clearance, BLS for Healthcare Providers, documentation of immunization (or waiver), TB clearance, and a negative drug screen report must be completed by students who receive conditional acceptance. Prerequisite courses may include anatomy and physiology, microbiology, college composition, algebra, and psychology. The second-degree BSN lets professionals take up an alternative career in the shortest possible time. The curriculum is designed to allow persons who hold a bachelor's degree to complete the core nursing courses and earn a bachelor's of science in nursing. The prospective student must have a degree from an accredited program to transfer the general education and science courses. Prerequisite courses may apply.
The intensive nature of the 12 to 18-month program requires students to attend full-time, and employment is not recommended. Admission to the program requires official transcripts, a cumulative GPA of 2.0 and above, an acceptable score on the HESI A2 pre-admission exam, criminal background clearance, health clearance, a negative drug screen, personal interview, and professional references. Graduates are eligible to sit the NCLEX-RN to become a registered nurse. Unrealistic expectations or improper planning could affect your ability to complete college. Meet with a college advisor early to work out the cost of attendance, apply for financial aid, and prepare a budget to manage the expenses not covered by financial aid. Public schools are more affordable than private ones. However, admission might be competitive, especially for nursing programs. Evaluate your expenses for the entire two to four years to avoid roadblocks, as much as possible, along the way. Some expenses you must budget for include clinical supplies, shoes, uniform, living expenses, tuition, transportation, and fees. Financial aid will take into account all the resources you can use to pay your way through college, including funding from the government, scholarships, grants, family support, and private loans. Scholarships and grants are an excellent resource as they represent money you won't have to repay. Interest-free loans from family and friends are also agreeable, especially when compared to high-interest private loans. Health care employers frequently offer aid in return for your commitment to work with the organization. Registered nurses are in short supply, so there are a lot of employer incentives for nursing-related education. The approximate cost of completing a BSN at public universities in Arkansas is $37,375. Here's a list of Board-approved registered nursing programs with NCLEX-RN pass rates in Arkansas.
Arkansas BSN Programs and NCLEX-RN Pass Rate

Arkadelphia, AR BSN Programs:
1100 Henderson Street, Arkadelphia, AR 71999

Conway, AR BSN Programs:
201 Donaghey Avenue, Conway, AR 72035
NCLEX-RN Pass Rate: 89.8%

Fayetteville, AR BSN Programs:
3189 Bell, 1 University of Arkansas 800 West Dickson Fayetteville, AR 72701

Fort Smith, AR BSN Programs:
University of Arkansas at Fort Smith
5210 Grand Avenue, Fort Smith, AR 72913-3649

Jonesboro, AR BSN Programs:
2105 Aggie Rd, Jonesboro, AR 72467

Little Rock, AR BSN Programs:
4301 West Markham Street, Little Rock, AR 72205

Magnolia, AR BSN Programs:
Southern Arkansas University
100 East University, Magnolia, AR 71753-5000

Monticello, AR BSN Programs:
346 University Drive, Monticello, AR 71656

Russellville, AR BSN Programs:
Arkansas Tech University – Ozark
1605 N Coliseum Dr, Russellville, AR 72801

Searcy, AR BSN Programs:
Harding University
915 East Market Avenue, Searcy, AR 72143
The embattled president of the National Union of Public Workers (NUPW), Akanni McDowall, whose leadership is under threat, has accused the union's highest decision-making body of accepting a new Medicare plan without considering all the facts.

In a three-page letter dated May 20, 2016, signed by McDowall and submitted to the National Council last week, the president seeks to make it clear that at no time throughout the process did he make any unilateral decision in relation to the plan. However, he took the Council to task for going ahead and accepting Sagicor as the insurance provider and Capita as its broker without investigating all other available options.

Barbados TODAY has received a copy of the letter, in which McDowall also accuses the Council of shutting out alternative proposals from the new chairman of the NUPW's Insurance Inc. board, Tyrone Lowe.

"On May 5, 2016, a special council meeting was convened and chaired by the first vice president [of NUPW] in the absence of the president, where Capita was given an opportunity to present their proposal before the council," the president's letter notes.

"Unfortunately, that equal opportunity was not afforded the chairman of the Insurance Inc. to present the board's proposal and therefore could not fulfill the mandate as entrusted to them by the said council previously," it adds.

"As a result, the council made an ill-informed decision to accept Capita as the broker and Sagicor as the provider to carry NUPW's insurance plan," he said.

McDowall also condemned the previous board of the NUPW Insurance Inc. for the role it played leading up to Insurance Corporation of Barbados [ICBL] terminating its coverage of the Medicare policy.
The President's position on the plan, which is due to service the union's 1,800 Medicare policyholders, has been cited as one of the main reasons for a no confidence motion that has been tabled against him, with some union members currently accusing McDowall of, among other things, acting outside of his authority in seeking to overturn the National Council's decision to accept Sagicor as the new insurance provider and Capita Financial as the broker for the NUPW's medical plan. McDowall has described the claims, including condemnation of his participation in last month's Opposition march for justice, as "spurious".

I back you all the way Akanni! help elect you there! You should simply take the bribes like all the others. If you do take any of the excellent packages or gifts, you can be rest assured that when the time for muzzling comes, they can throw what they give you right back in your face. If you doubt me, simply ask all the past Presidents. Who ain't get big rides got diplomatic positions with a nice payment package. So Akanni simple take the bribe, and all is well. He did his own self in, simple as that. Sunshine sunny sunshine I agree with you totally as these ignorant members who oppose this young guy is blind to the facts. They have not had an increase for 8 years but the former chairman got a big job promotion for holding out hence he got his big increase they got none. The former GS got his big package and the one before a diplomatic post. After all of this the young man unlike Derek Alleyne and Murrel two well known Dems are trying to get back in to hold out and avoid any pressure on the Government and hence no increase. Aren't they really ignorant.
\section{Introduction} Wetting and dewetting phenomena are ubiquitous in soft matter systems and have a profound impact on many disciplines, including biology~\cite{Prakash2012}, microfluidics~\cite{Geoghegan2003}, and microfabrication~\cite{Chakraborty2010}. One problem of great interest concerns the suspension of fluid films on or near structured surfaces where, depending on the interplay of competing short-range molecular or capillary forces (e.g. surface tension), gravity, and long-range dispersive interactions (i.e. van der Waals or more generally, Casimir forces), the film may undergo wetting or dewetting transitions, or exist in some intermediate state, forming a continuous surface profile of finite thickness~\cite{Bonn2009, Geoghegan2003}. Thus far, theoretical analyses of these competing effects have relied on approximate descriptions of the dispersive van der Waals (vdW) forces~\cite{arodreview, Israelachvili, Parsegian}, i.e. so-called Derjaguin~\cite{Derjaguin1934} and Hamaker~\cite{Hamaker1937} approximations, which have recently been shown to fail when applied in regimes that fall outside of their narrow range of validity~\cite{Buscher2004, lambrechtPWS, Emig2001,arodreview}. In this paper, building on recently developed theoretical techniques for computing Casimir forces in arbitrary geometries~\cite{Reid2013, Reid2009}, we demonstrate an approach for studying the equilibrium shapes (the wetting and dewetting properties) of liquid surfaces that captures the full non-additivity and non-locality of vdW interactions~\cite{aroddesigner}. 
As a proof of concept, we consider the problem of a fluid surface on or near a periodic grating, idealized as a deformable perfect electrical conductor (PEC) surface (playing the role of a fluid surface) interacting through vacuum below a fixed periodic PEC grating [\figref{schematic}], and show that the competition between surface tension and non-additive vdW pressure leads to quantitatively and qualitatively different equilibrium fluid shapes and wetting properties compared with predictions based on commonly employed additive approximations. Our simplifying choice of PEC surfaces allows for a scale-invariant analysis of the role of geometry on both non-additivity and fluid deformations, ignoring effects associated with material dispersion that would otherwise further complicate our analysis and which are likely to result in even larger deviations~\cite{Noguez2004,arodreview}. Our results provide a basis for experimental studies of fluid suspensions in situations where vdW non-additivity can have a significant impact. Equilibrium fluid problems are typically studied by way of the augmented Young-Laplace equation~\cite{interfacecolloidYLE}, \begin{equation} \gamma \nabla \cdot \left(\frac{\nabla \Psi}{\sqrt{1+|\nabla \Psi|^2}}\right) + \frac{\delta}{\delta\Psi} \left(\mathcal{E}_{\mathrm{other}}[\Psi]+\mathcal{E}_{\mathrm{vdW}}[\Psi] \right) = 0 \label{eq:YLE} \end{equation} describing the local balance of forces (variational derivatives of energies) acting on a fluid of surface profile $\Psi(\vec{x})$. The first two terms describe surface and other external forces (e.g. gravity), with $\gamma$ denoting the fluid--vacuum surface tension, while the third term $\frac{\delta}{\delta \Psi} \mathcal{E}_{\mathrm{vdW}}$ denotes the local disjoining pressure arising from the changing vdW fluid--substrate interaction energy $\mathcal{E}_{\mathrm{vdW}}$. 
Semi-analytical~\cite{Quinn2013, Ledesma2012nanoscale} and brute-force~\cite{Ledesma2012multiscale, Sweeney1993} solutions of the YLE have been pursued in order to examine various classes of wetting problems, including those arising in atomic force microscopy, wherein a solid object (e.g. spherical tip) is brought into close proximity to a fluid surface~\cite{Quinn2013, Ledesma2012nanoscale, Ledesma2012multiscale}, or those involving liquids on chemically~\cite{Bauer1999, Checco2006} or physically~\cite{Geoghegan2003, Bonn2009, Sweeney1993} textured surfaces. A commonality among prior theoretical studies of \eqref{YLE} is the use of simple, albeit heuristic approximations that treat vdW interactions as additive forces, often depending on the shape of the fluid in a power-law fashion~\cite{Derjaguin1934, Derjaguin1956, Hamaker1937}. Derjaguin or proximity-force approximations (PFA) are applicable in situations involving nearly planar structures, i.e. small curvatures compared to their separation, approximating the interaction between the objects as an additive, pointwise summation of plate--plate interactions between differential elements comprising their surfaces~\cite{Derjaguin1934, Derjaguin1956}. Hamaker or pairwise-summation (PWS) approximations are applicable in situations involving dilute media~\cite{lambrechtPWS}, approximating the interaction between two objects as arising from the pairwise summation of (dipolar) London--vdW~\cite{caspol1} or Casimir--Polder~\cite{caspol2} forces between volumetric elements of the same constitutive materials~\cite{Hamaker1937}; such a treatment necessarily neglects multiple-scattering and other non-additive effects. 
When applied to geometries consisting of planar interfaces, PFA can replicate exact results based on the so-called Lifshitz theory (upon which it is based)~\cite{Dzyaloshinskii1961}, whereas PWS captures the distance dependence obtained by exact calculations but differs in magnitude (except in dilute situations)~\cite{lambrechtPWS}. Typically, the quantitative discrepancy of PWS is rectified via a renormalization of the force coefficient to that of the Lifshitz formula, widely known as the Hamaker constant~\cite{Bergstrom1997}. The inadequacy of these additive approximations in situations that fall outside of their range of validity has been a topic of significant interest, spurred by the recent development of techniques that take full account of complicated non-additive and boundary effects arising in non-planar structures, revealing non-monotonic, logarithmic, and even repulsive interactions stemming from geometry alone~\cite{arodreview, aroddesigner, arodpistons, bordagcyl}. These brute-force techniques share little semblance with additive approximations, which offer computational simplicity and intuition at the expense of neglecting important electromagnetic effects. In particular, the exact vdW energy in these modern formulations is often cast as a log-determinant expression involving the full (no approximations) electromagnetic scattering properties of the individual objects, obtained semi-analytically or numerically by exploiting spectral or localized basis expansions of the scattering unknowns~\cite{arodreview, Lambrecht2006}. The generality of these methods does, however, come at a price, with even the most sophisticated of formulations requiring thousands or hundreds of thousands of scattering calculations to be performed~\cite{arodreview}. 
Despite the fact that fluid suspensions motivated much of the original theoretical work on vdW interactions between macroscopic bodies~\cite{Lamoreaux2006, Dzyaloshinskii1961, Israelachvili, Parsegian}, to our knowledge these recent techniques have yet to be applied to wetting problems in which non-additivity and boundary effects are bound to play a significant role on fluid deformations. \begin{figure}[t!] \centering \includegraphics[width=0.75\columnwidth]{unitcellschematic.eps} \caption{Schematic of fluid--grating geometry comprising a fluid (blue) of surface profile $\Psi(\vec{x})$ in close proximity (average distance $d$) to a solid grating (red) of height profile $h(\vec{x})$, involving thin nanorods of height $H$, thickness $2P$, and period $\Lambda$. (a) Representative mesh employed by a recently developed FSC boundary-element method~\cite{SCUFF1} for computing exact vdW energies in complex geometries. (b) and (c) illustrate commonly employed pairwise--summation (PWS) and proximity--force approximations (PFA), involving volumetric and surface interactions throughout the bodies, respectively.} \label{fig:schematic} \end{figure} \emph{Methods.--} In order to solve \eqref{YLE} in general settings, we require knowledge of $\frac{\delta}{\delta\Psi} \mathcal{E}_{\mathrm{vdW}} [\Psi]$ for arbitrary $\Psi$. 
We employ a mature and freely available method for computing vdW interactions in arbitrary geometries and materials~\cite{SCUFF1,SCUFF2}, based on the fluctuating--surface current (FSC) framework~\cite{Reid2009, Reid2013} of electromagnetic scattering, in which the vdW energy, \begin{equation} \mathcal{E}_\mathrm{FSC} = \frac{\hbar}{2\pi}\int_0^\infty \mathrm{d}\xi \, \ln(\det(\mathbb{M} \mathbb{M}_{\infty}^{-1})) \label{eq:FSC} \end{equation} is expressed in terms of ``scattering'' matrices $\mathbb{M}$, $\mathbb{M}_\infty$ involving interactions of surface currents (unknowns) flowing on the boundaries of the bodies~\cite{Reid2009, Reid2013} and integrated along imaginary frequencies $\xi = \mathrm{i} \omega$; these are computed numerically via expansions in terms of localized basis functions, or triangular meshes interpolated by linear polynomials [\figref{schematic}(a)], in which case it is known as a boundary element method. Because exact methods most commonly yield the total vdW energy or force, rather than the local pressure on $\Psi$, it is convenient to consider the YLE in terms of an equivalent variational problem for the total energy~\cite{bormashenko,silin}: \begin{equation} \min_{\Psi} \; \left(\gamma \int \sqrt{1 + |\nabla \Psi|^2} + \mathcal{E}_{\mathrm{other}}[\Psi] + \mathcal{E}_{\mathrm{vdW}}[\Psi]\right), \label{eq:Emin} \end{equation} where just as in \eqref{YLE}, the first term captures the surface energy, the second captures contributions from gravity or bulk thermodynamic/fluid interactions, and the third captures the dispersive vdW interaction energy. For simplicity, we ignore other competing interactions, including thermodynamic and viscous forces~\cite{Ledesma2012multiscale, Ledesma2012nanoscale} and neglect gravity when considering nanoscale fluid deformations, focusing instead only on the impact of surface and dispersive vdW interactions. 
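As a toy illustration of the structure of \eqref{FSC} (an imaginary-frequency integral of a log-determinant), the sketch below evaluates the energy for placeholder $2\times 2$ matrices; these matrices are illustrative assumptions only, not actual FSC surface-current blocks from a boundary-element solver, and we set $\hbar = 1$:

```python
import numpy as np
from scipy.integrate import quad

HBAR = 1.0  # natural units; for PEC bodies all scales can be folded into the period

def log_det_ratio(xi, M, M_inf):
    """ln det(M(xi) M_inf(xi)^{-1}), evaluated stably via slogdet."""
    _, logdet = np.linalg.slogdet(M(xi) @ np.linalg.inv(M_inf(xi)))
    return logdet

def fsc_energy(M, M_inf, xi_max=50.0):
    """E = (hbar / 2 pi) * integral_0^xi_max d(xi) ln det(M M_inf^{-1})."""
    val, _ = quad(lambda xi: log_det_ratio(xi, M, M_inf), 0.0, xi_max)
    return HBAR / (2.0 * np.pi) * val

# Placeholder 2x2 "scattering" matrices: the off-diagonal coupling decays with
# imaginary frequency, mimicking an attractive two-body interaction.
M = lambda xi: np.array([[1.0, 0.5 * np.exp(-xi)],
                         [0.5 * np.exp(-xi), 1.0]])
M_inf = lambda xi: np.eye(2)
```

Here $\det(\mathbb{M}\mathbb{M}_\infty^{-1}) = 1 - 0.25\,e^{-2\xi} < 1$, so the log-determinant is negative and the resulting toy energy is binding, as expected for an attractive configuration.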
\Eqref{Emin} can be solved numerically via any number of available nonlinear optimization/minimization techniques~\cite{bormashenko,silin}, requiring only a convenient parametrization of $\Psi$ using a finite number of degrees of freedom. In what follows, we consider numerical solution of \eqref{Emin} for the particular case of a deformable incompressible PEC surface $\Psi$ interacting through vacuum with a 1d-periodic PEC grating of period $\Lambda$ and shape $h(\vec{x}) = d - H\left(\frac{1}{e^{\alpha (x - P)} + 1} + \frac{1}{e^{-\alpha (x + P)} + 1} - 2\right)$, for $|x| < \frac{\Lambda}{2}$, with half-pitch $P = 0.03\Lambda$ and height $H =1.2\Lambda$. \Figref{schematic} shows the grating surface and fluid profile obtained by solving \eqref{Emin} for a representative set of parameters and mesh discretization. Here, $d = 0.4\Lambda$ is the initial minimum grating-fluid separation, and $\alpha\Lambda = 150$ is a parameter that smoothens otherwise sharp corners in the grating, alleviating spatial discretization errors in the calculation of $\mathcal{E}_{\mathrm{vdW}}$ while having a negligible impact on the qualitative behavior of the energy compared to what one might expect from more typical, piecewise-constant gratings~\cite{Buscher2004}. To minimize the energy, we employ a combination of algorithms found in the NLOPT optimization suite~\cite{NLOPT, COBYLA, BOBYQA}. Although the localized basis functions or mesh of the FSC method provide one possible parametrization of the surface, for the class of periodic problems explored here, a simple Fourier expansion of the surface provides a far more efficient and convenient basis, requiring far fewer degrees of freedom to describe a wide range of periodic shapes. 
Because the grating is translationally invariant along the $z$ direction and mirror-symmetric about $x = 0$, we parametrize $\Psi$ in terms of a cosine basis, $\Psi(\vec{x}) = \sum_{n} c_n \cos\left(\frac{2\pi{} nx}{\Lambda}\right)$, with the finite number of coefficients $\{c_n\}$ functioning as minimization parameters. As we show below, this choice not only offers a high degree of convergence, requiring typically less than a dozen coefficients, but also automatically satisfies the incompressibility or volume-conservation condition $\int \Psi = 0$, which would otherwise require an additional, nonlinear constraint. Note that the optimality and efficiency of the minimization can be significantly improved when local derivative information (with respect to the minimization parameters) is available, but given that even a single evaluation of $\mathcal{E}_{\mathrm{vdW}} [\Psi]$ is expensive---a tour-de-force calculation involving hundreds of scattering calculations~\cite{arodreview}---this is currently prohibitive in the absence of an adjoint formulation (a topic of future work)~\cite{Giles2000}. Given our interest in equilibrium fluid shapes close to the initial condition of a flat fluid surface ($\Psi = 0$) and because of the small number of degrees of freedom $\{c_n\}$ needed to resolve the shapes, we find that local, derivative-free optimization is sufficiently effective, yielding fast-converging solutions. 
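The minimization strategy above can be sketched with a toy energy functional; everything here is an assumption for illustration (a repulsive PFA-like term against a smooth cosine-modulated wall stands in for the exact FSC energy, and SciPy's COBYLA replaces the NLOPT routines used in the paper), but the cosine-basis parametrization, automatic volume conservation, and small-trust-radius derivative-free local minimization mirror the approach described in the text:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

LAMBDA, GAMMA = 1.0, 1.0                          # period and surface tension (toy units)
x = np.linspace(-LAMBDA / 2, LAMBDA / 2, 401)
h = 0.4 + 0.2 * np.cos(2 * np.pi * x / LAMBDA)    # assumed smooth wall (not the paper's grating)

def surface(c):
    """Psi(x) = sum_n c_n cos(2 pi n x / Lambda); each mode integrates to zero over a period."""
    n = np.arange(1, len(c) + 1)
    return np.cos(2 * np.pi * np.outer(x, n) / LAMBDA) @ np.asarray(c)

def energy(c):
    psi = surface(c)
    gap = h - psi
    if np.any(gap < 0.05):                        # guard: keep the fluid below the wall
        return 1e6
    dpsi = np.gradient(psi, x)
    e_surf = GAMMA * trapezoid(np.sqrt(1.0 + dpsi**2), x)  # surface energy
    e_vdw = trapezoid(gap**-3, x)                 # repulsive PFA-like stand-in for E_vdW
    return e_surf + e_vdw

# Derivative-free local minimization over a handful of cosine coefficients,
# starting from the flat profile Psi = 0 with a small initial trust radius.
res = minimize(energy, x0=np.zeros(4), method='COBYLA',
               options={'rhobeg': 0.01, 'maxiter': 2000})
```

Under repulsion the minimizer flattens the gap (positive $c_1$ partially cancels the wall modulation), trading a small surface-energy cost for a larger reduction in the convex $\mathrm{gap}^{-3}$ term, while $\int\Psi = 0$ holds term by term without any explicit constraint.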
In what follows, we compare the solutions of~\eqref{Emin} based on~\eqref{FSC} against those obtained through PFA and PWS, which approximate $\mathcal{E}_{\mathrm{vdW}}$ in this periodic geometry as: \begin{align} \mathcal{E}_{\mathrm{PFA}} &= -\frac{\pi^2 \hbar c}{720} \int_{-\Lambda/2}^{\Lambda/2} \mathrm{d}x \left(\frac{1}{h(x) - \Psi(x)}\right)^{3} \label{eq:PFA} \\ \mathcal{E}_\mathrm{PWS} &= A \int_{-\Lambda/2}^{\Lambda/2} \mathrm{d}x' \int_{-\infty}^{\infty} \mathrm{d}x \int_{h(x')}^{\infty} \mathrm{d}y' \int_{-\infty}^{\Psi(x)} \mathrm{d}y \frac{1}{s^6}, \label{eq:PWS} \end{align} where $A = -\frac{2\pi\hbar c}{45}$ is a Hamaker-like coefficient obtained by requiring that \eqref{PWS} yield the correct vdW energy for two parallel PEC plates, as is typically done~\cite{Bergstrom1997}. \Eqref{PWS} is obtained from pairwise integration of the $r^{-7}$ Casimir--Polder interactions following integration over $z$ and $z'$, with $r = \sqrt{s^2 + (z - z')^2}$ and $s = \sqrt{(x - x')^2 + (y - y')^2}$~\footnote{Note that in situations involving a deformed PEC surface and flat PEC plate, one can show that $\mathcal{E}_\mathrm{PWS} = \mathcal{E}_\mathrm{PFA}$~\cite{Emig2003}, as this is a direct consequence of the additivity of the interaction.}. Note that because we only consider perfect conductors, there is no dispersion to set a characteristic length scale and hence all results can be quoted in terms of an arbitrary length scale, which we choose to be $\Lambda$. Additionally, we express the surface tension $\gamma$ in units of $\gamma_{\mathrm{vdW}} = \frac{\pi^{2}\hbar c}{720d^{3}}$, the vdW energy per unit area between two flat PEC plates separated by distance $d$. In what follows, we consider the impact of non-additivity on the fluid shape under both repulsive [\figref{fig2}] or attractive [\figref{fig3}] vdW pressures (obtained by appropriate choice of its sign), under the simplifying assumption of PEC surfaces interacting through vacuum. 
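A quick numerical sanity check of \eqref{PFA}: for a flat fluid surface $\Psi = 0$ facing a flat wall $h = d$, the integral must reduce to the parallel-plate Casimir energy per period, $-\pi^2\hbar c\,\Lambda/(720 d^3)$. The sketch below (units $\hbar = c = 1$, an assumption for illustration) verifies this and evaluates the same quadrature for the smoothed grating profile defined in the text:

```python
import numpy as np
from scipy.integrate import trapezoid

HBAR = C = 1.0                                    # natural units (illustrative assumption)
LAMBDA, D = 1.0, 0.4

def e_pfa(h, psi, x):
    """Eq. (PFA): pointwise plate--plate energy summed over one unit cell."""
    return -(np.pi**2 * HBAR * C / 720.0) * trapezoid((h - psi) ** -3, x)

x = np.linspace(-LAMBDA / 2, LAMBDA / 2, 2001)

# Flat wall: must reproduce the parallel-plate result per period.
flat = e_pfa(np.full_like(x, D), np.zeros_like(x), x)
exact = -(np.pi**2 * HBAR * C / 720.0) * LAMBDA / D**3

# Smoothed grating from the text: H = 1.2 Lambda, P = 0.03 Lambda, alpha*Lambda = 150.
H_, P_, ALPHA = 1.2 * LAMBDA, 0.03 * LAMBDA, 150.0 / LAMBDA
h_grating = D - H_ * (1.0 / (np.exp(ALPHA * (x - P_)) + 1.0)
                      + 1.0 / (np.exp(-ALPHA * (x + P_)) + 1.0) - 2.0)
grating = e_pfa(h_grating, np.zeros_like(x), x)
```

Because $h(x) \ge d$ everywhere (only the thin rod of width $2P$ sits at the minimum gap), the PFA energy of the grating is necessarily weaker in magnitude than the plane--plane value at the same minimum separation.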
In either case, we consider local optimizations with small initial trust radii around $\Psi = 0$, and characterize the equilibrium fluid profile $\Psi(x)$ as $\gamma$ is varied. Our minimization approach is also validated against numerical solution of \eqref{YLE} under PFA (green circles). \begin{figure}[t!] \centering \includegraphics[width=0.97\columnwidth]{repulsivehdiffsvsg_smallstep_withinsets.eps} \caption{Maximum displacement $\Delta\Psi/d$ of a fluid--vacuum interface that is repelled from a grating (insets) by a repulsive vdW force, as a function of surface tension $\gamma/\gamma_{\mathrm{vdW}}$, obtained via solution of \eqref{Emin} using FSC (blue), PWS (red), and PFA (green) methods. Circles indicate results obtained through \eqref{YLE}. Insets show the equilibrium fluid--surface profiles at selected $\gamma \in{} \{0.006, 0.055, 0.277\}\gamma_{\mathrm{vdW}}$, with the unperturbed $\Psi = 0$ surface denoted by black dashed lines.} \label{fig:fig2} \end{figure} \emph{Repulsion.--} We first consider the effects of vdW repulsion on the equilibrium profile of the fluid--vacuum interface, enforced in our PEC model by flipping the sign of the otherwise attractive vdW energy. Such a situation can arise when a fluid film either sits on or is brought in close proximity to a solid grating [\figref{fig2}(insets)], causing the fluid to either wet or dewet the grating~\cite{Israelachvili}, respectively. \Figref{fig2} compares the dependence of the maximum displacement $\Delta\Psi = \Psi_{\mathrm{max}} - \Psi_{\mathrm{min}}$ of the fluid surface on $\gamma$, as computed by FSC (blue), PWS (red), and PFA (green). Also shown are selected surface profiles at small, intermediate, and large $\gamma/\gamma_\mathrm{vdW}$. Note that the combination of a repulsive vdW force, surface tension, and incompressibility leads to a \emph{local} equilibrium shape that is corroborated via linear stability analysis~\cite{ivanov1988thin}. 
Under large $\gamma$, the surface energy dominates and thus all three methods result in nearly-flat profiles, with $|\Psi| \ll d$. While both additive approximations reproduce the exact energy of the plane--plane geometry (with the unnormalized PWS energy underestimating the exact energy by 20\%~\cite{lambrechtPWS}), we find that (at least for this particular grating geometry) $\mathcal{E}_\mathrm{PWS,PFA} / \mathcal{E}_\mathrm{FSC} \approx 0.25$ in the limit $\gamma \to \infty$, revealing that even for a flat fluid surface, the grating structure contributes significant non-additivity. Noticeably, at large but finite $\gamma \gg \gamma_\mathrm{vdW}$, $\Delta\Psi$ is significantly larger under FSC and PFA than under PWS, with $\Psi_\mathrm{FSC,PWS}$ exhibiting increasingly better qualitative and quantitative agreement compared to the sharply peaked $\Psi_\mathrm{PFA}$ as $\gamma$ decreases [\figref{fig2}(insets)]. The stark deviation of PFA from FSC and PWS in the vdW--dominated regime $\gamma \ll \gamma_{\mathrm{vdW}}$ is surprising in that PWS involves volumetric interactions within the objects, whereas PFA and FSC depend only on surface topologies. Essentially, the pointwise nature of PFA means $\mathcal{E}_{\mathrm{PFA}}$ depends only on the local surface--surface separation, decreasing monotonically with decreasing separations and competing with surface tension and incompressibility to yield a surface profile that nearly replicates the shape of the grating in the limit $\gamma\to 0$. Quantitatively, PFA leads to larger $\Delta\Psi$ as $\gamma \to 0$, asymptoting to a constant $\lim_{\gamma \to 0} \Delta \Psi_{\mathrm{PFA}} \to H = 3d$ at significantly lower $\frac{\gamma}{\gamma_{\mathrm{vdW}}} < 10^{-5}$. 
On the other hand, both $\mathcal{E}_{\mathrm{FSC}}$ and $\mathcal{E}_{\mathrm{PWS}}$ exhibit much weaker dependences on the fluid shape at low $\gamma$, with the former depending slightly more strongly on the surface amplitude and hence leading to asymptotically larger $\Delta\Psi$ as $\gamma \to 0$; in this geometry, we find that $\Delta\Psi_{\mathrm{FSC,PWS}} \to \{0.32, 0.28\}d$ for $\frac{\gamma}{\gamma_{\mathrm{vdW}}} \lesssim 10^{-2}$. Furthermore, while PFA and PWS are found to agree with FSC at large and small $\gamma$, respectively, neither approximation accurately predicts the surface profile in the intermediate regime $\gamma \sim \gamma_\mathrm{vdW}$, where neither vdW nor surface energies dominate. Ultimately, neither of these approximations is capable of predicting the fluid shape over the entire range of $\gamma$. \emph{Attraction.--} We now consider the effects of vdW attraction, which can cause a fluid film either sitting on or brought into close proximity to a solid grating [\figref{fig3}(insets)] to dewet or wet the grating, respectively~\cite{Israelachvili}. Here, matters are complicated by the fact that $\mathcal{E}_\mathrm{vdW} \to -\infty$ as the fluid surface approaches the grating, leading to a fluid instability or wetting transition below some critical $\gamma^{(\mathrm{c})}$, depending on the competition between the restoring surface tension and attractive vdW pressure. Such instabilities have been studied in microfluidic systems through both additive approximations~\cite{Bonn2009, kerle, Geoghegan2003, Quinn2013}, but as we show in \figref{fig3}, non-additivity can lead to dramatic quantitative discrepancies in the predictions obtained from each method of computing $\mathcal{E}_{\mathrm{vdW}}$. To obtain $\gamma^{(\mathrm{c})}$ along with the shape of the fluid surface for $\gamma > \gamma^{(\mathrm{c})}$, we seek the nearest local solution of~\eqref{Emin} starting from $\Psi = 0$. 
\Figref{fig3} quantifies the onset of the wetting transition by showing the variation of the minimum grating-fluid separation $h_{\mathrm{min}} - \Psi_{\mathrm{max}}$ with respect to $\gamma$, as computed by FSC (blue), PWS (red), and PFA (green), along with the corresponding $\mathcal{E}_{\mathrm{vdW}}$ [\figref{fig3}(inset)] normalized to their respective values for the plane--grating geometry (attained in the limit $\gamma\to\infty$). Also shown in the top-right inset are the optimal surface profiles at $\gamma\approx\gamma^{(\mathrm{c})}$ obtained from the three methods. In contrast to the case of repulsion, here the fluid surface approaches rather than moves away from the grating, which ends up changing the scaling of $\mathcal{E}_{\mathrm{vdW}}$ with $\Psi$ and leads to very different qualitative results. In particular, we find that $\mathcal{E}_{\mathrm{FSC}}$ exhibits a much stronger dependence on $\Psi_{\mathrm{max}}$ compared to PWS and PFA, leading to a much larger $\gamma^{(\mathrm{c})}$ and a correspondingly broad surface profile. As before, the strong dependence of $\mathcal{E}_{\mathrm{PFA}}$ on the fluid surface, a consequence of the pointwise nature of the approximation, produces a sharply peaked surface profile, while the very weak dependence of $\mathcal{E}_{\mathrm{PWS}}$ on the fluid shape ensures both a gross underestimation of $\gamma^{(\mathrm{c})}$ along with a broader surface profile. Interestingly, we find that $\gamma^{(\mathrm{c})}_{\mathrm{FSC,PFA, PWS}} \approx \{0.65,0.38,0.07\}\gamma_{\mathrm{vdW}}$, emphasizing the failure of PWS to capture the critical surface tension by nearly an order of magnitude. \begin{figure}[t!] 
\centering \includegraphics[width=0.87\columnwidth]{attractive_new.eps} \caption{Minimum surface--surface separation $\frac{h_{\mathrm{min}} - \Psi_{\mathrm{max}}}{d}$ of a fluid--vacuum interface that is attracted to a grating (insets) by an attractive vdW force, as a function of surface tension $\frac{\gamma}{\gamma_{\mathrm{vdW}}}$, obtained via solution of \eqref{Emin} using FSC (blue), PWS (red), and PFA (green) methods. Circles indicate results obtained through \eqref{YLE}. Wetting transitions occurring at critical values of surface tension $\gamma^{(\mathrm{c})}$, marked as 'x'. The top-right inset shows the equilibrium fluid--surface profiles near $\gamma^{(\mathrm{c})}$ while the bottom-left inset shows the equilibrium vdW energies normalized by the energies of the unperturbed ($\Psi=0$) plane--grating geometry (the limit of $\gamma \to \infty$).} \label{fig:fig3} \end{figure} \emph{Concluding Remarks.--} The predictions and approach described above offer evidence of the need for exact vdW calculations for the accurate determination of the wetting and dewetting behavior of fluids on or near structured surfaces. While we chose to employ a simple materials-agnostic and scale-invariant model for the vdW energy, realistic (dispersive) materials can be readily analyzed within the same formalism, requiring no modifications. We expect that in these cases, non-additivity will play an even larger role. In fact, recent works~\cite{Noguez2004, lambrechtPWS} have shown that additive approximations applied to even simpler structures can contribute larger discrepancies in dielectric as opposed to PEC bodies. 
For the geometry considered above, assuming $\Lambda = 50 \, \mathrm{nm}$ and a nonretarded Hamaker constant $A = 10^{-19}~\mathrm{J}$~\cite{Gu2001, Bergstrom1997, Israelachvili}, corresponding to a gold--water--oil material combination (with the thin $d = 20~\mathrm{nm}$ water film taking the role of vacuum in our model), we estimate that significant fluid displacements $\Delta \Psi \sim 10~\mathrm{nm}$ and non-additivity can arise at $\gamma \approx 10^{-6}~\mathrm{J/m^{2}}$. By exploiting surfactants, it should be possible to explore a wide range of $\gamma \in [10^{-7}, 10^{-2}] \, \mathrm{J/m^{2}}$~\cite{Quinn2013} and hence fluid behaviors, from vdW- to surface-energy dominated regimes. Yet another tantalizing possibility is that of observing these kinds of non-additive interactions in extensions of the original liquid He$^4$ wetting experiments that motivated development of more general theories of vdW forces (Lifshitz theory) in the first place~\cite{Dzyaloshinskii1961}. In the future, it might also be interesting to consider the impact of other forces, including but not limited to gravity as well as finite-temperature thermodynamic effects arising in the presence of gases in contact with fluid surfaces. \emph{Acknowledgments.--} We are grateful to Howard A. Stone, M. T. Homer Reid, and Steven G. Johnson for useful discussions. This material is based upon work supported by the National Science Foundation under Grant No. DMR-1454836 and by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1148900.
Rissoa is a genus of snails that was described by Freminville in Desmarest, 1814. Rissoa belongs to the family Rissoidae. Cladogram according to Catalogue of Life and Dyntaxa.
Q: WooCommerce PayPal Checkout Gateway This question concerns the WooCommerce/WordPress plugin. How can I force PayPal to use a particular language with code in functions.php? I have looked through a lot on the internet, and also on PayPal's doc pages. The only thing I can find is that a locale can be set, but there is no reference about where. The docs also say that it's possible to set the language on the settings page (nothing is there) and that PayPal will use WP_lang to set the language (which doesn't happen). A: Which WooCommerce plugin are you using? The newest one (https://woocommerce.com/woocommerce-and-paypal/) is API-based, and much better than the "Standard" HTML redirect method that comes with many base WooCommerce installs, so I would recommend upgrading if you fancy it. As far as setting a default locale parameter goes, it will depend on which plugin's integration method you are using. There may be a setting for it in the plugin, or you may need to append an lc or locale variable if the feature isn't implemented by the plugin. Typically, setting a manual locale is not so necessary, though. The PayPal checkout will automatically detect the most appropriate setting based on the buyer's user-agent language setting and GeoIP. I'm not sure how exactly it all works, but that auto-detection will sometimes be so strong as to override what you specify with lc or locale, making those parameters difficult to test, and not so meaningful to implement.
Q: how to show data through input tag I am working on a Django project .In front-end I want to add user data through input tags but I do not know how to save data through input tags My models.py file is:: class User(AbstractBaseUser): full_name = models.CharField(max_length=255, blank=True, null=True) sur_name = models.CharField(max_length=255, blank=True, null=True) email = models.EmailField(max_length=255 ,unique=True) choose_subject = models.CharField(choices=SUBJECT_CHOICES , max_length=100) staff = models.BooleanField(default=False) admin = models.BooleanField(default=False) time_stamp = models.DateTimeField(auto_now_add=True) father_name = models.CharField(max_length=255) father_sur_name = models.CharField(max_length=255) mobile_phone_number = models.IntegerField(blank=True,null=True) father_email = models.EmailField(max_length=255, unique=True) fiscal_code = models.CharField(max_length=20) address = models.CharField(max_length=200 ) A part of my register.html file is: <div class="card-body"> <h2 class="title">Registration Form</h2> <form method="POST">{% csrf_token %} <div class="row row-space"> <div class="col-2"> <div class="input-group"> <label class="label">Full name</label> <input class="input--style-4" type="text" name="first_name"{{form.full_name}}> </div> </div> <div class="col-2"> <div class="input-group"> <label class="label">Sur name</label> <input class="input--style-4" type="text" name="last_name"{{form.sur_name}}> </div> </div> </div> I just want the Django model field to be working in input tags A: You must create in the views.py file a method in which the templates and the logic of it, an example can be the following views.py @csrf_exempt def register(request): if request.method == 'POST': var1 = request.POST.get('var1') var2 = request.POST.get('var2') var3 = request.POST.get('var3') #Save to the database here return render_to_response( 'home.html', {'message': 'Update Success', } ) else: #elif request.method == "GET": obj = 
table.objects.all() if obj: ctx['data'] = obj return render(request, template_name, ctx) and in your template add a form tag <form method="POST"> { % csrf_token % } <div class="row row-space"> <div class="col-2"> <div class="input-group"> <label class="label">Full name</label> <input class="input--style-4" type="text" value={{data.full_name}}> </div> </div> <div class="col-2"> <div class="input-group"> <label class="label">Sur name</label> <input class="input--style-4" type="text" value={{data.sur_name}}> </div> </div> </div> <input type="submit" value="OK"> </form> refer this tutorial
Q: can we load an external angular site inside another angular site? Site1 and Site2 are built in AngularJS. Is there any concept in AngularJS (other than an iframe) to load pages of Site2 inside a div of Site1? In case you have any examples, please share them here. Thanks in advance. Apart from the HTML content loaded from the external URL (Site2) into the source site (Site1), there are a lot of CSS and JS files included in the Site2 screens; will those also get loaded? A: You can use 1) ngInclude (check the documentation for usage) 2) ng-bind-html, but you'll have to take care of ngSanitize. You have to put your HTML content in a special variable that is marked as trusted HTML. For this you have to use the $sce service. Hope it helps A: No need for specific Angular tags; you can directly use an object tag like this: <div> <object type="text/html" data="http://validator.w3.org/" width="800px" height="600px"> </object> </div> A: Did you try <iframe src="http://www.w3schools.com"></iframe> ? The below URL can help you: Using external url in ng-include in AngularJS
Tanymecus is a genus of beetles described by Ernst Friedrich Germar in 1817. Tanymecus belongs to the family of weevils.

Daughter taxa of Tanymecus, in alphabetical order: Tanymecus abyssinicus, Tanymecus acutus, Tanymecus aegrotus, Tanymecus affinis, Tanymecus agrestis, Tanymecus agricola, Tanymecus albicans, Tanymecus albomarginatus, Tanymecus alboscutellatus, Tanymecus albus, Tanymecus alienus, Tanymecus alutaceus, Tanymecus angustulus, Tanymecus arcuatipennis, Tanymecus arenaceus, Tanymecus argentatus, Tanymecus argyrostomus, Tanymecus ariasi, Tanymecus arushamus, Tanymecus arushanus, Tanymecus aureosquamosus, Tanymecus bayeri, Tanymecus beckeri, Tanymecus benguelensis, Tanymecus bibulus, Tanymecus bidentatus, Tanymecus biplagiatus, Tanymecus boettcheri, Tanymecus bonnairei, Tanymecus brachyderoides, Tanymecus breviformis, Tanymecus brevirostris, Tanymecus brevis, Tanymecus brunnipes, Tanymecus burmanus, Tanymecus canescens, Tanymecus cervinus, Tanymecus chevrolati, Tanymecus chloroleucus, Tanymecus cinctus, Tanymecus cinereus, Tanymecus circumdatus, Tanymecus confertus, Tanymecus confinis, Tanymecus confusus, Tanymecus convexiformis, Tanymecus convexifrons, Tanymecus costulicollis, Tanymecus crassicornis, Tanymecus cruciatus, Tanymecus curviscapus, Tanymecus deceptor, Tanymecus destructor, Tanymecus diffinis, Tanymecus dilatatus, Tanymecus dilaticollis, Tanymecus discolor, Tanymecus excursor, Tanymecus fausti, Tanymecus favillaceus, Tanymecus femoralis, Tanymecus fimbriatus, Tanymecus fruhstorferi, Tanymecus glis, Tanymecus graminicola, Tanymecus griseus, Tanymecus hercules, Tanymecus heros, Tanymecus hirsutus, Tanymecus hirticeps, Tanymecus hispidus, Tanymecus hololeucus, Tanymecus humilis, Tanymecus inaffectatus, Tanymecus indicus, Tanymecus infimus, Tanymecus insipidus, Tanymecus kolenatii, Tanymecus konbiranus, Tanymecus kricheldorffi, Tanymecus lacaena, Tanymecus lateralis, Tanymecus latifrons, Tanymecus lautus, Tanymecus lectus, Tanymecus lefroyi, Tanymecus lethierryi, Tanymecus leucophaeus, Tanymecus lineatus, Tanymecus lomii, Tanymecus longulus, Tanymecus luridus, Tanymecus makkaliensis, Tanymecus marginalis, Tanymecus metallinus, Tanymecus micans, Tanymecus migrans, Tanymecus misellus, Tanymecus mixtor, Tanymecus mixtus, Tanymecus mniszechi, Tanymecus modicus, Tanymecus montandoni, Tanymecus morosus, Tanymecus mozambicus, Tanymecus murinus, Tanymecus musculus, Tanymecus nebulosus, Tanymecus necessarius, Tanymecus nevadensis, Tanymecus niloticus, Tanymecus niveus, Tanymecus nothus, Tanymecus nubeculosus, Tanymecus obconicicollis, Tanymecus obscurus, Tanymecus obsoletus, Tanymecus oculatus, Tanymecus orientalis, Tanymecus ovalipennis, Tanymecus ovatus, Tanymecus ovipennis, Tanymecus palliatus, Tanymecus parvus, Tanymecus perrieri, Tanymecus piger, Tanymecus pisciformis, Tanymecus planifrons, Tanymecus planirostris, Tanymecus potteri, Tanymecus praecanus, Tanymecus protervus, Tanymecus pubirostris, Tanymecus revelierei, Tanymecus revelieri, Tanymecus rhodopus, Tanymecus robustus, Tanymecus rudis, Tanymecus rusticus, Tanymecus rutilans, Tanymecus sareptanus, Tanymecus sciurus, Tanymecus seclusus, Tanymecus setulosus, Tanymecus sibiricus, Tanymecus siculus, Tanymecus simplex, Tanymecus sitonoides, Tanymecus sparsus, Tanymecus squameus, Tanymecus steveni, Tanymecus submaculatus, Tanymecus subvelutinus, Tanymecus telephus, Tanymecus tenuis, Tanymecus tessellatus, Tanymecus tetricus, Tanymecus texanus, Tanymecus tonsus, Tanymecus trivialis, Tanymecus umbratus, Tanymecus urbanus, Tanymecus vagabundus, Tanymecus variabilis, Tanymecus variatus, Tanymecus variegatus, Tanymecus versicolor, Tanymecus versutus, Tanymecus villicus, Tanymecus viridans, Tanymecus vittiger.

Sources

External links

Categories: Weevils, Tanymecus